In the last two decades, process mining has developed from a niche discipline into a significant research area with considerable impact on academia and industry. Process mining enables organizations to analyze their running business processes based on historical execution data. The first requirement of any process mining technique is an event log, an artifact that represents concrete business process executions in the form of sequences of events. These logs can be extracted from an organization's information systems and are used by process experts to gain deep insights into the organization's running processes. From the events in such logs, process models can be automatically discovered and enhanced or annotated with performance-related information. Besides behavioral information, event logs contain domain-specific data, albeit implicitly. However, such data are usually overlooked and thus not utilized to their full potential.
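For illustration, the sketch below shows a minimal event log and the directly-follows counts that many discovery algorithms build on; the case identifiers and activity names are hypothetical, and the snippet is a generic illustration rather than a method from this thesis.

```python
from collections import defaultdict

# A minimal event log: each trace is the ordered sequence of events
# observed for one process instance (case). Case IDs and activity
# names are hypothetical.
event_log = {
    "case-001": ["register order", "check stock", "ship goods", "send invoice"],
    "case-002": ["register order", "check stock", "cancel order"],
    "case-003": ["register order", "ship goods", "send invoice"],
}

# Count the directly-follows relation a -> b, a basic building block
# of many discovery algorithms.
dfg = defaultdict(int)
for trace in event_log.values():
    for a, b in zip(trace, trace[1:]):
        dfg[(a, b)] += 1

for (a, b), n in sorted(dfg.items(), key=lambda kv: -kv[1]):
    print(f"{a} -> {b}: {n}")
```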
Within the process mining area, this thesis addresses the research gap of discovering, from event logs, contextual information that cannot be captured by applying existing process mining techniques. Within this gap, we identify four key problems and tackle them by looking at an event log from different angles. First, we address the problem of deriving an event log in the absence of proper database access and domain knowledge. The second problem relates to the under-utilization of the implicit domain knowledge present in an event log, which can increase the understandability of the discovered process model. Next, there is a lack of a holistic representation of historical data manipulation at the process model level of abstraction. Last but not least, each process model is presumed to be independent of other process models when discovered from an event log, thus ignoring possible data dependencies between processes within an organization.
For each of the problems mentioned above, this thesis proposes a dedicated method. The first method provides a solution to extract an event log solely from the transactions performed on the database, which are stored in the form of redo logs. The second method deals with discovering the underlying data model that is implicitly embedded in the event log, thus complementing the discovered process model with important domain knowledge. The third method captures, at the process model level, how data affect the running process instances. Lastly, the fourth method concerns the discovery of the relations between business processes (i.e., how they exchange data) from a set of event logs, and the explicit representation of such complex interdependencies in a business process architecture.
All methods introduced in this thesis are implemented as prototypes, and their feasibility is demonstrated by applying them to real-life event logs.
This dissertation investigates early word segmentation in monolingual and bilingual language acquisition. Word segmentation is one of the central challenges infants face in language acquisition, since spoken language is continuous and word boundaries are not reliably marked by acoustic pauses. Numerous studies across several languages have shown that the segmentation abilities of monolingual infants emerge between 6 and 12 months of age (e.g., English: Jusczyk, Houston & Newsome, 1999; French: Nazzi, Mersad, Sundara, Iakimova & Polka, 2014; German: Höhle & Weissenborn, 2003; Bartels, Darcy & Höhle, 2009). Early word segmentation abilities are language-specific (Polka & Sundara, 2012). Cross-linguistic studies have shown that monolingually raised infants only succeed at cross-language segmentation when the non-native language shares rhythmic properties with their native language (Houston, Jusczyk, Kuijpers, Coolen & Cutler, 2000; Höhle, 2002; Polka & Sundara, 2012).
In the four studies of this dissertation, monolingual German-learning and bilingual German-French-learning infants aged 9 months were tested with behavioral (head-turn preference paradigm) and electrophysiological methods (electroencephalography). The studies addressed the question of whether monolingual German-learning infants at 9 months of age are able to segment both their native language German and the rhythmically dissimilar language French. In other words: can monolingual infants at 9 months modify their segmentation procedures, or deviate from their preferred segmentation, in order to successfully segment non-native input as well?
For the bilingual learners, the studies examined whether infants growing up bilingually show segmentation abilities comparable to those of monolingually raised infants and whether language dominance influences the development of word segmentation abilities in a bilingual population.
The chosen methods allowed both behavioral and electrophysiological correlates to be used to answer these questions. Moreover, through event-related potentials (ERPs), the EEG provided insight into learning and processing mechanisms that behavioral methods could not capture.
The results show that monolingual German-learning infants at 9 months of age successfully segment both their native language and the non-native language French. However, the ability to segment the non-native language French is influenced by the native language: monolingual infants tested with French first segmented both the French material and the German material presented afterwards. Monolingual infants tested first with German and then with French segmented the German stimuli but not the French material.
Bilingual German-French infants at 9 months of age successfully segment both of their native languages. The results further point to an influence of language dominance on the word segmentation abilities of bilingually raised infants: balanced bilinguals segmented both native languages successfully, whereas unbalanced bilinguals showed successful segmentation only for their respective dominant language.
In summary, this work provides the first evidence for successful cross-language segmentation of prosodically different languages from different rhythm classes in a monolingual population. Furthermore, the studies provide evidence that bilingually raised infants show a development of word segmentation abilities comparable to that of monolingual learners. This result extends previous findings showing, for various developmental milestones in language acquisition, no delay but rather a development in bilingual populations comparable to that of monolingually raised infants (language discrimination: Byers-Heinlein, Burns & Werker, 2010; Bosch & Sebastián-Gallés, 1997; phoneme discrimination: Albareda-Castellot, Pons & Sebastián-Gallés, 2011; perception of rhythmic properties: Bijeljac-Babic, Höhle & Nazzi, 2016).
Hantaviruses (HVs) are a group of zoonotic viruses that infect human beings primarily through aerosolized rodent excreta and urine. HVs are classified geographically into Old World HVs (OWHVs), found in Europe and Asia, and New World HVs (NWHVs), found in the Americas. These strains can cause severe hantavirus disease with pronounced renal syndrome or severe cardiopulmonary distress. HVs can be extremely lethal, with NWHV infections reaching mortality rates of up to 40%. HVs are known to cause epidemic outbreaks in many parts of the world, including Germany, which has seen periodic HV infections over the past decade. The HV genome is trisegmented: the small segment (S) encodes the nucleocapsid protein (NP); the medium segment (M) encodes the glycoproteins (GPs) Gn and Gc, which upon independent expression form up to tetramers and primarily monomers and dimers, respectively; and the large segment (L) encodes the RNA-dependent RNA polymerase (RdRp). Interactions between these viral proteins are crucial for gaining mechanistic insights into HV virion formation. Despite best efforts, these associations have not yet been quantified in living cells, which is required for developing mechanistic models of HV viral assembly. This dissertation focuses on three key questions pertaining to the initial steps of virion formation, which primarily involve the GPs and the NP.
The research in this work was carried out using Fluorescence Correlation Spectroscopy (FCS) approaches. FCS is frequently used to assess the biophysical properties of biomolecules, including protein concentration and diffusion dynamics, and circumvents the need for protein overexpression. In this thesis, FCS was primarily applied to evaluate protein multimerization at single-cell resolution.
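For orientation, the standard model commonly fitted to FCS data for free 3D diffusion is sketched below; this is textbook formalism, not a formula specific to this dissertation:

```latex
% Standard FCS autocorrelation model for free 3D diffusion:
\[
G(\tau) = \frac{1}{\langle N \rangle}
          \left(1 + \frac{\tau}{\tau_D}\right)^{-1}
          \left(1 + \frac{\tau}{S^2\,\tau_D}\right)^{-1/2},
\qquad
D = \frac{\omega_0^2}{4\,\tau_D}
\]
% <N>: mean number of fluorescent molecules in the focal volume
% (yields the concentration); tau_D: diffusion time; S: structure
% parameter of the focal volume; omega_0: lateral focal radius;
% D: diffusion coefficient.
```

Multimerization is then typically inferred from the molecular brightness, i.e., the detected count rate per diffusing particle, which increases with the oligomeric state.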
The first question addressed which of the GP spike formation models proposed by Hepojoki et al. (2010) appropriately describes the evidence in living cells. A novel in cellulo assay was developed to evaluate the amounts of fluorescently labelled and unlabelled GPs upon co-expression. The results clearly showed that Gn and Gc initially form a heterodimeric Gn:Gc subunit, which then multimerizes with congruent Gn:Gc subunits to generate the final GP spike. Based on these interactions, models describing the formation of the GP complex (comprising multiple GP spike subunits) were additionally developed.
HV GP assembly primarily takes place in the Golgi apparatus (GA) of infected cells. Interestingly, NWHV GPs are hypothesized to assemble at the plasma membrane (PM) instead. This led to the second research question of this thesis, for which a systematic comparison between OWHV and NWHV GPs was conducted to test this hypothesis. Surprisingly, GP localization at the PM was observed for OWHV and NWHV GPs alike. Similar results were also obtained for OWHV and NWHV GP localization in the absence of the cytoskeletal factors that regulate HV trafficking in cells.
The final question focused on quantifying NP-GP interactions and understanding their influence on NP and GP multimerization. Gc multimers were detected in the presence of NP, accompanied by localized regions of strong NP-Gc interaction in the perinuclear region of living cells. The Gc-CT domain was shown to influence NP-Gc associations. Gn, on the other hand, formed up to tetrameric complexes independently of the presence of NP.
The results of this dissertation shed light on the initial steps of HV virion formation by quantifying homo- and heterotypic interactions involving NP and the GPs, measurements that are otherwise very difficult to perform. Finally, the in cellulo methodologies implemented in this work can potentially be extended to probe other key interactions involved in HV assembly.
This work aims to demonstrate that the narrative work of the writer Tomás Carrasquilla Naranjo (1858-1940) contains a Wahrheitsgehalt (Benjamin, 2012), the temporal concretion of an idea, which materializes through what I have here termed the image of popular religiosity. That is, the Antioquian's work would be constructed in the manner of a great mosaic in which, despite the varied and uneven elements that compose it, the union of all of them produces an image (Bild). This image represents the historical experience of the modern among the popular sectors, arising from the fleeting union between the remnants of ancient traditions and the newest forms of life. Far from the conventions of his time, in which the question of the experience of the modern centres on metropolitan settings and the role of the artist, Carrasquilla asks what happens in the vast rural, or liminal, spaces between city and countryside, and in their respective intersections. The subjects who inhabit these spaces, lacking the conceptual tools to define this new "living experience", this new "structure of feeling", as Raymond Williams (2019) calls it, appeal to the only thing they know, the old knowledge transmitted orally, to explain their present.
In this sense, it is possible to affirm that Carrasquilla, making use of this image of popular religiosity, attempted to establish a dialogue in the literary field from which he put forward a differential idea of the modern. On several occasions, the Antioquian declared that literature should incorporate local experiences into the dialogue of the universal. An example of this is his simile of literature as a planetary system: according to him, hierarchical relations are established when the countries that produce literary fashions, the planets (Europe), relegate the others to being mere satellites, that is, to imitation (Carrasquilla, 1991). Today, that criticism directed at his fellow countrymen, the Antioquian modernists, can be read as a vindication of alterity. Hence this study argues that, although these experiences are not similar to those occurring in the nascent metropolitan settings, where commodities represent the new substitutes for faith, in those vast spaces, seemingly provincial and removed from contact with other cultures and knowledge, the image of popular religiosity comes to play the same role. In other words, "indem an Dingen ihr Gebrauchswert abstirbt" (usefulness or veneration), the character's subjectivity charges them with "Intentionen von Wunsch und Angst" (Benjamin, 2013a), turning them into objects of contemplation, whether by wearing them or collecting them. In a similar way, Carrasquilla would have drawn on the accumulated residual knowledge (Wissen) of his hypothetical readership, inherited from diverse cultural areas during the process of colonization, with their respective heterogeneous times and particular languages (Ette, 2019), in order to join them to present-day profane experiences. Thus the work (short story or novel) would artistically represent popular "forms of life" through which one "aesthetically experiences" how modernity is survived (überleben) (Ette, 2015) in the marginalized sectors. That is, only from the ancient and ruinous elements of popular religiosity, once sacred, is it possible to explain the experience of the modern, its here and now.
Air pollution has been a persistent global problem over the past several hundred years. While some industrialized nations have improved their air quality through stricter regulation, others have experienced declines as they rapidly industrialize. The WHO's 2021 update of its recommended air pollution limit values reflects the substantial impacts on human health of pollutants such as NO2 and O3, as recent epidemiological evidence suggests considerable long-term health effects of air pollution even at low concentrations. Alongside developments in our understanding of air pollution's health impacts, the new technology of low-cost sensors (LCS) has been taken up by both academia and industry as a new method for measuring air pollution. Due primarily to their lower cost and smaller size, they can be used in a variety of applications, including the development of higher-resolution measurement networks, source identification, and measurements of air pollution exposure. While significant efforts have been made to calibrate LCS accurately against reference instrumentation using various statistical models, accuracy and precision remain limited by variable sensor sensitivity. Furthermore, standard procedures for calibration still do not exist, and most proprietary calibration algorithms are black-box, inaccessible to the public. This work seeks to expand the knowledge base on LCS in several ways: 1) by developing an open-source calibration methodology; 2) by deploying LCS at high spatial resolution in urban environments to test their capability to measure microscale changes in urban air pollution; and 3) by connecting LCS deployments with the implementation of local mobility policies to provide policy advice on the resulting changes in air quality.
In a first step, it was found that LCS can be consistently calibrated with good performance against reference instrumentation using seven general steps: 1) assessing the raw data distribution, 2) cleaning data, 3) flagging data, 4) model selection and tuning, 5) model validation, 6) exporting final predictions, and 7) calculating the associated uncertainty. By emphasizing the need for consistent reporting of details at each step, most crucially on model selection, validation, and performance, this work advanced the effort toward standardizing calibration methodologies. In addition, with the open-source publication of code and data for the seven-step methodology, progress was made toward reforming the largely black-box nature of LCS calibrations.
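As an illustration of steps 4, 5 and 7 of such a pipeline, a minimal sketch of a regression calibration against co-located reference data is given below; the file and column names are hypothetical, and the snippet does not reproduce the published methodology in detail.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

# Hypothetical co-location data set: raw sensor signal plus
# meteorological covariates, with reference NO2 as the target.
df = pd.read_csv("colocation.csv").dropna()          # step 2: cleaning
X = df[["sensor_raw", "temperature", "rel_humidity"]]
y = df["reference_no2"]

# Hold out a contiguous validation period (step 5) rather than random
# rows, since neighbouring measurements are strongly autocorrelated.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, shuffle=False)

model = RandomForestRegressor(n_estimators=200, random_state=0)  # step 4
model.fit(X_train, y_train)

pred = model.predict(X_test)                    # step 6: final predictions
rmse = mean_squared_error(y_test, pred) ** 0.5  # step 7: simple uncertainty proxy
print(f"RMSE = {rmse:.2f}, R2 = {r2_score(y_test, pred):.2f}")
```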
With a transparent and reliable calibration methodology established, LCS were deployed in various street canyons between 2017 and 2020. The performance of two types of LCS, metal oxide (MOS) and electrochemical (EC), in capturing expected patterns of urban NO2 and O3 pollution was evaluated. Results showed that calibrated concentrations from MOS and EC sensors matched the general diurnal patterns of NO2 and O3 pollution measured with reference instruments. While MOS sensors proved unreliable for discerning differences among measured locations within the urban environment, the concentrations measured with calibrated EC sensors matched expectations from modelling studies of NO2 and O3 pollution distribution in street canyons. It was therefore concluded that LCS are appropriate for measuring urban air quality, including for assisting urban-scale air pollution model development, and can reveal new insights into air pollution in urban environments.
To achieve the last goal of this work, two measurement campaigns were conducted in connection with the implementation of three mobility policies in Berlin. The first involved the construction of a pop-up bike lane on Kottbusser Damm in response to the COVID-19 pandemic, the second accompanied the temporary implementation of a community space on Böckhstrasse, and the last focused on the closure of a portion of Friedrichstrasse to all motorized traffic. In all cases, measurements of NO2 were collected before and after the measure was implemented to assess the resulting changes in air quality. Results from the Kottbusser Damm experiment showed that the bike lane reduced the NO2 concentrations that cyclists were exposed to by 22 ± 19%. On Friedrichstrasse, the street closure reduced NO2 concentrations to the level of the urban background without worsening the air quality on side streets. These valuable results were communicated swiftly to partners in the city administration responsible for evaluating the policies' success and future, highlighting the ability of LCS to deliver policy-relevant results.
As a new technology, much is still to be learned about LCS and their value to academic research in the atmospheric sciences. Nevertheless, this work has advanced the state of the art in several ways. First, it contributed a novel open-source calibration methodology that can be used by LCS end-users for various air pollutants. Second, it strengthened the evidence base on the reliability of LCS for measuring urban air quality, finding through novel deployments in street canyons that LCS can be used at high spatial resolution to understand microscale air pollution dynamics. Last, it is the first of its kind to connect LCS measurements directly with mobility policies to understand their influence on local air quality, producing policy-relevant findings valuable to decision-makers. It serves as an example of the potential of LCS to expand our understanding of air pollution at various scales, and of their ability to serve as valuable tools in transdisciplinary research.
The East African Rift System (EARS) is a prime example of active tectonics, providing opportunities to examine the stages of continental faulting and landscape evolution. Its southwest extension is among the most significant examples today; however, seismotectonic research in the area has been scarce, despite the fundamental importance of neotectonics. Our first study area is located between the Northern Province of Zambia and the southeastern Katanga Province of the Democratic Republic of the Congo. Lakes Mweru and Mweru Wantipa are part of the southwest extension of the EARS. Fault analysis reveals that, since the Miocene, movements along the active Mweru-Mweru Wantipa Fault System (MMFS) have been largely responsible for the reorganization of the landscape and the drainage patterns across the southwestern branch of the EARS. To investigate the spatial and temporal patterns of fluvial-lacustrine landscape development, we determined in-situ cosmogenic 10Be and 26Al in a total of twenty-six quartzitic bedrock samples collected from knickpoints across the Mporokoso Plateau (south of Lake Mweru) and the eastern part of the Kundelungu Plateau (north of Lake Mweru). Samples from the Mporokoso Plateau and close to the MMFS provide evidence of temporary burial. By contrast, surfaces located far from the MMFS appear to have remained uncovered since their initial exposure, as they show consistent 10Be and 26Al exposure ages ranging up to ~830 ka. Reconciling the observed burial patterns with morphotectonic and stratigraphic analyses reveals the existence of an extensive paleo-lake during the Pleistocene. Through hypsometric analyses of the dated knickpoints, the potential maximum water level of the paleo-lake is constrained to ~1200 m asl (present lake level: 917 m asl). High denudation rates (up to ~40 mm ka-1) along the eastern Kundelungu Plateau suggest that footwall uplift resulting from normal faulting caused river incision, possibly controlling paleo-lake drainage. The lake level then fell gradually, reaching its current level at ~350 ka.
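For context, the standard relations behind such exposure and burial estimates are sketched below; this is textbook cosmogenic-nuclide formalism with study-independent constants, not the specific calculation scheme of this work:

```latex
% Continuous exposure with negligible erosion:
\[
N(t) = \frac{P}{\lambda}\left(1 - e^{-\lambda t}\right)
\]
% Burial dating: since 26Al (t_1/2 ~ 0.705 Ma) decays faster than
% 10Be (t_1/2 ~ 1.387 Ma), burial lowers the ratio R = N_26/N_10
% from its production value R_0:
\[
R(t_b) = R_0\, e^{-(\lambda_{26} - \lambda_{10})\, t_b}
\quad\Longrightarrow\quad
t_b = \frac{\ln(R_0/R)}{\lambda_{26} - \lambda_{10}}
\]
```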
Parallel to the MMFS in the north, the Upemba Fault System (UFS) extends across the southeastern Katanga Province of the Democratic Republic of the Congo. This part of our research focuses on the geomorphological behavior of the Kiubo Waterfalls, the currently active knickpoint of the Lufira River, which flows into the Upemba Depression. Eleven bedrock samples were collected along the Lufira River and its tributary, the Luvilombo River. In-situ cosmogenic 10Be and 26Al were used to constrain the K constant of the stream power law equation, which allowed us to calculate the knickpoint retreat rate of the Kiubo Waterfalls at ~0.096 m a-1. Combining the calculated retreat rate of the knickpoint with DNA sequencing of fish populations, we present extrapolation models and estimate the location of the onset of the Kiubo Waterfalls, revealing its connection to the seismicity of the UFS.
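The stream power law referred to here has the standard detachment-limited form (exponents and the interpretation of K vary between studies):

```latex
% Detachment-limited stream power law:
\[
E = K\,A^{m}\,S^{n}
\]
% E: incision rate, A: upstream drainage area, S: channel slope,
% K: erodibility, m and n: empirical exponents.
% For n = 1, a knickpoint migrates upstream with horizontal celerity
\[
c = K\,A^{m},
\]
% so a calibrated K, combined with A along the channel, yields a
% retreat rate (here ~0.096 m per year for the Kiubo Waterfalls).
```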
Soil is today considered a non-renewable resource on societal time scales, as the rate of soil loss is higher than that of soil formation.
Soil formation is complex, can take several thousand years, and is influenced by a variety of factors, one of which is time. Oftentimes, constant and progressive conditions for soil and/or profile development are assumed (i.e., steady state). In reality, for most soils, (co-)evolution leads to a complex and irregular development in time and space, characterised by “progressive” and “regressive” phases.
Lateral transport of soil material (i.e., soil erosion) is one of the principal processes shaping the land surface and soil profile during “regressive” phases and one of the major environmental problems the world faces.
Anthropogenic activities like agriculture can exacerbate soil erosion. It is therefore of vital importance to distinguish short-term soil redistribution rates (i.e., within decades) influenced by human activities from long-term natural rates. To do so, soil erosion (and denudation) rates can be determined using a set of isotope methods that cover different time scales at the landscape level.
With the aim of unravelling the co-evolution of weathering, soil profile development and lateral redistribution at the landscape level, we used Plutonium-239+240 (239+240Pu), Beryllium-10 (10Be, in situ and meteoric) and radiocarbon (14C) to calculate short- and long-term erosion rates in two settings, a natural and an anthropogenic environment, in the hummocky ground moraine landscape of the Uckermark, north-eastern Germany. The main research questions were:
1. How do long-term and short-term rates of soil redistributing processes differ?
2. Are rates calculated from in situ 10Be comparable to those obtained using meteoric 10Be?
3. How do soil redistribution rates (short- and long-term) in an agricultural and in a natural landscape compare to each other?
4. Are the soil patterns observed in northern Germany purely a result of past events (natural and/or anthropogenic), or are they embedded in ongoing processes?
Erosion and deposition are reflected in a catena of soil profiles, with little or no erosion at flat positions (hilltop), strong erosion on the mid-slope, and accumulation of soil material at the toeslope position. These three characteristic process domains were chosen within the CarboZALF-D experimental site, which is characterised by intense anthropogenic activity. Likewise, a hydrosequence in an ancient forest was chosen for this study, regarded as a catena strongly influenced by natural soil transport.
The following main results were obtained using the above-mentioned range of isotope methods, which cover the different time scales of soil redistribution (e.g., 239+240Pu, 10Be, 14C):
1. Short-term erosion rates are one order of magnitude higher than long-term rates in agricultural settings.
2. Both meteoric and in situ 10Be are suitable soil tracers for measuring long-term soil redistribution rates, giving similar results in an anthropogenic environment for different landscape positions (e.g., hilltop, mid-slope, toeslope).
3. Short-term rates were extremely low to negligible in the natural landscape and very high in the agricultural landscape, at -0.01 t ha-1 yr-1 (average value) and -25 t ha-1 yr-1, respectively (see the worked unit conversion after this list). By contrast, long-term rates in the forested landscape are comparable to those calculated in the agricultural area investigated, with average values of -1.00 t ha-1 yr-1 and -0.79 t ha-1 yr-1.
4. Soil patterns observed in the forest might be due to human impact and activities that started after the first settlements in the region, between 4.5 and 6.8 kyr BP, earlier than previously postulated, and not a result of recent soil erosion.
5. Furthermore, long-term soil redistribution rates are similar regardless of the setting, meaning that past natural soil mass redistribution processes still overshadow the present anthropogenic erosion processes.
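To make these mass-based rates more tangible, a rough conversion to a surface-lowering rate is sketched below, assuming a typical bulk density of about 1.4 g cm-3 (an illustrative value, not one reported in the study):

```latex
% Mass-based rate -> surface-lowering rate, assuming rho_b = 1.4 g/cm^3:
\[
\dot{h} = \frac{\dot{m}}{\rho_b}, \qquad
1~\mathrm{t\,ha^{-1}\,yr^{-1}} = 100~\mathrm{g\,m^{-2}\,yr^{-1}}
\;\Longrightarrow\;
\dot{h} \approx \frac{100~\mathrm{g\,m^{-2}\,yr^{-1}}}{1.4\times10^{6}~\mathrm{g\,m^{-3}}}
\approx 0.07~\mathrm{mm\,yr^{-1}}
\]
% Hence the short-term agricultural rate of 25 t/ha/yr corresponds to
% roughly 1.8 mm of soil loss per year under this assumed density.
```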
Overall, this study makes important contributions to deciphering the co-evolution of weathering, soil profile development and lateral redistribution in north-eastern Germany. The multi-methodological approach used here can be further tested by applying it to a wider range of landscapes and geographic regions.
Development of electrochemical antibody-based and enzymatic assays for mycotoxin analysis in food
(2023)
Electrochemical methods are promising to meet the demand for easy-to-use devices monitoring key parameters in the food industry. Many companies run their own lab procedures for mycotoxin analysis, but simplifying this analysis is a major goal. The enzyme-linked immunosorbent assay using horseradish peroxidase (HRP) as the enzymatic label, together with 3,3',5,5'-tetramethylbenzidine (TMB)/H2O2 as substrates, allows sensitive mycotoxin detection with optical readout. To miniaturize the detection step, an electrochemical system for mycotoxin analysis was developed. To this end, the electrochemical detection of TMB was studied by cyclic voltammetry on different screen-printed electrodes (carbon and gold) and at different pH values (pH 1 and pH 4). A stable electrode reaction, the basis for the further construction of the electrochemical detection system, could be achieved at pH 1 on gold electrodes. An amperometric detection method for oxidized TMB, using a custom-made flow cell for screen-printed electrodes, was established and applied to a competitive magnetic bead-based immunoassay for the mycotoxin ochratoxin A. A limit of detection of 150 pM (60 ng/L) was obtained, and the results were verified with optical detection. The applicability of the magnetic bead-based immunoassay was tested in spiked beer using a handheld potentiostat connected via Bluetooth to a smartphone for amperometric detection, allowing ochratoxin A to be quantified down to 1.2 nM (0.5 µg/L).
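As a consistency check on the reported detection limits, molar and mass concentrations are related via the molar mass of ochratoxin A (approximately 403.8 g/mol, a literature value):

```latex
% Converting a molar detection limit to a mass concentration:
\[
c_{\mathrm{mass}} = c_{\mathrm{molar}} \cdot M
\]
% 150 pM: 150e-12 mol/L x 403.8 g/mol ~ 61 ng/L   (reported: 60 ng/L)
% 1.2 nM: 1.2e-9  mol/L x 403.8 g/mol ~ 0.48 ug/L (reported: 0.5 ug/L)
```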
Based on the developed electrochemical detection system for TMB, the applicability of the approach was demonstrated with a magnetic bead-based immunoassay for the ergot alkaloid ergometrine. Under optimized assay conditions, a limit of detection of 3 nM (1 µg/L) was achieved, and ergometrine levels in spiked rye flour samples could be quantified in a range from 25 to 250 µg/kg. All results were verified with optical detection. The developed electrochemical detection method for TMB holds great promise for the detection of TMB in many other HRP-based assays.
A new sensing approach, based on an enzymatic electrochemical detection system for the mycotoxin fumonisin B1, was established using an Aspergillus niger fumonisin amine oxidase (AnFAO). AnFAO was produced recombinantly in E. coli as a maltose-binding protein fusion and catalyzes the oxidative deamination of fumonisins, producing hydrogen peroxide. AnFAO was found to have high storage and temperature stability. The enzyme was coupled covalently to magnetic particles, and the H2O2 produced enzymatically in the reaction with fumonisin B1 was detected amperometrically in a flow injection system using Prussian blue/carbon electrodes and the custom-made wall-jet flow cell. Fumonisin B1 could be quantified down to 1.5 µM (≈ 1 mg/L). The developed system represents a new approach to detecting mycotoxins using enzymes and electrochemical methods.
Evaluation of nitrogen dynamics in high-order streams and rivers based on high-frequency monitoring
(2023)
Nutrient storage, transformation and transport are important processes for environmental and ecological health, as well as for water management planning. Nitrogen is one of the most closely watched elements because of the severe consequences of eutrophication in aquatic systems. Among all nitrogen species, research on nitrate is booming thanks to the widespread deployment of in-situ high-frequency sensors. Monitoring and studying nitrate can thus become a paradigm for other reactive substances that may damage environmental conditions and cause economic losses.
Identifying nitrate storage and transport within a catchment informs the management of agricultural activities and municipal planning. Storm events are periods when hydrological dynamics activate the exchange between nitrate storage and flow pathways. In this dissertation, long-term high-frequency monitoring data from three gauging stations in the Selke River were used to quantify event-scale nitrate concentration-discharge (C-Q) hysteretic relationships. The Selke catchment is divided into three nested subcatchments with heterogeneous physiographic conditions and land use. With quantified hysteresis indices, the impacts of seasonality and landscape gradients on C-Q relationships were explored. For example, arable areas hold a deep nitrate legacy that can be activated by high-intensity precipitation during wetting/wet periods (i.e., strong hydrological connectivity). Hence, specific shapes of C-Q relationships in river networks can identify target locations and periods for agricultural management actions within the catchment to decrease nitrate export into downstream aquatic systems such as the ocean.
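One widely used formulation of such an event-scale hysteresis index (following Lloyd et al., 2016; the dissertation's exact definition may differ) normalizes C and Q within each event and compares the rising and falling limbs at fixed discharge levels:

```latex
% Event-scale hysteresis index at a normalized discharge Q*:
\[
HI(Q^{*}) = C^{*}_{\mathrm{rise}}(Q^{*}) - C^{*}_{\mathrm{fall}}(Q^{*}),
\qquad
Q^{*} = \frac{Q - Q_{\min}}{Q_{\max} - Q_{\min}},
\quad
C^{*} = \frac{C - C_{\min}}{C_{\max} - C_{\min}}
\]
% HI > 0: clockwise loop (proximal, quickly mobilized sources);
% HI < 0: anticlockwise loop (distal or delayed sources).
```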
The capacity of streams to remove nitrate is of both scientific and social interest, motivating its quantification. Although measurements of nitrate dynamics are advanced compared to other substances, methods to directly quantify nitrate uptake pathways remain spatiotemporally limited. The major problem is the complex convolution of hydrological and biogeochemical processes, which usually limits in-situ measurements (e.g., isotope addition) to small streams with steady flow conditions. This makes the extrapolation of nitrate dynamics to large streams highly uncertain; an understanding of in-stream nitrate dynamics in large rivers is therefore still needed. High-frequency monitoring of the nitrate mass balance between upstream and downstream measurement sites can quantitatively disentangle multi-path nitrate uptake dynamics at the reach scale (3-8 km). In this dissertation, this approach was applied in large stream reaches with varying hydro-morphological and environmental conditions over several periods, confirming its ability to disentangle nitrate uptake pathways and their temporal dynamics. Net nitrate uptake, autotrophic assimilation and heterotrophic uptake were disentangled, along with their diel and seasonal patterns. Natural streams can generally remove more nitrate under similar environmental conditions, and heterotrophic uptake becomes dominant during post-wet seasons. Such two-station monitoring provided novel insights into reach-scale nitrate uptake processes in large streams.
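Schematically, such a two-station balance attributes the difference between travel-time-aligned upstream and downstream loads to in-stream uptake per unit benthic area; this is a generic sketch, and the dissertation's implementation accounts for further terms such as lateral inflows:

```latex
% Net areal nitrate uptake from two-station loads:
\[
U(t) = \frac{Q_{\mathrm{up}}\,C_{\mathrm{up}}(t - \tau)
           - Q_{\mathrm{down}}\,C_{\mathrm{down}}(t)}{w\,L}
\]
% tau: reach travel time, w: mean wetted width, L: reach length
% (here 3-8 km); U > 0 indicates net uptake, U < 0 net release.
```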
Long-term in-stream nitrate dynamics can also be evaluated with a water quality model. This is among the first times a data-model fusion approach has been used to upscale the two-station methodology to large streams with complex flow dynamics under long-term high-frequency monitoring, assessing in-stream nitrate retention and its responses to drought disturbance from seasonal to sub-daily scales. Nitrate retention (both net uptake and net release) exhibited substantial seasonality, which also differed between the investigated normal and drought years. In the normal years, winter and early spring exhibited extensive net release; general net uptake then occurred after the annual high-flow season, in later spring and early summer with autotrophic processes dominating, and during later summer-autumn low-flow periods with heterotrophic characteristics predominating. Net nitrate release occurred from late autumn until the next early spring. In the drought years, the late-autumn net releases did not persist as consistently as in the normal years, and autotrophic processes predominated across seasons. These comprehensive results on stream-scale nitrate dynamics improve the understanding of in-stream processes and underline the importance of scientific monitoring schemes for hydrological and water-quality parameters.
Extreme weather and climate events are among the greatest dangers to present-day society. It is therefore important to provide reliable statements on what changes in extreme events can be expected under future global climate change. However, the projected overall response to future climate change is generally the result of a complex interplay between individual physical mechanisms originating within the different climate subsystems. Hence, a profound understanding of these individual contributions is required in order to provide meaningful assessments of future changes in extreme events. One aspect of climate change is the recently observed phenomenon of Arctic Amplification and the related dramatic Arctic sea ice decline, which is expected to continue over the coming decades. The question of the extent to which Arctic sea ice loss can affect atmospheric dynamics and extreme events over the mid-latitudes has received much attention in recent years and remains a highly debated topic.
In this respect, the objective of this thesis is to contribute to a better understanding of the impact of future Arctic sea ice retreat on European temperature extremes and large-scale atmospheric dynamics.
The outcomes are based on model data from the atmospheric general circulation model ECHAM6. Two different sea ice sensitivity simulations from the Polar Amplification Model Intercomparison Project are employed and contrasted with a present-day reference experiment: one with prescribed future sea ice loss over the entire Arctic, and another with sea ice reductions prescribed only locally over the Barents-Kara Seas.
The first part of the thesis focuses on how future Arctic sea ice reductions affect large-scale atmospheric dynamics over the Northern Hemisphere, in terms of changes in the occurrence frequencies of five preferred Euro-Atlantic circulation regimes. A comparison with circulation regimes computed from ERA5 shows that ECHAM6 is able to simulate the regime structures realistically. Both ECHAM6 sea ice sensitivity experiments exhibit similar regime frequency changes. Consistent with tendencies found in ERA5, a more frequent occurrence of a Scandinavian blocking pattern in midwinter is, for instance, detected under future sea ice conditions in the sensitivity experiments. Changes in the occurrence frequencies of circulation regimes in summer, however, are barely detectable.
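Euro-Atlantic circulation regimes of this kind are commonly obtained by k-means clustering of geopotential height anomaly fields; the sketch below illustrates that generic procedure (array names and the input file are hypothetical, and details such as EOF prefiltering are omitted):

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical daily 500-hPa geopotential height anomalies over the
# Euro-Atlantic sector, with shape (time, lat, lon).
z500_anom = np.load("z500_anomalies.npy")
n_days = z500_anom.shape[0]
fields = z500_anom.reshape(n_days, -1)   # flatten each daily map

# Partition the days into k preferred regimes (k = 5 as in the thesis).
km = KMeans(n_clusters=5, n_init=50, random_state=0).fit(fields)

# Regime occurrence frequencies; comparing these between the reference
# and the sea ice sensitivity experiments gives the frequency changes.
freq = np.bincount(km.labels_, minlength=5) / n_days
print(freq)
```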
After identifying suitable regime storylines for the occurrence of European temperature extremes in winter, the previously detected regime frequency changes are used to quantify dynamically and thermodynamically driven contributions to sea ice-induced changes in European winter temperature extremes.
It is shown, for instance, how the preferred occurrence of a Scandinavian blocking regime under low sea ice conditions dynamically contributes to more frequent midwinter cold extremes over Central Europe. In addition, a reduced occurrence frequency of an Atlantic trough regime is linked to fewer winter warm extremes over Central Europe. Furthermore, it is demonstrated how the overall thermodynamical warming effect of sea ice loss can result in less (more) frequent winter cold (warm) extremes, and consequently counteracts the dynamically induced changes.
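Schematically, a regime-based storyline decomposition of this kind splits the change in the occurrence probability of an extreme into a dynamical part (changed regime frequencies) and a thermodynamical part (changed extremes within each regime); this is the generic form, not the thesis' exact notation:

```latex
% Regime-based decomposition of the change in extreme-event probability:
\[
\Delta p(E) \;\approx\;
\underbrace{\sum_{r} \Delta f_{r}\, p(E \mid r)}_{\text{dynamical}}
\;+\;
\underbrace{\sum_{r} f_{r}\, \Delta p(E \mid r)}_{\text{thermodynamical}}
\]
% f_r: occurrence frequency of regime r; p(E|r): probability of the
% extreme E conditional on regime r.
```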
Compared to the winter season, circulation regimes in summer are less suitable as storylines for the occurrence of summer heat extremes.
Therefore, an approach based on circulation analogues is employed in order to quantify the thermodynamically and dynamically driven contributions to sea-ice-induced changes in summer heat extremes over three different European sectors. Reduced occurrences of blocking over Western Russia are detected in the ECHAM6 sea ice sensitivity experiments; however, attributing changes in summer heat extremes to dynamically and thermodynamically induced contributions remains rather challenging.
The shallow layers of the Earth are at the interplay of many physical processes: some are driven by atmospheric forcing (precipitation, temperature, etc.), whereas others originate at depth, for instance ground shaking due to seismic activity. These forcings cause the subsurface to change its mechanical properties continuously, thereby modulating the strength of surface geomaterials and hydrological fluxes. Because our societies settle on and rely on the layers hosting these time-dependent properties, constraining the hydro-mechanical dynamics of the shallow subsurface is crucial for our future geographical development. One way to investigate the ever-changing physical conditions beneath our feet is to infer seismic velocity changes from ambient noise, a technique called seismic interferometry. In this dissertation, I use this method to monitor the evolution of groundwater storage and of damage induced by earthquakes. Two research lines are investigated, comprising the key controls of groundwater recharge in steep landscapes and the predictability and duration of transient physical properties caused by earthquake ground shaking. These two types of dynamics modulate each other and influence the velocity changes in ways that are challenging to disentangle; part of my doctoral research also addresses this interaction. Seismic data from a range of field settings spanning several climatic conditions (wet to arid) in various seismically active areas are considered. I constrain the obtained seismic velocity time series using simple physical models, independent datasets, geophysical tools and nonlinear analysis. Additionally, a methodological development is proposed to improve the time resolution of passive seismic monitoring.
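The core observable of such passive monitoring is the relative seismic velocity change, commonly estimated with the stretching technique: a homogeneous velocity change rescales the coda of the noise correlation function in time, so one seeks the stretching factor that best matches a current correlation to a reference. This is standard formalism, not a scheme specific to this thesis:

```latex
% A homogeneous relative velocity change rescales coda travel times:
\[
\frac{\delta t}{t} = -\frac{\delta v}{v}
\]
% Stretching method: find the factor that best aligns the current
% correlation function h_cur with the reference h_ref
% (sign conventions vary between implementations):
\[
\varepsilon^{\ast} = \arg\max_{\varepsilon}\,
\mathrm{CC}\big[\,h_{\mathrm{ref}}(t),\; h_{\mathrm{cur}}\big(t\,(1+\varepsilon)\big)\big],
\qquad
\frac{\delta v}{v} = -\varepsilon^{\ast}
\]
```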
This cumulative dissertation consists of three full empirical investigations, based on three separate collections of data, dealing with the phenomenon of negotiations in audit processes; they are combined in two research articles. In the first study, I examine internal auditors' views on negotiation interactions with auditees. My research is based on 23 semi-structured interviews with internal auditors (14 in-house and 9 external service providers) to gain insight into when and about what (RQ1), why (RQ2), and how (RQ3) they negotiate with auditees. By adapting the Gibbins et al. (2001) negotiation framework to the context of internal auditing, I obtain specific process elements (negotiation issue, auditor-auditee process, and outcome) and context elements that form the basis of my analyses. Through the additional use of inductive procedures, I conclude that internal auditors negotiate when they face professional and non-professional resistance from auditees during the audit process (RQ1). This resistance occurs across a variety of audit types and audit issues. Internal auditors choose negotiation to overcome this resistance primarily out of functional interest, as they cannot simply instruct auditees to acknowledge the findings and implement the required actions (RQ2). I find that the implementation of the required actions is the main goal of the respondents, which is also an important quality factor for internal auditing. Although few respondents interpret these interactions with auditees as negotiations, all respondents use a variety of negotiation strategies to create value (e.g., cost cutting, logrolling, and bridging) and claim value (e.g., positional commitments and threats) (RQ3). Finally, I contribute to empirical research on internal audit negotiations and internal audit quality by shedding light on the black box of internal auditor-auditee interactions.
The second study consists of two experiments that examine the effects of tax auditors' emotion expressions during tax audit negotiations. In the first experiment, we demonstrate that auditors expressing anger obtain more concessions from taxpayers than auditors expressing happiness. This reveals that taxpayers interpret auditors' emotions strategically rather than responding affectively. In the second experiment, we show that the experience with an auditor who expressed either happiness or anger reduces taxpayers' post-audit compliance compared to the experience with an emotionally neutral auditor. Apparently, taxpayers use their experience with an emotional auditor to rationalize later noncompliance. Taken together, both experiments show the potentially detrimental effects of positive and negative emotion expressions by the auditor and point to the benefits of avoiding emotion expressions. We find that auditors who avoid emotion expressions do not obtain fewer concessions from taxpayers than auditors who express anger. However, avoiding emotion expressions leads to a significantly better evaluation of the taxpayer-auditor relationship and significantly reduces taxpayers' post-audit noncompliance.
Despite the popularity of thermoresponsive polymers, much is still unknown about their behavior, how it is triggered, and what factors influence it, hindering the full exploitation of their potential. One particularly puzzling phenomenon is co-nonsolvency, in which a polymer is soluble in two individual solvents but, counter-intuitively, becomes insoluble in mixtures of both. Despite the innumerable potential applications of such systems, including actuators, viscosity regulators and carrier structures, this field has not been extensively studied beyond the classical example of poly(N-isopropylacrylamide) (PNIPAM) in mixtures of water and methanol. This thesis therefore focuses on evaluating how changes in the chemical structure of the polymers impact the thermoresponsive, aggregation and co-nonsolvency behaviors of both homopolymers and amphiphilic block copolymers. Within this scope, both the synthesis of the polymers and their characterization in solution are investigated. Homopolymers were synthesized by conventional free radical polymerization, whereas block copolymers were synthesized by consecutive reversible addition-fragmentation chain transfer (RAFT) polymerizations. The synthesis of the monomers N-isopropylmethacrylamide (NIPMAM) and N-vinylisobutyramide (NVIBAM), as well as of a few chain transfer agents, is also covered. Through turbidimetry measurements, the thermoresponsive and co-nonsolvency behavior of PNIPMAM and PNVIBAM homopolymers is compared to that of the well-known PNIPAM in aqueous solutions with nine different organic co-solvents. Additionally, the effects of end-groups, molar mass, and concentration are investigated. Despite the similarity of their chemical structures, the three homopolymers show significant differences in transition temperatures and some divergences in their co-nonsolvency behavior. More complex systems are also evaluated, namely amphiphilic di- and triblock copolymers of PNIPAM and PNIPMAM with polystyrene and poly(methyl methacrylate) hydrophobic blocks. Dynamic light scattering (DLS) is used to evaluate their aggregation behavior in aqueous and mixed aqueous solutions, and how it is affected by the chemical structure of the blocks, the chain architecture, the presence of co-solvents and the polymer concentration. The results shed light on the thermoresponsive, co-nonsolvency and aggregation behavior of these polymers in solution, providing valuable information for the design of systems with a desired aggregation behavior that generate targeted responses to changes in temperature and solvent mixture.
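For reference, the aggregate sizes probed by DLS follow from the measured translational diffusion coefficient via the Stokes-Einstein relation (textbook formalism, not specific to this thesis):

```latex
% Stokes-Einstein relation linking the DLS-measured diffusion
% coefficient D to the hydrodynamic radius of an aggregate:
\[
R_h = \frac{k_B T}{6 \pi \eta D}
\]
% k_B: Boltzmann constant, T: absolute temperature,
% eta: solvent viscosity.
```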
The use of light in the fabrication of materials is as encouraging as it is challenging. Steadily increasing energy consumption, in step with rapid population growth, demands solutions that keep pace. Therefore, creating, designing and manufacturing materials that can interact with light and be deployed in photo-based applications is very much the focus of researchers. In the era of sustainability and renewable energy systems, semiconductor-based photoactive materials have received great attention, not only for generating solar and/or hydrocarbon fuels from solar energy, but also for successfully driving photocatalytic reactions such as water splitting, pollutant degradation and organic molecule synthesis. A turning point was reached for water splitting when Fujishima and Honda demonstrated water photolysis in 1972 with an electrochemical cell consisting of a TiO2-Pt electrode pair, illuminated by UV light as the energy source rather than driven by an external voltage. Ever since, there has been a great deal of interest in research on semiconductors (e.g., metal oxide, metal-free organic, noble-metal complex) exhibiting band gaps effective for photochemical reactions. With regard to environmental friendliness, the toxicity of metal-based semiconductors restricts some possible applications. In this respect, the very robust and 'earth-abundant' organic semiconductor graphitic carbon nitride (g-CN) has been synthesized and successfully applied as a novel photocatalyst in photoinduced applications. Properties such as a suitable band gap, low charge-carrier recombination and feasibility for scale-up pave the way for advanced combinations with other catalysts to achieve higher photoactivity based on compatible heterojunctions.
This dissertation aims to demonstrate a series of combinations between the organic semiconductor g-CN and polymer materials that are forged through photochemistry, either in synthesis or in application. Fabrication and design processes, as well as applications performed within the scope of this thesis, are elucidated in detail. In addition to UV light, particular attention is placed on visible light as the energy source, with a vision of greater sustainability and better scalability in the creation of novel materials and solar-energy-based applications.
Lithium-ion capacitors (LICs) are promising energy storage devices that asymmetrically combine an anode with a high energy density, close to that of lithium-ion batteries, and a cathode with a high power density and long-term stability, close to those of supercapacitors. For the further improvement of LICs, the development of electrode materials with hierarchical porosity, nitrogen-rich lithiophilic sites, and good electrical conductivity is essential. Nitrogen-rich all-carbon composite hybrids meet these conditions while offering high stability and tunability, opening a path to high-performance LICs. In this thesis, two different all-carbon composites are proposed to unveil how the pore structure of lithiophilic composites influences the properties of LICs. First, a composite of zero-dimensional zinc-templated carbon (ZTC) and hexaazatriphenylene-hexacarbonitrile (HAT) is examined to establish how its pore structure relates to its Li-ion storage properties as an LIC electrode. As the pore structure of the HAT/ZTC composite is easily tunable via the synthesis conditions and the ratio of the two components, the results allow deeper insights into Li-ion dynamics at different porosities and enable low-cost synthesis by optimizing the HAT:ZTC ratio. Second, a composite of one-dimensional nanoporous carbon fiber (ACF) and cost-effective melamine is proposed as a promising all-carbon hybrid for large-scale application. Since ACF features ultra-micropores, numerical structure-property relationships are derived not only from the total pore volume but, more specifically, from the ultra-micropore volume. From these results, it becomes possible to understand how hybrid all-carbon composites interact with lithium ions at the nanoscale, and how structural properties affect energy storage performance. This understanding, derived from simple materials modeling, provides a guide for designing practical hybrid materials for efficient LIC electrodes.
The collaboration-based professional development approach Lesson Study (LS), which has its roots in the Japanese education system, has gained international recognition over the past three decades and spread quickly throughout the world. LS is a collaborative approach to professional development (PD) that incorporates multiple characteristics identified in the research literature as key to effective PD. Specifically, LS is a long-term process consisting of successive inquiry cycles; it is site-based and integrated into teachers' practice; it encourages collaboration and reflection; it places a strong emphasis on student learning; and it typically involves external experts who support the process or offer additional insights.
As LS integrates all these characteristics, it has rapidly gained international popularity since the turn of the 21st century and is currently practiced in over 40 countries around the world. This international borrowing of the idea of LS into new national contexts has given rise to a research field that investigates the effectiveness of LS for teacher learning, as well as the circumstances and mechanisms that make LS effective in various settings around the world. Such research is important, as borrowing educational innovations and adapting them to new contexts can be a challenging process. Educational innovations that fail to deliver the expected outcomes tend to be abandoned prematurely, before they have been completely understood or a substantial research base has been established.
In order to prevent LS from early abandonment, Lewis and colleagues outlined three critical research needs in 2006, not long after LS was initially introduced to the United States. These research needs included (1) developing a descriptive knowledge base on LS, (2) examining the mechanisms by which teachers learn through LS, and (3) using design-based research cycles to analyze and improve LS.
This dissertation set out to take stock of the progress that has been made on these research needs over the past 20 years. The scoping review conducted for the framework of this dissertation indicates that, while a large and international knowledge base has been developed, the field has not yet produced reliable evidence of the effectiveness of LS. Based on the scoping review, this dissertation makes the case that Lewis et al.’s (2006) critical research needs should be updated. In order to do so, a number of limitations to the current knowledge base on LS need to be addressed. These limitations include (1) the frequent lack of comparable and replicable descriptions of the LS intervention in publications, (2) the incoherent use or lack of use of theoretical frameworks to explain teacher learning through LS, (3) the inconsistent use of terminology and concepts, and (4) the lack of scientific rigor in research studies and of established ways or tools to measure the effectiveness of LS.
This dissertation aims to advance the critical research needs in the field by examining the extent and nature of these limitations in three research studies. The focus of these studies lies on the LS stages of observation and reflection, as these stages have a high potential to facilitate teacher learning. The first study uses a mixed-method design to examine how teachers at German primary schools reflect critically together. The study derives a theory-based definition of critical and collaborative reflection in order to re-frame the reflection element in LS.
The second study, a systematic review of 129 articles on LS, assesses how transparently research articles report how teachers observed and reflected together. In addition, it investigates whether these articles provide any kind of theorization for the stages of observation and reflection.
The third study proposes a conceptual model for the field of LS that is based on existing models of continuous professional development and on research findings on team effectiveness and collaboration. The model describes the dimensions of input, mediating mechanisms, and outcomes in order to provide a conceptual grid for teachers' continuous professional development through LS.
Fabricating electronic devices from natural, renewable resources has been a common goal in engineering and materials science for many years. In this regard, carbon is of special significance due to its biological compatibility. In the laboratory, carbonized materials and their composites have proven to be promising solutions for a range of future applications in electronics, optoelectronics, and catalytic systems. On the industrial scale, however, their application is inhibited by tedious and expensive preparation processes and a lack of control over the processing and material parameters. Therefore, we are exploring new concepts for the direct utilization of functional carbonized materials in electronic applications. In particular, laser-induced carbonization (carbon laser-patterning, CLaP) is emerging as a new tool for the precise and selective synthesis of functional carbon-based materials for flexible on-chip applications.
We developed an integrated approach for on-the-spot laser-induced synthesis of flexible, carbonized films with specific functionalities. To this end, we design versatile precursor inks made from naturally abundant starting compounds and reactants to cast films which are carbonized with an infrared laser to obtain functional patterns of conductive porous carbon networks. In our studies we obtained deep mechanistic insights into the formation process and the microstructure of laser-patterned carbons (LP-C). We shed light on the kinetic reaction mechanism based on the interplay between the precursor properties and the reaction conditions. Furthermore, we investigated the use of porogens, additives, and reactants to provide a toolbox for the chemical and physical fine-tuning of the electronic and surface properties and the targeted integration of functional sites into the carbon network. Based on this knowledge, we developed prototype resistive chemical and mechanical sensors. In further studies, we show the applicability of LP-C as electrode materials in electrocatalytic and charge-storage applications.
To put our findings into a common perspective, the general part embeds our results into the context of general carbonization strategies, the fundamentals of laser-induced materials processing, and a broad literature review of state-of-the-art laser-carbonization.
Reiz der Revolution
(2023)
The dissertation examines the manifold entanglements and transfers within the German solidarity movement with Nicaragua in the late 1970s and the 1980s. Even before coming to power, the Sandinistas had courted foreign state and civil support in both Cold War camps. Now, while building the Sandinista reform state, they also developed an international network of solidarity relations, which served to finance their social reform programmes but also to legitimise their rule.
In the Federal Republic alone, several hundred solidarity groups emerged. In the GDR, the political leadership set off a state-directed campaign of solidarity with Nicaragua, which tens of thousands of people and independent grassroots initiatives joined. Despite being rooted in rival systems and despite the heterogeneity of their worldviews, ranging from Christian social teaching to the critical left, numerous solidarity initiatives in both countries worked towards the same objective: a Nicaragua beyond the blocs. Together with their Nicaraguan project partners, they opened up a new transnational space for communication, encountering differences and disputes over political ideas that stimulated new practices on both sides of the Atlantic.
The study is based on an extensive evaluation of sources in a total of 13 archives, including the archive of the Robert-Havemann-Gesellschaft, the archive of the BStU, various West German social movement archives, and the archival papers of the Nicaraguan Ministry of Culture.
Justice structures societies and social relations of any kind; its psychological integration provides a fundamental cornerstone for social, moral, and personality development. The trait of justice sensitivity (JS; Schmitt et al., 2005, 2010) captures individual differences in responses toward perceived injustice. JS has shown substantial relations to social and moral behavior in adult and adolescent samples; however, it has not yet been investigated in middle childhood despite this being a sensitive phase for personality development. JS differentiates into underlying perspectives that are either more self- or other-oriented regarding injustice, with diverging outcome relations. The present research project investigated JS and its perspectives in children aged 6 to 12 years, with a special focus on variables of social and moral development as potential correlates and outcomes, in four cross-sectional studies. Study 1 started with a closer investigation of JS trait manifestation, measurement, and relations to important variables from the nomological network, such as temperamental dimensions, social-cognitive skills, and global pro- and antisocial behavior, in a pilot sample of children from southern Germany. Study 2 investigated relations between JS and distributive behavior following distributive principles in a large-scale data set of children from Berlin and Brandenburg. Study 3 explored the relations of JS with moral reasoning, moral emotions, and moral identity as important precursors of moral development in the same large-scale data set. Study 4 investigated punishment motivation to even out, prevent, or compensate for norm transgressions in a subsample, whereby JS was considered as a potential predictor of different punishment motives. All studies indicated that a large-scale, economical measurement of JS is possible at least from middle childhood onward. JS showed relations to temperamental dimensions, social skills, and global social behavior; distributive decisions and preferences for distributive principles; moral reasoning, emotions, and identity; as well as punishment motivation, indicating that trait JS is highly relevant for social and moral development. The underlying self- or other-oriented perspectives showed diverging correlate and outcome relations, mostly in line with theory and previous findings from adolescent and adult samples, but also provided new theoretical ideas on the construct and its differentiation. Findings point to an early internal justice motive underlying trait JS, but to additional motivations underlying the JS perspectives. Caregivers, educators, and clinical psychologists should pay attention to children’s JS and promote an adaptive justice-related personality development to foster children’s prosocial and moral development as well as their mental health.
Visual perception is a complex and dynamic process that plays a crucial role in how we perceive and interact with the world. The eyes move in a sequence of saccades and fixations, actively modulating perception by moving different parts of the visual world into focus. Eye movement behavior can therefore offer rich insights into the underlying cognitive mechanisms and decision processes. Computational models in combination with a rigorous statistical framework are critical for advancing our understanding in this field, facilitating the testing of theory-driven predictions and accounting for observed data. In this thesis, I investigate eye movement behavior through the development of two mechanistic, generative, and theory-driven models. The first model is based on experimental research regarding the distribution of attention, particularly around the time of a saccade, and explains statistical characteristics of scan paths. The second model implements a self-avoiding random walk within a confining potential to represent the microscopic fixational drift, which is present even while the eye is at rest, and investigates the relationship to microsaccades. Both models are implemented in a likelihood-based framework, which supports the use of data assimilation methods to perform Bayesian parameter inference at the level of individual participants, analyses of the marginal posteriors of the interpretable parameters, model comparisons, and posterior predictive checks. The application of these methods enables a thorough investigation of individual variability in the space of parameters. Results show that dynamical modeling and the data assimilation framework are highly suitable for eye movement research and, more generally, for cognitive modeling.
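To give a flavor of the likelihood-based Bayesian machinery at the level of a single participant, the following is a minimal, self-contained Python sketch using a random-walk Metropolis sampler on a toy likelihood. The model, data, and parameter names are placeholders for illustration; they are not the thesis's scan-path or fixational-drift models.

```python
import numpy as np

def log_likelihood(theta, data):
    # Placeholder likelihood: in a real model this would score an observed
    # scan path or drift trajectory given the interpretable parameters theta.
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    return np.sum(-0.5 * ((data - mu) / sigma) ** 2
                  - np.log(sigma) - 0.5 * np.log(2 * np.pi))

def log_prior(theta):
    mu, log_sigma = theta
    # Weakly informative Gaussian priors on both parameters.
    return -0.5 * (mu ** 2 / 10.0 + log_sigma ** 2 / 10.0)

def metropolis(data, n_steps=5000, step=0.1, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.zeros(2)                                # start at the prior mean
    log_post = log_likelihood(theta, data) + log_prior(theta)
    samples = []
    for _ in range(n_steps):
        proposal = theta + step * rng.normal(size=2)   # random-walk proposal
        log_post_new = log_likelihood(proposal, data) + log_prior(proposal)
        if np.log(rng.uniform()) < log_post_new - log_post:
            theta, log_post = proposal, log_post_new   # accept the proposal
        samples.append(theta.copy())
    return np.array(samples)

# Toy data standing in for summary statistics of one participant.
data = np.random.default_rng(1).normal(1.5, 0.5, size=200)
posterior = metropolis(data)
print(posterior[1000:].mean(axis=0))  # posterior means after burn-in
```

Marginal posteriors, model comparisons, and posterior predictive checks, as described in the abstract, would then operate on such per-participant samples.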
This dissertation focuses on the handling of time in dialogue. Specifically, it investigates how humans bridge time, or “buy time”, when they are expected to convey information that is not yet available to them (e.g. a travel agent searching for a flight in a long list while the customer is on the line, waiting). It also explores the feasibility of modeling such time-bridging behavior in spoken dialogue systems, and it examines how endowing such systems with more human-like time-bridging capabilities may affect humans’ perception of them.
The relevance of time-bridging in human-human dialogue seems to stem largely from a need to avoid lengthy pauses, as these may cause both confusion and discomfort among the participants of a conversation (Levinson, 1983; Lundholm Fors, 2015). However, this avoidance of prolonged silence is at odds with the incremental nature of speech production in dialogue (Schlangen and Skantze, 2011): Speakers often start to verbalize their contribution before it is fully formulated, and sometimes even before they possess the information they need to provide, which may result in them running out of content mid-turn.
In this work, we elicit conversational data from humans, to learn how they avoid being silent while they search for information to convey to their interlocutor. We identify commonalities in the types of resources employed by different speakers, and we propose a classification scheme. We explore ways of modeling human time-buying behavior computationally, and we evaluate the effect on human listeners of embedding this behavior in a spoken dialogue system.
Our results suggest that a system using conversational speech to bridge time while searching for information to convey (as humans do) can provide a better experience in several respects than one which remains silent for a long period of time. However, not all speech serves this purpose equally: Our experiments also show that a system whose time-buying behavior is more varied (i.e. which exploits several categories from the classification scheme we developed and samples them based on information from human data) can prevent overestimation of waiting time when compared, for example, with a system that repeatedly asks the interlocutor to wait (even if these requests for waiting are phrased differently each time). Finally, this research shows that it is possible to model human time-buying behavior on a relatively small corpus, and that a system using such a model can be preferred by participants over one employing a simpler strategy, such as randomly choosing utterances to produce during the wait, even when the utterances used by both strategies are the same.
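A varied time-buying strategy of the kind described above can be sketched as weighted sampling over utterance categories. The category names, probabilities, and example utterances below are invented for illustration; the actual classification scheme and frequencies come from the elicited corpus.

```python
import random

# Hypothetical category inventory with illustrative empirical frequencies.
TIME_BUYING_CATEGORIES = {
    "filler":          (0.30, ["uhm...", "let's see..."]),
    "status_update":   (0.25, ["I'm still searching the list."]),
    "request_to_wait": (0.15, ["One moment, please."]),
    "echo":            (0.20, ["A flight to Lisbon on Friday, right?"]),
    "small_talk":      (0.10, ["Busy day today!"]),
}

def next_time_buying_utterance(rng=random):
    """Sample a category by its empirical probability, then an utterance."""
    categories = list(TIME_BUYING_CATEGORIES)
    weights = [TIME_BUYING_CATEGORIES[c][0] for c in categories]
    category = rng.choices(categories, weights=weights, k=1)[0]
    return category, rng.choice(TIME_BUYING_CATEGORIES[category][1])

# While the backend search is pending, produce varied speech instead of silence.
for _ in range(3):
    print(next_time_buying_utterance())
```

Sampling categories by corpus frequency, rather than repeating one category, is what keeps the behavior varied in the sense evaluated above.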
Advances in hydrogravimetry
(2023)
The interest of the hydrological community in the gravimetric method has steadily increased over the last decade. This is reflected by numerous studies from many different groups with a broad range of approaches and foci. Many of these are traditionally hydrology-oriented groups that recognized gravimetry as a potential added value for their hydrological investigations. While this resulted in a variety of interesting and useful findings that extended the respective knowledge and confirmed the method's potential, many interesting questions remained unresolved.
This thesis presents efforts, analyses, and solutions carried out in this regard. By addressing and evaluating many of those unresolved questions, the research contributes to advancing hydrogravimetry, the combination of gravimetric and hydrological methods, and shows that gravimeters are a highly useful tool for applied hydrological field research.
In the first part of the thesis, traditional setups of stationary terrestrial superconducting gravimeters are addressed. They are commonly installed within a dedicated building, the impermeable structure of which shields the underlying soil from the natural exchange of water masses (infiltration, evapotranspiration, groundwater recharge). As gravimeters are most sensitive to mass changes directly beneath the meter, this could impede their suitability for local hydrological process investigations, especially for near-surface water storage changes (WSC). By studying temporal local hydrological dynamics at a dedicated site equipped with traditional hydrological measurement devices, both below and next to the building, the impact of these absent natural dynamics on the gravity observations was quantified. A comprehensive analysis with both a data-based and a model-based approach led to the development of an alternative method for dealing with this limitation. Based on determinable parameters, this approach can be transferred to a broad range of measurement sites where gravimeters are deployed in similar structures. Furthermore, the extensive considerations on this topic enabled a more profound understanding of this so-called umbrella effect.
The second part of the thesis is a pilot study on the field deployment of a superconducting gravimeter. A newly developed field enclosure for this gravimeter was tested in an outdoor installation adjacent to the building used to investigate the umbrella effect. Analyzing and comparing the gravity observations from the indoor and outdoor gravimeters showed that their performance with respect to noise and stable environmental conditions was equivalent, while the sensitivity to near-surface WSC was greatly increased for the field-deployed instrument. Furthermore, it was demonstrated that the latter setup registered gravity changes independent of the depth at which mass changes occurred, given a sufficiently wide horizontal extent. As a consequence, the field setup is much better suited to monitoring WSC over both short and long time periods. Based on a coupled data-modeling approach, its gravity time series was successfully used to infer and quantify local water budget components (evapotranspiration, lateral subsurface discharge) on daily to annual time scales.
The third part of the thesis applies data from a gravimeter field deployment to applied hydrological process investigations. To this end, again at the same site, a sprinkling experiment was conducted in a 15 x 15 m area around the gravimeter. A simple hydro-gravimetric model was developed for calculating the gravity response resulting from water redistribution in the subsurface. It was found that, from a theoretical point of view, different subsurface water distribution processes (macropore flow, preferential flow, wetting front advancement, bypass flow, and perched water table rise) lead to a characteristic shape of their resulting gravity response curve. Although this approach made it possible to identify a dominant subsurface water distribution process for this site, some clear limitations stood out. Despite the advantage for field installations that gravimetry is a non-invasive and integral method, the problem of non-uniqueness could only be overcome by additional measurements (soil moisture, electrical resistivity tomography) within a joint evaluation. Furthermore, the simple hydrological model was efficient for theoretical considerations but lacked the capability to resolve some heterogeneous spatial structures of the water distribution at the required scale. Nevertheless, this unique setup for plot- to small-scale hydrological process research underlines the high potential of gravimetry and the benefit of a field deployment.
The fourth and last part is dedicated to the evaluation of potential uncertainties arising from the processing of gravity observations. The gravimeter senses all mass variations in an integral way, with the gravitational attraction being directly proportional to the magnitude of the change and inversely proportional to the square of its distance from the sensor. Consequently, all gravity effects (for example, tides, atmosphere, non-tidal ocean loading, polar motion, global hydrology, and local hydrology) are included in an aggregated manner. To isolate the signal components of interest for a particular investigation, all non-desired effects have to be removed from the observations. This process is called reduction. The large-scale effects (tides, atmosphere, non-tidal ocean loading, and global hydrology) cannot be measured directly, so global model data are used to describe and quantify each effect. Within the reduction process, model errors and uncertainties propagate into the residual, the result of the reduction. The focus of this part of the thesis is quantifying the resulting propagated uncertainty for each individual correction. Different superconducting gravimeter installations were evaluated with respect to their topography, distance to the ocean, and climate regime. Furthermore, different aggregation periods of gravity observation data were assessed, ranging from 1 hour up to 12 months. It was found that uncertainties were highest for 6-month periods and smallest for hourly periods. Distance to the ocean influences the uncertainty of the non-tidal ocean loading component, while geographical latitude affects uncertainties of the global hydrological component. It is important to highlight that the resulting correction-induced uncertainties in the residual have the potential to mask the signal of interest, depending on the signal's magnitude and frequency. These findings can be used to assess the value of gravity data across a range of applications and geographic settings.
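The two relations underlying this part can be written compactly. The following is a minimal sketch in standard notation, not taken verbatim from the thesis: the point-mass approximation for the gravity effect of a local mass change, and the residual as observation minus modelled large-scale effects (the component names are illustrative).

```latex
% Point-mass approximation for the gravity effect of a local mass change
% \delta m at distance r from the sensor (G: gravitational constant):
\delta g = G \, \frac{\delta m}{r^{2}}
% Reduction: the residual is the observation minus the modelled
% large-scale effects (component names are illustrative):
g_{\mathrm{res}}(t) = g_{\mathrm{obs}}(t) - g_{\mathrm{tides}}(t)
  - g_{\mathrm{atm}}(t) - g_{\mathrm{ntol}}(t) - g_{\mathrm{pol}}(t)
  - g_{\mathrm{glob.hyd}}(t)
```

The inverse-square dependence is why local, near-surface water storage changes dominate the hydrological signal, and why model errors in any subtracted term propagate one-to-one into the residual.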
In an overarching synthesis, all results and findings are discussed with a general focus on their added value for bringing hydrogravimetric field research to a new level. The conceptual and applied methodological benefits for hydrological studies are highlighted. An outlook on future setups and study designs once again demonstrates the enormous potential of gravimeters as hydrological field tools.
This cumulative doctoral thesis consists of five empirical studies examining various aspects of crisis and change from a management-accounting perspective. Within the first study, a bibliometric analysis is conducted. More precisely, based on publications between the financial crisis (from 2007) and the COVID-19 crisis (starting in 2020), the crisis literature in management accounting is investigated to uncover the most influential aspects of the field and to analyze the theoretical foundations of the literature. Moreover, this investigation also serves to identify future research streams and to provide starting points for future research. Based on a survey, the second study investigates the impact of several management-accounting tools on organizational resilience and its effect on a company's competitive advantage during a crisis. The results show that their target-oriented use positively influences organizational resilience and contributes to the company's competitive advantage during the crisis. The third study provides a more detailed view of the relationship between budgeting and risk management and their benefits for companies in times of crisis. For this purpose, the relationship between the relevance of budgeting functions and risk management in the company and the corresponding impact on company performance are investigated. The results show a positive relationship. However, a crisis can also affect the relationship between a company and its shareholders: Thus, the fourth study, based on publicly available data and a survey, examines the consequences of virtual annual general meetings for shareholder rights. The results show that, temporarily, the right to information in particular was severely restricted. For the following year, this problem was fixed, and ultimately, the virtual option was introduced permanently. The crisis has thus brought about a lasting change. But crises are not the only drivers of change: The fifth study, also based on survey data, investigates the changes in the role of management accountants caused by digitalization. More precisely, it investigates how management accountants deal with tasks that are considered outdated and unattractive. The results show that different personality types act differently as far as the willingness to perform those unattractive tasks is concerned, and that career ambitions also influence this willingness. In addition, the results provide insights into the motivation of management accountants to perform such tasks and thus counteract existing assumptions based on stereotypes and clichés circulating within the research community.
Humboldtian science aims at an empirically supported, transdisciplinary, and at the same time transareal development of a world consciousness. In the development of this world consciousness, not only Europe and the Americas but also Central Asia and especially China play an important role. The Humboldt Center for Transdisciplinary Studies (HCTS) in Changsha attempts to address the fact that China has been largely left out of international Humboldt studies, even though Alexander von Humboldt engaged intensively with Central Asia and China for decades. The Humboldt Center in Changsha has therefore set itself the goal of expanding Humboldt studies to include this important aspect, of stimulating and coordinating dedicated research, and of building scientific and cultural bridges between Germany and China, Europe and Asia.
This text attempts to reframe biographically the topic of "Humboldt and slavery", which is widely known and well researched in Humboldt scholarship, guided by the following thesis: In the decades following the start of his journey through the American tropics (1799–1804), Humboldt passed through various phases in his engagement with the topic of slavery. Over the course of these phases, which by no means follow a chronological order but partly run in parallel, Humboldt adopted various social roles that found expression in specific attitudes and actions. Taken together, they sketch a psychogram typical for an understanding of Humboldt's personality: a self-assured moralist; a rigorous scientist proceeding almost like a criminal investigator; a politically reserved actor.
The Humboldtian Magnetic Association (1829–1834), centred in Berlin with four further participating stations, had a predecessor, the Societas meteorologica Palatina (1780–1795), which operated 17 stations distributed across the northern hemisphere at which magnetic observations were carried out. Its successor, with 61 stations distributed across the globe, was the Göttingen Magnetic Association (1834–1841).
The Humboldtian Magnetic Association was the first in which simultaneous observations, so-called corresponding observations, were introduced on the basis of Berlin time. This method was adopted, modified, and improved in Göttingen, where Gauss and Weber had operated a magnetic observatory since 1834; all 61 affiliated stations observed according to Göttingen mean time.
Im Gedenken an Heinz Krumpel
My friend Heinz Krumpel and I shared an always cheerful, unconditional friendship lasting decades. I may say that nothing ever clouded this friendship. Our conversations never needed an introduction, a mutual sizing-up, an attunement to one another. "Do you also believe that Clavijero was the most important Enlightenment philosopher of Latin America?" or "Kant's categorical imperative still holds today, don't you think?" were typical opening lines of our conversations, no matter whether we met in Toluca, Mexico City, or Potsdam. From the first second, familiarity was always the foundation.
Within a German philosophy that was strongly self-absorbed and showed little interest in intercultural and transcultural exchange, Heinz Krumpel always sought open dialogue with Latin America. The philosophy of other latitudes and other schools of thought, and above all the philosophy of the Latin American world he loved so much, were close to his heart. His high regard for the great philosophers of that world, his respectful manner, and the probing questions he addressed to their philosophical approaches were the basis on which he remained faithful for decades to a way of thinking that most philosophers of the German-speaking world did not know even by hearsay. Heinz Krumpel did not let this discourage him and published, in fine succession, books and essays that paved the way to this world, to his world.
Hence also his interest in Alexander von Humboldt. For him, the Prussian scholar of culture and nature was the guarantor that the thread of conversation between the Americas and Europe, between Mexico, Colombia, Peru, or Argentina, must never break; that the thinker of interaction was and remains the symbol of a transatlantic interaction. How often we asked ourselves in our conversations how Alexander would have judged the development of philosophy after Hegel, whose lectures he had still attended.
That Heinz Krumpel championed the cause of Alexander von Humboldt, and naturally also supported our journal HiN – Alexander von Humboldt im Netz, went without saying. Heinz had learned the lessons of history and stood not only for the polylogue, which he conducted on many levels, but also and especially for the polylogical, the many-voiced: for a way of thinking that questions its own positions critically and self-reflexively from different perspectives. That is how I came to know him, and that is how I will always remember him.
Our journal bows in gratitude to Heinz Krumpel for his decades of support. I have therefore asked one of his two sons to write an obituary for our journal, in memory of a person whose cheerfulness, self-criticism, and spontaneity remain present to us all.
Ottmar Ette
Casualties and damages from urban pluvial flooding are increasing. Triggered by short, localized, and intense rainfall events, urban pluvial floods can occur anywhere, even in areas without a history of flooding, and they act on relatively small temporal and spatial scales. Although the cumulative losses from urban pluvial floods are comparable to those from fluvial and coastal floods, most flood risk management and mitigation strategies focus on the latter. Physically based numerical hydrodynamic models are considered the best tool to represent the complex nature of urban pluvial floods; however, they are computationally expensive and time-consuming. These sophisticated models make large-scale analysis and operational forecasting prohibitive. Therefore, it is crucial to evaluate and benchmark the performance of alternative methods.
The findings of this cumulative thesis are presented in three research articles. The first study evaluates two topography-based methods to map urban pluvial flooding, fill–spill–merge (FSM) and the topographic wetness index (TWI), by comparing them against a sophisticated hydrodynamic model. The FSM method identifies flood-prone areas within topographic depressions, while the TWI method employs maximum likelihood estimation to calibrate a TWI threshold (τ) based on inundation maps from the 2D hydrodynamic model. The results indicate that the FSM method outperforms the TWI method. The study then highlights the advantages and limitations of both methods.
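The TWI calibration step can be illustrated with a small Python sketch that searches for the threshold τ whose flood map best matches a reference inundation map. This is a minimal stand-in: it scores agreement with a simple F1 measure rather than the maximum likelihood estimation used in the study, and all rasters are synthetic.

```python
import numpy as np

def calibrate_twi_threshold(twi, flooded, thresholds):
    """Pick the TWI threshold tau whose flood map best matches the
    hydrodynamic reference map (F1 agreement as an illustrative score)."""
    best_tau, best_score = None, -1.0
    for tau in thresholds:
        predicted = twi >= tau          # cells above tau are flood-prone
        tp = np.sum(predicted & flooded)
        fp = np.sum(predicted & ~flooded)
        fn = np.sum(~predicted & flooded)
        f1 = 2 * tp / (2 * tp + fp + fn + 1e-9)
        if f1 > best_score:
            best_tau, best_score = tau, f1
    return best_tau, best_score

# Toy rasters standing in for a TWI grid and a 2D-model inundation map.
rng = np.random.default_rng(0)
twi = rng.normal(8.0, 2.0, size=(100, 100))
flooded = twi + rng.normal(0.0, 1.0, size=twi.shape) > 10.0
print(calibrate_twi_threshold(twi, flooded, np.linspace(5, 12, 29)))
```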
Data-driven models provide a promising alternative to computationally expensive hydrodynamic models. However, the literature lacks benchmarking studies that evaluate the different models' performance, advantages, and limitations, and model transferability in space is a crucial open problem. Most studies focus on river flooding, likely due to the relative availability of flow and rain gauge records for training and validation, and they treat these models as black boxes. The second study uses a flood inventory for the city of Berlin and 11 predictive features that potentially indicate an increased pluvial flood hazard to map urban pluvial flood susceptibility using a convolutional neural network (CNN), an artificial neural network (ANN), and the benchmark machine learning models random forest (RF) and support vector machine (SVM). I investigate the influence of spatial resolution on the implemented models, the models' transferability in space, and the importance of the predictive features. The results show that all models perform well and that the RF models are superior to the other models within and outside the training domain. The models developed using fine spatial resolutions (2 and 5 m) could better identify flood-prone areas. Finally, the results indicate that aspect is the most important predictive feature for the CNN models, while altitude is the most important for the other models.
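A susceptibility map of this kind amounts to training a classifier on per-cell features and mapping the predicted flood-class probability. Below is a minimal scikit-learn sketch on synthetic data; the eleven feature columns merely stand in for the study's predictive features (e.g. altitude, slope, aspect), and the feature importances printed at the end correspond to the kind of analysis reported above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Toy stand-in for the real inventory: rows are raster cells, columns are
# predictive features; the label marks whether the cell flooded.
rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 11))                  # 11 features as in the study
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000)) > 0.8

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0)
rf.fit(X_train, y_train)

# Susceptibility is the predicted probability of the flood class per cell.
susceptibility = rf.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, susceptibility))
print("feature importances:", np.round(rf.feature_importances_, 3))
```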
While flood susceptibility maps identify flood-prone areas, they do not represent flood variables such as velocity and depth, which are necessary for effective flood risk management. To address this, the third study investigates the data-driven models' transferability for predicting urban pluvial floodwater depth and the models' ability to enhance their predictions using transfer learning techniques. It compares the performance of RF (the best-performing model in the previous study) and CNN models using 12 predictive features and output from a hydrodynamic model. The findings of the third study suggest that while CNN models tend to generalise and smooth the target function on the training dataset, RF models suffer from overfitting. Hence, RF models are superior for predictions inside the training domains but fail outside them, while CNN models can limit the relative loss in performance outside the training domains. Finally, the CNN models benefit more from transfer learning techniques than RF models, boosting their performance outside the training domains.
In conclusion, this thesis has evaluated both topography-based methods and data-driven models for mapping urban pluvial flooding. However, further studies are crucial to develop methods that completely overcome the limitations of 2D hydrodynamic models.
This thesis bridges two areas of mathematics: algebra on the one hand, with the Milnor-Moore theorem (also called the Cartier-Quillen-Milnor-Moore theorem) and the Poincaré-Birkhoff-Witt theorem, and analysis on the other hand, with Shintani zeta functions, which generalise multiple zeta functions.
The first part is devoted to an algebraic formulation of the locality principle in physics and to generalisations of classification theorems such as the Milnor-Moore and Poincaré-Birkhoff-Witt theorems to the locality framework. The locality principle roughly says that events that take place far apart in spacetime do not influence each other. The algebraic formulation of this principle discussed here is useful when analysing singularities which arise from events located far apart in space, in order to renormalise them while keeping a memory of the fact that they do not influence each other. We start by endowing a vector space with a symmetric relation, named the locality relation, which keeps track of elements that are "locally independent". The pair of a vector space together with such a relation is called a pre-locality vector space. This concept is extended to tensor products allowing only tensors made of locally independent elements. We extend this concept to the locality tensor algebra and the locality symmetric algebra of a pre-locality vector space and prove the universal properties of each of these structures. We also introduce pre-locality Lie algebras, together with their associated locality universal enveloping algebras, and prove their universal property. We later upgrade all such structures and results from the pre-locality to the locality context, requiring the locality relation to be compatible with the linear structure of the vector space. This allows us to define locality coalgebras, locality bialgebras, and locality Hopf algebras. Finally, all the previous results are used to prove the locality version of the Milnor-Moore and Poincaré-Birkhoff-Witt theorems. It is worth noting that the proofs presented not only generalise the results in the usual (non-locality) setup, but also often require fewer tools than their non-locality counterparts.
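To fix notation, here is a minimal sketch of the basic definitions in LaTeX, reconstructed from the description above; conventions in the locality literature may differ in details.

```latex
% Pre-locality vector space: a pair (V, \top), where \top \subseteq V \times V
% is a symmetric ("locality") relation recording which elements are locally
% independent. Only such pairs enter the locality tensor product:
V \otimes_{\top} V := \operatorname{span}\{\, u \otimes v \mid (u, v) \in \top \,\}.
% (V, \top) is a locality vector space when \top is compatible with the
% linear structure, e.g. when for every subset S \subseteq V the polar set
S^{\top} := \{\, v \in V \mid (v, s) \in \top \ \text{for all } s \in S \,\}
% is a linear subspace of V.
```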
The second part is devoted to the study of the polar structure of Shintani zeta functions. Such functions, which generalise the Riemann zeta function, multiple zeta functions, and Mordell-Tornheim zeta functions, among others, are parametrised by matrices with real non-negative entries. It is known that Shintani zeta functions extend to meromorphic functions with poles on affine hyperplanes. We refine this result by showing that the poles lie on hyperplanes parallel to the facets of certain convex polyhedra associated to the defining matrix of the Shintani zeta function. Explicitly, the latter are the Newton polytopes of the polynomials induced by the columns of the underlying matrix. We then prove that the coefficients of the equations which describe the hyperplanes in the canonical basis are either zero or one, similar to the poles arising when renormalising generic Feynman amplitudes. For that purpose, we introduce an algorithm to distribute weight over a graph such that the weight at each vertex satisfies a given lower bound.
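For orientation, one common parametrisation of these functions reads as follows; conventions vary (e.g. in the summation range and possible shifts), so this should be read as an illustrative form rather than the thesis's exact definition.

```latex
% Shintani zeta function attached to an N x n matrix A = (a_{ij}) with
% non-negative real entries (no row identically zero):
\zeta(A; s_1, \dots, s_N)
  = \sum_{m_1, \dots, m_n \geq 1} \;
    \prod_{i=1}^{N} \Bigl( \sum_{j=1}^{n} a_{ij}\, m_j \Bigr)^{-s_i}
% absolutely convergent for \Re(s_i) large, with meromorphic continuation
% whose poles lie on affine hyperplanes in (s_1, \dots, s_N).
```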
Movement is a mechanism that shapes biodiversity patterns across spatiotemporal scales. Thereby, the movement process affects species interactions, population dynamics, and community composition. In this thesis, I disentangled the effects of movement on the biodiversity of zooplankton, ranging from the individual to the community level. On the individual movement level, I used video-based analysis to explore the implications of movement behavior for prey-predator interactions. My results showed that swimming behavior was of great importance, as it determined survival in the face of predation. The findings additionally highlighted the relevance of the defense status/morphology of prey, as it affected the prey-predator relationship not only through the defense itself but also through plastic movement behavior. On the community movement level, I used a field mesocosm experiment to explore the role of dispersal in time (i.e., from the egg bank into the water body) and space (i.e., between water bodies) in shaping zooplankton metacommunities. My results revealed that priority effects and taxon-specific dispersal limitation influenced community composition. Additionally, different modes of dispersal generated distinct community structures. The egg bank and biotic vectors (i.e., mobile links) played significant roles in the colonization of newly available habitat patches. One crucial aspect that influences zooplankton species after arrival in new habitats is the local environmental conditions. Using common garden experiments, I assessed the performance of zooplankton communities in their home vs. away environments in a group of ponds embedded within an agricultural landscape. I identified environmental filtering as a driving factor, as zooplankton communities from individual ponds developed differently in their home and away environments. On the individual species level, there was no consistent indication of local adaptation: for some species I found a higher abundance/fitness in the home environment, for others the opposite was the case, and some showed no difference.
Overall, the thesis highlights the links between movement and biodiversity patterns, ranging from individual active movement to the community level.
Transferability of data-driven models to predict urban pluvial flood water depth in Berlin, Germany
(2023)
Data-driven models have recently been suggested as surrogates for computationally expensive hydrodynamic models to map flood hazards. However, most studies focused on developing models for the same area or the same precipitation event, so it is not obvious how transferable the models are in space. This study evaluates the performance of a convolutional neural network (CNN) based on the U-Net architecture and the random forest (RF) algorithm in predicting flood water depth, the models' transferability in space, and performance improvement using transfer learning techniques. We used three study areas in Berlin to train, validate, and test the models. The results showed that (1) the RF models outperformed the CNN models for predictions within the training domain, presumably at the cost of overfitting; (2) the CNN models had a significantly higher potential than the RF models to generalize beyond the training domain; and (3) the CNN models could benefit more from transfer learning techniques to boost their performance outside training domains than the RF models.
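To make the transfer learning step concrete, here is a minimal, hypothetical PyTorch sketch: a small encoder-head CNN stands in for the U-Net-style depth model, the encoder (notionally pretrained on the source area) is frozen, and only the head is fine-tuned on target-area data. All names, shapes, and the training loop are illustrative, not the study's implementation.

```python
import torch
import torch.nn as nn

# Hypothetical CNN mapping a stack of feature rasters to a water-depth raster.
class DepthCNN(nn.Module):
    def __init__(self, in_channels=12):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, 1, 1)   # per-cell water depth

    def forward(self, x):
        return self.head(self.encoder(x))

model = DepthCNN()
# model.load_state_dict(torch.load("source_area_weights.pt"))  # hypothetical

# Transfer learning: freeze the generic encoder, fine-tune only the head on
# the small amount of data available from the new (target) area.
for p in model.encoder.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

x = torch.randn(4, 12, 64, 64)        # toy batch: 12 feature rasters
y = torch.rand(4, 1, 64, 64)          # toy target: water depth
for _ in range(10):                   # a few fine-tuning steps
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
print(float(loss))
```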
Carbonates carried in subducting slabs may play a major role in sourcing and storing carbon in the deep Earth's interior. Current estimates indicate that between 40 and 66 million tons of carbon per year enter subduction zones, but it is uncertain how much of it reaches the lower mantle. It appears that most of this carbon is extracted from subducting slabs at the mantle wedge and only a limited amount continues deeper and eventually reaches the deep mantle. However, estimates of deeply subducted carbon range broadly, from 0.0001 to 52 million tons of carbon per year. This disparity is primarily due to the limited understanding of the survival of carbonate minerals during their transport to deep mantle conditions. Indeed, carbon has very low solubility in mantle silicates and is therefore expected to be stored primarily in accessory phases such as carbonates. Among those carbonates, magnesite (MgCO3), as a single phase, is the most stable under all mantle conditions. However, experimental investigation of the stability of magnesite in contact with SiO2 at lower mantle conditions suggests that magnesite is stable only along a cold subducted slab geotherm. Furthermore, our understanding of magnesite's stability when interacting with more complex mantle silicate phases remains incomplete. In the first part of this dissertation, laser-heated diamond anvil cell and multi-anvil apparatus experiments were performed to investigate the stability of magnesite in contact with iron-bearing mantle silicates. Sub-solidus reactions, melting, decarbonation, and diamond formation were examined from shallow to mid-lower mantle conditions (25 to 68 GPa; 1300 to 2000 K). Multi-anvil experiments at 25 GPa show the formation of carbonate-rich melt, bridgmanite, and stishovite, with melting occurring at a temperature corresponding to all geotherms except the coldest one. In situ X-ray diffraction during laser-heated diamond anvil cell experiments shows crystallization of bridgmanite and stishovite, but no melt phase was detected in situ at high temperatures. To detect decarbonation phases such as diamond, Raman spectroscopy was used. Crystallization of diamonds is observed as a sub-solidus process even at temperatures relevant to, and lower than, the coldest slab geotherm (1350 K at 33 GPa). Data obtained from this work suggest that magnesite is unstable in contact with the surrounding peridotitic mantle in the uppermost lower mantle. Instead, the presence of magnesite induces melting under oxidized conditions and/or fosters diamond formation under more reduced conditions at depths of ∼700 km. Consequently, carbonates will be removed from carbonate-rich slabs at shallow lower mantle conditions, where subducted slabs can stagnate. The transport of carbonate to greater depths will therefore be restricted, supporting the presence of a barrier to carbon subduction at the top of the lower mantle. Moreover, the reduction of magnesite to form diamonds provides additional evidence that super-deep diamond crystallization is related to the reduction of carbonates or carbonate-rich melt.
The second part of this dissertation presents the development of a portable laser-heating system optimized for X-ray emission spectroscopy (XES) or nuclear inelastic scattering (NIS) spectroscopy with signal collection at near 90°. The laser-heated diamond anvil cell is the only static pressure device that can replicate the pressures and temperatures of the Earth's lower mantle and core. The high temperatures are reached by focusing high-powered lasers on the sample contained between the diamond anvils. Moreover, the diamonds' transparency to X-rays enables in situ X-ray spectroscopy measurements that can probe the sample under high-temperature and high-pressure conditions. The development of portable laser-heating systems has therefore brought high-pressure and high-temperature research together with high-resolution X-ray spectroscopy techniques at synchrotron beamlines that do not have a dedicated, permanent laser-heating system. A general description of the system is provided, as well as details on the use of a parabolic mirror as a reflective imaging objective for on-axis laser heating and radiospectrometric temperature measurements with zero attenuation of the incoming X-rays. The parabolic mirror improves the accuracy of temperature measurements, free from chromatic aberrations over a wide spectral range, and its perforation permits in situ X-ray measurements at synchrotron facilities. The parabolic mirror is a well-suited alternative to refractive objectives in laser-heating systems, which will facilitate future applications using CO2 lasers.
In model-driven engineering, the adaptation of large software systems with dynamic structure is enabled by architectural runtime models. Such a model represents an abstract state of the system as a graph of interacting components. Every relevant change in the system is mirrored in the model and triggers an evaluation of model queries, which search the model for structural patterns that should be adapted. This thesis focuses on a type of runtime model in which the expressiveness of the model and model queries is extended to capture past changes and their timing. These history-aware models and temporal queries enable more informed decision-making during adaptation, as they support the formulation of requirements on the evolution of the pattern that should be adapted. However, evaluating temporal queries during adaptation poses significant challenges. First, it implies the capability to specify and evaluate requirements on the structure, as well as on the ordering and timing in which structural changes occur. Then, query answers have to reflect that the history-aware model represents the architecture of a system whose execution may be ongoing, and thus answers may depend on future changes. Finally, query evaluation needs to be adequately fast and memory-efficient despite the increasing size of the history, especially for models that are altered by numerous, rapid changes.
The thesis presents a query language and a querying approach for the specification and evaluation of temporal queries. These contributions aim to cope with the challenges of evaluating temporal queries at runtime, a prerequisite for history-aware architectural monitoring and adaptation which has not been systematically treated by prior model-based solutions. The distinguishing features of our contributions are: the specification of queries based on a temporal logic which encodes structural patterns as graphs; the provision of formally precise query answers which account for timing constraints and ongoing executions; the incremental evaluation which avoids the re-computation of query answers after each change; and the option to discard history that is no longer relevant to queries. The query evaluation searches the model for occurrences of a pattern whose evolution satisfies a temporal logic formula. Therefore, besides model-driven engineering, another related research community is runtime verification. The approach differs from prior logic-based runtime verification solutions by supporting the representation and querying of structure via graphs and graph queries, respectively, which is more efficient for queries with complex patterns. We present a prototypical implementation of the approach and measure its speed and memory consumption in monitoring and adaptation scenarios from two application domains, with executions of an increasing size. We assess scalability by a comparison to the state-of-the-art from both related research communities. The implementation yields promising results, which pave the way for sophisticated history-aware self-adaptation solutions and indicate that the approach constitutes a highly effective technique for runtime monitoring on an architectural level.
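The flavor of such temporal queries can be illustrated with a small, purely hypothetical Python sketch: each structural match carries the interval over which it has held, and a duration condition yields only a provisional answer while the execution is ongoing. This illustrates the problem setting, not the thesis's query language or its incremental evaluation algorithm.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Match:
    since: float             # time at which the structural pattern began to hold
    until: Optional[float]   # None while the match is still ongoing

def holds_for(match: Match, duration: float, now: float):
    """Temporal condition: 'the pattern has held continuously for >= duration'.
    Returns (satisfied, definitive); for ongoing matches the answer may still
    change with future changes, so it is only provisional."""
    end = match.until if match.until is not None else now
    satisfied = (end - match.since) >= duration
    definitive = match.until is not None or satisfied
    return satisfied, definitive

# Incremental bookkeeping: on each model change the monitor only updates the
# affected intervals instead of re-searching the whole history.
matches = [Match(since=0.0, until=7.0), Match(since=5.0, until=None)]
for m in matches:
    print(holds_for(m, duration=4.0, now=8.0))
```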
With the recent growth of sensors, cloud computing handles the data processing of many applications. Processing some of this data on the cloud raises, however, many concerns regarding, e.g., privacy, latency, or single points of failure. Alternatively, thanks to the development of embedded systems, smart wireless devices can share their computation capacity, creating a local wireless cloud for in-network processing. In this context, the processing of an application is divided into smaller jobs so that a device can run one or more jobs.
The contribution of this thesis to this scenario is divided into three parts. In part one, I focus on wireless aspects, such as power control and interference management, for deciding which jobs to run on which node and how to route data between nodes. Hence, I formulate optimization problems and develop heuristic and meta-heuristic algorithms to allocate wireless and computation resources. Additionally, to deal with multiple applications competing for these resources, I develop a reinforcement learning (RL) admission controller to decide which application should be admitted. Next, I look into acoustic applications to improve wireless throughput by using microphone clock synchronization to synchronize wireless transmissions.
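To give a flavor of what such an RL admission controller might look like, here is a minimal tabular Q-learning sketch. The state space (discretised network load), reward signal, and dynamics are invented for illustration and do not reproduce the controller developed in the thesis.

```python
import random

# Tabular Q-learning for admission control: the state is the discretised
# network load, the action is admit (1) or reject (0) an arriving application.
N_LOAD_LEVELS, ACTIONS = 5, (0, 1)
Q = {(s, a): 0.0 for s in range(N_LOAD_LEVELS) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(load, action):
    """Toy environment: admitting earns reward but raises load; admitting
    at the highest load level risks congestion and incurs a penalty."""
    if action == 1:
        reward = 1.0 if load < N_LOAD_LEVELS - 1 else -5.0
        load = min(load + 1, N_LOAD_LEVELS - 1)
    else:
        reward = 0.0
        load = max(load - 1, 0)
    return load, reward

load = 0
for _ in range(20000):
    # Epsilon-greedy action selection.
    a = random.choice(ACTIONS) if random.random() < eps else \
        max(ACTIONS, key=lambda x: Q[(load, x)])
    nxt, r = step(load, a)
    best_next = max(Q[(nxt, x)] for x in ACTIONS)
    Q[(load, a)] += alpha * (r + gamma * best_next - Q[(load, a)])
    load = nxt

# Learned policy: admit while load is low, reject near saturation.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_LOAD_LEVELS)})
```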
In the second part, I work jointly with colleagues from the acoustic processing field to optimize both network and application (i.e., acoustic) qualities. My contribution focuses on the network part, where I study the relation between acoustic and network qualities when selecting a subset of microphones for collecting audio data or selecting a subset of optional jobs for processing these data; too many microphones or too many jobs can lessen quality through unnecessary delays. Hence, I develop RL solutions to select the subset of microphones under network constraints while the speaker is moving, while still providing good acoustic quality. Furthermore, I show that autonomous vehicles carrying microphones improve the acoustic quality of different applications. Accordingly, I develop RL solutions (single- and multi-agent ones) for controlling these vehicles.
In the third part, I close the gap between theory and practice. I describe the features of my open-source framework used as a proof of concept for wireless in-network processing. Next, I demonstrate how to run some algorithms developed by colleagues from acoustic processing using my framework. I also use the framework for studying in-network delays (wireless and processing) using different distributions of jobs and network topologies.
Most machine learning methods provide only point estimates when queried to predict on new data. This is problematic when the data is corrupted by noise, e.g. from imperfect measurements, or when the queried data point is very different to the data that the machine learning model has been trained with. Probabilistic modelling in machine learning naturally equips predictions with corresponding uncertainty estimates, which allows a practitioner to incorporate information about measurement noise into the modelling process and to know when not to trust the predictions. A well-understood, flexible probabilistic framework is provided by Gaussian processes, which are ideal as building blocks of probabilistic models. They lend themselves naturally to the problem of regression, i.e., being given a set of inputs and corresponding observations and then predicting likely observations for new unseen inputs, and can also be adapted to many more machine learning tasks. However, exactly inferring the optimal parameters of such a Gaussian process model (in a computationally tractable manner) is only possible for regression tasks in small data regimes. Otherwise, approximate inference methods are needed, the most prominent of which is variational inference.
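For reference, the standard equations for exact GP regression, which underlie the tractability limit mentioned above, can be written as follows (kernel k, training inputs X with noisy targets y, noise variance σ², test input x_*):

```latex
% Exact GP regression posterior, with Gram matrix K = k(X, X):
\mu(x_*) = k(x_*, X) \left[ K + \sigma^{2} I \right]^{-1} y
\qquad
v(x_*) = k(x_*, x_*) - k(x_*, X) \left[ K + \sigma^{2} I \right]^{-1} k(X, x_*)
% Inverting K + \sigma^2 I costs O(n^3), which is why exact inference is
% feasible only in small data regimes and motivates sparse and variational
% approximations.
```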
In this dissertation we study models that are composed of Gaussian processes embedded in other models in order to make those more flexible and/or probabilistic. The first example is deep Gaussian processes, which can be thought of as a small network of Gaussian processes and which can be employed for flexible regression. The second model class that we study are Gaussian process state-space models. These can be used for time-series modelling, i.e., the task of being given a stream of data ordered by time and then predicting future observations. For both model classes the state-of-the-art approaches offer a trade-off between expressive models and computational properties (e.g. speed or convergence properties) and mostly employ variational inference. Our goal is to improve inference in both models by first getting a deep understanding of the existing methods and then, based on this, to design better inference methods. We achieve this by either exploring the existing trade-offs or by providing general improvements applicable to multiple methods.
We first provide an extensive background, introducing Gaussian processes and their sparse (approximate and efficient) variants. We continue with a description of the models under consideration in this thesis, deep Gaussian processes and Gaussian process state-space models, including detailed derivations and a theoretical comparison of existing methods.
Then we start analysing deep Gaussian processes more closely: Trading off the properties (good optimisation versus expressivity) of state-of-the-art methods in this field, we propose a new variational inference based approach. We then demonstrate experimentally that our new algorithm leads to better calibrated uncertainty estimates than existing methods.
Next, we turn our attention to Gaussian process state-space models, where we closely analyse the theoretical properties of existing methods. The understanding gained in this process leads us to propose a new inference scheme for general Gaussian process state-space models that incorporates effects on multiple time scales. This method is more efficient than previous approaches for long time series and outperforms its comparison partners on data sets in which effects on multiple time scales (fast and slowly varying dynamics) are present.
Finally, we propose a new inference approach for Gaussian process state-space models that trades off the properties of state-of-the-art methods in this field. By combining variational inference with another approximate inference method, the Laplace approximation, we design an efficient algorithm that outperforms its comparison partners since it achieves better calibrated uncertainties.
Today, point clouds are among the most important categories of spatial data, as they constitute digital 3D models of the as-is reality that can be created at unprecedented speed and precision. However, their unique properties, i.e., the lack of structure, order, or connectivity information, necessitate specialized data structures and algorithms to leverage their full precision. In particular, this holds true for the interactive visualization of point clouds, which requires balancing hardware limitations regarding GPU memory and bandwidth against a naturally high susceptibility to visual artifacts.
This thesis focuses on concepts, techniques, and implementations of robust, scalable, and portable 3D visualization systems for massive point clouds. To that end, a number of rendering, visualization, and interaction techniques are introduced that extend several basic strategies for decoupling rendering efforts and data management: First, a novel visualization technique that facilitates context-aware filtering, highlighting, and interaction within point cloud depictions. Second, hardware-specific optimization techniques that improve rendering performance and image quality in an increasingly diversified hardware landscape. Third, natural and artificial locomotion techniques for nausea-free exploration in the context of state-of-the-art virtual reality devices. Fourth, a framework for web-based rendering that enables collaborative exploration of point clouds across device ecosystems and facilitates the integration into established workflows and software systems.
In cooperation with partners from industry and academia, the practicability and robustness of the presented techniques are showcased via several case studies using representative application scenarios and point cloud data sets. In summary, the work shows that the interactive visualization of point clouds can be implemented by a multi-tier software architecture with a number of domain-independent, generic system components that rely on optimization strategies specific to large point clouds. It demonstrates the feasibility of interactive, scalable point cloud visualization as a key component for distributed IT solutions that operate with spatial digital twins, providing arguments in favor of using point clouds as a universal type of spatial base data usable directly for visualization purposes.
The trace elements selenium (Se) and copper (Cu) play an important role in maintaining normal brain function. Since they have essential functions as cofactors of enzymes or structural components of proteins, an optimal supply as well as a well-defined homeostatic regulation are crucial. Disturbances in trace element homeostasis affect health status and contribute to the incidence and severity of various diseases. The brain in particular is vulnerable to oxidative stress due to, among other factors, its extensive oxygen consumption and high energy turnover. As components of a number of antioxidant enzymes, both elements are involved in redox homeostasis. However, high concentrations are also associated with the occurrence of oxidative stress, which can induce cellular damage. Especially high Cu concentrations in some brain areas are associated with the development and progression of neurodegenerative diseases such as Alzheimer's disease (AD). In contrast, reduced Se levels have been measured in the brains of AD patients. The opposing behavior of Cu and Se renders the study of these two trace elements, as well as the interactions between them, particularly relevant; both are addressed in this work.
Natural gas hydrates are ice-like crystalline compounds containing water cavities that trap natural gas molecules like methane (CH4), which is a potent greenhouse gas with high energy density. The Mallik site at the Mackenzie Delta in the Canadian Arctic contains a large volume of technically recoverable CH4 hydrate beneath the base of the permafrost. Understanding how the sub-permafrost hydrate is distributed can aid in searching for the ideal locations for deploying CH4 production wells to develop the hydrate as a cleaner alternative to crude oil or coal. Globally, atmospheric warming driving permafrost thaw results in sub-permafrost hydrate dissociation, releasing CH4 into the atmosphere to intensify global warming. It is therefore crucial to evaluate the potential risk of hydrate dissociation due to permafrost degradation. To quantitatively predict hydrate distribution and volume in complex sub-permafrost environments, a numerical framework was developed to simulate sub-permafrost hydrate formation by coupling the equilibrium CH4-hydrate formation approach with a fluid flow and transport simulator (TRANSPORTSE). In addition, integrating the equations of state describing ice melting and forming with TRANSPORTSE enabled this framework to simulate the permafrost evolution during the sub-permafrost hydrate formation. A modified sub-permafrost hydrate formation mechanism for the Mallik site is presented in this study. According to this mechanism, the CH4-rich fluids have been vertically transported since the Late Pleistocene from deep overpressurized zones via geologic fault networks to form the observed hydrate deposits in the Kugmallit–Mackenzie Bay Sequences. The established numerical framework was verified by a benchmark of hydrate formation via dissolved methane. Model calibration was performed based on laboratory data measured during a multi-stage hydrate formation experiment undertaken in the LArge scale Reservoir Simulator (LARS). As the temporal and spatial evolution of simulated and observed hydrate saturation matched well, the LARS model was therefore validated. This laboratory-scale model was then upscaled to a field-scale 2D model generated from a seismic transect across the Mallik site. The simulation confirmed the feasibility of the introduced sub-permafrost hydrate formation mechanism by demonstrating consistency with field observations. The 2D model was extended to the first 3D model of the Mallik site by using well-logs and seismic profiles, to investigate the geologic controls on the spatial hydrate distribution. An assessment of this simulation revealed the hydraulic contribution of each geological element, including relevant fault networks and sedimentary sequences. Based on the simulation results, the observed heterogeneous distribution of sub-permafrost hydrate resulted from the combined factors of the source-gas generation rate, subsurface temperature, and the permeability of geologic elements. Analysis of the results revealed that the Mallik permafrost was heated by 0.8–1.3 °C, induced by the global temperature increase of 0.44 °C and accelerated by Arctic amplification from the early 1970s to the mid-2000s. This study presents a numerical framework that can be applied to study the formation of the permafrost-hydrate system from laboratory to field scales, across timescales ranging from hours to millions of years. 
Overall, these simulations deepen our knowledge of the dominant factors controlling the spatial hydrate distribution in sub-permafrost environments with heterogeneous geologic elements. The framework can support the design of hydrate formation experiments and provide valuable contributions to future industrial hydrate exploration and exploitation activities.
Hybrid nanomaterials offer the combination of the individual properties of different types of nanoparticles. Some strategies for the larger-scale development of new nanostructures rely on the self-assembly of nanoparticles as a bottom-up approach. The use of templates provides ordered assemblies in defined patterns. In a typical soft template, nanoparticles and other surface-active agents are incorporated into immiscible liquids. The resulting self-organized dispersions mediate nanoparticle interactions and thereby control the subsequent self-assembly. In particular, interactions between nanoparticles of very different dispersibility and functionality can be directed at a liquid-liquid interface.
In this project, water-in-oil microemulsions were formulated from quasi-ternary mixtures with Aerosol-OT as surfactant. Oleyl-capped superparamagnetic iron oxide and/or silver nanoparticles were incorporated in the continuous organic phase, while polyethyleneimine-stabilized gold nanoparticles were confined in the dispersed water droplets. Each type of nanoparticle can modulate the surfactant film and the inter-droplet interactions in diverse ways, and their combination causes synergistic effects. Interfacial assemblies of nanoparticles resulted after phase separation. On the one hand, starting from a biphasic Winsor type II system at low surfactant concentration, drop-casting of the upper phase afforded thin films of ordered nanoparticles in filament-like networks. Detailed characterization proved that this templated assembly over a surface is based on the controlled clustering of nanoparticles and the elongation of the microemulsion droplets. This process offers the versatility to use different nanoparticle compositions, retaining the surface functionalization, in different solvents and over different surfaces. On the other hand, a magnetic heterocoagulate was formed at higher surfactant concentration, whose phase transfer from oleic acid to water was possible with another auxiliary surfactant in an ethanol-water mixture. When the original components were initially mixed under heating, defined oil-in-water, magnetic-responsive nanostructures were obtained, consisting of water-dispersible nanoparticle domains embedded in a matrix shell of oil-dispersible nanoparticles.
Herein, two different approaches were demonstrated to form diverse hybrid nanostructures from reverse microemulsions as self-organized dispersions of the same components. This shows that microemulsions are versatile soft templates not only for the synthesis of nanoparticles but also for their self-assembly, which suggests new routes toward the larger-scale production of sophisticated nanomaterials.
Digital and societal developments demand continuous further training for sales employees. In this professional field, however, several myths about the training of sales staff still persist. Partly for this reason, training needs in sales have been strongly neglected in the past. This thesis therefore first addresses the question of how sales staff in Germany are currently trained (taking the COVID-19 pandemic into account) and whether these training habits might provide initial indications for gaining a strategic competitive advantage.
The thesis takes up the idea that investments in the training of sales employees could be an investment in a company's competitiveness. Automated training, for example based on virtual reality (VR) and artificial intelligence (AI), could make an efficient contribution to securing a strategic competitive advantage in sales education and further training. Through further research questions, the thesis then addresses how automated sales training with AI and VR content must be designed, with user involvement, to train sales employees in a selected negotiation context. To this end, an application based on virtual reality and artificial intelligence is developed, tested, and evaluated in a negotiation dialogue.
This thesis provides a basis for the automation of sales training and, in a broader sense, for training in general.
Life on Earth is diverse and ranges from unicellular organisms to multicellular creatures like humans. Although there are theories about how these organisms might have evolved, we understand little about how ‘life’ started from molecules. Bottom-up synthetic biology aims to create minimal cells by combining different modules, such as compartmentalization, growth, division, and cellular communication.
All living cells have a membrane that separates them from the surrounding aqueous medium and helps to protect them. In addition, all eukaryotic cells have organelles that are enclosed by intracellular membranes. Each cellular membrane is primarily made of a lipid bilayer with membrane proteins. Lipids are amphiphilic molecules that assemble into molecular bilayers consisting of two leaflets. The hydrophobic chains of the lipids in the two leaflets face each other, and their hydrophilic headgroups face the aqueous surroundings. Giant unilamellar vesicles (GUVs) are model membrane systems that form large compartments, many micrometers in size, enclosed by a single lipid bilayer. The size of GUVs is comparable to the size of cells, making them good membrane models that can be studied using an optical microscope. However, after the initial preparation, GUV membranes lack membrane proteins, which have to be reconstituted into these membranes in subsequent preparation steps. Depending on the protein, it can be either attached via anchor lipids to one of the membrane leaflets or inserted into the lipid bilayer via its transmembrane domains.
The first step is to prepare the GUVs and then expose them to an exterior solution containing proteins. Various protocols have been developed for the initial preparation of GUVs. For the second step, the GUVs can be exposed to a bulk solution of protein or can be trapped in a microfluidic device and then supplied with the protein solution. To minimize the amount of solution and for more precise measurements, I have designed a microfluidic device that has a main channel and several dead-end side channels perpendicular to the main channel. The GUVs are trapped in the dead-end channels. This design exchanges the solution around the GUVs via diffusion from the main channel, thus shielding the GUVs from the flow within the main channel. The device has a small volume of just 2.5 μL, can be used without a pump, and can be combined with a confocal microscope, enabling uninterrupted imaging of the GUVs during the experiments. I used this device for most of the experiments on GUVs discussed in this thesis.
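As a rough plausibility check of such diffusion-based exchange, the characteristic time for a solute to equilibrate along a dead-end channel of length L can be estimated as t ≈ L²/(2D). The sketch below uses an assumed channel length and a typical diffusion coefficient for a GFP-sized protein; neither value is taken from the thesis.

    # Minimal sketch: 1D diffusive exchange time in a dead-end side channel.
    # L and D are assumed values for illustration, not device parameters.
    L = 300e-6        # m, assumed dead-end channel length
    D = 8.7e-11       # m^2/s, typical diffusion coefficient of GFP in water
    t = L**2 / (2 * D)
    print(f"characteristic exchange time: {t:.0f} s (~{t / 60:.1f} min)")

For these assumed numbers the exchange takes on the order of ten minutes, which illustrates why shielding the trapped GUVs from advective flow while still allowing solute exchange is practical on experimental timescales.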
In the first project of the thesis, a lipid mixture doped with an anchor lipid was used that can bind a histidine chain (referred to as His-tag(ged) or 6H) via the metal cation Ni2+. This method is widely used for the biofunctionalization of GUVs by attaching proteins without a transmembrane domain. Fluorescently labeled His-tags bound to a membrane can be observed with a confocal microscope. Using the same lipid mixture, I prepared GUVs with different protocols and investigated the membrane composition of the resulting GUVs by evaluating the amount of fluorescently labeled His-tagged molecules bound to their membranes. I used the microfluidic device described above to expose the outer leaflet of the vesicles to a constant concentration of the His-tagged molecules. Two fluorescent molecules with a His-tag were studied and compared: green fluorescent protein (6H-GFP) and fluorescein isothiocyanate (6H-FITC). Although the quantum yield in solution is similar for both molecules, the brightness of membrane-bound 6H-GFP is higher than that of membrane-bound 6H-FITC. The observed difference in brightness reveals that the fluorescence of 6H-FITC is quenched by the anchor lipid via the Ni2+ ion. Furthermore, my measurements showed that the fluorescence intensity of the membrane-bound His-tagged molecules depends on microenvironmental factors such as pH. For both 6H-GFP and 6H-FITC, the interaction with the membrane is quantified by evaluating the equilibrium dissociation constant. The membrane fluorescence is measured as a function of the fluorophores' molar concentration. Theoretical analysis of these data leads to equilibrium dissociation constants of (37.5 ± 7.5) nM for 6H-GFP and (18.5 ± 3.7) nM for 6H-FITC.
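The dissociation constants come from fitting membrane fluorescence versus molar fluorophore concentration. A minimal sketch of such a fit, assuming a Langmuir-type binding isotherm F(c) = F_max · c/(K_d + c); the data points below are synthetic and invented for illustration only:

    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir(c, f_max, k_d):
        """Langmuir binding isotherm: membrane fluorescence vs concentration."""
        return f_max * c / (k_d + c)

    # Synthetic 'measurements' around an assumed K_d of 37.5 nM.
    c_nM = np.array([5, 10, 20, 40, 80, 160, 320], dtype=float)
    rng = np.random.default_rng(1)
    f_obs = langmuir(c_nM, 100.0, 37.5) * (1 + 0.03 * rng.standard_normal(c_nM.size))

    (f_max, k_d), cov = curve_fit(langmuir, c_nM, f_obs, p0=(80.0, 20.0))
    print(f"F_max = {f_max:.1f}, K_d = {k_d:.1f} nM "
          f"(+/- {np.sqrt(cov[1, 1]):.1f} nM)")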
The anchor lipid mentioned previously uses the metal cation Ni2+ to mediate the bond between the anchor lipid and the His-tag. The Ni2+ ion can be replaced by other transition metal ions. Studies have shown that Co3+ forms the strongest bonds with the His-tags attached to proteins. In these studies, strong oxidizing agents were used to oxidize the Co2+-mediated complex with the His-tagged protein to a Co3+-mediated complex. This procedure puts the proteins at risk of being oxidized as well. In this thesis, the vesicles were first prepared with anchor lipids without any metal cation. Co3+ was then added to these anchor lipids, and finally the His-tagged protein was added to the GUVs to form the Co3+-mediated bond. This system was also established using the microfluidic device.
The different preparation procedures for GUVs usually lead to vesicles with a spherical morphology. Many cell organelles, on the other hand, have a more complex architecture with a non-spherical topology. One fascinating example is provided by the endoplasmic reticulum (ER), which consists of a continuous membrane and extends throughout the cell in the form of tubes and sheets. The tubes are connected by three-way junctions and form a tubular network of irregular polygons. The formation and maintenance of these reticular networks require membrane proteins that hydrolyze guanosine triphosphate (GTP). One of these membrane proteins is atlastin. In this thesis, I reconstituted the atlastin protein in GUV membranes using detergent-assisted reconstitution protocols to insert the proteins directly into the lipid bilayers.
This thesis focuses on protein reconstitution by binding His-tagged proteins to anchor lipids and by detergent-assisted insertion of proteins with transmembrane domains. It also provides the design of a microfluidic device that can be used in various experiments; one example is the evaluation of the equilibrium dissociation constant for membrane-protein interactions. The results of this thesis will help other researchers to understand the protocols for preparing GUVs, to reconstitute proteins in GUVs, and to perform experiments using the microfluidic device. This knowledge should be beneficial for the long-term goal of combining the different modules of synthetic biology to make a minimal cell.
This cumulative thesis presents a stepwise investigation of the exposure modelling process for risk assessment due to natural hazards, while highlighting its, to date, little-discussed importance and associated uncertainties. Although "exposure" refers to a very broad concept of everything (and everyone) that is susceptible to damage, in this thesis it is narrowed down to the modelling of large-area residential building stocks. Classical building exposure models for risk applications have been constructed relying fully on unverified expert elicitation applied to data sources (e.g., outdated census datasets), and have hence been implicitly assumed to be static in time and space. Moreover, their spatial representation has typically been simplified by geographically aggregating the inferred composition onto coarse administrative units whose boundaries do not always capture the spatial variability of the hazard intensities required for accurate risk assessments. These two shortcomings and the related epistemic uncertainties embedded within exposure models are tackled in the first three chapters of the thesis. The exposure composition of large-area residential building stocks is first studied within the scope of scenario-based earthquake loss models. Then, optimal spatial aggregation areas of exposure models for various hazard-related vulnerabilities are proposed, focusing on ground-shaking and tsunami risks. Subsequently, building on the experience gained in the study of the composition and spatial aggregation of exposure for various hazards, this thesis moves towards a multi-hazard context, addressing cumulative damage and losses due to consecutive hazard scenarios. This is achieved by proposing a novel method that uses pre-existing damage descriptions of building portfolios as a key input to scenario-based multi-risk assessment. Finally, this thesis shows how the integration of the aforementioned elements can be used in risk communication practices. This is done through a modular architecture based on the exploration of quantitative risk scenarios that are contrasted with the social risk perceptions of the communities directly exposed to natural hazards.
In Chapter 1, a Bayesian approach is proposed to update the prior assumptions on such composition (i.e., proportions per building typology). This is achieved by integrating high-quality real observations, thereby capturing the intrinsically probabilistic nature of the exposure model. Such observations are incorporated as real evidence from both field inspections (Chapter 2) and freely available data sources used to update existing but outdated exposure models (Chapter 3). In these two chapters, earthquake scenarios with parametrised ground motion fields were used throughout to investigate, via sensitivity analyses, the role of the epistemic uncertainties related to the exposure composition; parametrised scenarios of seismic ground shaking served as the hazard input to study the physical vulnerability of building portfolios. The second issue, the spatial aggregation of building exposure models, was investigated within two decoupled vulnerability contexts: seismic ground shaking, through the integration of remote sensing techniques (Chapter 3), and a multi-hazard context that integrates the occurrence of associated tsunamis (Chapter 4). Therein, a careful selection of the spatial aggregation entities, pursuing both computational efficiency and accuracy in the risk estimates due to such independent hazard scenarios (i.e., earthquake and tsunami), is discussed. The physical vulnerability of large-area building portfolios due to tsunamis is thus considered within two main frames, considering and disregarding the interaction at the vulnerability level (through consecutive and decoupled hazard scenarios, respectively), which are then contrasted.
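One natural conjugate formulation of such a Bayesian update treats the typology proportions as a Dirichlet prior updated with multinomial survey counts; the sketch below illustrates this logic only, and the typology names, prior pseudo-counts, and observations are invented rather than taken from the thesis.

    import numpy as np

    # Dirichlet-multinomial update of building typology proportions.
    # All numbers are illustrative placeholders.
    typologies = ["masonry", "reinforced_concrete", "timber", "steel"]
    prior_alpha = np.array([8.0, 6.0, 4.0, 2.0])   # expert-based pseudo-counts
    observed = np.array([12, 30, 5, 3])            # surveyed buildings per typology

    posterior_alpha = prior_alpha + observed
    posterior_mean = posterior_alpha / posterior_alpha.sum()
    for name, pri, post in zip(typologies,
                               prior_alpha / prior_alpha.sum(), posterior_mean):
        print(f"{name:20s} prior={pri:.2f} -> posterior mean={post:.2f}")

    # Posterior draws propagate the exposure uncertainty into loss estimates.
    samples = np.random.default_rng(0).dirichlet(posterior_alpha, size=1000)
    print("posterior std per typology:", samples.std(axis=0).round(3))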
In contrast to Chapter 4, where no cumulative damage is addressed, Chapter 5 integrates data and approaches generated in the former chapters into a novel modular method to study the likely interactions at the vulnerability level on building portfolios. This is tested by evaluating cumulative damage and losses after earthquakes of increasing magnitude followed by their respective tsunamis. The novel method is grounded in the possibility of re-using existing fragility models within a probabilistic framework. The same approach is followed in Chapter 6 to forecast the cumulative damage likely to be experienced by a building stock located in a volcanic multi-hazard setting (ash fall and lahars). In that chapter, special focus is placed on how the forecasted loss metrics are communicated to locally exposed communities. Co-existing quantitative scientific approaches (i.e., comprehensive exposure models; explorative risk scenarios involving single and multiple hazards) and semi-qualitative social risk perception (i.e., the level of understanding that exposed communities have of their own risk) were jointly considered. This integration ultimately allowed the thesis to contribute to enhancing preparedness, science dissemination at the local level, and technology transfer initiatives.
Finally, a synthesis of this thesis, along with some perspectives for improvement and future work, is presented.
Volcanoes are among the Earth's most dynamic zones and are responsible for many changes to our planet. Volcano seismology aims to provide an understanding of the physical processes in volcanic systems and to anticipate the style and timing of eruptions by analyzing seismic records. Volcanic tremor signals are usually observed in the seismic records before or during volcanic eruptions. Their analysis contributes to evaluating evolving volcanic activity and potentially to predicting eruptions. Years of continuous seismic monitoring now provide useful information for operational eruption forecasting. The continuously growing amount of seismic recordings, however, poses a challenge for analysis, information extraction, and interpretation in support of timely decision making during volcanic crises. Furthermore, the complexity of eruption processes and precursory activities makes the analysis challenging.
A challenge in studying seismic signals of volcanic origin is the coexistence of transient signal swarms and long-lasting volcanic tremor signals. Separating transient events from volcanic tremors can therefore contribute to improving our understanding of the underlying physical processes. Similar issues (data reduction, source separation, extraction, and classification) are addressed in the context of music information retrieval (MIR), and the signal characteristics of acoustic and seismic recordings share a number of similarities. This thesis goes beyond the classical signal analysis techniques usually employed in seismology by exploiting these similarities between seismic and acoustic signals and building the information retrieval strategy on expertise developed in the field of MIR.
First, inspired by the idea of harmonic-percussive separation (HPS) in musical signal processing, I have developed a method to extract harmonic volcanic tremor signals and to detect transient events from seismic recordings. This provides a clean tremor signal suitable for tremor investigation, along with a characteristic function suitable for earthquake detection. Second, using HPS algorithms, I have developed a noise reduction technique for seismic signals. This method is especially useful for denoising ocean-bottom seismometer data, which are highly contaminated by noise. The advantage of this method compared with other denoising techniques is that it does not introduce distortion into broadband earthquake waveforms, which makes it reliable for various applications in passive seismological analysis. Third, to address the challenge of extracting information from high-dimensional data and investigating complex eruptive phases, I have developed an advanced machine learning model that results in a comprehensive signal processing scheme for volcanic tremors. Using this method, seismic signatures of major eruptive phases can be detected automatically, which helps to establish a chronology of the volcanic system. This model is also capable of detecting weak precursory volcanic tremors prior to an eruption, which could serve as an indicator of imminent eruptive activity. The extracted patterns of seismicity and their temporal variations finally provide an explanation for the transition mechanism between eruptive phases.
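For readers unfamiliar with HPS, the following sketch shows the generic median-filtering variant of the algorithm applied to a synthetic seismic trace: median smoothing along the time axis of the spectrogram enhances harmonic (tremor-like) energy, smoothing along the frequency axis enhances transient energy, and soft masks split the signal. This is the textbook MIR algorithm, not the thesis implementation; the sampling rate, window length, and filter sizes are assumptions.

    import numpy as np
    from scipy.signal import stft, istft
    from scipy.ndimage import median_filter

    fs = 100.0                                    # Hz, assumed sampling rate
    t = np.arange(0, 600, 1 / fs)
    trace = np.sin(2 * np.pi * 2.0 * t)           # synthetic harmonic tremor
    trace[30000:30050] += 5 * np.random.default_rng(0).standard_normal(50)

    f, tt, Z = stft(trace, fs=fs, nperseg=256)
    S = np.abs(Z)
    harm = median_filter(S, size=(1, 17))         # smooth along time -> harmonic
    perc = median_filter(S, size=(17, 1))         # smooth along freq -> transient
    mask_h = harm**2 / (harm**2 + perc**2 + 1e-12)  # soft (Wiener-like) mask

    _, tremor = istft(Z * mask_h, fs=fs, nperseg=256)
    _, transients = istft(Z * (1 - mask_h), fs=fs, nperseg=256)
    print("transient energy fraction:",
          np.sum(transients**2) / (np.sum(tremor**2) + np.sum(transients**2)))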
Gene expression data is analyzed to identify biomarkers, e.g., relevant genes, which serve for diagnostic, predictive, or prognostic use. Traditional approaches for biomarker detection select distinctive features from the data based exclusively on the signals therein, and face multiple shortcomings with regard to overfitting, biomarker robustness, and actual biological relevance. Prior knowledge approaches are expected to address these issues by incorporating prior biological knowledge, e.g., on gene-disease associations, into the actual analysis. However, prior knowledge approaches are currently not widely applied in practice because they are often use-case specific and seldom applicable in a different scope. This leads to a lack of comparability of prior knowledge approaches, which in turn makes it currently impossible to assess their effectiveness in a broader context.
Our work addresses the aforementioned issues with three contributions. Our first contribution provides formal definitions for both prior knowledge and the flexible integration thereof into the feature selection process. Central to these concepts is the automatic retrieval of prior knowledge from online knowledge bases, which allows for streamlining the retrieval process and agreeing on a uniform definition for prior knowledge. We subsequently describe novel and generalized prior knowledge approaches that are flexible regarding the used prior knowledge and applicable to varying use case domains. Our second contribution is the benchmarking platform Comprior. Comprior applies the aforementioned concepts in practice and allows for flexibly setting up comprehensive benchmarking studies for examining the performance of existing and novel prior knowledge approaches. It streamlines the retrieval of prior knowledge and allows for combining it with prior knowledge approaches. Comprior demonstrates the practical applicability of our concepts and further fosters the overall development and comparability of prior knowledge approaches. Our third contribution is a comprehensive case study on the effectiveness of prior knowledge approaches. For that, we used Comprior and tested a broad range of both traditional and prior knowledge approaches in combination with multiple knowledge bases on data sets from multiple disease domains. Ultimately, our case study constitutes a thorough assessment of a) the suitability of selected knowledge bases for integration, b) the impact of prior knowledge being applied at different integration levels, and c) the improvements in terms of classification performance, biological relevance, and overall robustness.
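One hypothetical, deliberately simple way to picture "integrating prior knowledge into feature selection" is to combine a data-driven statistic with an external association score per gene. The sketch below illustrates the general idea only; it is not Comprior's API nor one of the benchmarked approaches, and all gene names and scores are invented.

    import numpy as np

    def combined_ranking(data_scores, prior_scores, weight=0.5):
        """Rank features by a convex combination of min-max-normalised scores."""
        d = (data_scores - data_scores.min()) / np.ptp(data_scores)
        p = (prior_scores - prior_scores.min()) / np.ptp(prior_scores)
        combined = (1 - weight) * d + weight * p
        return np.argsort(combined)[::-1]          # best features first

    genes = np.array(["TP53", "BRCA1", "GAPDH", "EGFR"])
    t_statistics = np.array([2.1, 3.4, 0.2, 2.8])   # from expression data
    associations = np.array([0.9, 0.7, 0.1, 0.8])   # e.g., from a knowledge base
    print(genes[combined_ranking(t_statistics, associations)])

The `weight` parameter corresponds to the "integration level" question the case study examines: at 0 the ranking is purely data-driven, at 1 it is purely knowledge-driven.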
In summary, our contributions demonstrate that generalized concepts for prior knowledge and a streamlined retrieval process improve the applicability of prior knowledge approaches. Results from our case study show that the integration of prior knowledge positively affects biomarker results, particularly regarding their robustness. Our findings provide the first in-depth insights on the effectiveness of prior knowledge approaches and build a valuable foundation for future research.
The electrical resistivity tomography (ERT) method is widely used to investigate geological, geotechnical, and hydrogeological problems in inland and aquatic environments (i.e., lakes, rivers, and seas). The objective of the ERT method is to obtain reliable resistivity models of the subsurface that can be interpreted in terms of the subsurface structure and petrophysical properties. The reliability of the resulting resistivity models depends not only on the quality of the acquired data, but also on the employed inversion strategy. Inversion of ERT data results in multiple solutions that explain the measured data equally well. Typical inversion approaches rely on different deterministic (local) strategies that consider different smoothing and damping strategies to stabilize the inversion. However, such strategies suffer from the trade-off of smearing possible sharp subsurface interfaces separating layers with resistivity contrasts of up to several orders of magnitude. When prior information (e.g., from outcrops, boreholes, or other geophysical surveys) suggests sharp resistivity variations, it might be advantageous to adapt the parameterization and inversion strategies to obtain more stable and geologically reliable model solutions. Adaptations to traditional local inversions, for example, by using different structural and/or geostatistical constraints, may help to retrieve sharper model solutions. In addition, layer-based model parameterization in combination with local or global inversion approaches can be used to obtain models with sharp boundaries.
In this thesis, I study three typical layered near-surface environments in which prior information is used to adapt 2D inversion strategies to favor layered model solutions. In cooperation with the coauthors of Chapters 2-4, I consider two general strategies. Our first approach uses a layer-based model parameterization and a well-established global inversion strategy to generate ensembles of model solutions and assess uncertainties related to the non-uniqueness of the inverse problem. We apply this method to invert ERT data sets collected in an inland coastal area of northern France (Chapter 2) and offshore of two Arctic regions (Chapter 3). Our second approach consists of using geostatistical regularizations with different correlation lengths. We apply this strategy to a more complex subsurface scenario on a local intermountain alluvial fan in southwestern Germany (Chapter 4). Overall, our inversion approaches allow us to obtain resistivity models that agree with the general geological understanding of the studied field sites. These strategies are rather general and can be applied to various geological environments where a layered subsurface structure is expected. The flexibility of our strategies allows adaptations to invert other kinds of geophysical data sets, such as seismic refraction or electromagnetic induction data, and could be considered for joint inversion approaches.
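The smearing trade-off of smoothness-constrained inversion can be illustrated on a toy linear system: a first-difference regularization with a large weight recovers a blurred version of a sharp two-layer step, while a small weight preserves it. The sketch below is purely illustrative; the sensitivity matrix and noise level are invented, and no real ERT forward operator is involved.

    import numpy as np

    rng = np.random.default_rng(0)
    n_data, n_model = 30, 20
    G = rng.standard_normal((n_data, n_model))            # toy sensitivity matrix
    m_true = np.r_[np.full(10, 1.0), np.full(10, 3.0)]    # sharp two-layer step
    d = G @ m_true + 0.05 * rng.standard_normal(n_data)

    # First-difference operator D: (D m)_i = m_{i+1} - m_i
    D = np.eye(n_model, k=1)[:-1] - np.eye(n_model)[:-1]
    for lam in (0.1, 10.0):
        A = np.vstack([G, lam * D])                       # damped least squares
        b = np.r_[d, np.zeros(n_model - 1)]
        m = np.linalg.lstsq(A, b, rcond=None)[0]
        print(f"lambda={lam:5.1f}  step recovered: {m[9]:.2f} -> {m[10]:.2f}")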
Magmatic-hydrothermal systems form a variety of ore deposits at different proximities to upper-crustal hydrous magma chambers, ranging from greisenization in the roof zone of the intrusion, porphyry mineralization at intermediate depths to epithermal vein deposits near the surface. The physical transport processes and chemical precipitation mechanisms vary between deposit types and are often still debated.
The majority of magmatic-hydrothermal ore deposits are located along the Pacific Ring of Fire, whose eastern part is characterized by the Mesozoic to Cenozoic orogenic belts of western North and South America, namely the American Cordillera. Major magmatic-hydrothermal ore deposits along the American Cordillera include (i) porphyry Cu(-Mo-Au) deposits (along the western cordilleras of Mexico, the western U.S., Canada, Chile, Peru, and Argentina); (ii) Climax- (and sub-)type Mo deposits (Colorado Mineral Belt and northern New Mexico); and (iii) porphyry and IS-type epithermal Sn(-W-Ag) deposits of the Central Andean Tin Belt (Bolivia, Peru, and northern Argentina).
The individual studies presented in this thesis primarily focus on the formation of different styles of mineralization located at different proximities to the intrusion in magmatic-hydrothermal systems along the American Cordillera. These include (i) two geochemical studies on the Sweet Home Mine in the Colorado Mineral Belt (a potential endmember of peripheral Climax-type mineralization); (ii) a numerical modeling study set up in a generic porphyry Cu environment; and (iii) a numerical modeling study on the Central Andean Tin Belt-type Pirquitas Mine in NW Argentina.
Microthermometric data from fluid inclusions trapped in greisen quartz and fluorite from the Sweet Home Mine (Detroit City Portal) suggest that the early-stage mineralization precipitated from low- to medium-salinity (1.5-11.5 wt.% NaCl equiv.), CO2-bearing fluids at temperatures between 360 and 415 °C and at depths of at least 3.5 km. Stable isotope and noble gas isotope data indicate that greisen formation and base metal mineralization at the Sweet Home Mine were related to fluids of different origins. Early magmatic fluids were the principal source of mantle-derived volatiles (CO2, H2S/SO2, noble gases), which subsequently mixed with significant amounts of heated meteoric water. Mixing of magmatic fluids with meteoric water is constrained by the δ2Hw-δ18Ow relationships of fluid inclusions. The deep hydrothermal mineralization at the Sweet Home Mine shows features similar to deep hydrothermal vein mineralization at Climax-type Mo deposits or on their periphery. This suggests that fluid migration and the deposition of ore and gangue minerals at the Sweet Home Mine were triggered by a deep-seated magmatic intrusion.
The second study on the Sweet Home Mine presents Re-Os molybdenite ages of 65.86±0.30 Ma from a Mo-mineralized major normal fault, namely the Contact Structure, and multimineral Rb-Sr isochron ages of 26.26±0.38 Ma and 25.3±3.0 Ma from gangue minerals in greisen assemblages. The age data imply that mineralization at the Sweet Home Mine formed in two separate events: Late Cretaceous (Laramide-related) and Oligocene (Rio Grande Rift-related). Thus, the age of Mo mineralization at the Sweet Home Mine clearly predates that of the Oligocene Climax-type deposits elsewhere in the Colorado Mineral Belt. The Re-Os and Rb-Sr ages also constrain the age of the latest deformation along the Contact Structure to between 62.77±0.50 Ma and 26.26±0.38 Ma; the structure was exploited as a fluid pathway and/or crosscut by Late Cretaceous and Oligocene fluids. Along the Contact Structure, Late Cretaceous molybdenite is spatially associated with Oligocene minerals in the same vein system, a feature that precludes molybdenite recrystallization or reprecipitation by Oligocene ore fluids.
Ore precipitation in porphyry copper systems is generally characterized by metal zoning (Cu-Mo to Zn-Pb-Ag), which is suggested to be variably related to solubility decreases during fluid cooling, fluid-rock interactions, partitioning during fluid phase separation, and mixing with external fluids. The numerical modeling study, set up in a generic porphyry Cu environment, presents new advances in a numerical process model by incorporating published constraints on the temperature- and salinity-dependent solubility of Cu, Pb, and Zn in the ore fluid. This study investigates the roles of vapor-brine separation, halite saturation, initial metal contents, fluid mixing, and remobilization as first-order controls of the physical hydrology on ore formation. The results show that the magmatic vapor and brine phases ascend with different residence times but as miscible fluid mixtures, with salinity increases generating metal-undersaturated bulk fluids. The release rates of magmatic fluids affect the location of the thermohaline fronts, leading to contrasting mechanisms for ore precipitation: higher rates result in halite saturation without significant metal zoning, while lower rates produce zoned ore shells due to mixing with meteoric water. Varying metal contents can affect the order of the final metal precipitation sequence. Redissolution of precipitated metals results in zoned ore shell patterns in more peripheral locations and also decouples halite saturation from ore precipitation.
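The cooling-driven precipitation mechanism can be caricatured with a generic, Arrhenius-like solubility law c_sat(T) = a·exp(-b/T): as a metal-bearing fluid cools along its path, metal drops out wherever its content exceeds saturation. The coefficients and fluid metal content below are hypothetical and serve only to demonstrate the mechanism, not the calibrated solubility models used in the study.

    import numpy as np

    a, b = 1.0e8, 9000.0                  # hypothetical solubility coefficients
    def c_sat(T_kelvin):                  # saturation concentration in ppm
        return a * np.exp(-b / T_kelvin)

    T_path = np.linspace(700, 400, 11) + 273.15   # cooling flow path, degC -> K
    c_fluid = 800.0                               # ppm metal carried by the fluid
    for T in T_path:
        sat = c_sat(T)
        dropped = max(0.0, c_fluid - sat)         # metal precipitated at this step
        c_fluid = min(c_fluid, sat)
        print(f"T={T - 273.15:5.0f} degC  c_sat={sat:8.1f} ppm  "
              f"dropped={dropped:6.1f} ppm")

With these toy numbers the fluid stays undersaturated at high temperature and sheds metal over a narrow temperature window on cooling, which is the basic reason cooling paths produce spatially focused ore shells.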
The epithermal Pirquitas Sn-Ag-Pb-Zn mine in NW Argentina is hosted in a domain of metamorphosed sediments without geological evidence for volcanic activity within a distance of about 10 km from the deposit. However, recent geochemical studies of ore-stage fluid inclusions indicate a significant contribution of magmatic volatiles. This study tested different formation models by applying an existing numerical process model for porphyry-epithermal systems, with a magmatic intrusion located either underneath the nearest active volcano, at a distance of about 10 km, or hidden underneath the deposit. The results show that the migration of the ore fluid over a 10-km distance results in metal precipitation by cooling before the deposit site is reached. In contrast, simulations with a hidden magmatic intrusion beneath the Pirquitas deposit are in line with field observations, which include mineralized hydrothermal breccias in the deposit area.
An overview is given of various spectroscopic techniques for investigating thin organic layers, such as those used in photovoltaics, light-emitting diodes, and organic semiconductors. With simple auxiliary equipment, layer thickness, layer structure, and composition can be investigated. The layer thickness can be monomolecular. Under certain conditions, individual molecules within a layer can be characterized.
This master's thesis uses zoological gardens as an example to raise awareness of the political tension between humans and animals and to make the associated negotiation processes at the individual and societal levels didactically accessible. After a brief conceptual introduction of the terms in the title, four different expressions of ambivalent human-animal relationships are discussed: the historical development and architecture of zoos, as well as their species conservation and educational achievements. This reveals the historically burdened balancing act of zoological gardens, which must credibly reconcile human and animal interests now and in the future. The root of this dilemma is identified as the human ambition to culturally re-create states of nature against the backdrop of a questionable legitimizing narrative.
The author further develops the thesis that the zoo, precisely because of the ambivalences that characterize it, gains in controversy compared with other problem areas of human-animal relationships and is thus predestined to serve as a point of friction for examining contemporary human-animal relationships in civic education. Accordingly, approaches are proposed for exploring and productively discussing the zoo as an extracurricular site of political learning against the backdrop of diverse contested questions.
By putting the value rationality and purposive rationality of zoos to the test, students are encouraged to engage self-critically and socio-critically with the political relationship between animals and humans. The insights and convictions gained from the example of the zoo can be abstracted to the equally pressing and polarizing animal question. The resulting frame of orientation ultimately enables learners to publicly advocate their matured ideas of an appropriate treatment of (non-human) animals.
The amount of data stored in databases and the complexity of database workloads are ever-increasing. Database management systems (DBMSs) offer many configuration options, such as index creation or unique constraints, which must be adapted to the specific instance to efficiently process large volumes of data. Currently, such database optimization is complicated, manual work performed by highly skilled database administrators (DBAs). In cloud scenarios, manual database optimization even becomes infeasible: it exceeds the abilities of the best DBAs due to the enormous number of deployed DBMS instances (some providers maintain millions of instances), missing domain knowledge resulting from data privacy requirements, and the complexity of the configuration tasks.
Therefore, we investigate how to automate the configuration of DBMSs efficiently with the help of unsupervised database optimization. While there are numerous configuration options, in this thesis, we focus on automatic index selection and the use of data dependencies, such as functional dependencies, for query optimization. Both aspects have an extensive performance impact and complement each other by approaching unsupervised database optimization from different perspectives.
Our contributions are as follows: (1) we survey automated state-of-the-art index selection algorithms regarding various criteria, e.g., their support for index interaction. We contribute an extensible platform for evaluating the performance of such algorithms with industry-standard datasets and workloads. The platform is well-received by the community and has led to follow-up research. With our platform, we derive the strengths and weaknesses of the investigated algorithms. We conclude that existing solutions often have scalability issues and cannot quickly determine (near-)optimal solutions for large problem instances. (2) To overcome these limitations, we present two new algorithms. Extend determines (near-)optimal solutions with an iterative heuristic. It identifies the best index configurations for the evaluated benchmarks. Its selection runtimes are up to 10 times lower compared with other near-optimal approaches. SWIRL is based on reinforcement learning and delivers solutions instantly. These solutions perform within 3 % of the optimal ones. Extend and SWIRL are available as open-source implementations.
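To convey the flavor of an iterative selection heuristic, the sketch below greedily picks the index candidate with the best cost reduction per megabyte until the memory budget is exhausted. It deliberately ignores index interaction for brevity and uses invented cost numbers; it is not the published Extend algorithm.

    # Hypothetical greedy index selection. candidates maps an index name to
    # (size in MB, estimated workload cost if that index is added).
    def greedy_index_selection(candidates, workload_cost, budget_mb):
        chosen, current_cost = [], workload_cost
        remaining = dict(candidates)
        while remaining:
            # benefit per MB for every still-affordable, still-helpful candidate
            scored = {name: (current_cost - cost) / size
                      for name, (size, cost) in remaining.items()
                      if size <= budget_mb and cost < current_cost}
            if not scored:
                break
            best = max(scored, key=scored.get)
            size, cost = remaining.pop(best)
            chosen.append(best)
            budget_mb -= size
            current_cost = cost
        return chosen, current_cost

    candidates = {
        "idx_orders_date":     (120.0, 900.0),
        "idx_lineitem_part":   (400.0, 700.0),
        "idx_customer_nation": (60.0, 940.0),
    }
    print(greedy_index_selection(candidates, workload_cost=1000.0, budget_mb=500.0))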
(3) Our index selection efforts are complemented by a mechanism that analyzes workloads to determine data dependencies for query optimization in an unsupervised fashion. We describe and classify 58 query optimization techniques based on functional, order, and inclusion dependencies as well as on unique column combinations. The unsupervised mechanism and three optimization techniques are implemented in our open-source research DBMS Hyrise. Our approach reduces the Join Order Benchmark’s runtime by 26 % and accelerates some TPC-DS queries by up to 58 times.
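A functional dependency X -> Y holds on a relation if no two tuples agree on X but differ on Y; this is the basic validation check that any unsupervised dependency discovery builds on. A minimal sketch with an invented table (illustrating the general check, not Hyrise code):

    def fd_holds(rows, x, y):
        """Return True if the functional dependency x -> y holds on rows."""
        seen = {}
        for row in rows:
            key = tuple(row[c] for c in x)
            val = tuple(row[c] for c in y)
            if seen.setdefault(key, val) != val:
                return False     # two rows agree on x but differ on y
        return True

    orders = [
        {"order_id": 1, "customer": "ACME", "nation": "DE"},
        {"order_id": 2, "customer": "ACME", "nation": "DE"},
        {"order_id": 3, "customer": "Umbrella", "nation": "US"},
    ]
    print(fd_holds(orders, x=("customer",), y=("nation",)))    # True
    print(fd_holds(orders, x=("nation",), y=("order_id",)))    # False

Once such a dependency is validated, an optimizer can, for example, prune a grouping column that is functionally determined by another, which is the kind of rewrite the 58 catalogued techniques formalize.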
Additionally, we have developed a cockpit for unsupervised database optimization that allows interactive experiments to build confidence in such automated techniques. In summary, our contributions improve the performance of DBMSs, support DBAs in their work, and enable them to contribute their time to other, less arduous tasks.
To ensure high-quality evidence-based research in the field of exercise science, it is often necessary for various institutions to collaborate over long distances and internationally. Here, and not only with regard to the recent COVID-19 pandemic, digital means provide new options for remote scientific exchange. This thesis analyses and tests digital opportunities to support the dissemination of knowledge and the instruction of investigators in defined examination protocols in an international multi-center context.
The project consisted of three studies. The first study, a questionnaire-based survey, aimed at learning about students' opinions and preferences regarding digital learning and social media at sport science faculties of two universities each in Germany, the UK, and Italy. Based on these findings, in a second study, an examination video of an ultrasound determination of the intima-media thickness and diameter of an artery was distributed via a messenger app to doctors and nursing personnel as simulated investigators, and the efficacy of the test setting was analysed. Finally, a third study integrated an augmented reality device for direct remote supervision of the same ultrasound examinations in a long-distance international setting with experts from the fields of engineering and sports science, and later for remote supervision of physicians equipped with augmented reality performing a given task.
The first study, with 229 participating students, revealed a high preference for YouTube for receiving video-based knowledge, as well as a preference for WhatsApp and Facebook for peer-to-peer contacts for learning purposes and for exchanging and discussing knowledge. In the second study, video-based instructions sent via WhatsApp messenger met with high approval of the setup in both study groups, one comprising doctors familiar with ultrasound technology and one comprising nursing staff unfamiliar with the device, with similar results in the overall time of performance and the measurements of the femoral arteries. In the third and final study, experts from different continents were connected remotely to the examination site via an augmented reality device with good transmission quality. The remote supervision of the doctors' examinations produced a good interrater correlation. Experiences with the augmented reality-based setting were rated highly positively by the participants. Potential benefits of this technique were seen in the fields of education, movement analysis, and supervision.
In conclusion, the findings of this thesis suggest modern, addressee-centred digital solutions to enhance potential investigators' understanding of given examination techniques in exercise science research projects. Head-mounted augmented reality devices are of special value and may be recommended for collaborative research projects with physical-examination-based research questions. While the established setting should be further investigated in prospective clinical studies, the digital competencies of future researchers should already be enhanced during the early stages of their education.
With the FREI DAY, a new format has been developed that is intended to align school learning with the global sustainability goals and to foster future-relevant competencies in children and adolescents. Whether it can be successfully implemented in the education system will depend in particular on teachers' readiness for change. To support them in the implementation, it is necessary to capture their individual perspective in the implementation process. The present study examines how the teaching staff of a Berlin primary school engages with the FREI DAY. For this purpose, the Stages of Concern framework of Hall and Hord (2015) is employed, and an interview is conducted with the teacher who coordinates the implementation of the learning format at the school. Her answers are evaluated through a qualitative analysis of the interview transcript and interpreted against the background of the research question. It emerges that the staff can be divided into three groups with regard to their engagement with the FREI DAY. While the first group was enthusiastic from the outset and ready to introduce the learning format at the school, the second was initially unsure whether it felt up to the implementation. Finally, there was a group of teachers who had no interest in engaging more deeply with the concept of the FREI DAY and accordingly did not participate in the implementation. The results of the study indicate that transfer support, especially from the school management, is necessary if the learning format is to be anchored in our education system in the long term. Given the exploratory character of the study, however, further research in this regard is required.
With the advent of intense, short-pulsed light sources over the last years, the powerful technique of resonant inelastic X-ray scattering (RIXS) has become feasible for a wide range of experiments on femtosecond dynamics in correlated materials and molecules.
In this thesis I investigate the potential of bringing RIXS into the fluence regime of nonlinear X-ray-matter interactions, focusing especially on the impact of stimulated scattering on RIXS in transition metal systems, in a transmission spectroscopy geometry around the transition metal L-edges.
After presenting the RIXS toolbox and the capabilities of free-electron laser light sources for ultrafast intense X-ray experiments, the thesis explores an experiment designed to understand the impact of stimulated scattering on diffraction and direct-beam transmission spectroscopy in a CoPd multilayer system. The experiments require short X-ray pulses that can only be generated at free-electron lasers (FELs). Here the pulses are not only short but also very intense, which opens the door to nonlinear X-ray-matter interactions. In the second part of this thesis, we investigate observations in the nonlinear interaction regime, examine potential difficulties for classic spectroscopy, and explore possibilities to enhance RIXS through stimulated scattering. A study on stimulated RIXS is presented in which we investigate the light-field-intensity-dependent CoPd demagnetization in transmission as well as scattering geometry. We thereby show the first direct observation of stimulated RIXS as well as light-field-induced nonlinear effects, namely the breakdown of the scattering intensity and the increase in sample transmittance. The topic is of ongoing interest and will only increase in relevance as more free-electron lasers are planned and the number of experiments at such light sources continues to grow.
Finally, we present a discussion of the accessibility of small DOS shifts in the absorption band of transition metal complexes through stimulated resonant X-ray scattering. As such shifts occur, for example, in surface states, this finding could extend the experimental selectivity of NEXAFS and RIXS to the detection of surface states. In this theoretical study, we show how stimulation can indeed enhance the visibility of DOS shifts through the detection of stimulated spectral shifts and enhancements. We also forecast the observation of stimulated enhancements in resonant excitation experiments at FEL sources in systems with a high density of states just below the Fermi edge and in systems with an occupied-to-unoccupied DOS ratio in the valence band above 1.
"Aggredior ad ipsum crimen magiae." Mit diesen Worten leitet Apuleius die Widerlegung der gegen ihn gerichteten Anklage ein: Er soll die reiche Witwe Pudentilla durch Liebeszauber zu einer Heirat mit ihm veranlasst haben. Dagegen setzt er sich in seiner Verteidigungsrede "De magia" zur Wehr. Die vorliegende Arbeit soll diese spannende Rede Lateinschülern der gymnasialen Oberstufe durch ein Lektüreheft bekannt machen. Letzteres ist mit kompetenzorientierten Aufgaben und einem Erwartungshorizont versehen. Es wird ferner von der wissenschaftlichen Auseinandersetzung mit den in der Rede behandelten Themen flankiert. Das Heft soll den Schülern einerseits die argumentative Strategie der Rede und andererseits das antike Alltagsphänomen 'Magie' näherbringen. Dabei tauchen sie ein in die antiken Vorstellungen von Zauberei und versuchen zugleich die einstigen Vorwürfe der Ankläger zu rekonstruieren, die etwa die Suche nach bestimmten Fischarten, die Behandlung von Epilepsie oder gewisse nächtliche Rituale betreffen. Darüber hinaus fragen sie nach der Unterscheidung magischer von religiösen Praktiken und stellen dabei Bezüge zu ihrer eigenen Lebenswelt her.
Background: The concept of self-compassion (SC), a special way of being compassionate with oneself while dealing with stressful life circumstances, has attracted increasing attention in research over the past two decades. Research has already shown that SC has beneficial effects on affective well-being and other mental health outcomes. However, little is known about the ways in which SC might facilitate our affective well-being in stressful situations. Hence, a central concern of this dissertation was the question of which underlying processes might influence the link between SC and affective well-being. Two established components of stress processing that might play an important role in this context are the amount of experienced stress and the way of coping with a stressor. Thus, using a multi-method approach, this dissertation aimed at determining to what extent SC might help alleviate experienced stress and promote the use of more salutary coping while dealing with stressful circumstances; these processes might ultimately help improve one's affective well-being. Derived from this, it was hypothesized that more SC is linked to less perceived stress and an intensified use of salutary coping responses. Additionally, it was suggested that perceived stress and coping mediate the relation between SC and affective well-being.
Method: The research questions were targeted in three single studies and one meta-study. To test my assumptions about the relations of SC and coping in particular, a systematic literature search was conducted, resulting in k = 136 samples with an overall sample size of N = 38,913. To integrate the z-transformed Pearson correlation coefficients, random-effects models were calculated. All hypotheses were tested with a three-wave cross-lagged design in two short-term longitudinal online studies assessing SC, perceived stress, and coping responses in all waves. The first study explored the assumptions in a student sample (N = 684) with a mean age of 27.91 years over a six-week period, whereas the second implemented the measurements in the GESIS Panel (N = 2,934; mean age 52.76 years), analyzing the hypotheses in a population-based sample across eight weeks. Finally, an ambulatory assessment study was designed to extend the findings of the longitudinal studies to the intraindividual level. A sample of 213 participants completed questionnaires on momentary SC, perceived stress, engagement and disengagement coping, and affective well-being on their smartphones three times per day over seven consecutive days. The data were processed using 1-1-1 multilevel mediation analyses.
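The meta-analytic integration step can be sketched as follows: correlations are Fisher z-transformed and pooled with random-effects weights, here using the DerSimonian-Laird heterogeneity estimator as one common choice (the specific estimator is an assumption, and the correlations and sample sizes below are invented, not the k = 136 samples of the meta-study).

    import numpy as np

    r = np.array([0.35, 0.42, 0.28, 0.50, 0.31])   # per-sample correlations
    n = np.array([120, 300, 85, 210, 150])         # per-sample sizes

    z = np.arctanh(r)                  # Fisher z-transform
    v = 1.0 / (n - 3)                  # within-study variance of z
    w = 1.0 / v
    z_fixed = np.sum(w * z) / np.sum(w)
    Q = np.sum(w * (z - z_fixed) ** 2)             # heterogeneity statistic
    df = len(r) - 1
    tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

    w_star = 1.0 / (v + tau2)          # random-effects weights
    z_re = np.sum(w_star * z) / np.sum(w_star)
    print(f"pooled r = {np.tanh(z_re):.3f}, tau^2 = {tau2:.4f}")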
Results: Results of the meta-analysis indicated that higher SC is significantly associated with more use of engagement coping and less use of disengagement coping. Considering the relations between SC and stress-processing variables in all three single studies, cross-lagged paths from the longitudinal data as well as multilevel modeling paths from the ambulatory assessment data indicated notable relations between all relevant stress variables. As expected, results showed a significant negative relation of SC with perceived stress and disengagement coping, as well as a positive connection with engagement coping responses at the dispositional and intra-individual levels. However, regarding the mediation hypothesis, the most promising pathway in the link between SC and affective well-being turned out to be perceived stress in all three studies, while the effects of the mediational pathways through coping responses were less robust.
Conclusion: A more self-compassionate attitude, and higher momentary SC when needed in specific situations, can thus help to engage in effective stress processing. Considering the underlying mechanisms in the link between SC and affective well-being, stress perception in particular seemed to be the most promising candidate for enhancing affective well-being at both the dispositional and the intraindividual level. Future research should explore the pathways between SC and affective well-being in specific contexts and samples, and also take into account additional influential factors.
Predator-forager interactions are a major factor in the evolutionary adaptation of many species: predators need to gain energy by consuming prey, and foragers need to avoid mortality while still consuming resources for energetic gains. In this evolutionary arms race, foragers have constantly evolved anti-predator behaviours (e.g., changes in foraging activity). To describe these complex changes, researchers developed the framework of the landscape of fear, that is, the spatio-temporal variation of perceived predation risk. This concept condenses the involved ecological processes into one framework by integrating animal biology and distribution with habitat characteristics. Researchers can then evaluate the perception of predation risk in prey species and their behavioural responses, and thereby understand the cascading effects of landscapes of fear at the resource level (tri-trophic effects). Although tri-trophic effects are well studied at the predator-prey interaction level, little is known about how forager-resource interactions contribute to the overall cascading effects of landscapes of fear, even though the changes in forager feeding behaviour that occur with perceived predation risk directly affect resource levels.
This thesis aimed to evaluate the cascading effects of the landscape of fear on the biodiversity of resources, and how the feeding behaviour and movement of foragers shape the final resource species composition (potential coexistence mechanisms). We studied the changes caused by landscapes of fear in wild and captive rodent communities and evaluated: the cascading effects of different landscapes of fear on a tri-trophic system (I), the effects of fear on a forager's movement patterns and dietary preferences (II), and the cascading effects of different types of predation risk (terrestrial versus avian; III).
In Chapter I, we applied a novel measure to evaluate the cascading effects of fear at the level of resources by quantifying the diversity of resources left after the foragers gave up on foraging (diversity at the giving-up density). We tested the measure at different spatial levels (local and regional) and observed that with decreased perceived predation risk, the density and biodiversity of resources also decreased. Foragers left very dissimilar communities of resources depending on perceived risk and resource functional traits, and therefore acted as an equalising mechanism.
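The idea of "diversity at the giving-up density" can be made concrete with a standard diversity index computed over the resources remaining when the forager quits a patch; the Shannon index used here and the seed counts are illustrative assumptions, not the thesis data.

    import numpy as np

    def shannon(counts):
        """Shannon diversity H of a vector of resource counts."""
        p = np.asarray(counts, dtype=float)
        p = p[p > 0] / p.sum()
        return -np.sum(p * np.log(p))

    initial = [50, 50, 50, 50]        # four resource species offered
    left_safe = [4, 30, 12, 20]       # remaining after foraging, safe patch
    left_risky = [35, 48, 40, 45]     # remaining under high perceived risk
    print(f"offered: H = {shannon(initial):.2f}")
    print(f"safe:    H = {shannon(left_safe):.2f}")
    print(f"risky:   H = {shannon(left_risky):.2f}")

In this toy example, selective depletion in the safe patch lowers both the density and the evenness of what is left behind, which is the pattern the measure is designed to capture.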
In Chapter II, we sought to understand further the decision-making processes of rodents in different landscapes of fear, namely which resource species rodents decided to forage on (based on three functional traits: size, nutrients, and shape) and how they moved depending on perceived predation risk. In safe landscapes, individuals increased their feeding activity and movements and, despite the increased costs, more often visited patches that were further away from their central place. Despite a preference for bigger resources regardless of risk, when perceived predation risk was low, individuals shifted their preference to fat-rich resources.
In Chapter III, we evaluated the cascading effects of two different types of predation risk on rodents: terrestrial (raccoon) versus avian. Raccoon presence or absence did not alter the rodents' feeding behaviour in different landscapes of fear. Rodents showed risk-avoidance behaviours towards avian predators (spatial risk avoidance) but not towards raccoons (lack of temporal risk avoidance).
By analysing the effects of fear in tri-trophic systems, we were able to deepen the knowledge of how non-consumptive effects of predators affect the behaviour of foragers, and to quantitatively measure the cascading effects at the level of resources with a novel measure. Foragers are at the core of the ecological processes and responses to the landscape of fear, acting as variable coexistence agents for resource species depending on perceived predation risk. These newly developed measures and insights can be applied to further trophic chains and inform researchers about biodiversity patterns originating from landscapes of fear.
In X-ray computed tomography (XCT), an X-ray beam of intensity I0 is transmitted through an object and its attenuated intensity I is measured when it exits the object. The attenuation of the beam depends on the attenuation coefficients along its path. The attenuation coefficients provide information about the structure and composition of the object and can be determined through mathematical operations that are referred to as reconstruction.
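For reference, the attenuation relation underlying XCT is the Beer-Lambert law: the measured intensity ratio yields the line integral of the attenuation coefficient μ along each ray path, and reconstruction inverts the resulting system of line integrals.

    I = I_0 \exp\left(-\int_{\text{path}} \mu(s)\, \mathrm{d}s\right),
    \qquad
    p = \ln\frac{I_0}{I} = \int_{\text{path}} \mu(s)\, \mathrm{d}s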
The standard reconstruction algorithms are based on the filtered backprojection (FBP) of the measured data. While these algorithms are fast and relatively simple, they do not always succeed in computing a precise reconstruction, especially from under-sampled data.
Alternatively, an image or volume can be reconstructed by solving a system of linear equations. Typically, the system of equations is too large to be solved directly, but its solution can be approximated by iterative methods, such as the Simultaneous Iterative Reconstruction Technique (SIRT) and Conjugate Gradient Least Squares (CGLS).
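For concreteness, a common form of the SIRT update for the discretized system A x = b (A: projection matrix, b: log-attenuation measurements) is x_{k+1} = x_k + C Aᵀ R (b - A x_k), with R and C the inverse row and column sums of A. A minimal numpy sketch on an invented dense toy system follows; real XCT uses sparse projectors, not dense random matrices.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.random((40, 25))          # toy nonnegative projection geometry
    x_true = rng.random(25)
    b = A @ x_true                    # consistent synthetic measurements

    R = 1.0 / A.sum(axis=1)           # inverse row sums
    C = 1.0 / A.sum(axis=0)           # inverse column sums
    x = np.zeros(25)
    for k in range(200):              # SIRT iterations
        x += C * (A.T @ (R * (b - A @ x)))
    print("relative error:",
          np.linalg.norm(x - x_true) / np.linalg.norm(x_true))

The error shrinks steadily with the iteration count, illustrating the slow-but-stable convergence behavior that motivates alternative schemes such as DIRECTT.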
This dissertation focuses on the development of a novel iterative algorithm, the Direct Iterative Reconstruction of Computed Tomography Trajectories (DIRECTT). After its reconstruction principle is explained, its performance is assessed for real parallel- and cone-beam CT (including under-sampled) data and compared to that of other established algorithms. Finally, it is demonstrated how the shape of the measured object can be modelled into DIRECTT to achieve even better reconstruction results.
Increasing demand for food, healthcare, and transportation arising from the growing world population is accompanied by, and drives, the challenge of global warming due to the rise of the atmospheric CO2 concentration. Industrialization for human needs has been releasing ever more CO2 into the atmosphere for over a century. In recent years, the possibility of recycling CO2 to stabilize the atmospheric CO2 concentration and combat rising temperatures has gained attention. Using CO2 as a feedstock to address future world demands could thus help meet those demands while containing rapid climate change. Valorizing CO2 into activated and stable one-carbon feedstocks like formate and methanol, and feeding these into industrial microbial processes to replace unsustainable feedstocks, would be crucial for a future biobased circular economy. However, not all microbes can grow on formate as a feedstock, and those that can are not well established for industrial processes.
S. cerevisiae is one of the best-established industrial microbes and a significant contributor to the bioprocess industries. However, it cannot grow on formate as a sole carbon and energy source. Thus, engineering S. cerevisiae to grow on formate could pave the way to sustainable production of biomass and value-added chemicals.
The Reductive Glycine Pathway (RGP), designed as the aerobic twin of the anaerobic reductive acetyl-CoA pathway, is an efficient formate and CO2 assimilation pathway. The RGP comprises the glycine synthesis module (Mis1p, Gcv1p, Gcv2p, Gcv3p, and Lpd1p), the glycine-to-serine conversion module (Shmtp), the pyruvate synthesis module (Cha1p), and the energy supply module (Fdh1p). The RGP requires formate and elevated CO2 levels to operate the glycine synthesis module. In this study, I established the RGP in the yeast system using growth-coupled selection strategies to achieve formate- and CO2-dependent biomass formation under aerobic conditions.
Firstly, I constructed serine biosensor strains by disrupting the native serine and glycine biosynthesis routes in the prototrophic S288c and FL100 yeast strains, thereby insulating serine, glycine, and one-carbon metabolism from the central metabolic network. These strains cannot grow on glucose as the sole carbon source but require a supply of serine or glycine to complement the engineered auxotrophies. Using growth as a readout, I employed these strains as selection hosts to establish the RGP. Initially, I engineered different serine hydroxymethyltransferases into the genome of the serine biosensor strains for efficient glycine-to-serine conversion. Then, I implemented the glycine synthesis module of the RGP in these strains for glycine and serine synthesis from formate and CO2. I successfully conducted Adaptive Laboratory Evolution (ALE) using these strains, which yielded a strain capable of glycine and serine biosynthesis from formate and CO2. Significant growth improvements, from 0.0041 h-1 to 0.03695 h-1, were observed during ALE. To validate glycine and serine synthesis, I conducted carbon tracing experiments with 13C-formate and 13CO2, confirming that more than 90% of glycine and serine biosynthesis in the evolved strains occurs via the RGP. Interestingly, the labeling data also revealed that 10-15% of alanine was labelled, indicating pyruvate synthesis from formate-derived serine via native serine deaminase (Cha1p) activity. Thus, the RGP contributes a small pyruvate pool that is converted to alanine without any selection pressure for pyruvate synthesis from formate. Hence, these data confirm the activity of all three modules of the RGP even in the presence of glucose. Furthermore, ALE under glucose-limiting conditions did not improve pyruvate flux via the RGP.
Growth characterization of these strains showed that the best growth rates were achieved at formate concentrations between 25 mM and 300 mM. Optimum growth required 5% CO2, and growth dropped when the CO2 concentration was reduced from 5% to 2.5%.
Whole-genome sequencing of the evolved strains revealed mutations in genes encoding Gdh1p, Pet9p, and Idh1p. These enzymes might influence intracellular NADPH, ATP, and NADH levels, indicating adjustment to meet the energy demand of the RGP. I reverse-engineered the GDH1 truncation mutation in unevolved serine biosensor strains and reproduced formate-dependent growth. To elucidate the effect of the GDH1 mutation on formate assimilation, I reintroduced this mutation in the S288c strain and conducted carbon-tracing experiments to compare formate assimilation between the WT and ∆gdh1 mutant strains; enhanced formate assimilation was recorded in the ∆gdh1 mutant strain.
Although the 13C carbon tracing experiments confirmed the activity of all three modules of the RGP, the overall pyruvate flux via the RGP might be limited by the supply of reducing power. Hence, in a different approach, I overexpressed formate dehydrogenase (Fdh1p) for energy supply and serine deaminase (Cha1p) for active pyruvate synthesis in the S288c parental strain and established growth on formate and serine without glucose in the medium. Further re-engineering and evolution of this strain, with a consistent supply of energy and formate-derived serine for pyruvate synthesis, is essential to achieve complete formatotrophic growth in the yeast system.
Rainfall-triggered landslides are a globally occurring hazard that cause several thousand fatalities per year on average and lead to economic damages by destroying buildings and infrastructure and blocking transportation networks. For people living and governing in susceptible areas, knowing not only where, but also when landslides are most probable is key to inform strategies to reduce risk, requiring reliable assessments of weather-related landslide hazard and adequate warning. Taking proper action during high hazard periods, such as moving to higher levels of houses, closing roads and rail networks, and evacuating neighborhoods, can save lives. Nevertheless, many regions of the world with high landslide risk currently lack dedicated, operational landslide early warning systems.
The mounting availability of temporal landslide inventory data in some regions has increasingly enabled data-driven approaches to estimate landslide hazard on the basis of rainfall conditions. In other areas, however, such data remains scarce, calling for appropriate statistical methods to estimate hazard with limited data. The overarching motivation for this dissertation is to further our ability to predict rainfall-triggered landslides in time in order to expand and improve warning. To this end, I applied Bayesian inference to probabilistically quantify and predict landslide activity as a function of rainfall conditions at spatial scales ranging from a small coastal town, to metropolitan areas worldwide, to a multi-state region, and temporal scales from hourly to seasonal. This thesis is composed of three studies.
In the first study, I contributed to developing and validating statistical models for an online landslide warning dashboard for the small town of Sitka, Alaska, USA. We used logistic and Poisson regressions to estimate daily landslide probability and counts from an inventory of only five reported landslide events and 18 years of hourly precipitation measurements at the Sitka airport. Drawing on community input, we established two warning thresholds for implementation in the dashboard, which uses observed rainfall and US National Weather Service forecasts to provide real-time estimates of landslide hazard.
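As a sketch of how such a regression turns rainfall measurements into a daily probability, consider this minimal Python example; the two rainfall features and the toy data are hypothetical and do not reproduce the dashboard's actual predictor set:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical features per day: short-term and multi-day precipitation (mm).
    X = np.array([[2.0, 10.0], [35.0, 80.0], [5.0, 20.0], [50.0, 120.0]])
    y = np.array([0, 1, 0, 1])  # 1 = landslide reported on that day

    model = LogisticRegression().fit(X, y)
    # Daily landslide probability for a hypothetical rainfall forecast:
    p = model.predict_proba([[30.0, 90.0]])[0, 1]
    print(f"estimated daily landslide probability: {p:.2f}")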
In the second study, I estimated rainfall intensity-duration thresholds for shallow landsliding for 26 cities worldwide and a global threshold for urban landslides. I found that landslides in urban areas occurred at rainfall intensities that were lower than previously reported global thresholds, and that 31% of urban landslides were triggered during moderate rainfall events. However, landslides in cities with widely varying climates and topographies were triggered above similar critical rainfall intensities: thresholds for 77% of cities were indistinguishable from the global threshold, suggesting that urbanization may harmonize thresholds between cities, overprinting natural variability. I provide a baseline threshold that could be considered for warning in cities with limited landslide inventory data.
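Intensity-duration thresholds of this kind are conventionally expressed as a power law; a generic form (the fitted coefficients for the urban and global thresholds are reported in the study itself):

    I = \alpha \, D^{-\beta}

where I is the mean rainfall intensity (e.g., mm/h), D is the rainfall duration (h), and α and β are fitted constants.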
In the third study, I investigated seasonal landslide response to annual precipitation patterns in the Pacific Northwest region, USA by using Bayesian multi-level models to combine data from five heterogeneous landslide inventories that cover different areas and time periods. I quantitatively confirmed a distinctly seasonal pattern of landsliding and found that peak landslide activity lags the annual precipitation peak. In February, at the height of the landslide season, landslide intensity for a given amount of monthly rainfall is up to ten times higher than at the season onset in November, underlining the importance of antecedent seasonal hillslope conditions.
Together, these studies contributed actionable, objective information for landslide early warning and examples for the application of Bayesian methods to probabilistically quantify landslide hazard from inventory and rainfall data.
In my view, two aspects are essential for successful communication and cooperation between therapists and parents: the therapeutic attitude and inner stance towards shaping the relationship on the therapists' side, and their knowledge and skills for involving parents quite pragmatically in the therapeutic process.
In this bachelor's thesis I implement the automatic theorem prover nanoCoP-Ω. This system is the result of porting the arithmetic and equality handling procedures first introduced in the arithmetic-capable automatic theorem prover leanCoP-Ω into the similar system nanoCoP 2.0. To motivate these procedures, I first introduce the mathematical background of both automatic theorem proving and arithmetic expressions. I present the predecessor projects leanCoP, nanoCoP, and leanCoP-Ω, out of which nanoCoP-Ω was developed. This is followed by an extensive description of the concepts by which the non-clausal connection calculus had to be extended to allow proving arithmetic expressions and equalities, as well as of their implementation in nanoCoP-Ω. An extensive comparison of both the runtimes and the number of solved problems of nanoCoP-Ω and leanCoP-Ω was made. I conclude that nanoCoP-Ω is considerably faster than leanCoP-Ω for small problems, though less well suited for larger problems. Additionally, I was able to construct a non-theorem for which nanoCoP-Ω generates a false proof. I discuss how this pressing issue could be resolved, as well as some possible optimizations and extensions of the system.
Solar photocatalysis is one of the leading research concepts in the current paradigm of a sustainable chemical industry. For the practical implementation of sunlight-driven catalytic processes in organic synthesis, a cheap, efficient, versatile, and robust heterogeneous catalyst is necessary. Carbon nitrides are a class of organic semiconductors that are known to fulfill these requirements.
First, the current state of solar photocatalysis in the economy, in industry, and in laboratory research is reviewed, outlining EU project funding, prospective synthetic and reforming bulk processes, small-scale solar organic chemistry, and existing reactor designs and prototypes, and concluding that the approach is feasible.
Then, the photocatalytic aerobic cleavage of oximes to corresponding aldehydes and ketones by anionic poly(heptazine imide) carbon nitride is discussed. The reaction provides a feasible method of deprotection and formation of carbonyl compounds from nitrosation products and serves as a convenient model to study chromoselectivity and photophysics of energy transfer in heterogeneous photocatalysis.
Afterwards, the ability of mesoporous graphitic carbon nitride to conduct proton-coupled electron transfer was utilized for the direct oxygenation of 1,3-oxazolidin-2-ones to the corresponding 1,3-oxazolidine-2,4-diones. This reaction provides easier access to a key scaffold of diverse types of drugs and agrochemicals.
Finally, a series of novel carbon nitrides based on poly(triazine imide) and poly(heptazine imide) structure was synthesized from cyanamide and potassium rhodizonate. These catalysts demonstrated a good performance in a set of photocatalytic benchmark reactions, including aerobic oxidation, dual nickel photoredox catalysis, hydrogen peroxide evolution and chromoselective transformation of organosulfur precursors.
In conclusion, the scope of carbon nitride utilization for net-oxidative and net-neutral photocatalytic processes was expanded, and a new tunable platform for catalyst synthesis was discovered.
Modern datasets often comprise diverse, feature-rich, unstructured data, and they are massive in size. This is the case for social networks, the human genome, and e-commerce databases. As Artificial Intelligence (AI) systems are increasingly used to detect patterns in data and predict future outcomes, there are growing concerns about their ability to process large amounts of data. Motivated by these concerns, we study the problem of designing AI systems that scale to very large and heterogeneous datasets.
Many AI systems must solve combinatorial optimization problems in the course of their operation. These optimization problems are typically NP-hard, and they may exhibit additional side constraints. However, the underlying objective functions often exhibit additional properties that can be exploited to design suitable optimization algorithms. One of these properties is the well-studied notion of submodularity, which captures diminishing returns. Submodularity is often found in real-world applications, and many relevant applications exhibit generalizations of this property.
In this thesis, we propose new scalable optimization algorithms for combinatorial problems with diminishing returns. Specifically, we focus on three problems: the Maximum Entropy Sampling problem, Video Summarization, and Feature Selection. For each problem, we propose new algorithms that work at scale. These algorithms are based on a variety of techniques, such as forward step-wise selection and adaptive sampling. Our proposed algorithms yield strong approximation guarantees, and they perform well experimentally.
We first study the Maximum Entropy Sampling problem, which consists of selecting, from a larger set, a subset of random variables that maximizes entropy. Using diminishing-returns properties, we develop a simple forward step-wise selection algorithm for this problem. Then, we study the problem of selecting a subset of frames that represents a given video. Again, this corresponds to a submodular maximization problem. We provide a new adaptive sampling algorithm for this problem, suitable for handling the complex side constraints imposed by the application. We conclude by studying Feature Selection, where the underlying objective functions generalize the notion of submodularity. We provide a new adaptive sequencing algorithm for this problem, based on the Orthogonal Matching Pursuit paradigm.
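A minimal sketch of forward step-wise (greedy) selection for a monotone submodular objective, in Python; the toy coverage function stands in for the entropy objective, and all names are illustrative:

    def greedy_select(ground_set, f, k):
        """Repeatedly add the element with the largest marginal gain
        f(S + e) - f(S), up to cardinality k."""
        S = set()
        for _ in range(k):
            best = max((e for e in ground_set if e not in S),
                       key=lambda e: f(S | {e}) - f(S))
            S.add(best)
        return S

    # Toy submodular objective: coverage of a universe by named sets.
    sets = {"a": {1, 2}, "b": {2, 3, 4}, "c": {4, 5}}
    f = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
    print(greedy_select(sets.keys(), f, 2))  # e.g. {'b', 'a'}

For monotone submodular objectives under a cardinality constraint, this greedy scheme is the classical (1 - 1/e)-approximation.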
Overall, we study practically relevant combinatorial problems and propose new algorithms to solve them. We demonstrate that these algorithms are suitable for handling massive datasets. However, our analysis is not problem-specific, and our results can be applied to other domains where diminishing-returns properties hold. We hope that the flexibility of our framework inspires further research into scalability in AI.
Both horizontal-to-vertical (H/V) spectral ratios and the spatial autocorrelation method (SPAC) have proven to be valuable tools for gaining insight into local site effects from ambient noise measurements. Here, the two methods are employed to assess the subsurface velocity structure of the Piano delle Concazze area on Mt Etna. Volcanic tremor records from an array of 26 broadband seismometers are processed, and a strong variability of H/V ratios during periods of increased volcanic activity is found. From the spatial distribution of H/V peak frequencies, a geologic structure in the north-east of Piano delle Concazze is imaged and interpreted as the Ellittico caldera rim. The method is extended to include both velocity data from the broadband stations and distributed acoustic sensing data from a co-located 1.5 km long fibre-optic cable. High maximum amplitude values of the resulting ratios along the trajectory of the cable coincide with known faults; the outcome also indicates previously unmapped parts of a fault. The geologic interpretation is in good agreement with inversion results from magnetic survey data. Using the neighborhood algorithm, spatial autocorrelation curves obtained from the modified SPAC are inverted, alone and jointly with the H/V peak frequencies, for 1D shear-wave velocity profiles. The obtained models are largely consistent with published models and validate the results from the fibre-optic cable.
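A minimal sketch of the H/V computation for a three-component record, in Python; the raw-spectrum treatment (no smoothing or windowing) is a simplification relative to the processing used in the study:

    import numpy as np

    def hv_ratio(n, e, z, fs):
        """H/V spectral ratio of a three-component record.
        n, e, z: north, east, vertical traces; fs: sampling rate (Hz)."""
        freqs = np.fft.rfftfreq(len(z), d=1.0 / fs)
        N, E, Z = (np.abs(np.fft.rfft(x)) for x in (n, e, z))
        H = np.sqrt((N**2 + E**2) / 2.0)  # quadratic mean of the horizontals
        return freqs, H / np.maximum(Z, 1e-12)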
Successful sentence comprehension requires the comprehender to correctly figure out who did what to whom. For example, in the sentence John kicked the ball, the comprehender has to figure out who did the action of kicking and what was being kicked. This process of identifying and connecting the syntactically related words in a sentence is called dependency completion. What are the cognitive constraints that determine dependency completion? A widely accepted theory is cue-based retrieval, which maintains that dependency completion is driven by a content-addressable search for the co-dependents in memory. Cue-based retrieval explains a wide range of empirical data from several constructions, including subject-verb agreement, subject-verb non-agreement, plausibility mismatch configurations, and negative polarity items.
However, there are two major empirical challenges to the theory: (i) data from grammatical sentences with subject-verb number agreement dependencies, where the theory predicts a slowdown at the verb in sentences like the key to the cabinet was rusty compared to the key to the cabinets was rusty, but the data are inconsistent with this prediction; and (ii) data from antecedent-reflexive dependencies, where a facilitation in reading times is predicted at the reflexive in the bodybuilder who worked with the trainers injured themselves vs. the bodybuilder who worked with the trainer injured themselves, but the data do not show a facilitatory effect.
The work presented in this dissertation is dedicated to building a more general theory of dependency completion that can account for the above two datasets without losing the original empirical coverage of the cue-based retrieval assumption. In two journal articles, I present computational modeling work that addresses the above two empirical challenges.
To explain the grammatical sentences' data from subject-verb number agreement dependencies, I propose a new model that assumes that cue-based retrieval operates on a probabilistically distorted representation of nouns in memory (Article I). This hybrid distortion-plus-retrieval model was compared against the existing candidate models using data from 17 studies on subject-verb number agreement in 4 languages. I find that the hybrid model outperforms the existing models of number agreement processing, suggesting that the cue-based retrieval theory must incorporate a feature distortion assumption.
To account for the absence of a facilitatory effect in antecedent-reflexive dependencies, I propose an individual-difference model built within the cue-based retrieval framework (Article II). The model assumes that individuals may differ in how strongly they weigh a syntactic cue over a number cue. The model was fitted to data from two studies on antecedent-reflexive dependencies, and the participant-level cue weighting was estimated. We find that one-fourth of the participants in both studies weigh the syntactic cue higher than the number cue in processing reflexive dependencies, while the remaining participants weigh the two cues equally. The result indicates that the absence of the predicted facilitatory effect at the level of grouped data is driven by some, not all, participants, namely those who weigh the syntactic cue higher than the number cue. More generally, the result demonstrates that the assumption of differential cue weighting is important for a theory of dependency completion processes. This differential cue-weighting idea was independently supported by a modeling study on subject-verb non-agreement dependencies (Article III).
Overall, the cue-based retrieval, which is a general theory of dependency completion, needs to incorporate two new assumptions: (i) the nouns stored in memory can undergo probabilistic feature distortion, and (ii) the linguistic cues used for retrieval can be weighted differentially. This is the cumulative result of the modeling work presented in this dissertation.
The dissertation makes an important theoretical contribution: Sentence comprehension in humans is driven by a mechanism that assumes cue-based retrieval, probabilistic feature distortion, and differential cue weighting. This insight is theoretically important because there is some independent support for these three assumptions in sentence processing and the broader memory literature. The modeling work presented here is also methodologically important because for the first time, it demonstrates (i) how the complex models of sentence processing can be evaluated using data from multiple studies simultaneously, without oversimplifying the models, and (ii) how the inferences drawn from the individual-level behavior can be used in theory development.
Alexander von Humboldt maintained a hitherto completely unknown correspondence with Ludwig August von Buch, a nephew of the famous geologist Leopold von Buch. The three letters, written in French and preserved in Rome, are published here for the first time. They report, in particular, interesting details of Humboldt's journey to Russia and contain a letter of recommendation for the officer and historian Leopold von Orlich.
Aptamers are single-stranded DNA (ssDNA) or RNA molecules that can bind specifically and with high affinity to target molecules due to their unique three-dimensional structure. For this reason, they are often compared to antibodies and sometimes even referred to as "chemical antibodies". They are simple and inexpensive to synthesize, easy to modify, and smaller than conventional antibodies. Enzymes, especially hydrolases, are interesting targets in this context. This class of enzymes is capable of hydrolytically cleaving various macromolecules such as proteins, as well as smaller molecules such as antibiotics. Hence, they play an important role in many biological processes, including diseases and their treatment. The detection of hydrolases, as well as the understanding of their function, is therefore of great importance for diagnostics and therapy. Due to their various desirable features compared to antibodies, aptamers are being discussed as alternative agents for analytical and diagnostic use in various applications. The use of aptamers in therapy is also frequently investigated, as the binding of aptamers can affect catalytic activity, protein-protein interactions, or proteolytic cascades. Aptamers are generated by an in vitro selection process: potential aptamer candidates are selected from a pool of enriched nucleic acid sequences with affinity to the target, and their binding affinity and specificity are investigated. This is one of the most important steps in aptamer generation, needed to obtain specific, high-affinity aptamers for analytical and diagnostic applications. The binding properties or binding domains and their effects on enzyme functions form the basis for therapeutic applications.
In this work, the binding properties of DNA aptamers against two different hydrolases were investigated. In view of their potential utility for analytical methods, aptamers against human urokinase (uPA) and New Delhi metallo-β-lactamase-1 (NDM-1) were evaluated for their binding affinity and specificity using different methods. Using the uPA aptamers, a protocol for measuring the binding kinetics of an aptamer-protein interaction by surface plasmon resonance spectroscopy (SPR) was developed. Based on the increased expression of uPA in different types of cancer, uPA is discussed as a prognostic and diagnostic tumor marker. As the uPA aptamers showed different binding sites on the protein, microtiter plate-based aptamer sandwich assay systems for the detection of uPA were developed. Because of the function of urokinase in cancer cell proliferation and metastasis, uPA is also discussed as a therapeutic target. In this regard, the different binding sites of the aptamers showed different effects on uPA function: in vitro experiments demonstrated both inhibition of uPA binding to its receptor and inhibition of uPA catalytic activity for different aptamers. Thus, in addition to their specificity and affinity for their targets, the utility of the aptamers for potential diagnostic and therapeutic applications was demonstrated: first, as alternative inhibitors of human urokinase for therapeutic purposes, and second, as valuable recognition molecules for the detection of urokinase, a prognostic and diagnostic marker for cancer, and of NDM-1, to detect resistance to carbapenem antibiotics.
Volcanic hazard assessment relies on physics-based models of hazards, such as lava flows and pyroclastic density currents, whose outcomes are very sensitive to the location where future eruptions will occur. By contrast, forecasts of vent opening locations in volcanic areas typically rely on purely data-driven approaches, where the spatial density of past eruptive vents informs the probability maps of future vent opening. Such techniques may be suboptimal in volcanic systems with missing or scarce data, and where the controls on magma pathways may change over time. An alternative approach was recently proposed, relying on a model of stress-driven pathways of magmatic dikes. In that approach, the crustal stress was optimized so that dike trajectories consistently linked the location of the magma chamber to that of past vents. The retrieved information on the stress state was then used to forecast future dike trajectories. Validating such an approach requires extensive application to nature. Before doing so, however, several important limitations need to be removed, most importantly the two-dimensional (2D) character of the models and theoretical concepts. In this thesis, I develop methods and tools so that a physics-based strategy of stress inversion and eruptive vent forecasting in volcanoes can be applied to three-dimensional (3D) problems. In the first part, I test the stress inversion and vent forecast strategy on analog models, still within a 2D framework, but improving on the efficiency of the stress optimization. In the second part, I discuss how to correctly account for gravitational loading/unloading due to complex 3D topography with a Boundary-Element numerical model. Then, I develop a new, simplified but fast model of dike pathways in 3D, designed for running large numbers of simulations at minimal computational cost and able to backtrack dike trajectories from vents on the surface. Finally, I combine the stress and dike models to simulate dike pathways in synthetic calderas. In the third part, I describe a framework for the stress inversion and vent forecast strategy in 3D for calderas. The stress inversion relies on, first, describing the magma storage below a caldera in terms of a probability density function. Next, dike trajectories are backtracked from the known locations of past vents down through the crust, and the optimization algorithm seeks the stress models that lead trajectories through the regions of highest probability. I apply the new strategy to the synthetic scenarios presented in the second part, and I exploit the results from the stress inversions to produce probability maps of future vent locations for some of those scenarios. In the fourth part, I present the inversion of different deformation source models applied to the ongoing ground deformation observed across the Rhenish Massif in Central Europe. The region includes the Eifel Volcanic Fields in Germany, a potential application case for the vent forecast strategy. The results show how the observed deformation may be due to melt accumulation in sub-horizontal structures in the lower crust or upper mantle. The thesis concludes with a discussion of the stress inversion and vent forecast strategy, its limitations, and its applicability to real volcanoes. Potential developments of the modeling tools and concepts presented here are also discussed, as well as possible applications to other geophysical problems.
With age, an increase in low-grade inflammatory processes can be observed, which are assumed to "fuel" the typical age-related loss of muscle mass, strength, and function. These processes, termed inflammaging, can be attributed to a complex interplay of dysfunctional (visceral) adipose tissue, dysbiosis with the accompanying microbial translocation and reduced immune defence, and an overall increasing immunosenescence. In sum, a pro-inflammatory milieu favours metabolic disorders and chronic, age-associated diseases, which in turn maintain or advance the inflammatory process. Besides a fundamental lack of physical activity, a Western, industrialized diet also contributes to inflammation and to the development of chronic diseases. It therefore stands to reason to counteract inflammation with sufficient exercise and an anti-inflammatory diet. In this respect, omega-3 fatty acids (omega-3) in particular are associated with anti-inflammatory properties. Although the association between the dietary inflammatory potential, or the intake of omega-3, and the inflammatory profile has already been investigated, studies are lacking, particularly in older adults, that link the inflammatory potential of the diet to sarcopenia-relevant muscle parameters.
Owing to the increased protein requirement for maintaining functional muscle in old age, numerous exercise and nutrition interventions have already been conducted that show an improvement in muscle status with the help of structured resistance training and a protein-rich diet. There are also indications that omega-3 may enhance protein synthesis. It is unclear, however, to what extent an anti-inflammatory diet focusing on omega-3 can favourably support both the inflammatory processes and muscle protein metabolism and neuromuscular function in old age. This applies especially to muscle power, which is closely linked to the propensity to fall and to autonomy in everyday life, but has so far received little attention in intervention studies with older adults. Moreover, progressive training elements are frequently used that often find little continuation in participants' everyday lives after the study ends and are thus not sustainable. The aim of this work was therefore to evaluate the effects of a protein-rich diet additionally supplemented with omega-3, combined with weekly vibration training and an age-appropriate exercise programme, on inflammation and neuromuscular function in older, independently living adults.
To this end, possible associations between the dietary inflammatory potential, determined using the Dietary Inflammatory Index, and muscle status as well as the inflammatory profile in old age were first explored. The baseline values of older, independently living adults from a postprandial intervention study (POST study) were analysed cross-sectionally. The results confirmed that a pro-inflammatory diet is, on the one hand, reflected in greater inflammation and, on the other hand, unfavourably associated with sarcopenia-relevant parameters such as lower muscle mass and gait speed. These associations were also found for handgrip strength in the inactive older adults of the study.
Subsequently, an exploratory pilot intervention study (AIDA study) with a three-arm design investigated to what extent omega-3 supplementation, given an optimized protein intake and an age-appropriate exercise intervention with vibration training, affects neuromuscular function and inflammation in independently living older adults. After eight weeks of intervention, a protein-rich diet supplemented with omega-3 increased muscle power, particularly in the older men. While the control group did not improve after eight weeks of the exercise intervention, an improvement in leg strength and in the time of the chair-rise test was additionally confirmed for the older adults on a protein-rich diet combined with the exercise intervention.
Furthermore, it became apparent that the additional omega-3 supplementation led to a reduction of pro-inflammatory cytokines in serum, particularly in the men. However, these observations were not reflected at the level of gene expression in mononuclear immune cells or in the LPS-induced secretion of cytokines and chemokines in whole-blood cell cultures. This requires further investigation.
Housing in metabolic cages can induce a pronounced stress response. Metabolic cage systems involve housing mice on metal wire mesh for the collection of urine and feces in addition to monitoring food and water intake. Moreover, the mice are single-housed, and no nesting, bedding, or enrichment material is provided, which is often argued to have a non-negligible impact on animal welfare due to cold stress. We therefore attempted to reduce stress during metabolic cage housing for mice by comparing an innovative metabolic cage (IMC) with a commercially available metabolic cage from Tecniplast GmbH (TMC) and a control cage. Substantial refinement measures were incorporated into the IMC cage design. In the frame of a multifactorial approach to severity assessment, parameters such as body weight, body composition, food intake, cage and body surface temperature (thermal imaging), mRNA expression of uncoupling protein 1 (Ucp1) in brown adipose tissue (BAT), fur score, and fecal corticosterone metabolites (CMs) were included. Female and male C57BL/6J mice were single-housed for 24 h in either conventional Macrolon cages (control), the IMC, or the TMC for two sessions. Body weight decreased less in the IMC (females—1st restraint: 6.94%; 2nd restraint: 6.89%; males—1st restraint: 8.08%; 2nd restraint: 5.82%) than in the TMC (females—1st restraint: 13.2%; 2nd restraint: 15.0%; males—1st restraint: 13.1%; 2nd restraint: 14.9%), and the IMC had a higher cage temperature (females—1st restraint: 23.7 °C; 2nd restraint: 23.5 °C; males—1st restraint: 23.3 °C; 2nd restraint: 23.5 °C) than the TMC (females—1st restraint: 22.4 °C; 2nd restraint: 22.5 °C; males—1st restraint: 22.6 °C; 2nd restraint: 22.4 °C). The concentration of fecal corticosterone metabolites in the TMC (females—1st restraint: 1376 ng/g dry weight (DW); 2nd restraint: 2098 ng/g DW; males—1st restraint: 1030 ng/g DW; 2nd restraint: 1163 ng/g DW) was higher than under control cage housing (females—1st restraint: 640 ng/g DW; 2nd restraint: 941 ng/g DW; males—1st restraint: 504 ng/g DW; 2nd restraint: 537 ng/g DW). Our results show the stress potential induced by metabolic cage restraint, which is markedly influenced by the lower housing temperature. The IMC represents a first attempt to reduce cold stress during metabolic cage application, thereby producing data that are more compatible with animal welfare.
Satisfaction and frustration of the needs for autonomy, competence, and relatedness, as assessed with the 24-item Basic Psychological Need Satisfaction and Frustration Scale (BPNSFS), have been found to be crucial indicators of individuals' psychological health. To increase the usability of this scale within a clinical and health services research context, we aimed to validate a German short version (12 items) of this scale in individuals with depression, including an examination of the relations of need frustration and need satisfaction to ill-being and quality of life (QOL). This cross-sectional study involved 344 adults diagnosed with depression (Mage (SD) = 47.5 years (11.1); 71.8% female). Confirmatory factor analyses indicated that the short version of the BPNSFS was not only reliable but also fitted a six-factor structure (i.e., satisfaction/frustration × type of need). Subsequent structural equation modeling showed that need frustration related positively to indicators of ill-being and negatively to QOL. Surprisingly, need satisfaction did not predict differences in ill-being or QOL. The short form of the BPNSFS represents a practical instrument for measuring need satisfaction and frustration in people with depression. Furthermore, the results support recent evidence on the particular importance of need frustration in the prediction of psychopathology.
This thesis discusses heat and charge transport phenomena in single-crystalline silicon penetrated by nanometer-sized pores, known as mesoporous silicon (pSi). Despite the extensive attention pSi has received as a thermoelectric material, studies of its microscopic thermal and electronic transport beyond macroscopic characterizations are rarely reported. In contrast, this work reports the interplay of both.
pSi samples synthesized by electrochemical anodization display a temperature dependence of the specific heat C_p that deviates from the characteristic T^3 behaviour at T < 50 K. A thorough analysis reveals that both 3D and 2D Einstein and Debye modes contribute to this specific heat. Additional 2D Einstein modes (~3 meV) agree reasonably well with the boson peak of SiO2 in the pSi pore walls. 2D Debye modes are proposed to account for surface acoustic modes causing the significant deviation from the well-known T^3 dependence of C_p at T < 50 K.
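For reference, the standard bulk forms against which these deviations are measured are the low-temperature Debye law and the Einstein-mode contribution:

    C_{\mathrm{Debye}} \propto T^{3} \quad (T \ll \theta_D), \qquad
    C_{\mathrm{Einstein}} = 3 N k_B \left(\frac{\theta_E}{T}\right)^{2}
    \frac{e^{\theta_E/T}}{\left(e^{\theta_E/T} - 1\right)^{2}}

where θ_D and θ_E are the Debye and Einstein temperatures; the 2D analogues invoked in the analysis modify the power of T accordingly.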
A novel theoretical model gives insights into the thermal conductivity of pSi in terms of porosity and phonon scattering on the nanoscale. The thermal conductivity analysis utilizes the peculiarities of the pSi phonon dispersion probed by inelastic neutron scattering experiments. A phonon mean free path of around 10 nm extracted from the presented model is proposed to cause the two-orders-of-magnitude reduction of the thermal conductivity of pSi compared to p-doped bulk silicon. Detailed analysis indicates that compound averaging may cause a further 10-50% reduction. The percolation threshold of 65% for the thermal conductivity of the pSi samples is subsequently determined by employing theoretical effective-medium models.
Temperature-dependent electrical conductivity measurements reveal a thermally activated transport process. A detailed analysis of the activation energy E_A,σ of the thermally activated transport exhibits a Meyer-Neldel compensation rule between different samples, which originates in multi-phonon absorption upon carrier transport. Activation energies E_A,S obtained from temperature-dependent thermopower measurements provide further evidence for multi-phonon-assisted hopping between localized states as the dominant charge transport mechanism in pSi, as they systematically differ from the determined E_A,σ values.
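The Meyer-Neldel compensation rule referred to here takes the standard form

    \sigma(T) = \sigma_0 \exp\left(-\frac{E_{A,\sigma}}{k_B T}\right), \qquad
    \sigma_0 \propto \exp\left(\frac{E_{A,\sigma}}{E_{MN}}\right)

i.e., the conductivity prefactor grows exponentially with the activation energy, with a common Meyer-Neldel energy E_MN across samples.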
Essays in public economics
(2023)
This cumulative dissertation uses economic theory and micro-econometric tools and evaluation methods to analyse public policies and their impact on welfare and individual behaviour. In particular, it focuses on policies in two distinct areas that represent fundamental societal challenges in the 21st century: the ageing of society and life in densely populated urban agglomerations. Together, these areas shape important financial decisions in a person's life, impact welfare, and are driving forces behind many of the challenges facing today's societies. The five self-contained research chapters of this thesis analyse the forward-looking effects of pension reforms, affordable housing policies, and a public transport subsidy and its effect on air pollution.
Alexander von Humboldt used his American travel journals throughout his entire life, annotating them, taking them apart, and passing parts of them on to other researchers. In his last decade, he had them bound into the nine leather volumes that are preserved today in the Staatsbibliothek zu Berlin – Preußischer Kulturbesitz. One of the guiding theses of previous research has been that their original order was lost in the process, or that they were rebound in great disorder. This contribution shows that this thesis does not hold. Rather, the journal volumes may be assumed to have largely retained their original state, the state documented in the alphabetical register of his manuscripts (Index général) prepared in Berlin in 1805. In addition to the index itself, the materiality of the volumes was analysed, in particular jumps in pagination and cut edges. The existing foliation was also critically examined.
The manuscript department of the Staatsbibliothek zu Berlin – Preußischer Kulturbesitz holds the manuscript of a review of Humboldt's Kosmos (volume 1) that the Berlin pedagogue Karl Friedrich von Klöden wrote for the "Vossische Zeitung" shortly after the book appeared. The manuscript and the printed text are reproduced in the following article. Their comparison offers the rare opportunity to trace the genesis of a document that can serve as an example of the reception of Humboldt in the 19th century.
Das Ganze erfassen
(2023)
The historian of science Eberhard Knobloch has been studying the life and work of Alexander von Humboldt for around twenty years. He shows that Humboldt's theory of science was inspired by the Pythagorean school's view of nature, while his scientific method followed the model of Laplace's celestial mechanics. From these sources, Humboldt developed a model of knowledge founded on numerical ratios and mean values, which became pioneering for the data-based life and earth sciences. Humboldt visualized the mutual interconnectedness of the various natural phenomena in his 'Tableau physique des Andes'. In several essays, Eberhard Knobloch vividly deciphered this complex view of the whole of nature.
This contribution presents the epistemic changes that led from the voyage of discovery to the research expedition, in light of the disputes that became famous as the "Berlin debate on the New World". Around the turn of the 19th century, Alexander von Humboldt noticed and described a fundamental epochal threshold, which he called a "happy revolution" in the preface to his Vues des Cordillères et Monumens des Peuples Indigènes de l'Amérique. For Humboldt, this revolution also included understanding the Enlightenment not as a purely European, but as a transatlantic and worldwide philosophical movement.
Control over spin and electronic structure of MoS₂ monolayer via interactions with substrates
(2023)
The molybdenum disulfide (MoS2) monolayer is a semiconductor with a direct bandgap, while also being a robust and affordable material.
It is a candidate for applications in optoelectronics and field-effect transistors.
MoS2 features a strong spin-orbit coupling, which makes its spin structure promising for realizing the Kane-Mele topological concept, with corresponding applications in spintronics and valleytronics.
From the optical point of view, the MoS2 monolayer features two valleys in the regions of K and K' points. These valleys are differentiated by opposite spins and a related valley-selective circular dichroism.
In this study we aim to manipulate the MoS2 monolayer spin structure in the vicinity of the K and K' points to explore the possibility of gaining control over the optical and electronic properties.
We focus on two different substrates to demonstrate two distinct routes: a gold substrate to introduce a Rashba effect and a graphene/cobalt substrate to introduce a magnetic proximity effect in MoS2.
The Rashba effect is proportional to the out-of-plane projection of the electric field gradient. Such a strong change of the electric field occurs at the surfaces of high-atomic-number materials and acts on conduction electrons effectively like an in-plane magnetic field. Molybdenum and sulfur are relatively light atoms; thus, similar to many other 2D materials, the intrinsic Rashba effect in the MoS2 monolayer is vanishingly small. However, proximity to a high-atomic-number substrate may enhance the Rashba effect in a 2D material, as was previously demonstrated for graphene.
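The Rashba term referred to here has the standard form for a 2D system,

    H_R = \alpha_R \, (\boldsymbol{\sigma} \times \mathbf{k}) \cdot \hat{z}
        = \alpha_R \, (\sigma_x k_y - \sigma_y k_x)

where the Rashba parameter α_R is set by the out-of-plane electric field gradient, σ are the Pauli matrices, and k is the in-plane electron momentum.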
Another way to modify the spin structure is to apply an external magnetic field of high magnitude (several tesla) and cause a Zeeman splitting of the conduction electrons.
However, a similar effect can be reached via magnetic proximity, which allows external magnetic fields to be reduced significantly or even to zero. The graphene-on-cobalt interface is ferromagnetic and stable during MoS2 monolayer synthesis. Cobalt is not the strongest magnet; stronger magnets may therefore lead to more significant results.
Nowadays most experimental studies on the dichalcogenides (MoS2 included) are performed on encapsulated heterostructures that are produced by mechanical exfoliation.
While mechanical exfoliation (the scotch-tape method) allows a huge variety of structures to be produced, the shape and size of the samples, as well as the distance between layers in heterostructures, are impossible to control reproducibly.
In our study we used molecular beam epitaxy (MBE) methods to synthesise both MoS2/Au(111) and MoS2/graphene/Co systems.
We chose MBE because it is a scalable and reproducible approach that industry may later adopt.
We used graphene/cobalt instead of a bare cobalt substrate because direct contact between an MoS2 monolayer and a metallic substrate may lead to photoluminescence (PL) quenching by the metallic substrate. Graphene and hexagonal boron nitride monolayers are considered building blocks of a new generation of electronics and are also commonly used as encapsulating materials for PL studies. Moreover, graphene has proven to be a suitable substrate for the MBE growth of transition metal dichalcogenides (TMDCs).
In chapter 1, we start with an introduction to TMDCs. Then we focus on the state of the art of MoS2 monolayer research in the fields of application scenarios; synthesis approaches; electronic, spin, and optical properties; and interactions with magnetic fields and magnetic materials.
We briefly touch on the basics of magnetism in solids and move on to discuss various magnetic exchange interactions and the magnetic proximity effect.
Then we describe MoS2 optical properties in more detail. We start from basic exciton physics and its manifestation in the MoS2 monolayer. We consider optical selection rules in the MoS2 monolayer and such properties as chirality, spin-valley locking, and coexistence of bright and dark excitons.
Chapter 2 contains an overview of the employed surface science methods: angle-integrated, angle-resolved, and spin-resolved photoemission; low energy electron diffraction and scanning tunneling microscopy.
In chapter 3, we describe MoS2 monolayer synthesis details for two substrates: gold monocrystal with (111) surface and graphene on cobalt thin film with Co(111) surface orientation.
The synthesis descriptions are followed by a detailed characterisation of the obtained structures: fingerprints of MoS2 monolayer formation; MoS2 monolayer symmetry and its relation to the substrate below; and characterisation of MoS2 monolayer coverage, domain distribution, sizes and shapes, and moiré structures.
In chapter 4, we start our discussion with the MoS2/Au(111) electronic and spin structure. Combining density functional theory (DFT) computations and spin-resolved photoemission studies, we demonstrate that the MoS2 monolayer band structure features an in-plane Rashba spin splitting. This confirms the possibility of manipulating the MoS2 monolayer spin structure via a substrate.
Then we investigate the influence of a magnetic proximity in the MoS2/graphene/Co system on the MoS2 monolayer spin structure.
We focus our investigation on the MoS2 high-symmetry points Γ and K.
First, using spin-resolved measurements, we confirm that electronic states are spin-split at the Γ point via the magnetic proximity effect. Second, combining spin-resolved measurements and DFT computations for the MoS2 monolayer in the K-point region, we demonstrate the appearance of a small in-plane spin polarisation at the valence band top and predict a full in-plane spin polarisation for the conduction band bottom.
We then discuss how these findings relate to the MoS2 monolayer optical properties, in particular the possibility of observing dark excitons. Additionally, we speculate on the control of the MoS2 valley energy via magnetic proximity from cobalt.
As graphene is spatially buffering the MoS2 monolayer from the Co thin film, we speculate on the role of graphene in the magnetic proximity transfer by replacing graphene with vacuum and other 2D materials in our computations.
We finish our discussion by investigating the K-doped MoS2/graphene/Co system and the influence of this doping on the electronic and spin structure as well as on the magnetic proximity effect.
In summary, using a scalable MBE approach we synthesised the MoS2/Au(111) and MoS2/graphene/Co systems. We found a Rashba effect in MoS2/Au(111), which proves that the MoS2 monolayer in-plane spin structure can be modified. In MoS2/graphene/Co, an in-plane magnetic proximity effect indeed takes place, which raises the possibility of fine-tuning the MoS2 optical properties via manipulation of the substrate magnetisation.
During the Cenozoic, global cooling and uplift of the Tian Shan, Pamir, and Tibetan plateau modified atmospheric circulation and reduced moisture supply to Central Asia. These changes led to aridification in the region during the Neogene. Afterwards, Quaternary glaciations led to modification of the landscape and runoff.
In the Issyk-Kul basin of the Kyrgyz Tian Shan, the sedimentary sequences reflect the development of the adjacent ranges and local climatic conditions. In this work, I reconstruct the late Miocene – early Pleistocene depositional environment, climate, and lake development in the Issyk-Kul basin using facies analyses and stable δ18O and δ13C isotopic records from sedimentary sections dated by magnetostratigraphy and 26Al/10Be isochron burial dating. Also, I present 10Be-derived millennial-scale modern and paleo-denudation rates from across the Kyrgyz Tian Shan and long-term exhumation rates calculated from published thermochronology data. This allows me to examine spatial and temporal changes in surface processes in the Kyrgyz Tian Shan.
In the Issyk-Kul basin, the style of fluvial deposition changed at ca. 7 Ma, and aridification in the basin commenced concurrently, as shown by magnetostratigraphy and the δ18O and δ13C data. Lake formation commenced on the southern side of the basin at ca. 5 Ma, followed by a ca. 2 Ma local depositional hiatus. 26Al/10Be isochron burial dating and paleocurrent analysis show that the Kungey range to the north of the basin grew eastward, leading to a change from fluvial-alluvial deposits to proximal alluvial fan conglomerates at 5-4 Ma in the easternmost part of the basin. This transition occurred at 2.6-2.8 Ma on the southern side of the basin, synchronously with the intensification of the Northern Hemisphere glaciation. The paleo-denudation rates from 2.7-2.0 Ma are as low as long-term exhumation rates, and only the millennial-scale denudation rates record an acceleration of denudation.
This work concludes that the growth of the ranges to the north of the basin created a topographic barrier at ca. 7 Ma and caused the subsequent aridification of the Issyk-Kul basin. Increased subsidence and local, tectonically induced river-system reorganization on the southern side of the basin enabled lake formation at ca. 5 Ma, while growth of the Kungey range blocked westward-draining rivers and led to sediment starvation and lake expansion. The denudational response of the Kyrgyz Tian Shan landscape was delayed by aridity, and only substantial cooling during the late Quaternary glacial cycles led to a notable acceleration of denudation. Currently, increased glacier reduction and runoff control the more rapid denudation of the northern slope of the Terskey range compared to the other ranges of the Kyrgyz Tian Shan.
Interviews with experts conducted by pupils are an established practice in the classroom and in the methods handbooks of general didactics, and especially in civic education. With this method, however, it does not always succeed in engaging the pupils and activating them to the desired degree. The scholarly literature and the common methods handbooks suggest that communication between the teacher and the expert when preparing the interview can be the decisive approach for solving the existing problems. Yet the concrete suggestions for this communication in the literature are vague and/or contradict one another. This thesis therefore investigates how communication between teacher and expert in preparation for the expert interview can succeed.
Seven guideline-based interviews lasting 25 to 60 minutes were conducted with actors who had repeatedly been involved in expert interviews at the Landtag Brandenburg, and were evaluated with a qualitative content analysis. In doing so, the perspective of experts on this teaching method is captured for the first time. The analysis showed that communication in preparation for school expert interviews can have positive effects on the quality of the learning opportunity for the pupils and on the experts' willingness to participate. To this end, it must take place at eye level and be goal-oriented, manageable, and reliable. For goal orientation, this thesis proposes its own definition of school expert interviews, which functions as an umbrella term for five teaching methods with different goals. A concrete proposal for the communication process is developed for contexts equipped with staff resources, such as the Landtag Brandenburg, as well as for direct contact. It is also shown that communication that succeeds in this way conveys different content than that recommended so far in the didactic literature.
Residential segregation is a widespread phenomenon that can be observed in almost every major city. In these urban areas, residents with different ethnic or socioeconomic backgrounds tend to form homogeneous clusters. In Schelling's classical segregation model, two types of agents are placed on a grid. An agent is content with its location if the fraction of its neighbors that have the same type as the agent is at least τ, for some 0 < τ ≤ 1. Discontent agents simply swap their location with a randomly chosen other discontent agent or jump to a random empty location. The model gives a coherent explanation of how clusters can form even if all agents are tolerant, i.e., if they agree to live in mixed neighborhoods. For segregation to occur, all it needs is a slight bias towards agents preferring similar neighbors.
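A minimal sketch of the classical jump dynamics described above, in Python; the grid size, tolerance τ, fraction of empty cells, and number of steps are illustrative parameters:

    import random

    def schelling(width=20, height=20, tau=0.5, empty_frac=0.1, steps=10000):
        """Classical Schelling jump dynamics on a torus grid.
        Cells hold 0 (empty), 1, or 2 (the two agent types)."""
        cells = [1, 2] * int(width * height * (1 - empty_frac) / 2)
        cells += [0] * (width * height - len(cells))
        random.shuffle(cells)
        grid = [cells[i * width:(i + 1) * width] for i in range(height)]

        def content(x, y):
            t = grid[y][x]
            nbrs = [grid[(y + dy) % height][(x + dx) % width]
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
            occupied = [n for n in nbrs if n != 0]
            # Content if the fraction of same-type neighbors is at least tau.
            return not occupied or sum(n == t for n in occupied) / len(occupied) >= tau

        for _ in range(steps):
            x, y = random.randrange(width), random.randrange(height)
            if grid[y][x] != 0 and not content(x, y):
                ex, ey = random.randrange(width), random.randrange(height)
                if grid[ey][ex] == 0:  # jump to a random empty location
                    grid[ey][ex], grid[y][x] = grid[y][x], 0
        return grid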
Although the model is well studied, previous research focused on a random process point of view. However, it is more realistic to assume instead that the agents strategically choose where to live. We close this gap by introducing and analyzing game-theoretic models of Schelling segregation, where rational agents strategically choose their locations.
As a first step, we introduce and analyze a generalized game-theoretic model that allows more than two agent types and more general underlying graphs modeling the residential area. We introduce different versions of Swap and Jump Schelling Games. Swap Schelling Games assume that every vertex of the underlying graph serving as a residential area is occupied by an agent, and pairs of discontent agents can swap their locations, i.e., their occupied vertices, to increase their utility. In contrast, for the Jump Schelling Game, we assume that there exist empty vertices in the graph and agents can jump to these vacant vertices if this increases their utility. We show that the number of agent types as well as the structure of the underlying graph heavily influence the dynamic properties and the tractability of finding an optimal strategy profile.
As a second step, we significantly deepen these investigations for the swap version with 𝜏 = 1 by studying the influence of the underlying topology modeling the residential area on the existence of equilibria, the Price of Anarchy, and the dynamic properties. Moreover, we restrict the agents' movements to be local. As a main takeaway, we find that both aspects influence the existence and the quality of stable states.
Furthermore, also for the swap model, we follow sociological surveys and, asking the same core game-theoretic questions, study non-monotone single-peaked utility functions instead of monotone ones, i.e., utility functions that are not monotone in the fraction of same-type neighbors. Our results clearly show that moving from monotone to non-monotone utilities yields novel structural properties and different results in terms of the existence and quality of stable states.
In the last part, we introduce an agent-based saturated open-city variant, the Flip Schelling Process, in which agents, based on the predominant type in their neighborhood, decide whether to change their types. We provide a general framework for analyzing the influence of the underlying topology on residential segregation and investigate the probability that an edge is monochrome, i.e., that both incident vertices have the same type, on random geometric and Erdős–Rényi graphs. For random geometric graphs, we prove the existence of a constant c > 0 such that the expected fraction of monochrome edges after the Flip Schelling Process is at least 1/2 + c. For Erdős–Rényi graphs, we show the expected fraction of monochrome edges after the Flip Schelling Process is at most 1/2 + o(1).
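As a rough, hedged illustration of the quantity analyzed, the following Python sketch (assuming a single synchronous majority-vote round over uniform random initial types; the thesis's process may differ in details such as the tie-breaking rule) measures the fraction of monochrome edges on an Erdős–Rényi graph generated with networkx.

```python
import random
import networkx as nx

def flip_round(graph):
    """One synchronous round: each vertex adopts the predominant type
    in its neighborhood, keeping its type on a tie (assumed rule)."""
    types = {v: random.choice((0, 1)) for v in graph}
    new_types = {}
    for v in graph:
        ones = sum(types[u] for u in graph[v])
        deg = graph.degree(v)
        new_types[v] = 1 if 2 * ones > deg else 0 if 2 * ones < deg else types[v]
    return new_types

def monochrome_fraction(graph, types):
    edges = list(graph.edges)
    return sum(types[u] == types[v] for u, v in edges) / len(edges)

g = nx.erdos_renyi_graph(n=2000, p=0.01, seed=7)
print(monochrome_fraction(g, flip_round(g)))  # stays near 1/2, consistent
                                              # with the 1/2 + o(1) bound
```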
An exploration of activity and therapist preferences and their predictors in German-speaking samples
(2023)
According to current definitions of evidence-based practice, patients’ preferences play an important role in the psychotherapeutic process and its outcomes. However, whereas a significant body of research has investigated preferences regarding specific treatments, research on preferred activities or therapist characteristics is rare, has investigated heterogeneous aspects with inconclusive results, has lacked validated assessment tools, and has neglected relevant preferences, their predictors, and the perspective of mental health professionals. Therefore, the three studies of this dissertation aimed to address the most fundamental drawbacks of current preference research by providing a validated questionnaire, focusing efforts on activity and therapist preferences, and adding the preferences of psychotherapy trainees. To this end, Paper I reports the translation and validation of the 18-item Cooper-Norcross Inventory of Preferences (C-NIP) in a broad, heterogeneous sample of N = 969 laypeople, resulting in good to acceptable reliabilities and first evidence of validity; the original factor structure, however, was not replicated. Paper II assesses the activity preferences of psychotherapists in training using the C-NIP and compares them with the initial laypeople sample. There were significant differences between the samples, with trainees preferring a more patient-directed, emotionally intense, and confrontational approach than laypeople. CBT trainees preferred a more therapist-directed, present-focused, challenging, and less emotionally intense approach than psychodynamic or psychoanalytic trainees. Paper III explores therapist preferences and tests predictors of specific preference choices. For most characteristics, more than half of the participants did not have specific preferences. Results pointed towards congruency effects (i.e., a preference for similar characteristics), especially for members of marginalized groups. The dissertation provides researchers and practitioners with a validated questionnaire, shows potentially obstructive differences between patients and therapists, and underlines the importance of therapist characteristics for marginalized groups, thereby laying the foundation for future applications and implementations in research and practice.
Distributed decision-making studies the choices made among a group of interactive and self-interested agents. Specifically, this thesis is concerned with the optimal sequence of choices an agent makes as it tries to maximize its achievement on one or multiple objectives in a dynamic environment. The optimization of distributed decision-making is important in many real-life applications, e.g., resource allocation (of products, energy, bandwidth, computing power, etc.) and robotics (heterogeneous agent cooperation on games or tasks), in various fields such as vehicular networks, the Internet of Things, and smart grids.
This thesis proposes three multi-agent reinforcement learning algorithms combined with game-theoretic tools to study strategic interaction between decision makers, using resource allocation in vehicular networks as an example. Specifically, the thesis designs an interaction mechanism based on a second-price auction, incentivizes the agents to maximize multiple short-term and long-term, individual and system objectives, and simulates a dynamic environment with realistic mobility data to evaluate algorithm performance and study agent behavior.
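As a minimal illustration of the mechanism family named above, the following Python sketch implements one sealed-bid second-price (Vickrey) auction round; the agent identifiers and bids are illustrative, and the thesis's multi-agent, multi-objective vehicular setting is considerably richer.

```python
# Hedged sketch of a sealed-bid second-price (Vickrey) auction round;
# illustrative only, not the thesis's full interaction mechanism.
def second_price_auction(bids):
    """bids: dict mapping agent id -> bid. Returns (winner, price paid)."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    price = bids[ranked[1]] if len(ranked) > 1 else 0.0
    return winner, price

winner, price = second_price_auction({"a": 3.0, "b": 5.0, "c": 4.0})
print(winner, price)  # "b" wins but pays only the second-highest bid: 4.0
```

The design rationale for this rule is classical: because the winner pays the second-highest bid, truthful bidding is a dominant strategy, which is what makes auction-based incentives attractive for self-interested agents.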
Theoretical results show that the mechanism has Nash equilibria and that, in a stationary environment, it maximizes social welfare and yields a Pareto-optimal allocation of resources. Empirical results show that in the dynamic environment, our proposed learning algorithms outperform state-of-the-art algorithms in single- and multi-objective optimization and generalize well to significantly different environments. Specifically, with the long-term multi-objective learning algorithm, we demonstrate that by considering the long-term impact of decisions, and by incentivizing the agents with a system fairness reward, the agents achieve better results on both individual and system objectives, even when their objectives are private, randomized, and changing over time. Moreover, the agents show competitive behavior to maximize individual payoff when resources are scarce and cooperative behavior in achieving a system objective when resources are abundant; they also learn the rules of the game without prior knowledge, overcoming disadvantages in initial parameters (e.g., a lower budget).
To address practicality concerns, the thesis also provides several methods for improving computational performance and tests the algorithm on a single-board computer. Results show the feasibility of online training and of inference within milliseconds.
There are many potential future topics following this work. 1) The interaction mechanism can be modified into a double auction, eliminating the auctioneer and resembling a completely distributed, ad hoc network. 2) The objectives are assumed to be independent in this thesis; a more realistic assumption might model correlations between objectives, such as a hierarchy of objectives. 3) The current work limits information-sharing between agents, a setup that befits applications with privacy requirements or sparse signaling; by allowing more information-sharing between the agents, the algorithms can be adapted to more cooperative scenarios such as robotics.
This research focuses on empowering leadership, a leadership style that shares autonomy and responsibilities with the followers. Empowering leadership enhances the meaningfulness of work by fostering participation in decision-making, expressing confidence in high performance, and providing autonomy in target setting (Cheong, 2016). I examine how empowering leadership affects followers’ reflection. Using data from 528 individuals across 172 teams, I found a positive relationship between empowering leadership and followers’ reflection. Followers’ reflection, in turn, is negatively associated with followers’ withdrawal, which mediates the beneficial effect of empowering leadership on leaders’ emotional exhaustion. As for the leaders, I propose that empowering leadership is also negatively related to leaders’ emotional exhaustion. This research broadens our understanding of the effects of empowering leadership on both followers and leaders, and it integrates the empowering leadership, leader emotional exhaustion, and burnout literatures. Overall, empowering leadership strengthens members’ reflective attitudes and behaviors, which result in reduced withdrawal (and increased presence and contribution) in teams. Because the members contribute more to the team effort, the leaders experience less emotional exhaustion. Hence, my work not only identifies new ways through which empowering leadership positively affects followers but also shows how these positive effects on followers benefit the leaders’ well-being.
Planets outside our solar system, so-called "exoplanets", can be detected with different methods, and currently more than 5000 exoplanets have been confirmed according to the NASA Exoplanet Archive. One major highlight of the studies on exoplanets in the past twenty years is the characterization of their atmospheres using transmission spectroscopy as the exoplanet transits. However, this characterization is a challenging process, and discrepancies are sometimes reported in the literature regarding the atmosphere of the same exoplanet. One potential reason for the observed atmospheric inconsistencies is the impact parameter degeneracy, which is strongly driven by the limb darkening effect of the host star. A brief introduction to those topics is presented in chapter 1, while the motivation and objectives of this work are described in chapter 2. The first goal is to clarify the origin of the transmission spectrum, which is an indicator of an exoplanet's atmosphere: whether it is real or influenced by the impact parameter degeneracy. A second goal is to determine whether photometry from space using the Transiting Exoplanet Survey Satellite (TESS) could improve the major parameters of known exoplanetary systems that are responsible for the aforementioned degeneracy. Three individual projects were conducted in order to address those goals; the three manuscripts are briefly presented in the manuscript overview in chapter 3. More specifically, chapter 4 presents the first manuscript, an extended investigation of the impact parameter degeneracy and its application to synthetic transmission spectra. Evidently, the limb darkening of the host star is an important driver of this effect: it keeps the degeneracy persisting through different groups of exoplanets, depending on the uncertainty of their impact parameter and on the type of their host star. The second goal was addressed in the second and third manuscripts (chapters 5 and 6, respectively). Using observations from the TESS mission, two samples of exoplanets were studied: 10 transiting inflated hot Jupiters and 43 transiting grazing systems. The refinement or confirmation of their major system parameters can potentially assist in resolving current or future discrepancies regarding their atmospheric characterization. Chapter 7 discusses the conclusions of this work, while chapter 8 proposes how TESS measurements could help discern between erroneous interpretations of transmission spectra, especially for systems where the impact parameter degeneracy likely does not apply.
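For readers unfamiliar with the stellar effect named above, the quadratic limb-darkening law is one standard parameterisation used in transit modelling (stated here for illustration; the thesis may employ this or other laws):

```latex
\[
  \frac{I(\mu)}{I(1)} = 1 - u_1 (1 - \mu) - u_2 (1 - \mu)^2,
  \qquad \mu = \cos\theta,
\]
```

where theta is the angle between the line of sight and the normal to the stellar surface, and the coefficients u_1 and u_2 shape, together with the impact parameter, the curvature of the transit light curve.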
Transposable elements (TEs) are loci that can replicate and multiply within the genome of their host. Through transposition, TEs are responsible for variation in genomic architecture and gene regulation across all vertebrates. Genome assemblies have increased in number in recent years. However, to explore in depth the variation within and between genomes, such as SNPs (single nucleotide polymorphisms), INDELs (insertions-deletions), satellites, and transposable elements, we need high-quality genomes. Studies of molecular markers over the past 10 years have been limited in their ability to correlate markers with biological differences, because molecular markers rely on the accuracy of the genomic resources. As a result, a substantial share of recent TE studies has focused on taxa with high-quality genomic resources such as Drosophila, zebra finch, and maize. As testudines have a slow mutation rate, second only to crocodilians, and comprise more than 300 species adapted to different environments across the globe, the testudine clade can help us study variation. Here we propose Testudines as a clade in which to study variation and TE abundance across species that diverged long ago. We investigated the genomic diversity of sea turtles, identifying key genomic regions associated with gene-family duplication, expansions of particular TE families specific to Dermochelyidae that are important for phenotypic differentiation, the impact of environmental changes on their populations, and the dynamics of TEs within different lineages. In chapter 1, we show that, despite high levels of genome synteny among sea turtles, regions of reduced collinearity and microchromosomes harbor higher concentrations of multicopy gene families and greater genetic distances between species, indicating their potential importance as sources of variation underlying phenotypic differentiation. We found that differences in the ecological niches occupied by leatherback and green turtles have led to contrasting evolutionary paths for their olfactory receptor genes, and we identified a long-term low population size in leatherback turtles. Nonetheless, we found no correlation between the regions of reduced collinearity and TE abundance, nor an accumulation of any particular TE group in those regions. In chapter 2, we show that sea turtle genomes contain a significant proportion of TEs, with differences in TE abundance between species; the discovery of a recent expansion of Penelope-like elements (PLEs) in the highly conserved sea turtle genome provides new insights into the dynamics of TEs within Testudines. In chapter 3, we compared the proportion of TEs across the testudine clade and found that it is stable, regardless of the quality of the assemblies. However, whether the proportion of a given TE order correlates with genome quality depends on how abundant that order is: for retrotransposons, highly abundant in this clade, we found no correlation, whereas for DNA elements, which are rarer in this clade, the estimated proportion correlates with assembly quality.
Here we confirm that high-quality genomes are fundamental for the study of transposable element evolution and conservation within the clade. The detection and estimated abundance of specific TE orders are influenced by the quality of the genomes. We found that a reduction in population size in D. coriacea has left signals of long-term low population size in its genome. Likewise, we identified a TE expansion in D. coriacea that is not present in any other available testudine genome, strongly suggesting that it is a response to TE deregulation as a consequence of the low population size.
Here we have identified genomic regions and gene families important for phenotypic differentiation and highlighted the impact of environmental changes on sea turtle populations. We showed that accurate classification and analysis of TE families are important and require high-quality genome assemblies. Using TE analysis, we were able to identify differences between highly syntenic species. These findings have significant implications for conservation and provide a foundation for further research into genome evolution and gene function in turtles and other vertebrates. Overall, this study contributes to our understanding of evolutionary change and adaptation mechanisms.
A measured way of dealing with nature seems [...] to be possible only if one can build a relationship with it. (Wittkowske, 2001, p. 87)
Teachers currently bear the responsibility of implementing education for sustainable development comprehensively and in a learner-appropriate way in the teaching of their subjects. In primary education, this applies in particular to teachers of Sachunterricht and its related subjects, since Sachunterricht, as the anchor subject of primary school, offers manifold opportunities to integrate education for sustainable development. One of these opportunities is school gardening, provided it is given an appropriate conceptual orientation. This reorientation is carried out in the present volume.
Volume 2 of the Potsdamer Beiträge zur Innovation des Sachunterrichts, oriented towards school practice, addresses all teachers of Sachunterricht and its related subjects. The volume provides teachers with a set of instruments that ensures the children's hands-on practical learning activity in the school garden as a learning content and learning site of Sachunterricht, in keeping with the goals, dimensions, and competence expectations of education for sustainable development. To this end, theoretical foundations of both school gardens and education for sustainable development are laid out and related to different types of school gardens, before the concept of the bildender Nachhaltigkeitsgarten (educational sustainability garden) is derived from these considerations.
In this thesis, we investigate language learning in the formalisation of Gold [Gol67]. Here, a learner, successively presented with all information about a target language, conjectures which language it believes to be shown. Once these hypotheses converge syntactically to a correct explanation of the target language, the learning is considered successful; fittingly, this is termed explanatory learning. To model learning strategies, we impose restrictions on the hypotheses made, for example requiring the conjectures to follow a monotonic behaviour. This way, we can study the impact a certain restriction has on learning.
Recently, the literature has shifted towards map charting. Here, various seemingly unrelated restrictions are contrasted, unveiling interesting relations between them; the results are then depicted in maps. For explanatory learning, the literature already provides maps of common restrictions for various forms of data presentation.
In the case of behaviourally correct learning, where the learners are required to converge semantically instead of syntactically, the same restrictions as in explanatory learning have been investigated. However, a similarly complete picture regarding their interaction has not been presented yet.
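For orientation, the two convergence criteria can be stated in the standard notation of inductive inference (assumed here rather than quoted from the thesis): for a learner h on a text T of the target language L, with W_e denoting the language with index e in a fixed hypothesis space,

```latex
\begin{align*}
  \text{explanatory (Ex):} &\quad
    \exists e\, \exists n_0\, \forall n \ge n_0:\;
    h(T[n]) = e \ \text{and}\ W_e = L,\\
  \text{behaviourally correct (Bc):} &\quad
    \exists n_0\, \forall n \ge n_0:\;
    W_{h(T[n])} = L.
\end{align*}
```

Thus Bc-learners may keep changing the syntactic form of their conjectures forever, as long as almost all of them describe the target language.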
In this thesis, we transfer the map-charting approach to behaviourally correct learning. In particular, we complete the partial results from the literature for many well-studied restrictions and provide full maps for behaviourally correct learning with different types of data presentation. We also study properties of learners deemed important in the literature. We are interested in whether learners are consistent, that is, whether their conjectures include the data they are built on. While learners cannot be assumed consistent in explanatory learning, the opposite is the case in behaviourally correct learning. Even further, it is known that learners following different restrictions may be assumed consistent. We contribute to the literature by showing that this is the case for all studied restrictions.
We also investigate mathematically interesting properties of learners. In particular, we are interested in whether learning under a given restriction may be done with strongly Bc-locking learners. Such learners are of particular value as they allow one to apply simulation arguments when, for example, comparing two learning paradigms to each other. The literature provides a rich foundation on when learners may be assumed strongly Bc-locking, which we complete for all studied restrictions.
The impact of individual differences in cognitive skills and socioeconomic background on key educational, occupational, and health outcomes, as well as the mechanisms underlying inequalities in these outcomes across the lifespan, are two central questions in lifespan psychology. The contextual embeddedness of such questions in ontogenetic (i.e., individual, age-related) and historical time is a key element of lifespan psychological theoretical frameworks such as the HIstorical changes in DEvelopmental COntexts (HIDECO) framework (Drewelies et al., 2019). Because the dimension of time is also a crucial part of empirical research designs examining developmental change, a third central question in research on lifespan development is how the timing and spacing of observations in longitudinal studies might affect parameter estimates of substantive phenomena. To address these questions in the present doctoral thesis, I applied innovative state-of-the-art methodology including static and dynamic longitudinal modeling approaches, used data from multiple international panel studies, and systematically simulated data based on empirical panel characteristics, in three empirical studies.
The first study of this dissertation, Study I, examined the importance of adolescent intelligence (IQ), grade point average (GPA), and parental socioeconomic status (pSES) for adult educational, occupational, and health outcomes over ontogenetic and historical time. To examine the possible impact of historical changes in the 20th century on the relationships between adolescent characteristics and key adult life outcomes, the study capitalized on data from two representative US cohort studies, the National Longitudinal Surveys of Youth 1979 and 1997, whose participants were born in the late 1960s and 1980s, respectively. Adolescent IQ, GPA, and pSES were positively associated with adult educational attainment, wage levels, and mental and physical health. Across historical time, the influence of IQ and pSES for educational, occupational, and health outcomes remained approximately the same, whereas GPA gained in importance over time for individuals born in the 1980s.
The second study of this dissertation, Study II, aimed to examine strict cumulative advantage (CA) processes as possible mechanisms underlying individual differences and inequality in wage development across the lifespan. It proposed dynamic structural equation models (DSEM) as a versatile statistical framework for operationalizing and empirically testing strict CA processes in research on wages and wage dynamics (i.e., wage levels and growth rates). Drawing on longitudinal representative data from the US National Longitudinal Survey of Youth 1979, the study modeled wage levels and growth rates across 38 years. Only 0.5 % of the sample revealed strict CA processes and explosive wage growth (autoregressive coefficients AR > 1), with the majority of individuals following logarithmic wage trajectories across the lifespan. Adolescent intelligence (IQ) and adult highest educational level explained substantial heterogeneity in initial wage levels and long-term wage growth rates over time.
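The strict-CA operationalisation can be sketched as an autoregressive wage process (an illustrative reduced form; the DSEMs in the study include further person-level random effects):

```latex
\[
  w_{i,t} = \alpha_i + \phi_i\, w_{i,t-1} + \varepsilon_{i,t},
  \qquad \varepsilon_{i,t} \sim \mathcal{N}(0, \sigma^2),
\]
```

where |phi_i| < 1 yields convergent, logarithmic-looking trajectories, while phi_i > 1 corresponds to the explosive, strictly cumulative wage growth observed for only 0.5 % of the sample.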
The third study of this dissertation, Study III, investigated the role of observation-timing variability in the estimation of non-experimental intervention effects in panel data. Although longitudinal studies often aim at equally spaced intervals between their measurement occasions, this goal is hardly ever met. Drawing on continuous-time dynamic structural equation models, the study examines the seemingly counterintuitive potential benefits of measurement intervals that vary both within and between participants (often called individually varying time intervals, IVTs) in a panel study. It illustrates the method by modeling the effect of the transition from primary to secondary school on students’ academic motivation, using empirical data from the German National Educational Panel Study (NEPS). Results of a simulation study based on this real-life example reveal that individual variation in time intervals can indeed benefit the estimation precision and the recovery of the true intervention-effect parameters.
Learning the causal structures from observational data is an omnipresent challenge in data science. The amount of observational data available to Causal Structure Learning (CSL) algorithms is increasing as data is collected at high frequency from many data sources nowadays. While processing more data generally yields higher accuracy in CSL, the concomitant increase in the runtime of CSL algorithms hinders their widespread adoption in practice. CSL is a parallelizable problem. Existing parallel CSL algorithms address execution on multi-core Central Processing Units (CPUs) with dozens of compute cores. However, modern computing systems are often heterogeneous and equipped with Graphics Processing Units (GPUs) to accelerate computations. Typically, these GPUs provide several thousand compute cores for massively parallel data processing.
To shorten the runtime of CSL algorithms, we design efficient execution strategies that leverage the parallel processing power of GPUs. In particular, we derive GPU-accelerated variants of a well-known constraint-based CSL method, the PC algorithm, as it allows choosing a statistical Conditional Independence test (CI test) appropriate to the characteristics of the observational data.
Our two main contributions are as follows. (1) To reflect differences in the CI tests, we design three GPU-based variants of the PC algorithm, each tailored to a CI test for data with particular characteristics: one variant for data assuming the Gaussian distribution model, one for discrete data, and another for mixed discrete-continuous data and data with non-linear relationships. Each variant is optimized for the appropriate CI test, leveraging GPU hardware properties such as shared or thread-local memory. Our GPU-accelerated variants outperform state-of-the-art parallel CPU-based algorithms by factors of up to 93.4× for data assuming the Gaussian distribution model, up to 54.3× for discrete data, up to 240× for continuous data with non-linear relationships, and up to 655× for mixed discrete-continuous data. However, these GPU-based variants are limited to datasets that fit into a single GPU’s memory. (2) To overcome this shortcoming, we develop approaches to scale our GPU-based variants beyond a single GPU’s memory capacity. For example, we design an out-of-core GPU variant that employs explicit memory management to process arbitrarily sized datasets. Runtime measurements on a large gene expression dataset reveal that our out-of-core GPU variant is 364 times faster than a parallel CPU-based CSL algorithm. Overall, our proposed GPU-accelerated variants speed up CSL in numerous settings, fostering CSL’s adoption in practice and research.
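To make the constraint-based skeleton concrete, the following Python sketch shows level 0 of the PC algorithm with the Gaussian CI test via Fisher's z-transform, run sequentially on the CPU; the thesis's contribution lies in executing thousands of such tests in parallel on GPUs, which this sketch deliberately does not reproduce.

```python
from itertools import combinations

import numpy as np
from scipy.stats import norm

def gaussian_ci_test(data, i, j, alpha=0.05):
    """Fisher's z test of marginal independence of columns i and j
    (empty separation set, so the statistic uses sqrt(n - 3))."""
    n = data.shape[0]
    r = np.corrcoef(data[:, i], data[:, j])[0, 1]
    z = 0.5 * np.log((1 + r) / (1 - r))      # Fisher z-transform
    stat = np.sqrt(n - 3) * abs(z)
    return stat <= norm.ppf(1 - alpha / 2)   # True => independent

def pc_level_zero(data, alpha=0.05):
    """Start from the complete graph and delete every edge whose
    endpoints test as marginally independent (level 0 of PC)."""
    p = data.shape[1]
    return {(i, j) for i, j in combinations(range(p), 2)
            if not gaussian_ci_test(data, i, j, alpha)}
```

Higher levels repeat the same test conditioned on growing separation sets S, replacing the factor above by sqrt(n - |S| - 3); since the tests within one level do not depend on each other's outcomes, they map naturally onto thousands of GPU threads.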
This thesis explores the variation in coreference patterns across language modes (i.e., spoken and written) and text genres. The significance of research on variation in language use has been emphasized in a number of linguistic studies. For instance, Biber and Conrad [2009] state that “register/genre variation is a fundamental aspect of human language” and “Given the ubiquity of register/genre variation, an understanding of how linguistic features are used in patterned ways across text varieties is of central importance for both the description of particular languages and the development of cross-linguistic theories of language use.” [p. 23]
We examine the variation across genres with the primary goal of contributing to the body of knowledge on the description of language use in English. On the computational side, we believe that incorporating linguistic knowledge into learning-based systems can boost the performance of automatic natural language processing systems, particularly for non-standard texts. Therefore, in addition to their descriptive value, the linguistic findings we provide in this study may prove to be helpful for improving the performance of automatic coreference resolution, which is essential for a good text understanding and beneficial for several downstream NLP applications, including machine translation and text summarization.
In particular, we study a genre of texts formed by conversational interactions on the well-known social media platform Twitter. Two factors motivate us: First, Twitter conversations are realized in written form but resemble spoken communication [Scheffler, 2017], and therefore they form an atypical genre for the written mode. Second, while Twitter texts are a challenging genre for automatic coreference resolution, their widespread use in the digital sphere makes them highly relevant for applications that seek to extract information or sentiment from users’ messages. Thus, we are interested in discovering more about the linguistic and computational aspects of coreference in Twitter conversations. We first created a corpus of such conversations for this purpose and annotated it for coreference. Since we are interested not only in the coreference patterns but also in the overall discourse behavior of Twitter conversations, we additionally annotated the coherence relations on the corpus we compiled. The corpus is available online in a newly developed form that allows for separating the tweets from their annotations.
This study consists of three empirical analyses in which we independently apply corpus-based, psycholinguistic, and computational approaches to the investigation of variation in coreference patterns in a complementary manner. (1) We first make a descriptive analysis of variation across genres through a corpus-based study. We investigate the linguistic aspects of nominal coreference in Twitter conversations and determine how this genre relates to other text genres in the spoken and written modes. In addition to variation across genres, the differences between the spoken and written modes have been a focus of linguistic research at least since Woolbert [1922]. (2) In order to investigate whether the language mode alone has any effect on coreference patterns, we carry out a crowdsourced experiment and analyze the patterns in the same genre for both spoken and written modes. (3) Finally, we explore the potential of domain adaptation of automatic coreference resolution (ACR) for conversational Twitter data. To answer the question of how the genre of Twitter conversations relates to other genres in the spoken and written modes with respect to coreference patterns, we employ a state-of-the-art neural ACR model [Lee et al., 2018] and examine whether ACR on Twitter conversations benefits from mode-based separation in out-of-domain training data.
The North Pamir, part of the India-Asia collision zone, essentially formed during the late Paleozoic to late Triassic–early Jurassic. Coeval with the subduction of the Turkestan ocean during the Carboniferous Hercynian orogeny in the Tien Shan, a portion of the Paleo-Tethys ocean subducted northward and led to the formation and obduction of a volcanic arc. This Carboniferous North Pamir arc is of Andean style in the western Darvaz segment and trends towards an intraoceanic arc in the eastern Oytag segment. A suite of arc-volcanic rocks and intercalated marine sediments, together with intruded voluminous plagiogranites (trondhjemite and tonalite) and granodiorites, was uplifted and eroded during the Permian, as demonstrated by widespread sedimentary unconformities. Today it constitutes a major portion of the North Pamir.
In this work, the first comprehensive Uranium-Lead (U-Pb) laser-ablation inductively-coupled-plasma mass-spectrometry (LA-ICP-MS) radiometric age data are presented along with geochemical data from the volcanic and plutonic rocks of the North Pamir volcanic arc. Zircon U-Pb data indicate a major intrusive phase between 340 and 320 Ma. The magmatic rocks show an arc signature, with more primitive signatures in the Oytag segment than in the Darvaz segment. Volcanic rocks in the Chinese North Pamir were dated indirectly by determining the age of ocean-floor alteration. We investigate calcite-filled vesicles and show that oxidative seawater and the basaltic host rock are the major trace element sources. The age of ocean-floor alteration, within a range of 25 Ma, constrains the extrusion age of the volcanic rocks. In the Chinese Pamir, arc-volcanic basalts have been dated to the Visean-Serpukhovian boundary. This relates the North Pamir volcanic arc to coeval units in the Tien Shan. Our findings further question the idea of a continuous Tarim-Tajik continent in the Paleozoic.
From the Permian (Guadalupian) onward, a progressive sea retreat led to continental conditions in the northeastern Pamir. Large parts of Central Asia were affected by transcurrent tectonics, while subduction of the Paleo-Tethys continued south of the accreted North Pamir arc, likely forming an accretionary wedge that represents an early stage of the later Karakul-Mazar tectonic unit. Graben systems dissected the Permian carbonate platforms that had formed on top of the uplifted Carboniferous arc in the central and western North Pamir, and a continental graben formed in the eastern North Pamir. Zircon U-Pb dating suggests initiation of volcanic activity at ~260 Ma. Extensional tectonics prevailed throughout the Triassic, forming the Hindukush-North Pamir rift system. New geochemistry and zircon U-Pb data tie volcanic rocks found in the Chinese Pamir to coeval arc-related plutonic rocks within the Karakul-Mazar arc-accretionary complex. The sedimentary environment in the continental North Pamir rift evolved from an alluvial-plain, lake-dominated environment in the Guadalupian to a coarser-clastic, alluvial, braided-river-dominated environment in the Triassic. Volcanic activity terminated in the early Jurassic. We conducted Potassium-Argon (K-Ar) fine-fraction dating on the Shala Tala thrust fault, a major structure juxtaposing Paleozoic marine units of lower-greenschist to amphibolite facies conditions against continental Permian deposits. Fault slip under epizonal conditions is dated to 204.8 ± 3.7 Ma (2σ), implying Rhaetian nappe emplacement. This pinpoints the Central–North Pamir collision, since the Shala Tala thrust was a back-thrust at that time.
The aesthetic phenomenon of the uncanny in literature and art is a spatial and gendered aesthetic concept, expressed in the spatial characteristics of a literary or photographic narrative. The intention of this thesis is to evaluate the entanglement of the uncanny, space, domesticity, and femininity in the context of Gothic literature and photography. These four objects can only be read in their interplay with each other and in how they each function as structural principles in the framework of Gothic fiction and photography. The literary texts, Charlotte Perkins Gilman’s “The Yellow Wall-Paper” (1892) and Shirley Jackson’s “The Lovely House” (1950) and The Haunting of Hill House (1959), as well as Francesca Woodman’s self-portraits discussed here, share one particular quality: they use the haunted house motif to express the protagonist’s psychological state by transferring mental hauntings onto the narrative’s spatial layer. Establishing the connection between the concepts at hand, the uncanny, domesticity, spatiality, and femininity, is the basis for the first half of the thesis. What follows is an overview of how domestic politics and gendered perceptions of, and behaviors in, spaces are expressed in the Gothic mode in particular. The literary analysis examines two ways in which the Freudian uncanny constitutes itself in the haunted house narrative: first, the house as the site of repetition and, second, the house as a stand-in for the maternal body. Drawing on Gernot Böhme’s and Martina Löw’s theoretical work on space and atmosphere, the thesis focuses on the different aesthetic strategies that produce the uncanny atmosphere associated with the Gothic haunted house. The female subjects at the narratives’ center are in the ambiguous process of disappearing or becoming, and this (dis)appearing act is facilitated by their haunted surroundings. In the case of the unnamed narrator in “The Yellow Wall-Paper”, her suppressed rage at her husband is mirrored in the strangled woman trapped inside the yellow wallpaper; once she recognizes her doppelganger, the union of her two selves takes place in the short story’s dramatic climax. In Shirley Jackson’s literary works, the haunted houses, protagonists in themselves, entrap, transform, and ultimately devour their female daughter-victims. The haunted houses are symbols, means, and places of the continuous tradition of female entrapment within the domestic sphere, be it as wives, mothers, or daughters. In Francesca Woodman’s self-portraits, the themes of creation/destruction and becoming/disappearing within the ruinous (post)domestic sphere are acted out by the fragmented and blurry female figure, who intriguingly oscillates between self-empowerment and submission to destruction.
Monoclonal antibodies (mAbs) are among the most important biomolecules for environmental analysis and medical diagnostics. For the detection of microorganisms, they form the basis of fast and precise test procedures. To date, owing to the high expenditure of time and material and to unspecific immunization strategies, there are only few mAbs that specifically recognize microorganisms.
To this end, a practicable procedure for generating mAbs against microorganisms was to be developed and was validated using Escherichia coli O157:H7 and Legionella pneumophila. In this dissertation, new surface structures on the microorganisms were identified by means of comparative genome analyses and in silico epitope analyses. These were integrated into the viral coat protein VP1 and used for a targeted immunization strategy. To determine antigen-specific antibody-producing hybridomas, an immunostaining protocol was developed and established for sorting the hybridomas by flow cytometry.
In the present study, a total of 53 potential protein candidates for E. coli O157:H7 and 38 proteins for L. pneumophila were identified with the aid of the bioinformatic analysis. Five different potential epitopes were selected for E. coli O157:H7 and three for L. pneumophila and used for immunization with chimeric VP1. All immune sera showed an antigen-specific immune response. From the subsequently generated hybridoma cells, several antibody candidates were obtained that exhibited strong binding to E. coli O157:H7 or L. pneumophila, respectively, in characterization studies. No, or only minor, cross-reactivities with other relevant microorganisms were detected.
Consequently, the interdisciplinary approach described here for generating specific mAbs against microorganisms demonstrably yielded specific mAbs and can be employed as a highly efficient workflow for producing antibodies against microorganisms.
Modular and incremental global model management with extended generalized discrimination networks
(2023)
Complex projects developed under the model-driven engineering paradigm nowadays often involve several interrelated models, which are automatically processed via a multitude of model operations. Modular and incremental construction and execution of such networks of models and model operations are required to accommodate efficient development with potentially large-scale models. The underlying problem is also called Global Model Management.
In this report, we propose an approach to modular and incremental Global Model Management via an extension to the existing technique of Generalized Discrimination Networks (GDNs). In addition to further generalizing the notion of query operations employed in GDNs, we adapt the previously query-only mechanism to operations with side effects to integrate model transformation and model synchronization. We provide incremental algorithms for the execution of the resulting extended Generalized Discrimination Networks (eGDNs), as well as a prototypical implementation for a number of example eGDN operations.
Based on this prototypical implementation, we experiment with an application scenario from the software development domain to empirically evaluate our approach with respect to scalability and conceptually demonstrate its applicability in a typical scenario. Initial results confirm that the presented approach can indeed be employed to realize efficient Global Model Management in the considered scenario.
Throughout the last ~3 million years, the Earth's climate system was characterised by cycles of glacial and interglacial periods. The current warm period, the Holocene, is comparatively stable and stands out from this long-term cyclicity. However, since the industrial revolution, the climate has been increasingly affected by a human-induced increase in greenhouse gas concentrations. While instrumental observations are used to describe changes over the past ~200 years, indirect observations via proxy data are the main source of information beyond this instrumental era. These data are indicators of past climatic conditions, stored in palaeoclimate archives around the Earth. The proxy signal is, however, also affected by processes independent of the prevailing climatic conditions. In particular, in sedimentary archives such as marine sediments and polar ice sheets, material may be redistributed during or after the initial deposition and the subsequent formation of the archive. This leads to noise in the records, challenging reliable reconstructions at local or short time scales. This dissertation characterises the initial deposition of the climatic signal and quantifies the resulting archive-internal heterogeneity and its influence on the observed proxy signal, in order to improve the representativity and interpretation of climate reconstructions from marine sediments and ice cores.
To this end, the horizontal and vertical variation in radiocarbon content of a box core from the South China Sea is investigated. The three-dimensional resolution is used to quantify the true uncertainty in radiocarbon age estimates from planktonic foraminifera with an extensive sampling scheme, including different sample volumes and replicated measurements of batches with small and large numbers of specimens. An assessment of the variability stemming from sediment mixing by benthic organisms reveals strong internal heterogeneity. Hence, sediment mixing leads to substantial time uncertainty in proxy-based reconstructions, with error terms two to five times larger than previously assumed.
A second three-dimensional analysis, of the upper snowpack, provides insights into the heterogeneous signal deposition and imprint in snow and firn. A new study design combining a structure-from-motion photogrammetry approach with two-dimensional isotopic data is applied at a study site in the accumulation zone of the Greenland Ice Sheet. The photogrammetry method reveals the intermittent character of snowfall and a layer-wise snow deposition, with substantial contributions of wind-driven erosion and redistribution to the final, spatially variable accumulation, and it illustrates the evolution of stratigraphic noise at the surface. The isotopic data show the preservation of stratigraphic noise within the upper firn column, leading to a spatially variable climate signal imprint and heterogeneous layer thicknesses. Additional post-depositional modifications due to snow-air exchange are also investigated, but without a conclusive quantification of their contribution to the final isotopic signature.
Finally, this characterisation and quantification of the complex signal formation in marine sediments and polar ice contributes to a better understanding of the signal content of proxy data, which is needed to assess natural climate variability during the Holocene.
In late summer, migratory bats of the temperate zone face the challenge of accomplishing two energy-demanding tasks almost at the same time: migration and mating. Both require information and involve search efforts, such as localizing prey or finding potential mates. In non-migrating bat species, playback studies have shown that listening to the vocalizations of other bats, both con- and heterospecifics, may help a recipient bat to find foraging patches and mating sites. However, we are still unaware of the degree to which migrating bats depend on con- or heterospecific vocalizations for identifying potential feeding or mating opportunities during nightly transit flights. Here, we investigated the vocal responses of Nathusius’ pipistrelle bats, Pipistrellus nathusii, to simulated feeding and courtship aggregations at a coastal migration corridor. We presented migrating bats with either feeding buzzes or courtship calls of their own or of a heterospecific migratory species, the common noctule, Nyctalus noctula. We expected that during migratory transit flights, simulated feeding opportunities would be particularly attractive to bats, as would simulated mating opportunities, which may indicate suitable roosts for a stopover. However, we found that, compared to the natural silence of both the pre- and post-playback phases, bats called indifferently during the playback of conspecific feeding sounds, whereas P. nathusii echolocation call activity increased during simulated feeding of N. noctula. In contrast, the call activity of P. nathusii decreased during the playback of conspecific courtship calls, while no response could be detected when heterospecific call types were broadcast. Our results suggest that while on migratory transits, P. nathusii circumnavigate conspecific mating aggregations, possibly to save time or to reduce the risks associated with social interactions where aggression due to territoriality might be expected. This avoidance behavior could result from optimization strategies of P. nathusii when performing long-distance migratory flights, and it could also explain the lack of a response to simulated conspecific feeding. However, the observed increase in activity in response to simulated feeding of N. noctula suggests that P. nathusii individuals may eavesdrop on other aerial-hawking insectivorous species during migration, especially if these occupy a slightly different foraging niche.