German vocational education and training has lost much of its appeal in recent years. This is particularly true of dual commercial vocational training. While it was still considered a viable training path for high-achieving school students a few years ago, most of them now prefer university studies. However, the growing number of university dropouts shows that potential is lost in this way, because young people who opt for university choose a path that is not suited to them. Previous efforts to establish alternative educational paths, such as vocational academies (Berufsakademien), have had some success, but are based on a concept oriented exclusively toward the needs of the economy. It is the author's conviction, however, that new, innovative educational paths must also take into account the needs and expectations of those for whom they are designed, since today's generation of young people holds a different set of values than preceding generations. This dissertation therefore develops a model of business-oriented training that is derived from various elements of motivation theory while taking into account the values of today's young generation. It draws on both Barnard's inducement-contribution theory and Vroom's expectancy theory. In addition, a main focus of this work is on adapting Herzberg's two-factor theory to the present day.
Empirically, the dissertation is based on a three-stage research design. The first stage comprised a quantitative survey of 459 secondary-school graduates (Abiturienten) and 100 university students. In the second stage, 10 students and 12 school graduates were interviewed qualitatively. In the third stage, the results were validated through expert interviews. The aim of the empirical study was to test four hypotheses as the basis for deriving the model:
Hypothesis H1 - Flexibility increases the attractiveness of business-oriented training: The factor flexibility was identified as a relevant motivator for the choice of a training path. Young people today do not want to have to commit themselves immediately or too early.
Hypothesis H2 - Stays abroad increase the attractiveness of business-oriented training: It was confirmed that stays abroad increase the attractiveness of business-oriented training; however, a number of barriers keep young people from considering a stay abroad for themselves, even though they recognize its general benefit.
Hypothesis H3 - Showing a career perspective increases the attractiveness of business-oriented training: When choosing a training path, today's generation of young people is primarily interested in the prospect of a job that provides a secure income, and thus a good life, and that they perceive as meaningful. Only a minority aspires to leadership positions, which also involve taking on greater responsibility.
Hypothesis H4 - Additional monetary incentives increase the attractiveness of business-oriented training: Remuneration components are not rejected as such (that would be irrational), but they do not have the incentive effect that the preliminary study of this work suggested. They play only a subordinate role in the decision for a training path; nevertheless, remuneration contributes to the attractiveness of a training path.
Based on these results, the model of business-oriented training was derived, which is flexible both horizontally and vertically. Horizontal flexibility is provided by exposing trainees to different companies and industries within a training year (years 1 and 2); specialization takes place only in the later training years. Vertical flexibility is given by the option to enter working life with a recognized qualification after each training year and, if desired, to resume the training at a later point. In addition, the model allows university dropouts to enter the training in year 2 or 3. Stays abroad are integrated into years 2 and/or 3 and are offered on an optional basis; preparatory courses can be taken from year 1 onwards. The high importance of career perspectives is addressed at several levels of the model: a recognized qualification is awarded after each training year. While the qualifications of years 1 and 2 are equivalent to IHK (Chamber of Industry and Commerce) certificates, academic degrees begin in year 3 (year 3: Bachelor, year 4: Master). Remuneration is part of business-oriented training, with its amount increasing over the course of the training.
Since introducing the model of business-oriented training requires overcoming institutional paradigms and barriers, a further expert survey on its feasibility was conducted as part of the outlook of this work. The model presupposes a flexibility on the institutional side (in particular from the chambers) that the majority of the experts currently view with skepticism. The conceptual design meets with general approval, although some details, such as the duration of the training, still need to be clarified.
In principle, the experts share the author's view that a change of thinking in the German vocational training landscape is both desired and demanded, particularly in the commercial sector. With the model of business-oriented training, this work makes an important contribution to the discussion of new training paths.
East Africa is a natural laboratory: Studying its unique geological and biological history can help us better inform our theories and models. Studying its present and future can help us protect its globally important biodiversity and ecosystem services. East African vegetation plays a central role in all these aspects, and this dissertation aims to quantify its dynamics through computer simulations.
Computer models help us recreate past settings, forecast into the future, or conduct simulation experiments that we cannot otherwise perform in the field. But before all that, one needs to test a model's performance. The outputs that the model produced using present-day inputs agreed well with present-day observations of East African vegetation. Next, I simulated past vegetation for which we have fossil pollen data to compare against. With computer models, we can fill the gaps in knowledge between the sites from which we have fossil pollen data and create a more complete picture of the past. A good level of agreement between model and pollen data, where they overlapped in space, further validated our model performance.
Once the model was tested and validated for the region, it became possible to probe one of the long-standing questions regarding East African vegetation: How did East Africa lose its tropical forests? Present-day vegetation in the tropics is characterized by continuous forests worldwide, except in tropical East Africa, where forests occur only as patches. In a series of simulation experiments, I was able to show under which conditions these forest patches could have been connected and fragmented in the past. This study demonstrated the sensitivity of East African vegetation to climate change and to variability such as that expected under future climate change.
El Niño Southern Oscillation (ENSO) events, which result from fluctuations in temperature between the ocean and the atmosphere, bring further variability to East African climate and are predicted to increase in intensity in the future. But climate models are still not good at capturing the patterns of these events. In a study in which I quantified the influence of ENSO events on East African vegetation, I showed how different the future vegetation could be from what we currently predict with climate models that lack an accurate ENSO contribution. Considering these discrepancies is important for our future global carbon budget calculations and management decisions.
Business process automation improves the efficiency with which organizations perform work. To this end, a business process is first documented as a process model, which then serves as a blueprint for a number of process instances representing the execution of specific business cases. In existing business process management systems, process instances run independently from each other. In practice, however, instances are also collected in groups at certain process activities for a combined execution to improve process performance. Currently, this so-called batch processing is executed manually or supported by external software. Only a few research proposals exist to explicitly represent and execute batch processing needs in business process models, and these works also lack a comprehensive understanding of the requirements.
This thesis addresses the described issues by providing a basic concept, called the batch activity. It allows an explicit representation of batch processing configurations in process models and provides a corresponding execution semantics, thereby easing automation. The batch activity groups different process instances based on their data context and can synchronize their execution over one or even multiple process activities. The concept was conceived based on a requirements analysis considering existing literature on batch processing from different domains as well as industry examples. Further, this thesis provides two extensions: First, a flexible batch configuration concept, based on event processing techniques, is introduced to allow run-time adaptation of batch configurations. Second, a concept for collecting and batching activity instances of multiple different process models is given. Thereby, the batch configuration is centrally defined, independently of the process models, which is especially beneficial for organizations with large process model collections. This thesis provides a technical evaluation as well as a validation of the presented concepts. A prototypical implementation in an existing open-source BPMS shows that batch processing can be enabled with a few extensions. Further, it demonstrates that the consolidated view of several work items in one user form can improve work efficiency. The validation, in which the batch activity concept is applied to different use cases in a simulated environment, indicates cost savings for business processes when a suitable batch configuration is used. For the validation, an extensible business process simulator was developed; it enables process designers to study the influence of a batch activity on a process's performance.
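The core grouping mechanism can be illustrated with a small sketch (class and attribute names are hypothetical; the actual batch activity concept also covers execution semantics and multi-activity synchronization, which are not shown here):

```python
from collections import defaultdict

class BatchActivity:
    """Hypothetical sketch: collect process instances by a data-context key
    and release them for combined execution once a size threshold is met."""

    def __init__(self, group_key, max_batch_size=3):
        self.group_key = group_key          # function extracting the grouping attribute
        self.max_batch_size = max_batch_size
        self.batches = defaultdict(list)    # data-context key -> waiting instances

    def enqueue(self, instance):
        key = self.group_key(instance)
        batch = self.batches[key]
        batch.append(instance)
        if len(batch) >= self.max_batch_size:   # activation rule: size threshold
            del self.batches[key]
            return batch                        # batch released for combined execution
        return None                             # instance keeps waiting

# Example: batch shipping tasks by destination city
activity = BatchActivity(group_key=lambda i: i["city"], max_batch_size=2)
assert activity.enqueue({"order": 1, "city": "Berlin"}) is None
released = activity.enqueue({"order": 2, "city": "Berlin"})
assert [i["order"] for i in released] == [1, 2]
```

A real batch configuration would also include activation rules beyond a size threshold, such as timeouts, which the flexible configuration extension could adapt at run time.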
The development of self-adaptive software requires engineering an adaptation engine that controls the underlying adaptable software through a feedback loop. State-of-the-art approaches prescribe the feedback loop in terms of the number of feedback loops, how the activities (e.g., monitor, analyze, plan, and execute (MAPE)) and the knowledge are structured into a feedback loop, and the type of knowledge. Moreover, the feedback loop is usually hidden in the implementation or framework and therefore not visible in the architectural design. Additionally, an adaptation engine often employs runtime models that either represent the adaptable software or capture strategic knowledge such as reconfiguration strategies. State-of-the-art approaches do not systematically address the interplay of such runtime models, which would otherwise allow developers to freely design the entire feedback loop.
This thesis presents ExecUtable RuntimE MegAmodels (EUREMA), an integrated model-driven engineering (MDE) solution that rigorously uses models for engineering feedback loops. EUREMA provides a domain-specific modeling language to specify and an interpreter to execute feedback loops. The language allows developers to freely design a feedback loop concerning the activities and runtime models (knowledge) as well as the number of feedback loops. It further supports structuring the feedback loops in the adaptation engine that follows a layered architectural style. Thus, EUREMA makes the feedback loops explicit in the design and enables developers to reason about design decisions.
To address the interplay of runtime models, we propose the concept of a runtime megamodel, which is a runtime model that contains other runtime models as well as activities (e.g., MAPE) working on the contained models. This concept is the underlying principle of EUREMA. The resulting EUREMA (mega)models are kept alive at runtime and they are directly executed by the EUREMA interpreter to run the feedback loops. Interpretation provides the flexibility to dynamically adapt a feedback loop. In this context, EUREMA supports engineering self-adaptive software in which feedback loops run independently or in a coordinated fashion within the same layer as well as on top of each other in different layers of the adaptation engine. Moreover, we consider preliminary means to evolve self-adaptive software by providing a maintenance interface to the adaptation engine.
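The idea of activities operating on shared runtime models can be illustrated with a minimal sketch that wires MAPE functions around a dictionary of models (all names and thresholds are invented for illustration; EUREMA itself specifies this with a domain-specific modeling language and executes it with an interpreter, not plain code):

```python
def monitor(models, sensor_reading):
    # Monitor: update the reflection model of the adaptable software
    models["system"]["load"] = sensor_reading

def analyze(models):
    # Analyze: derive issues from the reflection model
    models["issues"]["overload"] = models["system"]["load"] > 0.8

def plan(models):
    # Plan: consult strategic knowledge to pick a reconfiguration
    models["plan"]["action"] = "scale_out" if models["issues"]["overload"] else "noop"

def execute(models, effector):
    # Execute: enact the planned action on the adaptable software
    effector(models["plan"]["action"])

# The "megamodel" holds the runtime models; the loop below plays the interpreter
megamodel = {"system": {}, "issues": {}, "plan": {}}
actions = []
for reading in (0.5, 0.95):
    monitor(megamodel, reading)
    analyze(megamodel)
    plan(megamodel)
    execute(megamodel, actions.append)
assert actions == ["noop", "scale_out"]
```

Because the models and the activity wiring are explicit data rather than hidden control flow, a higher-layer loop could inspect and rewrite them at runtime, which is the flexibility interpretation provides.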
This thesis discusses in detail EUREMA by applying it to different scenarios such as single, multiple, and stacked feedback loops for self-repairing and self-optimizing the mRUBiS application. Moreover, it investigates the design and expressiveness of EUREMA, reports on experiments with a running system (mRUBiS) and with alternative solutions, and assesses EUREMA with respect to quality attributes such as performance and scalability.
The conducted evaluation provides evidence that EUREMA as an integrated and open MDE approach for engineering self-adaptive software seamlessly integrates the development and runtime environments using the same formalism to specify and execute feedback loops, supports the dynamic adaptation of feedback loops in layered architectures, and achieves an efficient execution of feedback loops by leveraging incrementality.
The central question of this work is: Does the debt brake (Schuldenbremse) secure fiscal sustainability in Germany? To answer this question, the work first examines what anticipatory effects the introduction of the debt brake had on the German federal states (Länder) in the period 2010-2016. For this purpose, the observed consolidation performance and the consolidation incentive or pressure existing in 2009 were evaluated with a scorecard developed specifically for this purpose. Multiple regression analysis was then used to analyze how the factors of the scorecard influence the consolidation performance of the federal states. It was found that almost 90% of the variation is explained by the independent variables budget situation, debt burden, revenue growth, and pension burden, and that the debt brake likely played a rather subordinate role in the consolidation episode of 2009-2016. Subsequently, data collected in 65 expert interviews were used to analyze the limits of the new fiscal rule's effectiveness, i.e., which risks could complicate or prevent compliance with the debt brake in the future: municipal debt, FEUs, contingent liabilities in the form of guarantees for financial institutions, and pension obligations. The frequently voiced criticisms that the debt brake acts as a brake on the economy and on investment are also examined and rejected. Finally, potential future developments regarding the debt brake and public administration in Germany, as well as the consolidation efforts of the Länder, are discussed.
Plastic pollution is ubiquitous on the planet, since several million tons of plastic waste enter aquatic ecosystems each year. Furthermore, the amount of plastic produced is expected to increase exponentially in the near future. The heterogeneity of materials, additives, and physical characteristics of plastics is typical of these emerging contaminants and affects their environmental fate in marine and fresh waters. Consequently, plastics can be found in the water column, sediments, or littoral habitats of all aquatic ecosystems. Most of this plastic debris will fragment as a product of physical, chemical, and biological forces, producing particles of small size. These particles (< 5 mm) are known as "microplastics" (MP). Given their high surface-to-volume ratio, MP stimulate biofouling and the formation of biofilms in aquatic systems.
As a result of their unique structure and composition, the microbial communities in MP biofilms are referred to as the "Plastisphere." While there are increasing data regarding the distinctive composition and structure of the microbial communities that form part of the plastisphere, scarce information exists regarding the activity of microorganisms in MP biofilms. This surface-attached lifestyle is often associated with an increase in horizontal gene transfer (HGT) among bacteria. Therefore, this type of microbial activity represents a relevant function worth analyzing in MP biofilms. The horizontal exchange of mobile genetic elements (MGEs) is an essential feature of bacteria: it accounts for the rapid evolution of these prokaryotes and their adaptation to a wide variety of environments. The process of HGT is also crucial for the spread of antibiotic resistance and for the evolution of pathogens, as many MGEs are known to contain antibiotic resistance genes (ARGs) and genetic determinants of pathogenicity.
In general, the research presented in this Ph.D. thesis focuses on the analysis of HGT and heterotrophic activity in MP biofilms in aquatic ecosystems. The primary objective was to compare the potential for gene exchange in MP bacterial communities with that of the surrounding water, including bacteria from natural aggregates. Moreover, the thesis addressed the potential of MP biofilms for the proliferation of biohazardous bacteria and MGEs originating from wastewater treatment plants (WWTPs) and associated with antibiotic resistance. Finally, it examined whether the physiological profile of MP biofilms under different limnological conditions diverges from that of the water communities. Accordingly, the thesis is composed of three independent studies published in peer-reviewed journals. The two laboratory studies were performed using both model and environmental microbial communities; in the field experiment, natural communities from freshwater ecosystems were examined.
In Chapter I, the inflow of treated wastewater into a temperate lake was simulated with a concentration gradient of MP particles, and the effects of MP on the microbial community structure and on the occurrence of integrase 1 (int1) were followed. int1 is a marker associated with mobile genetic elements and a known proxy for anthropogenic effects on the spread of antimicrobial resistance genes. During the experiment, the abundance of int1 increased in the plastisphere with increasing MP particle concentration, but not in the surrounding water. In addition, the microbial community on MP became more similar to the original wastewater community with increasing microplastic concentrations. Our results show that microplastic particles indeed promote the persistence of standard indicators of microbial anthropogenic pollution in natural waters.
In Chapter II, the experiments aimed to compare the permissiveness of aquatic bacteria towards the model antibiotic resistance plasmid pKJK5 between communities that form biofilms on MP and those that are free-living. The frequency of plasmid transfer in bacteria associated with MP was higher than in bacteria that were free-living or in natural aggregates. Moreover, this increased gene exchange occurred in a broad range of phylogenetically diverse bacteria. The results indicate a different activity of HGT in MP biofilms, which could affect the ecology of aquatic microbial communities on a global scale as well as the spread of antibiotic resistance.
Finally, in Chapter III, physiological measurements were performed to assess whether microorganisms on MP had a different functional diversity from those in water. General heterotrophic activity such as oxygen consumption was compared in microcosm assays with and without MP, while the diversity and richness of heterotrophic activities were calculated using Biolog® EcoPlates. Three lakes with different nutrient statuses showed differences in MP-associated biomass build-up. Functional diversity profiles of MP biofilms in all lakes differed from those of the communities in the surrounding water, but only in the oligo-mesotrophic lake did MP biofilms have a higher functional richness than the ambient water. The results support the view that MP surfaces act as new niches for aquatic microorganisms and can affect the global carbon dynamics of pelagic environments.
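The kind of index behind such functional-diversity comparisons can be sketched as follows (the substrate-utilization values below are made up for illustration; actual EcoPlate analyses involve 31 carbon sources and replicate wells):

```python
import math

def shannon_diversity(activities):
    """Shannon diversity H' = -sum(p_i * ln p_i) over substrate-use activities."""
    total = sum(activities)
    ps = [a / total for a in activities if a > 0]
    return -sum(p * math.log(p) for p in ps)

def functional_richness(activities, threshold=0.0):
    """Number of substrates utilized above a detection threshold."""
    return sum(1 for a in activities if a > threshold)

# Hypothetical substrate-utilization profiles (e.g., color development per well)
mp_biofilm = [0.8, 0.6, 0.0, 0.9, 0.4, 0.7]
water = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
assert functional_richness(mp_biofilm) == 5
# A perfectly even profile over more substrates has the higher Shannon diversity
assert shannon_diversity(water) > shannon_diversity(mp_biofilm)
```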
Overall, the experimental work presented in Chapters I and II supports a scenario in which MP pollution affects HGT dynamics among aquatic bacteria. Among the consequences of this alteration is an increase in the mobilization and transfer efficiency of ARGs. Moreover, the thesis suggests that changes in HGT can affect the evolution of bacteria and the processing of organic matter, leading to different catabolic profiles such as those demonstrated in Chapter III. The results are discussed in the context of the fate and magnitude of plastic pollution and the importance of HGT for bacterial evolution and the microbial loop, i.e., the base of aquatic food webs. The thesis supports a relevant role of MP biofilm communities in the changes observed in the aquatic microbiome as a product of intense human intervention.
The continuously increasing pollution of aquatic environments with microplastics (plastic particles < 5 mm) is a global problem with potential implications for organisms of all trophic levels. For microorganisms, trillions of these floating microplastic particles represent a huge surface area for colonization. Due to their very low biodegradability, microplastics remain in the environment for years to centuries and can be transported over thousands of kilometers together with the attached organisms. Since pathogenic, invasive, or otherwise harmful species could also be spread this way, it is essential to study microplastics-associated communities.
For this doctoral thesis, eukaryotic communities on microplastics in brackish environments were analyzed for the first time and compared to communities in the surrounding water and on the natural substrate wood. Using Illumina MiSeq high-throughput sequencing, more than 500 different eukaryotic taxa were detected in the microplastics samples. Among them were various green algae, dinoflagellates, ciliates, fungi, fungus-like protists, and small metazoans such as nematodes and rotifers. The most abundant organism was a dinoflagellate of the genus Pfiesteria, which may include fish-pathogenic and bloom-forming toxigenic species. Network analyses revealed numerous possible interactions among prokaryotes and eukaryotes in microplastics biofilms. Eukaryotic community compositions on microplastics differed significantly from those on wood and in water, and compositions were additionally distinct among the sampling locations. Furthermore, biodiversity was clearly lower on microplastics than on wood or in the surrounding water.
In another experiment, a situation was simulated in which treated wastewater containing microplastics was introduced into a freshwater lake. With increasing microplastics concentrations, the resulting bacterial communities became more similar to those from the treated wastewater. Moreover, the abundance of integrase I increased together with rising concentrations of microplastics. Integrase I is often used as a marker for anthropogenic environmental pollution and is further linked to genes conferring, e.g., antibiotic resistance.
This dissertation gives detailed insights into the complexity of prokaryotic and eukaryotic communities on microplastics in brackish and freshwater systems. Even though microplastics provide novel microhabitats for various microbes, they may also transport toxigenic, pathogenic, antibiotic-resistant, or parasitic organisms, meaning their colonization can pose potential threats to humans and the environment. Finally, this thesis underscores the urgent need for more research as well as for strategies to minimize global microplastic pollution.
Metamaterial devices
(2018)
Digital fabrication machines such as 3D printers excel at producing arbitrary shapes, such as for decorative objects. In recent years, researchers started to engineer not only the outer shape of objects, but also their internal microstructure. Such objects, typically based on 3D cell grids, are known as metamaterials. Metamaterials have been used to create materials that, e.g., change their volume, or have variable compliance.
While metamaterials were initially understood as materials, we propose to think of them as devices.
We argue that thinking of metamaterials as devices enables us to create internal structures that offer functionalities implementing an input-process-output model without electronics, purely within the material's internal structure. In this thesis, we investigate three aspects of such metamaterial devices that implement parts of the input-process-output model: (1) materials that process analog inputs by implementing mechanisms based on their microstructure, (2) materials that process digital signals by embedding mechanical computation into the object's microstructure, and (3) interactive metamaterial objects that provide output to the user by changing their outside to interact with their environment. The input to our metamaterial devices is provided directly by users physically interacting with the device, e.g., turning a handle or pushing a button.
The design of such intricate microstructures, which enable the functionality of metamaterial devices, is not obvious. The complexity of the design arises from the fact that not only is a suitable cell geometry necessary, but the cells additionally need to play together in a well-defined way. To support users in creating such microstructures, we research and implement interactive design tools. These tools allow experts to freely edit their materials while supporting novice users by auto-generating cell assemblies from high-level input. Our tools implement easy-to-use interactions like brushing, interactively simulate the cell structures' deformation directly in the editor, and export the geometry as a 3D-printable file. Our goal is to foster more research and innovation on metamaterial devices by allowing the broader public to contribute.
This text is a contribution to the research on the worldwide success of evangelical Christianity and offers a new perspective on the relationship between late modern capitalism and evangelicalism. For this purpose, the utilization of affect and emotion in evangelicalism toward the mobilization of its members is examined in order to find out what similarities to their employment in late modern capitalism can be found. Different examples from within the evangelical spectrum are analyzed as affective economies in order to elaborate how affective mobilization is crucial for evangelicalism's worldwide success. The pivotal point of this text is the exploration of how evangelicalism is able to activate the voluntary commitment of its members, financiers, and missionaries. Gathered here are examples where both spheres, evangelicalism and late modern capitalism, overlap and reciprocate, followed by a theoretical exploration of how the findings presented support a view of evangelicalism as an inner-worldly narcissism that contributes to an assumed re-enchantment of the world.
Movement and navigation are essential for many organisms during some parts of their lives. This is also true for bacteria, which can move along surfaces and swim through liquid environments. They are able to sense their environment and move towards environmental cues in a directed fashion.
These abilities enable microbial life cycles in biofilms, improved food uptake, host infection, and much more. In this thesis, we study aspects of the swimming movement, or motility, of the soil bacterium Pseudomonas putida (P. putida). Like most bacteria, P. putida swims by rotating its helical flagella, but their arrangement differs from that of the main model organism in bacterial motility research, Escherichia coli (E. coli). P. putida is known for its intriguing motility strategy, in which fast and slow episodes occur one after another. Until now, it was not known how these two speeds are produced and what advantages they might confer on this bacterium.
Normally the flagella, the main component of thrust generation in bacteria, are not observable by ordinary light microscopy. To elucidate this behavior, we therefore used a fluorescent staining technique on a mutant strain of this species to specifically label the flagella while leaving the cell body only faintly stained. This allowed us to image the flagella of swimming bacteria with high spatial and temporal resolution using a customized high-speed fluorescence microscopy setup. Our observations show that P. putida can swim in three different modes. First, it can swim with the flagella pushing the cell body, which is the main mode of swimming motility previously known from other bacteria. Second, it can swim with the flagella pulling the cell body, which was thought not to be possible with multiple flagella. Lastly, it can wrap its flagellar bundle around the cell body, which results in a speed that is slower by a factor of two. In this mode, the flagella are in a different physical conformation with a larger helical radius, so that the cell body can fit inside. These three swimming modes explain the previous observation of two speeds, as well as the non-strict alternation of the different speeds.
Because most bacterial swimming in nature does not occur in smooth-walled glass enclosures under a microscope, we used an artificial, microfluidic, structured system of obstacles to study the motion of our model organism in a structured environment. Bacteria were observed by video microscopy and cell tracking in microchannels containing cylindrical obstacles of different sizes and spacings. We analyzed turning angles, run times, and run lengths, which we compared to a minimal model for movement in structured geometries. Our findings show that hydrodynamic interactions with the walls guide the bacteria along obstacles. When comparing the observed behavior with the statistics of a particle that is deflected at every obstacle contact, we find that cells run for longer distances than that model predicts.
Navigation in chemical gradients is one of the main applications of motility in bacteria. We studied the swimming response of P. putida cells to chemical stimuli (chemotaxis) of the common food preservative sodium benzoate. Using a microfluidic gradient generation device, we created gradients of varying strength and observed the motion of cells with a video microscope and subsequent cell tracking. Analysis of different motility parameters, such as run lengths and run times, shows that P. putida employs the classical chemotaxis strategy of E. coli: runs up the gradient are biased to be longer than those down the gradient. Exploiting the two different run speeds caused by the different swimming modes, we classified runs into 'fast' and 'slow' modes with a Gaussian mixture model (GMM). We find no evidence that P. putida uses its swimming modes to perform chemotaxis.
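The two-speed classification can be illustrated with a minimal NumPy sketch: a two-component 1D Gaussian mixture fitted by expectation-maximization to synthetic run speeds. The speed values below are invented for illustration (a slow mode near 15 µm/s and a fast mode near 30 µm/s, mimicking the factor-of-two difference) and are not the measured P. putida data:

```python
import numpy as np

def fit_gmm_1d(x, n_iter=200):
    """Two-component 1D Gaussian mixture fitted by plain-NumPy EM."""
    mu = np.array([x.min(), x.max()])        # initialize means at the extremes
    sigma = np.array([x.std(), x.std()])
    weight = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each sample
        dens = (weight * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2)
                / (sigma * np.sqrt(2 * np.pi)))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations
        n_k = resp.sum(axis=0)
        weight = n_k / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / n_k
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)
    return mu, sigma, weight, resp

rng = np.random.default_rng(1)
# Invented run speeds (um/s): 400 slow runs near 15, 400 fast runs near 30.
speeds = np.concatenate([rng.normal(15, 2, 400), rng.normal(30, 3, 400)])
mu, sigma, weight, resp = fit_gmm_1d(speeds)
is_fast = resp[:, np.argmax(mu)] > 0.5       # per-run 'fast' label
```

Each run is then assigned to the component with the higher posterior responsibility, which is the standard GMM classification rule.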
In most studies of bacterial motility, cell tracking is used to gather trajectories of individual swimming cells. These trajectories then have to be decomposed into run and tumble sections. Several algorithms have been developed to this end, but most require manual tuning of a number of parameters or extensive measurements with chemotaxis mutant strains. Together with our collaborators, we developed a novel motility analysis scheme based on generalized Kramers-Moyal coefficients. From the underlying stochastic model, many parameters, such as the run length, can be inferred by an optimization procedure without the need for explicit run-and-tumble classification. The method can, however, be extended to a fully fledged tumble classifier. Using this method, we analyzed E. coli chemotaxis measurements in an aspartate analog and found evidence for a chemotactic bias in the tumble angles.
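The underlying idea of extracting stochastic-model parameters from conditional moments of the data can be illustrated on a toy example. The sketch below is not the published analysis scheme; it estimates the first two classical Kramers-Moyal coefficients, drift D1 and diffusion D2, from a simulated Ornstein-Uhlenbeck time series, where D1(x) = <dx | x>/dt and D2(x) = <dx^2 | x>/(2 dt):

```python
import numpy as np

# Toy example: Ornstein-Uhlenbeck process dx = -theta*x*dt + sqrt(2*D)*dW,
# simulated by Euler-Maruyama; all parameter values are invented.
rng = np.random.default_rng(0)
theta, D, dt, n = 1.0, 0.5, 1e-3, 500_000
noise = rng.normal(0.0, np.sqrt(2.0 * D * dt), n - 1)
x = np.empty(n)
x[0] = 0.0
for i in range(n - 1):
    x[i + 1] = x[i] - theta * x[i] * dt + noise[i]

dx = np.diff(x)
edges = np.linspace(-1.5, 1.5, 31)                # 30 bins over the state space
idx = np.digitize(x[:-1], edges)
centers, d1, d2 = [], [], []
for b in range(1, len(edges)):
    sel = idx == b
    if sel.sum() > 500:                           # require decent statistics
        centers.append(0.5 * (edges[b - 1] + edges[b]))
        d1.append(dx[sel].mean() / dt)            # drift, expected ~ -theta*x
        d2.append((dx[sel] ** 2).mean() / (2 * dt))  # diffusion, expected ~ D

slope = np.polyfit(centers, d1, 1)[0]             # should recover ~ -theta
```

The recovered drift slope and diffusion level characterize the stochastic dynamics without ever labeling individual increments, which is the spirit of the classification-free approach described above.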
Light-driven diffusioosmosis
(2018)
The emergence of microfluidics has created the need for precise and remote control of micron-sized objects. I demonstrate how light-sensitive motion can be induced at the micrometer scale by the simple addition of a photosensitive surfactant, which makes it possible to switch hydrophobicity with light. With point-like laser irradiation, radial inward and outward hydrodynamic surface flows can be remotely switched on and off. In this way, ensembles of microparticles can be moved toward or away from the irradiation center. Particle motion is analyzed as a function of various parameters, such as surfactant and salt concentration, illumination conditions, surface hydrophobicity, and surface structure.
The physical origin of this process is so-called light-driven diffusioosmosis (LDDO), a phenomenon that was discovered in the framework of this thesis and is described here experimentally and theoretically. Briefly, focused light irradiation induces local photoisomerization, which creates a concentration gradient at the solid-liquid interface. To compensate for the change in osmotic pressure near the surface, a hydrodynamic flow along the surface is generated. LDDO is largely governed by the surface-surfactant interaction: surfactant adsorption is shown to depend on the isomerization state of the surfactant, so photoisomerization triggers surfactant attachment to, or detachment from, the surface. This change is considered to be one of the reasons for the formation of the LDDO flow.
These flows are induced not only by a focused laser source but also by global irradiation. Porous particles show reversible repulsive and attractive interactions when dispersed in a solution of the photosensitive surfactant, with repulsion and attraction controlled by the irradiation wavelength. Illumination with red light leads to the formation of aggregates, while illumination with blue light leads to the formation of a well-separated grid with equal interparticle distances between 2 µm and 80 µm, depending on the particle surface density. These long-range interactions are considered to result from an increase or decrease of the surfactant concentration around each particle, depending on the irradiation wavelength. Surfactant molecules adsorb inside the pores of the particles; light-induced photoisomerization changes the adsorption in the pores and drives surfactant molecules to the outside. The resulting concentration gradients generate symmetric flows around each single particle, i.e. local LDDO. By breaking this symmetry (e.g., by closing one side of the particle with a metal cap), one can achieve active self-propelled particle motion.
Legislative majorities in parliamentary systems, with their dualism of government camp and opposition parties, do not form freely. Rather, their coordination takes place in a field of tension between the actors' programmatic positions and their opportunistic competition with one another. The thesis breaks this problem down into three concrete research questions, under which it examines the patterns of conflict between actors in legislative majority coordination under majority governments in the German state parliaments: 1) To what extent does it depend on programmatic positions or on the opportunistic competition of the New Dualism between government camp and opposition parties whether opposition parties and the government camp cooperate or conflict when forming legislative majorities? 2) To what extent, against the background of differing programmatic positions and opportunistic considerations, does conflict rather than cooperation arise between coalition actors when forming joint legislative majorities? The latter question is then also embedded in the context of the Federal Republic's cooperative federalism: 3) To what extent is the formation of legislative majorities for the implementation of federal laws accompanied by more conflict in mixed coalitions (composed of parties that face each other in competing camps at the federal level) than in coalitions congruent across levels of government?
Theoretically, the thesis develops a rationalist model of the basic behavioral incentives in the formation of legislative majorities in the German state parliaments. On this basis, it examines how the actors strategically weigh programmatic and opportunistic incentives for conflict and cooperation. The thesis then derives concrete determinants, which are tested predominantly, but not exclusively, by quantitative methods. It draws on a largely newly compiled legislative database of 3,359 legislative procedures from 23 legislative terms between 1990 and 2013 in the states of Hamburg, Hesse, Mecklenburg-Western Pomerania, North Rhine-Westphalia, and Saxony-Anhalt.
The analysis of the conflict patterns between opposition parties and the government camp shows that an opposition party's programmatic distance to the government camp matters for opposition behavior; however, the same holds for opportunistic aspects (for instance, more competitive opposition behavior is observed when the last election was followed by a complete change of government). Opposition behavior appears rather fine-grained: differences occur not only between legislative terms but also within them, between actors as well as between individual bills. The analysis of general coalition conflict indicates that a considerable share of coalition conflict is structurally determined. If a governing coalition is the preferred coalition of the parties involved, coalition conflict is reduced; the same holds for a larger majority margin of the government camp. Moreover, there are indications that implementing federal laws under mixed coalitions whose partners differentiate themselves from each other in federal politics is accompanied by more coalition conflict than implementation under congruent coalitions.
The contribution of the thesis is multifaceted. First, it helps to better understand the strategies of actors in the legislative process. Second, as a normative contribution, it advances research on possible adverse effects of the New Dualism under majority governments. Third, taken together, the thesis is meant to illuminate the mechanics of the parliamentary systems in the German states themselves and to allow a better normative assessment of them. The background here is the decades-old debate about the best system and format of government for the German states as subnational entities. The third research question enriches this debate with a new aspect: knowing to what extent the implementation of federal laws in the states involves a 'coalition governance' problem, depending on the cross-level coalition pattern, adds a new and noteworthy facet to research on federal decision-making in the Federal Republic. It concerns a federally induced mechanical impairment of majority coordination in the state parliaments themselves, one that inhibits the potential federal flexibility in implementing federal laws. This paves the way for new debates on how the German states could achieve more legislative flexibility than under the majority coalition governments that have been common so far.
Landslides are frequent natural hazards in rugged terrain, occurring when the resisting frictional force on the surface of rupture yields to the gravitational force. These forces are functions of geological and morphological factors, such as the angle of internal friction, local slope gradient, or curvature, which remain static over hundreds of years, whereas more dynamic triggering events, such as rainfall and earthquakes, compromise the force balance by temporarily reducing resisting forces or adding transient loads. This thesis investigates landslide distribution and orientation due to landslide triggers (e.g. rainfall) at different scales (6 to 4·10^5 km²) and aims to link rainfall movement with the landslide distribution. It additionally explores the local impacts of extreme rainstorms on landsliding and the role of precursory stability conditions that could be induced by an earlier trigger, such as an earthquake.
Extreme rainfall is a common landslide trigger. Although several studies have assessed rainfall intensity and duration to study the distribution of the landslides thus triggered, only a few case studies have quantified spatial rainfall patterns (i.e. the orographic effect). Quantifying the regional trajectories of extreme rainfall could aid in predicting landslide-prone regions in Japan. To this end, I combined a non-linear correlation metric, namely event synchronization, with radial statistics to assess the general pattern of extreme rainfall tracks over distances of hundreds of kilometers using satellite-based rainfall estimates. The results showed that, although increases in rainfall intensity and duration correlate positively with landslide occurrence, the trajectories of typhoons and frontal storms were insufficient to explain the landslide distribution in Japan. Extreme rainfall trajectories inclined northwestwards and were concentrated at certain locations, such as the coastlines of southern Japan, a pattern not reflected in the distribution of about 5000 rainfall-triggered landslides. These landslides seemed to respond instead to the mean annual rainfall rates.
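Event synchronization itself is simple to state: extreme-rainfall events at two sites count as synchronized when they occur within a dynamical delay set by the local inter-event times. A simplified sketch of the symmetric measure Q (after Quian Quiroga et al.; the event times below are invented, not the satellite data used in the thesis):

```python
import numpy as np

def event_synchronization(ta, tb, tau_max=np.inf):
    """Symmetric event synchronization Q for two sorted arrays of
    event times (simplified illustrative version)."""
    def count(u, v):
        c = 0.0
        for i in range(1, len(u) - 1):
            for j in range(1, len(v) - 1):
                d = u[i] - v[j]
                # dynamical delay: half the smallest adjacent inter-event time
                tau = 0.5 * min(u[i + 1] - u[i], u[i] - u[i - 1],
                                v[j + 1] - v[j], v[j] - v[j - 1], 2 * tau_max)
                if 0 < d <= tau:
                    c += 1.0      # event in u shortly follows one in v
                elif d == 0:
                    c += 0.5      # simultaneous events count half
        return c
    norm = np.sqrt((len(ta) - 2) * (len(tb) - 2))
    return (count(ta, tb) + count(tb, ta)) / norm

# Invented event times (days): site b records each event one day after site a,
# so the two series are strongly synchronized and Q is close to 1.
a = np.array([3.0, 10.0, 17.0, 25.0, 33.0, 41.0, 50.0, 60.0])
b = a + 1.0
q = event_synchronization(a, b)
```

Computed pairwise between many sites, such Q values form the correlation network from which dominant rainfall tracks can be extracted with radial statistics.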
These findings called for further investigation on a more local scale to better understand the mechanistic response of the landscape to extreme rainfall in terms of landslides. In May 2016, intense rainfall struck southern Germany, triggering high waters and landslides. The highest damage was reported in Braunsbach, which is located on the tributary-mouth fan formed by the Orlacher Bach, a ~3 km long creek that drains a catchment of about 6 km². I visited this catchment in June 2016 and mapped 48 landslides along the creek. Such high landslide activity was not reported in the nearby catchments within ~3300 km², despite similar rainfall intensity and duration based on weather radar estimates. My hypothesis was that several landslides were triggered by rainfall-induced flash floods that undercut hillslope toes along the Orlacher Bach. I found that morphometric features such as slope and curvature play an important role in the landslide distribution at this micro-scale study site (<10 km²). In addition, the high number of landslides along the Orlacher Bach could have been boosted by damage accumulated on the hillslopes through karst weathering over longer time scales.
Precursory damage on hillslopes can also be induced by past triggering events that affect landscape evolution, but this interaction is hard to assess independently of the latest trigger. For example, an earthquake might influence the evolution of a landscape for decades, beyond its direct impacts such as coseismic landslides. Here I studied the consequences of the 2016 Kumamoto earthquake (MW 7.1), which triggered some 1500 landslides in an area of ~4000 km² in central Kyushu, Japan. Topography, i.e. local slope and curvature, both amplifies and attenuates seismic waves, thus controlling the failure mechanism of such landslides (e.g. progressive failure). I found that topography fails to explain the distribution and preferred orientation of the landslides after this earthquake; instead, the landslides were concentrated northeast of the rupture area and faced mostly normal to the rupture plane. This preferred location was dominated mainly by the directivity effect of the strike-slip earthquake, i.e. the propagation of wave energy along the fault in the rupture direction, whereas amplitude variations of the seismic radiation altered the preferred orientation. I suspect that the earthquake directivity and the asymmetry of the seismic radiation damaged hillslopes at these preferred locations, increasing landslide susceptibility. Hence, a future weak triggering event, e.g. scattered rainfall, could trigger further landslides on these damaged hillslopes.
Holocene climate variability is generally characterized by lower-frequency changes than the last glaciation, including the Lateglacial. However, there is ample evidence for decadal- to centennial-scale oscillations and millennial-scale climate trends, which lie within and beyond a human lifetime's perception, respectively. Within the Baltic realm, a transitional zone between oceanic and continental climate influence, the impact of Holocene and Lateglacial climate and environmental change is currently only partly understood. This is mainly attributed to the scarcity of well-dated, high-resolution sediment records and to the lack of continuity of the archives investigated so far.
The aim of this doctoral thesis is to reconstruct Holocene and Lateglacial climate variability on local to (supra)regional scales, down to annual resolution, based on the varved (annually laminated) sediments of Lake Czechowskie. The project was carried out within the Virtual Institute for Integrated Climate and Landscape Evolution Analyses (ICLEA) and funded by the Helmholtz Association and the Helmholtz Climate Initiative REKLIM (Regional Climate Change). ICLEA aims at a better understanding of climate variability and landscape evolution processes in the northern Central European lowlands since the last deglaciation. REKLIM Topic 8, “Abrupt climate change derived from proxy data”, aims at identifying spatiotemporal patterns of climate variability, e.g. between higher and lower latitudes. The main aims of this thesis were (i) to establish robust chronologies based on a multiple dating approach, for Lake Czechowskie covering the Lateglacial and Holocene and for the Trzechowskie palaeolake covering the Lateglacial, (ii) to reconstruct past climatic and environmental conditions on centennial to multi-millennial time scales, and (iii) to distinguish between locally and regionally different sediment responses to climate change.
Addressing the first aim, the Lake Czechowskie chronology was established by a multiple dating approach comprising varve counting, tephrochronology, AMS 14C dating of terrestrial plant remains, biostratigraphy, and 137Cs activity concentration measurements. These independent age constraints cover the Lateglacial and the entire Holocene and were implemented in a Bayesian age model using OxCal v4.2. Thus, robust chronological information is available for absolute age determination even within non-varved sediment intervals. The identification of five cryptotephras, three of which serve as unambiguous isochrones, is a further significant improvement of the Czechowskie chronology and currently unique for the Holocene in Poland. The first finding of the coexisting early Holocene Hässeldalen and Askja-S cryptotephras within a varved sequence even allowed differential dating between the two volcanic ashes and stimulated the discussion about revising the absolute age of the Askja-S tephra.
The Trzechowskie palaeolake chronology was likewise established by a multiple dating approach comprising varve counting, tephrochronology, AMS 14C dating of terrestrial plant remains, and biostratigraphy; it covers the Lateglacial period (Allerød and Younger Dryas) and was also implemented in OxCal v4.2. These age constraints allowed regional correlation with other high-resolution climate archives and the identification of leads and lags in proxy responses at the onset of the Younger Dryas.
The second aim was accomplished by detailed micro-facies and geochemical analyses of the Czechowskie sediments for the entire Holocene. Micro-facies changes in particular were linked to enhanced productivity at Lake Czechowskie. The most prominent changes were recorded at 7.3, 6.5, 4.3, and 2.8 varve kyrs BP and are linked to a stepwise increasing influence of Atlantic air masses. Especially the mid-Holocene change, widely reported from palaeohydrological records at low latitudes, was identified and linked to a large-scale reorganization of atmospheric circulation patterns. Long-term changes in climatic and environmental boundary conditions are thus widely recorded by the Czechowskie sediments. The pronounced response to (multi)millennial-scale changes is further corroborated by the lack of clear sediment responses to early Holocene centennial-scale climate oscillations (e.g. the Preboreal Oscillation).
Decadal-scale changes at Lake Czechowskie during the most recent period (the last 140 years), however, were investigated in a lake comparison study. To fulfill the third aim of the thesis, three lakes in close vicinity to each other were investigated in order to better distinguish how local, site-specific parameters may superimpose regional climate-driven changes. All lakes were unambiguously linked by the Askja AD 1875 cryptotephra and independent varve chronologies. As a result, climate warming was recorded by sedimentation changes only at the smallest and best-sheltered lake (Głęboczek), whereas the largest lake (Czechowskie) and the shallowest lake (Jelonek) showed attenuated and less clear sediment responses, respectively. The different responses were linked to morphological lake characteristics (lake size and depth, catchment area). This study highlights the potential of high-resolution lake comparisons for robust proxy-based climate reconstructions.
In summary, this doctoral thesis presents a high-resolution sediment record with an underlying age model, which is the prerequisite for unprecedented age control down to annual resolution. The sediment-proxy-based climate reconstructions demonstrate the importance of the Czechowskie sediments for a better understanding of climate variability in the southern Baltic realm. The case studies showed a clear response on millennial time scales, while decadal-scale fluctuations are either less well expressed or superimposed by local, site-specific parameters. The identified volcanic ash layers not only provide unambiguous isochrones but also serve as key tie lines for local to supra-regional archive synchronization, establishing Lake Czechowskie as a key climate archive.
The Himalayan arc stretches >2500 km from east to west along the southern edge of the Tibetan Plateau, representing one of the most important Cenozoic continent-continent collisional orogens. Internal deformation processes and climatic factors, which drive weathering, denudation, and transport, influence the growth and erosion of the orogen. During glacial times, wet-based glaciers sculpted the mountain range and left overdeepened, U-shaped valleys, which were backfilled with paraglacial sediments during interglacial times over several cycles. Parts of these sediments still remain within the valleys because of insufficient evacuation capacity into the foreland. Such climatic processes overlie the long-term tectonic processes responsible for uplift and exhumation caused by convergence. The processes that may accommodate convergence within the orogenic wedge along the main Himalayan faults, which divide the range into four major lithologic units, are debated. In this context, identifying the processes shaping the Earth's surface on short and long timescales is crucial for understanding the growth of the orogen and its implications for landscape development in various sectors along the arc. This thesis focuses on both the surface and the tectonic processes that have shaped the landscape of the western Indian Himalaya since the late Miocene.
In my first study, I dated well-preserved glacially polished bedrock on high-elevation ridges and valley walls in the upper Chandra Valley by means of 10Be terrestrial cosmogenic radionuclide (TCN) dating. I used these ages and mapped glacial features to reconstruct the extent and timing of the Pleistocene glaciation at the southern front of the Himalaya, and was able to reconstruct an extensive valley glacier of ~200 km length and >1000 m thickness. Deglaciation of the Chandra Valley glacier started subsequent to the insolation increase in the Northern Hemisphere and thus responded to rising temperatures. I showed that the onset of this deglaciation was coeval with the retreat of other midlatitude glaciers in the Northern and Southern Hemispheres. These comparisons also showed that the post-LGM deglaciation was very rapid, occurred within a few thousand years, and was nearly complete before the Bølling/Allerød interstadial.
A second study (co-authorship) investigates how glacial advances and retreats in high mountain environments impact the landscape. By 10Be TCN dating and geomorphic mapping, we constrained the maximum length and height of the Siachen Glacier within the Nubra Valley. Today, the Shyok-Nubra confluence is backfilled with sedimentary deposits, which are attributed to the blocking of the valley by the Siachen Glacier 900 m above the present-day river level. A glacial dam of the Siachen Glacier blocked the Shyok River and led to the formation of a more than 20 km long lake. Fluvial and lacustrine deposits in the valley document alternating draining and filling cycles of the lake dammed by the Siachen Glacier. In this study, we show that glacial incision outpaced fluvial incision.
In the third study, which spans the million-year timescale, I focus on exhumation and erosion within the Chandra and Beas valleys. Using apatite fission track (AFT) thermochronometry, I constrained the position of, and discussed possible reasons for, rapidly exhuming rocks located several hundred kilometers away from one of the main Himalayan faults (MFT). The newly gained AFT ages indicate rapid exhumation and confirm earlier studies in the Chandra Valley. I assume that the rapid exhumation is most likely related to uplift over subsurface structures. I tested this hypothesis by combining further low-temperature thermochronometers from areas east and west of my study area. By comparing two transects, each parallel to the Beas/Chandra Valley transect, I demonstrate similarities in the exhumation pattern with transects across the Sutlej region, and strong dissimilarities with the transect crossing the Dhauladar Range. I conclude that the belt of rapid exhumation terminates at the western end of the Kullu-Rampur window, corroborating earlier studies that suggested changes in exhumation behavior in the western Himalaya. Furthermore, I discuss several causes that may be responsible for the pronounced along-strike change in exhumation patterns: 1) the role of inherited pre-collisional features, such as the Proterozoic sedimentary cover of the Indian basement, former ridges, and geological structures; and 2) the variability of convergence rates along the Himalayan arc due to an increasingly oblique component towards the syntaxis.
The combination of field observations (geological and geomorphological mapping) with methods constraining short- and long-term processes (10Be, AFT) helps to understand the role of the individual contributors to exhumation and erosion in the western Indian Himalaya. With the results of this thesis, I emphasize the importance of glacial and tectonic processes in shaping the landscape by driving exhumation and erosion in the studied areas.
La cabrona aquí soy yo
(2018)
The last decade has seen growing global interest in the phenomenon of drug trafficking in Mexico. The various expressions of extreme violence that accompany the illegal drug business are narrated in media artifacts that provoke fascination and intrigue. Literature and film, music and television thus present images and stories about drug trafficking that feed the collective imaginary.
In this context, global media representations of the Mexican female drug trafficker reproduce feminine stereotypes in which women are objectified, exaggerating the sexual attributes of their bodies. This cultural representation turns the woman into an object of desire, whose beauty serves as a mark of prestige and ostentation for the male trafficker. Narco culture imposes on women a distinctive aesthetic ideal, which women meticulously reproduce in order to emulate this representation. In addition to physical beauty, the woman is portrayed as violent and unscrupulous, using her beauty and power of seduction to accumulate money and power at the expense of the men she conquers. For those outside the world of drug trafficking, this hypersexualized type of woman inspires negative judgments, discrimination, distrust, and fear.
The research question and objectives of this work were intended to go beyond these representations in order to observe the complexities of these women's life experiences. The purpose of this doctoral thesis was to explore how the lives of Mexican women change when they become involved in narco culture on the Mexico-United States border. Specifically, the research analyzed the transformations in these women's corporeality and subjectivities, and how these transformations influenced the place they occupy in the social and cultural space shaped by drug trafficking. It also analyzed what margins of negotiation women have within narco culture to act and define themselves.
The questions guiding the work asked how women changed their bodies to embody the aesthetic ideal and what meanings were attributed to these changes. It was important to analyze what power dynamics came into play through these female bodies, in relations with men and with other women. A further objective was to determine what processes of subjectivation operated in the women participating in narco culture, and what margins of negotiation they had to act and define themselves.
This research is situated within cultural studies and adopts an intersectional feminist perspective. It was carried out on Mexico's northwestern border with the United States, specifically in the cities of Mexicali, Tijuana, and San Diego, California. In this thesis, the border is viewed as a space with multiple contexts of interpretation, polysemic and heterogeneous. These qualities make the cultural phenomena that occur there diverse and contradictory.
To understand the cultural phenomena emerging from Mexico's northern border, José Valenzuela Arce's (2014) concept of the transborder (transfrontera) was useful. His proposal is that transborders are "spaces that refuse to be reduced to just one of the conditions or sides that make them up" (p. 9). The concept thus speaks of the processes of connectivity and simultaneity generated by globalization, which redefine territorial states. At the same time, it also speaks of the limits these same states use to sustain national narratives that are "organizing referents of identity and cultural ascriptions" (p. 18), creating differences and inequalities. If this is so, a border cannot be fully explained by territorial demarcation or by the hierarchical differentiation that includes some and excludes others, but neither can it be understood by focusing solely on the processes of cultural hybridization that occur in these spaces. For Valenzuela, borders are therefore between spaces and between times.
This concept helps to understand how the global and the local intersect in the semiotic systems that make up the cultural universe of Mexican drug trafficking, while also explaining how mechanisms of exclusion and hierarchies are structured on the basis of gender, social position, and other marks of social differentiation. Ultimately, it helps to locate these cultural processes as they materialize in women's bodies.
The concept of narco culture was also a useful heuristic tool. Culture is understood here as a process of production and reproduction of symbolic models, materialized in artifacts or representations and, moreover, internalized in logics of life, systems of values and beliefs, which circulate through the individual and collective practices of women and men in specific historical and spatial contexts. Narco culture would then be the semiotic system produced around the transnational business of illegal drug trafficking, as it is lived on Mexico's northern border. Narco culture, as defined in this work, is a semiotic system with diffuse limits: the distinctions between the illegal world of drug trafficking and the legal world outside this business are at best blurred, at worst fictitious. Narco culture transcends territorial limits; it is a transnational cultural phenomenon.
It was necessary to outline the characteristics of Latin American cultural studies and of the Kulturwissenschaften in Germany in order to trace the genealogies of these two perspectives, understand their differences and, above all, find their common ground. The central point of convergence is the transdisciplinary character of the two academic traditions. Cultural studies are thus understood as a space of articulation between disciplines (Castro Gómez, 2002), one that aims not at unification but at the pluralization of meanings, attitudes, and modes of perception (Bachmann-Medick, 2016). Transdisciplinarity makes it possible to trace the complexities of cultural phenomena, building bridges between different forms of knowledge and research practices.
Intersectional feminism is a central perspective in this research. One contribution of feminism to cultural studies that informs this work is the questioning of "Man" and "Woman" as given, immutable natural essences, starting from the premise that "the signs 'man' and 'woman' are discursive constructions that the language of culture projects and inscribes on the stage of bodies, disguising its assemblages of signs behind the false appearance that the masculine and the feminine are natural, ahistorical truths" (Richard, 2009, p. 77). Feminist cultural studies assume that these signs are constructed within a system of representations that articulate subjectivities in concrete cultural worlds. Their objective is then to reveal, in signifying practices, the ideological elements that configure these signs and the conflicts that arise through their use and interpretation.
Estos signos adquieren múltiples significados y lecturas de acuerdo con especificidades que se distinguen en la diferencia. La interseccionalidad, dentro del feminismo es un discurso teórico y metodológico que aboga por reconocer que el signo “mujer” no es una categoría absoluta, y por lo tanto no puede explicar por sí misma las variadas experiencias vitales de las mujeres. Las diferencias se vuelven legibles cuando se ponen en juego con otras categorías sociales como la posición social, la raza, la edad y la discapacidad. Las diferencias sociales están fincadas en diferentes discursos que naturalizan los diferentes atributos de estas categorías sociales cuando, para esta perspectiva, son socialmente construidos y cambiantes. El objetivo de una perspectiva interseccional es identificar cómo interactúan diferentes categorías sociales en instituciones, prácticas y subjetividades, para entender cómo se materializan las desigualdades a través del tiempo.
The theoretical concepts guiding this thesis are body and subjectivity. For this thesis, the body is understood as a site of articulation where cultural codes and the social order materialize. The body can be understood as a dynamic and mutable frontier where the physical, the symbolic, and the social converge. Subject and body are mutually constitutive; the body is the medium through which the subject lives experiences in the social world, and it is these experiences that lead the subject to embody social differences, materialized in gender, sex, social class, and race.
Despite this inseparable relationship, to facilitate the analysis, one part concentrates on the body and the other on subjectivity. Thus, to understand the corporeal dimension, representation was placed in tension with lived experience, through audiovisual analysis and ethnographic observation read together. In the case of subjectivity, life in fictional narrative was placed in tension with the life stories told in interviews, again in order to find the bridges between representations and lived experience.
This research was a qualitative and transdisciplinary study. Various methodological resources were used to build the analysis. Ethnographic observation was carried out in several bars and clubs on both sides of the border that are frequented by people who identify with the world of narcoculture or who work within drug-trafficking networks. During the visits to these sites, the women's physical appearance was observed: their way of dressing, their grooming, their body shapes. Their conduct was observed: their gestures and their interactions with other subjects in the space. The space itself was also observed, to see how rules, boundaries, and hierarchies were established in the physical layout of the places visited. Three narcocorrido videos were analyzed through video hermeneutics to determine how women are represented in these cultural artifacts, using the same physical and behavioral criteria mentioned above.
The analysis of the videos, read alongside the ethnographic work, helped to deepen the understanding of the meanings attributed to female corporality, and of the impact these meanings have on these women's experiences and relationships.
Five semi-structured interviews were conducted with women who identified with narcoculture. Some merely sympathize with the lifestyle; others were involved in some way in the illegal drug business. The interviews explored narratives about their lives that revealed discourses about what femininity is, what it means to be a woman, and how being a woman is lived in the world of drug trafficking. Additionally, I used the narratives of two literary texts from northern Mexico's fiction about drug trafficking. In both texts, the main characters are women. I analyzed how the female subject is constructed in the narration and which discourses about femininity and being a woman in the narco world show through the text.
Here, too, representation was placed in tension with lived experience, searching in the analysis of the literary narration and in the experiences narrated by the women for common discourses that would explain the processes of female subjectivation within Mexican narcoculture.
The first part of the analysis articulated ethnographic observation with the audiovisual material to understand the aesthetic requirements that narcoculture imposes on women and the ways in which they transform their bodies to meet this demand.
Narcoculture imposes on women an aesthetic ideal that becomes a means of access to a certain kind of power. This ideal demands a particular physiognomy and personal appearance, which women try to reproduce through interventions on the body, with makeup and hairstyling and/or cosmetic surgery. It also demands a certain fashion style, in clothing and accessories, from globally consumed luxury brands. The more faithfully this ideal is reproduced, the better women's chances of accessing economic and social benefits that give them room for maneuver within this social environment. Women's bodies become the primary resource for social mobility and agency within this world. The body is the principal sign for determining women's place within the systems of hierarchization, inclusion, and exclusion in the physical and social spaces that drug trafficking creates. These mechanisms of difference reproduce the inequalities of gender, age, social position, and race observed in other spheres of Mexican society.
Ethnographic observation and audiovisual analysis reveal that the possibilities for performing femininity are confined within very narrow limits. Alicia Gaspar de Alba calls this The Three Maria Syndrome, which she defines as "the patriarchal social discourse of Chicano/Mexicano culture that constructs women's gender and sexuality according to three Biblical archetypes -virgins, mothers and whores-" (Gaspar de Alba, 2014, pos. 3412). These female representations are allegories of the constraints that Mexican machista culture imposes on women, subjecting them to a restricted repertoire of life options and to the social control of their sexuality. Women within narcoculture have a place in it as a function of their physical beauty; the body is the main referent for defining themselves as subjects.
Women are objects of desire, whose beauty is one more jewel in a drug trafficker's crown, one more possession with which to flaunt his power. At the same time, representations of women as active subjects, taking part in the business and in the violence on a par with men, appear ever more frequently. Transgressions of the ideal of femininity demanded of the traditional woman in Mexican culture can be observed. The expected docility, softness, and submission, the modesty and composure, are absent. Women adopt qualities considered masculine, taking up violence and sexual aggressiveness to show that they too can navigate an aggressive, hypermasculine world. Even so, this brave warrior woman remains within the limited confines that patriarchal culture imposes through the heterosexual regime. They follow the prescription of the Three Maria Syndrome to the letter.
This becomes evident in a system of hierarchization through which women are evaluated within narcoculture. Women are judged by criteria that intersect racial, gender, and class components. Although the ways in which these marks of difference are embodied in a female body vary widely, the representations and the ethnographic observation show that the most privileged women are those who embody the signs of a high economic position: they are light-skinned, attractive, and groom themselves to present signs of femininity discreetly, and their conduct projects composure and respectability, based on restraint, particularly in the expression of sexuality. Women who embody these signs of femininity are respected and considered valuable. Their value is formalized through the respectability of the marriage contract: this type of gender performance is generally reproduced by the wives of drug traffickers. At the other end of the spectrum are the least valued women: dark-skinned women who adopt an aesthetic associated with the working class, generally ostentatious and heavily ornamented. These women's conduct is judged as vulgar and unrestrained. Women who embody this type of femininity are discriminated against and objectified; they are the most vulnerable to violence because of the little value they hold within the world of drug trafficking.
The buchona represents a devalued version of femininity, one that clashes with the decorum and discretion demanded by traditional gender norms. These are women considered vulgar because their bodies carry signs of an aggressive sexuality, because they adopt behaviors that break the social restrictions imposed on women, and because their cultural practices and consumption are associated with the working and rural classes. In the women I interviewed there is a conflict between the attractive freedom promised by the transgression of being a buchona and the desire for the respectability granted to a woman who complies with what society demands. One of the dilemmas at the heart of performing the buchona body is the battle between a femininity that is socially accepted but restrictive, and a femininity that grants power but punishes.
For this reason, the women I interviewed rejected being called buchonas and preferred to call themselves cabronas. In this particular context, the word cabrona is a resignification of a colloquial Spanish term used as an insult. Here, the cabrona becomes an articulating axis for the constitution of female subjectivities within narcoculture. The cabrona is a female trope that interweaves globally circulating narratives about being a woman with local narratives about femininity. Calling oneself a cabrona becomes a resource for facing a violent world and finding strategies for action in a space clearly dominated by men.
The cabrona represents independence and strength, autonomy and action. The cabrona confronts, in different shades, the traditional discourses of a self-sacrificing and docile femininity, apparently challenging male domination. For the same reason, she carries a strong stigma. Mass culture also produces representations of the cabrona. They are transmitted in gender discourses that circulate through images on social media, and in the books and workshops of the self-help market all over the world, promoting the idea of a woman unsubmissive toward the people around her, subscribed to the consumption and individualism of capitalist culture. In these contemporary cultural representations, the woman is strong and defiant, while preserving feminine bodily codes and practices.
In the concrete context of narcoculture, the global discourses about a strong, independent woman with economic power and in charge of her own sexuality meet the particular conditions of northern Mexico. Extreme violence, machismo, pronounced social inequalities, and the crisis of legitimacy of the state intervene so that these global discourses about women mutate into the representations of the buchona and the cabrona, local interpretations of a global gender discourse. For women, calling oneself a cabrona is a resource for facing a violent world and finding strategies for action in a space clearly dominated by men. It helps them confront the violence perpetrated against them and opens the possibility of becoming the perpetrator. The cabrona is the reaction provoked by the vulnerable and violated female body, but she is also the possibility of appropriating violence in order to exert it on other bodies. She implies independence, sexual freedom, and economic success, evidenced by consumption and lifestyle. When these women deny being buchonas, they are rejecting all the stigmas the word carries. They do not recognize themselves in the class discrimination, the racial connotations, and the sexist prejudices it contains. They prefer cabrona because it is a way of splitting off from the negative discourses cast upon them; it is a path of access to a global femininity that the mass media present as an ideal.
The analysis explored which elements make up this female trope, through the interviews with women and through female characters in novels about drug trafficking, in order to find bridges between fiction and lived experience. Beauty and the capacity to seduce have an ambivalent utility. On the one hand, all the time, money, and care invested in appropriating an aesthetic ideal serve to become a woman a narco can show off. For the women, knowing themselves desired and on display is a source of pride. Women are subject to the pressures generated by the belief that, in order to survive, one must be beautiful. In the literary texts and in the interviews, a naturalization of women's place as objects of ostentation for men shows through, as does the validation women feel when recognized as beautiful. Fiction and life both present us with the precarious condition of the female subject in narcoculture. It is a subjectivity anchored to discourses that demand an impossible ideal of beauty from women and that confine being a woman to the whims and needs of men.
Female beauty, however, has another facet. Female subjectivity in narcoculture is not only the result of women's submission to the discourses that regulate their appearance and conduct. Beauty is also an instrument at women's service for gaining access to money and power. Beauty and the power of feminine seduction become survival strategies, and this transforms the woman from a subjugated object into a subjugating subject. Beauty and seduction may give women certain margins of action, but this has very clear limits. Even if these female strategies tip the balance of power toward the female subject, the context must be kept in mind. These women are embedded in a violent and machista world, so exercising that power is a very delicate and risky balancing act. The women who inhabit narcoculture are immersed in a world of violence, and failing to know and respect its rules and limits means risking death. Violent death is a very real consequence of making mistakes in this world.
This leads to the third component of being a cabrona: risk. For the men and women who become involved in the cultural world of drug trafficking, pursuing risk is an integral part of living and an important part of the constitution of subjectivities in narcoculture. In the interview narratives and in the literary narratives, there are many moments in which women live through risky situations that endanger even their lives. Through these narratives one glimpses how they interpret their role in the situation and how they see themselves in light of those experiences. Risk gives meaning to the tough, daring character demanded by assuming the role of a cabrona, but it also exposes the vulnerability of women's condition in a violent world. Taking risks is another way of affirming themselves as strong women and of distancing themselves from the gender dispositions that require them to be docile and passive. They have to prove their worth in a world dominated by men, and controlling their emotions plays a fundamental role in achieving this. Yet acknowledging fear and vulnerability is, paradoxically, what helps them survive.
Behind the discourses of female strength and power, the fragility of lives submerged in a world where violence and machismo leave women on the edge between life and death is revealed. In the case at hand, the institutional vacuum in guaranteeing women's safety in Mexico leaves these women utterly exposed, and the adoption of the discourse of the cabrona as a strategy of persistence makes sense. By investing themselves as cabronas, they find a way to confront the violent world they choose to belong to, even though, in the end, they remain trapped in it.
Within this doctoral thesis, a novel micromanipulation technique for local liquid delivery was characterized on the complex glandular tissue of the cockroach P. americana and applied to the targeted manipulation of individual cells within a cell complex (tissue). This micromanipulation technique is fluidic force microscopy (FluidFM), known since 2009. It employs very small microchanneled atomic force microscopy tips, or micro-/nanopipettes, with an aperture between 300 nm and 2 µm, which make it possible to deliver very small volumes in the picoliter to femtoliter range (10⁻¹² L to 10⁻¹⁵ L) in a targeted and precisely localized way. The aim of this work was the analysis of cellular processes, such as cell-cell communication or signal transduction between neighboring cells, with the aid of fluorescence microscopy. With this method, the cells and their components can be visualized under a microscope with high contrast after prior loading with a dye. Finally, fluorescence microscopy was to be used to visualize the cellular reactions within the tissue after local manipulation.
First, the application of the system in air and in aqueous environments was described. In this context, a cleaning and loading method was developed that made it possible to clean the costly micro-/nanopipettes and subsequently reuse them several times. In addition, an alternative method was tested with which the diffusion behavior of dye molecules in different media can be investigated. Furthermore, the system parameters required to obtain a good seal between the pipette aperture and the sample surface were optimized. This seal is essential so that the delivered liquid interacts with the sample exclusively in the delivery region and the subsequent reactions occur only within the tissue; otherwise, the cell-to-cell signal transduction cannot be traced unambiguously. This intercellular communication was investigated using two second messengers (Ca²⁺ and NO). It was possible to detect individual local reactions that spread across further cells. Finally, the fabrication of a special injection pipette was described, which was tested on two biological systems.
Health effects attributed to environmental pollution resulting from the use of solvents such as benzene are relatively unexplored among petroleum workers, private users, and laboratory researchers. Solvents can cause various health problems, such as neurotoxicity, immunotoxicity, and carcinogenicity. They can be absorbed into the human body through the skin or the respiratory tract, where they interact with molecules responsible for biochemical and physiological processes of the brain.
Owing to the ever-growing demand for a solution, ionic liquids can be used as alternative solvents. Ionic liquids are salts that are liquid at low temperatures (below 100 °C), or even at room temperature. Ionic liquids provide a unique architectural platform and have attracted interest because of their unusual properties, which can be tuned in simple ways, such as mixing two ionic liquids.
Ionic liquids are not only used as reaction solvents; they have become key to developing novel applications, owing to their thermal stability and electric conductivity combined with very low vapor pressure, in contrast to conventional solvents.
In this study, ionic liquids were used simultaneously as solvent and reactant for the synthesis of novel nanomaterials for different applications, including solar cells, gas sensors, and water splitting.
The field of ionic liquids continues to grow and has become one of the most important branches of science. It appears to be at a point where research and industry can work together in a new way of thinking about green chemistry and sustainable production.
Ferroic materials have attracted a lot of attention over the years due to their wide range of applications in sensors, actuators, and memory devices. Their technological applications originate from their unique properties such as ferroelectricity and piezoelectricity. In order to optimize these materials, it is necessary to understand the coupling between their nanoscale structure and transient response, which are related to the atomic structure of the unit cell.
In this thesis, synchrotron X-ray diffraction is used to investigate the structure of ferroelectric thin film capacitors during application of a periodic electric field. Combining electrical measurements with time-resolved X-ray diffraction on a working device allows for visualization of the interplay between charge flow and structural motion. This constitutes the core of this work. The first part of this thesis discusses the electrical and structural dynamics of a ferroelectric Pt/Pb(Zr0.2,Ti0.8)O3/SrRuO3 heterostructure during charging, discharging, and polarization reversal. After polarization reversal a non-linear piezoelectric response develops on a much longer time scale than the RC time constant of the device. The reversal process is inhomogeneous and induces a transient disordered domain state. The structural dynamics under sub-coercive field conditions show that this disordered domain state can be remanent and can be erased with an appropriate voltage pulse sequence. The frequency-dependent dynamic characterization of a Pb(Zr0.52,Ti0.48)O3 layer, at the morphotropic phase boundary, shows that at high frequency, the limited domain wall velocity causes a phase lag between the applied field and both the structural and electrical responses. An external modification of the RC time constant of the measurement delays the switching current and widens the electromechanical hysteresis loop while achieving a higher compressive piezoelectric strain within the crystal.
In the second part of this thesis, time-resolved reciprocal space maps of multiferroic BiFeO3 thin films were measured to identify the domain structure and investigate the development of an inhomogeneous piezoelectric response during the polarization reversal. The presence of 109° domains is evidenced by the splitting of the Bragg peak.
The last part of this work investigates the effect of an optically excited ultrafast strain or heat pulse propagating through a ferroelectric BaTiO3 layer, where we observed an additional current response due to the laser pulse excitation of the metallic bottom electrode of the heterostructure.
Earth's climate varies continuously across space and time, but humankind has witnessed only a small snapshot of its entire history, and instrumentally documented it for a mere 200 years. Our knowledge of past climate changes is therefore almost exclusively based on indirect proxy data, i.e. on indicators which are sensitive to changes in climatic variables and stored in environmental archives. Extracting the data from these archives allows retrieval of the information from earlier times. Obtaining accurate proxy information is a key means to test model predictions of the past climate, and only after such validation can the models be used to reliably forecast future changes in our warming world. The polar ice sheets of Greenland and Antarctica are one major climate archive, which record information about local air temperatures by means of the isotopic composition of the water molecules embedded in the ice. However, this temperature proxy is, as any indirect climate data, not a perfect recorder of past climatic variations. Apart from local air temperatures, a multitude of other processes affect the mean and variability of the isotopic data, which hinders their direct interpretation in terms of climate variations. This applies especially to regions with little annual accumulation of snow, such as the Antarctic Plateau. While these areas in principle allow for the extraction of isotope records reaching far back in time, a strong corruption of the temperature signal originally encoded in the isotopic data of the snow is expected. This dissertation uses observational isotope data from Antarctica, focussing especially on the East Antarctic low-accumulation area around the Kohnen Station ice-core drilling site, together with statistical and physical methods, to improve our understanding of the spatial and temporal isotope variability across different scales, and thus to enhance the applicability of the proxy for estimating past temperature variability. 
The presented results lead to a quantitative explanation of the local-scale (1–500 m) spatial variability in the form of a statistical noise model, and reveal the main source of the temporal variability to be the mixture of a climatic seasonal cycle in temperature and the effect of diffusional smoothing acting on temporally uncorrelated noise. These findings put significant limits on the representativity of single isotope records in terms of local air temperature, and impact the interpretation of apparent cyclicalities in the records. Furthermore, to extend the analyses to larger scales, the timescale-dependency of observed Holocene isotope variability is studied. This offers a deeper understanding of the nature of the variations, and is crucial for unravelling the embedded true temperature variability over a wide range of timescales.
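The noise mechanism described above, a climatic seasonal cycle plus temporally uncorrelated noise that is subsequently damped by diffusion, can be illustrated with a toy simulation. This is a hypothetical sketch, not the thesis code: the diffusional smoothing is approximated by a simple Gaussian convolution, and all parameter values are invented for illustration.

```python
import numpy as np

def synthetic_isotope_record(n_years=100, samples_per_year=12,
                             seasonal_amp=1.0, noise_amp=2.0,
                             diffusion_sigma=8, seed=0):
    """Toy isotope record: seasonal cycle + white noise, then
    diffusional smoothing approximated as a Gaussian convolution."""
    rng = np.random.default_rng(seed)
    n = n_years * samples_per_year
    t = np.arange(n)
    seasonal = seasonal_amp * np.sin(2 * np.pi * t / samples_per_year)
    noise = noise_amp * rng.standard_normal(n)
    raw = seasonal + noise
    # Normalized Gaussian kernel standing in for firn diffusion.
    half = 4 * diffusion_sigma
    k = np.exp(-0.5 * (np.arange(-half, half + 1) / diffusion_sigma) ** 2)
    k /= k.sum()
    smoothed = np.convolve(raw, k, mode="same")
    return raw, smoothed

raw, smoothed = synthetic_isotope_record()
# Diffusion damps the high-frequency noise, so variance drops.
print(smoothed.var() < raw.var())  # True
```

The point of the sketch is that the smoothed record retains only a fraction of the original variance, and the surviving low-frequency remnants of the noise can mimic cyclical behavior, which is why single records have limited representativity for local temperature.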
How can interactive devices connect with users in the most immediate and intimate way? This question has driven interactive computing for decades. Over the last decades, we witnessed how mobile devices moved computing into users' pockets, and recently, wearables put computing in constant physical contact with the user's skin. In both cases, moving the devices closer to users allowed them to sense more of the user and thus become more personal. The main question that drives our research is: what is the next logical step?
Some researchers argue that the next generation of interactive devices will move past the user's skin and be directly implanted inside the user's body. This has already happened: pacemakers, insulin pumps, and similar devices exist today. However, we argue that what we see is not devices moving towards the inside of the user's body, but rather towards the body's biological "interface" that they need to address in order to perform their function.
To implement our vision, we created a set of devices that intentionally borrow parts of the user’s body for input and output, rather than adding more technology to the body.
In this dissertation we present one specific flavor of such devices, i.e., devices that borrow the user’s muscles. We engineered I/O devices that interact with the user by reading and controlling muscle activity. To achieve the latter, our devices are based on medical-grade signal generators and electrodes attached to the user’s skin that send electrical impulses to the user’s muscles; these impulses then cause the user’s muscles to contract.
While electrical muscle stimulation (EMS) devices have been used to regenerate lost motor functions in rehabilitation medicine since the 1960s, in this dissertation, we propose a new perspective: EMS as a means for creating interactive systems.
We start by presenting seven prototypes of interactive devices that we have created to illustrate several benefits of EMS. These devices form two main categories: (1) Devices that allow users eyes-free access to information by means of their proprioceptive sense, such as the value of a variable in a computer system, a tool, or a plot; (2) Devices that increase immersion in virtual reality by simulating large forces, such as wind, physical impact, or walls and heavy objects.
Then, we analyze the potential of EMS to build interactive systems that miniaturize well and discuss how they leverage our proprioceptive sense as an I/O modality. We proceed by laying out the benefits and disadvantages of both EMS and mechanical haptic devices, such as exoskeletons.
We conclude by sketching an outline for future research on EMS by listing open technical, ethical and philosophical questions that we left unanswered.
In a changing world facing several direct or indirect anthropogenic challenges, freshwater resources are endangered in both quantity and quality. An excessive supply of nutrients, for example, can cause disproportionate phytoplankton growth and oxygen deficits in large rivers, leading to failure to meet the objectives of the Water Framework Directive (WFD). Such problems can be observed in many European river catchments, including the Elbe basin, and effective measures for improving the water quality status are urgently needed.
In water resources management and protection, modelling tools can help to understand the dominant nutrient processes and to identify the main sources of nutrient pollution in a watershed. They can be effective instruments for impact assessments investigating the effects of changing climate or socio-economic conditions on the status of surface water bodies, and for testing the usefulness of possible protection measures. Due to the high number of interrelated processes, ecohydrological model approaches containing water quality components are more complex than purely hydrological ones, and their setup and calibration require more effort. Such models, including the Soil and Water Integrated Model (SWIM), still need further development and improvement.
Therefore, this cumulative dissertation focuses on two main objectives: 1) approach-related objectives, aiming at the improvement and further development of the SWIM model regarding the description of nutrient (nitrogen and phosphorus) processes, and 2) application-related objectives in the meso- to large-scale Elbe river basins, to support adaptive river basin management in view of possible future changes. The dissertation is based on five scientific papers published in international journals that address these research questions.
Several adaptations were implemented in the model code to improve the representation of nutrient processes, including a simple wetland approach, a soil nitrogen cycle extended by ammonium, and a detailed in-stream module simulating algal growth, nutrient transformation processes, and oxygen conditions in the river reaches, driven mainly by water temperature and light. Although these new approaches created a highly complex ecohydrological model with a large number of additional calibration parameters and rising uncertainty, the calibration and validation of the enhanced SWIM model in selected subcatchments and in the entire Elbe river basin delivered satisfactory to good results in terms of criteria of fit. Thus, the calibrated and validated model provided a sound basis for assessing possible future changes and impacts of climate, land use, and management in the Elbe river (sub)basin(s).
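As a generic illustration of the kind of temperature- and light-driven formulation an in-stream algal module builds on (this is not SWIM's actual code or parameterization; the function name, limitation terms, and all constants are invented for the sketch), algal biomass can be stepped forward as growth limited by temperature and light, minus a loss term:

```python
import math

def algae_step(A, T, I, dt=1.0,
               mu_max=0.5, T_opt=20.0, K_I=50.0, k_loss=0.1):
    """One Euler step of a generic algal biomass model.
    A: biomass, T: water temperature (degC), I: irradiance.
    Growth is limited by temperature (Gaussian around T_opt)
    and light (Michaelis-Menten in I). Illustrative only."""
    f_T = math.exp(-((T - T_opt) / 10.0) ** 2)  # temperature limitation, 0..1
    f_L = I / (K_I + I)                          # light limitation, 0..1
    growth = mu_max * f_T * f_L * A
    loss = k_loss * A                            # respiration + settling lumped
    return A + dt * (growth - loss)

# Ten daily steps under warm, well-lit conditions: the bloom grows.
A = 0.5
for day in range(10):
    A = algae_step(A, T=18.0, I=120.0)
print(A > 0.5)  # True
```

An actual water quality module couples such growth terms to nutrient limitation and oxygen balances, which is what makes the calibration described above so parameter-heavy.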
The new enhanced modelling approach improved the applicability of the SWIM model for WFD-related research questions, where the ability to consider biological water quality components (such as phytoplankton) is important. It additionally enhanced the model's ability to simulate the behaviour of nutrients coming mainly from point sources (e.g. phosphate phosphorus). Scenario results can be used by decision makers and stakeholders to identify and understand future challenges and possible adaptation measures in the Elbe river basin.
Changes in the thermal regime of permafrost cause disturbances of the land surface. These changes are amplified by the temperatures that have been rising in the Arctic for decades. Thermokarst is a process in which the land surface subsides due to the melting of ground ice or the thawing of permafrost, creating characteristic landforms. Thermokarst is particularly widespread along slopes, and the number of associated landforms in the Arctic is steadily increasing. This process mobilizes large amounts of material, which is transported towards the sea or accumulates along slopes. While hillslope thermokarst strongly alters terrestrial as well as aquatic ecosystems, its influence at the regional scale is still a subject of ongoing research.
In this work we quantify the effects of hillslope thermokarst processes on the surrounding ecosystems of the coastal valleys and nearshore zones along the Yukon coast in Canada. Using supervised machine learning, we identified geomorphic factors that favour the development of retrogressive thaw slumps (RTS), one manifestation of hillslope thermokarst. Coastal geomorphology as well as ground-ice type and content are the main determining factors for the occurrence of RTS. We used aerial photographs and satellite imagery to track the evolution of RTS over the period 1952 to 2011, during which the number and extent of RTS increased linearly. We show that 56% of the RTS identified along the coast in 2011 eroded 16.6 × 106 m3 of material, 45% of which was transported along the coast by coastal processes. RTS contribute substantially to the carbon budget of the nearshore zone: 17% of the RTS identified in 2011 transported 0.6% of the organic carbon that is released annually by coastal erosion along the Yukon coast. To assess the influence of hillslope thermokarst on the terrestrial ecosystem, we analysed the spatial distribution of soil organic carbon and total nitrogen (SOC, TN) along slope profiles in three Arctic valleys. We point to a high spatial variability in the distribution of SOC and TN, which can be attributed to the complex soil processes occurring along slopes. Hillslope thermokarst has a major influence on the degradation of organic material and the storage of SOC and TN.
The consequences of a foodborne illness can be severe, especially for children and immunosuppressed people. Salmonella and Campylobacter are among the most common pathogens responsible for gastrointestinal diseases in Germany. Despite comprehensive EU measures for the prevention and control of Salmonella in poultry flocks and the food industry, a stagnating trend in infection numbers has been reported. Zoonotic pathogens such as Salmonella can enter the human food chain via livestock, allowing sources of infection to spread quickly. Existing prevention strategies for poultry are available but cannot be transferred to humans. Consequently, diagnostics and prevention in the food industry are essential, and there is a great need for specific, sensitive and reliable detection methods that enable point-of-care diagnostics. A growing understanding of the host-specific factors of S. enterica serovars can substantially advance the development of novel diagnostic methods as well as novel therapies and vaccines.
Accordingly, an infection-like in vitro model for S. Enteritidis was established in this work, and on this basis a comprehensive study was conducted to identify new target structures of the pathogen. During a Salmonella infection, the first cellular barrier in the host is the epithelial layer; a human cell line (CaCo-2, intestinal epithelium) was therefore chosen for the pathogen-host study. The Salmonella transcriptome and morphological properties of the epithelial cells were examined in different phases of the Salmonella infection and related to already well-described virulence factors and observations. With this infection model, a specific phenotype of the intracellular Salmonella in the epithelial cells could be demonstrated. It was also shown that cultivation in liquid medium alone already produces an invasion-active state of the Salmonella. However, co-cultivation with epithelial cells induced additional expression of relevant genes to ensure efficient adhesion and transmembrane transport. The latter is characteristic of the intracellular limitation of nutrients and shapes the infection-relevant state. Taking these factors into account, a phenotype emerged that clearly reveals mechanisms of host adaptation and possibly also of pathogenesis. The intracellular bacteria must be separated from the host, an essential step for pathogen-specific analyses. Here, the eukaryotic background was kept minimal by detergent-based lysis of the eukaryotic cell membrane and differential centrifugation. Using the virulence-adapted Salmonella, studies were carried out to identify new target structures of S. Enteritidis, and new potential antigens were discovered by means of an immunological screening.
For this purpose, bacterial cDNA-based expression libraries were produced, which enable high-throughput screening of proteins as potential binders via a simplified microarray application. Consequently, new, previously undescribed proteins could be identified that are characterized by Salmonella specificity or membrane localization. A comparison was also made between the proteins identified in the screening and the regulation of the encoding genes in the infection-like model. It became clear that transcript abundance influences availability in the cDNA library and hence also in the expression library. Given the imbalance between the total number of protein-coding genes in S. Enteritidis and the number of clones that can be examined during microarray screening, there is a need to enrich proteins in the expression library. The infection-like model showed that not only virulence-associated genes but also stress- and metabolism-related genes are upregulated. The construction of these specific cDNA libraries enables the detection of characteristic molecular markers.
Furthermore, the transcriptome analysis identified specifically upregulated genes that are relevant for the intracellular survival of S. Enteritidis in human epithelial cells. Three of these genes were examined more closely by analysing their influence in the infection-like model using corresponding gene knockout strains. For one of these mutants, reduced growth in the late intracellular phase was demonstrated. Further in vitro analyses are necessary to characterize the knockout strain and to verify its use as a potential therapeutic.
In summary, an in vitro infection model for S. Enteritidis was established, through which new target structures of the pathogen were identified. These are of interest for diagnostic or therapeutic applications. The model can also be transferred to other intracellular pathogens and ensures reliable identification of potential antigens.
Understanding how humans move their eyes is an important part of understanding the functioning of the visual system. Analyzing eye movements from observations of natural scenes on a computer screen is a step towards understanding human visual behavior in the real world. When analyzing eye-movement data from scene-viewing experiments, the important questions are where (fixation locations), how long (fixation durations) and when (ordering of fixations) participants fixate on an image. By answering these questions, computational models can be developed which predict human scanpaths. Models serve as a tool to understand the underlying cognitive processes while observing an image, especially the allocation of visual attention.
The goal of this thesis is to provide new contributions to characterize and model human scanpaths on natural scenes. The results from this thesis will help to understand and describe certain systematic eye-movement tendencies, which are mostly independent of the image. One eye-movement tendency I focus on throughout this thesis is the tendency to fixate more in the center of an image than on the outer parts, called the central fixation bias. Another tendency, which I will investigate thoroughly, is the characteristic distribution of angles between successive eye movements.
The results serve to evaluate and improve a previously published model of scanpath generation from our laboratory, the SceneWalk model. Overall, six experiments were conducted for this thesis which led to the following five core results:
i) A spatial inhibition of return can be found in scene-viewing data. This means that locations which have already been fixated are afterwards avoided for a certain time interval (Chapter 2).
ii) The initial fixation position when observing an image has a long-lasting influence of up to five seconds on further scanpath progression (Chapter 2 & 3).
iii) The often described central fixation bias on images depends strongly on the duration of the initial fixation. Long-lasting initial fixations lead to a weaker central fixation bias than short fixations (Chapter 2 & 3).
iv) Human observers adjust their basic eye-movement parameters, like fixation durations and saccade amplitudes, to the visual properties of a target they look for in visual search (Chapter 4).
v) The angle between two adjacent saccades is an indicator for the selectivity of the upcoming saccade target (Chapter 4).
All results emphasize the importance of systematic behavioral eye-movement tendencies and dynamic aspects of human scanpaths in scene viewing.
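As an illustration of how one of these eye-movement tendencies can be quantified, the sketch below computes a simple central-fixation-bias score as the mean distance of fixation locations from the image center, normalized by the half-diagonal. This is a hypothetical measure for illustration only, not the analysis used in the thesis.

```python
import numpy as np

def central_fixation_bias(fixations, image_size):
    """Mean distance of fixation locations from the image center,
    normalized by the half-diagonal (0 = all at center, 1 = at corners)."""
    fixations = np.asarray(fixations, dtype=float)   # shape (n, 2): (x, y) in pixels
    center = np.array(image_size, dtype=float) / 2.0
    half_diag = np.linalg.norm(center)
    distances = np.linalg.norm(fixations - center, axis=1)
    return distances.mean() / half_diag

# Fixations clustered near the center of an 800x600 image score lower
# than fixations near the image borders.
central = central_fixation_bias([(400, 300), (420, 280)], (800, 600))
peripheral = central_fixation_bias([(10, 10), (790, 590)], (800, 600))
```

A stronger central fixation bias corresponds to a lower score; comparing scores across viewing conditions (e.g. short vs. long initial fixations) would be one way to operationalize the effect described above.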
Human actuation
(2018)
Ever since the conception of the virtual reality headset in 1968, many researchers have argued that the next step in virtual reality is to allow users to not only see and hear, but also feel virtual worlds. One approach is to use mechanical equipment to provide haptic feedback, e.g., robotic arms, exoskeletons and motion platforms. However, the size and weight of such mechanical equipment tend to be proportional to its target's size and weight, i.e., providing human-scale haptic feedback requires human-scale equipment, often restricting it to arcades and lab environments.
The key idea behind this dissertation is to bypass mechanical equipment by instead leveraging human muscle power. We thus create software systems that orchestrate humans in doing such mechanical labor—this is what we call human actuation. A potential benefit of such systems is that humans are more generic, flexible, and versatile than machines. This brings a wide range of haptic feedback to modern virtual reality systems.
We start with a proof-of-concept system—Haptic Turk, focusing on delivering motion experiences just like a motion platform. All Haptic Turk setups consist of a user who is supported by one or more human actuators. The user enjoys an interactive motion simulation such as a hang glider experience, but the motion is generated by those human actuators who manually lift, tilt, and push the user’s limbs or torso. To get the timing and force right, timed motion instructions in a format familiar from rhythm games are generated by the system.
Next, we extend the concept of human actuation from 3-DoF to 6-DoF virtual reality, where users have the freedom to walk around. TurkDeck tackles this problem by orchestrating a group of human actuators to reconfigure a set of passive props on the fly while the user is progressing in the virtual environment. TurkDeck schedules human actuators by their distances from the user, and instructs them to reconfigure the props to the right place at the right time using laser projection and voice output.
Our studies of Haptic Turk and TurkDeck showed that human actuators enjoyed the experience, but not as much as users did. To eliminate the need for dedicated human actuators, Mutual Turk makes everyone a user by exchanging mechanical actuation between two or more users. Mutual Turk's main functionality is that it orchestrates the users so as to actuate props at just the right moment and with just the right force to produce the correct feedback in each other's experience.
Finally, we further eliminate the need for another user, making human actuation applicable to single-user experiences. iTurk makes the user constantly reconfigure and animate otherwise passive props. This allows iTurk to provide virtual worlds with constantly varying or even animated haptic effects, even though the only animate entity present in the system is the user. Our demo experience features one example each of iTurk's two main types of props, i.e., reconfigurable props (the foldable board from TurkDeck) and animated props (the pendulum).
We conclude this dissertation by summarizing the findings of our explorations and pointing out future directions. We discuss the development of human actuation compared to traditional machine actuation, the possibility of combining human and machine actuators, and interaction models that involve more human actuators.
The concept of hydrologic connectivity summarizes all flow processes that link separate regions of a landscape. As such, it is a central theme in the field of catchment hydrology, with influence on neighboring disciplines such as ecology and geomorphology. It is widely acknowledged to be an important key in understanding the response behavior of a catchment and has at the same time inspired research on internal processes over a broad range of scales. From this process-hydrological point of view, hydrological connectivity is the conceptual framework to link local observations across space and scales.
This is the context in which the four studies that comprise this thesis were conducted. The focus was on structures and their spatial organization as an important control on preferential subsurface flow. Each experiment covered a part of the conceptualized flow path from hillslopes to the stream: soil profile, hillslope, riparian zone, and stream.
For each study site, the most characteristic structures of the investigated domain and scale, such as slope deposits and peat layers, were identified based on preliminary or previous investigations or literature reviews. Additionally, further structural data were collected and topographical analyses were carried out. Flow processes were observed either through response observations (soil moisture changes or discharge patterns) or through direct measurement (advective heat transport). Based on these data, the flow relevance of the characteristic structures was evaluated, especially with regard to hillslope-to-stream connectivity.
Results of the four studies revealed a clear relationship between characteristic spatial structures and the hydrological behavior of the catchment. In particular, the spatial distribution of structures throughout the study domain and their interconnectedness were crucial for the establishment of preferential flow paths and their relevance for large-scale processes. Plot- and hillslope-scale irrigation experiments showed that the macropores of a heterogeneous, skeletal soil enabled preferential flow paths at the scale of centimeters through the otherwise unsaturated soil. These flow paths connected throughout the soil column and across the hillslope and facilitated substantial amounts of vertical and lateral flow through periglacial slope deposits.
In the riparian zone of the same headwater catchment, the connectivity between hillslopes and stream was controlled by topography and by the dualism between characteristic subsurface structures and the geomorphological heterogeneity of the stream channel. At the small scale (1 m to 10 m), the highest gains always occurred at steps along the longitudinal streambed profile, which also controlled discharge patterns at the large scale (100 m) during base flow conditions (number of steps per section). During medium and high flow conditions, however, the impact of topography and parafluvial flow through riparian zone structures prevailed and dominated the large-scale response patterns.
In the streambed of a lowland river, low permeability peat layers affected the connectivity between surface water and groundwater, but also between surface water and the hyporheic zone. The crucial factor was not the permeability of the streambed itself, but rather the spatial arrangement of flow-impeding peat layers, causing increased vertical flow through narrow “windows” in contrast to predominantly lateral flow in extended areas of high hydraulic conductivity sediments.
These results show that the spatial organization of structures was an important control for hydrological processes at all scales and study areas. In a final step, the observations from different scales and catchment elements were put in relation and compared. The main focus was on the theoretical analysis of the scale hierarchies of structures and processes and the direction of causal dependencies in this context. Based on the resulting hierarchical structure, a conceptual framework was developed which is capable of representing the system’s complexity while allowing for adequate simplifications.
The resulting concept of the parabolic scale series is based on the insight that flow processes in the terrestrial part of the catchment (soil and hillslopes) converge. This means that small-scale processes assemble and form large-scale processes and responses. Processes in the riparian zone and the streambed, however, are not well represented by the idea of convergence. Here, the large-scale catchment signal arrives and is modified by structures in the riparian zone, stream morphology, and the small-scale interactions between surface water and groundwater. Flow paths diverge and processes can better be represented by proceeding from large scales to smaller ones. The catchment-scale representation of processes and structures is thus the conceptual link between terrestrial hillslope processes and processes in the riparian corridor.
The rapid development and integration of information technologies over the last decades has influenced all areas of our life, including the business world. Yet not only have modern enterprises become digitalised; security and criminal threats have also moved into the digital sphere. To withstand these threats, modern companies must be aware of all activities within their computer networks.
The keystone for such continuous security monitoring is a Security Information and Event Management (SIEM) system that collects and processes all security-related log messages from the entire enterprise network. However, digital transformations and technologies, such as network virtualisation and the widespread usage of mobile communications, lead to a constantly increasing number of monitored devices and systems. As a result, the amount of data that has to be processed by a SIEM system is growing rapidly. Besides that, in-depth security analysis of the captured data requires the application of rather sophisticated outlier detection algorithms that have a high computational complexity. Existing outlier detection methods often suffer from performance issues and are not directly applicable to high-speed and high-volume analysis of heterogeneous security-related events, which has become a major challenge for modern SIEM systems.
This thesis provides a number of solutions to these challenges. First, it proposes a new SIEM system architecture for high-speed processing of security events, implementing parallel, in-memory and in-database processing principles. The proposed architecture also utilises the most efficient log format for high-speed data normalisation. Next, the thesis offers several novel high-speed outlier detection methods, including a generic Hybrid Outlier Detection that can efficiently be used for Big Data analysis. Finally, a special User Behaviour Outlier Detection is proposed for better threat detection and analysis of particular user behaviour cases.
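As a minimal illustration of the kind of high-speed outlier detection such a system performs on security events, the sketch below flags event counts that deviate strongly from a sliding-window mean. The class name, window size, and threshold are hypothetical assumptions for illustration; the thesis's Hybrid and User Behaviour Outlier Detection methods are considerably more elaborate.

```python
from collections import deque
import math

class StreamingOutlierDetector:
    """Hypothetical sketch: flag values whose z-score relative to a
    sliding window of recent observations exceeds a threshold."""

    def __init__(self, window=100, threshold=3.0):
        self.window = deque(maxlen=window)  # recent per-interval event counts
        self.threshold = threshold

    def observe(self, value):
        """Return True if `value` is an outlier w.r.t. the current window."""
        is_outlier = False
        if len(self.window) >= 10:  # require some history before flagging
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = math.sqrt(var)
            if std > 0 and abs(value - mean) / std > self.threshold:
                is_outlier = True
        self.window.append(value)
        return is_outlier

# Steady login counts per minute, then a sudden burst: only the burst is flagged.
det = StreamingOutlierDetector(window=50, threshold=3.0)
counts = [20, 22, 19, 21, 20, 23, 18, 21, 22, 20, 19, 21, 500]
flags = [det.observe(c) for c in counts]
```

A windowed detector like this is O(window) per event; the performance concerns raised above arise precisely because realistic methods are far more complex than this sketch while event rates keep growing.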
The proposed architecture and methods were evaluated in terms of both performance and accuracy, and compared with a classical architecture and existing algorithms. These evaluations were performed on multiple data sets, including simulated data, a well-known public intrusion detection data set, and real data from a large multinational enterprise. The evaluation results demonstrate the high performance and efficacy of the developed methods.
All concepts proposed in this thesis were integrated into the prototype of the SIEM system, capable of high-speed analysis of Big Security Data, which makes this integrated SIEM platform highly relevant for modern enterprise security applications.
The prediction of the ground shaking that can occur at a site of interest due to an earthquake is crucial in any seismic hazard analysis. Usually, empirically derived ground-motion prediction equations (GMPEs) are employed within a logic-tree framework to account for this step. This is, however, challenging if the area under consideration has only low seismicity and lacks enough recordings to develop a region-specific GMPE. It is then usual practice to adapt GMPEs from data-rich regions (host area) to the area with insufficient ground-motion recordings (target area). Host GMPEs must be adjusted in such a way that they will capture the specific ground-motion characteristics of the target area. In order to do so, seismological parameters of the target region have to be provided as, for example, the site-specific attenuation factor kappa0. This is again an intricate task if data amount is too sparse to derive these parameters.
In this thesis, I explore methods that can facilitate the selection of non-endemic GMPEs in a logic-tree analysis or their adjustment to a data-poor region. I follow two different strategies towards this goal.
The first approach addresses the setup of a ground-motion logic tree if no indigenous GMPE is available. In particular, I propose a method to derive an optimized backbone model that captures the median ground-motion characteristics in the region of interest. This is done by aggregating several foreign GMPEs as weighted components of a mixture model in which the weights are inferred from observed data. The approach is applied to Northern Chile, a region for which no indigenous GMPE existed at the time of the study. Mixture models are derived for interface and intraslab type events using eight subduction zone GMPEs originating from different parts of the world. The derived mixtures provide satisfactory results in terms of average residuals and average sample log-likelihoods. They outperform all individual non-endemic GMPEs and are comparable to a regression model that was specifically derived for that area.
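The aggregation step can be sketched as follows: the component densities (standing in for the candidate GMPEs evaluated on observed ground motions) are fixed, and only the mixture weights are fitted by expectation-maximization so as to maximize the sample log-likelihood. The function name and the toy Gaussian components are assumptions for illustration, not the thesis's actual implementation.

```python
import numpy as np

def fit_mixture_weights(densities, n_iter=200):
    """EM for the weights of a mixture with fixed components.
    densities[i, k]: likelihood of observation i under candidate model k.
    Only the weights are learned; the components themselves stay fixed."""
    n, k = densities.shape
    w = np.full(k, 1.0 / k)                       # start from uniform weights
    for _ in range(n_iter):
        resp = densities * w                      # E-step: responsibilities
        resp /= resp.sum(axis=1, keepdims=True)
        w = resp.mean(axis=0)                     # M-step: re-estimate weights
    return w

# Toy example: two candidate "models" as Gaussians; the data are drawn from
# the first one, so EM should concentrate the weight on it.
rng = np.random.default_rng(0)
obs = rng.normal(0.0, 1.0, size=500)

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

dens = np.column_stack([gauss(obs, 0.0, 1.0), gauss(obs, 3.0, 1.0)])
w = fit_mixture_weights(dens)
```

With real data, each column of `densities` would hold the likelihoods of the observed ground motions under one candidate GMPE, and the fitted weights directly define the optimized backbone model.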
The second approach is concerned with the derivation of the site-specific attenuation factor kappa0. kappa0 is one of the key parameters in host-to-target adjustments of GMPEs but is hard to derive if the data amount is sparse. I explore methods to estimate kappa0 from ambient seismic noise. Seismic noise is, in contrast to earthquake recordings, continuously available. The rapidly emerging field of seismic interferometry offers the possibility to infer velocity and attenuation information from the cross-correlation or deconvolution of long noise recordings. The extraction of attenuation parameters from diffuse wavefields is, however, not straightforward, especially not for frequencies above 1 Hz and at shallow depth. In this thesis, I show the results of two studies. In the first one, data of a small-scale array experiment in Greece are used to derive Love wave quality factors in the frequency range 1-4 Hz. In the second study, frequency-dependent quality factors of S-waves (5-15 Hz) are estimated by deconvolving noise recorded in a borehole and at a co-located surface station in West Bohemia/Vogtland. These two studies can be seen as preliminary steps towards the estimation of kappa0 from seismic noise.
This work presents routes for obtaining various phenolic substances such as lignin, diarylheptanoids and 4-(3-oxobutyl)phenol (raspberry ketone) from the trunk of the silver birch (Betula pendula). Methacrylation of 4-(3-oxobutyl)phenol yielded a monomer that can be polymerized by free-radical bulk and solution polymerization as well as by enzymatic polymerization.
A first isolation of constituents was achieved by extracting inner wood and bark with methanol. The methanol-insoluble constituents of the inner wood and the bark were then extracted with selected ionic liquids. A procedure was developed for selectively separating the constituents extracted with these ionic liquids into cellulose, hemicellulose, lignin and constituents extractable with ethyl acetate. This made it possible to compare both the ionic liquids used and the inner wood and bark with respect to their extraction behaviour.
Furthermore, various strategies were presented for isolating a total of three species of diarylheptanoids from the methanolic bark extract. One of the diarylheptanoids found (5-hydroxy-1,7-bis(4-hydroxyphenyl)-3-heptanone) was cleaved via a retro-aldol reaction into 4-(3-oxobutyl)phenol (raspberry ketone) and 3-(4-hydroxyphenyl)propanal.
The use of 4-(3-oxobutyl)phenol as a monomer component was investigated. For this purpose, 4-(3-oxobutyl)phenyl methacrylate was synthesized, and routes for its purification by column chromatography and recrystallization were presented. Poly(4-(3-oxobutyl)phenyl methacrylate) (PObpMA) and poly(benzyl methacrylate) (PBzMA) were then prepared by bulk and solution polymerization. Under identical reaction conditions, the yields of PObpMA are on the same level as those of PBzMA. In contrast, the degree of polymerization of PObpMA from free-radical bulk polymerization is larger than that of PBzMA by a factor of 3.7. The glass transition temperatures of PObpMA lie above those of PBzMA under identical reaction conditions, both for free-radical bulk polymerization and for solution polymerization. In addition, the polymerization of 4-(3-oxobutyl)phenyl methacrylate and benzyl methacrylate with an initiator system consisting of horseradish peroxidase, acetylacetone and hydrogen peroxide at room temperature was described. The products obtained with the enzymatic initiator system closely matched products from solution polymerizations initiated with azobis(isobutyronitrile).
Professional GT endurance racing drivers must be able to withstand the high motor and cognitive demands of a race without loss of performance. At high speed, they must constantly react to their car, the race track and their opponents in a focused and concentrated manner. In addition, drivers are further challenged by the necessary in-car communication with the engineers and mechanics in the pit lane. Data on the actual strain on professional athletes and on frequently occurring complaints and/or injuries are scarce. For the best possible in-car performance during a race, it is necessary to know not only the physical strain but also the common clinical conditions; on this basis, optimal prevention or the necessary therapy for the fastest possible reintegration into the sport can be derived and developed. Using regular health monitoring, the present work records common complaints and/or injuries in GT endurance motorsport in order to derive a preventive (training-therapeutic) and therapeutic concept. Furthermore, based on the assessment of the athletes' physical performance capacity and the strain in the racing car, a possible season-dependent training concept is to be developed.
Over a period of 15 years (2003-2017), 37 male athletes from GT endurance motorsport were examined 353 times as part of a health monitoring programme. Athletes received sports medicine care for a maximum of 14 years and a minimum of 1 year. This examination, which took place twice a year, essentially comprised a sports medicine examination to assess fitness for the sport and to record physical performance capacity. Beyond the health monitoring, care was also provided at the race track to further record the athletes' complaints, illnesses and injuries during their sport-specific exposure. In summary, the athletes show low prevalences and incidences of clinical conditions and complaints. The prevalences differ between the health examinations and the care at the race track. The most frequent complaints stem from orthopaedics and internal medicine: infections of the upper respiratory tract and allergies are most common, alongside complaints of the lower extremity and the spine. Accordingly, mainly physiotherapeutic and training-therapeutic consequences are derived, while drug therapy essentially takes place during race care. Preventive measures should be emphasized more in order to reduce orthopaedic and internal medical complaints. Physical performance capacity essentially shows a stable level over the years of examination for endurance, strength and sensorimotor performance. Endurance capacity can be classified as good to very good relative to the specifics of the sport. Strength and sensorimotor performance show sport-specific differences and should be considered relative to body weight.
A sports medicine and training-therapy concept would therefore have to include a regular medical examination with a focus on orthopaedics, internal medicine and otorhinolaryngology. In addition, a regular assessment of physical performance capacity should be taken into account for the most effective derivation of training content or preventive measures. Given the extensive travel and the year-round season, a training camp held once or twice a year, in the sense of basic and build-up training to optimize performance, could complement the concept. Medical care at the races also appears necessary.
In an interwoven reading of Curtius, Auerbach and Bakhtin, the dissertation shows how these authors, in their works on European literary history, search for an ethical orientation in the crisis of modernity. Their concept of a philologically grounded philosophy of history with practical intent is examined both in terms of cultural and theoretical history and through detailed textual analyses.
Der Blick auf den geschichtsphilosophischen Aspekt ihrer Forschungsarbeit erweist sich hierbei nicht nur insoweit als fruchtbar, als er sich als Schlüssel offenbart, philologische Mikrologie und breite Zusammenschau sowie ideengeschichtliche und gesellschaftliche Entwicklungen zu verknüpfen. Ihr Ansatz offenbart sich auch als wesentlich differenzierter, als es die gängigen Vorbehalte gegenüber der Geschichtsphilosophie vermuten lassen.
Die Untersuchung erweitert aus diesem Grund den methodischen Diskurshorizont, indem sie die Möglichkeiten einer kritischen Geschichtsphilosophie für gegenwärtige Fragen der Literaturgeschichte neu justiert. Dies geschieht über den Zugang so unterschiedlicher Rezeptionen wie der von Anselm Haverkamp, Edward Said, Terry Eagleton und Homi Bhabha, die einen Diskussionsraum eröffnen, welcher den eigenen historischen Standpunkt der Dissertation im Kontext von Postmoderne und Postkolonialismus reflektiert.
Georg Otto Schneider, born on 15 June 1875 in Frankfurt (Oder) and a general practitioner who worked for many years in his adopted home of Potsdam, was one of the most important representatives of the medical profession in the first half of the 20th century. His name is closely associated with a consistent, liberal professional policy and with the development and preservation of professional self-government within the Brandenburg and all-German medical community. As a leading member of several provincial and nationwide associations, Schneider worked across four historical epochs for the free practice and autonomous administration of the medical profession.
In the German Empire, Schneider's professional activism was initially regional: in 1912 he initiated the founding of a protective association for the physicians of the Potsdam district, which he chaired for more than ten years. In the Weimar Republic he then rose to become a key figure in health and medical professional policy. In 1920 he revived the Physicians' Association for the Province of Brandenburg, and from 1928 he additionally headed the Brandenburg Chamber of Physicians in personal union. Two years earlier he had already taken over the management of the Deutscher Ärztevereinsbund. After the National Socialist seizure of power, Schneider had resigned from all offices by mid-1934; his efforts to preserve professional autonomy were in vain. The situation initially looked different after the end of the Second World War. In the Soviet occupation zone, Schneider chaired the physicians' section of the Free German Trade Union Federation in Brandenburg and defended the possibilities of independent professional administration. He was also, from 1946 until his death on 26 October 1949, parliamentary group leader of the Liberal Democratic Party in the Brandenburg state parliament.
Against the background of Georg Schneider's life and work, the dissertation examines continuities and ruptures in the organization of the medical profession, from the German Empire through the Weimar era and National Socialism to the period of Soviet occupation. It contrasts the effects of the respective political, socioeconomic, and social developments on the medical profession with the reactions of its representatives, above all Georg Schneider. In doing so, it asks to what extent the organizational structures of the profession adapted to the respective system and what influence Schneider, as an individual, was able to exert within the larger institutions.
Colorectal cancer (CRC) is the third most common cancer worldwide. Besides age, diet plays an important role in the development of the disease. A presumably cancer-preventive effect is attributed to the trace element selenium, which is taken up almost exclusively through food. A low selenium status, for example, is associated with the lifetime risk of developing CRC. Selenium exerts its functions predominantly through selenoproteins, into which it is incorporated in the form of selenocysteine. Among the best-studied selenoproteins with a possible function in CRC are the glutathione peroxidases (GPXs). Owing to their hydroperoxide-reducing properties, the members of this family contribute decisively to protecting cells from oxidative stress. Depending on the type and stage of the tumor, this can act to either inhibit or promote cancer, since transformed cells also benefit from this protective function.
In this work, GPX2 was knocked down in HT29 colon cancer cells using stably transfected shRNA in order to study the function of the enzyme, particularly with regard to regulated signaling pathways. A knockdown (KD) of the structurally similar GPX1 was also employed to distinguish isoform-specific functions. A PCR array identified signaling pathways suggesting an influence of both proteins on cell growth. Subsequent analyses indicated a reduced differentiation status in the GPX1 and GPX2 KDs, based on lower alkaline phosphatase activity. In addition, cell viability in the neutral red uptake (NRU) assay was reduced in the absence of GPX1 or GPX2 compared with the control. The PCR array results, and, specifically for GPX2, earlier studies by the group, further pointed to a role of both proteins in inflammation-driven carcinogenesis. Possible interactions with the NFκB signaling pathway were therefore also analyzed. Stimulation of the cells with the proinflammatory cytokine IL1β was accompanied by stronger activation of the MAP kinases ERK1/2 in the GPX1 and GPX2 KD cells. Simultaneous treatment with the antioxidant NAC did not reverse these effects in the KDs, suggesting that the antioxidative properties of the enzymes may not be the only factor relevant to the interaction with these signaling proteins.
Furthermore, the substrate spectrum of GPX2 was analyzed in HCT116 cells overexpressing the protein. NRU assays and DNA laddering showed that GPX2 protects particularly against the proapoptotic effects of treatment with the lipid hydroperoxides HPODE and HPETE.
In contrast to GPX2, selenoprotein H (SELENOH) is more strongly influenced by dietary selenium intake. Its possible use as a biomarker, or even as a target in the prevention or treatment of CRC, is, however, limited by incomplete knowledge of the protein's function. For a more detailed characterization of SELENOH, stably transfected KD clones were therefore generated in HT29 and Caco2 cells and first examined for tumorigenicity.
Cells with SELENOH KD formed more and larger colonies in soft agar and showed increased proliferation and migration potential compared with the control.
A xenograft in nude mice additionally resulted in stronger tumor formation after injection of KD cells. Analyses of the involvement of SELENOH in cell cycle regulation point to an inhibitory role of the protein in the G1/S phase.
The upregulation of SELENOH observed in human adenocarcinomas and precancerous mouse tissue may be explained by the postulated protective function against oxidative cell and DNA damage. In healthy intestinal epithelial cells, the protein was localized primarily at the crypt base, consistent with a potential role in gastrointestinal differentiation.
This thesis deals with the synthesis and characterization of functionalized alkyd resins and their photoinduced polymerization using a mercury vapor lamp or a UV LED at different light intensities. The focus of this work was the targeted substitution of the internal double bonds of the fatty acid esters with more reactive groups, such as acrylates or methacrylates, which have not been described in this form for alkyd resins in the literature. The polymerization behavior of these functionalized resins was investigated by photo-DSC, with bis(4-methoxybenzoyl)diethylgermanium serving as photoinitiator. The results show that the resins can be polymerized radically and that they depend only weakly on the ambient atmosphere (atmospheric oxygen vs. nitrogen); this has not previously been reported for functionalized alkyd resins. Blends of different monomers and functionalized resins increased the viscosity and reduced oxygen inhibition during photoinduced polymerization in air, both for the mercury vapor lamp and for the UV LED.
To investigate the oxygen-inhibiting effect of the resins, various functionalized oleic acid methyl esters were synthesized as model compounds. Improved polymerization behavior and low dependence on the ambient atmosphere were demonstrated for these models. To elucidate the improved polymerization behavior, specific substituents (imidazole, bromine, alcohol, acetate) were incorporated into the functionalized oleic acid methyl ester to reveal their influence. In the course of these syntheses, novel structures not previously described in the literature were obtained. Comparing the polymerization time, the conversion of the (meth)acrylate groups, and the time to reach the maximum polymerization rate under different UV light sources revealed an influence of the substituents on the polymerization behavior.
Challenging population growth and environmental change create a need for new routes to supply the chemicals required for human needs. An effective solution discussed in this thesis is industrial heterogeneous catalysis. The development of an advanced industrial heterogeneous catalyst is investigated here by using porous carbon nanomaterials as supports and modifying their surface chemistry with heteroatoms. Such modifications showed a significant influence on catalyst performance and provided deeper insight into the interaction between the surface structure of the catalyst and the surrounding phase. This thesis contributes to the few existing studies on the effect of heteroatoms on catalyst performance and emphasizes the importance of understanding surface functionalization of a catalyst in different phases (liquid and gaseous) and for different reactions (hydrogenolysis, oxidation, and hydrogenation/polymerization). The heteroatoms used for the modifications are hydrogen (H), oxygen (O), and nitrogen (N). Their effect on metal particle size, on the polarity of the support and the catalyst, on catalytic performance (activity, selectivity, and stability), and on the interaction with the surrounding phase has been explored. First, hierarchical porous carbon nanomaterials functionalized with nitrogen were synthesized and applied as supports for nickel nanoparticles in the hydrogenolysis of kraft lignin in the liquid phase. This reaction was performed in batch and flow reactors with three different catalysts: two of comparable hierarchical porosity, one modified with N and one not, and a third prepared from a commercial carbon support. The product analyses show that the catalysts with hierarchical porosity perform catalytically much better than the commercial carbon support with its lower surface area.
Moreover, the modification with N heteroatoms enhanced the catalytic performance: the nitrogen-doped porous carbon catalyst with nickel nanoparticles (Ni-NDC) performed best among the catalysts. In the flow reactor, Ni-NDC selectively cleaved the ether bonds (β-O-4) in kraft lignin with an activity of 2.2 x 10^-4 mg lignin mg Ni^-1 s^-1 for 50 h at 350 °C and 3.5 mL min^-1 flow, providing ~99% conversion to shorter-chained chemicals (mainly guaiacol derivatives). The functionalization of the carbon surface was then further studied in the selective oxidation of glucose to gluconic acid using < 1 wt% gold (Au) deposited on the previously synthesized carbon (C) supports with different functionalities (Au-CGlucose, Au-CGlucose-H, Au-CGlucose-O, Au-CGlucoseamine). Except for Au-CGlucose-O, the catalysts achieved full glucose conversion within 40-120 min and 100% selectivity towards gluconic acid, with a maximum activity of 1.5 mol Glucose mol Au^-1 s^-1 in aqueous phase at 45 °C and pH 9. Each heteroatom influenced the polarity of the carbon differently, thereby affecting the deposition of Au on the support and thus the activity and selectivity of the catalyst. The heteroatom effect was further investigated in the gas phase: the Fischer-Tropsch reaction was applied to convert synthesis gas (CO and H2) to short olefins and paraffins using carbon nanotubes (CNTs) surface-functionalized with heteroatoms as supports for iron (Fe) deposition, in the presence and absence of promoters (Na and S). The promoted, nitrogen-doped Fe-CNT catalyst was stable for up to 180 h and selective to the formation of olefins (~47%) and paraffins (~6%) at a CO conversion of ~92% and a maximum activity of 94 x 10^-5 mol CO g Fe^-1 s^-1. The information gained on this topic can open up a wide range of applications, not only in catalysis but in other fields as well.
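The specific activities quoted above follow directly from masses and time; as a minimal sketch, the lignin metric (mg lignin per mg Ni per second) can be computed as below. The catalyst and lignin masses here are hypothetical, chosen only so that the result reproduces the stated order of magnitude, and do not come from the thesis.

```python
def specific_activity(m_lignin_mg: float, m_ni_mg: float, t_s: float) -> float:
    """Specific activity in mg lignin per mg Ni per second."""
    return m_lignin_mg / (m_ni_mg * t_s)

# Hypothetical example: 396 mg lignin degraded over 10 mg Ni during 50 h on stream
t_seconds = 50 * 3600
activity = specific_activity(396.0, 10.0, t_seconds)
print(activity)  # 2.2e-04 mg lignin / (mg Ni * s)
```

The same pattern applies to the molar activities quoted for the glucose oxidation and Fischer-Tropsch experiments, with moles of substrate in place of mass.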
In conclusion, the incorporation of heteroatoms can be the next approach for advanced industrial heterogeneous catalysts, and is also promising for other applications (e.g. electrocatalysis, gas adsorption, or supercapacitors).
Physical computing covers the design and realization of interactive objects and installations and allows learners to develop concrete, tangible products of the real world that arise from their imagination. In computer science education, this can provide learners with interesting and motivating access to the different topic areas of the subject in constructionist and creative learning environments. So far, however, physical computing has mostly been taught, if at all, in afternoon clubs or other extracurricular settings, so the majority of students have no opportunity to design and create their own interactive objects in regular school lessons.
Despite its increasing popularity also for schools, the topic has not yet been clearly and sufficiently characterized in the context of computer science education. The aim of this doctoral thesis therefore is to clarify physical computing from the perspective of computer science education and to adequately prepare the topic both content-wise and methodologically for secondary school teaching. For this purpose, teaching examples, activities, materials and guidelines for classroom use are developed, implemented and evaluated in schools.
In the theoretical part of the thesis, the topic is first examined from a technical point of view. A structured literature analysis shows that basic concepts used in physical computing can be derived from embedded systems, which are the core of a large field of different application areas and disciplines. Typical methods of physical computing in professional settings are analyzed and, from an educational perspective, elements suitable for computer science teaching in secondary schools are extracted, e.g. tinkering and prototyping. The investigation and classification of suitable tools for school teaching show that microcontrollers and mini computers, often with extensions that greatly facilitate the handling of additional components, are particularly attractive tools for secondary education. Considering the perspectives of science, teachers, students and society, in addition to general design principles, exemplary teaching approaches for school education and suitable learning materials are developed, and the design, production and evaluation of a physical computing construction kit suitable for teaching is described.
In the practical part of this thesis, with “My Interactive Garden”, an exemplary approach to integrate physical computing in computer science teaching is tested and evaluated in different courses and refined based on the findings in a design-based research approach. In a series of workshops on physical computing, which is based on a concept for constructionist professional development that is developed specifically for this purpose, teachers are empowered and encouraged to develop and conduct physical computing lessons suitable for their particular classroom settings. Based on their in-class experiences, a process model of physical computing teaching is derived. Interviews with those teachers illustrate that benefits of physical computing, including the tangibility of crafted objects and creativity in the classroom, outweigh possible drawbacks like longer preparation times, technical difficulties or difficult assessment. Hurdles in the classroom are identified and possible solutions discussed.
Empirical investigations in the different settings reveal that “My Interactive Garden” and physical computing in general have a positive impact on, among other things, learner motivation, fun and interest in class, and perceived competencies.
Finally, the results from all evaluations are combined to evaluate the design principles for physical computing teaching and to provide a perspective on the development of decision-making aids for physical computing activities in school education.
There are numerous situations in which people ask for something or make a request, e.g. asking a favor, asking for help, or requesting compliance with specific norms. For this reason, how to ask for something so as to increase people's willingness to fulfill such requests is one of the most important questions for people working in fields of responsibility as diverse as charitable giving, marketing, management, and policy making.
This dissertation consists of four chapters that deal with the effects of small changes in the decision-making environment on altruistic decision-making and compliance behavior. Most notably, written communication as an influencing factor is the focus of the first three chapters. The starting point was the question of how to devise a request in order to maximize its chance of success (Chapter 1). The results of the first chapter gave rise to the ideas for the second and third chapters. Chapter 2 analyzes how communication by a neutral third party, i.e. a text from the experimenters that either reminds potential benefactors of their responsibility or highlights their freedom of choice, affects altruistic decision-making. Chapter 3 elaborates on the effect of thanking people in advance when asking them for help. While not as closely related to the others as the first three chapters are to each other, the fourth chapter also deals with the question of how compliance (here: compliance with norms and rules) is affected by subtle manipulations of the environment in which decisions are made. This chapter analyzes the effect of default settings in a tax return on tax compliance.
In order to study the research questions outlined above, controlled experiments were conducted. Chapter 1, which analyzes the effect of text messages on the decision to give something to another person, employs a mini-dictator game. The recipient sends a free-form text message to the dictator before the latter makes a binary decision whether or not to give part of her or his endowment to the recipient. We find that putting effort into the message by writing a long note without spelling mistakes increases dictators’ willingness to give. Moreover, writing in a humorous way and mentioning reasons why the money is needed pays off. Furthermore, men and women seem to react differently to some message categories. Only men react positively to efficiency arguments, while only women react to messages that emphasize the dictator’s power and responsibility.
Building on this last result, Chapter 2 attempts to disentangle the effect of reminding potential benefactors of their responsibility for the potential beneficiary and the effect of highlighting their decision power and freedom of choice on altruistic decision-making by studying the effects of two different texts on giving in a dictator game. We find that only men react positively to a text that stresses their responsibility for the recipient by giving more to her or him, whereas only women seem to react positively to a text that emphasizes their decision power and freedom of choice.
Chapter 3 focuses on compliance with a request. In the experiment, participants are asked to provide a detailed answer to an open question; compliance is measured by the effort participants spend on answering it. The treatment variable is whether or not they see the text “thanks in advance.” We find that participants react negatively to this phrase, putting less effort into complying with the request.
Chapter 4 studies the effect of prefilled tax returns with mostly inaccurate default values on tax compliance. In a laboratory experiment, participants earn income by performing a real-effort task and must subsequently file a tax return for three consecutive rounds. In the main treatment, the tax return is prefilled with a default value, resulting from participants’ own performance in previous rounds, which varies in its relative size. The results suggest that there is no lasting effect of a default value on tax honesty, neither for relatively low nor relatively high defaults. However, participants who face a default that is lower than their true income in the first round evade significantly and substantially more taxes in this round than participants in the control treatment without a default.
Arctic warming has implications for the functioning of terrestrial Arctic ecosystems, global climate and socioeconomic systems of northern communities. A research gap exists in high spatial resolution monitoring and understanding of the seasonality of permafrost degradation, spring snowmelt and vegetation phenology. This thesis explores the diversity and utility of dense TerraSAR-X (TSX) X-Band time series for monitoring ice-rich riverbank erosion, snowmelt, and phenology of Arctic vegetation at long-term study sites in the central Lena Delta, Russia and on Qikiqtaruk (Herschel Island), Canada. In the thesis the following three research questions are addressed:
• Are TSX time series capable of monitoring the dynamics of rapid permafrost degradation in ice-rich permafrost on an intra-seasonal scale, and can these datasets, in combination with climate data, identify the climatic drivers of permafrost degradation?
• Can multi-pass and multi-polarized TSX time series adequately monitor seasonal snow cover and snowmelt in small Arctic catchments, and how do they perform compared to optical satellite data and field-based measurements?
• Do TSX time series reflect the phenology of Arctic vegetation and how does the recorded signal compare to in-situ greenness data from RGB time-lapse camera data and vegetation height from field surveys?
To answer the research questions, three years of TSX backscatter data from 2013 to 2015 for the Lena Delta study site and from 2015 to 2017 for the Qikiqtaruk study site were used in quantitative and qualitative analyses, complemented by optical satellite data and in-situ time-lapse imagery.
The dynamics of intra-seasonal ice-rich riverbank erosion in the central Lena Delta, Russia were quantified using TSX backscatter data at 2.4 m spatial resolution in HH polarization and validated with 0.5 m spatial resolution optical satellite data and field-based time-lapse camera data. Cliff top lines were automatically extracted from TSX intensity images using threshold-based segmentation and vectorization, and were combined in a geoinformation system with manually digitized cliff top lines from the optical satellite data and erosion rates extracted from the time-lapse cameras. The results suggest that the cliff top eroded at a constant rate throughout the entire erosional season. Linear mixed models confirmed that erosion was coupled with air temperature and precipitation at an annual scale; seasonal fluctuations did not influence 22-day erosion rates. The results highlight the potential of HH-polarized X-Band backscatter data for high temporal resolution monitoring of rapid permafrost degradation.
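The threshold-based segmentation step can be illustrated with a minimal sketch: bright land/cliff pixels are separated from dark water pixels, and the first bright pixel per row approximates the cliff line. The backscatter values, the -10 dB threshold, and the toy image layout are illustrative assumptions, not values from the thesis workflow.

```python
import numpy as np

def segment_bright(intensity_db: np.ndarray, threshold_db: float) -> np.ndarray:
    """Binary mask: True where calibrated backscatter (dB) exceeds the threshold."""
    return intensity_db > threshold_db

# Toy 2x4 intensity image in dB: dark river pixels on the left, bright cliff/land on the right
img = np.array([[-18.0, -17.5, -6.0, -5.0],
                [-19.0, -16.0, -5.5, -4.5]])
mask = segment_bright(img, -10.0)

# First bright pixel per row approximates the cliff-top line (one column index per row)
cliff_cols = mask.argmax(axis=1)
print(cliff_cols)  # [2 2]
```

In practice the resulting per-row edge positions would be vectorized into a line geometry and compared against the digitized optical cliff lines; this sketch shows only the segmentation principle.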
The distinct signature of wet snow in backscatter intensity images of TSX data was exploited to generate wet snow cover extent (SCE) maps of Qikiqtaruk at high temporal resolution. TSX SCE showed high similarity to Landsat 8-derived SCE when using cross-polarized VH data. Fractional snow cover (FSC) time series were extracted from TSX and optical SCE and compared to FSC estimates from in-situ time-lapse imagery. The TSX products showed strong agreement with the in-situ data and significantly improved the temporal resolution compared to the Landsat 8 time series. The final combined FSC time series revealed two topography-dependent snowmelt patterns that corresponded to in-situ measurements. Additionally, TSX was able to detect snow patches later into the season than Landsat 8, underlining its advantage for the detection of old snow. The TSX-derived snow information provided insights into snowmelt dynamics on Qikiqtaruk that were previously not available.
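Deriving fractional snow cover from a snow cover extent map reduces to the share of snow pixels among valid pixels. The toy map below is an illustrative assumption (1 = wet snow, 0 = snow-free, NaN = no data, e.g. layover/shadow), not data from the thesis.

```python
import numpy as np

def fractional_snow_cover(sce: np.ndarray) -> float:
    """Fraction of valid (non-NaN) pixels classified as snow in a binary SCE map."""
    valid = ~np.isnan(sce)
    return float(np.nansum(sce)) / int(valid.sum())

# Toy 4x4 wet-snow extent map for a small catchment
sce_map = np.array([[1.0, 1.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0, 0.0],
                    [1.0, 1.0, 1.0, 0.0],
                    [0.0, 0.0, 0.0, np.nan]])
fsc = fractional_snow_cover(sce_map)
print(fsc)  # 0.4  (6 snow pixels of 15 valid)
```

Computing this value for each acquisition date yields the FSC time series that is then compared across TSX, Landsat 8, and time-lapse sources.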
The sensitivity of TSX to vegetation structure associated with phenological changes was explored on Qikiqtaruk. Backscatter and coherence time series were compared to greenness data extracted from in-situ digital time-lapse cameras and to detailed vegetation parameters on 30 areas of interest. Supporting previous results, vegetation height corresponded to backscatter intensity in the co-polarized HH/VV channels at an incidence angle of 31°. The dry, tall-shrub-dominated ecological class showed increasing backscatter with increasing greenness in the cross-polarized VH/HH channel at 32° incidence angle, likely driven by volume scattering off emerging and expanding leaves. Ecological classes with more prostrate vegetation and higher bare-ground contributions showed decreasing backscatter trends over the growing season in the co-polarized VV/HH channels, likely a result of surface drying rather than a vegetation-structure signal. The results from shrub-dominated areas are promising and provide a complementary data source for high temporal resolution monitoring of vegetation phenology.
Overall this thesis demonstrates that dense time series of TSX with optical remote sensing and in-situ time-lapse data are complementary and can be used to monitor rapid and seasonal processes in Arctic landscapes at high spatial and temporal resolution.
Eta Carinae
(2018)
The exceptional binary star Eta Carinae has fascinated scientists and people in the Southern Hemisphere alike for hundreds of years. It survived an enormous outbreak, comparable in energy to a supernova, and for a short period became the brightest star of the night sky. From observations ranging from the radio regime to X-rays, the system's characteristics and its emission at photon energies up to ~50 keV are well studied today. The binary is composed of two massive stars of ~30 and ~100 solar masses. Each star drives a strong stellar wind that continuously carries away a fraction of its mass. The collision of these winds leads to a shock on each side of the encounter. In the wind-wind collision region, plasma is heated when it is overrun by the shocks, and part of the emission seen in X-rays can be attributed to this plasma. Above ~50 keV the emission is no longer of thermal origin: the required plasma temperature would exceed the available mechanical energy input of the stellar winds. In contrast to its long observational history at thermal energies, observational evidence of Eta Carinae's non-thermal emission has only recently accumulated. In high-energy gamma-rays, Eta Carinae is the only binary of its kind that has been detected unambiguously. Its energy spectrum reaches up to ~100 GeV, a regime where satellite-based gamma-ray experiments run out of statistics. Ground-based gamma-ray experiments have the advantage of large photon collection areas. H.E.S.S. is the only gamma-ray experiment located in the Southern Hemisphere and is thus able to observe Eta Carinae in this energy range. H.E.S.S. measures gamma-rays via the electromagnetic particle showers that very-high-energy gamma-rays initiate in the atmosphere. The main challenge in observing Eta Carinae with H.E.S.S. is the UV emission of the Carina Nebula, which leads to a background up to 10 times stronger than usual for H.E.S.S.
This thesis presents the first detection of a colliding-wind binary in very-high-energy gamma-rays and documents the studies that led to it. The differential gamma-ray energy spectrum of Eta Carinae is measured up to 700 GeV. A hadronic and leptonic origin of the gamma-ray emission is discussed and based on the comparison of cooling times a hadronic scenario is favoured.
This dissertation consists of five self-contained essays, addressing different aspects of career choices, especially the choice of entrepreneurship, under risk and ambiguity. In Chapter 2, the first essay develops an occupational choice model with boundedly rational agents, who lack information, receive noisy feedback, and are restricted in their decisions by their personality, to analyze and explain puzzling empirical evidence on entrepreneurial decision processes. In the second essay, in Chapter 3, I contribute to the literature on entrepreneurial choice by constructing a general career choice model on the basis of the assumption that outcomes are partially ambiguous. The third essay, in Chapter 4, theoretically and empirically analyzes the impact of media on career choices, where information on entrepreneurship provided by the media is treated as an informational shock affecting prior beliefs. The fourth essay, presented in Chapter 5, contains an empirical analysis of the effects of cyclical macro variables (GDP and unemployment) on innovative start-ups in Germany. In the fifth, and last, essay in Chapter 6, we examine whether information on personality is useful for advice, using the example of career advice.
This dissertation consists of four self-contained papers that deal with the implications of financial market imperfections and heterogeneity. The analysis mainly relates to the class of incomplete-markets models but covers different research topics.
The first paper deals with the distributional effects of financial integration for developing countries. Based on a simple heterogeneous-agent approach, it is shown that capital owners experience large welfare losses, while workers gain only moderately through higher wages. The large welfare losses for capital owners contrast with the small average welfare gains found in representative-agent economies and indicate that strong opposition to capital market opening is to be expected.
The second paper considers the puzzling observation of capital flows from poor to rich countries and the accompanying changes in domestic economic development. Motivated by the mixed results from the literature, we employ an incomplete-markets model with different types of idiosyncratic risk and borrowing constraints. Based on different scenarios, we analyze under what conditions the presence of financial market imperfections contributes to explain the empirical findings and how the conditions may change with different model assumptions.
The third paper deals with the interplay of incomplete information and financial market imperfections in an incomplete-markets economy. In particular, it analyzes the impact of incomplete information about idiosyncratic income shocks on aggregate saving. The results show that the effect of incomplete information is not only quantitatively substantial but also qualitatively ambiguous and varies with the influence of the income risk and the borrowing constraint.
Finally, the fourth paper analyzes the influence of different types of fiscal rules on the response of key macroeconomic variables to a government spending shock. We find that a strong temporary increase in public debt contributes to stabilizing consumption and leisure in the first periods following the change in government spending, whereas a non-debt-intensive fiscal rule leads to a faster recovery of consumption, leisure, capital and output in later periods. Regarding optimal debt policy, we find that a debt-intensive fiscal rule leads to the largest aggregate welfare benefit and that the individual welfare gain is particularly high for wealth-poor agents.
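To make the role of the fiscal rule concrete, the debt feedback can be sketched as a toy accounting identity in which taxes respond to outstanding debt with strength phi: a small phi corresponds to a debt-intensive rule (debt absorbs the shock), a large phi to fast consolidation. All numbers and the functional form below are illustrative assumptions, not the incomplete-markets model of the paper.

```python
# Stylized debt path after a one-off spending shock under a fiscal rule
# tax_t = phi * debt_t. Parameter values are purely illustrative.

def simulate_debt(phi: float, shock: float = 1.0, periods: int = 20, r: float = 0.02):
    """Debt path after a temporary spending shock, with taxes reacting to debt."""
    debt, path = 0.0, []
    for t in range(periods):
        g = shock if t == 0 else 0.0   # temporary government spending shock
        tax = phi * debt               # fiscal rule: tax responds to debt level
        debt = (1 + r) * debt + g - tax  # government budget constraint
        path.append(debt)
    return path

slow = simulate_debt(phi=0.05)  # debt-intensive rule: debt decays slowly
fast = simulate_debt(phi=0.50)  # non-debt-intensive rule: quick repayment
print(round(slow[-1], 3), round(fast[-1], 3))
```

The contrast between the two paths mirrors the trade-off discussed above: a debt-intensive rule spreads the adjustment over many periods, while a strong feedback rule consolidates quickly.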
Intracellular labeling with suitable reagents enables the imaging of cells in living organisms. This technique (also known as "cell tracking") is used in basic research for the development of cellular therapies, for the study of pathological processes such as metastasis, and for therapy monitoring. In recent years, cellular therapies with stem cells have gained particular importance, as they hold great promise for tissue regeneration in diseases such as Parkinson's disease or type 1 diabetes. The development of a cellular therapy requires information on the fate of the administered cells in vivo (homing potential), on their cell physiology, and on the emergence of possible inflammation. The aim of the present work was therefore the synthesis of labeling reagents that not only enable efficient cell labeling but also provide a synergistic effect with regard to cross-modality use in the imaging techniques MRI and laser ablation (LA)-ICP-MS. MRI enables the non-invasive in vivo tracking of labeled cells, while LA-ICP-MS allows the subsequent ex vivo analysis of the element distribution (bioimaging) in a biopsy sample or tissue section. For this purpose, two different labeling reagents containing the contrast-generating element gadolinium were synthesized. Owing to its high magnetic moment, gadolinium is excellently suited for MRI, and since it does not occur naturally in biomolecules, the reagents could be investigated equally for cell labeling and for bioimaging with LA-ICP-MS. For the synthesis of a macromolecular reagent, the commercially available dendrimer G5-PAMAM was functionalized with the chelator DOTA via bifunctional linkers and subsequently loaded with gadolinium.
A second, nanoparticulate reagent was obtained via a solvothermal synthesis yielding Ln:GdVO4 nanocrystals with a functional poly(acrylic acid) (PAA) shell. Doping the Ln:GdVO4-PAA nanocrystals with different lanthanides (Ln = Eu, Tb) demonstrated their basic multiplexing capability in LA-ICP-MS. Both labeling reagents exhibited good biocompatibility and r1 relaxivities, which also demonstrated their potential for applications as preclinical blood-pool MRI contrast agents. Cell labeling was investigated using a tumor cell line and a stem cell line; both cell types were successfully labeled intracellularly with both reagents. After cell labeling, in vitro MRI of cell phantoms showed a clearly stronger contrast enhancement for cells labeled with the nanocrystals than for those labeled with the commercial contrast agent Magnevist®. The high cell-labeling efficiency of the nanocrystals, and the resulting high signal intensities in a single cell, allowed LA-ICP-MS bioimaging at a resolution down to a laser spot size of 4 µm. After cell labeling with the DOTA(Gd3+)-functionalized G5-PAMAM dendrimers, in contrast, LA-ICP-MS imaging was possible only down to a laser spot size of 12 µm. Overall, the Ln:GdVO4-PAA nanocrystals could be produced in higher yield and at lower cost than the DOTA(Gd3+)-functionalized G5-PAMAM dendrimers and also showed more efficient cell labeling. The Ln:GdVO4-PAA nanocrystals therefore appear particularly promising for cell tracking. Building on this, the nanocrystals were selected for establishing antibody conjugation, making them applicable for molecular in vivo imaging as well as for immuno-imaging of tissue sections or biopsy samples with LA-ICP-MS.
This work focuses on the development of a sensor platform for biochemical applications based on an optical detection principle. During the development, two complementary concepts were pursued: first, a sensor based on photonic crystals and waveguide structures, and second, a fiber-based sensor containing chemically modified fiber Bragg gratings. In both sensor concepts, the optical detection principle is the resulting refractive-index change as a measurable physicochemical quantity.
The phenomenon of photonic crystals, known from nature and found, for example, in opals and in butterflies, was described by Lord Rayleigh as early as 1887. He described the optical properties of periodic multilayer films, which can be understood as a simplified model of a one-dimensional photonic crystal. The periodicity of the refractive-index modulation results in an optical filter for frequencies in a certain spectral range, in which light propagation is no longer possible. However, if this system is perturbed by a defect in the refractive-index periodicity, splitting it into two perfectly periodic systems, light propagation becomes possible again for one particular frequency. This results in a narrowband signal in the transmission spectrum. The allowed frequency depends, among other things, on the refractive-index contrast of the periodic system; i.e., changing the refractive index of one layer leads to a spectral shift of the allowed frequency, which makes this sensor concept suitable for biochemical sensing [1]. The development of the photonic-crystal-based sensor was a cooperation with the industrial partner "Nanoplus GmbH". In this doctoral thesis, simulations and practical work on the sensor design were carried out, as well as work on a first model setup for biochemical applications.
For the fiber-based sensor, fiber Bragg gratings were inscribed into the fiber core. Hill et al. discovered in 1978 that such grating structures act as optical filters, just like photonic crystals [2]. The gratings consist of modulations of the refractive index in the fiber core. Over the following forty years, various inscription techniques and grating structures were developed, and the properties of the respective grating structures vary accordingly. One such grating structure is the fiber Bragg grating, whose grating period, i.e. the spacing of the refractive-index modifications, lies in the nanometer to micrometer range. Owing to the small grating period, a backward-propagating wave is generated in the core for one particular frequency or wavelength, the Bragg wavelength. This results in a narrowband signal in both the transmission and the reflection spectrum. The resonance wavelength is proportional to the grating period and to the effective refractive index, which depends on the refractive index of the core and of the material surrounding the core. This technique is therefore well suited for physicochemical sensing. In this work, the gratings were written into the fibers using a relatively new fabrication method [3]. The subsequent focus was the development of a biosensor: first, a protocol was developed for etching the fiber with hydrofluoric acid, which makes the system sensitive to the surrounding refractive index. Finally, a model setup was realized in which a model system, here the detection of C-reactive protein by means of specific single-stranded DNA aptamers, was successfully tested and quantified.
1 Mandal, S.; Erickson, D. Nanoscale Optofluidic Sensor Arrays. Opt. Express 2008, 16 (3), 1623–1631.
2 Hill, K. O.; Fujii, Y.; Johnson, D. C.; Kawasaki, B. S. Photosensitivity in Optical Fiber Waveguides: Application to Reflection Filter Fabrication. Appl. Phys. Lett. 1978, 32 (10), 647–649.
3 Martínez, A.; Dubov, M.; Khrushchev, I.; Bennion, I. Direct Writing of Fibre Bragg Gratings by Femtosecond Laser. Electron. Lett. 2004, 40 (19), 1170.
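The resonance condition underlying the fiber sensor is the standard Bragg relation, lambda_B = 2 * n_eff * Lambda, and the sensing principle is that a change in the surrounding medium shifts n_eff and hence the reflected wavelength. The short sketch below illustrates this; the effective index, grating period, and mode-overlap fraction are typical illustrative values, not parameters from the thesis.

```python
# Illustrative sketch of the Bragg condition lambda_B = 2 * n_eff * Lambda.
# All numerical values (n_eff, period, overlap) are hypothetical examples.

def bragg_wavelength(n_eff: float, grating_period_nm: float) -> float:
    """Resonance (Bragg) wavelength in nm for a fiber Bragg grating."""
    return 2.0 * n_eff * grating_period_nm

# Typical telecom-band example: n_eff ~ 1.447, period ~ 535.6 nm -> ~1550 nm
lam = bragg_wavelength(1.447, 535.6)
print(f"Bragg wavelength: {lam:.1f} nm")

def wavelength_shift(d_n_surround: float, overlap: float,
                     grating_period_nm: float) -> float:
    """Spectral shift (nm) for a surrounding-index change d_n_surround,
    with 'overlap' the (assumed) fraction of the mode field that samples
    the surrounding medium, e.g. after etching the cladding."""
    return 2.0 * overlap * d_n_surround * grating_period_nm

# e.g. a binding event changes the surrounding index by 0.01, 5% field overlap
print(f"Shift: {wavelength_shift(0.01, 0.05, 535.6) * 1000:.1f} pm")
```

This linearized shift estimate shows why etching matters: without exposing the evanescent field (overlap near zero), the Bragg wavelength barely responds to the surrounding medium.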
The last years have shown an increasing sophistication of attacks against enterprises. Traditional security solutions like firewalls, anti-virus systems and generally Intrusion Detection Systems (IDSs) are no longer sufficient to protect an enterprise against these advanced attacks. One popular approach to tackle this issue is to collect and analyze events generated across the IT landscape of an enterprise. This task is achieved by the utilization of Security Information and Event Management (SIEM) systems. However, the majority of the currently existing SIEM solutions is not capable of handling the massive volume of data and the diversity of event representations. Even if these solutions can collect the data at a central place, they are neither able to extract all relevant information from the events nor correlate events across various sources. Hence, only rather simple attacks are detected, whereas complex attacks, consisting of multiple stages, remain undetected. Undoubtedly, security operators of large enterprises are faced with a typical Big Data problem.
In this thesis, we propose and implement a prototypical SIEM system named Real-Time Event Analysis and Monitoring System (REAMS) that addresses the Big Data challenges of event data with common paradigms, such as data normalization, multi-threading, in-memory storage, and distributed processing. In particular, a mostly stream-based event processing workflow is proposed that collects, normalizes, persists and analyzes events in near real-time. In this regard, we have made various contributions in the SIEM context. First, we propose a high-performance normalization algorithm that is highly parallelized across threads and distributed across nodes. Second, we persist events in an in-memory database for fast querying and correlation in the context of attack detection. Third, we propose various analysis layers, such as anomaly- and signature-based detection, that run on top of the normalized and correlated events. As a result, we demonstrate our capability to detect previously known as well as unknown attack patterns. Lastly, we have investigated the integration of cyber threat intelligence (CTI) into the analytical process, for instance, by correlating monitored user accounts with previously collected public identity leaks to identify possibly compromised user accounts.
In summary, we show that a SIEM system can indeed monitor a large enterprise environment with a massive load of incoming events. As a result, complex attacks spanning across the whole network can be uncovered and mitigated, which is an advancement in comparison to existing SIEM systems on the market.
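As a rough illustration of the stream-based normalization step described above (not the REAMS implementation itself): normalization maps heterogeneous raw events onto a common schema, and because each event is independent, the map parallelizes trivially across threads or nodes. The log format, regex, and field names below are hypothetical.

```python
# Minimal sketch of parallel event normalization. The syslog-like format
# and schema fields are invented for illustration.
import re
from concurrent.futures import ThreadPoolExecutor

# Hypothetical pattern: "<timestamp> <host> <program>: <message>"
PATTERN = re.compile(r"^(?P<ts>\S+) (?P<host>\S+) (?P<prog>[^:]+): (?P<msg>.*)$")

def normalize(raw: str) -> dict:
    """Map a raw log line onto a common event schema."""
    m = PATTERN.match(raw)
    if not m:
        return {"raw": raw, "parsed": False}
    return {**m.groupdict(), "parsed": True}

raw_events = [
    "2018-01-01T12:00:00Z web01 sshd: Failed password for root",
    "2018-01-01T12:00:01Z web02 sshd: Accepted password for alice",
]

# Each line is independent, so a thread pool (or a distributed map across
# nodes) scales normalization with the number of workers; map preserves order.
with ThreadPoolExecutor(max_workers=4) as pool:
    normalized = list(pool.map(normalize, raw_events))

print(normalized[0]["host"])  # -> web01
```

Once events share a schema, cross-source correlation (e.g. joining on `host` or on a user identifier) becomes a straightforward query against the in-memory store.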
The aim of this work is to investigate how the reality of speech is evoked in the novel Rosario Tijeras, by the Colombian author Jorge Franco, and in its German and English translations. In this crime novel, the author draws on a colloquial variety called parlache, typical of a socioeconomically marginal sector of Medellín, whose influence has spread to all areas of the city and the country. The study focuses on the description of diatopic and diastratic variation in fiction, specifically of expressions typical of parlache. The goal is to determine the contribution of certain linguistic resources typical of this variety of Spanish to the construction of plausible spoken dialogue. In addition, the study examines how this "local color" is re-expressed in the evocation of orality in the translations. It thus seeks to identify translation divergences and to deepen the description of linguistic (diatopic and diastratic) variation in literary translation.
Microbial processing of organic matter (OM) in the freshwater biosphere is a key component of global biogeochemical cycles. Freshwaters receive and process considerable amounts of leaf OM from their terrestrial surroundings. These terrestrial subsidies provide an essential source of energy and nutrients to the aquatic environment through heterotrophic processing by fungi and bacteria. Particularly in freshwaters with low in-situ primary production by algae (microalgae, cyanobacteria), microbial turnover of leaf OM contributes significantly to the productivity and functioning of freshwater ecosystems and, not least, to their role in global carbon cycling.
Based on differences in their chemical composition, leaf OM is believed to be less bioavailable to microbial heterotrophs than OM photosynthetically produced by algae. Especially particulate leaf OM, consisting predominantly of structurally complex and aromatic polymers, is assumed to be highly resistant to enzymatic breakdown by microbial heterotrophs. However, recent research has demonstrated that OM produced by algae promotes the heterotrophic breakdown of leaf OM in aquatic ecosystems, with profound consequences for the metabolism of leaf carbon (C) within microbial food webs. In my thesis, I aimed to investigate the mechanisms underlying this so-called priming effect of algal OM on the use of leaf C in natural microbial communities, focusing on fungi and bacteria.
The studies in my thesis underline that algal OM provides highly bioavailable compounds to the microbial community that are quickly assimilated by bacteria (Paper II). The substrate composition of OM pools determines the proportions of fungi and bacteria within the microbial community (Paper I). The fraction of algal OM in the aquatic OM pool stimulates the activity, and hence the contribution, of bacterial communities to leaf C turnover by providing an essential energy and nutrient source for the assimilation of the structurally complex leaf OM substrate. In contrast, the assimilation of algal OM remains limited for fungal communities as a result of nutrient competition between fungi and bacteria (Papers I, II). In addition, the results provide evidence that environmental conditions determine the strength of the interactions between microalgae and heterotrophic bacteria during leaf OM decomposition (Papers I, III). However, the stimulatory effect of algal photoautotrophic activity on leaf C turnover remained significant even under highly dynamic environmental conditions, highlighting its functional role in ecosystem processes (Paper III).
The results of my thesis provide insights into the mechanisms by which algae affect the microbial turnover of leaf C in freshwaters. This in turn contributes to a better understanding of the function of algae in freshwater biogeochemical cycles, especially with regard to their interaction with the heterotrophic community.
The Sun is the nearest star to the Earth. It consists of an interior and an atmosphere. The convection zone is the outermost layer of the solar interior. A flux rope may emerge as a coherent structure from the convection zone into the solar atmosphere or be formed by magnetic reconnection in the atmosphere. A flux rope is a bundle of magnetic field lines twisting around an axis field line, creating a helical shape by which dense filament material can be supported against gravity. The flux rope is also considered as the key structure of the most energetic phenomena in the solar system, such as coronal mass ejections (CMEs) and flares. These magnetic flux ropes can produce severe geomagnetic storms. In particular, to improve the ability to forecast space weather, it is important to enrich our knowledge about the dynamic formation of flux ropes and the underlying physical mechanisms that initiate their eruption, such as a CME.
A confined eruption consists of a filament eruption and usually an associated flare, but does not evolve into a CME; rather, the moving plasma is halted in the solar corona and usually seen to fall back. The first detailed observations of a confined filament eruption were obtained on 2002 May 27 by the TRACE satellite in the 195 Å band. In Chapter 3, we therefore focus on a flux rope instability model. A twisted flux rope can become unstable by entering the kink instability regime. We show that the kink instability, which occurs if the twist of a flux rope exceeds a critical value, is capable of initiating an eruption. This model is tested against the well-observed confined eruption of 2002 May 27 in a parametric magnetohydrodynamic (MHD) simulation study that comprises all phases of the event. Very good agreement with the essential observed properties is obtained, except for a relatively poor match of the initial filament height.
Therefore, in Chapter 4, we submerge the center point of the flux rope deeper below the photosphere to obtain a flatter coronal rope section and a better matching with the initial height profile of the erupting filament. This implies a more realistic inclusion of the photospheric line tying. All basic assumptions and the other parameter settings are kept the same as in Chapter 3. This complement of the parametric study shows that the flux rope instability model can yield an even better match with the observational data. We also focus in Chapters 3 and 4 on the magnetic reconnection during the confined eruption, demonstrating that it occurs in two distinct locations and phases that correspond to the observed brightenings and changes of topology, and consider the fate of the erupting flux, which can reform a (less twisted) flux rope.
The Sun also produces series of homologous eruptions, i.e. eruptions which occur repetitively in the same active region and are of similar morphology. Therefore, in Chapter 5, we employ the reformed flux rope as a new initial condition, to investigate the possibility of subsequent homologous eruptions. Free magnetic energy is built up by imposing motions in the bottom boundary, such as converging motions, leading to flux cancellation. We apply converging motions in the sunspot area, such that a small part of the flux from the sunspots with different polarities is transported toward the polarity inversion line (PIL) and cancels with each other. The reconnection associated with the cancellation process forms more helical magnetic flux around the reformed flux rope, which leads to a second and a third eruption. In this study, we obtain the first MHD simulation results of a homologous sequence of eruptions that show a transition from a confined to two ejective eruptions, based on the reformation of a flux rope after each eruption.
Nanophotonics is the field of science and engineering aimed at studying the light-matter interactions on the nanoscale. One of the key aspects in studying such optics at the nanoscale is the ability to assemble the material components in a spatially controlled manner. In this work, DNA origami nanostructures were used to self-assemble dye molecules and DNA coated plasmonic nanoparticles. Optical properties of dye nanoarrays, where the dyes were arranged at distances where they can interact by Förster resonance energy transfer (FRET), were systematically studied according to the size and arrangement of the dyes using fluorescein (FAM) as the donor and cyanine 3 (Cy 3) as the acceptor. The optimized design, based on steady-state and time-resolved fluorometry, was utilized in developing a ratiometric pH sensor with pH-inert coumarin 343 (C343) as the donor and pH-sensitive FAM as the acceptor. This design was further applied in developing a ratiometric toxin sensor, where the donor C343 is unresponsive and FAM is responsive to thioacetamide (TAA) which is a well-known hepatotoxin. The results indicate that the sensitivity of the ratiometric sensor can be improved by simply arranging the dyes into a well-defined array. The ability to assemble multiple fluorophores without dye-dye aggregation also provides a strategy to amplify the signal measured from a fluorescent reporter, and was utilized here to develop a reporter for sensing oligonucleotides. By incorporating target capturing sequences and multiple fluorophores (ATTO 647N dye molecules), a reporter for microbead-based assay for non-amplified target oligonucleotide sensing was developed. Analysis of the assay using VideoScan, a fluorescence microscope-based technology capable of conducting multiplex analysis, showed the DNA origami nanostructure based reporter to have a lower limit of detection than a single stranded DNA reporter. 
Lastly, plasmonic nanostructures were assembled on DNA origami nanostructures as substrates to study interesting optical behaviors of molecules in the near-field. Specifically, DNA coated gold nanoparticles, silver nanoparticles, and gold nanorods, were placed on the DNA origami nanostructure aiming to study surface-enhanced fluorescence (SEF) and surface-enhanced Raman scattering (SERS) of molecules placed in the hotspot of coupled plasmonic structures.
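The distance dependence of FRET that such dye arrays exploit follows the standard Förster relation, E = 1 / (1 + (r/R0)^6), which is why arranging donors and acceptors at well-defined distances on the origami controls the energy transfer. The sketch below uses an assumed Förster radius purely for illustration, not values measured in the thesis.

```python
# Sketch of the standard Foerster relation for FRET efficiency.
# The Foerster radius R0 and distances are illustrative assumptions.

def fret_efficiency(r_nm: float, r0_nm: float) -> float:
    """FRET efficiency for donor-acceptor distance r and Foerster radius R0."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

R0 = 5.0  # assumed Foerster radius in nm (typical dye pairs: ~3-7 nm)
for r in (3.0, 5.0, 8.0):
    # At r = R0, E = 0.5 by definition; the sixth-power law makes E
    # extremely sensitive to nanometer-scale changes in distance.
    print(f"r = {r} nm -> E = {fret_efficiency(r, R0):.2f}")
```

This steep distance dependence is what a DNA origami breadboard makes usable: placing dyes at programmed positions fixes r, and hence the donor/acceptor intensity ratio read out by the ratiometric sensors.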
Business process management is an acknowledged asset for running an organization in a productive and sustainable way. One of the most important aspects of business process management, occurring on a daily basis at all levels, is decision making. In recent years, a number of decision management frameworks have appeared in addition to existing business process management systems. More recently, Decision Model and Notation (DMN) was developed by the OMG consortium with the aim of complementing the widely used Business Process Model and Notation (BPMN). One of the reasons for the emergence of DMN is the increasing interest in the evolving paradigm known as the separation of concerns. This paradigm states that modeling decisions complementary to processes reduces process complexity by externalizing decision logic from process models into a dedicated decision model. Such an approach increases the agility of model design and execution and provides organizations with the flexibility to adapt to the ever more rapid and dynamic changes in the business ecosystem. The research gap we identified is that the separation of concerns recommended by DMN prescribes externalizing the decision logic of process models into one or more separate decision models, but does not specify how this can be achieved.
The goal of this thesis is to close this gap by developing a framework for discovering decision models in a semi-automated way from information about existing process decision making. To this end, we develop methodologies to extract decision models from (1) the control flow and data of process models that exist in enterprises, and (2) event logs recorded by enterprise information systems, which encapsulate day-to-day operations. Furthermore, we extend these methodologies to discover decision models from event logs enriched with fuzziness, a tool for dealing with partial knowledge of process execution information. All proposed techniques are implemented and evaluated in case studies using real-life and synthetic process models and event logs. The evaluation shows that the proposed methodologies produce valid and accurate decision models that can serve as blueprints for executing decisions complementary to process models. These methodologies are thus applicable in the real world and can be used, for example, for compliance checks, which could improve an organization's decision making and hence its overall performance.
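As a deliberately minimal illustration of what discovering decision logic from an event log can mean (the thesis methodologies are far richer, covering control flow, data attributes, and fuzziness), consider deriving a threshold rule for a binary branch from observed case attributes. The log, attribute names, and outcomes below are invented for illustration.

```python
# Toy decision mining: from traces observed at a branching point, find a
# numeric threshold that separates the two outcomes, if one exists.
# Log contents and attribute names are hypothetical.

# Each entry: attributes known at the decision point, plus the observed outcome
event_log = [
    ({"amount": 500,  "customer": "gold"},   "approve"),
    ({"amount": 700,  "customer": "silver"}, "approve"),
    ({"amount": 5000, "customer": "silver"}, "reject"),
    ({"amount": 4200, "customer": "gold"},   "approve"),
    ({"amount": 6100, "customer": "silver"}, "reject"),
]

def split_threshold(log, attr, outcome_if_below):
    """Return a threshold on a numeric attribute that perfectly separates
    'outcome_if_below' from the rest, or None if no such split exists.
    A deliberately minimal stand-in for real decision discovery."""
    below = [attrs[attr] for attrs, outcome in log if outcome == outcome_if_below]
    above = [attrs[attr] for attrs, outcome in log if outcome != outcome_if_below]
    if below and above and max(below) < min(above):
        return (max(below) + min(above)) / 2  # midpoint between the classes
    return None

t = split_threshold(event_log, "amount", "approve")
print(f"candidate rule: approve if amount <= {t}")
```

A rule mined this way could then be written down as a DMN decision table and wired to the corresponding BPMN gateway, which is the kind of blueprint the discovered decision models provide.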