Doctoral theses, 2015 (108 items with fulltext)
Owing to their potentially health-promoting effects, the polyphenolic isoflavones are of great interest for human nutrition. Numerous experimental and epidemiological studies indicate a preventive effect of the soy isoflavones daidzein and genistein against hormone-dependent and age-related diseases such as breast and prostate cancer, osteoporosis, cardiovascular disease, and the menopausal syndrome. The metabolism and bioactivation of these secondary plant compounds by the human intestinal microbiota differs between individuals. Only in a small fraction of the Western population is the daidzein metabolite equol formed by specific gut bacteria. Slackia isoflavoniconvertens is an equol-producing bacterium isolated from the human intestinal tract. Using this species, the hitherto unknown enzymes involved in the conversion of daidzein and genistein were to be identified and characterized.
Fermentation experiments with S. isoflavoniconvertens showed that the genes of the daidzein- and genistein-converting enzymes are not expressed constitutively but must be induced. Using two-dimensional difference gel electrophoresis, six proteins were detected that were induced in an S. isoflavoniconvertens culture in the presence of daidzein. Based on individual peptide sequences, a gene cluster was sequenced that contains the genes of the daidzein-induced proteins arranged in the same orientation. Sequence comparisons also identified gene products equivalent to the S. isoflavoniconvertens proteins in other equol-producing bacteria. After heterologous expression in Escherichia coli, three of these genes were identified by enzymatic activity assays as daidzein reductase (DZNR), dihydrodaidzein reductase (DHDR), and tetrahydrodaidzein reductase (THDR). Combining the E. coli cell extracts led to the complete conversion of daidzein via dihydrodaidzein to equol. In addition to daidzein, DZNR also converted genistein to dihydrogenistein, at a higher turnover rate than the reduction of daidzein to dihydrodaidzein. Enzymatic activity assays with cell extract of S. isoflavoniconvertens likewise showed a faster conversion of genistein. The combination of the recombinant DHDR and THDR converted dihydrodaidzein to equol. The corresponding metabolite 5-hydroxyequol was not detected as the end product of genistein metabolism. For purification, the three identified reductases were genetically fused to a Strep-tag and purified by affinity chromatography. The remaining daidzein-induced proteins IfcA, IfcBC, and IfcE were likewise expressed in E. coli and purified as Strep fusion proteins.
Comparative activity assays identified the induced protein IfcA as a dihydrodaidzein racemase, which catalyzed the conversion of the (R)- and (S)-enantiomers of dihydrodaidzein and dihydrogenistein to the corresponding racemate. In addition to the electron-transfer flavoprotein IfcBC, the THDR, DZNR, and IfcE were identified as FAD-containing flavoproteins; IfcE moreover proved to be an iron-sulfur protein. After induction of the genes encoding the daidzein conversion, several mRNA transcripts of different lengths were formed, showing that transcription of the daidzein-induced gene cluster in S. isoflavoniconvertens does not proceed as a single operon.
Based on the identified daidzein-converting enzymes, the mechanism of the bacterial conversion of isoflavones by S. isoflavoniconvertens can now be studied in depth. The gene sequences of the daidzein-induced proteins, together with the corresponding genes of other equol-producing bacteria, also open up the possibility of microbial metagenome analysis in the human intestinal tract.
This dissertation investigates the working memory mechanism subserving human sentence processing and its relative contribution to processing difficulty as compared to syntactic prediction. Within the last decades, evidence for a content-addressable memory system underlying human cognition in general has accumulated (e.g., Anderson et al., 2004). In sentence processing research, it has been proposed that this general content-addressable architecture is also used for language processing (e.g., McElree, 2000).
Although there is a growing body of evidence from various kinds of linguistic dependencies that is consistent with a general content-addressable memory subserving sentence processing (e.g., McElree et al., 2003; Van Dyke, 2006), the case of reflexive-antecedent dependencies has challenged this view. It has been proposed that the processing of reflexive-antecedent dependencies relies on syntactic-structure-based memory access rather than cue-based retrieval within a content-addressable framework (e.g., Sturt, 2003).
Two eye-tracking experiments on Chinese reflexives were designed to tease apart accounts assuming a syntactic-structure-based memory access mechanism from cue-based retrieval (implemented in ACT-R, as proposed by Lewis and Vasishth, 2005).
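The core of the cue-based retrieval account can be sketched with the standard ACT-R equations: spreading activation weakened by cue fan, and retrieval latency F·exp(-A). The sketch below is illustrative only; the feature names and all numeric values are assumptions, not the dissertation's fitted parameters, and base-level activation and noise are omitted.

```python
# Toy sketch of ACT-R-style cue-based retrieval (in the spirit of Lewis &
# Vasishth, 2005). All parameter values are illustrative assumptions.
import math

def retrieval_latency(chunks, cues, W=1.0, S_max=1.5, F=0.14):
    """Return (winner, latency) for a cue-based memory retrieval.

    chunks: dict name -> set of features; cues: set of retrieval cues.
    """
    # Fan: how many chunks match each cue; more matches -> weaker association.
    fan = {c: sum(1 for feats in chunks.values() if c in feats) for c in cues}
    w = W / len(cues)                      # cue weights sum to W
    activations = {}
    for name, feats in chunks.items():
        # Spreading activation from matching cues (base-level term omitted).
        activations[name] = sum(w * (S_max - math.log(fan[c]))
                                for c in cues if c in feats)
    winner = max(activations, key=activations.get)
    return winner, F * math.exp(-activations[winner])

# An animate distractor shares the animacy cue, raising fan and slowing
# retrieval of the true antecedent (interference):
mem_interf = {"antecedent": {"animate", "subject"}, "distractor": {"animate"}}
mem_clean = {"antecedent": {"animate", "subject"}, "distractor": {"inanimate"}}
w1, t_interf = retrieval_latency(mem_interf, {"animate", "subject"})
w2, t_clean = retrieval_latency(mem_clean, {"animate", "subject"})
assert w1 == w2 == "antecedent" and t_interf > t_clean
```

The slowdown under a cue-matching distractor is exactly the kind of interference effect the experiments probe.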
In both experiments, interference effects were observed from noun phrases which syntactically do not qualify as the reflexive's antecedent but match the animacy requirement the reflexive imposes on its antecedent. These results are interpreted as evidence against a purely syntactic-structure based memory access. However, the exact pattern of effects observed in the data is only partially compatible with the Lewis and Vasishth cue-based parsing model.
Therefore, an extension of the Lewis and Vasishth model is proposed. Two principles are added to the original model, namely 'cue confusion' and 'distractor prominence'.
Although interference effects are generally interpreted in favor of a content-addressable memory architecture, an alternative explanation for interference effects in reflexive processing has been proposed which, crucially, might reconcile interference effects with a structure-based account.
It has been argued that interference effects do not necessarily reflect cue-based retrieval interference in a content-addressable memory, but may equally well arise from interference that has already occurred at the moment the antecedent is encoded in memory (Dillon, 2011).
Three experiments (eye-tracking and self-paced reading) on German reflexives and Swedish possessives were designed to tease apart cue-based retrieval interference from encoding interference. The results of all three experiments suggest that there is no evidence that encoding interference affects the retrieval of a reflexive's antecedent.
Taken together, these findings suggest that the processing of reflexives can be explained with the same cue-based retrieval mechanism that has been invoked to explain syntactic dependency resolution in a range of other structures. This supports the view that the language processing system is located within a general cognitive architecture, with a general-purpose content-addressable working memory system operating on linguistic expressions.
Finally, two experiments (self-paced reading and eye-tracking) using Chinese relative clauses were conducted to determine the relative contribution to sentence processing difficulty of working-memory processes as compared to syntactic prediction during incremental parsing.
Chinese has the cross-linguistically rare property of being a language with subject-verb-object word order and pre-nominal relative clauses. This property leads to opposing predictions of expectation-based
accounts and memory-based accounts with respect to the relative processing difficulty of subject vs. object relatives.
Previous studies showed contradictory results, which has been attributed to different kinds of local ambiguities confounding the materials (Lin and Bever, 2011). The two experiments presented here are the first to compare Chinese relative clauses in syntactically unambiguous contexts.
The results of both experiments were consistent with the predictions of the expectation-based account of sentence processing but not with the memory-based account. From these findings, I conclude that any theory of human sentence processing needs to take into account the power of predictive processes unfolding in the human mind.
This dissertation addresses the question of how linguistic structures can be represented in working memory. We propose a memory-based computational model that derives offline and online complexity profiles in terms of a top-down parser for minimalist grammars (Stabler, 2011). The complexity metric reflects the amount of time an item is stored in memory. The presented architecture links grammatical representations stored in memory directly to cognitive behavior by deriving predictions about sentence processing difficulty.
Results from five different sentence comprehension experiments were used to evaluate the model's assumptions about memory limitations. The predictions of the complexity metric were compared to the locality (integration and storage) cost metric of Dependency Locality Theory (Gibson, 2000). Both metrics make comparable offline and online predictions for four of the five phenomena. The key difference between the two metrics is that the proposed complexity metric accounts for the structural complexity of intervening material. In contrast, DLT's integration cost metric considers the number of discourse referents, not the syntactic structural complexity.
We conclude that the syntactic analysis plays a significant role in memory requirements of parsing. An incremental top-down parser based on a grammar formalism easily computes offline and online complexity profiles, which can be used to derive predictions about sentence processing difficulty.
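The idea of a storage-time ("tenure") complexity metric can be illustrated with a deliberately simplified stack model: the cost attached to an item is the number of parse steps it waits in memory before being integrated. This is only a toy; the dissertation's metric is defined over a top-down minimalist-grammar parser, which is far richer than this sketch.

```python
# Toy "memory tenure" metric: cost = number of parse steps an item is held
# in memory before integration. Event names and numbers are illustrative.

def tenure_profile(events):
    """events: list of ('push', id) / ('pop', id) pairs, one per parse step.
    Returns {id: steps the item was held in memory}."""
    entered, tenure = {}, {}
    for step, (op, item) in enumerate(events):
        if op == 'push':
            entered[item] = step
        else:  # 'pop': item is integrated; tenure = waiting time
            tenure[item] = step - entered.pop(item)
    return tenure

# A predicted verb held across intervening structure accumulates more
# tenure than one integrated immediately:
long_dep = [('push', 'V'), ('push', 'RC'), ('pop', 'RC'), ('pop', 'V')]
short_dep = [('push', 'V'), ('pop', 'V')]
assert tenure_profile(long_dep)['V'] > tenure_profile(short_dep)['V']
```

Unlike DLT's integration cost, a metric of this shape is sensitive to the structural complexity of the intervening material, since every extra parse step the intervener contributes lengthens the tenure of held items.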
The dissertation proposes that the spread of photography and popular cinema in 19th- and 20th-century-India have shaped an aesthetic and affective code integral to the reading and interpretation of Indian English novels, particularly when they address photography and/or cinema film, as in the case of the four corpus texts. In analyzing the nexus between ‘real’ and ‘reel’, the dissertation shows how the texts address the reader as media consumer and virtual image projector. Furthermore, the study discusses the Indian English novel against the backdrop of the cultural and medial transformations of the 20th century to elaborate how these influenced the novel’s aesthetics. Drawing upon reception aesthetics, the author devises the concept of the ‘implied spectator’ to analyze the aesthetic impact of the novels’ images as visual textures.
No God in Sight (2005) by Altaf Tyrewala comprises a string of 41 interior monologues, loosely connected through their narrators' random encounters in Mumbai in the year 2000. Although marked by continuous perspective shifts, the text creates a sensation of acute immediacy. Here, the reader is addressed as implied spectator and is sutured into the narrated world like a film spectator, an effect created through the use of continuity editing as a narrative technique.
Similarly, Ruchir Joshi’s The Last Jet Engine Laugh (2002) coll(oc)ates disparate narrative perspectives and explores photography as an artistic practice, historiographic recorder and epistemological tool. The narrative appears guided by the random viewing of old photographs by the protagonist and primary narrator, the photographer Paresh Bhatt. However, it is the photographic negative and the practice of superimposition that render this string of episodes and different perspectives narratively consequential and cosmologically meaningful. Photography thus marks the perfect symbiosis of autobiography and historiography.
Tabish Khair’s Filming. A Love Story (2007) immerses readers in the cine-aesthetic of 1930s and 40s Bombay film, the era in which the embedded plot is set. Plotline, central scenes and characters evoke the key films of Indian cinema history such as Satyajit Ray’s “Pather Panchali” or Raj Kapoor’s “Awara”. Ultimately, the text written as film dissolves the boundary between fiction and (narrated) reality, reel and real, thereby showing that the images of individual memory are inextricably intertwined with and shaped by collective memory. Ultimately, the reconstruction of the past as and through film(s) conquers trauma and endows the Partition of India as a historic experience of brutal contingency with meaning.
The Bioscope Man (Indrajit Hazra, 2008) is a picaresque narrative set in Calcutta, India's cultural capital and the birthplace of Indian cinema, at the beginning of the 20th century. The autodiegetic narrator Abani Chatterjee relates his rise and fall as a silent film star, alternating between the modes of telling and showing. He is both autodiegetic narrator and spectator or perceiving consciousness, seeing himself in his manifold screen roles. Beyond his film roles, however, the narrator remains a void. The marked psychoanalytical symbolism of the text is accentuated by repeated invocations of dark caves and the laterna magica. Here too, 'reel life' mirrors and foreshadows real life as Indian and Bengali history again interlace with private history. Abani Chatterjee thus emerges as a quintessentially modern man of no qualities who assumes definitive shape only in the lost reels of the films he starred in.
The final chapter argues that the static images and visual frames put forward in the texts serve an integral psychological function: premised upon linear perspective, they imply a singular, static subjectivity appealing to the postmodern subject. In the corpus texts, the rise of digital technology in the 1990s thus appears not so much to have displaced older image repertoires, practices, and media techniques as to have lent them greater visibility and appeal. Moreover, bricolage and pastiche emerge as cultural techniques which have marked modernity from its inception. What the novels thus perpetuate is a media archeology not entirely subservient to the poetics of the real. The permeable subject and the notion of the gaze as an active exchange, as encapsulated in the concept of darshan - ideas informing all four texts - bespeak the resilience of a mythical universe continually re-instantiated in new technologies and uses. Eventually, the novels convey a sense of subalternity to a substantially Hindu nationalist history and historiography, the centrifugal force of which developed in the twentieth century and continues into the present.
The intensification of coastal floods induced by sea level rise is a serious threat to many regions in proximity to the ocean. Although severe flood events are rare, they can entail enormous damage costs, especially when built-up areas are inundated. Fortunately, the mean sea level advances slowly, leaving society enough time to adapt to the changing environment. Most commonly, this is achieved by the construction or reinforcement of flood defence measures such as dykes or sea walls, but land use planning and disaster management are also widely discussed options. Overall, although the projection of sea level rise impacts and the elaboration of adequate response strategies are among the most prominent topics in climate impact research, global damage estimates are vague and mostly rely on the same assessment models. The thesis at hand contributes to this issue by presenting a distinctive approach which facilitates large-scale assessments as well as the comparability of results across regions. Moreover, we aim to improve the general understanding of the interplay between mean sea level rise, adaptation, and coastal flood damage.
Our undertaking is based on two basic building blocks. Firstly, we make use of macroscopic flood-damage functions, i.e. damage functions that provide the total monetary damage within a delineated region (e.g. a city) caused by a flood of a certain magnitude. After introducing a systematic methodology for the automated derivation of such functions, we apply it to a total of 140 European cities and obtain a large set of damage curves utilisable for individual as well as comparative damage assessments. By scrutinising the resulting curves, we are further able to characterise the slope of the damage functions by means of a functional model. The proposed function has in general a sigmoidal shape but exhibits a power-law increase over the relevant range of flood levels, with an average exponent of 3.4 for the considered cities. This finding represents an essential input for subsequent elaborations on the general interrelations of the involved quantities.
The second basic element of this work is extreme value theory which is employed to characterise the occurrence of flood events and in conjunction with a damage function provides the probability distribution of the annual damage in the area under study. The resulting approach is highly flexible as it assumes non-stationarity in all relevant parameters and can be easily applied to arbitrary regions, sea level, and adaptation scenarios. For instance, we find a doubling of expected flood damage in the city of Copenhagen for a rise in mean sea levels of only 11 cm. By following more general considerations, we succeed in deducing surprisingly simple functional expressions to describe the damage behaviour in a given region for varying mean sea levels, changing storm intensities, and supposed protection levels. We are thus able to project future flood damage by means of a reduced set of parameters, namely the aforementioned damage function exponent and the extreme value parameters. Similar examinations are carried out to quantify the aleatory uncertainty involved in these projections. In this regard, a decrease of (relative) uncertainty with rising mean sea levels is detected. Beyond that, we demonstrate how potential adaptation measures can be assessed in terms of a Cost-Benefit Analysis. This is exemplified by the Danish case study of Kalundborg, where amortisation times for a planned investment are estimated for several sea level scenarios and discount rates.
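The interplay of the two building blocks can be sketched numerically: a power-law damage function (using the ~3.4 exponent reported above) integrated against an extreme-value distribution of annual maximum flood levels, with mean sea level rise shifting that distribution. The Gumbel form, the parameter values, and the protection height below are all illustrative assumptions, not the thesis's fitted numbers.

```python
# Hedged sketch: expected annual flood damage from a power-law damage
# function and a Gumbel model of annual maximum flood levels. All
# parameter values are invented for illustration.
import math

def damage(h, scale=1e6, exponent=3.4):
    """Macroscopic damage function: total damage for flood level h (m)."""
    return scale * max(h, 0.0) ** exponent

def expected_annual_damage(slr, mu=1.0, beta=0.25, protection=1.5,
                           h_max=6.0, n=10000):
    """Integrate damage over the Gumbel density of annual maxima, shifted
    upward by sea level rise `slr`; levels below the protection height
    cause no damage."""
    dh = h_max / n
    total = 0.0
    for i in range(n):
        h = i * dh
        z = (h - (mu + slr)) / beta
        density = math.exp(-z - math.exp(-z)) / beta   # Gumbel pdf
        if h > protection:
            total += damage(h - protection) * density * dh
    return total

# Even a modest rise in mean sea level multiplies the expected damage:
ratio = expected_annual_damage(0.11) / expected_annual_damage(0.0)
assert ratio > 1.3
```

The strong leverage of a small sea level shift comes from the tail of the extreme-value distribution being weighted by a steep power-law damage curve, which is the qualitative mechanism behind findings such as the Copenhagen example above.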
Anthropogenic activities have transformed the Earth's environment, not only at the local level but at the planetary scale, causing global change. Besides industrialization, agriculture is a major driver of global change. This change in turn impairs the agricultural sector, reducing crop yields, notably through soil degradation, water scarcity, and climate change. However, the issue is more complex than it appears. Crop yields can be increased by the use of agrochemicals and fertilizers, which are mainly produced with fossil energy. This is important to meet the increasing food demand driven by global demographic change, which is further accelerated by changes in regional lifestyles. In this dissertation, we attempt to address this complex problem by exploring agricultural potential globally but on a local scale. For this, we considered the influence of lifestyle changes (dietary patterns) as well as technological progress and their effects on climate change, mainly greenhouse gas (GHG) emissions. Furthermore, we examined options for optimizing crop yields on currently cultivated land with the current cropping patterns by closing yield gaps. Using this, we investigated at a five-minute spatial resolution the extent to which food demand can be met locally, and/or by regional and/or global trade. Globally, food consumption habits are shifting towards calorie-rich diets. Owing to these dietary shifts combined with population growth, global food demand is expected to increase by 60-110% between 2005 and 2050. Hence, one of the challenges to global sustainability is to meet the growing food demand while, at the same time, reducing agricultural inputs and environmental consequences. To address this problem, we used several freely available datasets and applied multiple interconnected analytical approaches, including artificial neural networks, scenario analysis, data aggregation and harmonization, a downscaling algorithm, and cross-scale analysis.
Globally, we identified sixteen dietary patterns between 1961 and 2007, with food intakes ranging from 1,870 to 3,400 kcal/cap/day. These dietary patterns also reflected changing dietary habits towards meat-rich diets worldwide. Owing to the large share of animal products, very high calorie diets that are common in the developed world exhibit high total per capita emissions of 3.7-6.1 kg CO2eq./day. This is higher than the total per capita emissions of 1.4-4.5 kg CO2eq./day associated with the low and moderate calorie diets that are common in developing countries. Currently, 40% of global crop calories are fed to livestock, and the feed calorie use is four times the produced animal calories. However, these values vary from less than 1 kcal to greater than 10 kcal around the world. On the local and national scale, we found that local and national food production could meet the demand of 1.9 and 4.4 billion people in 2000, respectively. However, 1 billion people in Asia and Africa require intercontinental agricultural trade to meet their food demand. Nevertheless, these regions can become food self-sufficient by closing yield gaps, which requires location-specific inputs and agricultural management strategies. Such strategies include fertilizers, pesticides, soil and land improvement, management targeted at mitigating climate-induced yield variability, and improved market accessibility. However, closing yield gaps in particular requires global N-fertilizer application to increase by 45-73%, P2O5 by 22-46%, and K2O by a factor of 2-3 compared to 2010. Considering population growth, we found that global agricultural GHG emissions will approach 7 Gt CO2eq./yr by 2050, while the global livestock feed demand will remain similar to that of 2000. This changes tremendously when diet shifts are also taken into account, resulting in GHG emissions of 20 Gt CO2eq./yr and an increase of 1.3 times in the crop-based feed demand between 2000 and 2050.
However, when population growth, diet shifts, and technological progress by 2050 were considered together, GHG emissions can be reduced to 14 Gt CO2eq./yr and the feed demand to nearly 1.8 times that of 2000. Additionally, our findings show that, depending on the progress made in closing yield gaps, the number of people depending on international trade can vary between 1.5 and 6 billion by 2050. In the medium term, this requires additional fossil energy. Furthermore, climate change, by affecting crop yields, will increase the need for international agricultural trade by 4% to 16%.
In summary, three general conclusions are drawn from this dissertation. First, changing dietary patterns will significantly increase crop demand, agricultural GHG emissions, and international food trade in the future when compared to population growth alone. Second, such increments can be reduced by technology transfer and technological progress that enhance crop yields, decrease agricultural emission intensities, and increase livestock feed conversion efficiencies. Moreover, international trade dependency can be lowered by consuming local and regional food products, by producing diverse types of food, and by closing yield gaps. Third, location-specific inputs and management options are required to close yield gaps. The sustainability of such inputs and management largely depends on which options are chosen and how they are implemented. However, while not every cultivated area needs to attain its potential yield to enable food security, closing yield gaps alone may not be enough to achieve food self-sufficiency in some regions. Hence, a combination of sustainably implemented agricultural intensification, expansion, and trade, as well as a shift in dietary habits towards a lower share of animal products, is required to feed the growing population.
Nowadays, business processes are increasingly supported by IT services that produce massive amounts of event data during process execution. Aiming at a better process understanding and improvement, this event data can be used to analyze processes using process mining techniques. Process models can be automatically discovered and the execution can be checked for conformance to specified behavior. Moreover, existing process models can be enhanced and annotated with valuable information, for example for performance analysis. While the maturity of process mining algorithms is increasing and more tools are entering the market, process mining projects still face the problem of different levels of abstraction when comparing events with modeled business activities. Mapping the recorded events to activities of a given process model is essential for conformance checking, annotation and understanding of process discovery results. Current approaches try to abstract from events in an automated way that does not capture the required domain knowledge to fit business activities. Such techniques can be a good way to quickly reduce complexity in process discovery. Yet, they fail to enable techniques like conformance checking or model annotation, and potentially create misleading process discovery results by not using the known business terminology.
In this thesis, we develop approaches that abstract an event log to the same level that is needed by the business. Typically, this abstraction level is defined by a given process model. Thus, the goal of this thesis is to match events from an event log to activities in a given process model. To accomplish this goal, behavioral and linguistic aspects of process models and event logs, as well as domain knowledge captured in existing process documentation, are taken into account to build semi-automatic matching approaches. The approaches establish a pre-processing step for every available process mining technique that produces or annotates a process model, thereby reducing the manual effort for process analysts. While each of the presented approaches can be used in isolation, we also introduce a general framework for the integration of different matching approaches.
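One linguistic ingredient of such event-to-activity matching can be sketched as label similarity between event names and model activity labels. The sketch below uses simple token overlap (Jaccard); the labels and the threshold are invented for illustration, and the thesis's actual approaches additionally exploit behavioral constraints and process documentation.

```python
# Toy sketch of label-based event-to-activity matching. Labels and the
# threshold are illustrative assumptions.

def tokens(label):
    return set(label.lower().replace('_', ' ').split())

def jaccard(a, b):
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)

def match_events(event_names, activities, threshold=0.4):
    """Propose, for each event, the best-scoring activity above threshold."""
    mapping = {}
    for ev in event_names:
        best = max(activities, key=lambda act: jaccard(ev, act))
        if jaccard(ev, best) >= threshold:
            mapping[ev] = best
    return mapping

events = ["receive order", "check credit limit", "ship order"]
model = ["Receive Order", "Check Credit", "Ship Order"]
mapping = match_events(events, model)
assert mapping["check credit limit"] == "Check Credit"
```

A mapping of this kind is exactly the pre-processing needed before conformance checking or model annotation can be applied to the raw log.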
The approaches have been evaluated in case studies with industry and using a large industry process model collection and simulated event logs. The evaluation demonstrates the effectiveness and efficiency of the approaches and their robustness towards nonconforming execution logs.
Methicillin-resistant Staphylococcus aureus (MRSA) is one of the most important antibiotic-resistant pathogens in hospitals and the community. Recently, a new generation of MRSA, the so-called livestock-associated (LA) MRSA, has emerged, occupying food-producing animals as a new niche. LA-MRSA can be regularly isolated from economically important livestock species as well as the corresponding meats. The present thesis takes a methodological approach to confirm the hypothesis that LA-MRSA are transmitted along the pork, poultry, and beef production chains from animals on the farm to meat on the consumer's table. To this end, two new concepts were developed, adapted to the differing data sets.
A mathematical model of the pig slaughter process was developed which simulates the change in MRSA carcass prevalence during slaughter, with special emphasis on identifying critical process steps for MRSA transmission. Based on prevalences as the sole input variables, the model framework is able to estimate the average value range of both the MRSA elimination and contamination rates of each of the slaughter steps. These rates are then used to set up a Monte Carlo simulation of the slaughter process chain. The model concludes that, regardless of the initial extent of MRSA contamination, low outcome prevalences ranging between 0.15 and 1.15% can be achieved among carcasses at the end of slaughter. Thus, the model demonstrates that the standard procedure of pig slaughtering in principle includes process steps with the capacity to limit MRSA cross-contamination. Scalding and singeing were identified as the critical process steps for a significant reduction of superficial MRSA contamination.
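The structure of such a chain simulation can be sketched as follows: each process step carries an elimination rate (a positive carcass becomes negative) and a contamination rate (a negative carcass becomes positive). The step names and all rates below are invented for illustration; only the structure mirrors the described model, not the thesis's estimated values.

```python
# Hedged Monte Carlo sketch of a slaughter-chain prevalence model.
# All step rates are illustrative assumptions.
import random

def simulate_chain(initial_prev, steps, n_carcasses=100_000, seed=1):
    """steps: list of (elimination_rate, contamination_rate) per process
    step. Returns the final MRSA prevalence among carcasses."""
    rng = random.Random(seed)
    positive = sum(rng.random() < initial_prev for _ in range(n_carcasses))
    for elim, contam in steps:
        negative = n_carcasses - positive
        # Per-carcass Bernoulli trials, kept simple for the sketch:
        cleared = sum(rng.random() < elim for _ in range(positive))
        infected = sum(rng.random() < contam for _ in range(negative))
        positive = positive - cleared + infected
    return positive / n_carcasses

steps = [(0.30, 0.010),   # stunning/bleeding
         (0.95, 0.001),   # scalding: strong superficial reduction
         (0.90, 0.001),   # singeing
         (0.20, 0.020)]   # evisceration: recontamination possible
# High and low initial prevalences converge to similarly low outcomes:
high = simulate_chain(0.70, steps)
low = simulate_chain(0.10, steps)
assert high < 0.05 and low < 0.05
```

The convergence of widely different input prevalences to a narrow low output range is the qualitative behaviour the model attributes to the decontaminating steps (here the invented scalding and singeing rates dominate).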
In the course of the German national monitoring program for zoonotic agents, MRSA prevalence and typing data are regularly collected covering the key steps of different food production chains. A new statistical approach has been proposed for analyzing this cross-sectional set of MRSA data with the aim of revealing potential farm-to-fork transmission. For this purpose, chi-squared statistics were combined with the calculation of the Czekanowski similarity index to compare the distributions of strain-specific characteristics between the samples from the farm, carcasses after slaughter, and meat at retail. The method was applied to the turkey and veal production chains, and the consistently high degrees of similarity revealed between all sample pairs indicate MRSA transmission along the chain.
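The Czekanowski (proportional-similarity) index itself is simple: the sum of the element-wise minima of two relative-frequency distributions over the same categories. In the sketch below, the spa-type frequencies at the three chain stages are invented for illustration.

```python
# Sketch of the Czekanowski similarity index for comparing strain-type
# distributions between sampling points; frequencies are illustrative.

def czekanowski(p, q):
    """Proportional similarity of two discrete distributions over shared
    categories: 1 = identical, 0 = disjoint."""
    cats = set(p) | set(q)
    return sum(min(p.get(c, 0.0), q.get(c, 0.0)) for c in cats)

# Invented spa-type frequency distributions along a production chain:
farm    = {"t011": 0.55, "t034": 0.30, "t108": 0.15}
carcass = {"t011": 0.50, "t034": 0.35, "t571": 0.15}
retail  = {"t011": 0.45, "t034": 0.30, "t571": 0.10, "t108": 0.15}

# Consistently high similarity between neighbouring stages is the pattern
# interpreted as farm-to-fork transmission:
assert czekanowski(farm, carcass) > 0.7
assert czekanowski(carcass, retail) > 0.7
```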
As the proposed methods are not specific to process chains or pathogens they offer a broad field of application and extend the spectrum of methods for bacterial transmission assessment.
The continuously increasing demand for rare earth elements in the components of modern technologies brings the detection of new deposits into the focus of global exploration. One promising method for mapping important deposits globally is remote sensing, which has been used for a wide range of mineral mapping tasks in the past. This doctoral thesis investigates the capacity of hyperspectral remote sensing for the detection of rare earth element deposits. The definition and realization of a fundamental database on the spectral characteristics of rare earth oxides, rare earth metals, and rare earth element-bearing materials formed the basis of this thesis. To investigate these characteristics in the field, hyperspectral images of four outcrops in the Fen Complex, Norway, were collected in the near-field. A new methodology (named REEMAP) was developed to delineate rare earth element-enriched zones. The main steps of REEMAP are: 1) multitemporal weighted averaging of multiple images covering the sample area; 2) sharpening the rare earth-related signals using a Gaussian high-pass deconvolution technique calibrated on the standard deviation of a Gaussian bell-shaped curve represented by the full width at half maximum of the target absorption band; 3) mathematical modeling of the target absorption band and highlighting of rare earth elements. REEMAP was further adapted to different hyperspectral sensors (EO-1 Hyperion and EnMAP) and a new test site (Lofdal, Namibia). Additionally, the hyperspectral signatures of associated minerals were investigated to serve as proxies for the host rocks. Finally, the capacity and limitations of spectroscopic rare earth element detection approaches in general, and of the REEMAP approach specifically, were investigated and discussed. One result of this doctoral thesis is that eight rare earth oxides show robust absorption bands and can therefore be used for hyperspectral detection methods.
Additionally, the spectral signatures of iron oxides, iron-bearing sulfates, calcite, and kaolinite can be used to detect metasomatic alteration zones and highlight the ore zone. One of the key results of this doctoral work is the REEMAP approach itself, which can be applied from the near-field to space and enables rare earth element mapping even for noisy images. Limiting factors are a low signal-to-noise ratio, a reduced spectral resolution, overlying materials, atmospheric absorption residuals, and non-optimal illumination conditions. Another key result of this doctoral thesis is the finding that the future hyperspectral EnMAP satellite (with its specifications as published in June 2015) will theoretically be capable of detecting absorption bands of erbium, dysprosium, holmium, neodymium, europium, thulium, and samarium. This thesis thus presents a new methodology, REEMAP, that enables spatially extensive and rapid hyperspectral detection of rare earth elements, meeting the demand for fast, extensive, and efficient rare earth exploration (from near-field to space).
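The core measurement behind absorption-band detection can be illustrated with continuum removal: interpolate a straight continuum between the band shoulders and measure the relative depth of the dip. The synthetic spectrum, band position, and all numbers below are invented for illustration (the Gaussian dip stands in for a rare-earth-related absorption feature); this is not the REEMAP algorithm itself.

```python
# Illustrative continuum-removed band-depth measurement on a synthetic
# reflectance spectrum; all values are invented assumptions.
import math

def band_depth(wavelengths, reflectance, left, right, centre):
    """Continuum-removed depth of an absorption band: interpolate a straight
    continuum between the shoulders and measure the relative dip at centre."""
    def value_at(w):
        # nearest-sample lookup, adequate for a sketch
        i = min(range(len(wavelengths)), key=lambda k: abs(wavelengths[k] - w))
        return reflectance[i]
    rl, rr = value_at(left), value_at(right)
    continuum = rl + (rr - rl) * (centre - left) / (right - left)
    return 1.0 - value_at(centre) / continuum

# Synthetic spectrum with a Gaussian-shaped dip at 740 nm:
wl = [700 + 2 * i for i in range(51)]                    # 700-800 nm
refl = [0.6 - 0.15 * math.exp(-((w - 740) / 8) ** 2) for w in wl]
depth = band_depth(wl, refl, left=700, right=800, centre=740)
assert depth > 0.2
```

A per-pixel band-depth map of this kind is the simplest form of the "highlighting" step; noise sensitivity of such depth estimates is one reason a deconvolution-based sharpening stage helps.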
Analysis and modeling of transient earthquake patterns and their dependence on local stress regimes
(2015)
Investigations in the field of earthquake triggering and associated interactions, which include aftershock triggering as well as induced seismicity, are important for seismic hazard assessment because of the destructive power of earthquakes. One approach to studying earthquake triggering and these interactions is the use of statistical earthquake models, which are based on knowledge of the basic properties of seismicity, in particular the magnitude distribution and the spatiotemporal properties of the triggered events.
In my PhD thesis I focus on some specific aspects of aftershock properties, namely the relative seismic moment release of aftershocks with respect to their mainshocks; the spatial correlation between aftershock occurrence and fault deformation; and the influence of aseismic transients on the estimation of aftershock parameters. For the analysis of aftershock sequences I choose a statistical approach, in particular the well-known Epidemic Type Aftershock Sequence (ETAS) model, which accounts for both background and triggered seismicity. For my specific purposes, I develop two modifications of the ETAS model in collaboration with Sebastian Hainzl. By means of this approach, I estimate the statistical aftershock parameters and perform simulations of aftershock sequences.
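The temporal core of the ETAS model described above combines a constant background rate with Omori-Utsu aftershock kernels scaled by an exponential productivity law. The following minimal sketch of the conditional intensity is illustrative only; the parameter names and this purely temporal form are my assumptions, not the thesis implementation.

```python
import numpy as np

def etas_intensity(t, event_times, event_mags, mu, K, alpha, c, p, m0):
    """Temporal ETAS conditional intensity at time t: background rate mu
    plus Omori-Utsu contributions from all events occurring before t.
    Each event of magnitude m contributes
    K * exp(alpha * (m - m0)) * (t - t_i + c)**(-p)."""
    event_times = np.asarray(event_times, dtype=float)
    event_mags = np.asarray(event_mags, dtype=float)
    past = event_times < t
    contrib = (K * np.exp(alpha * (event_mags[past] - m0))
               * (t - event_times[past] + c) ** (-p))
    return mu + contrib.sum()
```

With no prior events the intensity reduces to the background rate mu; after an event the triggered contribution decays as a power law in time, which is the clustering behaviour the parameter estimation exploits.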
In the case of the seismic moment release of aftershocks, I focus on the ratio of their cumulative seismic moment to that of the mainshocks. Specifically, I investigate this ratio with respect to the focal mechanism of the mainshock and estimate an effective magnitude, which represents the cumulative aftershock energy (similar to Båth's law, which defines the average difference between the mainshock magnitude and the largest aftershock magnitude). Furthermore, I compare the observed seismic moment ratios with the results of ETAS simulations. In particular, I test a restricted ETAS (RETAS) model, which is based on the results of a clock-advance model and static stress triggering.
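The notion of an effective magnitude for the cumulative aftershock release can be illustrated by summing aftershock seismic moments on the standard Mw scale and converting the total back to a magnitude. This is a sketch of the concept under the standard Mw-moment relation (M0 = 10^(1.5 Mw + 9.1) N·m), not the thesis code.

```python
import numpy as np

def moment_from_mw(mw):
    # Seismic moment in N*m from moment magnitude (standard Mw scale).
    return 10.0 ** (1.5 * np.asarray(mw, dtype=float) + 9.1)

def effective_magnitude(aftershock_mags):
    # Magnitude of a single hypothetical event carrying the summed
    # seismic moment of all aftershocks in the sequence.
    total_moment = moment_from_mw(aftershock_mags).sum()
    return (np.log10(total_moment) - 9.1) / 1.5
```

A single aftershock trivially yields its own magnitude; two equal-magnitude aftershocks raise the effective magnitude by (2/3) log10(2), about 0.2 magnitude units, because moments add linearly while magnitudes are logarithmic.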
In my second approach, to analyze spatial variations of the triggering parameters, I focus on the aftershocks triggered by large mainshocks and study the spatial distribution of the aftershock parameters and their correlation with coseismic/postseismic slip and interseismic locking. To invert for the aftershock parameters I improve the modified ETAS (m-ETAS) model, which takes the spatial extent of the mainshock rupture into account. I compare the results obtained by the classical approach with the output of the m-ETAS model.
My third approach is concerned with the temporal clustering of seismicity, which might not only be related to earthquake-earthquake interactions but also to a time-dependent background rate, potentially biasing the parameter estimates. Thus, my coauthors and I also apply a modification of the ETAS model that takes time-dependent background activity into account. It is applicable in two different cases: when an aftershock catalog is temporally incomplete, or when the background seismicity rate changes with time due to the presence of aseismic forcing.
An essential part of any research is the testing of the developed models on observational data sets appropriate for the particular study case. Therefore, in the case of seismic moment release I use a global seismicity catalog. For the spatial distribution of the triggering parameters I exploit the aftershock sequences of the Mw 8.8 2010 Maule (Chile) and Mw 9.0 2011 Tohoku (Japan) mainshocks. In addition, I use published geodetic slip models from different authors. To test our ability to detect aseismic transients, my coauthors and I use data sets from Western Bohemia (Central Europe) and California.
Our results indicate that:
(1) the seismic moment of aftershocks relative to their mainshocks depends on the static stress changes and is maximal for normal, intermediate for thrust and minimal for strike-slip stress regimes; the RETAS model reproduces these observations well;
(2) the spatial distribution of aftershock parameters, obtained by the m-ETAS model, shows anomalous values in areas of reactivated crustal fault systems. In addition, the aftershock density is found to be correlated with the coseismic slip gradient, afterslip, interseismic coupling and b-values. The aftershock seismic moment is positively correlated with the areas of maximum coseismic slip and with interseismically locked areas. These correlations might be related to the stress level or to spatial variations in material properties;
(3) ignoring aseismic transient forcing or temporal catalog incompleteness can lead to significant under- or overestimation of the underlying triggering parameters. When a catalog is complete, the method also helps to identify aseismic sources.