Optical properties of modified diamondoids have been studied theoretically using vibrationally resolved electronic absorption, emission and resonance Raman spectra. A time-dependent correlation function approach has been used for electronic two-state models, comprising a ground state (g) and a bright, excited state (e), the latter determined from linear-response, time-dependent density functional theory (TD-DFT). The harmonic and Condon approximations were adopted. In most cases, origin shifts, frequency alteration and Duschinsky rotation in excited states were considered. For other cases, where excited-state geometry optimization and normal mode analysis were not possible or desired, a short-time approximation was used. The optical properties and spectra have been computed for (i) a set of recently synthesized sp2/sp3 hybrid species with C=C double-bond connected saturated diamondoid subunits, (ii) functionalized (mostly by thiol or thione groups) diamondoids and (iii) urotropine and other C-substituted diamondoids. The ultimate goal is to tailor optical and electronic features of diamondoids by electronic blending, functionalization and substitution, based on a molecular-level understanding of the ongoing photophysics.
Analysis and modeling of transient earthquake patterns and their dependence on local stress regimes
(2015)
Investigations in the field of earthquake triggering and associated interactions, which include aftershock triggering as well as induced seismicity, are important for seismic hazard assessment because of the destructive power of earthquakes. One approach to studying earthquake triggering and interactions is the use of statistical earthquake models, which are based on knowledge of the basic seismicity properties, in particular the magnitude distribution and the spatiotemporal properties of the triggered events.
In my PhD thesis I focus on some specific aspects of aftershock properties, namely the relative seismic moment release of the aftershocks with respect to the mainshocks; the spatial correlation between aftershock occurrence and fault deformation; and the influence of aseismic transients on aftershock parameter estimation. For the analysis of aftershock sequences I choose a statistical approach, in particular the well-known Epidemic Type Aftershock Sequence (ETAS) model, which accounts for the contributions of background and triggered seismicity. For my specific purposes, I develop two ETAS model modifications in collaboration with Sebastian Hainzl. By means of this approach, I estimate the statistical aftershock parameters and perform simulations of aftershock sequences as well.
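The temporal core of the ETAS model is a conditional intensity combining a constant background rate with Omori-Utsu decaying, magnitude-weighted aftershock contributions. The sketch below is a minimal illustration with invented parameter values, not the thesis's fitted model; the modified versions discussed here add, among other things, spatially extended ruptures and a time-dependent background on top of this:

```python
def etas_rate(t, events, mu=0.2, K=0.05, alpha=1.0, c=0.01, p=1.1, m_c=3.0):
    """Conditional intensity (events per day) of a minimal temporal ETAS model.

    events: list of (t_i, m_i) tuples of past occurrence times and magnitudes.
    mu:     constant background rate (the thesis relaxes this assumption).
    All parameter values here are illustrative, not fitted.
    """
    triggered = sum(
        K * 10.0 ** (alpha * (m_i - m_c)) * (t - t_i + c) ** (-p)
        for t_i, m_i in events if t_i < t
    )
    return mu + triggered

# A magnitude-6 mainshock at t = 0 dominates the rate shortly afterwards:
catalog = [(0.0, 6.0), (0.5, 4.2), (1.3, 3.8)]
print(etas_rate(2.0, catalog))
```

With an empty catalog the rate reduces to the background rate mu, which is the natural sanity check for such an implementation.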
In the case of seismic moment release of aftershocks, I focus on the ratio of the cumulative seismic moment of the aftershocks to that of the mainshocks. Specifically, I investigate this ratio with respect to the focal mechanism of the mainshock and estimate an effective magnitude, which represents the cumulative aftershock energy (similar to Båth's law, which defines the average difference between the mainshock and largest aftershock magnitudes). Furthermore, I compare the observed seismic moment ratios with the results of ETAS simulations. In particular, I test a restricted ETAS (RETAS) model, which is based on the results of a clock-advance model and static stress triggering.
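One way to express the cumulative aftershock moment as a single "effective magnitude" is to invert the standard Hanks-Kanamori moment-magnitude relation; the thesis's exact definition may differ, so treat the following as an illustrative sketch:

```python
import math

def moment_from_mw(mw):
    # Seismic moment in N*m from moment magnitude (Hanks & Kanamori relation).
    return 10.0 ** (1.5 * mw + 9.1)

def effective_magnitude(aftershock_mws):
    # Magnitude of a single hypothetical event carrying the whole
    # cumulative aftershock moment.
    total = sum(moment_from_mw(m) for m in aftershock_mws)
    return (math.log10(total) - 9.1) / 1.5

# Illustrative sequence: the effective magnitude sits close to the largest
# aftershock, because moment grows exponentially with magnitude.
print(effective_magnitude([5.5, 5.0, 4.8, 4.5, 4.0]))
```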
In my second approach, to analyze spatial variations of the triggering parameters, I focus on the aftershock occurrence triggered by large mainshocks and study the distribution of the aftershock parameters and their spatial correlation with the coseismic/postseismic slip and interseismic locking. To invert the aftershock parameters I improve the modified ETAS (m-ETAS) model, which is able to take the extension of the mainshock rupture into account. I compare the results obtained by the classical approach with the output of the m-ETAS model.
My third approach is concerned with the temporal clustering of seismicity, which might be related not only to earthquake-earthquake interactions but also to a time-dependent background rate, potentially biasing the parameter estimations. Thus, my coauthors and I also applied a modification of the ETAS model that is able to take time-dependent background activity into account. It is applicable in two different cases: when an aftershock catalog is temporally incomplete, or when the background seismicity rate changes with time due to the presence of aseismic forcing.
An essential part of any research is testing the developed models on observational data sets appropriate for the particular study case. Therefore, in the case of seismic moment release I use the global seismicity catalog. For the spatial distribution of triggering parameters I exploit two aftershock sequences, of the Mw 8.8 2010 Maule (Chile) and the Mw 9.0 2011 Tohoku (Japan) mainshocks. In addition, I use published geodetic slip models from different authors. To test our ability to detect aseismic transients, my coauthors and I use data sets from Western Bohemia (Central Europe) and California.
Our results indicate that:
(1) the seismic moment of aftershocks relative to the mainshocks depends on the static stress changes and is maximal for normal, intermediate for thrust and minimal for strike-slip stress regimes, and the RETAS model shows good correspondence with these observations;
(2) the spatial distribution of aftershock parameters, obtained by the m-ETAS model, shows anomalous values in areas of reactivated crustal fault systems. In addition, the aftershock density is found to be correlated with the coseismic slip gradient, afterslip, interseismic coupling and b-values. The aftershock seismic moment is positively correlated with the areas of maximum coseismic slip and interseismically locked areas. These correlations might be related to the stress level or to spatial variations of material properties;
(3) ignoring aseismic transient forcing or temporal catalog incompleteness can lead to significant under- or overestimation of the underlying triggering parameters. When a catalog is complete, this method helps to identify aseismic sources.
In many procedures of seismic risk mitigation, ground motion simulations are needed to test systems or improve their effectiveness. For example, they may be used to estimate the level of ground shaking caused by future earthquakes. Good physical models for ground motion simulation are also thought to be important for hazard assessment, as they could close gaps in the existing datasets. Since the observed ground motion in nature shows a certain variability, part of which cannot be explained by macroscopic parameters such as the magnitude or position of an earthquake, a good physical model should not only be able to produce one single seismogram but also reproduce this natural variability.
In this thesis, I develop a method to model realistic ground motions in a way that is computationally simple to handle, permitting multiple scenario simulations. I focus on two aspects of ground motion modelling. First, I use deterministic wave propagation for the whole frequency range – from static deformation to approximately 10 Hz – but account for source variability by implementing self-similar slip distributions and rough fault interfaces. Second, I scale the source spectrum so that the modelled waveforms represent the correct radiated seismic energy. With this scaling I verify whether the energy magnitude is suitable as an explanatory variable, which characterises the amount of energy radiated at high frequencies – the advantage of the energy magnitude being that it can be deduced from observations, even in real-time.
Applications of the developed method for the 2008 Wenchuan (China) earthquake, the 2003 Tokachi-Oki (Japan) earthquake and the 1994 Northridge (California, USA) earthquake show that the fine source discretisations combined with the small scale source variability ensure that high frequencies are satisfactorily introduced, justifying the deterministic wave propagation approach even at high frequencies. I demonstrate that the energy magnitude can be used to calibrate the high-frequency content in ground motion simulations.
Because deterministic wave propagation is applied to the whole frequency range, the simulation method permits the quantification of the variability in ground motion due to parametric uncertainties in the source description. A large number of scenario simulations for an M=6 earthquake show that the roughness of the source as well as the distribution of fault dislocations have a minor effect on the simulated variability by diminishing directivity effects, while hypocenter location and rupture velocity more strongly influence the variability. The uncertainty in energy magnitude, however, leads to the largest differences of ground motion amplitude between different events, resulting in a variability which is larger than the one observed.
For the presented approach, this dissertation shows (i) the verification of the computational correctness of the code, (ii) the ability to reproduce observed ground motions and (iii) the validation of the simulated ground motion variability. Those three steps are essential to evaluate the suitability of the method for means of seismic risk mitigation.
Methicillin-resistant Staphylococcus aureus (MRSA) is one of the most important antibiotic-resistant pathogens in hospitals and the community. Recently, a new generation of MRSA, the so-called livestock-associated (LA) MRSA, has emerged, occupying food-producing animals as a new niche. LA-MRSA can be regularly isolated from economically important livestock species as well as the corresponding meats. The present thesis takes a methodological approach to confirm the hypothesis that LA-MRSA are transmitted along the pork, poultry and beef production chains from animals on the farm to meat on the consumer's table. To this end, two new concepts were developed, adapted to differing data sets.
A mathematical model of the pig slaughter process was developed which simulates the change in MRSA carcass prevalence during slaughter, with special emphasis on identifying critical process steps for MRSA transmission. Based on prevalences as the sole input variables, the model framework is able to estimate the average value range of both the MRSA elimination and contamination rate of each of the slaughter steps. These rates are then used to set up a Monte Carlo simulation of the slaughter process chain. The model concludes that, regardless of the initial extent of MRSA contamination, low outcome prevalences ranging between 0.15 % and 1.15 % can be achieved among carcasses at the end of slaughter. Thus, the model demonstrates that the standard procedure of pig slaughtering in principle includes process steps with the capacity to limit MRSA cross-contamination. Scalding and singeing were identified as the critical process steps for a significant reduction of superficial MRSA contamination.
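A chain of per-step elimination and contamination rates can be propagated through a Monte Carlo simulation along the following lines; the step list and rate ranges below are invented for illustration and are not the model's estimated values:

```python
import random

def simulate_chain(p0, steps, n_runs=10000, seed=1):
    """Monte Carlo of carcass MRSA prevalence through slaughter steps.

    steps: list of (elim_range, contam_range) tuples; per run, an
    elimination and a contamination probability are drawn uniformly
    from each range.  The ranges used below are illustrative only.
    Returns the mean outcome prevalence over all runs.
    """
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n_runs):
        p = p0
        for (e_lo, e_hi), (c_lo, c_hi) in steps:
            e = rng.uniform(e_lo, e_hi)  # prob. a positive carcass is cleared
            c = rng.uniform(c_lo, c_hi)  # prob. a negative carcass is contaminated
            p = p * (1.0 - e) + (1.0 - p) * c
        outcomes.append(p)
    return sum(outcomes) / n_runs

# Scalding and singeing modelled as strongly eliminating steps:
chain = [((0.90, 0.99), (0.000, 0.002)),   # scalding
         ((0.85, 0.98), (0.000, 0.002)),   # singeing
         ((0.05, 0.20), (0.001, 0.010))]   # polishing / evisceration
print(simulate_chain(p0=0.60, steps=chain))
```

Even with a high initial prevalence, strongly eliminating early steps drive the outcome prevalence into the low-percent range, qualitatively matching the behaviour described above.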
In the course of the German national monitoring program for zoonotic agents, MRSA prevalence and typing data are regularly collected covering the key steps of different food production chains. A new statistical approach has been proposed for analyzing this cross-sectional set of MRSA data with the aim of revealing potential farm-to-fork transmission. For this purpose, chi-squared statistics was combined with the calculation of the Czekanowski similarity index to compare the distributions of strain-specific characteristics between the samples from the farm, carcasses after slaughter and meat at retail. The method was applied to the turkey and veal production chains, and the consistently high degrees of similarity found between all sample pairs indicate MRSA transmission along the chain.
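The Czekanowski (proportional-similarity) index has a simple closed form: the sum, over all strain types, of the minimum of the two relative frequencies. A sketch with hypothetical spa-type counts:

```python
def czekanowski(counts_a, counts_b):
    """Czekanowski (proportional-similarity) index between two samples.

    counts_a / counts_b: dicts mapping a strain characteristic
    (e.g. spa type) to its observed count.  Returns a value in
    [0, 1]; 1 means identical type distributions.
    """
    types = set(counts_a) | set(counts_b)
    tot_a = sum(counts_a.values()) or 1
    tot_b = sum(counts_b.values()) or 1
    return sum(min(counts_a.get(t, 0) / tot_a,
                   counts_b.get(t, 0) / tot_b) for t in types)

# Hypothetical spa-type counts at farm vs. retail meat:
farm = {"t011": 40, "t034": 30, "t2011": 10}
meat = {"t011": 22, "t034": 18, "t108": 5}
print(czekanowski(farm, meat))
```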
As the proposed methods are not specific to process chains or pathogens they offer a broad field of application and extend the spectrum of methods for bacterial transmission assessment.
In recent decades, a trend towards corporatisation has been observed in many municipalities. A large share of public service providers is now run as companies under private law in a competitive environment. While many researchers study hive-offs in the form of subordinate agencies at the federal level and describe this wave of reform as a de facto autonomisation process, only a few studies explicitly address autonomisation tendencies at the municipal level. Empirical evidence on the governance of municipal corporate holdings is therefore lacking.
This study examines the governance arrangements of large German cities for the first time from the perspective of the governed. Its aim is to identify tendencies towards greater managerial flexibility in majority municipally-owned companies and the factors that explain them. The research question is: Which instrumental and relational factors influence management autonomy in majority municipal holdings?
Of particular interest is the influence municipalities exert on different fields of activity of their hived-off entities. Almost no empirical evidence on these company-specific issues exists for Germany, and very little internationally. To answer the research question, the author developed an analytical framework based on transaction cost theory and social exchange theory. The hypotheses were tested empirically with a large-scale survey of 243 companies in the 39 largest German cities.
The study yields several empirical findings. First, factor analysis identified four independent dimensions of management autonomy in municipal companies: personnel autonomy, general management, pricing autonomy and strategic issues. While municipalities grant their holdings a high degree of personnel autonomy, strategic investment decisions in particular, such as financial stakes in subsidiaries, large projects, diversification decisions or borrowing, are subject to strong political influence.
Second, a change of legal form combined with placement in a competitive environment (also known as corporatisation) primarily increases the flexibility of personnel and pricing policy, but has little effect on the other dimensions of management autonomy, general management and strategic decisions. Municipalities thus retain their ability to influence key corporate decisions of their holdings, even in the case of formal privatisation.
Finally, transaction-cost-based and relational factors complement each other in explaining the autonomy dimensions. Among the transaction characteristics, the perceived competition in the sector, the measurability of performance, sector variables, the number of politicians on the supervisory board and the governance mechanisms employed are most influential. Among the relational factors, mutual trust, the effectiveness of the supervisory boards, information exchange, role conflicts, role ambiguity and the managing director's experience in the sector prevail.
Many chemical reactions in biological cells occur at very low concentrations of the constituent molecules. Thus, transcriptional gene regulation is often controlled by poorly expressed transcription factors, such as the E. coli lac repressor, present in a few tens of copies. Here we study the effects of inherent concentration fluctuations of substrate molecules on the seminal Michaelis-Menten scheme of biochemical reactions. We present a universal correction to the Michaelis-Menten equation for the reaction rates. The relevance and validity of this correction for enzymatic reactions and intracellular gene regulation is demonstrated. Our analytical theory and simulation results confirm that the proposed variance-corrected Michaelis-Menten equation predicts the rate of reactions with remarkable accuracy even in the presence of large non-equilibrium concentration fluctuations. The major advantage of our approach is that it involves only the mean and variance of the substrate-molecule concentration. Our theory is therefore accessible to experiments and not specific to the exact source of the concentration fluctuations.
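The abstract does not state the corrected equation itself. A generic second-order (Taylor) variance correction of the Michaelis-Menten rate, which likewise involves only the mean and variance of the substrate concentration, can be sketched as follows; this is illustrative and not necessarily the paper's exact result:

```python
def mm_rate(s, vmax, km):
    # Classical Michaelis-Menten rate at substrate concentration s.
    return vmax * s / (km + s)

def mm_rate_var_corrected(s_mean, s_var, vmax, km):
    """Second-order Taylor correction for substrate fluctuations:
    E[v(S)] ~ v(<S>) + 0.5 * v''(<S>) * Var(S), with
    v''(s) = -2 * vmax * km / (km + s)**3.  Because v is concave,
    fluctuations always lower the expected rate in this approximation.
    """
    correction = -vmax * km * s_var / (km + s_mean) ** 3
    return mm_rate(s_mean, vmax, km) + correction

# Large fluctuations at low copy numbers noticeably reduce the rate:
print(mm_rate(10.0, vmax=1.0, km=5.0))
print(mm_rate_var_corrected(10.0, s_var=25.0, vmax=1.0, km=5.0))
```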
Background
Generating percentile values is helpful for the identification of children with specific fitness characteristics (i.e., low or high fitness level) to set appropriate fitness goals (i.e., fitness/health promotion and/or long-term youth athlete development). Thus, the aim of this longitudinal study was to assess physical fitness development in healthy children aged 9–12 years and to compute sex- and age-specific percentile values.
Methods
Two hundred and forty children (88 girls, 152 boys) participated in this study and were tested for their physical fitness. Physical fitness was assessed using the 50-m sprint test (i.e., speed), the 1-kg ball push test, the triple hop test (i.e., upper- and lower-extremity muscular power), the stand-and-reach test (i.e., flexibility), the star run test (i.e., agility), and the 9-min run test (i.e., endurance). Age- and sex-specific percentile values (i.e., P10 to P90) were generated using the Lambda, Mu, and Sigma method. Adjusted (for change in body weight, height, and baseline performance) age- and sex-differences as well as the interactions thereof were expressed by calculating effect sizes (Cohen’s d).
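Given fitted L (Box-Cox power), M (median) and S (coefficient of variation) values, the LMS method converts a percentile into a test score via a closed-form expression. A sketch; the parameter values below are hypothetical, not the study's fits:

```python
import math
from statistics import NormalDist

def lms_percentile(p, L, M, S):
    """Value of the p-th percentile from LMS parameters (Cole's method).

    L: Box-Cox power, M: median, S: coefficient of variation.
    For L = 0 the Box-Cox transform degenerates to a log-normal form.
    """
    z = NormalDist().inv_cdf(p / 100.0)
    if abs(L) < 1e-9:
        return M * math.exp(S * z)
    return M * (1.0 + L * S * z) ** (1.0 / L)

# Hypothetical LMS parameters for the 9-min run of 10-year-old boys:
L, M, S = 1.2, 1480.0, 0.11
print(lms_percentile(50, L, M, S))   # P50 is the median M itself
print(lms_percentile(90, L, M, S))
```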
Results
Significant main effects of Age were detected for all physical fitness tests (d = 0.40–1.34), whereas significant main effects of Sex were found for upper-extremity muscular power (d = 0.55), flexibility (d = 0.81), agility (d = 0.44), and endurance (d = 0.32) only. Further, significant Sex by Age interactions were observed for upper-extremity muscular power (d = 0.36), flexibility (d = 0.61), and agility (d = 0.27) in favor of girls. Both, linear and curvilinear shaped curves were found for percentile values across the fitness tests. Accelerated (curvilinear) improvements were observed for upper-extremity muscular power (boys: 10–11 yrs; girls: 9–11 yrs), agility (boys: 9–10 yrs; girls: 9–11 yrs), and endurance (boys: 9–10 yrs; girls: 9–10 yrs). Tabulated percentiles for the 9-min run test indicated that running distances between 1,407–1,507 m, 1,479–1,597 m, 1,423–1,654 m, and 1,433–1,666 m in 9- to 12-year-old boys and 1,262–1,362 m, 1,329–1,434 m, 1,392–1,501 m, and 1,415–1,526 m in 9- to 12-year-old girls correspond to a “medium” fitness level (i.e., P40 to P60) in this population.
Conclusions
The observed differences in physical fitness development between boys and girls illustrate that age- and sex-specific maturational processes might have an impact on the fitness status of healthy children. Our statistical analyses revealed linear (e.g., lower-extremity muscular power) and curvilinear (e.g., agility) models of fitness improvement with age which is indicative of timed and capacity-specific fitness development pattern during childhood. Lastly, the provided age- and sex-specific percentile values can be used by coaches for talent identification and by teachers for rating/grading of children’s motor performance.
Fluid force microscopy combines the positional accuracy and force sensitivity of an atomic force microscope (AFM) with nanofluidics via a microchanneled cantilever. However, adequate loading and cleaning procedures for such AFM micropipettes are required for various application situations. Here, a new frontloading procedure is described for an AFM micropipette functioning as a force- and pressure-controlled microscale liquid dispenser. This frontloading procedure seems especially attractive when using target substances featuring high costs or low available amounts. Here, the AFM micropipette could be filled from the tip side with liquid from a previously applied droplet with a volume of only a few μL using a short low-pressure pulse. The liquid-loaded AFM micropipettes could then be applied for experiments in air or liquid environments. AFM micropipette frontloading was evaluated with the well-known organic fluorescent dye rhodamine 6G and the AlexaFluor647-labeled antibody goat anti-rat IgG as an example of a larger biological compound. After micropipette usage, specific cleaning procedures were tested. Furthermore, a storage method is described with which the AFM micropipettes could be stored for a few hours up to several days without drying out or clogging of the microchannel. In summary, the rapid, versatile and cost-efficient frontloading and cleaning procedure for the repeated usage of a single AFM micropipette is beneficial for various application situations, from specific surface modifications through to local manipulation of living cells, and provides simplified and faster handling for already known experiments with fluid force microscopy.
Background
Body image distortion is highly prevalent among overweight individuals. Whilst there is evidence that body-dissatisfied women and those suffering from disordered eating show a negative attentional bias towards their own unattractive body parts and others’ attractive body parts, little is known about visual attention patterns in the area of obesity and with respect to males. Since eating disorders and obesity share common features in terms of distorted body image and body dissatisfaction, the aim of this study was to examine whether overweight men and women show a similar attentional bias.
Methods/Design
We analyzed eye movements in 30 overweight individuals (18 females) and 28 normal-weight individuals (16 females) with respect to the participants’ own pictures as well as gender- and BMI-matched control pictures (front and back view). Additionally, we assessed body image and disordered eating using validated questionnaires.
Discussion
The overweight sample rated their own body as less attractive and showed a more disturbed body image. Contrary to our assumptions, they focused significantly longer on attractive compared to unattractive regions of both their own and the control body. For one’s own body, this was more pronounced for women. A higher weight status and more frequent body checking predicted attentional bias towards attractive body parts. We found that overweight adults exhibit an unexpected and stable pattern of selective attention, with a distinctive focus on their own attractive body regions despite higher levels of body dissatisfaction. This positive attentional bias may either be an indicator of a more pronounced pattern of attentional avoidance or a self-enhancing strategy. Further research is warranted to clarify these results.
The rise of evolutionary novelties is one of the major drivers of evolutionary diversification. African weakly-electric fishes (Teleostei, Mormyridae) have undergone an outstanding adaptive radiation, putatively owing to their ability to communicate through species-specific Electric Organ Discharges (EODs) produced by a novel, muscle-derived electric organ. Indeed, such EODs might have acted as effective pre-zygotic isolation mechanisms, hence favoring ecological speciation in this group of fishes. Despite the evolutionary importance of this organ, genetic investigations regarding its origin and function have remained limited.
The ultimate aim of this study is to better understand the genetic basis of EOD production by exploring the transcriptomic profiles of the electric organ and of its ancestral counterpart, the skeletal muscle, in the genus Campylomormyrus. After having established a set of reference transcriptomes using “Next-Generation Sequencing” (NGS) technologies, I performed in silico analyses of differential expression, in order to identify sets of genes that might be responsible for the functional differences observed between these two kinds of tissues. The results of such analyses indicate that: i) the loss of contractile activity and the decoupling of the excitation-contraction processes are reflected by the down-regulation of the corresponding genes in the electric organ; ii) the metabolic activity of the electric organ might be specialized towards the production and turnover of membrane structures; iii) several ion channels are highly expressed in the electric organ in order to increase excitability, and iv) several myogenic factors might be down-regulated by transcription repressors in the EO.
A secondary task of this study is to improve the genus-level phylogeny of Campylomormyrus by applying new methods of inference based on the multispecies coalescent model, in order to reduce the conflict among gene trees and to reconstruct a phylogenetic tree as close as possible to the actual species tree. Using one mitochondrial and four nuclear markers, I was able to resolve the phylogenetic relationships among most of the currently described Campylomormyrus species. Additionally, I applied several coalescent-based species delimitation methods in order to test the hypothesis that putatively cryptic species, which are distinguishable only by their EOD, belong to independently evolving lineages. The results of this analysis were additionally validated by investigating patterns of diversification at 16 microsatellite loci. The results suggest the presence of a new, yet undescribed species of Campylomormyrus.
This thesis investigates temporal and aspectual reference in the typologically unrelated African languages Hausa (Chadic, Afro–Asiatic) and Medumba (Grassfields Bantu).
It argues that Hausa is a genuinely tenseless language and compares the interpretation of temporally unmarked sentences in Hausa to that of morphologically tenseless sentences in Medumba, where tense marking is optional and graded.
The empirical behavior of the optional temporal morphemes in Medumba motivates an analysis as existential quantifiers over times and thus provides new evidence suggesting that languages vary in whether their (past) tense is pronominal or quantificational (see also Sharvit 2014).
The thesis proposes for both Hausa and Medumba that the alleged future tense marker is a modal element that obligatorily combines with a prospective future shifter (which is covert in Medumba). Cross-linguistic variation in whether or not a future marker is compatible with non-future interpretation is proposed to be predictable from the aspectual architecture of the given language.
Handbuch Textannotation
(2015)
The Potsdam Commentary Corpus is a collection of newspaper texts belonging to the genre 'commentary'. The publicly available part consists of 175 texts from the Märkische Allgemeine Zeitung that were manually annotated for syntax, coreference, connectives and rhetorical structure. Further annotation layers will be added in future corpus versions. This book contains the annotation guidelines that governed the preparation of the public part of the corpus, as well as of other parts in which additional annotation layers were explored. Most of the guidelines should also be applicable to similar text genres and to other languages.
Reconstructing climate from the Dead Sea sediment record using high-resolution micro-facies analyses
(2015)
The sedimentary record of the Dead Sea is a key archive for reconstructing climate in the eastern Mediterranean region, as it stores the environmental and tectonic history of the Levant for the entire Quaternary. Moreover, the lake is located at the boundary between Mediterranean sub-humid to semi-arid and Saharo-Arabian hyper-arid climates, so that even small shifts in atmospheric circulation are sensitively recorded in the sediments. This DFG-funded doctoral project was carried out within the ICDP Dead Sea Deep Drilling Project (DSDDP) that intended to gain the first long, continuous and high-resolution sediment core from the deep Dead Sea basin. The drilling campaign was performed in winter 2010-11 and more than 700 m of sediments were recovered. The main aim of this thesis was (1) to establish the lithostratigraphic framework for the ~455 m long sediment core from the deep Dead Sea basin and (2) to apply high-resolution micro-facies analyses for reconstructing and better understanding climate variability from the Dead Sea sediments.
Addressing the first aim, the sedimentary facies of the ~455 m long deep-basin core 5017-1 were described in great detail and characterised through continuous overview-XRF element scanning and magnetic susceptibility measurements. Three facies groups were classified: (1) the marl facies group, (2) the halite facies group and (3) a group involving different expressions of massive, graded and slumped deposits including coarse clastic detritus. Core 5017-1 encompasses a succession of four main lithological units. Based on first radiocarbon and U-Th ages and correlation of these units to on-shore stratigraphic sections, the record comprises the last ca 220 ka, i.e. the upper part of the Amora Formation (parts of or entire penultimate interglacial and glacial), the last interglacial Samra Fm. (~135-75 ka), the last glacial Lisan Fm. (~75-14 ka) and the Holocene Ze’elim Formation. A major advancement of this record is that, for the first time, also transitional intervals were recovered that are missing in the exposed formations and that can now be studied in great detail.
Micro-facies analyses involve a combination of high-resolution microscopic thin section analysis and µXRF element scanning supported by magnetic susceptibility measurements. This approach allows identifying and characterising micro-facies types, detecting event layers and reconstructing past climate variability with up to seasonal resolution, given that the analysed sediments are annually laminated. Within this thesis, micro-facies analyses, supported by further sedimentological and geochemical analyses (grain size, X-ray diffraction, total organic carbon and calcium carbonate contents) and palynology, were applied for two time intervals:
(1) The early last glacial period ~117-75 ka was investigated focusing on millennial-scale hydroclimatic variations and lake level changes recorded in the sediments. Thereby, distinguishing six different micro-facies types with distinct geochemical and sedimentological characteristics allowed estimating relative lake level and water balance changes of the lake. Comparison of the results to other records in the Mediterranean region suggests a close link of the hydroclimate in the Levant to North Atlantic and Mediterranean climates during the time of the build-up of Northern hemisphere ice sheets during the early last glacial period.
(2) A mostly annually laminated late Holocene section (~3700-1700 cal yr BP) was analysed in unprecedented detail through a multi-proxy, inter-site correlation approach of a shallow-water core (DSEn) and its deep-basin counterpart (5017-1). Within this study, a time series of erosion and dust deposition events spanning ca 1500 years was established and anchored to the absolute time-scale through 14C dating and age modelling. A particular focus of this study was the characterisation of two dry periods, from ~3500 to 3300 and from ~3000 to 2400 cal yr BP, respectively. A major outcome was the coincidence of the latter dry period with a period of moist and cold climate in Europe, related to a Grand Solar Minimum around 2800 cal yr BP, and an increase in flood events despite overall dry conditions in the Dead Sea region during that time. These contrasting climate signatures in Europe and at the Dead Sea were likely linked through complex teleconnections of atmospheric circulation, causing a change in synoptic weather patterns in the eastern Mediterranean.
In summary, this doctorate establishes the lithostratigraphic framework of a unique long sediment core from the deep Dead Sea basin, which serves as a basis for all further high-resolution investigations on this core. Two case studies demonstrate that micro-facies analyses are an invaluable tool for understanding the depositional processes in the Dead Sea and for deciphering past climate variability in the Levant on millennial to seasonal time-scales. Hence, this work adds important knowledge that helps establish the deep Dead Sea record as a key climate archive of supra-regional significance.
During the First World War, the German playwright and novelist Alfred Brust came into contact with Hasidic Judaism in the Lithuanian cities of Kowno and Wilna. Like Sammy Gronemann, Arnold Zweig and other authors employed in the censorship department of 'Ober Ost', Brust was deeply impressed by its archaic culture, which seemed to him to stand in contrast to the decadence of the modern world. Influenced by the Expressionist topos of an inner transformation of man, he developed in the following years the idea of a transfer of spiritual and moral values from the Eastern European Jews to the (German) East Prussians. For him, the "Jews are the nobility of the movement". Brust was in contact with Richard Dehmel, Hugo von Hofmannsthal, Florens Christian Rang and Martin Buber; even Walter Benjamin took an interest in him for a time.
When Jesus Spoke Yiddish
(2015)
In this paper, I wish to present some evidence from a Yiddish manuscript of the “Toledot Yeshu” which has not yet been the object of research: MS. Günzburg, 1730, kept in the Russian State Library in Moscow and dated to the 17th century. The manuscript is part of the so-called ‘Herode-tradition’ of the “Toledot Yeshu”. This means that the Yiddish manuscript is connected to the version printed in Hebrew and accompanied by a Latin translation by the Swiss pastor and theologian Johann Jacob Uldrich (Huldricus, 1683–1731) in Leiden in 1705, bearing the title “Historia Jeschuae Nazareni”. Given the uncertainty about the exact dating of the Yiddish manuscript, a comparison between the Hebrew and the Yiddish still allows some remarks concerning the characteristics of the Yiddish version and raises some questions about the transmission and the reception of this challenging and intriguing text.
Drawing on the distinction between present time and memory, this contribution questions apparent unities in the lyric work of Nelly Sachs. A nexus between the figure of Jesus and the Shoah is unmistakable in the poems; the subject of this study is the extent to which a correlation exists between the two discourses, between the perennial suffering of the victims and the man martyred on the cross. In Sachs's lyric oeuvre, Jesus is refigured on both the temporal and the thematic level: not as a Christological-dogmatic redeemer figure, but as the agonizingly martyred man whose cry acquires a new legibility in the 20th century, as a suffering fellow brother. The historical index of the figure of Jesus becomes manifest in the Passion scenery, which is, however, transformatively modified.
Carl Einstein's drama “Die schlimme Botschaft” (1921), which earned him a trial for blasphemy, stands, with its critical stance towards the Christian doctrine of the crucifixion and resurrection, in line with a contemporary discussion led by Jewish theologians with Adolf Harnack's project of modernizing Christianity. Jewish theologians such as Joseph Eschelbacher outlined a Judaism that, contrary to Harnack's portrayal, already fulfilled that modern form of religion which Harnack was only trying to attain for Christianity. Einstein's “standpoint deviating from dogma”, which a Catholic clergyman held against him in court as the standpoint of a dissident Jew, could well have counted as a Jewish writer's contribution to the apology of Judaism, had Einstein not himself seen his fellow believers in league with the bourgeois ideology of the capitalist market that he rejected.
Already in the Enlightenment debate about Jesus' Jewishness there were tendencies to conceive of it as the starting point of a universalizing mediation between Judaism and Christianity, as well as objections, mostly from the Jewish side, against such a universalism, which was said to deny decisive differences and the history of exclusionary violence. After a brief look at the Jewish Christ in Heine, the contribution analyses Stefan Zweig's drama “Jeremias”, written against the horizon of the First World War and Zionist debates and published in 1917. The superimposition of the prophet figure Jeremiah onto Christ, central to the conception of the play, brings out, so the thesis, the problematic logic of a sacrificial substitution which, with the notion of a community constituted by (sacrificial) blood, shifts the question of guilt (the murder of Christ) onto others. Against this, the play sketches, in motifs of irrupting ‘terror’, of foreignness within one's own, of exile and world wandering, a different cosmopolitanism.
Background: In Germany, acute myocardial infarction (MI) is one of the most frequent causes of death. Divergent health care structures are suspected as a cause of regional differences in mortality rates. The aim of this study was to evaluate this question using anonymised health insurance claims data.
Methods: Standardised hospitalisation rates as well as in-hospital and one-year mortality rates after MI were determined from anonymised data of a statutory health insurance fund for the year 2012 and the federal states of Berlin, Brandenburg and Mecklenburg-Western Pomerania (n=1,387,084, 46.3% male, age 60.9 ± 18.2 years). Furthermore, predictive factors for one-year mortality, for the performance of invasive procedures and for guideline-compliant pharmacotherapeutic secondary prevention were analysed.
Results: 6,733 patients (73.7 ± 13.0 years, 56.7% male) were identified. Although a higher hospitalisation rate was found for Berlin than for Mecklenburg-Western Pomerania, no significantly different in-hospital and one-year mortality rates were observed between the federal states. Coronary angiography (OR: 0.42 [0.35-0.51]) and guideline-compliant pharmacotherapy (OR: 0.14 [0.12-0.17]) were associated with lower one-year mortality. The performance of coronary angiography and guideline-compliant pharmacotherapy in patients after myocardial infarction was, by contrast, determined primarily by age and sex, not by federal state.
Conclusion: Based on the present data, regionally divergent inpatient and post-infarction care at the federal-state level cannot be demonstrated.
In his literary oeuvre, the German-Jewish poet and anarchist Erich Mühsam (1878–1934) repeatedly drew on the figure of Jesus. The attempt to reconstruct and interpret this relationship brings into view influences from the milieu of Berlin's literary avant-garde and the Lebensreform movement. In an entanglement of quasi-religious renaissance and the spirit of departure towards the ‘New Man’, Jesus was a popular motif in these circles around the turn of the century and, mediated through friendships and biographical waypoints, left its imprint on Mühsam's writing as well. Alongside Cain, it is Jesus who in Mühsam's work becomes the luminous figure revolutionizing the fifth estate. This reference is also informed by Mühsam's conflict-ridden relationship with his father and the latter's assimilation to the German bourgeoisie, which by no means fully lived the enlightened values it nominally invoked: instead of freedom and equality, new constraints, limitations and resentments prevailed, as Mühsam observed. Through Gustav Landauer he found not only anarchism but also a new language for the buried tradition, and thereby imprinted on his Jesus figure his understanding of a rebellious Judaism that Mühsam himself embodied as a revolutionary.
This dissertation analyses the mentalities of the bourgeois inhabitants of Bogotá in the 19th century. Insights into the Bogotanos' perspectives on themselves and on others are gained through the analysis of travel literature. Methodologically, the study rests on a comparison of European travel accounts from the 19th century, chronicles from the 16th century, and two Colombian novels from the early 20th century. The texts are treated historiographically; although they belong to different literary genres, they share an autobiographical character. From the experiences and thoughts of the travellers, the study addresses above all the effects of geographic and social isolation, as well as the influence of political and religious discourses on the formation of bourgeois thought.
Jakob Wassermann's novel “Die Juden von Zirndorf” (1897), a portrait of German-Jewish life, is carried by diverse messianic notions, only some of which can be traced back to the Jewish tradition. Accordingly, the Jewish boy Agathon, the main figure of the novel's second part, unites traits of both the Jewish Messiah and the Christian Saviour. The large number of allusions to the figure of Jesus and the aesthetic functionalization of Christological motifs in the novel are to be embedded in the context of that “bringing home of Jesus into Judaism” which began in the 19th century with the Wissenschaft des Judentums. Against the background of contemporary interpretations of Jesus, this contribution traces Jesus in the characterization of Agathon as well as the appropriation of Christological motifs, above all in the second part of the novel. Particular reference is made to Lou von Salomé's essay “Jesus der Jude” (1896) as a possible source.
The Jewish artists Maurycy Gottlieb (1856–1879) and Marc Chagall (1887–1985) portrayed Jesus as a devout Jew, embedded in the Jewish world of his time. For them, the Jewish Jesus becomes an engagement with the Jewish roots of Christianity and with antisemitism, and his martyrdom a symbol of Jewish suffering. This essay examines how their paintings convey these messages and analyses continuities and ruptures over a long period of time.
In 1945, Zinovii Shenderovich Tolkatchev (1903–1977), a Soviet artist of Jewish origin, created a striking series of five images entitled “Jesus in Majdanek”. The series was the culmination of Tolkatchev's intense preoccupation with what he, as a Red Army soldier, experienced while taking part in the liberation of the concentration camps Majdanek and Auschwitz. Shocked by the sights he witnessed, he depicted Jesus as an actual camp inmate, wearing a striped uniform marked by every possible sign of defamation: the Jewish yellow star, the red triangle of political prisoners, and the individual prison number; the numerical tattoo on his lower arm can also be seen. The different stages of camp life are portrayed as the traditional Passion of Christ. While showing actual situations, the artist drew on the well-known European Renaissance paintings canonically depicting Jesus' suffering. The article places Tolkatchev's series in a broader cultural and visual context by exploring the development of the ‘historical Jesus’ in 19th-century European thought and Russian realist art, and by examining the impact of the German avant-garde. In doing so, it offers a deeper understanding of the universal message that Tolkatchev's works convey.
Messianic Jews are Jewish individuals who syncretically accept both the messianic character of Jesus and the ritual cultic practices of traditional Judaism. The present article examines the emergence of this marginal syncretic movement in contemporary Israel and maintains that it represents a radical development in the bimillenary history of Jewish-Christian relations. It offers a general introduction to the notion of Jewish-Christian identity, a brief history of the first group of Messianic Jews in the Land of Israel, an account of the cultural influence and religious syncretism of the Messianic Jews in modern Israel, and, finally, a discussion of whether Messianic Judaism may become a new paradigm within the various branches of Judaism.
The non-linear behaviour of atmospheric dynamics is not well understood and complicates the evaluation and use of regional climate models (RCMs). These non-linearities induce chaos and internal variability (IV) within RCMs, making them sensitive to their initial conditions (IC). The IV is the ability of RCMs to realise different solutions in simulations that differ in their IC but share the same lower and lateral boundary conditions (LBC); it can hence be defined as the across-member spread between the ensemble members.
To investigate the IV and the dynamical and diabatic contributions generating it, four ensembles of RCM simulations are performed with the atmospheric regional model HIRHAM5. The integration area is the Arctic and each ensemble consists of 20 members. The ensembles cover the period from July to September for the years 2006, 2007, 2009 and 2012. The ensemble members have the same LBC and differ only in their IC. The different IC are generated by successively shifting the initialisation time by six hours: within each ensemble the first simulation starts on 1st July at 00 UTC and the last on 5th July at 18 UTC, and each simulation runs until 30th September. The analysed period ranges from 6th July to 30th September, the period covered by all ensemble members. The model runs without any nudging, allowing each simulation to develop freely and the full internal variability of the HIRHAM5 to emerge.
As a measure of the model-generated IV, the across-member standard deviation and the across-member variance are used, and the dynamical and diabatic processes influencing the IV are estimated by applying a diagnostic budget study for the IV tendency of the potential temperature developed by Nikiema and Laprise [2010] and Nikiema and Laprise [2011]. The diagnostic budget study is based on the first law of thermodynamics for potential temperature and the mass-continuity equation. The resulting budget equation reveals seven contributions to the potential temperature IV tendency.
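As an illustration of the across-member spread measure, the following sketch computes the across-member standard deviation of potential temperature. The array shapes, variable names and random stand-in data are assumptions for illustration, not HIRHAM5 conventions:

```python
import numpy as np

# Quantify internal variability (IV) as the across-member spread of an
# ensemble. Array axes: (member, time, level, lat, lon); random stand-in
# data replaces real model output here.
rng = np.random.default_rng(0)
theta = rng.normal(285.0, 2.0, size=(20, 8, 5, 10, 10))  # potential temperature [K]

# Across-member variance/std at every (time, level, lat, lon) grid point ...
iv_var = theta.var(axis=0, ddof=1)
iv_std = np.sqrt(iv_var)

# ... and a domain-averaged IV time series per vertical level, as typically
# inspected when comparing IV across the troposphere.
iv_profile = iv_std.mean(axis=(2, 3))  # shape: (time, level)
print(iv_profile.shape)  # -> (8, 5)
```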
As a first study, this work analyses the IV within the HIRHAM5. To this end, atmospheric circulation parameters and the potential temperature are investigated for all four ensemble years. As in previous studies, the IV fluctuates strongly in time. Further, because all ensemble members are forced with the same LBC, the IV depends on the vertical level within the troposphere, with high values in the lower troposphere and at 500 hPa and low values in the upper troposphere and at the surface. For the same reason, the spatial distribution shows low IV values at the boundaries of the model domain.
The diagnostic budget study for the IV tendency of potential temperature reveals that the seven contributions fluctuate in time like the IV. However, the individual terms reach different absolute magnitudes. The budget study identifies the horizontal and vertical ‘baroclinic’ terms as the main contributors to the IV tendency, with the horizontal ‘baroclinic’ term producing and the vertical ‘baroclinic’ term reducing the IV. The other terms fluctuate around zero, because they are small in general or are balanced due to the domain average.
The comparison of the results obtained for the four different ensembles (summers 2006, 2007, 2009 and 2012) reveals that on average the findings for each ensemble are quite similar concerning the magnitude and the general pattern of IV and its contributions. However, near the surface a weaker IV is produced with decreasing sea ice extent. This is caused by a smaller impact of the horizontal 'baroclinic' term over some regions and by the changing diabatic processes, particularly a more intense reducing tendency of the IV due to condensative heating. However, it has to be emphasised that the behaviour of the IV and its dynamical and diabatic contributions are influenced mainly by complex atmospheric feedbacks and large-scale processes and not by the sea ice distribution.
Additionally, a comparison with a second RCM covering the Arctic and using the same LBC and IC is performed. Both models yield very similar results concerning the IV and its dynamical and diabatic contributions. Hence, this investigation leads to the conclusion that the IV is a natural phenomenon and is independent of the applied RCM.
Der andere Weg zur Wahrheit
(2015)
This essay deals with the philosopher Franz Rosenzweig's engagement with the figure of Jesus. Two of Rosenzweig's texts are considered: “Atheistische Theologie” (1914) and “Der Stern der Erlösung” (1921). Added to these are the correspondence he conducted with Margrit and Eugen Rosenstock between 1917 and 1929, and a text by Eduard Strauß drafted in the course of the latter's work at the Frankfurt Jüdisches Lehrhaus. These texts show how the analysis of the figure of Jesus becomes a confrontation with his historicization by Protestant theology. Besides Rosenzweig, Buber and Strauß, other Jewish scholars contributed to this debate. These debates about the historicity of Jesus are also embedded in the context of the relationship between Christianity and Judaism and, further, in Rosenzweig's efforts towards a Christian-Jewish dialogue.
We continue our study of invariant forms of the classical equations of mathematical physics, such as the Maxwell equations or the Lamé system, on manifolds with boundary. To this end we interpret them in terms of the de Rham complex at a certain step. Using the structure of the complex, we gain the insight needed to predict a degeneracy deeply encoded in the equations. In the present paper we develop an invariant approach to the classical Navier-Stokes equations.
The main goal of this cumulative thesis is the derivation of surface emissivity data in the infrared from radiance measurements of Venus. Since these data are diagnostic of the chemical composition and grain size of the surface material, they can help to improve knowledge of the planet’s geology. Spectrally resolved images of nightside emissions in the range 1.0-5.1 μm were recently acquired by the InfraRed Mapping channel of the Visible and InfraRed Thermal Imaging Spectrometer (VIRTIS-M-IR) aboard ESA’s Venus EXpress (VEX). Surface and deep atmospheric thermal emissions in this spectral range are strongly obscured by the extremely opaque atmosphere, but three narrow spectral windows at 1.02, 1.10, and 1.18 μm allow the sounding of the surface. Additional windows between 1.3 and 2.6 μm provide information on atmospheric parameters that is required to interpret the surface signals. Quantitative data on surface and atmosphere can be retrieved from the measured spectra by comparing them to simulated spectra. A numerical radiative transfer model is used in this work to simulate the observable radiation as a function of atmospheric, surface, and instrumental parameters. It is a line-by-line model taking into account thermal emissions by surface and atmosphere as well as absorption and multiple scattering by gases and clouds. The VIRTIS-M-IR measurements are first preprocessed to obtain an optimal data basis for the subsequent steps. In this process, a detailed detector responsivity analysis enables the optimization of the data consistency. The measurement data have a relatively low spectral information content, and different parameter vectors can describe the same measured spectrum equally well. A common method to regularize the retrieval of the wanted parameters from a measured spectrum is to take into account a priori mean values and standard deviations of the parameters to be retrieved. This decreases the probability of obtaining unreasonable parameter values.
The multi-spectrum retrieval algorithm MSR is developed to additionally consider physically realistic spatial and temporal a priori correlations between retrieval parameters describing different measurements. Neglecting geologic activity, MSR also allows the retrieval of an emissivity map as a parameter vector that is common to several spectrally resolved images that cover the same surface target. Even applying MSR, it is difficult to obtain reliable emissivity maps in absolute values. A detailed retrieval error analysis based on synthetic spectra reveals that this is mainly due to interferences from parameters that cannot be derived from the spectra themselves, but that have to be set to assumed values to enable the radiative transfer simulations. The MSR retrieval of emissivity maps relative to a fixed emissivity is shown to effectively avoid most emissivity retrieval errors. Relative emissivity maps at 1.02, 1.10, and 1.18 μm are finally derived from many VIRTIS-M-IR measurements that cover a surface target at Themis Regio. They are interpreted as spatial variations relative to an assumed emissivity mean of the target. It is verified that the maps are largely independent of the choice of many interfering parameters as well as the utilized measurement data set. These are the first Venus IR emissivity data maps based on a consistent application of a full radiative transfer simulation and a retrieval algorithm that respects a priori information. The maps are sufficiently reliable for future geologic interpretations.
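The a priori regularization idea described above can be illustrated with a deliberately simplified linear toy model. The actual retrieval uses a full radiative transfer forward model and the MSR algorithm; every name and number below is an assumed stand-in:

```python
import numpy as np

# Toy illustration (not the actual VIRTIS retrieval): for a linear forward
# model y = K x + noise, the a-priori-regularized least-squares estimate
# minimizes ||y - K x||^2 / sigma_y^2 + ||x - x_a||^2 / sigma_a^2, which
# penalizes solutions far from the a priori mean x_a.
rng = np.random.default_rng(1)
K = rng.normal(size=(6, 3))           # toy Jacobian: 6 spectral points, 3 parameters
x_true = np.array([0.9, 0.5, 1.2])
y = K @ x_true + rng.normal(scale=0.01, size=6)

x_a = np.array([1.0, 0.6, 1.0])       # assumed a priori means
sigma_y, sigma_a = 0.01, 0.5          # measurement noise / a priori std

# Normal equations of the regularized problem:
#   (K^T K / sigma_y^2 + I / sigma_a^2) x = K^T y / sigma_y^2 + x_a / sigma_a^2
A = K.T @ K / sigma_y**2 + np.eye(3) / sigma_a**2
b = K.T @ y / sigma_y**2 + x_a / sigma_a**2
x_hat = np.linalg.solve(A, b)
print(np.round(x_hat, 2))
```

With informative measurements the prior term barely perturbs the solution; when the data constrain a parameter poorly, the estimate is pulled towards x_a instead of drifting to unreasonable values.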
PaRDeS, the journal of the Association for Jewish Studies (Vereinigung für Jüdische Studien e.V.), seeks to document the fruitful and multifaceted culture of Judaism, as well as its points of contact with its environment, across a variety of fields. The journal also serves as a forum for positioning the disciplines of Jewish studies (Jüdische Studien and Judaistik) within the scholarly discourse and for discussing their historical and social responsibility.
Brownian motion is ergodic in the Boltzmann-Khinchin sense that long time averages of physical observables such as the mean squared displacement provide the same information as the corresponding ensemble average, even at out-of-equilibrium conditions. This property is the fundamental prerequisite for single particle tracking and its analysis in simple liquids. We study analytically and by event-driven molecular dynamics simulations the dynamics of force-free cooling granular gases and reveal a violation of ergodicity in this Boltzmann-Khinchin sense as well as distinct ageing of the system. Such granular gases comprise materials such as dilute gases of stones, sand, various types of powders, or large molecules, and their mixtures are ubiquitous in Nature and technology, in particular in Space. We treat, depending on the physical-chemical properties of the inter-particle interaction upon their pair collisions, both a constant and a velocity-dependent (viscoelastic) restitution coefficient e. Moreover we compare the granular gas dynamics with an effective single particle stochastic model based on an underdamped Langevin equation with time dependent diffusivity. We find that both models share the same behaviour of the ensemble mean squared displacement (MSD) and the velocity correlations in the limit of weak dissipation. Qualitatively, the reported non-ergodic behaviour is generic for granular gases with any realistic dependence of e on the impact velocity of particles.
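Such an effective single-particle model can be sketched by integrating an underdamped Langevin equation with time-dependent friction and diffusivity. The specific decay laws, parameter values and names below are illustrative assumptions, not those derived in the study:

```python
import numpy as np

# Euler-Maruyama integration of an underdamped Langevin equation
#   dv = -gamma(t) * v * dt + sqrt(2 * D(t)) * dW,
# with decaying coefficients mimicking a cooling granular gas whose
# granular temperature drops in time (illustrative choices only).
rng = np.random.default_rng(2)
n_traj, n_steps, dt = 200, 2000, 0.01
tau = 5.0                                  # assumed cooling time scale

t = np.arange(n_steps) * dt
gamma = 1.0 / (1.0 + t / tau)              # assumed decaying friction
D = 1.0 / (1.0 + t / tau) ** 3             # assumed decaying diffusivity

v = np.zeros(n_traj)
x = np.zeros((n_traj, n_steps))
for i in range(1, n_steps):
    dW = rng.normal(scale=np.sqrt(dt), size=n_traj)
    v = v - gamma[i - 1] * v * dt + np.sqrt(2.0 * D[i - 1]) * dW
    x[:, i] = x[:, i - 1] + v * dt

# Ensemble mean squared displacement over the trajectories:
msd = (x ** 2).mean(axis=0)
print(msd[-1])
```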
The present article is among the first reports on the effects of poly(ampholyte)s and poly(betaine)s on the biomimetic formation of calcium phosphate. We have synthesized a series of di- and triblock copolymers based on a non-ionic poly(ethylene oxide) block and several charged methacrylate monomers, 2-(trimethylammonium)ethyl methacrylate chloride, 2-((3-cyanopropyl)-dimethylammonium)ethyl methacrylate chloride, 3-sulfopropyl methacrylate potassium salt, and [2-(methacryloyloxy)ethyl]dimethyl-(3-sulfopropyl)ammonium hydroxide. The resulting copolymers are either positively charged, ampholytic, or betaine block copolymers. All the polymers have very high molecular weights of over 10⁶ g mol⁻¹. All polymers are water-soluble and show a strong effect on the precipitation and dissolution of calcium phosphate. The strongest effects are observed with triblock copolymers based on a large poly(ethylene oxide) middle block (nominal Mn = 100 000 g mol⁻¹). Surprisingly, the data show that there is a need for positive charges in the polymers to exert tight control over mineralization and dissolution, but that the exact position of the charge in the polymer is of minor importance for both calcium phosphate precipitation and dissolution.
We investigate the ergodic properties of a random walker performing (anomalous) diffusion on a random fractal geometry. Extensive Monte Carlo simulations of the motion of tracer particles on an ensemble of realisations of percolation clusters are performed for a wide range of percolation densities. Single trajectories of the tracer motion are analysed to quantify the time averaged mean squared displacement (MSD) and to compare this with the ensemble averaged MSD of the particle motion. Other complementary physical observables associated with ergodicity are studied, as well. It turns out that the time averaged MSD of individual realisations exhibits non-vanishing fluctuations even in the limit of very long observation times as the percolation density approaches the critical value. This apparent non-ergodic behaviour concurs with the ergodic behaviour on the ensemble averaged level. We demonstrate how the non-vanishing fluctuations in single particle trajectories are analytically expressed in terms of the fractal dimension and the cluster size distribution of the random geometry, thus being of purely geometrical origin. Moreover, we reveal that the convergence scaling law to ergodicity, which is known to be inversely proportional to the observation time T for ergodic diffusion processes, follows a power law ∝ T^(−h) with h < 1 due to the fractal structure of the accessible space. These results provide useful measures for differentiating the subdiffusion on random fractals from an otherwise closely related process, namely, fractional Brownian motion. Implications of our results on the analysis of single particle tracking experiments are provided.
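The two averaging procedures compared here can be sketched for an ordinary random walk, used as a stand-in for tracer motion on a percolation cluster; for ergodic Brownian motion both estimates agree, which is exactly what breaks down on random fractals:

```python
import numpy as np

# Ensemble MSD <x^2(t)> averages over many trajectories at a fixed time;
# the time averaged MSD slides a lag window along a single trajectory.
rng = np.random.default_rng(3)
n_traj, T = 500, 1000
steps = rng.choice([-1, 1], size=(n_traj, T))
x = steps.cumsum(axis=1)

def time_averaged_msd(traj, lag):
    """delta^2(lag) = mean over t of [x(t+lag) - x(t)]^2 for one trajectory."""
    disp = traj[lag:] - traj[:-lag]
    return (disp ** 2).mean()

lag = 50
ensemble_msd = (x[:, lag - 1].astype(float) ** 2).mean()   # <x^2> at t = lag
ta_msd = np.array([time_averaged_msd(x[i], lag) for i in range(n_traj)])

# For this ergodic walk the ratio is close to one; on percolation clusters
# the single-trajectory values ta_msd keep scattering even as T grows.
print(ensemble_msd / ta_msd.mean())
```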
The interruption of learning processes by breaks filled with diverse activities is common in everyday life. We investigated the effects of active computer gaming and passive relaxation (rest and music) breaks on working memory performance. Young adults were exposed to breaks involving (i) eyes-open resting, (ii) listening to music and (iii) playing the video game “Angry Birds” before performing the n-back working memory task. Based on linear mixed-effects modeling, we found that playing the “Angry Birds” video game during a short learning break led to a decline in task performance over the course of the task as compared to eyes-open resting and listening to music, although overall task performance was not impaired. This effect was associated with high levels of daily mind wandering and low self-reported ability to concentrate. These findings indicate that video games can negatively affect working memory performance over time when played in between learning tasks. We suggest further investigation of these effects because of their relevance to everyday activity.
The Brazilian Cerrado is recognised as one of the most threatened biomes in the world, as the region has experienced a striking change from natural vegetation to intense cash crop production. The impacts of rapid agricultural expansion on soil and water resources are still poorly understood in the region. Therefore, the overall aim of the thesis is to improve our understanding of the ecohydrological processes causing water and soil degradation in the Brazilian Cerrado.
I first present a meta-analysis to provide quantitative evidence and identify the main impacts of soil and water alterations resulting from land use change. Second, field studies were conducted to (i) examine the effects of land use change on soils of natural cerrado transformed to common croplands and pasture and (ii) show how agricultural production affects water quality across a meso-scale catchment. Third, the ecohydrological process-based model SWAT was tested with simple scenario analyses to gain insight into the impacts of land use and climate change on water cycling in the upper São Lourenço catchment, which has experienced decreasing discharges over the last 40 years.
Soil and water quality parameters from different land uses were extracted from 89 soil and 18 water studies in different regions across the Cerrado. Significant effects on pH, bulk density and available P and K for croplands and less pronounced effects on pastures were evident. Soil total N did not differ between land uses because most of the cropland sites were N-fixing soybean cultivations, which are not artificially fertilized with N. By contrast, water quality studies showed N enrichment in agricultural catchments, indicating fertilizer impacts and potential susceptibility to eutrophication. Regardless of the land use, P is largely absent because of the high P-fixing capacity of deeply weathered soils and the filtering capacity of riparian vegetation. Pesticides, however, were consistently detected throughout the entire aquatic system. In several case studies, extremely high peak concentrations exceeded Brazilian and EU water quality limits, which pose serious health risks.
My field study revealed that land conversion caused a significant reduction in infiltration rates near the soil surface of pasture (–96 %) and croplands (–90 % to –93 %). Soil aggregate stability was significantly lower in croplands than in cerrado and pasture. Soybean crops had extremely high extractable P (80 mg kg⁻¹), whereas pasture N levels declined. Snapshot water sampling showed strong seasonality in water quality parameters. Higher temperature, oxidation-reduction potential (ORP) and NO₂⁻, and very low oxygen concentrations (<5 mg l⁻¹) and saturation (<60 %), were recorded during the rainy season. By contrast, remarkably high PO₄³⁻ concentrations (up to 0.8 mg l⁻¹) were measured during the dry season. Water quality parameters were affected by agricultural activities at all sampled sub-catchments across the catchment, regardless of stream characteristics. Direct NO₃⁻ leaching appeared to play a minor role; however, water quality is affected by topsoil fertiliser inputs, with impacts on small low-order streams and larger rivers alike. Land conversion leaves cropland soils more susceptible to surface erosion through increased overland flow events.
In a third study, the field data were used to parameterise SWAT. The model was tested with different input data and calibrated in SWAT-CUP using the SUFI-2 algorithm. The model was judged reliable to simulate the water balance in the Cerrado. Complete cerrado, pasture and cropland covers were used to analyse the impact of land use on water cycling, as well as climate change projections (2039–2058) under the RCP 8.5 scenario. The actual evapotranspiration (ET) for the cropland scenario was higher compared to the cerrado cover (+100 mm a⁻¹). Land use change scenarios confirmed that deforestation caused higher annual ET rates, partly explaining the trend of decreasing streamflow. Taking all climate change scenarios into account, the most likely effect is a prolongation of the dry season (by about one month), with higher peak flows in the rainy season. Consequently, potential threats for crop production with lower soil moisture and increased erosion and sediment transport during the rainy season are likely and should be considered in adaptation plans.
From the three studies of the thesis I conclude that land use intensification is likely to seriously limit the Cerrado’s future regarding both agricultural productivity and ecosystem stability. Because only limited data are available for the vast biome, we recommend further field studies to understand the interaction between terrestrial and aquatic systems. This thesis may serve as a valuable database for integrated modelling to investigate the impact of land use and climate change on soil and water resources and to test and develop mitigation measures for the Cerrado in the future.
Nowadays, business processes are increasingly supported by IT services that produce massive amounts of event data during process execution. Aiming at a better process understanding and improvement, this event data can be used to analyze processes using process mining techniques. Process models can be automatically discovered and the execution can be checked for conformance to specified behavior. Moreover, existing process models can be enhanced and annotated with valuable information, for example for performance analysis. While the maturity of process mining algorithms is increasing and more tools are entering the market, process mining projects still face the problem of different levels of abstraction when comparing events with modeled business activities. Mapping the recorded events to activities of a given process model is essential for conformance checking, annotation and understanding of process discovery results. Current approaches try to abstract from events in an automated way that does not capture the required domain knowledge to fit business activities. Such techniques can be a good way to quickly reduce complexity in process discovery. Yet, they fail to enable techniques like conformance checking or model annotation, and potentially create misleading process discovery results by not using the known business terminology.
In this thesis, we develop approaches that abstract an event log to the level required by the business. Typically, this abstraction level is defined by a given process model. Thus, the goal of this thesis is to match events from an event log to activities in a given process model. To accomplish this goal, behavioral and linguistic aspects of process models and event logs, as well as domain knowledge captured in existing process documentation, are taken into account to build semi-automatic matching approaches. The approaches establish a pre-processing step for every available process mining technique that produces or annotates a process model, thereby reducing the manual effort for process analysts. While each of the presented approaches can be used in isolation, we also introduce a general framework for the integration of different matching approaches.
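The linguistic side of such matching can be illustrated with a deliberately simplified sketch. The thesis combines behavioral and linguistic aspects; the toy matcher below covers only label similarity, and all event and activity names are invented for illustration:

```python
# Hypothetical sketch of label-based event-to-activity matching; the actual
# approaches in the thesis are richer (behavioral + linguistic + documentation).

def tokens(label):
    # Lowercase a label and split it into a set of word tokens.
    return set(label.lower().replace("_", " ").split())

def similarity(event, activity):
    # Jaccard similarity between token sets of an event name and an activity label.
    a, b = tokens(event), tokens(activity)
    return len(a & b) / len(a | b) if a | b else 0.0

def match_events(events, activities, threshold=0.3):
    # Map each recorded event to its best-matching activity,
    # or to None when no activity label is similar enough.
    mapping = {}
    for e in events:
        best = max(activities, key=lambda act: similarity(e, act))
        mapping[e] = best if similarity(e, best) >= threshold else None
    return mapping

log_events = ["create purchase order", "order_approved", "ship goods"]
model_activities = ["Create Purchase Order", "Approve Order", "Ship Goods"]
print(match_events(log_events, model_activities))
```

A threshold keeps clearly unrelated events unmapped, which is where the semi-automatic part comes in: a process analyst resolves the remaining `None` entries manually.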
The approaches have been evaluated in case studies with industry and using a large industry process model collection and simulated event logs. The evaluation demonstrates the effectiveness and efficiency of the approaches and their robustness towards nonconforming execution logs.
Business process management (BPM) is a systematic and structured approach to model, analyze, control, and execute business operations also referred to as business processes that get carried out to achieve business goals. Central to BPM are conceptual models. Most prominently, process models describe which tasks are to be executed by whom utilizing which information to reach a business goal. Process models generally cover the perspectives of control flow, resource, data flow, and information systems.
Execution of business processes leads to the work actually being carried out. Automating their execution increases efficiency and is usually supported by process engines. This, though, requires the coverage of control flow, resource assignments, and process data. While the first two perspectives are well supported in current process engines, data handling needs to be implemented and maintained manually. However, model-driven data handling promises to ease implementation, to reduce error-proneness through graphical visualization, and to reduce development efforts through code generation.
This thesis addresses the modeling, analysis, and execution of data in business processes and presents a novel approach for executing data-annotated process models in an entirely model-driven way. As a first step and formal grounding for the process execution, a conceptual framework for the integration of processes and data is introduced. This framework is complemented by operational semantics through a Petri net mapping extended with data considerations. Model-driven data execution comprises the handling of complex data dependencies, process data, and data exchange in case of communication between multiple process participants. This thesis introduces concepts from the database domain into BPM to enable the distinction of data operations, to specify relations between data objects of the same as well as of different types, to correlate modeled data nodes as well as received messages to the correct run-time process instances, and to generate messages for inter-process communication. The underlying approach, which is not limited to a particular process description language, has been implemented as a proof of concept.
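The core idea of a Petri net mapping extended with data can be sketched minimally: transitions consume and produce tokens as usual, but are additionally guarded by, and update, data object states. This is a hypothetical illustration, not the thesis's actual mapping, and all place, transition, and state names are invented:

```python
# Minimal sketch of data-aware Petri net semantics: each transition has
# input/output places plus a guard and an effect over data object states.

class DataAwarePetriNet:
    def __init__(self, marking, data):
        self.marking = dict(marking)   # place -> token count
        self.data = dict(data)         # data object -> current state

    def enabled(self, t):
        inputs, guard, _, _ = t
        # Enabled iff all input places hold a token AND the data guard holds.
        return all(self.marking.get(p, 0) > 0 for p in inputs) and guard(self.data)

    def fire(self, t):
        inputs, _, outputs, effect = t
        if not self.enabled(t):
            raise RuntimeError("transition not enabled")
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1
        effect(self.data)              # the activity's data manipulation

# 'approve order': consumes a token from p1, requires the order object to be
# in state 'created', produces a token on p2, and moves the order to 'approved'.
approve = (["p1"],
           lambda d: d["order"] == "created",
           ["p2"],
           lambda d: d.update(order="approved"))

net = DataAwarePetriNet({"p1": 1}, {"order": "created"})
net.fire(approve)
print(net.marking, net.data)
```

Soundness-style checks, such as the weak conformance discussed below, can then be phrased over this combined control-flow/data state space.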
Automation of data handling in business processes requires data-annotated and correct process models. Targeting the former, algorithms are introduced to extract information about data nodes, their states, and data dependencies from control information and to annotate the process model accordingly. Usually, not all required information can be extracted from control flow information, since some data manipulations are not specified. This requires further refinement of the process model. Given a set of object life cycles specifying allowed data manipulations, automated refinement of the process model towards containment of all data manipulations is enabled. Process models are an abstraction focusing on specific aspects in detail, e.g., the control flow and the data flow views are often represented through activity-centric and object-centric process models. This thesis introduces algorithms for roundtrip transformations enabling the stakeholder to add information to the process model in the view being most appropriate.
Targeting process model correctness, this thesis introduces the notion of weak conformance, which checks consistency between given object life cycles and the process model such that the process model may only utilize data manipulations specified directly or indirectly in an object life cycle. The notion is computed via soundness checking of a hybrid representation integrating control flow and data flow correctness checking. To make a process model executable, identified violations must be corrected. Therefore, an approach is proposed that identifies, for each violation, multiple alternative changes to the process model or the object life cycles.
Utilizing the results of this thesis, business processes can be executed entirely model-driven from the data perspective in addition to the control flow and resource perspectives already supported before. Thereby, the model creation is supported by algorithms partly automating the creation process while model consistency is ensured by data correctness checks.
This paper presents the concept of a learning centre for computer science (LZI) at Paderborn University. Starting from the subject-specific difficulties of computer science students, the services of the LZI are described, which span four areas: individual advice and supervision, an "open learning space", workshops and courses, and research. A first evaluation by means of feedback forms shows that the offering is received positively by the students. In the future, the LZI's services are to be further expanded and improved; further studies will provide the basis for this.
The choice of the right subject of study and the subsequent introductory study phase are often decisive for the successful course of a degree programme. A major challenge is to recognise, already in the first weeks of study, existing deficits in supposedly simple key competencies and to remedy them as soon as possible. A second, no less important challenge is to determine as early as possible, for each individual student, whether he or she has chosen the study programme that matches his or her personal inclinations, interests and abilities and contributes to the realisation of his or her life goals. Only then are students sufficiently and lastingly intrinsically motivated to successfully complete a demanding, complex degree programme. In this contribution, we focus on a measure that introduces students to a process of systematically reflecting on their own learning process and their own goals, and that relates the two to each other.
The goal of a new introductory study phase is to open up to students, by the end of the first semester, a multifaceted professional picture of computer science and business informatics with its broad range of tasks, and thereby to clarify the connections between the individual modules of the curriculum. Students are to be enabled to take the planning and organisation of their studies into their own hands largely independently.
We describe a computer science competition for upper secondary school students that presents the working world of a computer scientist as realistically as possible over several weeks. In the competition, the student teams develop an Android app and organise its development using project management methods oriented towards professional, agile processes. The contribution presents the theoretical background on competitions, the organisational and didactic decisions, a first evaluation, as well as reflection and outlook.
Teaching HCI (human-computer interaction) repeatedly poses the challenge of conducting practical exercises with exciting results that nevertheless do not get lost in technical details but remain HCI-focused. In the teaching module "Interaktionsdesign" at the University of Hamburg, students design and practically implement prototypical interaction concepts for the game Neverball within three weeks. Unlike in most introductory HCI courses, the students develop not mock-ups but running software. To make this possible within the project time, Neverball was extended with a TCP-based interface. This removes the laborious familiarisation with the game's source code, and the students can concentrate on their interaction prototypes. We describe the experiences from running the project several times and explain our approach to its implementation. The results are intended to support HCI teachers in designing similar practice-oriented exercises with tangible results.
The following article describes the evaluation of an instructional video on computational problem solving, which was developed on the basis of a comparative study of strong and weak problem solvers. As an example, a fictitious high performer solves a colouring problem in the video while thinking aloud; the individual steps are commented on and explained section by section. Whether students accept this learning concept and whether watching the video actually produces a learning effect was investigated through a survey and a first comparative study.
About the Authors
(2015)
On the basis of the planning, implementation, evaluation and revision of a joint seminar by media education and didactics of computer science, this essay sets out where the deficits of classical media education lie with respect to digital and interactive media, and which computer science content appears relevant from this perspective, in the sense of general education, for students of all teaching degrees.
When it comes to footnotes, Alexander von Humboldt was ahead of his times, even though his references leave much to be desired by today's academic standards. This article examines the footnotes of Humboldt's Essai politique sur l'île de Cuba (1826). While it is not always easy to decipher his sometimes cryptic references, the undertaking is worthwhile: Humboldt's footnotes not only reveal his vast networks of knowledge; they also provide glimpses of contemporary disputes among scholars that involved Humboldt's writings, and they present Humboldt's reactions to such disputes. Exploring Humboldt's footnotes consequently allows the reader to access both Humboldt the scholar and Humboldt the human being.
Gustav Rose (1798-1873) accompanied Alexander von Humboldt on his Russian journey and remained in contact with him, in person and by post, until Humboldt's death. The edition of the present letter aims to illuminate the significance of Gustav Rose in his relationship with Alexander von Humboldt and his influence on the mineralogical-geological part of the Kosmos, and to make this interesting historical document accessible to the reader.
Continental rifts are excellent regions where the interplay between extension, the build-up of topography, erosion and sedimentation can be evaluated in the context of landscape evolution. Rift basins also constitute important archives that potentially record the evolution and migration of species and the change of sedimentary conditions as a result of climatic change. Finally, rifts have increasingly become targets of resource exploration, such as hydrocarbons or geothermal systems. The study of extensional processes and the factors that further modify the mainly climate-driven surface process regime helps to identify changes in past and present tectonic and geomorphic processes that are ultimately recorded in rift landscapes.
The Cenozoic East African Rift System (EARS) is an exemplary continental rift system and ideal natural laboratory to observe such interactions. The eastern and western branches of the EARS constitute first-order tectonic and topographic features in East Africa, which exert a profound influence on the evolution of topography, the distribution and amount of rainfall, and thus the efficiency of surface processes. The Kenya Rift is an integral part of the eastern branch of the EARS and is characterized by high-relief rift escarpments bounded by normal faults, gently tilted rift shoulders, and volcanic centers along the rift axis.
Considering the Cenozoic tectonic processes in the Kenya Rift, the tectonically controlled cooling history of the rift shoulders, the subsidence history of the rift basins, and the sedimentation along and across the rift may help to elucidate the morphotectonic evolution of this extensional province. While tectonic forcing of surface processes may play a minor role in the low-strain rift on centennial to millennial timescales, it may be hypothesized that erosion and sedimentation processes impacted by climate shifts associated with pronounced changes in the availability of moisture have left important imprints in the landscape.
In this thesis I combined thermochronological data, geomorphic field observations, and morphometry of digital elevation models to reconstruct exhumation processes and erosion rates, as well as the effects of climate on erosion processes in different sectors of the rift. I present three sets of results: (1) new thermochronological data from the northern and central parts of the rift that quantitatively constrain the Tertiary exhumation and thermal evolution of the Kenya Rift; (2) 10Be-derived catchment-wide mean denudation rates from the northern, central and southern rift that characterize erosional processes on millennial to present-day timescales; and (3) paleo-denudation rates in the northern rift that constrain climatically controlled shifts in paleoenvironmental conditions during the early Holocene (African Humid Period).
Taken together, my studies show that time-temperature histories derived from apatite fission track (AFT) analysis, zircon (U-Th)/He dating, and thermal modeling bracket rifting in the Kenya Rift into two episodes, one between 65 and 50 Ma and one from about 15 Ma to the present. These two episodes are marked by rapid exhumation and uplift of the rift shoulders. Between 45 and 15 Ma the margins of the rift experienced very slow erosion/exhumation, with the accommodation of sediments in the rift basin.
In addition, I determined that present-day denudation rates in sparsely vegetated parts of the Kenya Rift amount to 0.13 mm/yr, whereas denudation rates in humid and more densely vegetated sectors of the rift flanks reach a maximum of only 0.08 mm/yr, despite steeper hillslopes. I inferred that hillslope gradient and vegetation cover control most of the variation in denudation rates across the Kenya Rift today. Importantly, my results support the notion that vegetation cover plays a fundamental role in setting the pace of hillslope erosion through its stabilizing effects on the land surface.
Finally, in a pilot study I highlighted how paleo-denudation rates in climatic threshold areas changed significantly during times of transient hydrologic conditions and involved a sixfold increase in erosion rates during increased humidity. This assessment is based on cosmogenic nuclide (10Be) dating of quartzitic deltaic sands that were deposited in the northern Kenya Rift during a highstand of Lake Suguta, which was associated with the Holocene African Humid Period. Taken together, my new results document the role of climate variability in erosion processes that impact climatic threshold environments, which may provide a template for potential future impacts of climate-driven changes in surface processes in the course of Global Change.
Spots on stellar surfaces are thought to be stellar analogues of sunspots. Thus, starspots are direct manifestations of strong magnetic fields. Their decay rate is directly related to the magnetic diffusivity, which itself is a key quantity for the deduction of an activity cycle length. So far, no single starspot decay has been observed, and thus no stellar activity cycle was inferred from its corresponding turbulent diffusivity.
We investigate the evolution of starspots on the rapidly-rotating K0 giant XX Triangulum. Continuous high-resolution and phase-resolved spectroscopy was obtained with the robotic 1.2-m STELLA telescope on Tenerife over a timespan of six years. With our line-profile inversion code iMap we reconstruct a total of 36 consecutive Doppler maps. To quantify starspot area decay and growth, we match the observed images with simplified spot models based on a Monte-Carlo approach.
It is shown that the surface of XX Tri is covered with large high-latitude and even polar spots and with occasional small equatorial spots. Just over the course of six years, we see a systematically changing spot distribution with various time scales and morphology such as spot fragmentation and spot merging as well as spot decay and formation.
For the first time, a starspot decay rate has been determined on a star other than the Sun. From our spot-decay analysis we determine an average linear decay rate of D = -0.067±0.006 Gm^2/day. From this decay rate, we infer a turbulent diffusivity of η_τ = (6.3±0.5) x 10^14 cm^2/s and consequently predict an activity cycle of 26±6 years. The obtained cycle length agrees very well with photometric observations.
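The quoted diffusivity is consistent with a simple back-of-the-envelope check, assuming (an assumption of this sketch, not a statement from the abstract) that a circular spot decays diffusively, r(t)^2 = r0^2 - 4ητ·t, so the area decays linearly with dA/dt = -4π·ητ:

```python
import math

# Consistency check of the quoted numbers under the assumed linear-decay law
# dA/dt = -4*pi*eta for diffusive decay of a circular spot.

D = 0.067e18 / 86400.0          # |decay rate|: 0.067 Gm^2/day converted to m^2/s
eta_m2_s = D / (4.0 * math.pi)  # turbulent diffusivity in m^2/s
eta_cm2_s = eta_m2_s * 1e4      # convert m^2/s -> cm^2/s

print(f"eta ~ {eta_cm2_s:.1e} cm^2/s")  # within a few percent of the quoted 6.3e14
```

The agreement (to within rounding of D) suggests this, or a closely related decay law, underlies the quoted conversion; the cycle-length prediction additionally requires a stellar length scale not given in the abstract.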
Our time series of Doppler maps further enables us to investigate the differential rotation of XX Tri. To this end, we applied a cross-correlation analysis. We detect a weak solar-like differential rotation with a surface shear of α = 0.016±0.003. This value agrees with similar studies of other RS CVn stars.
Furthermore, we found evidence for active longitudes and flip-flops. Whereas the more active longitude is located in phase towards the (unseen) companion star, the weaker active longitude is located at the opposite stellar hemisphere. From their periodic appearance, we infer a flip-flop cycle of ~2 years. Both activity phenomena are common on late-type binary stars.
Last but not least, we redetermine several astrophysical properties of XX Tri and its binary system, as large datasets of photometric and spectroscopic observations have become available since their last determination in 1999. Additionally, we compare the rotational spot modulation from photometric and spectroscopic studies.
The Barberton Greenstone Belt (BGB) in the northeastern part of South Africa belongs to the few well-preserved remnants of Archean crust. For more than a century, the BGB has been intensively studied at the surface, with detailed mapping of its surficial geological units and tectonic features. Nevertheless, the deeper structure of the BGB remains poorly understood. Various tectonic evolution models have been developed based on geochronological and structural data. These theories are highly controversial and centre on the question of whether plate tectonics - as geoscientists understand them today - was already operating on the early Earth, or whether vertical mass movements, driven by the higher temperature of the Earth in Archean times, governed continent development.
To get a step closer to answering the questions regarding the internal structure and formation of the BGB, magnetotelluric (MT) field experiments were conducted as part of the German-South African research initiative Inkaba yeAfrica. Five-component MT data (three magnetic and two electric channels) were collected at ~200 sites aligned along six profiles crossing the southern part of the BGB. Tectonic features like (fossil) faults and shear zones are often mineralized and can therefore have high electrical conductivities. Hence, an image of the subsurface conductivity distribution obtained from MT measurements can provide useful information on tectonic processes.
Unfortunately, the BGB MT data set is heavily affected by man-made electromagnetic noise caused, e.g., by powerlines and electric fences. Aperiodic spikes in the magnetic field components and corresponding offsets in the electric field components impair the data quality, particularly at periods >1 s, which are required to image deep electrical structures. Application of common methods for noise reduction, such as delay filtering and remote reference processing, only worked well for periods <1 s. Within the framework of this thesis, two new filtering approaches were developed to handle the severe noise in long-period data and obtain reliable processing results. The first algorithm is based on the Wiener filter in combination with a spike detection algorithm. Comparison of data variances of a local site with those of a reference site allows the identification of disturbed time series windows for each recorded channel at the local site. Using the data of the reference site, a Wiener filter algorithm is applied to predict physically meaningful data to replace the disturbed windows. While spikes in the magnetic channels are easily recognized and replaced, steps in the electric channels are more difficult to detect, depending on their offset. Therefore, I have implemented a novel approach based on time series differentiation, noise removal and subsequent integration to overcome this obstacle. A second filtering approach, in which spikes and steps in the time series are identified by comparing short- and long-time averages of the data, was also implemented as part of my thesis. In this approach, noise in the form of spikes and offsets is treated by interpolating the affected data samples. The new developments resulted in a substantial data improvement and allowed us to gain one to two decades of data (up to periods of 10 or 100 s).
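The second filtering idea, comparing short- and long-time averages and interpolating flagged samples, can be sketched on synthetic data. This is an illustrative toy version (window lengths, threshold factor, and the test signal are invented), not the thesis implementation:

```python
import numpy as np

# Toy STA/LTA-style despiking: flag samples where a short-time average of the
# detrended amplitude greatly exceeds a long-time average, then repair the
# flagged samples by linear interpolation from the unflagged ones.

def sta_lta_despike(x, short=3, long=50, factor=5.0):
    x = np.array(x, dtype=float)            # work on a copy
    amp = np.abs(x - x.mean())              # detrended amplitude
    sta = np.convolve(amp, np.ones(short) / short, mode="same")
    lta = np.convolve(amp, np.ones(long) / long, mode="same")
    bad = sta > factor * np.maximum(lta, 1e-12)
    good = ~bad
    # Interpolate the flagged samples from the surrounding good samples.
    x[bad] = np.interp(np.flatnonzero(bad), np.flatnonzero(good), x[good])
    return x, bad

# Synthetic "magnetic channel": smooth signal plus one isolated spike.
t = np.linspace(0, 10, 500)
signal = np.sin(t)
signal[250] += 50.0
cleaned, flagged = sta_lta_despike(signal)
```

Steps (offsets) in the electric channels need the extra differentiate/clean/integrate step described above, since an offset only shows up as a spike in the differentiated series.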
The re-processed MT data were used to image the electrical conductivity distribution of the BGB by 2D and 3D inversion. Inversion models are in good agreement with the surface geology delineating the highly resistive rocks of the BGB from surrounding more conductive geological units. Fault zones appear as conductive structures and can be traced to depths of 5 to 10 km. 2D models suggest a continuation of the faults further south across the boundary of the BGB. Based on the shallow tectonic structures (fault system) within the BGB compared to deeply rooted resistive batholiths in the area, tectonic models including both vertical mass transport and in parts present-day style plate tectonics seem to be most likely for the evolution of the BGB.
Alexander von Humboldt's Kosmos lectures, held in 1827/28, are both a canonical point of reference and a blank spot in the reception of his work. Numerous interpretations rest on just two unannotated reading editions of lecture notes taken by listeners of Humboldt who remained anonymous. His own lecture manuscripts, rediscovered in 2009, promise to broaden this narrow source base. Alfred Dove had already pointed to their existence in 1872 and at the time interpreted them as the nucleus of the written elaboration of the Kosmos. This view has persisted in the reception, while the materials Dove described fell into oblivion. Using selected examples, this article presents Humboldt's lecture manuscripts and his listeners' transcripts in their context, develops the thesis that the so-called Kosmos lectures stand independently of the five-volume Kosmos, and outlines the most important goals and contents of the planned digital edition.
In this paper we discuss how Alexander von Humboldt constructed a past for New Spain in his Political Essay on New Spain (1811) and how this text was, in turn, appropriated by Mexican historiography during the 19th century.
In order to do so, we analyze how the Prussian drew from American sources, particularly from the text of the Jesuit Francisco Javier Clavijero, written shortly before. We also study Humboldt’s conceptions of text and of history, highlighting the place of the indigenous in the composition of his reasoning. Finally, we give examples of how the Mexican nationalist historiography read and reinterpreted the Political Essay.
This article explores Alexander von Humboldt's reception of the figure of Christopher Columbus, mainly in his Relation historique and his Examen critique. First, the article explores what Humboldt's biographers have called "ein zweiter Kolumbus" (a second Columbus). Second, it traces the history of Humboldt's construction of a "poetic imagination" and a "mythical geography" that he attributed to Christopher Columbus.
Spectral fingerprinting
(2015)
Current research on runoff and erosion processes, as well as an increasing demand for sustainable watershed management, emphasizes the need for an improved understanding of sediment dynamics. This involves the accurate assessment of erosion rates and of sediment transfer, yield and origin. A variety of methods exist to capture these processes at the catchment scale. Among these, sediment fingerprinting, a technique to trace back the origin of sediment, has attracted increasing attention from the scientific community in recent years. It is a two-step procedure, based on the fundamental assumptions that potential sources of sediment can be reliably discriminated based on a set of characteristic ‘fingerprint’ properties, and that a comparison of source and sediment fingerprints allows the relative contribution of each source to be quantified.
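The second step, quantifying relative source contributions, is commonly cast as a constrained linear unmixing problem. A minimal sketch with made-up tracer values (not data from this work), assuming sediment properties mix linearly:

```python
import numpy as np

# Toy source-unmixing: estimate the proportion of each source in a sediment
# sample from a few fingerprint properties, with proportions summing to one.

def unmix(sources, sediment, weight=1e3):
    # sources: (n_properties, n_sources); sediment: (n_properties,)
    # Append sum(p) = 1 as a heavily weighted extra equation, solve by least
    # squares, then clip any tiny negative proportions and renormalize.
    A = np.vstack([sources, weight * np.ones(sources.shape[1])])
    b = np.append(sediment, weight)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    p = np.clip(p, 0.0, None)
    return p / p.sum()

# Two hypothetical sources, three tracer properties (values invented).
sources = np.array([[10.0, 2.0],
                    [ 1.0, 5.0],
                    [ 8.0, 3.0]])
sediment = 0.7 * sources[:, 0] + 0.3 * sources[:, 1]  # a known 70/30 mixture
print(unmix(sources, sediment))
```

Real applications add tracer conservativeness tests, measurement uncertainty, and often Monte Carlo resampling around this core; the artificial-mixture experiment described below plays exactly the role of the known 70/30 mixture here.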
This thesis aims at further assessing the potential of spectroscopy to assist and improve the sediment fingerprinting technique. Specifically, this work focuses on (1) whether potential sediment sources can be reliably identified based on spectral features (‘fingerprints’), whether (2) these spectral fingerprints permit the quantification of relative source contribution, and whether (3) in situ derived source information is sufficient for this purpose. Furthermore, sediment fingerprinting using spectral information is applied in a study catchment to (4) identify major sources and observe how relative source contributions change between and within individual flood events. And finally, (5) spectral fingerprinting results are compared and combined with simultaneous sediment flux measurements to study sediment origin, transport and storage behaviour.
For the sediment fingerprinting approach, soil samples were collected from potential sediment sources within the Isábena catchment, a meso-scale basin in the central Spanish Pyrenees. Undisturbed samples of the upper soil layer were measured in situ using an ASD spectroradiometer and subsequently sampled for measurements in the laboratory. Suspended sediment was sampled automatically by means of ISCO samplers at the catchment outlet as well as at the five major subcatchment outlets during flood events, and stored fine sediment from the channel bed was collected from 14 cross-sections along the main river. Artificial mixtures of known contributions were produced from source soil samples. Then, all source, sediment and mixture samples were dried and spectrally measured in the laboratory. Subsequently, colour coefficients and physically based features related to organic carbon, iron oxide, clay content and carbonate were calculated from all in situ and laboratory spectra. Spectral parameters passing a number of prerequisite tests were submitted to principal component analyses to study natural clustering of samples, discriminant function analyses to observe source differentiation accuracy, and a mixing model for source contribution assessment. In addition, annual as well as flood event based suspended sediment fluxes from the catchment and its subcatchments were calculated from rainfall, water discharge and suspended sediment concentration measurements using rating curves and Quantile Regression Forests. Results of sediment flux monitoring were interpreted individually with respect to storage behaviour, compared to fingerprinting source ascriptions and combined with fingerprinting to assess their joint explanatory potential.
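A common first step for the flux calculations mentioned above is a power-law rating curve, SSC = a·Q^b, fitted in log-log space; more flexible models such as Quantile Regression Forests then relax this fixed functional form. The sketch below uses synthetic values, not the Isábena data:

```python
import numpy as np

# Fit a power-law rating curve SSC = a * Q**b by linear regression in
# log-log space, where Q is discharge and SSC is suspended sediment
# concentration.

def fit_rating_curve(Q, SSC):
    b, log_a = np.polyfit(np.log(Q), np.log(SSC), 1)
    return np.exp(log_a), b

Q = np.array([0.5, 1.0, 2.0, 5.0, 10.0])   # discharge, m^3/s (synthetic)
SSC = 3.0 * Q ** 1.4                        # synthetic data with a=3, b=1.4
a, b = fit_rating_curve(Q, SSC)
print(a, b)
```

With noisy field data the log-log fit also needs a bias correction when transforming back; the sketch omits this since the synthetic data are exact.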
In response to the key questions of this work, (1) three source types (land use) and five spatial sources (subcatchments) could be reliably discriminated based on spectral fingerprints. The artificial mixture experiment revealed that while (2) laboratory parameters permitted source contribution assessment, (3) the use of in situ derived information was insufficient. Apparently, high discrimination accuracy does not necessarily imply good quantification results. When applied to suspended sediment samples of the catchment outlet, the spectral fingerprinting approach was able to (4) quantify the major sediment sources: badlands and the Villacarli subcatchment, respectively, were identified as main contributors, which is consistent with field observations and previous studies. Thereby, source contribution was found to vary both within and between individual flood events. Sediment flux was also found to vary considerably, annually as well as seasonally and on a flood event basis. Storage was confirmed to play an important role in the sediment dynamics of the studied catchment, with floods of lower total sediment yield tending to deposit material and floods of higher yield tending to remove material from the channel bed. Finally, a comparison of flux measurements with fingerprinting results highlighted the fact that (5) immediate transport from sources to the catchment outlet cannot be assumed. A combination of the two methods revealed aspects of sediment dynamics that neither technique could have uncovered individually.
In summary, spectral properties provide a fast, non-destructive, and cost-efficient means to discriminate and quantify sediment sources, whereas, unfortunately, straightforward in situ collected source information is insufficient for the approach. Mixture modelling using artificial mixtures permits valuable insights into the capabilities and limitations of the method, and similar experiments are strongly recommended for future studies. Furthermore, a combination of techniques such as (spectral) sediment fingerprinting and sediment flux monitoring can provide a comprehensive understanding of sediment dynamics.
The Norway lobster, Nephrops norvegicus, is a burrowing decapod with rhythmic (24 h) burrow emergence governed by the circadian system. It is an important resource for European fisheries, and its behavior deeply affects its availability. The current knowledge of Nephrops circadian biology is phenomenological, as is currently the case for almost all crustaceans. In an attempt to elucidate the putative molecular mechanisms underlying circadian gene regulation in Nephrops, we used a transcriptomics approach on cDNA extracted from the eyestalk, a structure that plays a crucial role in controlling the behavior of decapods. We studied 14 male lobsters under a 12:12 light-darkness blue-light cycle. We used the HiSeq 2000 Illumina platform to sequence two eyestalk libraries (under light and darkness conditions), obtaining about 90 million 100-bp paired-end reads. Trinity was used for the de novo reconstruction of the transcriptomes; the contig size at which half of all assembled bases reside in contigs of that size or longer (N50) was 1796 (light) and 2055 (darkness). We compiled a list of candidate clock genes and focused on the canonical ones: timeless, period, clock and bmal1. Cloning of the assembled fragments validated the Trinity output. The putative Nephrops clock genes showed high levels of identity (blastx on NCBI) with known crustacean clock gene homologs, such as those of Eurydice pulchra (period: 47%, timeless: 59%, bmal1: 79%) and Macrobrachium rosenbergii (clock: 100%). We also found a vertebrate-like cryptochrome 2. RT-qPCR showed that only timeless had a robust diel pattern of expression. Our data are in accordance with the current knowledge of the crustacean circadian clock, reinforcing the idea that the molecular clockwork of this group differs in some respects from the established model in Drosophila melanogaster.
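The N50 statistic reported above is simple to compute from a list of contig lengths. The following is a minimal illustrative sketch (Trinity reports assembly statistics itself; the function and variable names here are ours):

```python
def n50(contig_lengths):
    """Return the length L such that contigs of length >= L together
    contain at least half of all assembled bases."""
    total = sum(contig_lengths)
    running = 0
    # Walk contigs from longest to shortest, accumulating bases
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if running >= total / 2:
            return length
    return 0

# Example: for lengths [2, 2, 2, 3, 3, 4, 8, 8] (32 bases in total),
# the two longest contigs already cover 16 bases, so N50 = 8.
print(n50([2, 2, 2, 3, 3, 4, 8, 8]))  # 8
```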
Climate change is likely to impact the seasonality and generation processes of floods in the Nordic countries, which has direct implications for flood risk assessment, design flood estimation, and hydropower production management. Using a multi-model/multi-parameter approach to simulate daily discharge for a reference (1961–1990) and a future (2071–2099) period, we analysed the projected changes in flood seasonality and generation processes in six catchments with mixed snowmelt/rainfall regimes under the current climate in Norway. The multi-model/multi-parameter ensemble consists of (i) eight combinations of global and regional climate models, (ii) two methods for adjusting the climate model output to the catchment scale, and (iii) one conceptual hydrological model with 25 calibrated parameter sets. Results indicate that autumn/winter events become more frequent in all catchments considered, which leads to an intensification of the current autumn/winter flood regime for the coastal catchments, a reduction of the dominance of spring/summer flood regimes in a high-mountain catchment, and a possible systematic shift in the current flood regimes from spring/summer to autumn/winter in the two catchments located in northern and south-eastern Norway. The changes in flood regimes result from increasing event magnitudes or frequencies, or a combination of both, during autumn and winter. Changes towards more dominant autumn/winter events correspond to an increasing relevance of rainfall as a flood generating process (FGP), which is most pronounced in those catchments with the largest shifts in flood seasonality. Here, rainfall replaces snowmelt as the dominant FGP, primarily due to increasing temperature. We further analysed the contribution of the individual ensemble components to the overall uncertainty in the projected changes and found that the climate projections and the methods for downscaling or bias correction tend to be the largest contributors. The relative role of hydrological parameter uncertainty, however, is highest for those catchments showing the largest changes in flood seasonality, which confirms the lack of robustness in hydrological model parameterization for simulations under transient hydrometeorological conditions.
This short essay surveys the surviving correspondence between A. v. Humboldt and Charles Lyell. It also publishes an undated, hitherto unknown letter from Humboldt to Lyell. The letter can be dated with the help of a letter to Pertz; for this reason, that letter, too, is published here in full for the first time.
Zwischen Sein und Sollen
(2015)
Bericht über die Tätigkeit des Menschenrechtsausschusses der Vereinten Nationen im Jahre 2014
(2015)
Summary – unofficial headnotes:
• A general ban on wearing the burqa in public does not violate Art. 8 or Art. 9 ECHR.
• Wearing a burqa falls, as a matter of private life, within the scope of protection of Art. 8 ECHR and, as the practice of religion, within the scope of protection of Art. 9 ECHR.
• A general public burqa ban for the protection of safety pursues a legitimate aim but, in the absence of a general danger, is not necessary and therefore disproportionate.
• The protection of respect for the equality of men and women, or for human dignity, cannot be invoked as a legitimate aim for a burqa ban.
• The protection of respect for the minimum requirements of living together in society constitutes a legitimate aim for a burqa ban. In view of the wide margin of appreciation afforded to states in such a question, the burqa ban is not disproportionate.
Microsaccades
(2015)
The first thing we do upon waking is open our eyes. Rotating them in our eye sockets, we scan our surroundings and assemble the information into a picture in our head. Eye movements can be split into saccades and fixational eye movements, the latter occurring when we attempt to fixate our gaze; they consist of microsaccades, drift and tremor. Before we even lift our eyelids, eye movements such as saccades and microsaccades, which let the eyes jump from one position to another, have partially been prepared in the brain stem. Saccades and microsaccades are often assumed to be generated by the same mechanisms, but a statistical account of how saccades and microsaccades can be classified according to shape has not yet been reported. Only within the last decade has research turned in earnest to the properties and generation of microsaccades; consequently, we are only beginning to understand the dynamic processes governing microsaccadic eye movements. This thesis assesses the dynamics governing the generation of microsaccades and develops a model of the underlying processes. Eye movement trajectories from different experiments, recorded with a video-based eye-tracking technique, are used, and a novel method is proposed for the scale-invariant detection of saccades (events of large amplitude) and microsaccades (events of small amplitude). Using a time-frequency approach, the method is examined in different experiments and validated against simulated data. A shape model is suggested that allows a simple estimation of saccade- and microsaccade-related properties. Finally, for sequences of microsaccades, a time-dynamic Markov model with a time-varying memory horizon is proposed, which best describes such sequences.
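The thesis proposes a novel scale-invariant, time-frequency detection method. For orientation, the widely used velocity-threshold baseline of Engbert and Kliegl (2003), against which such methods are typically compared, can be sketched as follows; parameter values and names are illustrative, not the thesis's:

```python
import numpy as np

def detect_saccades(x, y, rate, lam=6.0, min_dur=3):
    """Velocity-threshold (micro)saccade detection in the spirit of
    Engbert & Kliegl (2003): smoothed velocities, a median-based
    elliptic threshold, and a minimum-duration criterion.
    Returns a list of (onset, offset) sample indices."""
    # 5-point smoothed velocity estimate (deg/s if x, y are in degrees)
    vx = (x[4:] + x[3:-1] - x[1:-3] - x[:-4]) * rate / 6.0
    vy = (y[4:] + y[3:-1] - y[1:-3] - y[:-4]) * rate / 6.0
    # robust, median-based velocity standard deviation per axis
    sx = np.sqrt(np.median(vx ** 2) - np.median(vx) ** 2)
    sy = np.sqrt(np.median(vy ** 2) - np.median(vy) ** 2)
    # elliptic criterion: sample counts as saccadic if outside the ellipse
    crit = (vx / (lam * sx)) ** 2 + (vy / (lam * sy)) ** 2 > 1.0
    # collect runs of supra-threshold samples of sufficient length
    events, start = [], None
    for i, above in enumerate(crit):
        if above and start is None:
            start = i
        elif not above and start is not None:
            if i - start >= min_dur:
                events.append((start + 2, i + 1))  # +2 maps back to samples
            start = None
    if start is not None and len(crit) - start >= min_dur:
        events.append((start + 2, len(crit) + 1))
    return events
```

On simulated data consisting of a small oscillatory drift plus a single fast ramp, the function isolates the ramp as one event; the threshold adapts to the noise level of each recording, which is what makes velocity-based detection usable across observers.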
Several authors have highlighted that the time course of an experiment itself can have a substantial influence on the interpretability of experimental effects. Since mixed-effects modeling has enabled researchers to investigate more complex problems with greater precision than before, two naming experiments were conducted with college students, with and without non-words intermixed, and analyzed with regard to frequency, quality, interactive and trial-history effects. The present analyses build on and extend the approach of Bates, Kliegl, Vasishth, and Baayen (2015) in order to converge on a parsimonious model that accounts for autocorrelated errors caused by trial history. For three of four cases, a history-sensitive model improved the model fit over a history-naïve model and explained more deviance. In one of these cases, the approach presented here helped reveal an interaction between stimulus frequency and quality that was not significant without a trial-history account. Main and joint effects, limitations, as well as directions for further research, are briefly discussed.
Intuitively, it is clear that neural processes and eye movements in reading are closely connected, but only few studies have investigated both signals simultaneously; the usual approach is instead to record them in separate experiments and to subsequently consolidate the results. The few studies that did record both signals together, however, have shown that it is feasible to coregister eye movements and EEG in natural reading, and they have contributed greatly to the understanding of oculomotor processes in reading. The present thesis builds upon that work, assessing to what extent coregistration can be helpful for sentence processing research.
In the first study, we explore how well coregistration is suited to study subtle effects common to psycholinguistic experiments by investigating the effect of distance on dependency resolution. The results demonstrate that researchers must improve the signal-to-noise ratio to uncover more subdued effects in coregistration. In the second study, we compare oscillatory responses in different presentation modes. Using robust effects from world knowledge violations, we show that the generation and retrieval of memory traces may differ between natural reading and word-by-word presentation. In the third study, we bridge the gap between our knowledge of behavioral and neural responses to integration difficulties in reading by analyzing the EEG in the context of regressive saccades. We find the P600, a neural indicator of recovery processes, when readers make a regressive saccade in response to integration difficulties.
The results in the present thesis demonstrate that coregistration can be a useful tool for the study of sentence processing. However, they also show that it may not be suitable for some questions, especially if they involve subtle effects.
MULTILIT
(2015)
This paper presents an overview of the linguistic analyses developed in the MULTILIT project and of the processing of the oral and written texts collected. The project investigates the language abilities of multilingual children and adolescents, in particular those who have Turkish and/or Kurdish as a mother tongue. A further aim of the project is to examine, from a psycholinguistic and sociolinguistic perspective, the extent to which competence in academic registers is achieved on the basis of the languages spoken by the children, including the language(s) spoken at home, the language of the country of residence and the first foreign language. To be able to examine these questions using corpus-linguistic parameters, we created categories of analysis in MULTILIT.
The data collection comprises texts from bilingual and monolingual children and adolescents in Germany in their first language Turkish, their second language German and their foreign language English. Pupils aged between nine and twenty produced monologic oral and written texts in two genres, narrative and discursive. On the basis of these samples, we examine linguistic features such as lexical expression (lexical density, lexical diversity), syntactic complexity (syntactic and discursive packaging) as well as phonology in the oral texts and orthography in the written texts, with the aim of investigating the pupils' growing mastery of these features in academic and informal registers.
To this end, the raw data have been transcribed using transcription conventions developed especially for the needs of the MULTILIT data. They are based on the commonly used HIAT and GAT transcription conventions, supplemented with conventions that provide additional information, such as features at the graphic level.
The categories of analysis comprise a large number of linguistic categories such as word classes, syntax, noun phrase complexity, complex verbal morphology, direct speech and text structures. We also annotate errors and norm deviations at a wide range of levels (orthographic, morphological, lexical, syntactic and textual). In view of the different language systems, these criteria are considered separately for all languages investigated in the project.
Schools are in the midst of upheaval, marked by factors such as declining pupil numbers, in- and out-migration of families with their children, growing numbers of school-age children with a migration background, and more. Schools thus face new challenges. In addition, the trend towards the Gymnasium continues. Erratic, never-ending reforms create confusing school structures and make reliable orientation difficult. And in the increasingly harsh competition between school types, public schools are losing more and more pupils to private schools. Numerous municipalities already find themselves forced to scale back their educational offerings and close their schools.
If a municipality can no longer offer its citizens a basic level of schooling, the consequences are far-reaching: where schools have to close, "the town dies too". A comprehensive, diverse educational offering covering the whole area, by contrast, is a load-bearing pillar of a functioning municipal infrastructure. This is the starting point of the 21st symposium of the Kommunalwissenschaftliches Institut of the University of Potsdam. It addresses central issues of the sustainable provision of school infrastructure in municipalities, including reports from practice covering best-practice models as well as the conditions for success and the pitfalls of administrative practice. The symposium thereby also offers municipal decision-makers impulses for generating and handling options for securing school locations in municipal education management.
We consider a Cauchy problem for the heat equation in a cylinder X × (0, T) over a domain X in n-dimensional space, with data on a strip lying on the lateral surface. The strip is of the form S × (0, T), where S is an open subset of the boundary of X. The problem is ill-posed. Under natural restrictions on the configuration of S we derive an explicit formula for solutions of this problem.
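In symbols, one standard way to state such a problem is the following (the notation here is ours, not necessarily the paper's, with Cauchy data $u_0$, $u_1$ prescribed on the lateral strip):

```latex
\begin{aligned}
(\partial_t - \Delta)\, u &= 0 && \text{in } X \times (0,T),\\
u = u_0, \quad \partial_\nu u &= u_1 && \text{on } S \times (0,T),
\end{aligned}
```

where $\partial_\nu$ denotes the outward normal derivative on the boundary of $X$. Prescribing both $u$ and its normal derivative on only a part $S$ of the lateral boundary is what renders the problem ill-posed: small perturbations of the data can produce arbitrarily large changes in the solution.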
Since the 1992 UN summit in Rio de Janeiro, political and public attention to the topic of sustainability has grown. Programmes and measures to protect and preserve the environment and social living conditions have been implemented in almost every country in the world. Despite notable progress, however, the effects achieved so far are wholly insufficient. All the more interesting, then, is a look at a successful actor in the field of environmental and sustainability policy: Cuba. The Caribbean state's experience is scarcely reported on in the German-speaking world. With their study, the authors contribute to closing this gap, analysing the policies, strategies and measures that, despite manifold problems, have led to a successful sustainability policy in Cuba.
Modality in Kakataibo
(2015)
This paper explores the semantic space of modality in Kakataibo (Panoan). It is found that Kakataibo makes a distinction in the modal space based on the modality type. Circumstantial modality is encoded by a construction, while the epistemic space is conveyed by the second-position enclitics =dapi 'inferential', =id 'second-hand information' and =kuni 'contrastive assertion'. However, none of these strategies to encode modality restricts the quantificational force, leaving it underspecified. These facts are consistent with the predictions of current typologies of modal systems.
It has been observed for many African languages that focussed subjects have to appear outside of their syntactic base position, as opposed to focussed objects, which can remain in situ. This is known as the subject-object asymmetry of focus marking, which Fiedler et al. (2010) claim to hold also for Akan. Genzel (2013), on the other hand, argues that Akan does not exhibit a subject-object focus asymmetry. A questionnaire study and a production experiment were carried out to investigate whether focussed subjects may indeed be realized in situ in Akan. The results suggest that (i) focussed subjects need not be realized ex situ, and that (ii) the syntactic preference for the realization of a focussed subject highly depends on exhaustivity.
Sentence type marking is realized by two suffixes in Aymara: one marks declaratives and the other polar sentences (polar questions and negated sentences), by picking out one or two propositions, respectively. A third suffix, initially associated with wh-questions, turns out to be a (scalar) additive and unrelated to sentence type. The sentence-type-related suffixes associate with focus, and the additive can associate with focus by attaching to the focused constituent.
According to Aikhenvald (2007:5), descriptive linguistics or linguistic fieldwork "ideally involves observing the language as it is used, becoming a member of the community, and often being adopted into the kinship system". Descriptive linguistics therefore differs from theoretical linguistics in that while the former seeks to describe natural languages as they are used, the latter, beyond describing, attempts to explain how or why language phenomena behave in certain ways. Thus, I will abstract away from any preconceived ideas of how sentences ought to be in Awing and take the linguist/reader through focus and interrogative constructions to get a feeling for how the Awing people interact verbally.
This paper reopens the discussion on focus marking in Akan (Kwa, Niger-Congo) by examining the semantics of the so-called focus marker in the language. It is shown that the so-called focus marker expresses exhaustivity when it occurs in a sentence with narrow focus. The study employs four standard tests for exhaustivity proposed in the literature to examine the semantics of Akan focus constructions (Szabolcsi 1981, 1994; É. Kiss 1998; Hartmann and Zimmermann 2007). It is shown that although a focused entity with the so-called focus marker nà is interpreted to mean 'only X and nothing/nobody else', this meaning appears to be pragmatic.
ה"חוק" וה"טבע" בברית המילה
(2015)
The article opens with a discussion of the biblical term "ḥoq" (statute; as in the verse "If you walk in my statutes", Leviticus 26:3-4), presenting the differing viewpoints of the sages who discuss it in the Palestinian midrash Leviticus Rabbah.
The second part of the article examines the specific case of the commandment of circumcision, a classic example of a commandment the sages call a "ḥoq".
The article discusses the various rationales offered for this commandment (including those proposed in the modern period in the spirit of anthropological and psychoanalytic explanations). The discussion closes with a debate, original as far as can be conjectured, preserved in the Talmud between a third-century Palestinian sage, Rabbi Akiva, and a Roman named Turnus Rufus (or Tineius Rufus), who represents the Roman conception of "nature".
The latter attacks Rabbi Akiva over circumcision, arguing that through it the Jews inflict a blemish on the infant's body. Rabbi Akiva's answer is analyzed from a viewpoint focusing on the tension between "nature" and "law".
Under the EU-wide REACH Regulation, alternatives to animal testing have gained importance in toxicology. These alternative methods fall into in vitro and in silico approaches. This dissertation addresses several concepts of in silico toxicology.
The topics range from quantitative structure-activity relationships (QSAR) and a new take on the established concept for deriving threshold values to computational models of alcohol and bisphenol A metabolism.
The chapter on QSAR essentially concerns the construction and analysis of a database of 878 substances compiled from animal studies in the archive of the German Federal Institute for Risk Assessment (Bundesinstitut für Risikobewertung). Its design was aligned with an existing database in order to generate as large a data pool as possible. The analysis showed, among other things, that substances of lower molecular weight exhibited a higher potential for toxicological damage than larger molecules.
Using the so-called TTC concept (threshold of toxicological concern), threshold values can be derived for substances with low exposure for which no toxicological data are available. In this work, such thresholds were derived for the substances of three databases. First, the substances were assigned, on the basis of their structure as is common practice, to the categories "not toxic", "possibly toxic" and "clearly toxic". Substances placed in one of the three classes on structural grounds receive the corresponding threshold value. Because the third class also absorbs substances whose toxicity cannot be determined, it is very large. The first two classes were therefore merged in this work to allow a larger data pool. A further innovation is the derivation of an internal threshold value. This procedure has the advantage that the route of exposure is factored out, so that, for example, studies with oral administration can be compared with studies with dermal administration.
Physiologically based kinetic modelling makes it possible to reproduce processes in the human body using dedicated software, allowing exposures to chemicals to be simulated. In one part of this work, alcohol exposures of breastfed newborns whose mothers had consumed alcoholic beverages immediately beforehand were simulated. The model showed that the child's exposure was consistently low. After one glass of wine, peak blood alcohol concentrations of 0.0034 per mille were calculated for newborns. For comparison, the exposure from an alcohol-containing herbal medicine approved for infants was simulated, reaching peak concentrations of 0.0141 per mille. Recommendations that occasional consumption has no harmful effect on the child therefore appear to be scientifically well founded.
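The dissertation itself used full physiologically based kinetic modelling; as a much cruder illustration of why the infant's exposure comes out so low, here is a back-of-envelope Widmark-style estimate. All parameter values (body weights, distribution factors, milk volume) are illustrative assumptions of ours, not the dissertation's, and ethanol transfer into milk is simplified to equal concentrations in milk and maternal blood:

```python
def widmark_peak_bac(alcohol_g, body_weight_kg, r):
    """Peak blood alcohol concentration in g/kg (~per mille) by the
    classic Widmark formula, ignoring elimination over time."""
    return alcohol_g / (r * body_weight_kg)

# Mother: one glass of wine (~12 g ethanol), 60 kg, distribution factor 0.6
mother_bac = widmark_peak_bac(12.0, 60.0, 0.6)

# Ethanol concentration in milk assumed equal to blood (g/L ~ per mille)
milk_conc_g_per_l = mother_bac

# Infant: one feed of 0.15 L milk, 4 kg body weight, distribution factor 0.8
infant_dose_g = milk_conc_g_per_l * 0.15
infant_bac = widmark_peak_bac(infant_dose_g, 4.0, 0.8)

print(round(mother_bac, 3), round(infant_bac, 4))
```

Even this deliberately pessimistic estimate (no elimination in mother or child) puts the infant's peak more than twentyfold below the mother's, which is consistent in spirit with the very low exposures the kinetic model predicts.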
A further kinetic model addressed the metabolism of bisphenol A. Partly contradictory data on BPA exposure in the scientific literature repeatedly prompt calls to adjust the chemical's threshold value. The functionality of the enzymes involved in its metabolism can vary between individuals. Modelling showed that this variability alone can cause calculated plasma levels of different individuals to differ by a factor of up to 4.7.
The work thus contributes to the use and further development of in silico models for a variety of toxicological questions.
Stream water and groundwater are important freshwater resources, but their quality is degraded by harmful solutes introduced by human activities. The interface between stream water and subsurface water is an important zone for the retention, transformation and attenuation of these solutes. Streambed structures enhance these processes through increased water and solute exchange across this interface, denoted hyporheic exchange.
This thesis investigates the influence of hydrological and morphological factors on hyporheic water and solute exchange as well as redox-reactions in fluvial streambed structures on the intermediate scale (10–30m). For this purpose, a three-dimensional numerical modeling approach for coupling stream water flow with porous media flow is used. Multiple steady state stream water flow scenarios over different generic pool-riffle morphologies and a natural in-stream gravel bar are simulated by a computational fluid dynamics code that provides the hydraulic head distribution at the streambed. These heads are subsequently used as the top boundary condition of a reactive transport groundwater model of the subsurface beneath the streambed. Ambient groundwater that naturally interacts with the stream water is considered in scenarios of different magnitudes of downwelling stream water (losing case) and upwelling groundwater (gaining case). Also, the neutral case, where stream stage and groundwater levels are balanced is considered. Transport of oxygen, nitrate and dissolved organic carbon and their reaction by aerobic respiration and denitrification are modeled.
The results show that stream stage and discharge primarily induce hyporheic exchange flux and solute transport, with implications for residence times and reactions at both fully and partially submerged structures. Gaining and losing conditions significantly diminish the extent of the hyporheic zone and the water exchange flux, and shorten residence times, for both fully and partially submerged structures. With increasing magnitude of gaining or losing conditions, these metrics decrease exponentially.
Stream water solutes are transported into the hyporheic zone mainly by advection, so their influx corresponds directly to the infiltrating water flux. Aerobic respiration takes place in the shallow streambed sediments, coinciding largely with the extent of the hyporheic exchange flow. Denitrification occurs mainly in a "reactive fringe" surrounding the aerobic zone, where the oxygen concentration is low but a sufficient amount of stream-water carbon is still available. The solute consumption rates and the efficiency of the aerobic and anaerobic reactions depend primarily on the available reactive areas and the residence times, both of which are controlled by the interplay between the hydraulic head distribution at the streambed and the gradients between stream stage and ambient groundwater. The highest solute consumption rates can be expected under neutral conditions, where the highest solute flux, the longest residence times and the largest extent of hyporheic exchange occur. The results of this thesis show that streambed structures on the intermediate scale have a significant potential to contribute to a net solute turnover that can support a healthy status of the aquatic ecosystem.
Wie verhandelt die Praxis?
(2015)
From the contents:
- 10 years of the Responsibility to Protect: a victory for human rights? An analysis from the perspectives of political science and law
- New rules on the absence of the accused before the ICC: human rights requirements for in absentia proceedings
- ECtHR: S.A.S. v. France – case note on the burqa ban
We study the segregation of subducted oceanic crust (OC) at the core-mantle boundary (CMB) and its ability to accumulate and form large thermochemical piles (such as the seismically observed Large Low Shear Velocity Provinces, LLSVPs). Our high-resolution numerical simulations suggest that the longevity of LLSVPs for up to three billion years, and possibly longer, can be ensured by a balance between the rate of segregation of high-density OC material to the CMB and the rate of its entrainment away from the CMB by mantle upwellings.
For the range of parameters tested in this study, a large-scale compositional anomaly forms at the CMB, similar in shape and size to the LLSVPs. Neutrally buoyant thermochemical piles formed by mechanical stirring, in which a thermally induced negative density anomaly is balanced by the presence of a fraction of dense anomalous material, best resemble the geometry of the LLSVPs. Such neutrally buoyant piles tend to emerge and survive for at least 3 Gyr in simulations with quite different parameters. We conclude that, for a plausible range of density anomalies of OC material in the lower mantle, it is likely that the OC segregates to the CMB, is mechanically mixed with the ambient material, and forms neutrally buoyant large-scale compositional anomalies similar in shape to the LLSVPs.
We have developed an efficient FEM code with dynamically adaptive time and space resolution and a marker-in-cell methodology. This enabled us to model thermochemical mantle convection at realistically high convective vigor, with strong thermally induced viscosity variations, and to follow the long-term evolution of compositional fields.
In this thesis we study reciprocal classes of Markov chains. Given a continuous time Markov chain on a countable state space, acting as reference dynamics, the associated reciprocal class is the set of all probability measures on path space that can be written as a mixture of its bridges. These processes possess a conditional independence property that generalizes the Markov property, and evolved from an idea of Schrödinger, who wanted to obtain a probabilistic interpretation of quantum mechanics.
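In symbols (the notation here is ours, not necessarily the thesis's): if $P$ is the law of the reference chain on $[0,T]$ and $P^{xy} = P(\,\cdot \mid X_0 = x, X_T = y)$ denotes its bridges, the reciprocal class can be written as the set of all mixtures of bridges,

```latex
\mathcal{R}(P) \;=\; \Bigl\{\, Q = \sum_{x,y} \mu(x,y)\, P^{xy} \;:\; \mu \ \text{a probability measure on pairs of endpoints} \,\Bigr\},
```

where the sum runs over the countable state space. Every element of $\mathcal{R}(P)$ inherits the conditional independence property mentioned above: given $X_s$ and $X_t$, the path inside $[s,t]$ is independent of the path outside that interval.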
Associated to a reciprocal class is a set of reciprocal characteristics: space-time functions that determine the reciprocal class. We compute these characteristics explicitly and divide them into two main families, arc characteristics and cycle characteristics. As a byproduct, we obtain an explicit criterion for checking when two different Markov chains share their bridges.
Starting from the characteristics we offer two different descriptions of the reciprocal class, including its non-Markov probabilities.
The first description is based on a pathwise approach, the second on short-time asymptotics. With the first approach one produces a family of functional equations whose only solutions are precisely the elements of the reciprocal class. These equations are integration-by-parts formulae on path space, associated with derivative operators that perturb the paths by means of the addition of random loops. Several geometrical tools are employed to construct such formulas. The problem of obtaining sharp characterizations is also considered, showing some interesting connections with discrete geometry. Examples of such formulas are given in the framework of counting processes and random walks on Abelian groups, where the set of loops has a group structure.
In addition to this global description, we propose a second approach based on the short-time behavior of a reciprocal process. Just as the Markov property and short-time expansions of transition probabilities characterize Markov chains, we show that a reciprocal class is characterized by imposing the reciprocal property together with two families of short-time expansions for the bridges. This local approach is suitable for studying reciprocal processes on general countable graphs. As an application of our characterization, we consider several interesting graphs, such as lattices, planar graphs, the complete graph, and the hypercube.
Finally, we obtain some first results about concentration of measure implied by lower bounds on the reciprocal characteristics.
The relationship between nutrition and the development of chronic diseases, including metabolic syndrome, diabetes mellitus, cancer and cardiovascular disease, has been well studied. Changes in the GH-IGF-1 axis in association with nutrition-related diseases have likewise been reported. The interplay between GH, total IGF-1 and the different inhibitory and stimulatory IGF-1 binding proteins (IGFBPs) results in IGF-1 bioactivity, the ability of IGF-1 to induce phosphorylation of its receptor and consequently its signaling; IGF-1 bioactivity is therefore sufficient to reflect any change in the GH-IGF-1 system. Accumulating evidence suggests that both a high-protein diet, characterized by increased glucagon secretion, and insulin-induced hypoglycemia increase mortality rates, but the mechanisms are unclear. Both glucagon and insulin-induced hypoglycemia, however, are potent stimuli of GH secretion. The aim of the current study was therefore to identify the impact of glucagon and insulin-induced hypoglycemia on IGF-1 bioactivity as a possible mechanism. In a double-blind placebo-controlled study, glucagon was administered intramuscularly to 13 type 1 diabetic patients (6 males/7 females; BMI: 24.8 ± 0.95 kg/m2), 11 obese subjects (OP; 5/6; 34.4 ± 1.7 kg/m2) and 13 healthy lean participants (LP; 6/7; 21.7 ± 0.6 kg/m2); in a second double-blind placebo-controlled study, 12 obese subjects (OP; 6/6; 34.4 ± 1.7 kg/m2) and 13 healthy lean participants (LP; 6/7; 21.7 ± 0.6 kg/m2) performed an insulin tolerance test. Changes in GH, total IGF-1, IGF binding proteins (IGFBPs) and IGF-1 bioactivity, the latter measured by the cell-based KIRA method, were investigated. In addition, the interaction between the metabolic hormones (glucagon and insulin) and the GH-IGF-1 system at the transcriptional level was studied using mouse primary hepatocytes.
The results of this thesis show that glucagon decreased IGF-1 bioactivity in humans independently of endogenous insulin levels, most likely through modulation of IGFBP-1 and IGFBP-2 levels. The glucagon-induced reduction in IGF-1 bioactivity may represent a novel mechanism underlying the impact of glucagon on GH secretion and may explain the negative effect of a high-protein diet on cardiovascular risk and mortality. In addition, insulin-induced hypoglycemia was associated with a decrease in IGF-1 bioactivity through up-regulation of IGFBP-2. These results may point to a possible, so far poorly explored mechanism behind the strong association between hypoglycemia and increased cardiovascular mortality among diabetic patients.
This dissertation investigates the working memory mechanism subserving human sentence processing and its relative contribution to processing difficulty as compared to syntactic prediction. Within the last decades, evidence for a content-addressable memory system underlying human cognition in general has accumulated (e.g., Anderson et al., 2004). In sentence processing research, it has been proposed that this general content-addressable architecture is also used for language processing (e.g., McElree, 2000).
Although there is a growing body of evidence from various kinds of linguistic dependencies that is consistent with a general content-addressable memory subserving sentence processing (e.g., McElree et al., 2003; Van Dyke, 2006), the case of reflexive-antecedent dependencies has challenged this view. It has been proposed that the processing of reflexive-antecedent dependencies relies on syntactic-structure-based memory access rather than on cue-based retrieval within a content-addressable framework (e.g., Sturt, 2003).
Two eye-tracking experiments on Chinese reflexives were designed to tease apart accounts assuming a syntactic-structure-based memory access mechanism from cue-based retrieval (implemented in ACT-R, as proposed by Lewis and Vasishth (2005)).
In both experiments, interference effects were observed from noun phrases which syntactically do not qualify as the reflexive's antecedent but match the animacy requirement the reflexive imposes on its antecedent. These results are interpreted as evidence against a purely syntactic-structure based memory access. However, the exact pattern of effects observed in the data is only partially compatible with the Lewis and Vasishth cue-based parsing model.
Therefore, an extension of the Lewis and Vasishth model is proposed. Two principles are added to the original model, namely 'cue confusion' and 'distractor prominence'.
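The core contrast between structure-based access and cue-based retrieval can be illustrated with a minimal sketch of ACT-R-style retrieval by feature matching, loosely inspired by Lewis and Vasishth (2005). The weights, feature names and toy items below are illustrative assumptions, not estimated model parameters, and retrieval noise is omitted for clarity.

```python
# Minimal sketch of cue-based retrieval with partial matching.
# All weights and toy items are illustrative assumptions.

def activation(item, cues, match_weight=1.0, mismatch_penalty=1.5, base=0.0):
    """Sum cue-match boosts and cue-mismatch penalties for one memory item."""
    score = base
    for feature, value in cues.items():
        if item.get(feature) == value:
            score += match_weight
        else:
            score -= mismatch_penalty
    return score

def retrieve(items, cues):
    """Return the item with the highest activation (noise-free for clarity)."""
    return max(items, key=lambda item: activation(item, cues))

# Toy memory at the reflexive: the structurally licensed antecedent vs.
# a distractor that matches only the animacy cue, not the structural cue.
antecedent = {"name": "subject", "animate": True, "c_commands": True}
distractor = {"name": "distractor", "animate": True, "c_commands": False}

cues = {"animate": True, "c_commands": True}
print(retrieve([antecedent, distractor], cues)["name"])  # prints "subject"
```

In this noise-free sketch the structural antecedent always wins, but the animacy-matching distractor still receives a partial-match boost relative to a fully mismatching item; once retrieval noise is added, that boost is what produces the interference effects described above, whereas a purely structure-based access mechanism would ignore the distractor entirely.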
Although interference effects are generally interpreted in favor of a content-addressable memory architecture, an alternative explanation for interference effects in reflexive processing has been proposed which, crucially, might reconcile interference effects with a structure-based account.
It has been argued that interference effects do not necessarily reflect cue-based retrieval interference in a content-addressable memory but might equally well be accounted for by interference effects which have already occurred at the moment of encoding the antecedent in memory (Dillon, 2011).
Three experiments (eye-tracking and self-paced reading) on German reflexives and Swedish possessives were designed to tease apart cue-based retrieval interference from encoding interference. The results of all three experiments suggest that there is no evidence that encoding interference affects the retrieval of a reflexive's antecedent.
Taken together, these findings suggest that the processing of reflexives can be explained with the same cue-based retrieval mechanism that has been invoked to explain syntactic dependency resolution in a range of other structures. This supports the view that the language processing system is located within a general cognitive architecture, with a general-purpose content-addressable working memory system operating on linguistic expressions.
Finally, two experiments (self-paced reading and eye-tracking) using Chinese relative clauses were conducted to determine the relative contribution to sentence processing difficulty of working-memory processes as compared to syntactic prediction during incremental parsing.
Chinese has the cross-linguistically rare property of being a language with subject-verb-object word order and pre-nominal relative clauses. This property leads expectation-based accounts and memory-based accounts to make opposing predictions about the relative processing difficulty of subject vs. object relatives.
Previous studies showed contradictory results, which has been attributed to different kinds of local ambiguities confounding the materials (Lin and Bever, 2011). The two experiments presented here are the first to compare Chinese relative clauses in syntactically unambiguous contexts.
The results of both experiments were consistent with the predictions of the expectation-based account of sentence processing but not with the memory-based account. From these findings, I conclude that any theory of human sentence processing needs to take into account the power of predictive processes unfolding in the human mind.
A lot has been published about the competencies needed by
students in the 21st century (Ravenscroft et al., 2012). However, equally
important are the competencies needed by educators in the new era
of digital education. We review the key competencies for educators in
light of the new methods of teaching and learning proposed by Massive
Open Online Courses (MOOCs) and their on-campus counterparts,
Small Private Online Courses (SPOCs).
Participants of this workshop will be confronted with examples of the considerable inconsistency of global Informatics education at lower secondary level. More importantly, they are invited to contribute actively to this issue in the form of short case studies of their countries.
Until now, very few countries have been successful in implementing Informatics or Computing at primary and lower secondary level. The spectrum from digital literacy to informatics, particularly as a discipline in its own right, has not achieved a breakthrough and seems to be underrepresented for these age groups. The goal of this workshop is not only to discuss the anamnesis and diagnosis of this fragmented field, but also to suggest viable forms of therapy in the form of educational standards. Making good practices in some countries visible and comparing successful approaches are rewarding tasks for this workshop.
Discussing and defining common educational standards at a transcontinental level for students aged 14 to 15, in a readable, assessable and acceptable form, should keep the participants of this workshop active beyond the limited time of the workshop itself.