Optical properties of modified diamondoids have been studied theoretically using vibrationally resolved electronic absorption, emission and resonance Raman spectra. A time-dependent correlation function approach has been used for electronic two-state models, comprising a ground state (g) and a bright, excited state (e), the latter determined from linear-response, time-dependent density functional theory (TD-DFT). The harmonic and Condon approximations were adopted. In most cases origin shifts, frequency alteration and Duschinsky rotation in excited states were considered. For other cases where no excited state geometry optimization and normal mode analysis were possible or desired, a short-time approximation was used. The optical properties and spectra have been computed for (i) a set of recently synthesized sp2/sp3 hybrid species with C=C double-bond-connected saturated diamondoid subunits, (ii) functionalized (mostly by thiol or thione groups) diamondoids and (iii) urotropine and other C-substituted diamondoids. The ultimate goal is to tailor optical and electronic features of diamondoids by electronic blending, functionalization and substitution, based on a molecular-level understanding of the ongoing photophysics.
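The time-dependent correlation function approach can be illustrated for the simplest case the abstract describes: a two-state model with a single displaced harmonic mode at zero temperature, where the absorption spectrum is the half-Fourier transform of a damped correlation function. A minimal sketch in Python (the Huang-Rhys factor S, the frequencies and the damping are illustrative placeholders; the actual work uses TD-DFT-derived multimode models with Duschinsky rotation):

```python
import numpy as np

def absorption_spectrum(omega_eg, omega_v, S, gamma, omega_grid):
    """Vibrationally resolved absorption (Condon approximation, T = 0) for a
    single displaced harmonic mode with Huang-Rhys factor S. The spectrum is
    the half-Fourier transform of the damped correlation function
    C(t) = exp[S (e^{-i w_v t} - 1)] * e^{-i w_eg t - gamma t}."""
    t, dt = np.linspace(0.0, 120.0, 60000, retstep=True)  # damping kills the tail
    corr = np.exp(S * (np.exp(-1j * omega_v * t) - 1.0)
                  - 1j * omega_eg * t - gamma * t)
    # sigma(w) ~ Re int_0^inf C(t) e^{i w t} dt (frequency prefactor omitted)
    return np.array([np.sum(np.real(corr * np.exp(1j * w * t))) * dt
                     for w in omega_grid])
```

For S = 1 this reproduces the Poissonian Franck-Condon progression, with the 0-0 and 0-1 peaks of equal weight.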
Analysis and modeling of transient earthquake patterns and their dependence on local stress regimes
(2015)
Investigation of earthquake triggering and the associated interactions, including aftershock triggering as well as induced seismicity, is important for seismic hazard assessment because of the destructive power of earthquakes. One approach to studying earthquake triggering and interactions is the use of statistical earthquake models, which are based on knowledge of the basic seismicity properties, in particular the magnitude distribution and the spatiotemporal properties of the triggered events.
In my PhD thesis I focus on some specific aspects of aftershock properties, namely the relative seismic moment release of the aftershocks with respect to the mainshocks; the spatial correlation between aftershock occurrence and fault deformation; and the influence of aseismic transients on the aftershock parameter estimation. For the analysis of aftershock sequences I choose a statistical approach, in particular the well-known Epidemic Type Aftershock Sequence (ETAS) model, which accounts for both background and triggered seismicity. For my specific purposes, I develop two ETAS model modifications in collaboration with Sebastian Hainzl. By means of this approach, I estimate the statistical aftershock parameters and perform simulations of aftershock sequences.
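The core of the ETAS model is its conditional intensity: a background rate plus power-law (Omori-Utsu) aftershock contributions from all past events. A minimal sketch (parameter names follow common usage and the values below are illustrative; the thesis modifications, e.g. for finite ruptures or time-dependent background, build on this form):

```python
import numpy as np

def etas_rate(t, catalog, mu, K, alpha, c, p, m0):
    """Conditional intensity of the standard ETAS model:
    lambda(t) = mu + sum over past events of
                K * exp(alpha * (m_i - m0)) * (t - t_i + c)**(-p),
    where `catalog` is a sequence of (t_i, m_i) pairs."""
    rate = mu
    for t_i, m_i in catalog:
        if t_i < t:  # only events before t contribute
            rate += K * np.exp(alpha * (m_i - m0)) * (t - t_i + c) ** (-p)
    return rate
```

Before the first event the intensity equals the background rate mu; after a mainshock it decays back towards mu following the Omori-Utsu power law.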
In the case of seismic moment release of aftershocks, I focus on the ratio of the cumulative seismic moment of the aftershocks to that of the mainshocks. Specifically, I investigate this ratio with respect to the focal mechanism of the mainshock and estimate an effective magnitude representing the cumulative aftershock energy (similar to Båth's law, which defines the average difference between the mainshock and the largest aftershock magnitudes). Furthermore, I compare the observed seismic moment ratios with the results of ETAS simulations. In particular, I test a restricted ETAS (RETAS) model, which is based on the results of a clock-advance model and static stress triggering.
In my second approach, to analyze spatial variations of the triggering parameters, I focus on aftershock sequences triggered by large mainshocks and study the distribution of the aftershock parameters and their spatial correlation with the coseismic/postseismic slip and interseismic locking. To invert for the aftershock parameters, I improve the modified ETAS (m-ETAS) model, which is able to take the extension of the mainshock rupture into account. I compare the results obtained by the classical approach with the output of the m-ETAS model.
My third approach is concerned with the temporal clustering of seismicity, which might be related not only to earthquake-earthquake interactions but also to a time-dependent background rate, potentially biasing the parameter estimation. Thus, my coauthors and I also applied a modification of the ETAS model that takes time-dependent background activity into account. It is applicable in two different cases: when an aftershock catalog is temporally incomplete, or when the background seismicity rate changes with time due to the presence of aseismic forcing.
An essential part of any research is testing the developed models on observational data sets appropriate for the particular study case. Therefore, in the case of seismic moment release I use the global seismicity catalog. For the spatial distribution of triggering parameters I exploit two aftershock sequences, of the Mw 8.8 2010 Maule (Chile) and the Mw 9.0 2011 Tohoku (Japan) mainshocks. In addition, I use published geodetic slip models of different authors. To test our ability to detect aseismic transients, my coauthors and I use data sets from Western Bohemia (Central Europe) and California.
Our results indicate that:
(1) The seismic moment of aftershocks relative to their mainshocks depends on the static stress changes and is maximal for normal, intermediate for thrust and minimal for strike-slip stress regimes; the RETAS model shows good correspondence with these observations.
(2) The spatial distribution of aftershock parameters, obtained by the m-ETAS model, shows anomalous values in areas of reactivated crustal fault systems. In addition, the aftershock density is found to be correlated with the coseismic slip gradient, afterslip, interseismic coupling and b-values. The aftershock seismic moment is positively correlated with the areas of maximum coseismic slip and interseismically locked areas. These correlations might be related to the stress level or to spatial variations of material properties.
(3) Ignoring aseismic transient forcing or temporal catalog incompleteness can lead to significant under- or overestimation of the underlying triggering parameters. When a catalog is complete, this method helps to identify aseismic sources.
In many procedures of seismic risk mitigation, ground motion simulations are needed to test systems or improve their effectiveness. For example, they may be used to estimate the level of ground shaking caused by future earthquakes. Good physical models for ground motion simulation are also thought to be important for hazard assessment, as they could close gaps in the existing datasets. Since the observed ground motion in nature shows a certain variability, part of which cannot be explained by macroscopic parameters such as the magnitude or position of an earthquake, a good physical model should not only be able to produce one single seismogram, but also to reproduce this natural variability.
In this thesis, I develop a method to model realistic ground motions in a way that is computationally simple to handle, permitting multiple scenario simulations. I focus on two aspects of ground motion modelling. First, I use deterministic wave propagation for the whole frequency range – from static deformation to approximately 10 Hz – but account for source variability by implementing self-similar slip distributions and rough fault interfaces. Second, I scale the source spectrum so that the modelled waveforms represent the correct radiated seismic energy. With this scaling I verify whether the energy magnitude is suitable as an explanatory variable, which characterises the amount of energy radiated at high frequencies – the advantage of the energy magnitude being that it can be deduced from observations, even in real-time.
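Self-similar slip distributions of the kind mentioned above are commonly generated by imposing a power-law fall-off on the wavenumber spectrum of a random field. A sketch of that standard recipe (grid size, decay exponent and scaling are illustrative, not the exact parameterisation used in the thesis):

```python
import numpy as np

def self_similar_slip(nx, nz, mean_slip=1.0, decay=2.0, seed=0):
    """Random slip distribution with a power-law (~k**-decay) wavenumber
    spectrum, a common recipe for 'self-similar' source heterogeneity
    (decay=2 gives the classic k^-2 spectrum). Returns a non-negative
    nx-by-nz array scaled to the requested mean slip."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(nx)[:, None]
    kz = np.fft.fftfreq(nz)[None, :]
    k = np.sqrt(kx**2 + kz**2)
    k[0, 0] = np.inf                          # suppress the zero-wavenumber mode
    phase = np.exp(2j * np.pi * rng.random((nx, nz)))  # random phases
    slip = np.real(np.fft.ifft2(phase * k ** (-decay)))
    slip -= slip.min()                        # enforce non-negative slip
    slip *= mean_slip / slip.mean()           # rescale to the target mean slip
    return slip
```

The same construction, applied to the fault geometry instead of the slip amplitude, yields the rough fault interfaces referred to in the text.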
Applications of the developed method for the 2008 Wenchuan (China) earthquake, the 2003 Tokachi-Oki (Japan) earthquake and the 1994 Northridge (California, USA) earthquake show that the fine source discretisations combined with the small scale source variability ensure that high frequencies are satisfactorily introduced, justifying the deterministic wave propagation approach even at high frequencies. I demonstrate that the energy magnitude can be used to calibrate the high-frequency content in ground motion simulations.
Because deterministic wave propagation is applied to the whole frequency range, the simulation method permits the quantification of the variability in ground motion due to parametric uncertainties in the source description. A large number of scenario simulations for an M=6 earthquake show that the roughness of the source as well as the distribution of fault dislocations have a minor effect on the simulated variability by diminishing directivity effects, while hypocenter location and rupture velocity more strongly influence the variability. The uncertainty in energy magnitude, however, leads to the largest differences of ground motion amplitude between different events, resulting in a variability which is larger than the one observed.
For the presented approach, this dissertation shows (i) the verification of the computational correctness of the code, (ii) the ability to reproduce observed ground motions and (iii) the validation of the simulated ground motion variability. Those three steps are essential to evaluate the suitability of the method for means of seismic risk mitigation.
Methicillin-resistant Staphylococcus aureus (MRSA) is one of the most important antibiotic-resistant pathogens in hospitals and the community. Recently, a new generation of MRSA, the so-called livestock-associated (LA) MRSA, has emerged, occupying food-producing animals as a new niche. LA-MRSA can regularly be isolated from economically important livestock species as well as from the corresponding meats. The present thesis takes a methodological approach to confirm the hypothesis that LA-MRSA are transmitted along the pork, poultry and beef production chains from animals at the farm to meat on the consumer's table. To this end, two new concepts were developed, adapted to the differing data sets.
A mathematical model of the pig slaughter process was developed that simulates the change in MRSA carcass prevalence during slaughter, with special emphasis on identifying process steps critical for MRSA transmission. Based on prevalences as the sole input variables, the model framework is able to estimate the average value range of both the MRSA elimination and the contamination rate of each of the slaughter steps. These rates are then used to set up a Monte Carlo simulation of the slaughter process chain. The model concludes that, regardless of the initial extent of MRSA contamination, low outcome prevalences ranging between 0.15 and 1.15 % can be achieved among carcasses at the end of slaughter. Thus, the model demonstrates that the standard procedure of pig slaughtering in principle includes process steps with the capacity to limit MRSA cross-contamination. Scalding and singeing were identified as the critical process steps for a significant reduction of superficial MRSA contamination.
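The prevalence bookkeeping behind such a Monte Carlo simulation can be sketched as follows: each process step is reduced to an elimination rate e (a positive carcass turns negative) and a contamination rate c (a negative carcass turns positive). The rates and step structure below are illustrative placeholders, not the values estimated in the thesis:

```python
import random

def simulate_chain(p0, steps, n_carcasses=10000, seed=1):
    """Monte Carlo sketch of MRSA carcass prevalence along a slaughter line.
    `steps` is a list of (elimination_rate, contamination_rate) pairs, one per
    process step; returns the final prevalence among n_carcasses carcasses."""
    rng = random.Random(seed)
    status = [rng.random() < p0 for _ in range(n_carcasses)]  # True = positive
    for e, c in steps:
        status = [(rng.random() >= e) if s else (rng.random() < c)
                  for s in status]
    return sum(status) / n_carcasses
```

With a few strongly eliminating steps (e.g. scalding and singeing), the outcome prevalence converges to a low value regardless of the initial contamination, mirroring the model's conclusion.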
In the course of the German national monitoring program for zoonotic agents, MRSA prevalence and typing data are regularly collected covering the key steps of different food production chains. A new statistical approach is proposed for analyzing this cross-sectional set of MRSA data with the aim of revealing potential farm-to-fork transmission. For this purpose, chi-squared statistics were combined with the calculation of the Czekanowski similarity index to compare the distributions of strain-specific characteristics between the samples from the farm, carcasses after slaughter, and meat at retail. The method was applied to the turkey and veal production chains, and the consistently high degrees of similarity revealed between all sample pairs indicate MRSA transmission along the chain.
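The Czekanowski similarity index used here compares two frequency profiles (e.g. strain-type counts at farm versus at retail); computed on profiles normalised to relative frequencies, 2·Σmin/(Σx+Σy) reduces to the sum of the element-wise minima. A minimal sketch (the count vectors in the usage example are invented for illustration):

```python
def czekanowski(x, y):
    """Czekanowski (proportional) similarity between two count profiles:
    2*sum(min)/(sum(x)+sum(y)), evaluated on relative frequencies so that
    unequal sample sizes do not dominate. Returns a value in [0, 1]."""
    sx, sy = sum(x), sum(y)
    px = [v / sx for v in x]   # relative frequencies, profile 1
    py = [v / sy for v in y]   # relative frequencies, profile 2
    return sum(min(a, b) for a, b in zip(px, py))
```

Identical relative distributions give 1.0, disjoint ones give 0.0; consistently high values between chain stages are what the approach reads as a transmission signal.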
As the proposed methods are not specific to process chains or pathogens they offer a broad field of application and extend the spectrum of methods for bacterial transmission assessment.
In recent decades, a trend towards corporatisation has been observed in many municipalities. A large share of public service providers are now run as companies under private law in a competitive environment. While many researchers study hive-offs in the form of subordinate agencies at the federal level and describe this wave of reform as a de facto process of autonomisation, only a few studies explicitly address autonomisation tendencies at the municipal level. Empirical evidence on the governance of municipal corporate holdings is therefore lacking.
This thesis examines the governance arrangements of large German cities for the first time from the perspective of the governed. The aim of the study is to identify tendencies towards greater managerial flexibility in majority municipally owned companies and to identify explanatory factors for them. The research question is: which instrumental and relational factors influence management autonomy in majority-owned municipal companies?
Of particular interest is the influence that municipalities exert on the various fields of activity of their hived-off entities. Almost no empirical evidence on these company-specific issues exists in Germany, and very little internationally. To answer the research question, the author developed an analytical framework based on transaction cost theory and social exchange theory. The hypotheses were tested empirically with a large-scale survey of 243 companies in the 39 largest German cities.
The results yield several empirical findings. First, factor analysis identified four independent dimensions of management autonomy in municipal companies: personnel autonomy, general management, pricing autonomy and strategic issues. While municipalities grant their holdings a high degree of personnel autonomy, strategic investment decisions in particular, such as financial stakes in subsidiaries, large projects, diversification decisions or borrowing, are subject to strong political influence.
Second, a change of legal form combined with placement in a competitive environment (also known as corporatisation) leads above all to greater flexibility in personnel and pricing policy, but has little effect on the other dimensions of management autonomy, general management and strategic decisions. Municipalities thus retain their ability to influence important corporate matters of their holdings, even in the case of formal privatisation.
Finally, transaction-cost-based and relational factors can be drawn on in a complementary way to explain the autonomy dimensions. Among the transaction-specific variables, perceived competition in the sector, measurability of performance, sector variables, the number of politicians on the supervisory board and the governance mechanisms employed are the most influential. Among the relational factors, mutual trust, supervisory board effectiveness, information exchange, role conflicts, role ambiguity and the manager's experience in the sector prevail.
Many chemical reactions in biological cells occur at very low concentrations of the constituent molecules. For instance, transcriptional gene regulation is often controlled by poorly expressed transcription factors, such as the E. coli lac repressor, which is present in only a few tens of copies. Here we study the effects of inherent concentration fluctuations of substrate molecules on the seminal Michaelis-Menten scheme of biochemical reactions. We present a universal correction to the Michaelis-Menten equation for the reaction rates. The relevance and validity of this correction for enzymatic reactions and intracellular gene regulation is demonstrated. Our analytical theory and simulation results confirm that the proposed variance-corrected Michaelis-Menten equation predicts the rate of reactions with remarkable accuracy even in the presence of large non-equilibrium concentration fluctuations. The major advantage of our approach is that it involves only the mean and variance of the substrate-molecule concentration. Our theory is therefore accessible to experiments and not specific to the exact source of the concentration fluctuations.
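The idea of a rate correction involving only the mean and variance of the substrate concentration can be illustrated with a generic second-order Taylor expansion of the hyperbolic Michaelis-Menten rate around the mean substrate level. This form is an illustration of a variance-corrected rate, not necessarily the exact expression derived in the paper:

```python
def mm_rate(S_mean, S_var, Vmax, Km):
    """Michaelis-Menten rate with a generic variance correction,
    E[v] ~ v(<S>) + 0.5 * v''(<S>) * Var(S), where v(S) = Vmax*S/(Km+S).
    Since v is concave, any substrate-concentration variance lowers the
    effective rate relative to the classical mean-field prediction."""
    v = Vmax * S_mean / (Km + S_mean)
    v2 = -2.0 * Vmax * Km / (Km + S_mean) ** 3   # second derivative of v(S)
    return v + 0.5 * v2 * S_var
```

With zero variance this reduces to the classical Michaelis-Menten equation, and any positive variance lowers the predicted rate.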
Background
Generating percentile values is helpful for the identification of children with specific fitness characteristics (i.e., low or high fitness level) to set appropriate fitness goals (i.e., fitness/health promotion and/or long-term youth athlete development). Thus, the aim of this longitudinal study was to assess physical fitness development in healthy children aged 9–12 years and to compute sex- and age-specific percentile values.
Methods
Two hundred and forty children (88 girls, 152 boys) participated in this study and were tested for their physical fitness. Physical fitness was assessed using the 50-m sprint test (i.e., speed), the 1-kg ball push test and the triple hop test (i.e., upper- and lower-extremity muscular power), the stand-and-reach test (i.e., flexibility), the star run test (i.e., agility), and the 9-min run test (i.e., endurance). Age- and sex-specific percentile values (i.e., P10 to P90) were generated using the Lambda, Mu, and Sigma method. Adjusted (for change in body weight, height, and baseline performance) age- and sex-differences, as well as the interactions thereof, were expressed by calculating effect sizes (Cohen’s d).
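The effect-size measure named in the Methods, Cohen's d, is the mean difference between two groups divided by their pooled standard deviation. A minimal sketch (the sample values in the test are invented for illustration):

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d between two independent groups, using the pooled standard
    deviation computed from unbiased (n-1) sample variances."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled
```

By common convention, |d| around 0.2, 0.5 and 0.8 are read as small, medium and large effects, which is how the d values in the Results below are to be interpreted.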
Results
Significant main effects of Age were detected for all physical fitness tests (d = 0.40–1.34), whereas significant main effects of Sex were found for upper-extremity muscular power (d = 0.55), flexibility (d = 0.81), agility (d = 0.44), and endurance (d = 0.32) only. Further, significant Sex by Age interactions were observed for upper-extremity muscular power (d = 0.36), flexibility (d = 0.61), and agility (d = 0.27) in favor of girls. Both linear and curvilinear shapes were found for the percentile curves across the fitness tests. Accelerated (curvilinear) improvements were observed for upper-extremity muscular power (boys: 10–11 yrs; girls: 9–11 yrs), agility (boys: 9–10 yrs; girls: 9–11 yrs), and endurance (boys: 9–10 yrs; girls: 9–10 yrs). Tabulated percentiles for the 9-min run test indicated that running distances between 1,407–1,507 m, 1,479–1,597 m, 1,423–1,654 m, and 1,433–1,666 m in 9- to 12-year-old boys and 1,262–1,362 m, 1,329–1,434 m, 1,392–1,501 m, and 1,415–1,526 m in 9- to 12-year-old girls correspond to a “medium” fitness level (i.e., P40 to P60) in this population.
Conclusions
The observed differences in physical fitness development between boys and girls illustrate that age- and sex-specific maturational processes might have an impact on the fitness status of healthy children. Our statistical analyses revealed linear (e.g., lower-extremity muscular power) and curvilinear (e.g., agility) models of fitness improvement with age, which is indicative of timed and capacity-specific fitness development patterns during childhood. Lastly, the provided age- and sex-specific percentile values can be used by coaches for talent identification and by teachers for rating/grading of children’s motor performance.
Fluid force microscopy combines the positional accuracy and force sensitivity of an atomic force microscope (AFM) with nanofluidics via a microchanneled cantilever. However, adequate loading and cleaning procedures for such AFM micropipettes are required for various application situations. Here, a new frontloading procedure is described for an AFM micropipette functioning as a force- and pressure-controlled microscale liquid dispenser. This frontloading procedure seems especially attractive when working with target substances of high cost or low available amount. The AFM micropipette could be filled from the tip side with liquid from a previously applied droplet with a volume of only a few μL using a short low-pressure pulse. The liquid-loaded AFM micropipettes could then be applied in experiments in air or in liquid environments. AFM micropipette frontloading was evaluated with the well-known organic fluorescent dye rhodamine 6G and the AlexaFluor647-labeled antibody goat anti-rat IgG as an example of a larger biological compound. After micropipette usage, specific cleaning procedures were tested. Furthermore, a storage method is described with which the AFM micropipettes can be stored for a few hours up to several days without drying out or clogging of the microchannel. In summary, the rapid, versatile and cost-efficient frontloading and cleaning procedure for the repeated usage of a single AFM micropipette is beneficial for various application situations, from specific surface modifications through to local manipulation of living cells, and provides simplified and faster handling for already established fluid force microscopy experiments.
Background
Body image distortion is highly prevalent among overweight individuals. Whilst there is evidence that body-dissatisfied women and those suffering from disordered eating show a negative attentional bias towards their own unattractive body parts and others’ attractive body parts, little is known about visual attention patterns in the area of obesity and with respect to males. Since eating disorders and obesity share common features in terms of distorted body image and body dissatisfaction, the aim of this study was to examine whether overweight men and women show a similar attentional bias.
Methods/Design
We analyzed eye movements in 30 overweight individuals (18 females) and 28 normal-weight individuals (16 females) with respect to the participants’ own pictures as well as gender- and BMI-matched control pictures (front and back view). Additionally, we assessed body image and disordered eating using validated questionnaires.
Discussion
The overweight sample rated their own body as less attractive and showed a more disturbed body image. Contrary to our assumptions, they focused significantly longer on attractive compared to unattractive regions of both their own and the control body. For one’s own body, this was more pronounced for women. A higher weight status and more frequent body checking predicted attentional bias towards attractive body parts. We found that overweight adults exhibit an unexpected and stable pattern of selective attention, with a distinctive focus on their own attractive body regions despite higher levels of body dissatisfaction. This positive attentional bias may either be an indicator of a more pronounced pattern of attentional avoidance or a self-enhancing strategy. Further research is warranted to clarify these results.
The rise of evolutionary novelties is one of the major drivers of evolutionary diversification. African weakly-electric fishes (Teleostei, Mormyridae) have undergone an outstanding adaptive radiation, putatively owing to their ability to communicate through species-specific Electric Organ Discharges (EODs) produced by a novel, muscle-derived electric organ. Indeed, such EODs might have acted as effective pre-zygotic isolation mechanisms, hence favoring ecological speciation in this group of fishes. Despite the evolutionary importance of this organ, genetic investigations regarding its origin and function have remained limited.
The ultimate aim of this study is to better understand the genetic basis of EOD production by exploring the transcriptomic profiles of the electric organ and of its ancestral counterpart, the skeletal muscle, in the genus Campylomormyrus. After establishing a set of reference transcriptomes using “Next-Generation Sequencing” (NGS) technologies, I performed in silico analyses of differential expression in order to identify sets of genes that might be responsible for the functional differences observed between these two tissues. The results of these analyses indicate that: i) the loss of contractile activity and the decoupling of the excitation-contraction processes are reflected by the down-regulation of the corresponding genes in the electric organ; ii) the metabolic activity of the electric organ might be specialized towards the production and turnover of membrane structures; iii) several ion channels are highly expressed in the electric organ in order to increase excitability; and iv) several myogenic factors might be down-regulated by transcription repressors in the electric organ.
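The in silico differential-expression step can be illustrated with a toy fold-change screen on normalized counts between the two tissues. The gene names and counts below are invented for illustration; real NGS pipelines add replicate-based statistics and multiple-testing correction:

```python
import math

def diff_expressed(counts_eo, counts_sm, min_log2fc=1.0, pseudo=1.0):
    """Toy differential-expression screen between electric organ (EO) and
    skeletal muscle (SM) normalized counts: returns genes whose absolute
    log2 fold change (with a pseudocount to avoid division by zero)
    exceeds the threshold, mapped to their signed log2 fold change."""
    hits = {}
    for gene in counts_eo:
        lfc = math.log2((counts_eo[gene] + pseudo) / (counts_sm[gene] + pseudo))
        if abs(lfc) >= min_log2fc:
            hits[gene] = lfc
    return hits
```

In such a screen, EO-up-regulated genes (e.g. ion channels) come out with positive log2 fold changes and down-regulated contraction genes with negative ones.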
A secondary task of this study is to improve the genus-level phylogeny of Campylomormyrus by applying new methods of inference based on the multispecies coalescent model, in order to reduce the conflict among gene trees and to reconstruct a phylogenetic tree as close as possible to the actual species tree. Using one mitochondrial and four nuclear markers, I was able to resolve the phylogenetic relationships among most of the currently described Campylomormyrus species. Additionally, I applied several coalescent-based species delimitation methods in order to test the hypothesis that putatively cryptic species, distinguishable only by their EOD, belong to independently evolving lineages. The results of this analysis were further validated by investigating patterns of diversification at 16 microsatellite loci. The results suggest the presence of a new, yet undescribed species of Campylomormyrus.
This thesis investigates temporal and aspectual reference in the typologically unrelated African languages Hausa (Chadic, Afro–Asiatic) and Medumba (Grassfields Bantu).
It argues that Hausa is a genuinely tenseless language and compares the interpretation of temporally unmarked sentences in Hausa to that of morphologically tenseless sentences in Medumba, where tense marking is optional and graded.
The empirical behavior of the optional temporal morphemes in Medumba motivates an analysis as existential quantifiers over times and thus provides new evidence suggesting that languages vary in whether their (past) tense is pronominal or quantificational (see also Sharvit 2014).
The thesis proposes for both Hausa and Medumba that the alleged future tense marker is a modal element that obligatorily combines with a prospective future shifter (which is covert in Medumba). Cross-linguistic variation in whether or not a future marker is compatible with non-future interpretation is proposed to be predictable from the aspectual architecture of the given language.
Handbuch Textannotation
(2015)
The Potsdam Commentary Corpus is a collection of newspaper texts belonging to the genre ‘commentary'. The publicly available part consists of 175 texts from the Märkische Allgemeine Zeitung, which were manually annotated for syntax, coreference, connectives and rhetorical structure. Further annotation layers will be added in future versions of the corpus. This book contains the annotation guidelines that underlay the preparation of the public part of the corpus, as well as of other parts in which additional annotation layers were explored. Most of the guidelines should also be applicable to similar text genres and to other languages.
Reconstructing climate from the Dead Sea sediment record using high-resolution micro-facies analyses
(2015)
The sedimentary record of the Dead Sea is a key archive for reconstructing climate in the eastern Mediterranean region, as it stores the environmental and tectonic history of the Levant for the entire Quaternary. Moreover, the lake is located at the boundary between Mediterranean sub-humid to semi-arid and Saharo-Arabian hyper-arid climates, so that even small shifts in atmospheric circulation are sensitively recorded in the sediments. This DFG-funded doctoral project was carried out within the ICDP Dead Sea Deep Drilling Project (DSDDP) that intended to gain the first long, continuous and high-resolution sediment core from the deep Dead Sea basin. The drilling campaign was performed in winter 2010-11 and more than 700 m of sediments were recovered. The main aim of this thesis was (1) to establish the lithostratigraphic framework for the ~455 m long sediment core from the deep Dead Sea basin and (2) to apply high-resolution micro-facies analyses for reconstructing and better understanding climate variability from the Dead Sea sediments.
Addressing the first aim, the sedimentary facies of the ~455 m long deep-basin core 5017-1 were described in great detail and characterised through continuous overview-XRF element scanning and magnetic susceptibility measurements. Three facies groups were classified: (1) the marl facies group, (2) the halite facies group and (3) a group involving different expressions of massive, graded and slumped deposits including coarse clastic detritus. Core 5017-1 encompasses a succession of four main lithological units. Based on first radiocarbon and U-Th ages and the correlation of these units to on-shore stratigraphic sections, the record comprises the last ca 220 ka, i.e. the upper part of the Amora Formation (parts of or the entire penultimate interglacial and glacial), the last interglacial Samra Fm. (~135-75 ka), the last glacial Lisan Fm. (~75-14 ka) and the Holocene Ze’elim Formation. A major advancement of this record is that, for the first time, transitional intervals were recovered that are missing in the exposed formations and can now be studied in great detail.
Micro-facies analyses involve a combination of high-resolution microscopic thin section analysis and µXRF element scanning supported by magnetic susceptibility measurements. This approach allows identifying and characterising micro-facies types, detecting event layers and reconstructing past climate variability with up to seasonal resolution, given that the analysed sediments are annually laminated. Within this thesis, micro-facies analyses, supported by further sedimentological and geochemical analyses (grain size, X-ray diffraction, total organic carbon and calcium carbonate contents) and palynology, were applied for two time intervals:
(1) The early last glacial period ~117-75 ka was investigated focusing on millennial-scale hydroclimatic variations and lake level changes recorded in the sediments. Thereby, distinguishing six different micro-facies types with distinct geochemical and sedimentological characteristics allowed estimating relative lake level and water balance changes of the lake. Comparison of the results to other records in the Mediterranean region suggests a close link of the hydroclimate in the Levant to North Atlantic and Mediterranean climates during the time of the build-up of Northern hemisphere ice sheets during the early last glacial period.
(2) A mostly annually laminated late Holocene section (~3700-1700 cal yr BP) was analysed in unprecedented detail through a multi-proxy, inter-site correlation approach combining a shallow-water core (DSEn) and its deep-basin counterpart (5017-1). Within this study, a time series of erosion and dust deposition events spanning ca 1500 years was established and anchored to the absolute time-scale through 14C dating and age modelling. A particular focus of this study was the characterisation of two dry periods, from ~3500 to 3300 and from ~3000 to 2400 cal yr BP, respectively. A major outcome was the coincidence of the latter dry period with a period of moist and cold climate in Europe related to a Grand Solar Minimum around 2800 cal yr BP, and an increase in flood events despite overall dry conditions in the Dead Sea region during that time. These contrasting climate signatures in Europe and at the Dead Sea were likely linked through complex teleconnections of atmospheric circulation, causing a change in synoptic weather patterns in the eastern Mediterranean.
In summary, this doctorate establishes the lithostratigraphic framework of a unique long sediment core from the deep Dead Sea basin, which serves as the basis for any further high-resolution investigations of this core. Two case studies demonstrate that micro-facies analyses are an invaluable tool for understanding the depositional processes in the Dead Sea and for deciphering past climate variability in the Levant on millennial to seasonal time-scales. Hence, this work adds important knowledge towards establishing the deep Dead Sea record as a key climate archive of supra-regional significance.
During the First World War, the German playwright and prose writer Alfred Brust came into contact with Hasidic Judaism in the Lithuanian cities of Kowno and Wilna. Like Sammy Gronemann, Arnold Zweig and other authors employed in the censorship department of ‘Ober Ost’, Brust was deeply impressed by its archaic culture, which seemed to him to stand in contrast to the decadence of the modern world. Influenced by the Expressionist topos of an inner transformation of man, he developed in the following years the idea of a translation of spiritual and moral values from the Eastern European Jews to the (German) East Prussians. For him, the “Jews are the nobility of the movement”. Brust was in contact with Richard Dehmel, Hugo von Hofmannsthal, Florens Christian Rang and Martin Buber; even Walter Benjamin took an interest in him for a time.
When Jesus Spoke Yiddish
(2015)
In this paper, I wish to present some evidence from a Yiddish manuscript of the “Toledot Yeshu” which has not yet been the object of research: MS. Günzburg 1730, kept in the Russian State Library in Moscow and dated to the 17th century. The manuscript is part of the so-called ‘Herode-tradition’ of the “Toledot Yeshu”. This means that the Yiddish manuscript is connected to the version printed in Hebrew, accompanied by a Latin translation, by the Swiss pastor and theologian Johann Jacob Uldrich (Huldricus, 1683–1731) in Leiden in 1705, bearing the title “Historia Jeschuae Nazareni”. Given the uncertainty about the exact dating of the Yiddish manuscript, a comparison between the Hebrew and the Yiddish versions still allows some remarks on the characteristics of the Yiddish version and raises some questions about the transmission and reception of this challenging and intriguing text.
Drawing on the distinction between present time and memory, this contribution questions apparent unities in the lyric work of Nelly Sachs. A nexus between the figure of Jesus and the Shoah is unmistakable in the poems; the extent to which a correlation exists between the two discourses, between the perennial suffering of the victims and the man martyred on the cross, is the subject of this study. In the lyric oeuvre, Jesus is refigured on both the temporal and the thematic level: not as a Christological-dogmatic redeemer figure, but as the agonizingly martyred man whose cry generates a new legibility in the 20th century: as a suffering fellow brother. The historical index of the Jesus figure becomes manifest in the Passion scenery, which is, however, transformatively modified.
Carl Einstein’s drama “Die schlimme Botschaft” (1921), which earned him a trial for blasphemy, stands, with its critical stance towards the Christian doctrine of crucifixion and resurrection, in line with a contemporary discussion that Jewish theologians conducted with Adolf Harnack’s project of modernizing Christianity. Jewish theologians such as Joseph Eschelbacher outlined a Judaism which, contrary to Harnack’s portrayal, already fulfilled that modern form of religion which Harnack was only striving to attain for Christianity. Einstein’s “standpoint deviating from dogma”, which a Catholic clergyman held against him in court as the standpoint of a dissident Jew, could well have counted as a Jewish writer’s contribution to the apologetics of Judaism, had Einstein not himself seen his fellow Jews in league with the bourgeois ideology of the capitalist market that he rejected.
Already in the Enlightenment debate over Jesus’ Judaism there are tendencies to conceive of it as the starting point of a universalizing mediation between Judaism and Christianity, as well as objections, mostly from the Jewish side, against such a universalism, which was said to deny decisive differences and the history of exclusionary violence. After a brief look at the Jewish Christ in Heine, the article analyzes Stefan Zweig’s drama “Jeremias”, written in the horizon of the First World War and Zionist debates and published in 1917. The superimposition of the prophet figure Jeremiah onto Christ, central to the conception of the play, brings out, so the thesis, the problematic logic of sacrificial substitution which, with the notion of a community constituted by (sacrificial) blood, shifts the question of guilt (the murder of Christ) onto others. In contrast, the play outlines another cosmopolitanism in motifs of the irrupting ‘terror’, of foreignness within one’s own, of exile and world-wandering.
Background: In Germany, acute myocardial infarction (MI) is one of the most frequent causes of death. Divergent care structures are suspected as the cause of regional differences in mortality rates. The aim of the study was to evaluate this question using anonymized health-insurance claims data.
Methods: Standardized hospitalization rates as well as in-hospital and one-year mortality rates after MI were determined from anonymized data of a statutory health insurance for the year 2012 and the federal states of Berlin, Brandenburg and Mecklenburg-Western Pomerania (n=1,387,084, 46.3% male, 60.9 ± 18.2 years). Furthermore, predictive factors for one-year mortality, for the performance of invasive procedures and for guideline-conform pharmacotherapeutic secondary prevention were analyzed.
Results: 6,733 patients (73.7 ± 13.0 years, 56.7% male) were identified. Although a higher hospitalization rate was found for Berlin than for Mecklenburg-Western Pomerania, no significantly divergent in-hospital and one-year mortality rates were observed between the federal states. Coronary angiography (OR: 0.42 [0.35-0.51]) and guideline-conform pharmacotherapy (OR: 0.14 [0.12-0.17]) were associated with lower one-year mortality. The performance of coronary angiography and guideline-conform pharmacotherapy after myocardial infarction were, however, determined primarily by age and sex, not by federal state.
Conclusion: Based on the present data, regionally divergent in-hospital and post-infarction care at the federal-state level cannot be demonstrated.
In his literary oeuvre, the German-Jewish poet and anarchist Erich Mühsam (1878–1934) repeatedly drew on the figure of Jesus. The attempt to reconstruct and interpret this relationship brings into view influences from the milieu of Berlin’s literary avant-garde and the Lebensreform movement. In an entanglement of quasi-religious renaissance and the spirit of departure towards the ‘New Man’, Jesus was a popular motif in these circles around the turn of the century and, mediated through friendships and way stations, left its mark on Mühsam’s writing as well. Alongside Cain, it is Jesus who in Mühsam’s work becomes the luminous figure revolutionizing the fifth estate. Into this reference plays Mühsam’s conflict-ridden relationship with his father and the latter’s assimilation to the German bourgeoisie, which by no means fully lived the enlightened values it nominally invoked: instead of freedom and equality, new constraints, limitations and resentments prevailed, as Mühsam observed. Through Gustav Landauer he found not only anarchism but also a new language for the buried tradition, and thereby imprinted on his Jesus figure his understanding of a rebellious Judaism, which Mühsam himself embodied as a revolutionary.
This dissertation analyzes the mentalities of the bourgeois inhabitants of Bogotá in the 19th century. Insights into the Bogotans’ perspectives on themselves and on others are gained through the analysis of travel literature. Methodologically, the work rests on a comparison of 19th-century European travel reports, 16th-century chronicles, and two Colombian novels from the early 20th century. The texts are treated historiographically; although they belong to different literary genres, they share an autobiographical character. From the experiences and thoughts of the travelers, the effects of geographical and social isolation are thematized in particular, as well as the influence of political and religious discourses on the formation of bourgeois thought.
Jakob Wassermann’s novel “Die Juden von Zirndorf” (1897), a portrait of German-Jewish life and mores, is carried by diverse messianic notions, only some of which can be traced back to the Jewish tradition. Accordingly, the Jewish boy Agathon, the main figure of the novel’s second part, unites traits of both the Jewish Messiah and the Christian Savior. The large number of allusions to the figure of Jesus and the aesthetic functionalization of Christological motifs in the novel are to be situated in the context of that ‘reclamation of Jesus into Judaism’ which began in the 19th century with the Wissenschaft des Judentums. Against the background of contemporary interpretations of Jesus, this contribution examines both the traces of Jesus in the characterization of Agathon and the appropriation of Christological motifs, above all in the second part of the novel. Particular reference is made to Lou von Salomé’s essay “Jesus der Jude” (1896) as a possible source.
The Jewish artists Maurycy Gottlieb (1856–1879) and Marc Chagall (1887–1985) depicted Jesus as a devout Jew, embedded in the Jewish environment of his time. For them, the Jewish Jesus becomes an engagement with the Jewish roots of Christianity and with antisemitism, and his martyrdom a symbol of Jewish suffering. This essay examines how their paintings convey these messages and analyzes continuities and ruptures across a long period of time.
In 1945, Zinovii Shenderovich Tolkatchev (1903–1977), a Soviet artist of Jewish origin, created a striking series of five images entitled “Jesus in Majdanek”. The series was the culmination of Tolkatchev’s intensive preoccupation with what he, as a Red Army soldier, endured upon taking part in the liberation of the concentration camps Majdanek and Auschwitz. Shocked by the sights he witnessed, he depicted Jesus as an actual camp inmate wearing a striped uniform marked with every possible sign of defamation: the Jewish yellow star, the red triangle of political prisoners and an individual prison number; the numerical tattoo on his forearm can also be seen. The different stages of camp life are portrayed as the traditional Passion of Christ. While showing actual situations, the artist drew on the well-known European Renaissance paintings canonically depicting Jesus’ suffering. The article places Tolkatchev’s series in a broader cultural and visual context by exploring the development of the ‘historical Jesus’ in 19th-century European thought and Russian realist art, and by examining the impact of the German avant-garde. In doing so, it offers a deeper understanding of the universal message of Tolkatchev’s works.
Messianic Jews are Jewish individuals who syncretically accept both the messianic character of Jesus and the ritual cultic practices provided by traditional Judaism. The present article examines the emergence of this marginal syncretic movement in contemporary Israel, and maintains that it represents a radical development in the bimillenary history of Jewish-Christian relations. This article offers a general introduction to the notion of Jewish-Christian identity, a brief history of the first group of Messianic Jews in the Land of Israel, the cultural influence and religious syncretism of the Messianic Jews in modern Israel, and, finally, the implication that Messianic Judaism is supposed to become the new paradigm within the various branches of Judaism.
The non-linear behaviour of atmospheric dynamics is not well understood and makes the evaluation and usage of regional climate models (RCMs) difficult. These non-linearities induce chaos and internal variability (IV) within the RCMs, leading to a sensitivity of RCMs to their initial conditions (IC). The IV is the ability of RCMs to realise different solutions in simulations that differ in their IC but have the same lower and lateral boundary conditions (LBC); it can hence be defined as the across-member spread between the ensemble members.
For the investigation of the IV and of the dynamical and diabatic contributions generating it, four ensembles of RCM simulations are performed with the atmospheric regional model HIRHAM5. The integration area is the Arctic and each ensemble consists of 20 members. The ensembles cover the period from July to September for the years 2006, 2007, 2009 and 2012. The ensemble members have the same LBC and differ only in their IC. The different IC are generated by successively shifting the initialisation time by six hours: within each ensemble the first simulation starts on 1st July at 00 UTC, the last simulation starts on 5th July at 18 UTC, and each simulation runs until 30th September. The analysed period ranges from 6th July to 30th September, the period covered by all ensemble members. The model runs without any nudging, allowing each simulation to develop freely and thus capturing the full internal variability within the HIRHAM5.
As a measure of the model-generated IV, the across-member standard deviation and the across-member variance are used, and the dynamical and diabatic processes influencing the IV are estimated by applying a diagnostic budget study for the IV tendency of the potential temperature developed by Nikiema and Laprise [2010] and Nikiema and Laprise [2011]. The diagnostic budget study is based on the first law of thermodynamics for potential temperature and the mass-continuity equation. The resulting budget equation reveals seven contributions to the potential temperature IV tendency.
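As an illustration of the spread measure described above, the across-member standard deviation can be computed at every grid point of an ensemble field. This is a generic sketch: the variable names and the toy ensemble are invented for this example and are not taken from the thesis.

```python
import numpy as np

def across_member_variability(theta):
    """Across-member spread of an ensemble field.

    theta: array of shape (members, ...), e.g. potential temperature for
    each ensemble member on a common grid. Returns the across-member
    standard deviation at each grid point, i.e. the spread of the
    members about the ensemble mean.
    """
    ensemble_mean = theta.mean(axis=0)               # <theta> over members
    return np.sqrt(((theta - ensemble_mean) ** 2).mean(axis=0))

# Toy ensemble: 20 members, identical except for random perturbations.
rng = np.random.default_rng(0)
ens = 300.0 + rng.normal(scale=0.5, size=(20, 4, 4))
iv = across_member_variability(ens)
```

The same member axis is used for the across-member variance (the square of this quantity), whose tendency the budget study decomposes into its seven contributions.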
As a first study, this work analyses the IV within the HIRHAM5. To this end, atmospheric circulation parameters and the potential temperature are investigated for all four ensemble years. Similar to previous studies, the IV fluctuates strongly in time. Further, because all ensemble members are forced with the same LBC, the IV depends on the vertical level within the troposphere, with high values in the lower troposphere and at 500 hPa and low values in the upper troposphere and at the surface. For the same reason, the spatial distribution shows low values of IV at the boundaries of the model domain.
The diagnostic budget study for the IV tendency of potential temperature reveals that the seven contributions fluctuate in time like the IV. However, the individual terms reach different absolute magnitudes. The budget study identifies the horizontal and vertical ‘baroclinic’ terms as the main contributors to the IV tendency, with the horizontal ‘baroclinic’ term producing and the vertical ‘baroclinic’ term reducing the IV. The other terms fluctuate around zero, because they are small in general or are balanced due to the domain average.
The comparison of the results obtained for the four different ensembles (summers 2006, 2007, 2009 and 2012) reveals that on average the findings for each ensemble are quite similar concerning the magnitude and the general pattern of IV and its contributions. However, near the surface a weaker IV is produced with decreasing sea ice extent. This is caused by a smaller impact of the horizontal 'baroclinic' term over some regions and by the changing diabatic processes, particularly a more intense reducing tendency of the IV due to condensative heating. However, it has to be emphasised that the behaviour of the IV and its dynamical and diabatic contributions are influenced mainly by complex atmospheric feedbacks and large-scale processes and not by the sea ice distribution.
Additionally, a comparison with a second RCM covering the Arctic and using the same LBC and IC is performed. For both models, very similar results concerning the IV and its dynamical and diabatic contributions are found. Hence, this investigation leads to the conclusion that the IV is a natural phenomenon and is independent of the applied RCM.
Der andere Weg zur Wahrheit
(2015)
This essay deals with the philosopher Franz Rosenzweig’s engagement with the figure of Jesus. Two of Rosenzweig’s texts are examined: “Atheistische Theologie” (1914) and “Der Stern der Erlösung” (1921). Added to these are the correspondence he conducted with Margrit and Eugen Rosenstock between 1917 and 1929, and a text by Eduard Strauß which the latter drafted in the course of his work at the Frankfurt Jüdisches Lehrhaus. Through these texts it is shown how the analysis of the Jesus figure becomes an engagement with his historicization by Protestant theology. Alongside Rosenzweig, Buber and Strauß, further Jewish scholars contributed to this debate. These debates about the historicity of Jesus are also embedded in the context of the relationship between Christianity and Judaism and, further, in Rosenzweig’s efforts towards a Christian-Jewish dialogue.
We continue our study of invariant forms of the classical equations of mathematical physics, such as the Maxwell equations or the Lamé system, on manifolds with boundary. To this end we interpret them in terms of the de Rham complex at a certain step. Using the structure of the complex, we gain the insight needed to predict a degeneracy deeply encoded in the equations. In the present paper we develop an invariant approach to the classical Navier-Stokes equations.
The main goal of this cumulative thesis is the derivation of surface emissivity data in the infrared from radiance measurements of Venus. Since these data are diagnostic of the chemical composition and grain size of the surface material, they can help to improve knowledge of the planet’s geology. Spectrally resolved images of nightside emissions in the range 1.0-5.1 μm were recently acquired by the InfraRed Mapping channel of the Visible and InfraRed Thermal Imaging Spectrometer (VIRTIS-M-IR) aboard ESA’s Venus Express (VEX). Surface and deep atmospheric thermal emissions in this spectral range are strongly obscured by the extremely opaque atmosphere, but three narrow spectral windows at 1.02, 1.10, and 1.18 μm allow the sounding of the surface. Additional windows between 1.3 and 2.6 μm provide information on atmospheric parameters that is required to interpret the surface signals. Quantitative data on surface and atmosphere can be retrieved from the measured spectra by comparing them to simulated spectra. A numerical radiative transfer model is used in this work to simulate the observable radiation as a function of atmospheric, surface, and instrumental parameters. It is a line-by-line model taking into account thermal emissions by surface and atmosphere as well as absorption and multiple scattering by gases and clouds. The VIRTIS-M-IR measurements are first preprocessed to obtain an optimal data basis for the subsequent steps. In this process, a detailed detector responsivity analysis enables the optimization of the data consistency. The measurement data have a relatively low spectral information content, and different parameter vectors can describe the same measured spectrum equally well. A usual method to regularize the retrieval of the wanted parameters from a measured spectrum is to take into account a priori mean values and standard deviations of the parameters to be retrieved. This decreases the probability of obtaining unreasonable parameter values.
The multi-spectrum retrieval algorithm MSR is developed to additionally consider physically realistic spatial and temporal a priori correlations between retrieval parameters describing different measurements. Neglecting geologic activity, MSR also allows the retrieval of an emissivity map as a parameter vector that is common to several spectrally resolved images that cover the same surface target. Even applying MSR, it is difficult to obtain reliable emissivity maps in absolute values. A detailed retrieval error analysis based on synthetic spectra reveals that this is mainly due to interferences from parameters that cannot be derived from the spectra themselves, but that have to be set to assumed values to enable the radiative transfer simulations. The MSR retrieval of emissivity maps relative to a fixed emissivity is shown to effectively avoid most emissivity retrieval errors. Relative emissivity maps at 1.02, 1.10, and 1.18 μm are finally derived from many VIRTIS-M-IR measurements that cover a surface target at Themis Regio. They are interpreted as spatial variations relative to an assumed emissivity mean of the target. It is verified that the maps are largely independent of the choice of many interfering parameters as well as the utilized measurement data set. These are the first Venus IR emissivity data maps based on a consistent application of a full radiative transfer simulation and a retrieval algorithm that respects a priori information. The maps are sufficiently reliable for future geologic interpretations.
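The a priori regularisation described above can be illustrated with a minimal linear sketch. This is a generic maximum-a-posteriori estimator with Gaussian a priori statistics, not the MSR code; the forward model and all numbers below are invented for the illustration.

```python
import numpy as np

def map_retrieval(K, y, x_a, s_a, s_y):
    """Maximum-a-posteriori estimate for a linear forward model y = K x.

    K   : forward-model Jacobian (m x n)
    y   : measurement vector (m)
    x_a : a priori mean of the state (n)
    s_a : a priori standard deviations (n)
    s_y : measurement-noise standard deviations (m)
    """
    Sa_inv = np.diag(1.0 / s_a**2)        # inverse a priori covariance
    Sy_inv = np.diag(1.0 / s_y**2)        # inverse noise covariance
    lhs = K.T @ Sy_inv @ K + Sa_inv       # normal equations with prior
    rhs = K.T @ Sy_inv @ y + Sa_inv @ x_a
    return np.linalg.solve(lhs, rhs)

# Under-determined toy problem: one measurement, two unknowns.
K = np.array([[1.0, 1.0]])
y = np.array([2.0])
x_a = np.array([0.5, 0.5])
x_hat = map_retrieval(K, y, x_a, s_a=np.array([1.0, 1.0]),
                      s_y=np.array([0.1]))
```

The a priori term makes the otherwise under-determined inversion unique and pulls the solution towards x_a wherever the measurement carries little information, which is the regularising effect the text describes.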
PaRDeS, the journal of the Vereinigung für Jüdische Studien e.V. (Association for Jewish Studies), aims to document the fruitful and multifaceted culture of Judaism as well as its points of contact with its environment in various fields. The journal also serves as a forum for positioning the disciplines of Jewish Studies and Judaistik within academic discourse and for discussing their historical and social responsibility.
Brownian motion is ergodic in the Boltzmann–Khinchin sense that long time averages of physical observables such as the mean squared displacement provide the same information as the corresponding ensemble average, even under out-of-equilibrium conditions. This property is the fundamental prerequisite for single particle tracking and its analysis in simple liquids. We study analytically and by event-driven molecular dynamics simulations the dynamics of force-free cooling granular gases and reveal a violation of ergodicity in this Boltzmann–Khinchin sense as well as distinct ageing of the system. Such granular gases comprise materials such as dilute gases of stones, sand, various types of powders, or large molecules, and their mixtures are ubiquitous in nature and technology, in particular in space. Depending on the physical-chemical properties of the inter-particle interaction upon pair collisions, we treat both a constant and a velocity-dependent (viscoelastic) restitution coefficient ε. Moreover, we compare the granular gas dynamics with an effective single-particle stochastic model based on an underdamped Langevin equation with time-dependent diffusivity. We find that both models share the same behaviour of the ensemble mean squared displacement (MSD) and the velocity correlations in the limit of weak dissipation. Qualitatively, the reported non-ergodic behaviour is generic for granular gases with any realistic dependence of ε on the impact velocity of the particles.
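The effective single-particle model mentioned above can be sketched with a simple Euler–Maruyama integration of an underdamped Langevin equation with time-dependent coefficients. The functional forms of gamma(t) and D(t) below are illustrative placeholders, not the coefficients derived in the article.

```python
import numpy as np

def simulate_velocity(gamma, D, v0=1.0, dt=1e-3, steps=10_000, seed=1):
    """Euler-Maruyama integration of dv/dt = -gamma(t) v + sqrt(2 D(t)) xi(t),

    an underdamped Langevin equation with time-dependent friction gamma(t)
    and diffusivity D(t), used here as a stand-in single-particle model
    for a cooling granular gas.
    """
    rng = np.random.default_rng(seed)
    v = np.empty(steps + 1)
    v[0] = v0
    t = 0.0
    for i in range(steps):
        noise = rng.normal() * np.sqrt(2.0 * D(t) * dt)
        v[i + 1] = v[i] - gamma(t) * v[i] * dt + noise
        t += dt
    return v

# Placeholder coefficients that decay as the 'gas' cools (Haff-like 1/(1+t)).
v = simulate_velocity(gamma=lambda t: 1.0 / (1.0 + t),
                      D=lambda t: 0.1 / (1.0 + t) ** 2)
```

With decaying D(t) the velocity fluctuations shrink over time, mimicking the granular temperature decay of a cooling gas and producing the ageing behaviour discussed in the text.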
The present article is among the first reports on the effects of poly(ampholyte)s and poly(betaine)s on the biomimetic formation of calcium phosphate. We have synthesized a series of di- and triblock copolymers based on a non-ionic poly(ethylene oxide) block and several charged methacrylate monomers, 2-(trimethylammonium)ethyl methacrylate chloride, 2-((3-cyanopropyl)-dimethylammonium)ethyl methacrylate chloride, 3-sulfopropyl methacrylate potassium salt, and [2-(methacryloyloxy)ethyl]dimethyl-(3-sulfopropyl)ammonium hydroxide. The resulting copolymers are either positively charged, ampholytic, or betaine block copolymers. All the polymers have very high molecular weights of over 10⁶ g mol⁻¹. All polymers are water-soluble and show a strong effect on the precipitation and dissolution of calcium phosphate. The strongest effects are observed with triblock copolymers based on a large poly(ethylene oxide) middle block (nominal Mn = 100,000 g mol⁻¹). Surprisingly, the data show that there is a need for positive charges in the polymers to exert tight control over mineralization and dissolution, but that the exact position of the charge in the polymer is of minor importance for both calcium phosphate precipitation and dissolution.
We investigate the ergodic properties of a random walker performing (anomalous) diffusion on a random fractal geometry. Extensive Monte Carlo simulations of the motion of tracer particles on an ensemble of realisations of percolation clusters are performed for a wide range of percolation densities. Single trajectories of the tracer motion are analysed to quantify the time averaged mean squared displacement (MSD) and to compare this with the ensemble averaged MSD of the particle motion. Other complementary physical observables associated with ergodicity are studied as well. It turns out that the time averaged MSD of individual realisations exhibits non-vanishing fluctuations even in the limit of very long observation times as the percolation density approaches the critical value. This apparent non-ergodic behaviour concurs with the ergodic behaviour on the ensemble averaged level. We demonstrate how the non-vanishing fluctuations in single particle trajectories are analytically expressed in terms of the fractal dimension and the cluster size distribution of the random geometry, thus being of purely geometrical origin. Moreover, we reveal that the scaling law for the convergence to ergodicity, which is known to be inversely proportional to the observation time T for ergodic diffusion processes, here follows a slower power-law ~T^(−h) with h < 1 due to the fractal structure of the accessible space. These results provide useful measures for differentiating subdiffusion on random fractals from an otherwise closely related process, namely fractional Brownian motion. Implications of our results for the analysis of single particle tracking experiments are provided.
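For single particle tracking analyses of the kind discussed above, the time averaged MSD and the ergodicity breaking (EB) parameter can be estimated as follows. These are the standard definitions from the single-particle-tracking literature, not code from the article, and the Brownian toy ensemble serves only as an ergodic reference case.

```python
import numpy as np

def time_averaged_msd(x, lag):
    """Time averaged MSD of one trajectory x at a given lag time:
    the squared displacement over the lag, averaged along the trajectory."""
    disp = x[lag:] - x[:-lag]
    return np.mean(disp ** 2)

def eb_parameter(trajs, lag):
    """Ergodicity breaking parameter: relative variance of the
    time averaged MSD across trajectories; it vanishes with growing
    observation time for ergodic diffusion."""
    d = np.array([time_averaged_msd(x, lag) for x in trajs])
    return np.mean(d ** 2) / np.mean(d) ** 2 - 1.0

# Toy ensemble of Brownian trajectories (ergodic reference case).
rng = np.random.default_rng(2)
trajs = np.cumsum(rng.normal(size=(200, 5000)), axis=1)
eb = eb_parameter(trajs, lag=10)
```

For ergodic Brownian motion the EB parameter decays as ~1/T with observation time T; the slower ~T^(−h) decay with h < 1 reported in the text is the fractal-geometry signature that distinguishes these processes.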
The interruption of learning processes by breaks filled with diverse activities is common in everyday life. We investigated the effects of active computer gaming and passive relaxation (rest and music) breaks on working memory performance. Young adults were exposed to breaks involving (i) eyes-open resting, (ii) listening to music and (iii) playing the video game “Angry Birds” before performing the n-back working memory task. Based on linear mixed-effects modeling, we found that playing the “Angry Birds” video game during a short learning break led to a decline in task performance over the course of the task as compared to eyes-open resting and listening to music, although overall task performance was not impaired. This effect was associated with high levels of daily mind wandering and low self-reported ability to concentrate. These findings indicate that video games can negatively affect working memory performance over time when played in between learning tasks. We suggest further investigation of these effects because of their relevance to everyday activity.
The Brazilian Cerrado is recognised as one of the most threatened biomes in the world, as the region has experienced a striking change from natural vegetation to intense cash crop production. The impacts of rapid agricultural expansion on soil and water resources are still poorly understood in the region. Therefore, the overall aim of the thesis is to improve our understanding of the ecohydrological processes causing water and soil degradation in the Brazilian Cerrado.
I first present a meta-analysis to provide quantitative evidence and to identify the main impacts of soil and water alterations resulting from land use change. Second, field studies were conducted to (i) examine the effects of land use change on soils of natural cerrado transformed to common croplands and pasture and (ii) indicate how agricultural production affects water quality across a meso-scale catchment. Third, the ecohydrological process-based model SWAT was tested with simple scenario analyses to gain insight into the impacts of land use and climate change on water cycling in the upper São Lourenço catchment, which has experienced decreasing discharges over the last 40 years.
Soil and water quality parameters from different land uses were extracted from 89 soil and 18 water studies in different regions across the Cerrado. Significant effects on pH, bulk density and available P and K for croplands, and less pronounced effects for pastures, were evident. Soil total N did not differ between land uses because most of the cropland sites were N-fixing soybean cultivations, which are not artificially fertilized with N. By contrast, water quality studies showed N enrichment in agricultural catchments, indicating fertilizer impacts and potential susceptibility to eutrophication. Regardless of the land use, P is widely absent because of the high fixation capacities of deeply weathered soils and the filtering capacity of riparian vegetation. Pesticides, however, were consistently detected throughout the entire aquatic system. In several case studies, extremely high peak concentrations exceeded Brazilian and EU water quality limits, which poses serious health risks.
My field study revealed that land conversion caused a significant reduction in infiltration rates near the soil surface of pasture (–96 %) and croplands (–90 % to –93 %). Soil aggregate stability was significantly lower in croplands than in cerrado and pasture. Soybean crops had extremely high extractable P (80 mg kg⁻¹), whereas pasture N levels declined. A snapshot water sampling showed strong seasonality in water quality parameters. Higher temperature, oxidation-reduction potential (ORP) and NO₂⁻, together with very low oxygen concentrations (<5 mg l⁻¹) and saturation (<60 %), were recorded during the rainy season. By contrast, remarkably high PO₄³⁻ concentrations (up to 0.8 mg l⁻¹) were measured during the dry season. Water quality parameters were affected by agricultural activities at all sampled sub-catchments across the catchment, regardless of stream characteristics. Direct NO₃⁻ leaching appeared to play a minor role; however, water quality is affected by topsoil fertiliser inputs, with impacts on both small low-order streams and larger rivers. Land conversion left cropland soils more susceptible to surface erosion through increased overland flow events.
In a third study, the field data were used to parameterise SWAT. The model was tested with different input data and calibrated in SWAT-CUP using the SUFI-2 algorithm. The model was judged reliable for simulating the water balance in the Cerrado. Complete cerrado, pasture and cropland covers were used to analyse the impact of land use on water cycling, as well as climate change projections (2039–2058) according to the RCP 8.5 scenario. The actual evapotranspiration (ET) for the cropland scenario was higher than for the cerrado cover (+100 mm a⁻¹). Land use change scenarios confirmed that deforestation caused higher annual ET rates, partly explaining the trend of decreasing streamflow. Taking all climate change scenarios into account, the most likely effect is a prolongation of the dry season (by about one month), with higher peak flows in the rainy season. Consequently, potential threats to crop production from lower soil moisture and increased erosion and sediment transport during the rainy season are likely and should be considered in adaptation plans.
From the three studies of the thesis I conclude that land use intensification is likely to seriously limit the Cerrado’s future regarding both agricultural productivity and ecosystem stability. Because only limited data are available for the vast biome, we recommend further field studies to understand the interaction between terrestrial and aquatic systems. This thesis may serve as a valuable database for integrated modelling to investigate the impact of land use and climate change on soil and water resources and to test and develop mitigation measures for the Cerrado in the future.
Nowadays, business processes are increasingly supported by IT services that produce massive amounts of event data during process execution. Aiming at a better process understanding and improvement, this event data can be used to analyze processes using process mining techniques. Process models can be automatically discovered and the execution can be checked for conformance to specified behavior. Moreover, existing process models can be enhanced and annotated with valuable information, for example for performance analysis. While the maturity of process mining algorithms is increasing and more tools are entering the market, process mining projects still face the problem of different levels of abstraction when comparing events with modeled business activities. Mapping the recorded events to activities of a given process model is essential for conformance checking, annotation and understanding of process discovery results. Current approaches try to abstract from events in an automated way that does not capture the required domain knowledge to fit business activities. Such techniques can be a good way to quickly reduce complexity in process discovery. Yet, they fail to enable techniques like conformance checking or model annotation, and potentially create misleading process discovery results by not using the known business terminology.
In this thesis, we develop approaches that abstract an event log to the level of abstraction needed by the business. Typically, this abstraction level is defined by a given process model. Thus, the goal of this thesis is to match events from an event log to activities in a given process model. To accomplish this goal, behavioral and linguistic aspects of process models and event logs, as well as domain knowledge captured in existing process documentation, are taken into account to build semi-automatic matching approaches. The approaches establish a preprocessing step for every available process mining technique that produces or annotates a process model, thereby reducing the manual effort for process analysts. While each of the presented approaches can be used in isolation, we also introduce a general framework for the integration of different matching approaches.
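To illustrate the linguistic side of such event-to-activity matching, the following is a minimal sketch, not the thesis's actual algorithms: it pairs low-level event names with model activity labels by string similarity and leaves events below a confidence threshold unmatched for manual review. All names and the threshold value are illustrative assumptions.

```python
from difflib import SequenceMatcher

def match_events_to_activities(event_names, activity_labels, threshold=0.6):
    """Propose a mapping from event names to model activities based on
    label similarity; events below the threshold stay unmatched (None)."""
    mapping = {}
    for event in event_names:
        best_label, best_score = None, 0.0
        for label in activity_labels:
            score = SequenceMatcher(None, event.lower(), label.lower()).ratio()
            if score > best_score:
                best_label, best_score = label, score
        mapping[event] = best_label if best_score >= threshold else None
    return mapping

events = ["create purchase order", "approve PO", "ship goods"]
activities = ["Create Purchase Order", "Approve Purchase Order", "Ship Goods"]
print(match_events_to_activities(events, activities))
```

A real approach would additionally exploit behavioral constraints (ordering of events versus activities) and external documentation, as the abstract describes.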
The approaches have been evaluated in case studies with industry and using a large industry process model collection and simulated event logs. The evaluation demonstrates the effectiveness and efficiency of the approaches and their robustness towards nonconforming execution logs.
Business process management (BPM) is a systematic and structured approach to model, analyze, control, and execute business operations, also referred to as business processes, that are carried out to achieve business goals. Central to BPM are conceptual models. Most prominently, process models describe which tasks are to be executed by whom, utilizing which information, to reach a business goal. Process models generally cover the perspectives of control flow, resources, data flow, and information systems.
Executing business processes leads to the actual work being carried out. Automating them increases efficiency and is usually supported by process engines. This, though, requires coverage of control flow, resource assignments, and process data. While the first two perspectives are well supported in current process engines, data handling must be implemented and maintained manually. Model-driven data handling, in contrast, promises to ease implementation, to reduce error-proneness through graphical visualization, and to reduce development effort through code generation.
This thesis addresses the modeling, analysis, and execution of data in business processes and presents a novel approach to execute data-annotated process models in an entirely model-driven fashion. As a first step and formal grounding for process execution, a conceptual framework for the integration of processes and data is introduced. This framework is complemented by operational semantics through a Petri net mapping extended with data considerations. Model-driven data execution comprises the handling of complex data dependencies, process data, and data exchange in the case of communication between multiple process participants. This thesis introduces concepts from the database domain into BPM to distinguish data operations, to specify relations between data objects of the same as well as of different types, to correlate modeled data nodes as well as received messages to the correct run-time process instances, and to generate messages for inter-process communication. The underlying approach, which is not limited to a particular process description language, has been implemented as a proof of concept.
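The correlation of received messages to the correct run-time process instances can be sketched as follows. This is a hedged illustration of the general idea, not the thesis's implementation; the class names, the `order_id` correlation key, and the message shape are all assumptions.

```python
# Hypothetical sketch: routing incoming messages to process instances
# via a correlation key, in the spirit of the database-inspired
# correlation concepts described above.
class ProcessInstance:
    def __init__(self, instance_id, correlation_key):
        self.instance_id = instance_id
        self.correlation_key = correlation_key  # e.g. an order number
        self.inbox = []                         # messages delivered so far

class CorrelationEngine:
    def __init__(self):
        self._by_key = {}  # correlation key -> running instance

    def register(self, instance):
        self._by_key[instance.correlation_key] = instance

    def deliver(self, message):
        """Route a message to the instance whose key matches its payload."""
        instance = self._by_key.get(message["order_id"])
        if instance is None:
            raise LookupError(f"no instance for key {message['order_id']!r}")
        instance.inbox.append(message)
        return instance.instance_id

engine = CorrelationEngine()
engine.register(ProcessInstance("case-1", "order-42"))
print(engine.deliver({"order_id": "order-42", "status": "shipped"}))  # case-1
```

In the thesis's setting, such correlation would be derived from the data annotations of the process model rather than hard-coded.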
Automation of data handling in business processes requires data-annotated and correct process models. Targeting the former, algorithms are introduced that extract information about data nodes, their states, and data dependencies from control flow information and annotate the process model accordingly. Usually, not all required information can be extracted from control flow information, since some data manipulations are not specified; this requires further refinement of the process model. Given a set of object life cycles specifying the allowed data manipulations, the process model can be automatically refined until it contains all data manipulations. Process models are abstractions focusing on specific aspects in detail; e.g., the control flow and data flow views are often represented through activity-centric and object-centric process models, respectively. This thesis introduces algorithms for round-trip transformations, enabling stakeholders to add information to the process model in whichever view is most appropriate.
Targeting process model correctness, this thesis introduces the notion of weak conformance, which checks for consistency between given object life cycles and the process model such that the process model may only utilize data manipulations specified directly or indirectly in an object life cycle. The notion is computed via soundness checking of a hybrid representation integrating control flow and data flow correctness checking. To make a process model executable, identified violations must be corrected. Therefore, an approach is proposed that identifies, for each violation, multiple alternative changes to the process model or the object life cycles.
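The "directly or indirectly" condition of weak conformance can be illustrated with a small reachability check: a modeled state change is acceptable if the object life cycle allows it either as a single transition or via intermediate states. This is a simplified sketch under that reading, not the thesis's soundness-based computation; the state names are invented for the example.

```python
from collections import defaultdict, deque

def reachable(olc_transitions, start, goal):
    """True if `goal` is reachable from `start` in the object life cycle."""
    adj = defaultdict(list)
    for src, tgt in olc_transitions:
        adj[src].append(tgt)
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            return True
        for nxt in adj[state]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

def weakly_conformant(model_steps, olc_transitions):
    """Each modeled manipulation (from_state, to_state) must be realizable
    directly or via intermediate states of the object life cycle."""
    return all(reachable(olc_transitions, s, t) for s, t in model_steps)

olc = [("created", "approved"), ("approved", "shipped"), ("shipped", "archived")]
# The model skips 'approved', which the life cycle allows only indirectly:
print(weakly_conformant([("created", "shipped")], olc))  # True
print(weakly_conformant([("shipped", "created")], olc))  # False
```

The actual notion additionally accounts for control flow, which is why the thesis computes it on a hybrid representation rather than on isolated state pairs.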
Utilizing the results of this thesis, business processes can be executed in an entirely model-driven fashion from the data perspective, in addition to the control flow and resource perspectives supported previously. Model creation is supported by algorithms that partly automate the process, while model consistency is ensured by data correctness checks.
This paper presents the concept of a learning centre for computer science (LZI) at Universität Paderborn. Starting from the subject-specific difficulties of computer science students, the LZI's offerings are explained, which span four areas: individual advice and support, an "open learning space", workshops and courses, and research. A first evaluation using feedback forms shows that students have received the offering positively. The LZI's offerings are to be further expanded and improved in the future, with further studies as the starting point.
The choice of the right degree programme and the subsequent introductory study phase are often decisive for the successful course of a degree. One major challenge is to recognise, already in the first weeks of study, existing deficits in supposedly simple key competencies and to remedy them as soon as possible. A second, no less important challenge is to determine as early as possible, for each individual student, whether he or she has chosen the individually right degree programme, one that matches his or her personal inclinations, interests, and abilities and contributes to realising his or her life goals. Only then are students sufficiently and lastingly intrinsically motivated to see a demanding, complex degree programme through to success. In this contribution we focus on a measure that introduces students to a process of systematically reflecting on their own learning process and their own goals, and that relates the two to each other.
The goal of a new introductory study phase is to present students, by the end of the first semester, with a diverse picture of careers in computer science and business informatics, including their broad range of tasks, and thereby to clarify the connections between the individual modules of the curriculum. Students should be enabled to take the planning and organisation of their studies into their own hands with great independence.
We describe a computer science competition for upper secondary school students that introduces, over several weeks and as realistically as possible, the working world of a computer scientist. In the competition, the student teams develop an Android app and organise its development using project management methods oriented towards professional, agile processes. The contribution presents the theoretical background on competitions, the organisational and didactic decisions, a first evaluation, as well as reflection and outlook.
In teaching HCI (human-computer interaction), a recurring challenge is to conduct practical exercises with exciting results that nevertheless do not get lost in technical details but remain focused on HCI. In the module "Interaction Design" at Universität Hamburg, students design and practically implement prototypical interaction concepts for the game Neverball within three weeks. Unlike in most introductory HCI courses, the outcome is not mock-ups but running software. To make this possible within the project time, Neverball was extended with a TCP-based interface. This removes the time-consuming familiarisation with the game's source code, and the students can concentrate on their interaction prototypes. We describe the experience gained from running the project several times and explain our implementation approach. The results are intended to support HCI instructors in designing similar practice-oriented exercises with tangible results.
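A TCP-based control interface of this kind lets interaction prototypes drive the game from any language without touching its source. The following is a purely hypothetical sketch of such a client; the port and the line-based "tilt x y" command format are invented for illustration and are not the actual protocol of the Neverball extension described above.

```python
import socket

def format_tilt(x, y):
    """Encode a tilt command as one ASCII line (assumed wire format)."""
    return f"tilt {x:.3f} {y:.3f}\n".encode("ascii")

def send_tilt(host, port, x, y):
    """Connect to the game's control interface and send a single command."""
    with socket.create_connection((host, port), timeout=1.0) as sock:
        sock.sendall(format_tilt(x, y))

# Usage: an interaction prototype maps sensor input to tilt commands, e.g.
# send_tilt("localhost", 5000, 0.2, -0.1)
```

Keeping the protocol this simple is what lets students skip the game's internals and spend the project time on the interaction concept itself.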