Блуждающие цитаты (2016)
Erneuertes Gestern? (2016)
Tarkovskijs Scham (2016)
The Jewish cemetery in Potsdam is the only authentic memorial site bearing witness to the life cycle of the Jewish population in the former Prussian residence and garrison town. It also reflects the differing ways in which later generations have dealt with their cultural heritage. Moreover, this Jewish cemetery is currently the only one in Germany recognized by UNESCO as a World Heritage site.
Since the Jewish history of Potsdam is still little known, a project funded by the foundation "Erinnerung, Verantwortung und Zukunft" was established, in which students of the Potsdamer Humboldt-Gymnasium engaged intensively with the Jewish heritage of their city within a seminar course. In addition to approaching the topic through various subjects relating to the cemetery, the students studied individual Jewish residents of Potsdam, their family histories, and their ways of life. Aspects of the religious understanding of death and mourning in Judaism were presented as a complement.
The results of all these studies are brought together in the present teaching material and serve as an inspiration for teachers and learners to address the Jewish history of their own home towns.
Computer security deals with the detection and mitigation of threats to computer networks, data, and computing hardware. This thesis addresses two computer security problems: email spam campaign detection and malware detection.
Email spam campaigns can easily be generated with popular dissemination tools by specifying simple grammars that serve as message templates. A grammar is disseminated to the nodes of a botnet, which create messages by instantiating the grammar at random. Email spam campaigns can encompass huge data volumes and therefore pose a threat to the stability of the infrastructure of email service providers, which have to store them. Malware, software that serves a malicious purpose, affects web servers, client computers via active content, and client computers through executable files. Without malware detection systems, it would be easy for malware creators to collect sensitive information or to infiltrate computers.
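The template-instantiation scheme described above can be sketched in a few lines. The template and slot values below are invented for illustration and not taken from any real campaign:

```python
import random

# Hypothetical campaign "grammar": a template with slots that each
# botnet node fills in at random, yielding many syntactic variants
# of the same underlying message.
template = "{greet} {name}, buy cheap meds at {host} now!"
choices = {
    "greet": ["Hi", "Hello", "Dear"],
    "name": ["friend", "customer", "winner"],
    "host": ["shop%03d.example" % i for i in range(1000)],
}

def instantiate(rng=random):
    """Instantiate the grammar at random, as a dissemination tool would."""
    msg = template
    for slot, options in choices.items():
        msg = msg.replace("{" + slot + "}", rng.choice(options))
    return msg
```

Every call produces a different message, yet all instances share the same syntactic skeleton, which is what a campaign detector can exploit.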
The detection of threats, such as email spam messages, phishing messages, or malware, is an adversarial and therefore intrinsically difficult problem. Threats vary greatly and evolve over time. Detecting threats with manually designed rules is therefore difficult and requires constant engineering effort. Machine learning is a research area that revolves around the analysis of data and the discovery of patterns that describe aspects of the data. Discriminative learning methods extract prediction models from data that are optimized to predict a target attribute as accurately as possible. Machine learning methods hold the promise of automatically identifying patterns that robustly and accurately detect threats. This thesis focuses on the design and analysis of discriminative learning methods for the two computer security problems under investigation: email campaign detection and malware detection.
The first part of this thesis addresses email campaign detection. We focus on regular expressions as a syntactic framework, because regular expressions are intuitively comprehensible to security engineers and administrators, and they can be applied as a detection mechanism extremely efficiently. In this setting, a prediction model is provided with exemplary messages from an email spam campaign. The prediction model has to generate a regular expression that reveals the syntactic pattern underlying the entire campaign, and that a security engineer finds comprehensible and trusts enough to use for blacklisting further messages at the email server. We model this problem as a two-stage learning problem with structured input and output spaces that can be solved using standard cutting-plane methods. To this end, we develop an appropriate loss function and derive a decoder for the resulting optimization problem.
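As a much simplified illustration of the underlying idea (not the thesis's two-stage structured learner), tokens shared by all campaign messages can be kept literal while randomly instantiated positions are generalized to bounded wildcards. The function name and the position-wise alignment strategy are assumptions made for this sketch:

```python
import re

def infer_campaign_regex(messages):
    """Generalize sample campaign messages into one regular expression.

    Simplifying assumption: all messages have the same number of
    whitespace-separated tokens (true for instances of one template).
    """
    token_lists = [m.split() for m in messages]
    length = min(len(t) for t in token_lists)
    parts = []
    for i in range(length):
        column = {t[i] for t in token_lists}
        if len(column) == 1:
            # token identical in every message: keep it literally
            parts.append(re.escape(column.pop()))
        else:
            # token instantiated at random: generalize to a bounded wildcard
            longest = max(len(tok) for tok in column)
            parts.append(r"\S{1,%d}" % longest)
    return r"\s+".join(parts)

samples = [
    "Buy cheap meds at shop123.example now",
    "Buy cheap meds at shop777.example now",
]
pattern = infer_campaign_regex(samples)
```

The resulting expression matches unseen instances of the same template while rejecting unrelated messages, which is the behavior a blacklisting filter needs.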
The second part of this thesis deals with the problem of predicting whether a given JavaScript or PHP file is malicious or benign. Recent malware analysis techniques use static features, dynamic features, or both. In fully dynamic analysis, the software or script is executed in a sandbox environment and observed for malicious behavior. By contrast, static analysis is based on features that can be extracted directly from the program file. In order to bypass static detection mechanisms, code obfuscation techniques are used to spread a malicious program file in many different syntactic variants. Deobfuscating the code before applying a static classifier can overcome the problem of obfuscated malicious code, but it increases the computational cost of malware detection by an order of magnitude. In this thesis we present a cascaded architecture in which a classifier first performs a static analysis of the original code and, based on the outcome of this first classification step, the code may be deobfuscated and classified again. We explore several types of features, including token $n$-grams, orthogonal sparse bigrams, subroutine hashings, and syntax-tree features, and study the robustness of detection methods and feature types against the evolution of malware over time. The developed tool scans very large file collections quickly and accurately.
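The cascade can be sketched as follows. The classifier and deobfuscator interfaces (`static_clf`, `deobfuscate`) and the confidence threshold are hypothetical stand-ins for this sketch, not the actual system:

```python
def cascaded_classify(tokens, static_clf, deobfuscate, threshold=0.9):
    """Cascaded malware detection sketch: a cheap static classifier decides
    confident cases directly; only uncertain files pay the expensive
    deobfuscation cost before being classified a second time.
    `static_clf(tokens)` returns the estimated probability of maliciousness.
    """
    p = static_clf(tokens)
    if p >= threshold:
        return "malicious"
    if p <= 1.0 - threshold:
        return "benign"
    # uncertain region: deobfuscate and rerun the static classifier
    return "malicious" if static_clf(deobfuscate(tokens)) >= 0.5 else "benign"

# Toy stand-ins (hypothetical; the real system uses learned models over
# token n-grams, orthogonal sparse bigrams, and syntax-tree features).
def toy_static_clf(tokens):
    suspicious = {"eval", "base64_decode", "unescape"}
    return min(1.0, 0.45 * sum(t in suspicious for t in tokens))

def toy_deobfuscate(tokens):
    return [t.strip('"') for t in tokens]  # pretend the quotes were obfuscation
```

The design point of the cascade is that most files never reach the second stage, which keeps the average per-file cost close to that of pure static analysis.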
Each model is evaluated on real-world data and compared to reference methods. Our approach of inferring regular expressions to filter emails belonging to an email spam campaign leads to models with a high true-positive rate at a very low false-positive rate, an order of magnitude lower than that of a commercial content-based filter. The presented system, REx-SVMshort, is being used by a commercial email service provider and complements content-based and IP-address-based filtering.
Our cascaded malware detection system is evaluated on a high-quality data set of almost 400,000 conspicuous PHP files and a collection of more than 100,000 JavaScript files. From our case study we conclude that the system processes large data collections quickly and accurately at a low false-positive rate.
This term paper compares the frequency of the imperative on posters from the 2016 Berlin state parliament (Abgeordnetenhaus) election with that on posters from the Weimar Republic. It pursues the thesis that this frequency has decreased and is able to confirm it: in 2016 the imperative occurs eight times less often (5.7 % versus 45.7 %). In addition, the paper provides an overview of the imperative and of further means of articulating a request in German.
Two corpora were used for the study; the corpus of slogans from the Abgeordnetenhaus election was compiled specifically for this paper and is enclosed with it. In this corpus, as in the corpus on the Weimar Republic, all slogans are included and the imperatives used are counted. This offers an insight into the political advertising language of the two periods.
The goal of the presented work is to explore the interaction between gold nanorods (GNRs) and hyper-sound waves. For the generation of the hyper-sound I have used azobenzene-containing polymer transducers. Multilayer polymer structures with well-defined thicknesses and smooth interfaces were built via layer-by-layer deposition. For the transducer films, anionic polyelectrolytes with azobenzene side groups (PAzo) were alternated with the cationic polymer PAH. PSS/PAH multilayers, which do not absorb in the visible range, were built as spacer layers. The properties of the PAzo/PAH film as a transducer are carefully characterized by static and transient optical spectroscopy. The optical and mechanical properties of the transducer are studied on the picosecond time scale. In particular, the relative change of the refractive index of the photo-excited and expanded PAH/PAzo is Δn/n = -2.6·10^-4. The generated strain is calibrated by ultrafast X-ray diffraction, which measures the strain in a mica substrate into which the hyper-sound is transduced. By simulating the X-ray data with a linear-chain model, the strain in the transducer under excitation is derived to be Δd/d ≈ 5·10^-4.
In addition to investigating the properties of the transducer itself, I have performed a series of experiments to study the penetration of the generated strain into various adjacent materials. By depositing the PAzo/PAH film onto a PAH/PSS structure with gold nanorods incorporated in it, I have shown that nanoscale impurities can be detected via the scattering of hyper-sound.
Prior to the investigation of complex structures containing GNRs and the transducer, I performed several sets of experiments on GNRs deposited on a thin buffer of PSS/PAH. The static and transient response of the GNRs is investigated for different fluences of the pump beam and for different dielectric environments (GNRs covered by PSS/PAH).
A systematic analysis of sample architectures is performed in order to construct a sample with the desired effect of GNRs responding to the hyper-sound strain wave. The observed shift of a feature related to the longitudinal plasmon resonance in the transient reflection spectra is interpreted as the GNRs sensing the strain wave. We argue that the shift of the longitudinal plasmon resonance is caused by the viscoelastic deformation of the polymer around the nanoparticle. The deformation is induced by the out-of-plane difference in strain between the area directly under a particle and the area next to it. Simulations based on the linear-chain model support this assumption. Experimentally, this assumption is verified by investigating the same structure with GNRs embedded in a PSS/PAH polymer layer.
The response of GNRs to the hyper-sound wave is also observed for the sample structure with GNRs embedded in PAzo/PAH films. In this case, the response of the GNRs is explained as being driven by the change of the refractive index of PAzo during strain propagation.
The global carbon cycle is closely linked to Earth's climate. In the context of continuously unchecked anthropogenic CO₂ emissions, the importance of natural CO₂ uptake and carbon storage is increasing. An important biogenic mechanism of natural atmospheric CO₂ drawdown is photosynthetic carbon fixation in plants and the subsequent long-term deposition of plant detritus in sediments.
The main objective of this thesis is to identify factors that control the mobilization and transport of plant organic matter (pOM) through rivers towards sedimentation basins. I investigated this aspect in the eastern Nepalese Arun Valley. The trans-Himalayan Arun River is characterized by a strong elevation gradient (205-8848 m asl) that is accompanied by strong changes in ecology and climate, ranging from wet tropical conditions in the Himalayan foreland to high alpine tundra on the Tibetan Plateau. The Arun is therefore an excellent natural laboratory, allowing the investigation of the effect of vegetation cover, climate, and topography on plant organic matter mobilization and export in tributaries along the gradient.
Based on hydrogen isotope measurements of plant waxes sampled along the Arun River and its tributaries, I first developed a model that allows an indirect quantification of the pOM contributed to the mainstem by the Arun's tributaries. In order to determine the role of climatic and topographic parameters of the sampled tributary catchments, I looked for significant statistical relations between the amount of tributary pOM export and tributary characteristics (e.g. catchment size, plant cover, annual precipitation or runoff, topographic measures). On the one hand, I demonstrated that pOM sourced from the Arun is not uniformly derived from its entire catchment area. On the other hand, I showed that dense vegetation is a necessary, but not sufficient, criterion for high tributary pOM export. Instead, I identified erosion, rainfall, and runoff as key factors controlling pOM sourcing in the Arun Valley. This finding is supported by terrestrial cosmogenic nuclide concentrations measured on river sands along the Arun and its tributaries in order to quantify catchment-wide denudation rates. The highest denudation rates corresponded well with maximum pOM mobilization and export, also suggesting a link between erosion and pOM sourcing.
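Such an indirect quantification rests on isotope mass balance. A generic two-endmember mixing calculation (a standard relation, not necessarily the exact model developed in the thesis) looks like this:

```python
def tributary_fraction(d_upstream, d_tributary, d_downstream):
    """Two-endmember isotope mass balance: the fraction f of the downstream
    plant-wax load contributed by the tributary solves
        d_downstream = f * d_tributary + (1 - f) * d_upstream
    where the d_* values are plant-wax hydrogen isotope compositions
    (deltaD, in permil). Generic sketch; the thesis's model is more elaborate.
    """
    if d_tributary == d_upstream:
        raise ValueError("endmembers must be isotopically distinct")
    return (d_downstream - d_upstream) / (d_tributary - d_upstream)

# e.g. mainstem above junction -180, tributary -140, mixed sample -170 permil
f = tributary_fraction(-180.0, -140.0, -170.0)
```

Here the mixed sample lies a quarter of the way between the two endmembers, so the tributary contributes a quarter of the downstream load.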
The second part of this thesis focuses on the applicability of stable isotope records, such as plant wax n-alkanes in sediment archives, as qualitative and quantitative proxies for the variability of past Indian Summer Monsoon (ISM) strength. First, I determined how ISM strength affects the hydrogen and oxygen stable isotopic composition (reported as δD and δ¹⁸O values vs. Vienna Standard Mean Ocean Water) of precipitation in the Arun Valley, and whether this amount effect (Dansgaard, 1964) is strong enough to be recorded in potential paleo-ISM isotope proxies. Second, I investigated whether potential isotope records across the Arun catchment reflect ISM-strength-dependent precipitation δD values only, or whether the ISM isotope signal is superimposed by winter precipitation or glacial melt. Furthermore, I tested whether δD values of plant waxes in fluvial deposits reflect δD values of environmental waters in the respective catchments.
I showed that surface water δD values in the Arun Valley and precipitation δD values from south of the Himalaya changed similarly during two consecutive years (2011 and 2012) with distinct ISM rainfall amounts (~20 % less in 2012). In order to evaluate the effect of other water sources (winter westerly precipitation, glacial melt) and of evapotranspiration in the Arun Valley, I analysed satellite remote sensing data of rainfall distribution (TRMM 3B42V7), snow cover (MODIS MOD10C1), glacial coverage (GLIMS database, Global Land Ice Measurements from Space), and evapotranspiration (MODIS MOD16A2). In addition to the predominant ISM signal in the entire catchment, stable isotope analysis of surface waters indicated a considerable amount of glacial melt derived from high-altitude tributaries and the Tibetan Plateau. Remotely sensed snow cover data revealed that the upper portion of the Arun also receives considerable winter precipitation, but the effect of snow melt on the Arun Valley hydrology could not be evaluated, as it takes place in early summer, several months prior to our sampling campaigns. However, I infer that plant wax records and other potential stable isotope proxy archives below the snowline are well suited for qualitative, and potentially quantitative, reconstructions of past changes in ISM strength.
Background
Doping presents a potential health risk for young athletes. Prevention programs are intended to prevent doping by educating athletes about banned substances. However, such programs have their limitations in practice. This led Germany to introduce the National Doping Prevention Plan (NDPP), in hopes of ameliorating the situation among young elite athletes. Two studies examined 1) the degree to which the NDPP led to improved prevention efforts in elite sport schools, and 2) the extent to which newly developed prevention activities of the national anti-doping agency (NADA) based on the NDPP have improved knowledge among young athletes within elite sports schools.
Methods
The first objective was investigated in a longitudinal study (Study I: t0 = baseline, t1 = follow-up 4 years after NDPP introduction) with N = 22 teachers engaged in doping prevention in elite sports schools. The second objective was evaluated in a cross-sectional comparison study (Study II) in N = 213 elite sports school students (54.5 % male, 45.5 % female, age M = 16.7 ± 1.3 years). All students had received the improved NDPP measures in school; one student group had additionally received NADA anti-doping activities, while a control group had not. Descriptive statistics were calculated, followed by McNemar tests, Wilcoxon tests, and analyses of covariance (ANCOVA).
Results
Results indicate that 4 years after the introduction of the NDPP there have been only limited structural changes with regard to the frequency, type, and scope of doping prevention in elite sport schools. In Study II, on the other hand, elite sport school students who received further NADA anti-doping activities performed better on an anti-doping knowledge test than students who did not take part (F(1, 207) = 33.99, p < 0.001), although this difference was small.
Conclusion
The integration of doping prevention in elite sport schools as part of the NDPP was only partially successful. The results of the evaluation indicate that the introduction of the NDPP has contributed more to a change in the content of doping prevention activities than to a structural transformation of anti-doping education in elite sport schools. Moreover, while students who received additional education in the form of the NDPP "booster sessions" had significantly more knowledge about doping than students who did not, this difference was small and may not translate into actual behavior.
Due to their multifunctionality, tablets offer tremendous advantages for research on handwriting dynamics or for the interactive use of learning apps in schools. Further, the widespread use of tablet computers has had a great impact on handwriting in the current generation. But is it advisable to teach writing, and to assess handwriting, in pre- and primary schoolchildren on tablets rather than on paper? Since handwriting is not automatized before the age of 10 years, children's handwriting movements require graphomotor and visual feedback as well as permanent control of movement execution during handwriting. Modifications in writing conditions, for instance the smoother writing surface of a tablet, might influence handwriting performance in general, and in particular that of non-automatized beginning writers. In order to investigate how handwriting performance is affected by a difference in friction of the writing surface, we recruited three groups with varying levels of handwriting automaticity: 25 preschoolers, 27 second graders, and 25 adults. We administered three tasks measuring graphomotor abilities, visuomotor abilities, and handwriting performance (only second graders and adults). We evaluated two aspects of handwriting performance: handwriting quality, with a visual score, and handwriting dynamics, using online handwriting measures (e.g., writing duration, writing velocity, strokes, and number of inversions in velocity (NIV)). In particular, NIVs, which describe the number of velocity peaks during handwriting, are directly related to the level of handwriting automaticity. In general, we found differences between writing on paper and on the tablet. These differences were partly task-dependent. The comparison between tablet and paper revealed a faster writing velocity for all groups and all tasks on the tablet, which indicates that all participants, even the experienced writers, were influenced by the lower friction of the tablet surface.
Our results for the group comparison show advancing levels of handwriting automaticity from preschoolers to second graders to adults, which confirms that our method captures handwriting performance in groups with varying degrees of handwriting automaticity. We conclude that the smoother tablet surface requires additional control of handwriting movements and therefore might present an additional challenge for learners of handwriting.
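One simple reading of the NIV measure, counting sign changes in a recorded velocity profile, can be sketched as follows (the exact operational definition used in the study may differ, e.g. in its smoothing of the raw signal):

```python
def number_of_velocity_inversions(velocity):
    """Count inversions in a velocity profile: the number of times the
    velocity switches between increasing and decreasing. Low NIV indicates
    smooth, automatized strokes; high NIV indicates frequent corrections.
    """
    inversions = 0
    prev_diff = 0.0
    for a, b in zip(velocity, velocity[1:]):
        diff = b - a
        if diff != 0.0:
            if prev_diff != 0.0 and (diff > 0) != (prev_diff > 0):
                inversions += 1
            prev_diff = diff
    return inversions
```

A smooth single-peak stroke (accelerate, then decelerate) yields one inversion, whereas a jerky, repeatedly corrected stroke yields many.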
Loss to follow-up in a randomized controlled trial study for pediatric weight management (EPOC) (2016)
Background
Attrition is a serious problem in intervention studies. The current study analyzed the attrition rate during follow-up in a randomized controlled pediatric weight management program (EPOC study) within a tertiary care setting.
Methods
Five hundred twenty-three parents and their 7–13-year-old children with obesity participated in the randomized controlled intervention trial. Follow-up data were assessed 6 and 12 months after the end of treatment. Attrition was defined as providing no objective weight data. Demographic and psychological baseline characteristics were used to predict attrition at 6- and 12-month follow-up using multivariate logistic regression analyses.
Results
Objective weight data were available for 49.6 (67.0) % of the children 6 (12) months after the end of treatment. Completers and non-completers at the 6- and 12-month follow-up differed in the amount of weight loss during their inpatient stay, their initial BMI-SDS, educational level of the parents, and child’s quality of life and well-being. Additionally, completers supported their child more than non-completers, and at the 12-month follow-up, families with a more structured eating environment were less likely to drop out. On a multivariate level, only educational background and structure of the eating environment remained significant.
Conclusions
The minor differences between the completers and the non-completers suggest that our retention strategies were successful. Further research should focus on prevention of attrition in families with a lower educational background.
The strong adhesion of sub-micron-sized particles to surfaces is a nuisance, both for removing contaminating colloids from surfaces and for the conscious manipulation of particles to create and test novel micro/nano-scale assemblies. The obvious idea of using detergents to ease these processes suffers from a lack of control: the action of any conventional surface-modifying agent is immediate and global. With photosensitive azobenzene-containing surfactants we overcome these limitations. Such photo-soaps contain optical switches (azobenzene molecules), which upon illumination with light of appropriate wavelength undergo reversible trans-cis photo-isomerization, resulting in a subsequent change of the physico-chemical molecular properties. In this work we show that when a spatial gradient in the composition of trans and cis isomers is created near a solid-liquid interface, a substantial hydrodynamic flow can be initiated, the spatial extent of which can be set, e.g., by the shape of a laser spot. We propose the concept of light-induced diffusio-osmosis driving the flow, which can remove, gather, or pattern a particle assembly at a solid-liquid interface. In other words, in addition to providing a soap we implement selectivity: particles are mobilized and moved at the time of illumination, and only across the illuminated area.
Referential Choice (2016)
We report a study of referential choice in discourse production, understood as the choice between various types of referential devices, such as pronouns and full noun phrases. Our goal is to predict referential choice, and to explore to what extent such prediction is possible. Our approach to referential choice includes a cognitively informed theoretical component, corpus analysis, machine learning methods and experimentation with human participants. Machine learning algorithms make use of 25 factors, including referent’s properties (such as animacy and protagonism), the distance between a referential expression and its antecedent, the antecedent’s syntactic role, and so on. Having found the predictions of our algorithm to coincide with the original almost 90% of the time, we hypothesized that fully accurate prediction is not possible because, in many situations, more than one referential option is available. This hypothesis was supported by an experimental study, in which participants answered questions about either the original text in the corpus, or about a text modified in accordance with the algorithm’s prediction. Proportions of correct answers to these questions, as well as participants’ rating of the questions’ difficulty, suggested that divergences between the algorithm’s prediction and the original referential device in the corpus occur overwhelmingly in situations where the referential choice is not categorical.
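As a toy illustration of how such factors could be combined into a prediction (the actual models use 25 factors and machine-learned weights; the factors and thresholds below are invented for the sketch):

```python
def predict_referential_device(distance_to_antecedent, protagonist, animate):
    """Toy referential-choice predictor: close, salient referents tend to be
    pronominalized, while distant or less salient ones get a full noun phrase.
    The weights are hypothetical, not those learned in the study.
    """
    score = 0.0
    score += 1.0 if protagonist else 0.0   # protagonism favors a pronoun
    score += 0.5 if animate else 0.0       # animacy favors a pronoun
    score -= 0.3 * distance_to_antecedent  # distance (in clauses) disfavors it
    return "pronoun" if score > 0.0 else "full NP"
```

For referents near the decision boundary the score is close to zero, which mirrors the paper's finding that in many situations more than one referential option is available.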
We study the adsorption–desorption transition of polyelectrolyte chains onto planar, cylindrical, and spherical surfaces with arbitrarily high surface charge densities by massive Monte Carlo computer simulations. We examine in detail how the well-known scaling relations for the threshold transition, demarcating the adsorbed and desorbed domains of a polyelectrolyte near weakly charged surfaces, are altered for highly charged interfaces. By virtue of high surface potentials and large surface charge densities, the Debye–Hückel approximation is often not feasible and the nonlinear Poisson–Boltzmann approach should be implemented. At low salt conditions, for instance, the electrostatic potential from the nonlinear Poisson–Boltzmann equation is smaller than the Debye–Hückel result, such that the required critical surface charge density for polyelectrolyte adsorption, σc, increases. The nonlinear relation between the surface charge density and the electrostatic potential leads to a sharply increasing critical surface charge density with growing ionic strength, imposing an additional limit on the critical salt concentration above which no polyelectrolyte adsorption occurs at all. We contrast our simulation findings with the known scaling results for weak critical polyelectrolyte adsorption onto oppositely charged surfaces for the three standard geometries. Finally, we discuss some applications of our results to physical-chemical and biophysical systems.
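The contrast between the linear and nonlinear surface potentials can be illustrated with the textbook Grahame relation for a planar surface in 1:1 salt (a standard result, not the simulation model of the paper):

```python
import math

# physical constants (SI)
KB = 1.380649e-23       # Boltzmann constant, J/K
E = 1.602176634e-19     # elementary charge, C
EPS0 = 8.8541878128e-12 # vacuum permittivity, F/m
NA = 6.02214076e23      # Avogadro's number, 1/mol

def surface_potentials(sigma, c_molar, eps_r=78.5, T=298.15):
    """Surface potential (V) of a charged plane in 1:1 salt, comparing the
    linear Debye-Hueckel estimate with the nonlinear Gouy-Chapman/Grahame
    result. At high surface charge the nonlinear potential grows only
    logarithmically and lies below the linear estimate.
    """
    n0 = c_molar * 1000 * NA            # bulk ion density, 1/m^3
    kT = KB * T
    kappa = math.sqrt(2 * n0 * E**2 / (eps_r * EPS0 * kT))  # inverse Debye length
    psi_linear = sigma / (eps_r * EPS0 * kappa)
    psi_nonlinear = (2 * kT / E) * math.asinh(
        sigma / math.sqrt(8 * eps_r * EPS0 * kT * n0))
    return psi_linear, psi_nonlinear

# 0.1 C/m^2 at 10 mM salt: a strongly charged surface
dh, gc = surface_potentials(sigma=0.1, c_molar=0.01)
```

Since asinh(x) < x for x > 0, the nonlinear potential is always below the linear one, consistent with the abstract's observation that the critical surface charge density for adsorption increases when the full Poisson–Boltzmann treatment is used.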
Geospatial data has become a natural part of a growing number of information systems and services in the economy, society, and people's personal lives. In particular, virtual 3D city and landscape models constitute valuable information sources within a wide variety of applications such as urban planning, navigation, tourist information, and disaster management. Today, these models are often visualized in detail to provide realistic imagery. However, a photorealistic rendering does not automatically lead to high image quality, with respect to an effective information transfer, which requires important or prioritized information to be interactively highlighted in a context-dependent manner.
Approaches in non-photorealistic rendering particularly consider a user's tasks and camera perspective when attempting to optimally express, recognize, and communicate important or prioritized information. However, the design and implementation of non-photorealistic rendering techniques for 3D geospatial data pose a number of challenges, especially when inherently complex geometry, appearance, and thematic data must be processed interactively. Hence, a promising technical foundation is established by the programmable and parallel computing architecture of graphics processing units.
This thesis proposes non-photorealistic rendering techniques that enable both the computation and selection of the abstraction level of 3D geospatial model contents according to user interaction and dynamically changing thematic information. To achieve this goal, the techniques integrate with hardware-accelerated rendering pipelines using the shader technologies of graphics processing units for real-time image synthesis. Unlike photorealistic rendering, the techniques employ principles of artistic rendering, cartographic generalization, and 3D semiotics to synthesize illustrative renditions of geospatial feature type entities such as water surfaces, buildings, and infrastructure networks. In addition, this thesis contributes a generic system that enables the integration of different graphic styles, photorealistic and non-photorealistic, and provides seamless transitions between them according to user tasks, camera view, and image resolution.
Evaluations of the proposed techniques have demonstrated their significance to the field of geospatial information visualization including topics such as spatial perception, cognition, and mapping. In addition, the applications in illustrative and focus+context visualization have reflected their potential impact on optimizing the information transfer regarding factors such as cognitive load, integration of non-realistic information, visualization of uncertainty, and visualization on small displays.
Change points in time series are perceived as heterogeneities in the statistical or dynamical characteristics of the observations. Unraveling such transitions yields essential information for understanding the observed system's intrinsic evolution and potential external influences. A precise detection of multiple changes is therefore of great importance for various research disciplines, such as the environmental sciences, bioinformatics, and economics. The primary purpose of the detection approach introduced in this thesis is the investigation of transitions underlying direct or indirect climate observations. In order to develop a diagnostic approach capable of capturing such a variety of natural processes, generic statistical features in terms of central tendency and dispersion are employed in the light of Bayesian inversion. In contrast to established Bayesian approaches to multiple changes, the generic approach proposed in this thesis is not formulated in the framework of specialized partition models of high dimensionality requiring prior specification, but as a robust kernel-based approach of low dimensionality employing least informative prior distributions.
First, a local Bayesian inversion approach is developed to robustly infer the location and the generic patterns of a single transition. The analysis of synthetic time series comprising changes of different observational evidence, data loss, and outliers validates the performance, consistency, and sensitivity of the inference algorithm. To systematically investigate time series for multiple changes, the Bayesian inversion is extended to a kernel-based inference approach. By introducing basic kernel measures, the weighted kernel inference results are composed into a proxy for a posterior distribution of multiple transitions. The detection approach is applied to environmental time series from the Nile River at Aswan and from the weather station Tuscaloosa, Alabama, both comprising documented changes. The method's performance confirms the approach as a powerful diagnostic tool for deciphering multiple changes underlying direct climate observations.
Finally, the kernel-based Bayesian inference approach is used to investigate a set of complex terrigenous dust records interpreted as climate indicators of the African region during the Plio-Pleistocene. A detailed inference unravels multiple transitions underlying the indirect climate observations, which are interpreted as conjoint changes. The identified conjoint changes coincide with established global climate events. In particular, the two-step transition associated with the establishment of the modern Walker circulation contributes to the current discussion about the influence of paleoclimate changes on the environmental conditions in tropical and subtropical Africa around two million years ago.
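A minimal Bayesian change point sketch for a single mean shift conveys the flavor of such an inversion: a flat prior over the change location, a known noise level, and plug-in segment means. The kernel-based approach of the thesis is considerably more general (it handles dispersion changes and multiple transitions), so this is only an illustration:

```python
import math

def change_point_posterior(series, sigma=1.0):
    """Posterior over the location of a single mean shift, assuming a flat
    prior over split points and Gaussian noise with known sigma; segment
    means enter as plug-in estimates for this sketch.
    Returns probabilities for a change after observation k = 1 .. n-1.
    """
    n = len(series)
    log_liks = []
    for k in range(1, n):  # candidate change after the k-th observation
        ll = 0.0
        for seg in (series[:k], series[k:]):
            mu = sum(seg) / len(seg)
            ll += sum(-(x - mu) ** 2 / (2 * sigma ** 2) for x in seg)
        log_liks.append(ll)
    # normalize in log space for numerical stability
    m = max(log_liks)
    weights = [math.exp(ll - m) for ll in log_liks]
    z = sum(weights)
    return [w / z for w in weights]
```

On a series whose mean jumps from about 0 to about 5, the posterior mass concentrates sharply on the true split point, which is the behavior the synthetic validation experiments described above probe under noise, data loss, and outliers.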
The Elbe and its catchment are affected by climate change. Integrated environmental modelling systems can be used to analyse, for large catchments such as that of the Elbe, the chain of effects leading from projected climate changes to the water balance and the resulting nutrient inputs and loads. Case studies performed ad hoc with these modelling systems represent the current state of model development and model uncertainty, and are therefore static.
This thesis describes a first step towards making climate impact analyses in the Elbe basin dynamic. On the one hand, this comprises a plausibility check of impact simulations driven by scenarios from the statistical scenario generator STARS, by comparing them with the impacts of more recent climate scenarios from the ISI-MIP project, which represent the state of the art in climate modelling. For this purpose, an integrated modelling system with a "frozen" state of development is used; the climate impact models themselves remain unchanged. On the other hand, one component of the integrated modelling system, the ecohydrological model SWIM, is further developed into a "live" version, which is validated and improved through point-scale testing against long-term experimental series from a lysimeter site and against current discharge records.
The following research questions are addressed: (i) What effects do different climate scenarios have on the water balance in the Elbe basin, and is a reassessment of the impact of climate change on the water balance necessary? (ii) What are the impacts of climate change on nutrient inputs and loads in the Elbe basin, and on the effectiveness of measures to reduce nutrient inputs? (iii) Is a valid assessment of the current ecohydrological situation of the Elbe catchment possible in a highly heterogeneous basin using the available up-to-date daily weather data, even from a very small number of stations?
The recent scenarios confirm the direction, but not the magnitude, of the climate impacts: for the STARS scenario, the decreases in mean annual total runoff and in monthly discharges at the gauges up to the middle of the century amount to about 30 %, whereas the decreases in the model studies based on the ISI-MIP scenario are only about 10 %. The main causes of this divergence are differences in the precipitation projections and in the seasonal distribution of warming. In the STARS scenario, precipitation decreases for methodological reasons and winters warm more strongly than summers; in the ISI-MIP scenario, precipitation remains nearly stable and summer and winter warming differ only slightly.
In general, nutrient inputs and loads decrease less than proportionally with discharge in both scenarios, with loads declining more strongly than inputs. The specific effects of the discharge changes are small, in the single-digit percentage range, as are the differences between the scenarios. The effect of two selected measures to reduce nutrient inputs and loads likewise differs only slightly under different discharge conditions, represented by climate scenarios of differing wetness.
The answers to the first two research questions show that updating the climate scenarios within an otherwise "frozen" ensemble of ecohydrological data and models is an important option for checking the plausibility of climate impact analyses. It provides the methodological basis for the conclusion that a reassessment of climate impacts is necessary for water quantity, but not for nutrient inputs and loads.
The validation studies carried out with SWIM-live to answer the third research question reveal discrepancies at the lysimeter site and in the discharges from the Saale and Spree sub-basins. These can partly be explained by the necessary interpolation distance of the weather data and by the influence of water management measures. Overall, the validation results show that even the pilot version of SWIM-live can be used for an ecohydrological assessment of the regional water balance in the Elbe catchment. SWIM-live allows simulated data to be viewed and evaluated immediately, so that modelling uncertainties are exposed directly and can consequently be reduced. First, densifying the meteorological input data by using about 700 instead of 19 climate and precipitation stations improved the results. Second, SWIM-live was used, by way of example, for a cycle of point-scale model improvement and spatial verification of the simulation results.
The individual sub-studies each contribute to making climate impact analyses in the Elbe basin dynamic. The immediate occasion was the flawed methodological basis of STARS; the usefulness of the dynamic approach, however, is not tied to this specific occasion, but rests on the fundamental insight that ad hoc scenario analyses always rely on pragmatic simplifications, which must be reviewed continually.
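The abstract attributes part of the validation discrepancies to the interpolation distance of the weather data, i.e. how far station observations must be carried to cover the catchment. The thesis does not specify the interpolation scheme here; inverse-distance weighting is a common choice for station data and makes the effect easy to see: the farther the nearest stations, the more the estimate blurs local extremes. Names and values below are illustrative only.

```python
import numpy as np

def idw(station_xy, station_vals, target_xy, power=2.0):
    """Inverse-distance-weighted estimate of a weather variable at a target point.

    Stations close to the target dominate the weighted average; with only
    distant stations, the estimate tends towards a regional mean.
    """
    d = np.linalg.norm(station_xy - target_xy, axis=1)
    if np.any(d == 0):                      # target coincides with a station
        return float(station_vals[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * station_vals) / np.sum(w))

# Three hypothetical precipitation stations (coordinates in km, values in mm/day)
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
precip = np.array([2.0, 6.0, 4.0])
estimate = idw(stations, precip, target_xy=np.array([1.0, 1.0]))
print(estimate)  # dominated by the nearest station's 2.0 mm/day
```

Densifying the network, as in the step from 19 to about 700 stations, shortens the typical station-to-target distance and thereby reduces exactly this kind of interpolation error.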
Ein Blick zurück
(2016)
1 Introduction, 2 The emergence of organised gymnastics in Germany, 3 From gymnastics to sport, 4 Friedrich Ludwig Jahn and the development of the German gymnasts' language, 5 The language of sport in the National Socialist era, 6 Didactic suggestions, 7 Materials and discussion prompts, 8 References
Mit Korbmachern zum Sieg
(2016)
1 Introduction and objectives, 2 Intended outcomes of the development of lexical competence, interlinked with the development of reading and text comprehension competence, 3 Working on vocabulary and text comprehension: text analysis as opening up a field of possibilities, 4 Optimally activating the pupil: setting action-regulating tasks, 5 References
Reviewed work:
Anne-Katrin Henkel / Thomas Rahe (eds.): Publizistik in jüdischen Displaced-Persons-Camps. Charakteristika, Medien und bibliothekarische Überlieferung, Zeitschrift für Bibliothekswesen und Bibliographie. Sonderbände, vol. 112, Frankfurt am Main: Vittorio Klostermann Verlag 2014. 194 pp.
Reviewed work:
Naphtali Herz Wessely: Worte des Friedens und der Wahrheit. Dokumente einer Kontroverse über Erziehung in der europäischen Spätaufklärung. Edited, introduced, and annotated by Ingrid Lohmann; co-edited by Rainer Wenzel / Uta Lohmann; translated from the Hebrew and annotated by Rainer Wenzel. Jüdische Bildungsgeschichte in Deutschland, vol. 8, Münster: Waxmann 2014. 800 pp.
Duldung und Diskriminierung
(2016)
The words "entjuden" and "Entjudung" ("to de-Jewify", "de-Jewification") are the linguistic expression of mostly anti-Jewish attitudes and actions in German history. This article traces the development of the term through its contexts of use. In the context of assimilation at the beginning of the 19th century, the term meant that one had to divest oneself of that Jewish "particularity" whose removal was widely postulated as a matter of consensus. Within inner-Jewish discussion at the beginning of the 20th century, "Entjudung" became a diagnostic expression of the loss of identity. As a political slogan of the National Socialists, it in turn became a synonym for the disenfranchisement and annihilation of Jewish people. Protestant theologians used the term in the debate on the renewal of Christianity, which was to be achieved by removing Jewish influences. Formulated as early as the end of the 18th century, this demand found its programmatic implementation in the founding, in 1939, of the Institut zur Erforschung und Beseitigung des jüdischen Einflusses auf das deutsche kirchliche Leben (Institute for the Study and Eradication of Jewish Influence on German Church Life).
Archaeology can be understood as a tool used in the process of identity formation, contributing to a sense of belonging and unity within a diverse set of communities. Research was conducted with the intention of analyzing the wide range of perceptions regarding archaeological sites in the mixed city of Lod, Israel. I explored the impact of urban cultural heritage on shaping the identity of local Jewish and Arab children, who were chosen as the youngest active members of society living in the city, and who participated in the 2013 archaeological excavation season at the Khan al-Hilu. Israel is an ideal location for such research because it is simultaneously the focus of extensive archaeological excavations and the setting of an intractable conflict. Ancestral attachment to the land serves as a foundation for the collective identity of both Jews and Arabs. Yet each community and individual may relate differently to the surrounding archaeological sites, a relation further shaped by their sense of societal hierarchy and cultural heritage.