With his novel Die Vermessung der Welt (Measuring the World), Daniel Kehlmann claims to make "concealed or overlooked truths visible". By deliberately leaving readers in doubt about what is historically documented and what is invented, he creates misunderstandings. This article analyses the key characteristics of Kehlmann's Alexander von Humboldt and of the historical figure, and demonstrates that the two do not match. The essay asks about the role of Kehlmann, who likes to strike the pose of a scholar in public yet does not disclose his inventions and has even published one of them as a supposed Humboldt quotation in a scholarly text. The article concludes that anyone hoping to improve their general knowledge has come to the wrong address with Die Vermessung der Welt.
"O du fröhliche ..."
(2012)
For a long time, the term "Kindjees" (Child Jesus) referred to Father Christmas in rural areas of northern Germany. Peters here recalls the Christmas customs of his childhood and at the same time describes his own development from a boy who blindly trusted the power of the Kindjees to a fourteen-year-old admirer of Friedrich Schiller who himself takes on the role of the Kindjees for the younger children. The beauty of the hymns by Gerhard Tersteegen sung at school and the children's fascination with the Neuruppiner Bilderbogen picture sheets (Low German: Lex) contribute greatly to the fairy-tale mood in which the Christmas preparations take place in Luhnstedt, the poet's home village: "Before Christmas, it mattered above all that everything, even the smallest thing, should return in its familiar form, that tradition be created. The periodic, strictly uniform recurrence of outward events is a source of poetry."
"Staken und Bretter"
(2012)
A story developed from the "Baasdörper Krönk" (published posthumously in 1975), which F. E. Peters brought out in High German. It tells of the love story, but also the small power struggle, between a widowed farmer with four children and his housekeeper, who for the taste of her prospective husband attaches too much importance to appearances. "Staken und Bretter" (stakes and boards) serve to increase the capacity of a hay wagon; in Baasdorf the expression is used figuratively for signs of an inflated need for status. The episode from the "Baasdörper Krönk" that belongs to this story is famous for Johann-Detlef's remark: "Dar fohrt Puls [here: Thun] mit Tante längs." ("There goes Puls driving along with Tante.") (Krönk, p. 67; Staken und Bretter, p. 8). In the "Krönk" the narrator follows this with sociolinguistically interesting remarks on the four reasons why Johann-Detlef's quip, which parodies the housekeeper's affectation, becomes a source of laughter in the village. By Baasdorf standards it should be "föhrt" rather than "fahrt", and "lang" rather than "längs". To make matters worse, the housekeeper has the children call her "Tante" (aunt), "wat ok dumm Tüüch weer" (which was silly nonsense too), and finally she refers to Hansjörn Puls (here: Ehler Thun) by his surname. With his mockery Johann-Detlef thus rallies the village against the housekeeper, for in Baasdorf the use of High German counts as a sign of snootiness: "So'n Lö, de – mit een Woort geseggt – ümmer höger schieten wüllt, as se den Mors hebbt, de nehmt wi erst mal en betjen in'e Maak; de mööt erstmal wat ümlehren." (Krönk, p. 67). In the "Krönk" the marital struggle over "Staken und Bretter", which continues later, ends differently and less harmoniously than in the story.
"Ulenspegel un Jan Dood"
(2012)
A characterization of Moritz Jahn's literary work and of his significance for Low German. Following Moritz Jahn's humour, F. E. Peters describes "polar tensions" in northern Germany: "There are North Germans who never quite trust their own country, who live at home in a constant faint unease and who breathe a sigh of relief when, in mountainous country, they feel bare rock beneath their feet. Then it becomes clear what they had until then laboriously concealed from themselves: that in a secret corner of their soul they believe their land is undermined by the sea, could break through at particularly thin places, be torn loose from the solid core of the earth and drift out into the mists of infinity. On rocky ground they will then enjoy their new and comforting security with great pleasure until an irresistible homesickness drives them back into uncertainty."
The potential increase in the frequency and magnitude of extreme floods is currently discussed in the context of global warming and an intensification of the hydrological cycle. Profound knowledge of the past natural variability of floods is of utmost importance for assessing future flood risk. Since instrumental flood series cover only the last ~150 years, other approaches are needed to reconstruct historical and pre-historical flood events. Annually laminated (varved) lake sediments are valuable natural geoarchives because they provide continuous records of environmental change over more than 10,000 years at up to seasonal resolution. Since lake basins additionally act as natural sediment traps, the riverine sediment supply, preserved as detrital event layers in the lake sediments, can be used as a proxy for extreme discharge events. Within my thesis I examined a ~8.50 m long sedimentary record from the pre-Alpine Lake Mondsee (Northeast European Alps), covering the last 7000 years. The record consists of calcite varves and intercalated detrital layers ranging in thickness from 0.05 to 32 mm. Detrital layer deposition was analysed by a combination of microfacies analysis on thin sections, scanning electron microscopy (SEM), micro X-ray fluorescence (µXRF) scanning and magnetic susceptibility measurements. This approach allows individual detrital event layers to be characterized and assigned to a corresponding input mechanism and source area. The chronology rests on varve counting and is controlled by 14C age dates. The main goals of this thesis are (i) to identify the seasonal runoff processes that lead to significant sediment supply from the catchment into the lake basin and (ii) to investigate flood frequency under changing climatic boundary conditions. The thesis proceeds through a series of time slices and presents an integrative approach linking instrumental and historical flood data in order to evaluate the flood record inferred from the Lake Mondsee sediments.
The investigation of eleven short cores covering the last 100 years reveals 12 detrital layers, of which two types are distinguished by grain size, geochemical composition and distribution pattern within the lake basin. Layers enriched in siliciclastic and dolomitic material record sediment supply from the Flysch sediments and the Northern Calcareous Alps; they are thicker in the northern lake basin (0.1-3.9 mm) and thinner in the southern lake basin (0.05-1.6 mm). Layers enriched in dolomitic components and graded (turbidites) indicate provenance from the Northern Calcareous Alps; they are generally thicker (0.65-32 mm) and are recorded solely in the southern lake basin. Comparison with instrumental data shows that the thicker graded layers result from local debris-flow events in summer, whereas the thin layers are deposited during regional flood events in spring and summer. Extreme summer floods recorded by flood layer deposition are principally caused by cyclonic activity from the Mediterranean Sea, e.g. in July 1954, July 1997 and August 2002. During the last two millennia, the Lake Mondsee sediments reveal two significant intervals of decadal-scale flood episodes, during the Dark Ages Cold Period (DACP) and at the transition from the Medieval Climate Anomaly (MCA) into the Little Ice Age (LIA), suggesting a link between transitions towards cooler climate and summer flood recurrence in the Northeastern Alps. In contrast, intermediate or reduced flood activity characterizes the MCA proper and the LIA. This points to a non-straightforward relationship between temperature and flood recurrence and suggests enhanced cyclonic activity during climatic transitions in the Northeastern Alps.
The 7000-year flood chronology reveals 47 debris flows and 269 floods, with shifts towards increased flood activity around 3500 and 1500 varve yr BP (varve years before present, where present = AD 1950). This significant increase in flood activity coincides with millennial-scale climate cooling reported from major Alpine glacier advances and lower tree lines in the European Alps since about 3300 cal. yr BP (calibrated years before present). Despite the relatively low flood occurrence prior to 1500 varve yr BP, floods at Lake Mondsee could also have influenced human life in the early Neolithic lake dwellings (5750-4750 cal. yr BP). While the first lake dwellings were constructed on wetlands, later ones were built on piles in the water, suggesting an early adaptation of humans to flood risk and/or a general, socio-economically driven change in the Late Neolithic culture of lake-dwellers. A direct relationship between the final abandonment of the lake dwellings and higher flood frequencies, however, is not evidenced.
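The varve yr BP convention defined above (present = AD 1950) amounts to a one-line conversion to calendar years. The sketch below illustrates it; the function name is mine, not the thesis's:

```python
def varve_bp_to_ad(varve_bp: int) -> int:
    """Convert varve years BP (BP datum = AD 1950, per the thesis convention)
    to a calendar year. Negative results denote years BC in astronomical
    numbering (no special handling of the missing year 0; a simplification)."""
    return 1950 - varve_bp

# The two shifts in flood activity reported in the text:
print(varve_bp_to_ad(3500))  # -1550, i.e. around 1550 BC
print(varve_bp_to_ad(1500))  # 450, i.e. around AD 450
```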
In the context of cosmological structure formation, sheets, filaments and eventually halos form due to gravitational instabilities. It is noteworthy that, at all times, the majority of the baryons in the universe reside not in the dense halos but in the filaments and sheets of the intergalactic medium. While at higher redshifts of z > 2 these baryons can be detected via the absorption of light (originating from more distant sources) by neutral hydrogen at temperatures of T ~ 10^4 K (the Lyman-alpha forest), at lower redshifts only about 20 % can be found in this state. The remainder (about 50 to 70 % of the total baryon mass) is unaccounted for by observational means. Numerical simulations predict that these missing baryons reside in the filaments and sheets of the cosmic web at high temperatures of T = 10^4.5 - 10^7 K but only at low to intermediate densities, constituting the warm-hot intergalactic medium (WHIM). The high temperatures of the WHIM are caused by the formation of shocks and the subsequent shock-heating of the gas. This results in a high degree of ionization and renders the reliable detection of the WHIM a challenging task. Recent high-resolution hydrodynamical simulations indicate that, at redshifts of z ~ 2, filaments are able to provide very massive galaxies with a significant amount of cool gas at temperatures of T ~ 10^4 K, which could have an important impact on star formation in those galaxies. It is therefore of principal importance to investigate the particular hydro- and thermodynamical conditions of these large filamentary structures. Density and temperature profiles as well as velocity fields are expected to leave their imprint on spectroscopic observations, and a potential multiphase structure may act as a tracer in observational studies of the WHIM. In the context of cold streams, it is important to explore the processes that regulate the amount of gas transported by the streams.
This includes the time evolution of filaments as well as possible quenching mechanisms; in this context, the halo mass range in which cold-stream accretion occurs is of particular interest. In order to address these questions, we perform dedicated hydrodynamical simulations of very high resolution and investigate the formation and evolution of prototype structures representing the typical filaments and sheets of the WHIM. We start with a comprehensive study of the one-dimensional collapse of a sinusoidal density perturbation (pancake formation) and examine the influence of radiative cooling, heating by a UV background, thermal conduction, and the effect of small-scale perturbations given by the cosmological power spectrum. We use a set of simulations parametrized by the wavelength L of the initial perturbation. For L ~ 2 Mpc/h the collapse leads to shock-confined structures. As a result of radiative cooling and of heating by the UV background, a relatively cold and dense core forms. With increasing L the core becomes denser and more concentrated. Thermal conduction enhances this trend and may lead to evaporation of the core at very large L ~ 30 Mpc/h. When extending our simulations to three dimensions, we obtain, instead of a pancake structure, a configuration consisting of well-defined sheets, filaments, and a gaseous halo. For L > 4 Mpc/h filaments form that are fully confined by an accretion shock. As with the one-dimensional pancakes, they exhibit an isothermal core. Our results thus confirm a multiphase structure that may generate particular spectral tracers. We find that, after its formation, the core becomes shielded against further infall of gas onto the filament, and its mass content decreases with time. In the vicinity of the halo, the filament's core can be identified with the cold streams found in other studies. We show that the basic structure of these cold streams exists from the very beginning of the collapse process.
Furthermore, the cross-section of the streams is constricted by the outward-moving accretion shock of the halo. Thermal conduction leads to a complete evaporation of the cold stream for L > 6 Mpc/h. This corresponds to halos with a total mass above M_halo = 10^13 M_sun and predicts that in more massive halos star formation cannot be sustained by cold streams. Far away from the gaseous halo, the temperature gradients in the filament are not strong enough for thermal conduction to be effective.
Background
High blood glucose and diabetes are amongst the conditions causing the greatest losses in years of healthy life worldwide. Numerous studies therefore aim to identify reliable risk markers for the development of impaired glucose metabolism and type 2 diabetes. However, the molecular basis of impaired glucose metabolism is so far insufficiently understood. The development of so-called 'omics' approaches in recent years promises to identify molecular markers and to further our understanding of the molecular basis of impaired glucose metabolism and type 2 diabetes. Although univariate statistical approaches are often applied, we demonstrate here that multivariate statistical approaches are highly recommended to fully capture the complexity of data gained by high-throughput methods.
Methods
We took blood plasma samples from 172 subjects who participated in the prospective Metabolic Syndrome Berlin Potsdam follow-up study (MESY-BEPO Follow-up). We analysed these samples using gas chromatography coupled with mass spectrometry (GC-MS) and measured 286 metabolites. Furthermore, fasting glucose levels were measured using standard methods at baseline and after an average of six years. We performed correlation analyses and built linear regression models as well as Random Forest regression models to identify metabolites that predict the development of fasting glucose in our cohort.
Results
We found a metabolic pattern consisting of nine metabolites that predicted fasting glucose development with an accuracy of 0.47 in tenfold cross-validation using Random Forest regression. We also showed that adding established risk markers did not improve the model accuracy. However, external validation remains desirable. Although not all metabolites belonging to the final pattern have been identified yet, the pattern points to amino acid metabolism, energy metabolism and redox homeostasis.
Conclusions
We demonstrate that metabolites identified using a high-throughput method (GC-MS) perform well in predicting the development of fasting plasma glucose over several years. Notably, it is not a single metabolite but a complex pattern of metabolites that drives the prediction, reflecting the complexity of the underlying molecular mechanisms. This result could only be captured by the application of multivariate statistical approaches. We therefore strongly recommend the use of statistical methods that capture the complexity of the information provided by high-throughput methods.
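The evaluation scheme described in the Methods (tenfold cross-validated Random Forest regression) can be sketched as follows. The data here are simulated with the dimensions given in the abstract (172 subjects, 286 metabolites); the simulated outcome, the hyperparameters and all variable names are assumptions for illustration, not the study's actual setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Simulated stand-in for the metabolite matrix: 172 subjects x 286 metabolites.
X = rng.normal(size=(172, 286))
# Simulated outcome driven by a small pattern of nine columns, loosely echoing
# the nine-metabolite pattern reported in the results (purely illustrative).
y = X[:, :9].sum(axis=1) + rng.normal(scale=1.0, size=172)

model = RandomForestRegressor(n_estimators=50, random_state=0)
# Tenfold cross-validated R^2, analogous to the accuracy figure reported above.
scores = cross_val_score(model, X, y, cv=10, scoring="r2")
print(scores.mean())
```

Random Forest is attractive here because it handles many correlated predictors (a hallmark of metabolomics data) without explicit variable pre-selection.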
The Riemann hypothesis is equivalent to the statement that the reciprocal function 1/zeta(s) extends from the interval (1/2, 1) to an analytic function on the quarter-strip 1/2 < Re s < 1, Im s > 0. Function theory allows one to rewrite this condition of analytic continuability in an elegant form amenable to numerical experiments.
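A minimal numerical probe in the spirit of such experiments, using the standard alternating-series identity zeta(s) = eta(s) / (1 - 2^(1-s)) (valid for Re s > 0, s != 1); this identity and the sample points are textbook material, not taken from the abstract:

```python
def zeta(s, terms=100_000):
    """Riemann zeta via the alternating Dirichlet eta series,
    zeta(s) = eta(s) / (1 - 2**(1 - s)), valid for Re s > 0, s != 1.
    Plain term-by-term summation: modest accuracy, enough for a probe."""
    eta = sum((-1) ** (n + 1) * n ** (-s) for n in range(1, terms + 1))
    return eta / (1 - 2 ** (1 - s))

# Sample 1/zeta(s) at points of the quarter-strip 1/2 < Re s < 1, Im s > 0.
# Under the Riemann hypothesis zeta has no zeros there, so 1/zeta(s) stays
# finite at every such point.
for s in (0.6 + 5j, 0.75 + 14j, 0.9 + 30j):
    print(s, 1 / zeta(s))
```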
Abschied von KyotoPlus?
(2012)
The results of the Copenhagen climate summit are a bitter disappointment for the EU. The Union did not manage to live up to its leadership ambitions in global climate protection and to use the conference to set the course for a legally binding climate agreement after 2012. It therefore faces fundamental strategic questions about the direction of its climate policy.
The field of machine learning studies algorithms that infer predictive models from data. Predictive models are applicable for many practical tasks such as spam filtering, face and handwritten digit recognition, and personalized product recommendation. In general, they are used to predict a target label for a given data instance. In order to make an informed decision about the deployment of a predictive model, it is crucial to know the model’s approximate performance. To evaluate performance, a set of labeled test instances is required that is drawn from the distribution the model will be exposed to at application time. In many practical scenarios, unlabeled test instances are readily available, but the process of labeling them can be a time- and cost-intensive task and may involve a human expert. This thesis addresses the problem of evaluating a given predictive model accurately with minimal labeling effort. We study an active model evaluation process that selects certain instances of the data according to an instrumental sampling distribution and queries their labels. We derive sampling distributions that minimize estimation error with respect to different performance measures such as error rate, mean squared error, and F-measures. An analysis of the distribution that governs the estimator leads to confidence intervals, which indicate how precise the error estimation is. Labeling costs may vary across different instances depending on certain characteristics of the data. For instance, documents differ in their length, comprehensibility, and technical requirements; these attributes affect the time a human labeler needs to judge relevance or to assign topics. To address this, the sampling distribution is extended to incorporate instance-specific costs. We empirically study conditions under which the active evaluation processes are more accurate than a standard estimate that draws equally many instances from the test distribution. 
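The active evaluation process sketched above boils down to importance-weighted risk estimation: label instances drawn from an instrumental distribution q and reweight each loss by p(x)/q(x) so the estimate remains unbiased for the risk under the test distribution p. The following generic sketch illustrates only this core idea; the function and its arguments are mine, and the thesis's actual contribution is deriving the optimal q for each performance measure:

```python
import random

def active_risk_estimate(pool, p, q, labeled_loss, n_labels, seed=0):
    """Importance-weighted risk estimate. `pool` is the set of unlabeled test
    instances, p(x) and q(x) give test and instrumental probabilities, and
    labeled_loss(x) stands for querying the label of x and computing its loss.
    Sampling from q and reweighting by p/q keeps the estimate unbiased for
    the risk under p."""
    rng = random.Random(seed)
    sample = rng.choices(pool, weights=[q(x) for x in pool], k=n_labels)
    return sum(p(x) / q(x) * labeled_loss(x) for x in sample) / n_labels

# Toy demo: with uniform p and q this reduces to a plain Monte Carlo estimate.
pool = list(range(10))
print(active_risk_estimate(pool, lambda x: 0.1, lambda x: 0.1,
                           lambda x: float(x % 2), 1000))
```

Choosing q to over-sample instances with large expected loss (and down-weighting them accordingly) lowers the variance of the estimate for a fixed labeling budget, which is the sense in which the active process beats uniform sampling.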
We also address the problem of comparing the risks of two predictive models. The standard approach would be to draw instances according to the test distribution, label the selected instances, and apply statistical tests to identify significant differences. Drawing instances according to an instrumental distribution affects the power of a statistical test. We derive a sampling procedure that maximizes test power when used to select instances, and thereby minimizes the likelihood of choosing the inferior model. Furthermore, we investigate the task of comparing several alternative models; the objective of an evaluation could be to rank the models according to the risk that they incur or to identify the model with lowest risk. An experimental study shows that the active procedure leads to higher test power than the standard test in many application domains. Finally, we study the problem of evaluating the performance of ranking functions, which are used for example for web search. In practice, ranking performance is estimated by applying a given ranking model to a representative set of test queries and manually assessing the relevance of all retrieved items for each query. We apply the concepts of active evaluation and active comparison to ranking functions and derive optimal sampling distributions for the commonly used performance measures Discounted Cumulative Gain and Expected Reciprocal Rank. Experiments on web search engine data illustrate significant reductions in labeling costs.
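For concreteness, Discounted Cumulative Gain, one of the two ranking measures named above, can be computed as follows. This is the common exponential-gain variant; the thesis may use a slightly different gain or discount:

```python
import math

def dcg(relevances, k=None):
    """Discounted Cumulative Gain: sum of (2**rel - 1) / log2(rank + 1) over
    the ranked list, with ranks starting at 1 (hence log2(rank + 2) for a
    zero-based index). Truncates to the top k results if k is given."""
    rels = relevances if k is None else relevances[:k]
    return sum((2 ** rel - 1) / math.log2(rank + 2)
               for rank, rel in enumerate(rels))

# Graded relevance judgments of retrieved items, in ranked order.
print(dcg([3, 2, 0, 1]))
```

Because the discount shrinks with rank, a ranking that places highly relevant items first scores strictly higher, which is exactly the property an evaluation of ranking functions must estimate from labeled queries.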