In data processing and storage, mainly ferromagnetic (FM) materials are used. As physical limits are approached, new concepts have to be found for faster and smaller switches, higher data densities, and greater energy efficiency. Some of the new concepts under discussion involve the material classes of correlated oxides and materials with antiferromagnetic coupling. Their applicability depends critically on their switching behavior, i.e., how fast and how energy-efficiently material properties can be manipulated. This thesis presents investigations of ultrafast non-equilibrium phase transitions in such new materials. In transition metal oxides (TMOs), the coupling of different degrees of freedom and the resulting low-energy excitation spectrum often lead to spectacular changes of macroscopic properties (colossal magnetoresistance, superconductivity, metal-to-insulator transitions), often accompanied by nanoscale order of spins, charges and orbital occupation and by lattice distortions, which make these materials attractive. Magnetite served as a prototype for functional TMOs showing a metal-to-insulator transition (MIT) at T = 123 K. By probing the charge and orbital order as well as the structure after an optical excitation, we found that the electronic order and the structural distortion, characteristics of the insulating phase in thermal equilibrium, are destroyed within the experimental resolution of 300 fs. The MIT itself occurs on a 1.5 ps timescale. This shows that MITs in functional materials are several thousand times faster than switching processes in semiconductors. Recently, ferrimagnetic and antiferromagnetic (AFM) materials have attracted interest. It was shown in ferrimagnetic GdFeCo that the transfer of angular momentum between two opposed FM subsystems with different time constants leads to a switching of the magnetization after laser-pulse excitation. In addition, it was theoretically predicted that demagnetization dynamics in AFM materials should be faster than in FM materials, as no net angular momentum has to be transferred out of the spin system. We investigated two different AFM materials in order to learn more about their ultrafast dynamics. In Ho, a metallic AFM below T ≈ 130 K, we found that the AFM order can be destroyed not only faster but also ten times more energy-efficiently than the order in comparable FM metals. In EuTe, an AFM semiconductor below T ≈ 10 K, we compared the loss of magnetization and the laser-induced structural distortion in one and the same experiment. Our experiment shows that they are effectively disentangled. An exception is an ultrafast release of lattice dynamics, which we assign to the release of magnetostriction. The results presented here were obtained with time-resolved resonant soft x-ray diffraction at the Femtoslicing source of the Helmholtz-Zentrum Berlin and at the free-electron laser in Stanford (LCLS). In addition, the development and setup of a new UHV diffractometer for these experiments is reported.
A new concept for shortening hard X-ray pulses emitted from a third-generation synchrotron source down to a few picoseconds is presented. The device, called the PicoSwitch, exploits the dynamics of coherent acoustic phonons in a photo-excited thin film. A characterization of the structure demonstrates switching times of ≤ 5 ps and a peak reflectivity of ~10^-3. The device is tested in a real synchrotron-based pump-probe experiment and reveals features of coherent phonon propagation in a second thin film sample, thus demonstrating the potential to significantly improve the temporal resolution at existing synchrotron facilities.
Ultraschall Berlin
(2014)
Scientific inquiry requires that we formulate not only what we know, but also what we do not know and by how much. In climate data analysis, this involves an accurate specification of measured quantities and a consequent analysis that consciously propagates the measurement errors at each step. The dissertation presents a thorough analytical method to quantify errors of measurement inherent in paleoclimate data. An additional focus is the uncertainty in assessing the coupling between different factors that influence the global mean temperature (GMT).
Paleoclimate studies critically rely on `proxy variables' that record climatic signals in natural archives. However, such proxy records inherently involve uncertainties in determining the age of the signal. We present a generic Bayesian approach to analytically determine the proxy record along with its associated uncertainty, resulting in a time-ordered sequence of correlated probability distributions rather than a precise time series. We further develop a recurrence-based method to detect dynamical events from the proxy probability distributions. The methods are validated with synthetic examples and demonstrated with real-world proxy records. The proxy estimation step reveals the interrelations between proxy variability and uncertainty. The recurrence analysis of the East Asian Summer Monsoon during the last 9000 years confirms the well-known `dry' events at 8200 and 4400 BP, plus an additional significantly dry event at 6900 BP.
We also analyze the network of dependencies surrounding GMT. We find an intricate, directed network with multiple links between the different factors at multiple time delays. We further uncover a significant feedback from the GMT to the El Niño Southern Oscillation at quasi-biennial timescales. The analysis highlights the need for a more nuanced formulation of the influences between different climatic factors, as well as the limitations of trying to estimate such dependencies.
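As a rough illustration of the general idea of carrying dating uncertainty into the proxy record (a minimal Monte Carlo sketch, not the Bayesian method developed in the dissertation; all depths, proxy values and Gaussian age errors below are hypothetical):

import numpy as np

# Hypothetical proxy record: values measured at 20 core depths,
# each with a Gaussian dating uncertainty (ages in years BP).
rng = np.random.default_rng(0)
proxy = np.sin(np.arange(20) / 3.0)
age_mean = np.sort(rng.uniform(0, 9000, 20))
age_sd = np.full(20, 150.0)

time_axis = np.linspace(age_mean.min(), age_mean.max(), 200)
n_draws = 2000
ensemble = np.empty((n_draws, time_axis.size))

for k in range(n_draws):
    # one age-model realization; sorting enforces stratigraphic order
    ages = np.sort(age_mean + age_sd * rng.standard_normal(20))
    ensemble[k] = np.interp(time_axis, ages, proxy)

# each column is an empirical probability distribution of the proxy at that time
median = np.median(ensemble, axis=0)
lower, upper = np.percentile(ensemble, [5, 95], axis=0)

Event-detection statistics can then be computed on the whole ensemble rather than on a single series, which is the spirit of the distribution-based recurrence analysis described above.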
The aim of the present thesis is to answer the question to what degree the processes involved in sentence comprehension are sensitive to task demands. A central phenomenon in this regard is the so-called ambiguity advantage, which is the finding that ambiguous sentences can be easier to process than unambiguous sentences. This finding may appear counterintuitive, because more meanings should be associated with a higher computational effort. Currently, two theories exist that can explain this finding.
The Unrestricted Race Model (URM) by van Gompel et al. (2001) assumes that several sentence interpretations are computed in parallel, whenever possible, and that the first interpretation to be computed is assigned to the sentence. Because the duration of each structure-building process varies from trial to trial, the parallelism in structure-building predicts that ambiguous sentences should be processed faster. This is because when two structures are permissible, the chances that some interpretation will be computed quickly are higher than when only one specific structure is permissible. Importantly, the URM is not sensitive to task demands such as the type of comprehension questions being asked.
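The ambiguity advantage predicted by the URM follows from the statistics of a race: the finishing time of the faster of two parallel processes is, on average, shorter than that of a single process. A toy simulation (arbitrary lognormal parameters, purely illustrative) makes this concrete:

import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# duration (ms) of building one particular structure on a given trial
single = rng.lognormal(mean=6.0, sigma=0.4, size=n)   # unambiguous sentence

# ambiguous sentence: two permissible structures race in parallel, the first one wins
race = np.minimum(rng.lognormal(6.0, 0.4, n), rng.lognormal(6.0, 0.4, n))

print(single.mean(), race.mean())   # the race finishes earlier on average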
A radically different proposal is the strategic underspecification model by Swets et al. (2008). It assumes that readers do not attempt to resolve ambiguities unless it is absolutely necessary; in other words, they underspecify. According to the strategic underspecification hypothesis, all attested replications of the ambiguity advantage are due to the fact that in those experiments, readers were not required to fully understand the sentence.
In this thesis, these two models of the parser’s actions at choice points in the sentence are presented and evaluated. First, it is argued that Swets et al.’s (2008) evidence against the URM and in favor of underspecification is inconclusive. Next, the precise predictions of the URM as well as the underspecification model are refined. Subsequently, a self-paced reading experiment involving the attachment of pre-nominal relative clauses in Turkish is presented, which provides evidence against strategic underspecification. A further experiment is presented which investigated relative clause attachment in German using the speed-accuracy tradeoff (SAT) paradigm. The experiment provides evidence against strategic underspecification and in favor of the URM. Furthermore, the results of the experiment are used to argue that human sentence comprehension is fallible, and that theories of parsing should be able to account for that fact. Finally, a third experiment is presented, which provides evidence for sensitivity to task demands in the treatment of ambiguities. Because this finding is incompatible with the URM, and because the strategic underspecification model has been ruled out, a new model of ambiguity resolution is proposed: the stochastic multiple-channel model of ambiguity resolution (SMCM). It is further shown that the quantitative predictions of the SMCM are in agreement with experimental data.
In conclusion, it is argued that the human sentence comprehension system is parallel and fallible, and that it is sensitive to task-demands.
Background: Agrammatic speakers have problems with grammatical encoding and decoding. However, not all syntactic processes are equally problematic: present time reference, who questions, and reflexives can be processed by narrow syntax alone and are relatively spared compared to past time reference, which questions, and personal pronouns, respectively. The latter need additional access to discourse and information structures to link to their referent outside the clause (Avrutin, 2006). Linguistic processing that requires discourse-linking is difficult for agrammatic individuals: verb morphology with reference to the past is more difficult than with reference to the present (Bastiaanse et al., 2011). The same holds for which questions compared to who questions and for pronouns compared to reflexives (Avrutin, 2006). These results have been reported independently for different populations in different languages. The current study, for the first time, tested all conditions within the same population.
Aims: We had two aims with the current study. First, we wanted to investigate whether discourse-linking is the common denominator of the deficits in time reference, wh questions, and object pronouns. Second, we aimed to compare the comprehension of discourse-linked elements in people with agrammatic and fluent aphasia.
Methods and procedures: Three sentence-picture-matching tasks were administered to 10 agrammatic, 10 fluent aphasic, and 10 non-brain-damaged Russian speakers (NBDs): (1) the Test for Assessing Reference of Time (TART) for present imperfective (reference to present) and past perfective (reference to past), (2) the Wh Extraction Assessment Tool (WHEAT) for which and who subject questions, and (3) the Reflexive-Pronoun Test (RePro) for reflexive and pronominal reference.
Outcomes and results: NBDs scored at ceiling and significantly higher than the aphasic participants. We found an overall effect of discourse-linking in the TART and WHEAT for the agrammatic speakers, and in all three tests for the fluent speakers. Scores on the RePro were at ceiling.
Conclusions: The results are in line with the prediction that the problems that individuals with agrammatic and fluent aphasia experience when comprehending sentences containing verbs with past time reference, which-question words, and pronouns are caused by the fact that these elements involve discourse linking. The effect is not specific to agrammatism, although it may result from different underlying disorders in agrammatic and fluent aphasia.
A detailed knowledge of cell wall heterogeneity and complexity is crucial for understanding plant growth and development. One key challenge is to establish links between polysaccharide-rich cell walls and their phenotypic characteristics. This is of particular interest for plant materials such as cotton fibers, which are of both biological and industrial importance. To this end, we set out to study cotton fiber characteristics together with glycan arrays using regression-based approaches. Taking advantage of the comprehensive microarray polymer profiling technique (CoMPP), 32 cotton lines from different cotton species were studied. The glycan array was generated by sequential extraction of cell wall polysaccharides from mature cotton fibers and screening of the samples against eleven extensively characterized cell wall probes. In addition, phenotypic characteristics of cotton fibers such as length, strength, elongation and micronaire were measured. The relationship between the two datasets was established in an integrative manner using linear regression methods. In the conducted analysis, we demonstrated the usefulness of regression-based approaches in establishing a relationship between glycan measurements and phenotypic traits. The analysis also identified specific polysaccharides which may play a major role during fiber development for the final fiber characteristics. Three different regression methods identified a negative correlation between micronaire and the xyloglucan and homogalacturonan probes. Moreover, homogalacturonan and callose were shown to be significant predictors of fiber length. The role of these polysaccharides has already been pointed out in previous cell wall elongation studies. Additional relationships were predicted for fiber strength and elongation, which will need further experimental validation.
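A minimal sketch of the kind of regression step described above, assuming a hypothetical 32-lines-by-11-probes signal matrix and one measured fiber trait (the study additionally compared several regression methods and extraction fractions):

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
glycan = rng.random((32, 11))   # relative signal of 11 cell wall probes per cotton line (placeholder)
micronaire = rng.random(32)     # placeholder trait values for the 32 lines

model = LinearRegression().fit(glycan, micronaire)
coefs = model.coef_             # sign and size hint at positive/negative associations per probe
r_squared = model.score(glycan, micronaire)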
Unfälle der Sprache
(2014)
The term "catastrophe" is booming in our everyday and media language. What is labelled a "catastrophe" in the succession of wars, assassinations, earthquakes, volcanic eruptions and tsunamis calls for a pointed analysis. In literary studies the expression is used to denote the terrible misfortune with which a tragedy ends. The "strophe" in it originally referred to the bodily turn with which the chorus in ancient tragedy accompanied its song, before something new begins …
Although to this day the Union treaties contain no provision on member-state liability for decisions of their courts, the Court of Justice of the European Union (CJEU) has developed and refined such liability in a series of decisions. The present work analyses this case law in depth, together with the multifaceted legal questions arising from it. The first chapter is devoted to the historical development of state liability under Union law in general, starting from the well-known Francovich judgment of 1991. The second chapter then presents the decisions in the Köbler and Traghetti cases, which are fundamental to liability for judicial wrongs. The third chapter examines the legal character of state liability under Union law, including the question of whether the Union-law claim is subsidiary to existing national state-liability claims. The fourth chapter addresses whether state liability under Union law for judicial wrongs should be recognised in principle, discussing and evaluating in detail the main arguments for and against such liability. The fifth chapter discusses in detail the problems connected with the Union-law conditions of liability for decisions of courts of last instance. It also examines whether liability for erroneous decisions of lower courts should be endorsed. The sixth chapter deals with how the member states implement Union-law state liability for decisions of courts of last instance, including a discussion of the applicability of the German liability privileges for judicial wrongs to the Union-law state-liability claim. The final chapter examines whether the CJEU had the competence to create state liability for decisions of courts of last instance at all. The work concludes by presenting its most important findings and by giving an outlook on further possible effects and developments of Union-law state liability for judicial wrongs.
A view that homogenizes the theories of language of the seventeenth and eighteenth centuries can grasp the reality of the conceptions of language embraced in this period only in broad outline. The recognition of a Cartesian theory of language, like the undifferentiated explanation of the developments following the transition from rationalist views to sense-oriented conceptions, is the result of such homogenization, a process that captures reality only in part.
Linguistic thought was marked by a mixture of narrative and conceptual-rational forms of reflection that complemented each other. While the conceptual approach tried to identify the fundamental properties of language and order them rationally, the narrative forms of linguistic reflection did not address language as a conceptual object; rather, they represented it as an object to be understood. Narrative and conceptual approaches to language entail discursive differences in linguistic-theoretical positions. The mould of linguistic-theoretical thought also contributes, through different traditions, to the multiplicity of linguistic-theoretical views. By traditions we mean dominant positions in metalinguistic reflection, present in regional contexts, which may differ from other traditions. In any case, the delayed development or reception of linguistic theories can also lead to characteristic differences. The linguistic theories of the Enlightenment, for example, were received in Spain later than in other European countries, which led to the synchronic acceptance of theoretical elements belonging to different, successive theories. If one turns one's attention beyond Europe, one is drawn to the analogous approaches to linguistic reflection that developed in China at the beginning of the twentieth century.
Unity and diversity can, however, be traced not only at the level of metalinguistic knowledge but also at the level of the object. A challenge for language description oriented towards the Greco-Latin tradition was posed by the indigenous languages with which contact was beginning to be made through the voyages of discovery and, later, with the onset of colonialism. Alongside the exogenous communication of the metalinguistic transmission of relations within the European languages, there are also approaches towards perceiving the categorial specificity of the American languages. Although in some cases the correct categorizations for the languages described were not recognized, it was at least established that the categories made known by Latin grammar do not hold.
In the research of recent decades, the picture of a paradigm of seventeenth- and eighteenth-century philosophy of language that universally subordinates the multiplicity of languages to valid structures of thought, and that prescribes for linguistic reflection the fixed categories of a general grammar strictly oriented towards rationalist logic, has repeatedly been put into perspective. Connected with the groundedness of the unity and the immutability of humankind across space and time, the thesis that languages in their manifold nature can exist only in dependence on a universal structure of thought could be counted among the paradigmatic positions existing within the philosophy of language of that time. Through the knowledge of the historical origin of the evolution of man, of all his ways of life and forms of communication, another paradigmatic position gains prominence, one that attributes to language a formative influence on thought.
Through the ideological-philosophical differentiation and the national specificity of its theses on language in general and on the historical languages in particular, the secularized view of man and society elaborated at the height of the Enlightenment was associated with a corresponding development and change of linguistic-theoretical positions. With the proclamation of language and thought as the results of a long, mutually corresponding development in the history of humankind, new value is assigned to positions on the nature and origin of language.
Unstetige Galerkin-Diskretisierung niedriger Ordnung in einem atmosphärischen Multiskalenmodell
(2014)
The dynamics of the Earth's atmosphere spans a range from microphysical turbulence through convective processes and cloud formation up to planetary wave patterns. For weather forecasting and for studying the climate over decades and centuries, it is the subject of modelling with numerical methods. With advancing computer technology, new developments of the dynamical cores of climate models are necessary, which, with ever finer resolution, can also resolve the corresponding processes. The dynamical core of a model consists in the implementation (discretization) of the fundamental dynamical equations for the evolution of mass, energy and momentum, so that they can be solved numerically on computers. The present work investigates the suitability of a low-order discontinuous Galerkin method for atmospheric applications. This suitability is not self-evident from theory for equations with the effects of external forces such as gravity and the Coriolis force. Necessary adaptations are described which stabilize the method without using so-called "slope limiters". For the unmodified method it is shown that it is not suited to representing atmospheric equilibria in a stable way. The stabilized model developed here reproduces a series of standard test cases of atmospheric dynamics with the Euler and shallow-water equations over a wide range of spatial and temporal scales. Solving the thermal wind equation along its characteristic curves, which coincide with the isobars, yields atmospheric equilibrium states whose susceptibility to (barotropic and baroclinic) instabilities, which are essential for the development of cyclones, can be adjusted through a prescribed background flow. In contrast to earlier work, these states are defined directly in the z-system (height in metres) and do not have to be transferred from pressure coordinates. With these states, both as reference states, of which only the deviations are treated numerically, and in particular as initial states subjected to a small perturbation, various simulation studies of barotropic and baroclinic instability are carried out. Worth highlighting is the simulation-based study, made possible by the formulation of background flows with adjustable baroclinicity, of the degree of baroclinic instability of different wavelengths as a function of static stability and vertical wind shear, corresponding to stability maps from theoretical considerations in the literature.
Inferring the internal interaction patterns of a complex dynamical system is a challenging problem. Traditional methods often rely on examining the correlations among the dynamical units. However, in systems such as transcription networks, one unit's variable is also correlated with the rate of change of another unit's variable. Inspired by this, we introduce the concept of derivative-variable correlation and use it to design a new method of reconstructing complex systems (networks) from dynamical time series. Using a tunable observable as a parameter, the reconstruction of any system with known interaction functions is formulated via a simple matrix equation. We suggest a procedure aimed at optimizing the reconstruction from time series whose length is comparable to the characteristic dynamical time scale. Our method also provides a reliable precision estimate. We illustrate the method's implementation via elementary dynamical models, and demonstrate its robustness to both model error and observation error.
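For a linear special case, the derivative-variable correlation idea reduces to a single matrix equation: the correlation of the derivatives with the variables equals the coupling matrix times the covariance of the variables. The sketch below (hypothetical coupling matrix and noise levels, not the paper's general formulation with tunable observables) recovers the coupling matrix from a simulated time series:

import numpy as np

rng = np.random.default_rng(3)
A_true = rng.normal(0.0, 1.0, (5, 5)) - 5.0 * np.eye(5)   # stable toy coupling matrix
dt, steps = 0.01, 20000
x = np.zeros(5)
X, dX = [], []
for _ in range(steps):
    dx = A_true @ x + rng.normal(0.0, 0.5, 5)              # dx/dt = A x + noise
    X.append(x.copy())
    dX.append(dx)
    x = x + dt * dx
X, dX = np.asarray(X), np.asarray(dX)

# derivative-variable and variable-variable correlation matrices
C_dxx = dX.T @ X / steps
C_xx = X.T @ X / steps
A_est = C_dxx @ np.linalg.inv(C_xx)                        # approximates A_true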
In this work, rigid oligospiroketal (OSK) rods were successfully used as basic building blocks for complex 2D and 3D systems. To this end, a difunctionalized rigid rod was synthesized and employed in azide-alkyne click reactions with rods of its own kind and with other branched functionalization units. For two OSK rods connected via a click reaction, theoretical calculations provided insight into the novel bimodality of the conformation. The term "hinged rod" (Gelenkstab) was introduced for this, since the molecules, rotated about a hinge, can exist in both an extended and a bent form. Building on these findings, it could be shown not only that large polymers of up to four OSK rods can be synthesized in a targeted way, but also that, by deliberately changing the reaction conditions of the click reaction, macrocycles of rigid OSK rods can be produced. The newly developed substance class of hinged rods was investigated with regard to controlling the existing equilibrium between the bent and the extended hinged rod. For this purpose the hinged rod was furnished with pyrenyl residues in the terminal position. Fluorescence measurements showed that the equilibrium can be influenced, for example, by temperature or by the choice of solvent. For broader applications, a simplified synthesis strategy was found with which arbitrary functionalization could be achieved in only one synthetic step. Photoactive hinged rods could be synthesized and driven specifically to intramolecular dimerization. In addition, amino acids provided a linking element at the end of the hinged rods that permits a stereoselective synthesis of multiple functionalizations. The synthesis of complex hinged rods was demonstrated as a novel field and offers broad research potential for further applications, e.g., in biology (as molecular switches for ion transport) and in materials chemistry (as charge or energy transporters).
Untersuchungen zur pro-inflammatorischen Wirkung von Serum-Amyloid A in glatten Gefäßmuskelzellen
(2014)
The Epoch of Reionization marks the second major change in the ionization state of the universe after recombination, from a neutral to an ionized state. It starts with the appearance of the first stars and galaxies; a fraction of the high-energy photons emitted from galaxies permeate into the intergalactic medium (IGM) and gradually ionize the hydrogen, until the IGM is completely ionized at z~6 (Fan et al., 2006). While the progress of reionization is driven by galaxy evolution, it changes the ionization and thermal state of the IGM substantially and affects subsequent structure and galaxy formation by various feedback mechanisms.
Understanding this interaction between reionization and galaxy formation is further impeded by a lack of understanding of the high-redshift galactic properties such as the dust distribution and the escape fraction of ionizing photons. Lyman Alpha Emitters (LAEs) represent a sample of high-redshift galaxies that are sensitive to all these galactic properties and the effects of reionization.
In this thesis we aim to understand the progress of reionization by performing cosmological simulations, which allow us to investigate the limits of constraining reionization with high-redshift galaxies such as LAEs, and to examine how galactic properties and the ionization state of the IGM affect the visibility and observed quantities of LAEs and Lyman Break Galaxies (LBGs).
In the first part of this thesis we focus on performing radiative transfer calculations to simulate reionization. We have developed a mapping-sphere-scheme, which, starting from spherically averaged temperature and density fields, uses our 1D radiative transfer code and computes the effect of each source on the IGM temperature and ionization (HII, HeII, HeIII) profiles, which are subsequently mapped onto a grid. Furthermore we have updated the 3D Monte-Carlo radiative transfer pCRASH, enabling detailed reionization simulations which take individual source characteristics into account.
In the second part of this thesis we perform a reionization simulation by post-processing a smoothed-particle hydrodynamical (SPH) simulation (GADGET-2) with 3D radiative transfer (pCRASH), where the ionizing sources are modelled according to the characteristics of the stellar populations in the hydrodynamical simulation. Following the ionization fractions of hydrogen (HI) and helium (HeII, HeIII), and temperature in our simulation, we find that reionization starts at z~11 and ends at z~6, and high density regions near sources are ionized earlier than low density regions far from sources.
In the third part of this thesis we couple the cosmological SPH simulation and the radiative transfer simulations with a physically motivated, self-consistent model for LAEs, in order to understand the importance of the ionization state of the IGM, the escape fraction of ionizing photons from galaxies, and dust in the interstellar medium (ISM) for the visibility of LAEs. Comparison of our model results with the LAE Lyman Alpha (Lya) and UV luminosity functions at z~6.6 reveals a three-dimensional degeneracy between the ionization state of the IGM, the ionizing photon escape fraction and the ISM dust distribution, which implies that LAEs act not only as tracers of reionization but also of the ionizing photon escape fraction and of the ISM dust distribution. This degeneracy does not break down even when we compare simulated with observed clustering of LAEs at z~6.6. However, our results show that reionization has the largest impact on the amplitude of the LAE angular correlation functions, and its imprints are clearly distinguishable from those of properties on galactic scales. These results show that reionization cannot be constrained tightly by exclusively using LAE observations. Further observational constraints, e.g. tomographies of the redshifted hydrogen 21cm line, are required.
In addition we also use our LAE model to probe the question when a galaxy is visible as a LAE or a LBG. Within our model galaxies above a critical stellar mass can produce enough luminosity to be visible as a LBG and/or a LAE. By finding an increasing duty cycle of LBGs with Lya emission as the UV magnitude or stellar mass of the galaxy rises, our model reveals that the brightest (and most massive) LBGs most often show Lya emission.
Predicting the Lya equivalent width (Lya EW) distribution and the fraction of LBGs showing Lya emission at z~6.6, we reproduce the observational trend of the Lya EWs with UV magnitude. However, the Lya EWs of the UV brightest LBGs exceed observations and can only be reconciled by accounting for an increased Lya attenuation of massive galaxies, which implies that the observed Lya brightest LAEs do not necessarily coincide with the UV brightest galaxies. We have analysed the dependencies of LAE observables on the properties of the galactic and intergalactic medium and the LAE-LBG connection, and this enhances our understanding of the nature of LAEs.
Herein, we report the use of upconversion agents to modify graphitic carbon nitride (g-C3N4) by direct thermal condensation of a mixture of ErCl3·6H2O and the supramolecular precursor cyanuric acid-melamine. We show the enhancement of g-C3N4 photoactivity after Er3+ doping by monitoring the photodegradation of Rhodamine B dye under visible light. The contribution of the upconversion agent is demonstrated by measurements using only a red laser. The Er3+ doping alters both the electronic and the chemical properties of g-C3N4. The Er3+ doping reduces emission intensity and lifetime, indicating the formation of new, nonradiative deactivation pathways, probably involving charge-transfer processes.
Inorganic arsenicals are environmental toxins that have been connected with neuropathies and impaired cognitive functions. To investigate whether such substances accumulate in brain astrocytes and affect their viability and glutathione metabolism, we have exposed cultured primary astrocytes to arsenite or arsenate. Both arsenicals compromised the cell viability of astrocytes in a time- and concentration-dependent manner. However, the early onset of cell toxicity in arsenite-treated astrocytes revealed the higher toxic potential of arsenite compared with arsenate. The concentrations of arsenite and arsenate that caused within 24 h half-maximal release of the cytosolic enzyme lactate dehydrogenase were around 0.3 mM and 10 mM, respectively. The cellular arsenic contents of astrocytes increased rapidly upon exposure to arsenite or arsenate and reached after 4 h of incubation almost constant steady state levels. These levels were about 3-times higher in astrocytes that had been exposed to a given concentration of arsenite compared with the respective arsenate condition. Analysis of the intracellular arsenic species revealed that almost exclusively arsenite was present in viable astrocytes that had been exposed to either arsenate or arsenite. The emerging toxicity of arsenite 4 h after exposure was accompanied by a loss in cellular total glutathione and by an increase in the cellular glutathione disulfide content. These data suggest that the high arsenite content of astrocytes that had been exposed to inorganic arsenicals causes an increase in the ratio of glutathione disulfide to glutathione which contributes to the toxic potential of these substances.
Aims: Contrast media-induced nephropathy (CIN) is associated with increased morbidity and mortality. The renal endothelin system has been associated with disease progression of various acute and chronic renal diseases. However, robust data coming from adequately powered prospective clinical studies analyzing the short and long-term impacts of the renal ET system in patients with CIN are missing so far. We thus performed a prospective study addressing this topic.
Main methods: We included 327 patients with diabetes or renal impairment undergoing coronary angiography. Blood and spot urine were collected before and 24 h after contrast media (CM) application. Patients were followed for 90 days for major clinical events like need for dialysis, unplanned rehospitalization or death.
Key findings: The concentration of ET-1 and the urinary ET-1/creatinine ratio decreased in spot urine after CM application (ET-1 concentration: 0.91 ± 1.23 pg/ml versus 0.63 ± 1.03 pg/ml, p<0.001; ET-1/creatinine ratio: 0.14 ± 0.23 versus 0.09 ± 0.19, p<0.001). The urinary ET-1 concentrations in patients with CIN decreased significantly more than in patients without CIN (-0.26 ± 1.42 pg/ml vs. -0.79 ± 1.69 pg/ml, p=0.041), whereas the decrease of the urinary ET-1/creatinine ratio was not significantly different (non-CIN patients: -0.05 ± 0.30; CIN patients: -0.11 ± 0.21, p=0.223). Urinary ET-1 concentrations as well as the urinary ET-1/creatinine ratio were not associated with clinical events (need for dialysis, rehospitalization or death) during the 90-day follow-up after contrast media exposure. However, the urinary ET-1 concentration and the urinary ET-1/creatinine ratio after CM application were higher in those patients who had a decrease of GFR of at least 25% after 90 days of follow-up.
Significance: In general the ET-1 system in the kidney seems to be down-regulated after contrast media application in patients with moderate CIN risk. Major long-term complications of CIN (need for dialysis, rehospitalization or death) are not associated with the renal ET system. (C) 2014 The Authors. Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license.
Due to increasing demands and competition for high quality groundwater resources in many parts of the world, there is an urgent need for efficient methods that shed light on the interplay between complex natural settings and anthropogenic impacts. Thus a new approach is introduced, that aims to identify and quantify the predominant processes or factors of influence that drive groundwater and lake water dynamics on a catchment scale. The approach involves a non-linear dimension reduction method called Isometric feature mapping (Isomap). This method is applied to time series of groundwater head and lake water level data from a complex geological setting in Northeastern Germany. Two factors explaining more than 95% of the observed spatial variations are identified: (1) the anthropogenic impact of a waterworks in the study area and (2) natural groundwater recharge with different degrees of dampening at the respective sites of observation. The approach enables a presumption-free assessment to be made of the existing geological conception in the catchment, leading to an extension of the conception. Previously unknown hydraulic connections between two aquifers are identified, and connections revealed between surface water bodies and groundwater. (C) 2014 Elsevier B.V. All rights reserved.
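A minimal sketch of the dimension-reduction step, assuming a hypothetical well-by-time matrix (the actual study works with groundwater head and lake level records and relates the embedding coordinates to candidate drivers such as recharge and waterworks pumping):

import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(4)
heads = rng.random((40, 520))   # 40 observation wells, weekly levels over ten years (placeholder values)

# non-linear embedding of the wells; each component is a candidate common driving factor
embedding = Isomap(n_neighbors=6, n_components=2).fit_transform(heads)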
The magnetosphere-ionosphere-thermosphere (MIT) dynamic system significantly depends on the highly variable solar wind conditions, in particular, on changes of the strength and orientation of the interplanetary magnetic field (IMF). The solar wind and IMF interactions with the magnetosphere drive the MIT system via the magnetospheric field-aligned currents (FACs). The global modeling helps us to understand the physical background of this complex system. With the present study, we test the recently developed high-resolution empirical model of field-aligned currents MFACE (a high-resolution Model of Field-Aligned Currents through Empirical orthogonal functions analysis). These FAC distributions were used as input of the time-dependent, fully self-consistent global Upper Atmosphere Model (UAM) for different seasons and various solar wind and IMF conditions. The modeling results for neutral mass density and thermospheric wind are directly compared with the CHAMP satellite measurements. In addition, we perform comparisons with the global empirical models: the thermospheric wind model (HWM07) and the atmosphere density model (Naval Research Laboratory Mass Spectrometer and Incoherent Scatter Extended 2000). The theoretical model shows a good agreement with the satellite observations and an improved behavior compared with the empirical models at high latitudes. Using the MFACE model as input parameter of the UAM model, we obtain a realistic distribution of the upper atmosphere parameters for the Northern and Southern Hemispheres during stable IMF orientation as well as during dynamic situations. This variant of the UAM can therefore be used for modeling the MIT system and space weather predictions.
Background: Knowing and, if necessary, altering competitive athletes' real attitudes towards the use of banned performance-enhancing substances is an important goal of worldwide doping prevention efforts. However, athletes will not always be willing to report their real opinions. Reaction time-based attitude tests help conceal the ultimate goal of measurement from the participant and impede strategic answering. This study investigated how well a reaction time-based attitude test discriminated between athletes who were doping and those who were not. We investigated whether athletes whose urine samples were positive for at least one banned substance (dopers) evaluated doping more favorably than clean athletes (non-dopers).
Methods: We approached a group of 61 male competitive bodybuilders and collected urine samples for biochemical testing. The pictorial doping Brief Implicit Association Test (BIAT) was used for attitude measurement. This test quantifies the difference in response latencies (in milliseconds) to stimuli representing related concepts (i.e. doping-dislike/like-[health food]).
Results: Prohibited substances were found in 43% of all tested urine samples. Dopers had more lenient attitudes to doping than non-dopers (Hedges's g = -0.76). D-scores greater than -0.57 (CI95 = -0.72 to -0.46) might be indicative of a rather lenient attitude to doping. In the urine samples, evidence of the administration of combinations of substances, of complementary substances to treat side effects, and of stimulants to promote loss of body fat was common.
Conclusion: This study demonstrates that athletes' attitudes to doping can be assessed indirectly with a reaction time-based test, and that their attitudes are related to their behavior. Although bodybuilders may be more willing to reveal their attitude to doping than other athletes, these results still provide evidence that the pictorial doping BIAT may be useful in athletes from other sports, perhaps as a complementary measure in evaluations of the effectiveness of doping prevention interventions.
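For readers unfamiliar with the two effect measures named above, the following sketch shows a standard IAT-style D-score and Hedges' g; the exact scoring algorithm used in the study may differ, and the input arrays are placeholders:

import numpy as np

def d_score(compatible_rt, incompatible_rt):
    # IAT-style D-score: latency difference scaled by the SD of all trials
    pooled_sd = np.std(np.concatenate([compatible_rt, incompatible_rt]), ddof=1)
    return (np.mean(incompatible_rt) - np.mean(compatible_rt)) / pooled_sd

def hedges_g(x, y):
    # bias-corrected standardized mean difference between two groups
    nx, ny = len(x), len(y)
    sp = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / sp * (1 - 3 / (4 * (nx + ny) - 9))

# example (hypothetical arrays of per-athlete D-scores): hedges_g(doper_scores, non_doper_scores)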
Sedimentary proxies used to reconstruct marine productivity suffer from variable preservation and are sensitive to factors other than productivity. Therefore, proxy calibration is warranted. Here we map the spatial patterns of two paleoproductivity proxies, biogenic opal and barium fluxes, from a set of core-top sediments recovered in the Subarctic North Pacific. Comparisons of the proxy data with independent estimates of primary and export production, surface water macronutrient concentrations, and biological pCO2 drawdown indicate that neither proxy shows a significant correlation with primary or export productivity for the entire region. Biogenic opal fluxes, when corrected for preservation using Th-230-normalized accumulation rates, show a good correlation with primary productivity along the volcanic arcs (tau = 0.71, p = 0.0024) and with export productivity throughout the western Subarctic North Pacific (tau = 0.71, p = 0.0107). Moderate and good correlations of biogenic barium flux with export production (tau = 0.57, p = 0.0022) and with surface water silicate concentrations (tau = 0.70, p = 0.0002) are observed for the central and eastern Subarctic North Pacific. For reasons unknown, however, no correlation is found in the western Subarctic North Pacific between biogenic barium flux and the reference data. Nonetheless, we show that barite saturation, uncertainty in the lithogenic barium corrections, and problems with the reference data sets are not responsible for the lack of a significant correlation between biogenic barium flux and the reference data. Further studies evaluating the factors controlling the variability of the biogenic constituents in the sediments are desirable in this region.
The International Project for the Evaluation of Educational Achievement (IEA) was formed in the 1950s (Postlethwaite, 1967). Since that time, the IEA has conducted many studies in the area of mathematics, such as the First International Mathematics Study (FIMS) in 1964, the Second International Mathematics Study (SIMS) in 1980-1982, and a series of studies beginning with the Third International Mathematics and Science Study (TIMSS) which has been conducted every 4 years since 1995. According to Stigler et al. (1999), in the FIMS and the SIMS, U.S. students achieved low scores in comparison with students in other countries (p. 1). The TIMSS 1995 “Videotape Classroom Study” was therefore a complement to the earlier studies conducted to learn “more about the instructional and cultural processes that are associated with achievement” (Stigler et al., 1999, p. 1). The TIMSS Videotape Classroom Study is known today as the TIMSS Video Study.
From the findings of the TIMSS 1995 Video Study, Stigler and Hiebert (1999) likened teaching to “mountain ranges poking above the surface of the water,” whereby they implied that we might see the mountaintops, but we do not see the hidden parts underneath these mountain ranges (pp. 73-78). By watching the videotaped lessons from Germany, Japan, and the United States again and again, they discovered that “the systems of teaching within each country look similar from lesson to lesson. At least, there are certain recurring features [or patterns] that typify many of the lessons within a country and distinguish the lessons among countries” (pp. 77-78). They also discovered that “teaching is a cultural activity,” so the systems of teaching “must be understood in relation to the cultural beliefs and assumptions that surround them” (pp. 85, 88). From this viewpoint, one of the purposes of this dissertation was to study some cultural aspects of mathematics teaching and relate the results to mathematics teaching and learning in Vietnam. Another research purpose was to carry out a video study in Vietnam to find out the characteristics of Vietnamese mathematics teaching and compare these characteristics with those of other countries. In particular, this dissertation carried out the following research tasks:
- Studying the characteristics of teaching and learning in different cultures and relating the results to mathematics teaching and learning in Vietnam
- Introducing the TIMSS, the TIMSS Video Study and the advantages of using video study in investigating mathematics teaching and learning
- Carrying out the video study in Vietnam to identify the image, scripts and patterns, and the lesson signature of eighth-grade mathematics teaching in Vietnam
- Comparing some aspects of mathematics teaching in Vietnam and other countries and identifying the similarities and differences across countries
- Studying the demands and challenges of innovating mathematics teaching methods in Vietnam – lessons from the video studies
Hopefully, this dissertation will be a useful reference material for pre-service teachers at education universities to understand the nature of teaching and develop their teaching career.
The nature of the links between speech production and perception has been the subject of longstanding debate. The present study investigated the articulatory parameter of tongue height and the acoustic F1–F0 difference for the phonological distinction of vowel height in American English front vowels. Multiple repetitions of /i, ɪ, e, ɛ, æ/ in [(h)Vd] sequences were recorded in seven adult speakers. Articulatory (ultrasound) and acoustic data were collected simultaneously to provide a direct comparison of variability in vowel production in both domains. Results showed idiosyncratic patterns of articulation for contrasting the three front vowel pairs /i-ɪ/, /e-ɛ/, and /ɛ-æ/ across subjects, with the degree of variability in vowel articulation comparable to that observed in the acoustics for all seven participants. However, contrary to what was expected, some speakers showed reversals for tongue height for /ɪ/-/e/ that were also reflected in acoustics, with F1 higher for /ɪ/ than for /e/. The data suggest the phonological distinction of height is conveyed via speaker-specific articulatory-acoustic patterns that do not strictly match feature descriptions. However, the acoustic signal is faithful to the articulatory configuration that generated it, carrying the crucial information for perceptual contrast.
Two opposing viewpoints have been advanced to account for morphological productivity, one according to which some knowledge is couched in the form of operations over variables, and another in which morphological generalization is primarily determined by similarity. We investigated this controversy by examining the generalization of Portuguese verb stems, which fall into one of three conjugation classes. In Study 1, an elicited production task revealed that the generalization of 2nd and 3rd conjugation stems is influenced by the degree of phonological similarity between novel roots and existing verbs, whereas the 1st conjugation generalizes beyond similarity. In Study 2, we directly contrasted two distinct computational implementations of conjugation class assignment in how well they matched the human data: a similarity-driven model that captures phonological similarities, and a dual-mechanism model that implements an explicit distinction between context-free and similarity-based generalizations. The similarity-driven model consistently underestimated 1st conjugation responses and overestimated proportions of 2nd and 3rd conjugation responses, especially for novel verbs that are highly similar to existing verbs of those classes. In contrast, the expected proportions produced by the dual-mechanism model were statistically indistinguishable from human responses. We conclude that both context-free and context-sensitive processes determine the generalization of conjugations in Portuguese, and that similarity-based algorithms of morphological acquisition are insufficient to exhibit default-like generalization. (C) 2014 Elsevier Inc. All rights reserved.
Varietätenlinguistik
(2014)
We discuss the solution theory of operators of the form ∇_X + A, acting on smooth sections of a vector bundle with connection ∇ over a manifold M, where X is a vector field having a critical point with positive linearization at some point p ∈ M. As an operator on a suitable space of smooth sections Γ^∞(U, ν), it fulfills a Fredholm alternative, and the same is true for the adjoint operator. Furthermore, we show that the solutions depend smoothly on the data ∇, X and A.
Question: Does eutrophication drive vegetation change in pine forests on nutrient deficient sites and thus lead to the homogenization of understorey species composition?
Location: Forest area (1600 ha) in the Lower Spreewald, Brandenburg, Germany.
Methods: Resurvey of 77 semi-permanent plots after 45 yr, including vascular plants, bryophytes and ground lichens. We applied multidimensional ordination of species composition, dissimilarity indices, mean Ellenberg indicator values and the concept of winner/loser species to identify vegetation change between years. Differential responses along a gradient of nutrient availability were analysed on the basis of initial vegetation type, reflecting topsoil N availability of plots.
Results: Species composition changed strongly and overall shifted towards higher N and slightly lower light availability. Differences in vegetation change were related to initial vegetation type, with strongest compositional changes in the oligotrophic forest type, but strongest increase of nitrophilous species in the mesotrophic forest type. Despite an overall increase in species number, species composition was homogenized between study years due to the loss of species (mainly ground lichens) on the most oligotrophic sites.
Conclusions: The response to N enrichment is confounded by canopy closure on the N-richest sites and probably by water limitation on N-poorest sites. The relative importance of atmospheric N deposition in the eutrophication effect is difficult to disentangle from natural humus accumulation after historical litter raking. However, the profound differences in species composition between study years across all forest types suggest that atmospheric N deposition contributes to the eutrophication, which drives understorey vegetation change and biotic homogenization in Central European Scots pine forests on nutrient deficient sites.
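Two of the plot-level quantities named in the Methods, mean Ellenberg indicator values and a compositional dissimilarity between survey years, can be sketched as follows; the species covers and indicator values are invented, and whether the study weighted the Ellenberg means by cover is an assumption made here:

import numpy as np
from scipy.spatial.distance import braycurtis

# invented cover values (%) of four species in one plot, old and new survey
cover_1965 = np.array([40.0, 10.0, 5.0, 0.0])
cover_2010 = np.array([10.0, 0.0, 25.0, 30.0])
ellenberg_n = np.array([2.0, 3.0, 6.0, 7.0])   # Ellenberg nitrogen values of the four species

def mean_indicator(cover, indicator):
    # cover-weighted mean indicator value over the species present in the plot
    present = cover > 0
    return np.average(indicator[present], weights=cover[present])

shift_in_n = mean_indicator(cover_2010, ellenberg_n) - mean_indicator(cover_1965, ellenberg_n)
dissimilarity = braycurtis(cover_1965, cover_2010)   # 0 = identical composition, 1 = nothing shared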
We assessed tropical montane cloud forest (TMCF) sensitivity to natural disturbance by drought, fire, and dieback with a 7300-year-long paleorecord. We analyzed pollen assemblages, charcoal accumulation rates, and higher plant biomarker compounds (average chain length [ACL] of n-alkanes) in sediments from Wai 'anapanapa, a small lake near the upper forest limit and the mean trade wind inversion (TWI) in Hawai`i. The paleorecord of ACL suggests increased drought frequency and a lower TWI elevation from 2555-1323 cal yr B.P. and 606-334 cal yr B.P. Charcoal began to accumulate and a novel fire regime was initiated ca. 880 cal yr B.P., followed by a decreased fire return interval at ca. 550 cal yr B.P. Diebacks occurred at 2931, 2161, 1162, and 306 cal yr B.P., and two of these were independent of drought or fire. Pollen assemblages indicate that on average species composition changed only 2.8% per decade. These dynamics, though slight, were significantly associated with disturbance. The direction of species composition change varied with disturbance type. Drought was associated with significantly more vines and lianas; fire was associated with an increase in the tree fern Sadleria and indicators of open, disturbed landscapes at the expense of epiphytic ferns; whereas stand-scale dieback was associated with an increase in the tree fern Cibotium. Though this cloud forest was dynamic in response to past disturbance, it has recovered, suggesting a resilient TMCF with no evidence of state change in vegetation type (e.g., grassland or shrubland).
Crustal earthquake swarms are an expression of intensive cracking and rock damaging over periods of days, weeks, or months in a small source region in the crust. They are caused by longer-lasting stress changes in the source region. Often, the localized stressing of the crust is associated with fluid or gas migration, possibly in combination with pre-existing zones of weakness. However, verifying and quantifying localized fluid movement at depth remains difficult, since the area affected is small and geophysical prospecting methods often cannot reach the required resolution.
We apply a simple and robust method to estimate the velocity ratio between compressional (P) and shear (S) waves (vP/vS ratio) in the source region of an earthquake swarm. The vP/vS ratio may be unusually small if the swarm is related to gas in a porous or fractured rock. The method uses arrival-time differences between P and S waves observed at surface seismic stations, and the associated double differences between pairs of earthquakes. An advantage is that earthquake locations are not required, and the method seems less dependent on unknown velocity variations in the crust outside the source region. It is thus suited for monitoring purposes.
Applications comprise three natural, mid-crustal (8-10 km) earthquake swarms between 1997 and 2008 from the NW-Bohemia swarm region. We resolve a strong temporal decrease of vP/vS before and during the main activity of the swarm, and a recovery of vP/vS to background levels at the end of the swarms. The anomalies are interpreted in terms of the Biot-Gassmann equations, assuming the presence of oversaturated fluids degassing during the beginning phase of the swarm activity.
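The core of the arrival-time approach can be illustrated for a single event pair recorded at several stations (synthetic picks, a deliberately simplified version of the double-difference scheme summarized above): the common origin-time difference cancels when the differences are demeaned over stations, and vP/vS remains as the slope.

import numpy as np

rng = np.random.default_rng(5)
n_sta = 12
vpvs_true = 1.62
d_origin = 0.8                                    # origin-time difference of the event pair (s)
d_tau_p = rng.normal(0.0, 0.05, n_sta)            # differential P travel times at each station

dtp = d_origin + d_tau_p                                                     # P arrival-time differences
dts = d_origin + vpvs_true * d_tau_p + 0.005 * rng.standard_normal(n_sta)    # S arrival-time differences

x = dtp - dtp.mean()          # demeaning removes the common origin-time difference
y = dts - dts.mean()
vpvs_est = np.sum(x * y) / np.sum(x * x)          # least-squares slope, close to vP/vS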
Vertical radar profiling (VRP) is a single-borehole geophysical technique in which the receiver antenna is located within a borehole and the transmitter antenna is placed at one or various offsets from the borehole. Today, VRP surveying is primarily used to derive 1D velocity models by inverting the arrival times of direct waves. Using field data collected at a well-constrained test site in Germany, we evaluated a VRP workflow relying on the analysis of direct-arrival traveltimes and amplitudes as well as on imaging reflection events. To invert our VRP traveltime data, we used a global inversion strategy resulting in an ensemble of acceptable velocity models, which allowed us to appraise uncertainty issues in the estimated velocities as well as in porosity models derived via petrophysical translations. In addition to traveltime inversion, the analysis of direct-wave amplitudes and reflection events provided further valuable information regarding subsurface properties and architecture. The VRP amplitude preprocessing and inversion procedures used were adapted from ray-based crosshole ground-penetrating radar (GPR) attenuation tomography and resulted in an attenuation model, which can be used to estimate variations in electrical resistivity. Our VRP reflection imaging approach relied on corridor stacking, which is a well-established processing sequence in vertical seismic profiling. The resulting reflection image outlines bounding layers and can be directly compared to surface-based GPR reflection profiling. The results of our combined analysis of VRP traveltimes, amplitudes, and reflections were consistent with independent core and borehole logs as well as GPR reflection profiles, which enabled us to derive a detailed hydro-stratigraphic model as needed, for example, to understand and model groundwater flow and transport.
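A deliberately simplified sketch of how direct-arrival picks translate into velocities and, via a petrophysical relation, into porosities (the study used a global inversion over an ensemble of models; the picks, the zero-offset assumption and the CRIM-type relation with assumed permittivities below are illustrative only):

import numpy as np

# hypothetical zero-offset VRP first-arrival picks: receiver depth (m) vs traveltime (ns)
depth = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
t_ns = np.array([12.0, 25.0, 39.0, 51.0, 62.0, 75.0])

v_int = np.diff(depth) / np.diff(t_ns)    # interval radar velocities (m/ns)
kappa = (0.3 / v_int) ** 2                # relative permittivity, 0.3 m/ns = speed of light in vacuum

# CRIM-type translation for water-saturated sediments with assumed grain/water permittivities
k_water, k_grain = 81.0, 5.0
porosity = (np.sqrt(kappa) - np.sqrt(k_grain)) / (np.sqrt(k_water) - np.sqrt(k_grain))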
The Galactic center is an interesting region for high-energy (0.1-100 GeV) and very-high-energy (E > 100 GeV) gamma-ray observations. Potential sources of GeV/TeV gamma-ray emission have been suggested, e.g., the accretion of matter onto the supermassive black hole, cosmic rays from a nearby supernova remnant (e.g., Sgr A East), particle acceleration in a plerion, or the annihilation of dark matter particles. The Galactic center has been detected by EGRET and by Fermi/LAT in the MeV/GeV energy band. At TeV energies, the Galactic center was detected with moderate significance by the CANGAROO and Whipple 10 m telescopes and with high significance by H.E.S.S., MAGIC, and VERITAS. We present the results from three years of VERITAS observations conducted at large zenith angles resulting in a detection of the Galactic center on the level of 18 standard deviations at energies above ~2.5 TeV. The energy spectrum is derived and is found to be compatible with hadronic, leptonic, and hybrid emission models discussed in the literature. Future, more detailed measurements of the high-energy cutoff and better constraints on the high-energy flux variability will help to refine and/or disentangle the individual models.
Dieter Adelmann's working library is held in the Universitätsbibliothek Potsdam and is catalogued in this volume; his papers and the finding aid are held in the Universitätsarchiv Potsdam. Dieter Adelmann was born on 1 February 1936 in Eisenach, Thuringia. He studied philosophy, German studies and sociology at the Freie Universität Berlin and at the Universität Heidelberg, where he received his doctorate in 1968 under Dieter Henrich and Hans-Georg Gadamer with the thesis "Einheit des Bewusstseins als Grundlage der Philosophie Hermann Cohens". From 1968 to 1970 Adelmann was head of the "Collegium Academicum" of the Universität Heidelberg; from 1970 to 1974 he was regional managing director of the SPD in Baden-Württemberg (responsible for political planning) and at times also constituency assistant to the SPD member of the Bundestag Horst Ehmke. Adelmann then worked as a publicist with Klaus Staeck, graphic artist and current president of the Berlin Akademie der Künste, before being employed from July 1977 until the end of September 1979 at Vorwärts in the section for parties and programmes. After leaving Vorwärts, Adelmann worked freelance in Bonn, among other things as a journalist. In 1995 he was a research associate in the context of the edition of Hermann Cohen's works at the Moses-Mendelssohn-Zentrum and at the Chair of Interior Design at the Technische Universität Dresden. After the end of his work in Potsdam, Adelmann remained a freelance philosopher and Cohen scholar until his death on 30 September 2008.
Using density functional theory and Ab Initio Molecular Dynamics with Electronic Friction (AIMDEF), we study the adsorption and dissipative vibrational dynamics of hydrogen atoms chemisorbed on free-standing lead films of increasing thickness. Lead films are known for the oscillatory behaviour of certain properties with increasing thickness; e.g., energy and electron spill-out change in a discontinuous manner due to quantum size effects [G. Materzanini, P. Saalfrank, and P.J.D. Lindan, Phys. Rev. B 63, 235405 (2001)]. Here, we demonstrate that oscillatory features arise also for hydrogen when chemisorbed on lead films. Besides stationary properties of the adsorbate, we concentrate on the finite vibrational lifetimes of H-surface vibrations. As shown by AIMDEF, the damping via vibration-electron hole pair coupling clearly dominates over the vibration-phonon channel, in particular for high-frequency modes. Vibrational relaxation times are a characteristic function of layer thickness due to the oscillating behaviour of the embedding surface electronic density. Implications derived from AIMDEF for frictional many-atom dynamics and for physisorbed species will also be given. (C) 2014 AIP Publishing LLC.
The project „Medienbildung in der LehrerInnenbildung“ (media education in teacher training) aims to promote the use of digital media in the teacher-training degree programmes at the University of Potsdam in a sustainable way. Using the example of music teacher training (Chair of Music Education and Music Didactics), a concept was developed for the use of video podcasts in practical school phases in order to support students in lesson planning. The subject-specific implementation of the e-learning approach and the associated opportunities and challenges are presented, underlining the importance of cooperation between subject didactics and media didactics in order to find a needs-oriented solution that can be put into practice.
Vielheit statt Einheit
(2014)