„Wir schaffen das!“
(2016)
„Was ist Migration?“
(2016)
Although the calls to take the significance of migration for adult education more clearly into account are impossible to ignore, they have received remarkably little attention with respect to categorial work. Foundational-theoretical work on the concept of "migration" in adult education is still far from exhausted. Even though individual studies engage with the concept, the impression remains that attempts at categorial clarification are isolated. The far from simple task of protecting the concept of migration from categorial shutdown remains a serious challenge for adult-education migration research, insofar as it is interested in seriously confronting the risks of a hitherto essentialist course.
„Vajanie iz vremeni“
(2016)
The words "entjuden" and "Entjudung" ("de-Judaization") are the linguistic expression of mostly anti-Jewish attitudes and deeds in German history. The article traces the development of the term through its contexts of use. In the context of assimilation at the beginning of the 19th century, the term meant that one had to shed that Jewish "particularity" whose abandonment was widely accepted as a postulate. Within the inner-Jewish discussion at the beginning of the 20th century, "Entjudung" became a diagnostic expression for the loss of identity. As a political battle cry of the National Socialists, it in turn became a synonym for the disenfranchisement and annihilation of Jewish people. Protestant theologians used the term in the debate on the renewal of Christianity, which was to be achieved by removing Jewish influences. Formulated as early as the end of the 18th century, this demand found its programmatic implementation in the founding, in 1939, of the Institut zur Erforschung und Beseitigung des jüdischen Einflusses auf das deutsche kirchliche Leben (Institute for the Study and Eradication of Jewish Influence on German Church Life).
The aim of this study was to develop a one-step synthesis of gold nanotriangles (NTs) in the presence of mixed phospholipid vesicles followed by a separation process to isolate purified NTs. Negatively charged vesicles containing AOT and phospholipids, in the absence and presence of additional reducing agents (polyampholytes, polyanions or low molecular weight compounds), were used as a template phase to form anisotropic gold nanoparticles. Upon addition of the gold chloride solution, the nucleation process is initiated and both types of particles, i.e., isotropic spherical and anisotropic gold nanotriangles, are formed simultaneously. As it was not possible to produce monodisperse nanotriangles with such a one-step procedure, the anisotropic nanoparticles needed to be separated from the spherical ones. Therefore, a new type of separation procedure using combined polyelectrolyte/micelle depletion flocculation was successfully applied. As a result of the different purification steps, a green colored aqueous dispersion was obtained containing highly purified, well-defined negatively charged flat nanocrystals with a platelet thickness of 10 nm and an edge length of about 175 nm. The NTs produce promising results in surface-enhanced Raman scattering.
Тезисы
(2016)
Тезисы
(2016)
Тезисы
(2016)
Тезисы
(2016)
Тезисы
(2016)
Тезисы
(2016)
Солярис как процесс
(2016)
О новых истоках кино
(2016)
Блуждающие цитаты
(2016)
Biographical motifs can be roughly divided into two kinds:
1. Individual small episodes or impressions that stayed in Tarkovsky's memory and found their precise place in his screenplays and films. The director A. Gordon loosely called this device of Tarkovsky's a "jigsaw-puzzle theory". Such elements appear in almost all of Andrei Tarkovsky's films.
2. Extended incorporation of autobiography into the films, naturally not in its entirety. Such films include Зеркало (Mirror, 1974) and Offret (The Sacrifice, 1986).
After the mass immigration to Israel from 1948 to 1950, about 2,000 Jews remained in Yemen. These Jews lived in small communities and continued to maintain their religious environment as it was. In the years that followed, however, many of them moved from Yemen to Israel with the assistance of the Jewish Agency and the Joint Distribution Committee (JDC). The community was small, and the fact that it was dispersed throughout predominantly Muslim areas created a certain closeness between the two groups. About ten percent of the Jews chose to convert to Islam, many of them in groups. In about twenty cases, the husbands chose to convert to Islam while their wives emigrated to preserve their Judaism. Some of the converts refused to grant their wives a divorce because, according to Muslim law, conversion alone is enough to sever the marital relationship. Such women are known as ʿAgunot: women still bound in marriage to a husband with whom they no longer live, but who has not formally released them from the marital union. The article follows the efforts undertaken to release the ʿAgunot and shows that Jewish and Muslim scholars were able to find solutions to the ʿAgunot problem and, at times, managed to bridge the gap between the two religions.
Černobylʼ, Gewalt, Mythos
(2016)
This article examines some central reports and motifs from the early sources of Islam concerning the military conflicts between the Prophet Muhammad and the Jews of Medina. The study is based on the biography of the Prophet by the scholar Muḥammad ibn Isḥāq (d. 150 AH), which remains authoritative to this day. The article shows, among other things, that both within the genre of Sīra literature, to which Ibn Isḥāq's work belongs, and in the early traditions of Islamic jurisprudence, in Qur'anic exegesis, and in the Qur'anic text itself there are numerous indications of alternative accounts of these conflicts. In the first centuries of Islam these increasingly fell out of view as Ibn Isḥāq's work prevailed, but they are of considerable interest for contemporary discourses on the relationship of Islam to non-Muslims. The aim of the study is to work out the normative significance of the different scenarios for fundamental questions, in particular for the relationship between Muslims and Jews. A central focus of the article lies on different approaches to the famous report of the annihilation of the Jewish tribe of the Banū Qurayza in the aftermath of the Battle of the Trench.
The reference to Columbus is a commonplace of Humboldt biography. Humboldt himself particularly emphasizes it in his Examen critique, adding an autobiographical dimension. This contribution analyzes, from a philological perspective, the material and the forms of staging through which one life is represented by way of another.
Zurück am Neuen Palais
(2016)
Zu Hause
(2016)
Zigaretten und Honig
(2016)
Zersplitterung statt Einheit
(2016)
The early 20th century produced a political ideology in Europe that spread rapidly across the continent and left a lasting mark on it: fascism. Historical scholarship, but also the media and the entertainment industry, have focused above all on Italian Fascism and German National Socialism. Yet fascist movements also formed in other European states in the interwar period, with more success in some cases and less in others. This thesis deals with the phenomenon of fascism in one of these states, one often regarded as a pioneer of modern democracy: Great Britain.
The investigation is guided by the central question of why British fascism could not achieve the same success in the turbulent years between the two world wars as in Germany or Italy and rise to become a defining political force. To answer this question, two aspects are considered. First, the thesis examines a selection of fascist movements that formed and became politically active during this period. The personal attitudes, political ambitions, and mutual differences of the respective leading figures play as much of a role as the political programs and track records of their movements. The second part focuses on British society and examines the role of the population as well as the political and socio-cultural conditions in Great Britain.
This person-centered as well as politics- and society-centered analysis is intended to identify the reasons for the failure of British fascism in the interwar period. In addition to the relevant secondary literature, the thesis also draws on selected source material. Contemporary newspaper articles and the writings of the persons under investigation are intended to provide more detailed insight into the political aims and personal intentions of the movements and their protagonists.
Since the economic crisis in 2008, European youth unemployment rates have been persistently high, at around 20% on average. The majority of European countries spend significant resources each year on active labor market programs (ALMP) with the aim of improving the integration prospects of struggling youths. Among the most common programs used are training courses, job search assistance and monitoring, subsidized employment, and public work programs. For policy makers, it is of utmost importance to know which of these programs work and which are able to achieve the intended goals – be it integration into the first labor market or further education. Based on a detailed assessment of the particularities of the youth labor market situation, we discuss the pros and cons of different ALMP types. We then provide a comprehensive survey of the recent evidence on the effectiveness of these ALMP for youth in Europe, highlighting factors that seem to promote or impede their effectiveness in practice. Overall, the findings with respect to employment outcomes are only partly promising. While job search assistance (with and without monitoring) results in overwhelmingly positive effects, we find more mixed effects for training and wage subsidies, whereas the effects for public work programs are clearly negative. The evidence on the impact of ALMP on furthering education participation as well as employment quality is scarce, requiring additional research and allowing only limited conclusions so far.
www.BrAnD2. Wille.
(2016)
In 2014 the Potsdamer Lateintag took place for the tenth time. The anniversary was a fitting occasion to present our new project. The Robert Bosch Stiftung is again funding, for three years, the cooperation between Classical Philology at the University of Potsdam and schools in Brandenburg. The project's title is: www.BrAnD2. Wille. Würde. Wissen. Zweites Brandenburger Antike-Denkwerk. Over 500 participants once again attended the opening event on the topic of "will". The volume collects a project report, the lectures by Prof. Dr. Christiane Kunst and Prof. Dr. Christoph Horn, and a selection of materials prepared by the supervising students.
This dissertation uses a common grammatical phenomenon, light verb constructions (LVCs) in English and German, to investigate how syntax-semantics mapping defaults influence the relationships between language processing, representation, and conceptualization. LVCs are analyzed as a phenomenon of mismatch in the argument structure. The processing implications of this mismatch are experimentally investigated using ERPs and a dual task. Data from these experiments point to an increase in working memory load. Representational questions are investigated using structural priming. Data from this study suggest that while the syntax of LVCs does not differ from that of other structures, their semantics and mapping are represented differently. This hypothesis is tested with a new categorization paradigm, which reveals that the conceptual structures that LVCs evoke differ in interesting, and predictable, ways from those of non-mismatching structures.
In this study, a new reliable, economic, and environmentally friendly one-step synthesis is established to obtain carbon nanodots (CNDs) with well-defined and reproducible photoluminescence (PL) properties via the microwave-assisted hydrothermal treatment of starch and Tris-acetate-EDTA (TAE) buffer as carbon sources. Three kinds of CNDs are prepared using different sets of the above-mentioned starting materials. The as-synthesized CNDs – C-CND (starch only), N-CND 1 (starch in TAE) and N-CND 2 (TAE only) – exhibit highly homogeneous PL and are ready to use without the need for further purification. The CNDs are stable over a long period of time (>1 year), either in solution or as freeze-dried powder. Depending on the starting material, CNDs with a PL quantum yield (PLQY) ranging from less than 1% up to 28% are obtained. The influence of the precursor concentration, reaction time and type of additives on the optical properties (UV-Vis absorption, PL emission spectrum and PLQY) is carefully investigated, providing insight into the chemical processes that occur during CND formation. Remarkably, upon freeze-drying the initially brown CND solution turns into a non-fluorescent white/slightly brown powder which recovers its PL in aqueous solution and can potentially be applied as a fluorescent marker in bio-imaging, as a reduction agent or as a photocatalyst.
We examined the effects of argument-head distance in SVO and SOV languages (Spanish and German), while taking into account readers' working memory capacity and controlling for expectation (Levy, 2008) and other factors. We predicted only locality effects, that is, a slowdown produced by increased dependency distance (Gibson, 2000; Lewis and Vasishth, 2005). Furthermore, we expected stronger locality effects for readers with low working memory capacity. Contrary to our predictions, low-capacity readers showed faster reading with increased distance, while high-capacity readers showed locality effects. We suggest that while the locality effects are compatible with memory-based explanations, the speedup of low-capacity readers can be explained by an increased probability of retrieval failure. We present a computational model based on ACT-R built under the previous assumptions, which is able to give a qualitative account for the present data and can be tested in future research. Our results suggest that in some cases, interpreting longer RTs as indexing increased processing difficulty and shorter RTs as facilitation may be too simplistic: The same increase in processing difficulty may lead to slowdowns in high-capacity readers and speedups in low-capacity ones. Ignoring individual level capacity differences when investigating locality effects may lead to misleading conclusions.
Teaching academic writing to large groups of students poses a variety of organizational challenges and requires time-intensive supervision by instructors. This thesis presents a teaching concept based on peer reviews, in which the peers' feedback is supplemented by an automated analysis. The software Confopy provides metric- and structure-based suggestions for improving academic writing style. The usefulness of Confopy is illustrated on 47 student papers in draft and final versions.
Werben für sich selbst
(2016)
In the past decades, development cooperation (DC) led by conventional bi- and multilateral donors has been joined by a large number of small, private or public-private donors. This pluralism of actors raises the question of whether these new donors are able to implement projects more or less effectively than their conventional counterparts. In contrast to their predecessors, the new donors have committed themselves to being more pragmatic, innovative and flexible in their development cooperation measures. However, they are also criticized for weakening the function of local civil society and have the reputation of being a non-transparent and often controversial alternative to public services. With additional financial resources and their new approach to development, the new donors have been described in the literature as playing a controversial role in transforming development cooperation. This dissertation compares the effectiveness of initiatives by new and conventional donors with regard to the provision of public goods and services to the poor in the water and sanitation sector in India.
India is an emerging country, but it experiences high poverty rates and poor water supply, predominantly in rural areas. It lends itself to analyzing this research theme, as it is currently confronted by a large number of actors and approaches that aim to find solutions to these challenges.
In the theoretical framework of this dissertation, four governance configurations are derived from the interaction of varying actor types with regard to hierarchical and non-hierarchical steering of their interactions. These four governance configurations differ in decision-making responsibilities, accountability and delegation of tasks or direction of information flow. The assumption on actor relationships and steering is supplemented by possible alternative explanations in the empirical investigation, such as resource availability, the inheritance of structures and institutions from previous projects in a project context, gaining acceptance through beneficiaries (local legitimacy) as a door opener, and asymmetries of power in the project context.
Case study evidence from seven projects reveals that the actors' relationship is important for successful project delivery. Additionally, the results show a systematic difference between conventional and new donors. Projects led by conventional donors were consistently more successful, owing to an actor relationship that placed responsibility in the hands of the recipient actors and benefited from the trust and reputation built up in long-term cooperation. The trust and reputation of conventional donors were always backed at the federal level and trickled down to implementation at the local level as well. Furthermore, charismatic leaders, as well as the structures and institutions inherited from predecessor projects, also proved to be positive factors for successful project implementation.
Despite the mixed results of the seven case studies, central recommendations for action can be derived for the various actors involved in development cooperation. For example, new donors could fulfill a complementary function alongside conventional donors by developing innovative project approaches through pilot studies and then implementing them as a supplement to the projects of conventional donors on the ground. In return, conventional donors would have to make room for the new donors by integrating their approaches into existing programs in order to promote donor harmonization. It is also important to identify and occupy niches for activities and to promote harmonization among donors on the state and federal sides.
The empirical results demonstrate the need for a harmonization strategy of different donor types in order to prevent duplication, over-experimentation and the failure of development programs. A transformation to successful and sustainable development cooperation can only be achieved through more coordination processes and national self-responsibility.
The hydrological budget of a region is determined based on the horizontal and vertical water fluxes acting in both inward and outward directions. These integrated water fluxes vary, altering the total water storage and consequently the gravitational force of the region. The time-dependent gravitational field can be observed through the Gravity Recovery and Climate Experiment (GRACE) gravimetric satellite mission, provided that the mass variation is above the sensitivity of GRACE. This study evaluates mass changes in prominent reservoir regions through three independent approaches, viz. fluxes, storages, and gravity, by combining remote sensing products, in-situ data and hydrological model outputs using the WaterGAP Global Hydrological Model (WGHM) and the Global Land Data Assimilation System (GLDAS). The results show that the dynamics revealed by the GRACE signal can be better explored by a hybrid method, which combines remote sensing-based reservoir volume estimates with hydrological model outputs, than by exclusive model-based storage estimates. For the given arid/semi-arid regions, GLDAS-based storage estimations perform better than WGHM.
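The link between fluxes and storage invoked in this abstract is the standard water-balance relation; written with conventional symbols (not taken from the study), the change in total water storage S of a region over a time interval follows from precipitation P, evapotranspiration ET, and net discharge Q:

```latex
\frac{\Delta S}{\Delta t} = P - ET - Q
```

GRACE observes the storage change on the left-hand side directly via the time-variable gravity field, while the flux approach estimates the right-hand side from remote sensing products and model outputs.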
Wassertragen
(2016)
Waage und Schwert
(2016)
The reception of the prophet Jonah in the Qur'an essentially presupposes his biblical narrative and reinterprets it above all where a correction of his prophetic image is sought. The focus lies on the repentance, return, and redemption of Yūnus and his people. Post-Qur'anic tales of the prophets (qisas al-anbiyā') in turn fill the narrative gaps of the "Jonah suras" with explanatory narrative material, drawing also on the extensive stock of biblical and rabbinic traditions, which they creatively appropriate within the outer frame of the Qur'anic Yūnus tradition. The result is narrative compositions that can be read as a dialogical engagement with religious themes of shared relevance. The article deliberately reflects on the development and relationship of the receptions of Jonah in the Qur'an and in the tales of the prophets by Ibn-Muhammad at-Ta‛labī and Muhammad ibn ‛Abd Allāh al-Kisā'i, in constant dialogue with the Jewish Jonah tradition.
Verspielt
(2016)
We present an X-ray-optical cross-correlator for the soft (> 150 eV) up to the hard X-ray regime based on a molybdenum-silicon superlattice. The cross-correlation is done by probing intensity and position changes of superlattice Bragg peaks caused by photoexcitation of coherent phonons. This approach is applicable for a wide range of X-ray photon energies as well as for a broad range of excitation wavelengths and requires no external fields or changes of temperature. Moreover, the cross-correlator can be employed on a 10 ps or 100 fs time scale featuring up to 50% total X-ray reflectivity and transient signal changes of more than 20%.
Verlaufen
(2016)
This term paper compares the frequency of the imperative on posters from the 2016 Berlin state parliament (Abgeordnetenhaus) election with that on posters from the Weimar Republic. It pursues the thesis that this frequency has decreased, and confirms it: in 2016 the imperative occurs roughly eight times less often (5.7% versus 45.7%). In addition, the paper provides an overview of the imperative and of other means of expressing a directive in German.
Two corpora were used for the study; the corpus of slogans from the Abgeordnetenhaus election was compiled specifically for this paper and is included with it. In this corpus, as in the Weimar Republic corpus, all slogans are listed and the imperatives used are counted. This offers an insight into the political advertising language of the two periods.
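The roughly eightfold decrease can be checked directly from the two reported shares; a minimal sketch (the percentages are taken from the abstract, the variable names are illustrative):

```python
# Imperative share on election posters, as reported in the abstract
share_berlin_2016 = 0.057   # 5.7 % of slogans contain an imperative
share_weimar = 0.457        # 45.7 % of slogans contain an imperative

# Ratio between the two periods
ratio = share_weimar / share_berlin_2016
print(f"Imperatives occur {ratio:.1f} times less often in 2016")
```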
Recent research has indicated that university students sometimes use caffeine pills for neuroenhancement (NE; non-medical use of psychoactive substances or technology to produce a subjective enhancement in psychological functioning and experience), especially during exam preparation. In our factorial survey experiment, we manipulated the evidence participants were given about the prevalence of NE amongst peers and measured the resulting effects on the psychological predictors included in the Prototype-Willingness Model of risk behavior. Two hundred and thirty-one university students were randomized to a high prevalence condition (read faked research results overstating usage of caffeine pills amongst peers by a factor of 5; 50%), low prevalence condition (half the estimated prevalence; 5%) or control condition (no information about peer prevalence). Structural equation modeling confirmed that our participants’ willingness and intention to use caffeine pills in the next exam period could be explained by their past use of neuroenhancers, attitude to NE and subjective norm about use of caffeine pills whilst image of the typical user was a much less important factor. Provision of inaccurate information about prevalence reduced the predictive power of attitude with respect to willingness by 40-45%. This may be because receiving information about peer prevalence which does not fit with their perception of the social norm causes people to question their attitude. Prevalence information might exert a deterrent effect on NE via the attitude-willingness association. We argue that research into NE and deterrence of associated risk behaviors should be informed by psychological theory.
Background
Vitamin-D-binding protein (VDBP) is a low-molecular-weight protein that is filtered through the glomerulus as a 25-(OH) vitamin D3/VDBP complex. In the normal kidney, VDBP is reabsorbed and catabolized by proximal tubule epithelial cells, reducing urinary excretion to trace amounts. Acute tubular injury is expected to result in urinary VDBP loss. The purpose of our study was to explore the potential role of urinary VDBP as a biomarker of acute renal damage.
Method
We included 314 patients with diabetes mellitus or mild renal impairment undergoing coronary angiography and collected blood and urine before and 24 hours after contrast media (CM) application. Patients were followed for 90 days for the composite endpoint of major adverse renal events (MARE: need for dialysis, doubling of serum creatinine after 90 days, unplanned emergency rehospitalization, or death).
Results
Increased urine VDBP concentration 24 hours after contrast media exposure was predictive of the need for dialysis (no dialysis: 113.06 +/- 299.61 ng/ml, n = 303; need for dialysis: 613.07 +/- 700.45 ng/ml, n = 11; mean +/- SD, p < 0.001), death (no death during follow-up: 121.41 +/- 324.45 ng/ml, n = 306; death during follow-up: 522.01 +/- 521.86 ng/ml, n = 8; mean +/- SD, p < 0.003) and MARE (no MARE: 112.08 +/- 302.00 ng/ml, n = 298; MARE: 506.16 +/- 624.61 ng/ml, n = 16; mean +/- SD, p < 0.001) during the follow-up of 90 days after contrast media exposure. Correction of urine VDBP concentrations for creatinine excretion confirmed their predictive value and was consistent with increased levels of urinary Kidney Injury Molecule-1 (KIM-1) and baseline plasma creatinine in patients with the above-mentioned complications. The impact of urinary VDBP and KIM-1 on MARE was independent of known contrast-induced nephropathy (CIN) risk factors such as anemia, preexisting renal failure, preexisting heart failure, and diabetes.
Conclusions
Urinary VDBP is a promising novel biomarker of major contrast-induced nephropathy-associated events 90 days after contrast media exposure.
In the context of an aging population and a shift of the medical paradigm towards individualized medicine in health care, nanostructured lanthanide-doped sodium yttrium fluoride (NaYF4) represents an exciting class of upconversion nanomaterials (UCNM) suited to advancing developments in biomedicine and biodetection. Although lanthanide-doped NaYF4 is one of the most studied upconversion nanomaterials among the various fluoride-based upconversion (UC) phosphors, many questions remain open concerning the interplay of the population routes of the sensitizer and activator electronic states involved in the different upconversion luminescence photophysics, as well as the role of phonon coupling. This collective work aims at a detailed understanding of the upconversion mechanism in nanoscale NaYF4-based materials co-doped with several lanthanides, from Yb3+ and Er3+ as the "standard" type of upconversion nanoparticles (UCNP) up to advanced UCNP with Gd3+ and Nd3+. In particular, the impact of the crystal lattice structure and of the resulting lattice phonons on the upconversion luminescence was investigated in detail on the basis of different mixtures of cubic and hexagonal nanoscale NaYF4 crystals. Three synthesis methods were employed, depending on the respective central spectroscopic questions. NaYF4-based upconversion nanoparticles doped with several combinations of lanthanides (Yb3+, Er3+, Gd3+ and Nd3+) were synthesized successfully using a hydrothermal synthesis method under mild conditions as well as a co-precipitation and a high-temperature co-precipitation technique. Structural information was gathered by means of X-ray diffraction (XRD), transmission electron microscopy (TEM), dynamic light scattering (DLS), Raman spectroscopy and inductively coupled plasma optical emission spectrometry (ICP-OES). The results were discussed in detail in relation to the spectroscopic findings.
A variable spectroscopic setup was developed for multi-parameter upconversion luminescence studies at temperatures from 4 K to 328 K. In particular, the study of the thermal behavior of the upconversion luminescence, together with time-resolved area-normalized emission spectra, was a prerequisite for a detailed understanding of intramolecular deactivation processes, of structural changes upon annealing or with Gd3+ concentration, and of the role of phonon coupling for the upconversion efficiency. It subsequently became possible to synthesize UCNP with tailored upconversion luminescence properties. Finally, the potential of UCNP for life-science applications is discussed in the context of current needs and improvements of nanomaterial-based optical sensors, with the "standard" UCNP design adapted to the special conditions of the biological matrix in terms of better biocompatibility, i.e. a lower impact on biological tissue and higher penetrability for the excitation light. A first step in this direction was the use of Nd3+ ions as a new sensitizer in tri-doped NaYF4-based UCNP, whose achieved absolute and relative temperature sensitivity is comparable to other types of local temperature sensors in the literature.
This work deals with the preparation and characterization of thermoresponsive films on gold electrodes by immobilizing a previously synthesized thermoresponsive polymer. Three different copolymers (polymers I, II and III) from the group of thermally switchable poly(oligo(ethylene glycol) methacrylates) served as the basis for developing the responsive interface.
Turbidimetric measurements of the copolymers in solution showed that the cloud point depends on the pH, the presence of salts, and the ionic strength of the solution. After characterization of the polymers in solution, experiments on the covalent coupling of polymers I to III to the surface of the gold electrodes were carried out. While for polymers I and II the coupling was based on amide linkage, for polymer III a photoinduced attachment with simultaneous cross-linking was chosen as an alternative immobilization method. For all polymers, successful coupling was verified electrochemically by cyclic voltammetry and impedance spectroscopy in K3/4[Fe(CN)6] solutions. As ellipsometry measurements showed, the resulting polymer films differed in thickness. Coupling via amide linkage yielded thin films (10–15 nm), whereas the photo-cross-linked film was considerably thicker (70–80 nm) and insulated the underlying surface relatively well.
Electrochemical temperature experiments on polymer-modified surfaces in solutions containing K3/4[Fe(CN)6] showed that the immobilized polymers I to III also exhibit responsive temperature behavior. For electrodes with immobilized polymers I and II, the temperature dependence of the parameter values is discontinuous: above a critical point (37 °C for polymer I and 45 °C for polymer II), the initially slow increase of the peak currents becomes markedly faster. The temperature behavior of polymer III, in contrast, is continuous up to 50 °C; here the peak current decreases throughout.
Furthermore, the electrodes based on polymers II and III were investigated for their application as a responsive matrix for biorecognition reactions. Small bioreceptors, TAG peptides, were coupled to polymer II- and polymer III-modified electrodes. The hydrophilic FLAG-TAG peptide changes the temperature behavior of the polymer II film only insignificantly, since it does not affect the hydrophilicity of the network. In addition, the effect of coupling ANTI-FLAG-TAG antibodies to FLAG-TAG-modified polymer II films was investigated. It could be shown that the antibodies bind specifically to FLAG-TAG-modified polymer II; no unspecific binding of ANTI-FLAG-TAG to polymer II was observed. The temperature experiments showed that the thermal restructuring of the polymer II–FLAG-TAG film still takes place after antibody coupling. The influence of the ANTI-FLAG-TAG coupling is small, since the difference in hydrophilicity between polymer II and FLAG-TAG or ANTI-FLAG-TAG is too small.
For the investigations with polymer III electrodes, the considerably more hydrophobic HA-TAG peptide was selected in addition to the hydrophilic FLAG-TAG peptide. As with the polymer II electrode, the coupled FLAG-TAG peptide affects the temperature behavior of the polymer III network only slightly. The measured current values are lower than for the polymer III electrode. The temperature behavior of the FLAG-TAG electrode resembles that of the pure polymer III electrode: the current values decrease continuously until a temperature of about 40 °C is reached, at which a plateau is observed. Evidently, FLAG-TAG again does not substantially change the hydrophilicity of the polymer III network. The hydrophobic HA-TAG peptide coupled to polymer III electrodes, in contrast, strongly affects the swelling state of the network. The currents for the HA-TAG electrodes are considerably lower than those for the FLAG-TAG–polymer III electrodes, which can be attributed to a lower water content and a thicker film. Already from 30 °C onward, the current values increase, which is not observed for polymer III or polymer III–FLAG-TAG electrodes. The coupled hydrophobic HA-TAG peptide displaces water from the polymer III network, which results in compaction of the film already at room temperature. As a consequence, the film hardly compresses further as the temperature rises; the current values then increase in line with the temperature-dependent diffusion of the redox couple. These investigations show that the HA-TAG peptide is much better suited as an anchor molecule for a potential use of polymer III films for sensing purposes, since it differs markedly from polymer III in hydrophilicity.
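The link between peak current and probe diffusion invoked above is, for a reversible redox couple, described by the Randles–Ševčík equation — a standard electrochemistry relation quoted here for orientation, not an equation taken from the thesis:

```latex
i_{p} = 0.4463\, n F A C \sqrt{\frac{n F v D}{R T}}
```

with $n$ electrons transferred, $F$ the Faraday constant, $A$ the electrode area, $C$ the bulk concentration, $v$ the scan rate, and $D$ the (temperature-dependent) diffusion coefficient of the redox probe; as $D$ rises with temperature, so does $i_p$.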
In the present work, the planetary boundary layer at Ny-Ålesund, Svalbard, is investigated both with respect to small-scale ("micrometeorological") effects and in its coupling with synoptic-scale dynamics. For this purpose, various observational data from the atmospheric column and from near the surface are combined and evaluated. The resulting data sets are then used to validate a non-hydrostatic regional climate model. Furthermore, orographically induced influences, the nature of the ground, and the local heterogeneity of the surface are examined. To this end, meteorological quantities such as the variability of temperature and, in particular, the annual near-surface wind distribution are analyzed, and in-situ measured turbulent fluxes from the eddy-covariance measurement complexes near Ny-Ålesund and in the Bayelva valley are compared under the same aspect. It turns out that the eddy-covariance complex in the Bayelva valley is strongly affected by orographic channeling of the flow and is not suitable for comparisons with regional climate models at horizontal resolutions of <1 km. The high soil moisture in the Bayelva valley additionally leads to a considerably smaller Bowen ratio than would be expected for this region. The eddy-covariance complex near Ny-Ålesund, on the other hand, proves more suitable for such model comparisons owing to the typical coastal wind distribution and the representative footprint. The latter is established by determining the footprint climatology for the year 2013 with a current footprint model.
Furthermore, the effect of (anti)cyclones over the archipelago on the temporal variability of the local boundary-layer properties is investigated and evaluated. For this purpose, a cyclone detection algorithm is applied to ERA-Interim reanalysis data sets, determining the frequency of nearly ideally concentric high- and low-pressure systems over three years. From this distribution, three periods of interest in different seasons are selected, and in the framework of process studies the local near-surface meteorological measurements, the turbulent exchange at the surface, and the boundary-layer dynamics in the column are examined. The temporal variability of the dynamic boundary-layer stability in the column is analyzed using highly time-resolved vertical profiles of the bulk Richardson number derived from composite profiles of remote-sensing instruments (radiometer, wind lidar) and mast data (BSRN mast), and the boundary-layer height is determined. These analyses reveal a clear dependence of the thermal stability on the passage of fronts and, associated with it, a considerable dependence of the boundary-layer dynamics, the boundary-layer height, and the turbulent exchange on the temporal variability of the wind speed in the column.
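The bulk-Richardson-number diagnostic used for the boundary-layer height can be sketched as follows. This is a minimal illustration of the standard approach (Ri_b relative to the lowest level, critical value 0.25), not the exact implementation of the thesis:

```python
import numpy as np

def bulk_richardson(z, theta_v, u, v, g=9.81):
    """Bulk Richardson number profile relative to the lowest level.

    z        heights [m], theta_v  virtual potential temperature [K],
    u, v     horizontal wind components [m/s].
    """
    thv0 = theta_v[0]
    wind2 = u**2 + v**2
    wind2 = np.where(wind2 == 0.0, np.nan, wind2)   # avoid division by zero
    return (g / thv0) * (theta_v - thv0) * (z - z[0]) / wind2

def boundary_layer_height(z, rib, ri_crit=0.25):
    """Lowest height at which Ri_b exceeds the critical value."""
    above = np.where(rib > ri_crit)[0]
    return z[above[0]] if above.size else np.nan
```

Applied level by level to a composite radiometer/lidar profile, the first crossing of the critical value gives a time series of boundary-layer height.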
Based on the site analyses and process studies, the near-surface measurements and the column observations, both from the above-mentioned remote-sensing instruments and from in-situ measurements (radiosondes), are compared for the period of a radiosounding campaign with the non-hydrostatic regional climate model WRF (ARW). Addressing the question of how well current schemes can reproduce boundary-layer characteristics in strongly structured orographic terrain in the Arctic, two boundary-layer parameterization schemes with different closure orders are validated. For this purpose, the temporal variability of temperature, humidity, and the wind field in the column up to 2000 m in the simulations is compared with the observational data. It is shown that, by modifying the initial-value fields, very good agreement between simulations and observations can already be achieved at a horizontal resolution of 1 km, and that the choice of the boundary-layer scheme has only a minor influence. From this, approaches for the further development of the parameterizations, as well as recommendations concerning the initial-value fields such as the land mask and the orography, are proposed.
Do properties of individual languages shape the mechanisms by which they are processed? By virtue of their non-concatenative morphological structure, the recognition of complex words in Semitic languages has been argued to rely strongly on morphological information and on decomposition into root and pattern constituents. Here, we report results from a masked priming experiment in Hebrew in which we contrasted verb forms belonging to two morphological classes, Paal and Piel, which display similar properties, but crucially differ on whether they are extended to novel verbs. Verbs from the open-class Piel elicited familiar root priming effects, but verbs from the closed-class Paal did not. Our findings indicate that, similarly to other (e.g., Indo-European) languages, down-to-the-root decomposition in Hebrew does not apply to stems of non-productive verbal classes. We conclude that the Semitic word processor is less unique than previously thought: Although it operates on morphological units that are combined in a non-linear way, it engages the same universal mechanisms of storage and computation as those seen in other languages.
Background:
Deception can distort psychological tests on socially sensitive topics. Understanding the cerebral processes that are involved in such faking can be useful in the detection and prevention of deception. Previous research shows that faking a brief implicit association test (BIAT) evokes a characteristic ERP response. It is not yet known whether temporarily available self-control resources moderate this response. We randomly assigned 22 participants (15 females, 24.23 ± 2.91 years old) to a counterbalanced repeated-measurements design. Participants first completed a Brief-IAT (BIAT) on doping attitudes as a baseline measure and were then instructed to fake a negative doping attitude both when self-control resources were depleted and non-depleted. Cerebral activity during BIAT performance was assessed using high-density EEG.
Results:
Compared to the baseline BIAT, event-related potentials showed a first interaction at the parietal P1, while significant post hoc differences were found only at the later occurring late positive potential. Here, significantly decreased amplitudes were recorded for 'normal' faking, but not in the depletion condition. In source space, enhanced activity was found for 'normal' faking in the bilateral temporoparietal junction. Behaviorally, participants were successful in faking the BIAT in both conditions.
Conclusions:
Results indicate that temporarily available self-control resources do not affect overt faking success on a BIAT. However, differences were found on an electrophysiological level. This indicates that while on a phenotypical level self-control resources play a negligible role in deliberate test faking, the underlying cerebral processes are markedly different.
Swets et al. (2008. Underspecification of syntactic ambiguities: Evidence from self-paced reading. Memory and Cognition, 36(1), 201–216) presented evidence that the so-called ambiguity advantage [Traxler et al. (1998). Adjunct attachment is not a form of lexical ambiguity resolution. Journal of Memory and Language, 39(4), 558–592], which has been explained in terms of the Unrestricted Race Model, can equally well be explained by assuming underspecification in ambiguous conditions driven by task-demands. Specifically, if comprehension questions require that ambiguities be resolved, the parser tends to make an attachment: when questions are about superficial aspects of the target sentence, readers tend to pursue an underspecification strategy. It is reasonable to assume that individual differences in strategy will play a significant role in the application of such strategies, so that studying average behaviour may not be informative. In order to study the predictions of the good-enough processing theory, we implemented two versions of underspecification: the partial specification model (PSM), which is an implementation of the Swets et al. proposal, and a more parsimonious version, the non-specification model (NSM). We evaluate the relative fit of these two kinds of underspecification to Swets et al.’s data; as a baseline, we also fitted three models that assume no underspecification. We find that a model without underspecification provides a somewhat better fit than both underspecification models, while the NSM model provides a better fit than the PSM. We interpret the results as lack of unambiguous evidence in favour of underspecification; however, given that there is considerable existing evidence for good-enough processing in the literature, it is reasonable to assume that some underspecification might occur. Under this assumption, the results can be interpreted as tentative evidence for NSM over PSM. 
More generally, our work provides a method for choosing between models of real-time processes in sentence comprehension that make qualitative predictions about the relationship between several dependent variables. We believe that sentence processing research will greatly benefit from a wider use of such methods.
Understanding the rates and processes of denudation is key to unraveling the dynamic processes that shape active orogens. This includes decoding the roles of tectonic and climate-driven processes in the long-term evolution of high-mountain landscapes in regions with pronounced tectonic activity and steep climatic and surface-process gradients. Well-constrained denudation rates can be used to address a wide range of geologic problems. In steady-state landscapes, denudation rates are argued to be proportional to tectonic or isostatic uplift rates and provide valuable insight into the tectonic regimes underlying surface denudation. The use of denudation rates based on terrestrial cosmogenic nuclides (TCN) such as 10Be (Beryllium-10) has become a widely used method to quantify catchment-mean denudation rates. Because such measurements are averaged over timescales of 102 to 105 years, they are not as susceptible to stochastic changes as shorter-term denudation rate estimates (e.g., from suspended sediment measurements) and are therefore considered more reliable for a comparison to long-term processes that operate on geologic timescales. However, the impact of various climatic, biotic, and surface processes on 10Be concentrations and the resultant denudation rates remains unclear and is subject to ongoing discussion. In this thesis, I explore the interaction of climate, the biosphere, topography, and geology in forcing and modulating denudation rates on catchment to orogen scales.
There are many processes in highly dynamic active orogens that may affect 10Be concentrations in modern river sands and therefore impact 10Be-derived denudation rates. The calculation of denudation rates from 10Be concentrations, however, requires a suite of simplifying assumptions that may not be valid or applicable in many orogens. I investigate how these processes affect 10Be concentrations in the Arun Valley of Eastern Nepal using 34 new 10Be measurements from the main stem Arun River and its tributaries. The Arun Valley is characterized by steep gradients in climate and topography, with elevations ranging from <100 m asl in the foreland basin to >8,000 m asl in the high sectors to the north. This is coupled with a five-fold increase in mean annual rainfall across strike of the orogen. Denudation rates from tributary samples increase toward the core of the orogen, from <0.2 to >5 mm/yr from the Lesser to the Higher Himalaya. Very high denudation rates (>2 mm/yr), however, are likely the result of 10Be TCN dilution by surface and climatic processes, such as large landsliding and glaciation, and thus may not be representative of long-term denudation rates. Mainstem Arun denudation rates increase downstream from ~0.2 mm/yr at the border with Tibet to 0.91 mm/yr at its outlet into the Sapt Kosi. However, the downstream 10Be concentrations may not be representative of the entire upstream catchment. Instead, I document evidence for downstream fining of grains from the Tibetan Plateau, resulting in an order-of-magnitude apparent decrease in the measured 10Be concentration.
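The conversion from a measured 10Be concentration to a catchment-mean denudation rate rests on the steady-state relation N = P / (λ + ρε/Λ). A minimal sketch follows; the constants are illustrative textbook values (in practice, production rates require site-specific scaling, which this sketch omits):

```python
# Steady-state denudation rate from a 10Be concentration.
LAMBDA_BE10 = 4.99e-7   # 10Be decay constant [1/yr]
ATT_LENGTH = 160.0      # spallation attenuation length [g/cm^2], typical value
RHO = 2.7               # rock density [g/cm^3]

def denudation_rate(n_be10, prod_rate):
    """Denudation rate [cm/yr] from a 10Be concentration n_be10 [atoms/g quartz]
    and a catchment-mean surface production rate prod_rate [atoms/g/yr],
    inverting N = P / (lambda + rho*eps/Lambda)."""
    return (ATT_LENGTH / RHO) * (prod_rate / n_be10 - LAMBDA_BE10)

# Illustrative numbers (hypothetical, not measurements from this thesis):
eps_mm_yr = denudation_rate(1.5e4, 12.0) * 10.0  # convert cm/yr -> mm/yr
```

Because ε enters inversely, grain-size sorting or landslide dilution that lowers the measured concentration N inflates the apparent denudation rate, which is the bias discussed above.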
In the Arun Valley and across the Himalaya, topography, climate, and vegetation are strongly interrelated. The observed increase in denudation rates at the transition from the Lesser to the Higher Himalaya corresponds to abrupt increases in elevation, hillslope gradient, and mean annual rainfall. Thus, across strike (N-S), it is difficult to decipher the potential impacts of climate and vegetation cover on denudation rates. To further evaluate these relationships I instead took advantage of an along-strike west-to-east increase of mean annual rainfall and vegetation density in the Himalaya. An analysis of 136 published 10Be denudation rates revealed that median denudation rates do not vary considerably along strike of the Himalaya, ~1500 km E-W. However, the range of denudation rates generally decreases from west to east, with more variable denudation rates in the northwestern regions of the orogen than in the eastern regions. This denudation rate variability decreases as vegetation density increases (R = -0.90), and increases proportionately to the annual seasonality of vegetation (R = 0.99). Moreover, rainfall and vegetation modulate the relationship between topographic steepness and denudation rates such that in the wet, densely vegetated regions of the Himalaya, topography responds more linearly to changes in denudation rates than in dry, sparsely vegetated regions, where the response of topographic steepness to denudation rates is highly nonlinear. Understanding the relationships between denudation rates, topography, and climate is also critical for interpreting sedimentary archives. However, there is a lack of understanding of how terrestrial organic matter is transported out of orogens and into sedimentary archives. Plant wax lipid biomarkers derived from terrestrial and marine sedimentary records are commonly used as a paleohydrologic proxy to help elucidate these problems.
I address the issue of how to interpret the biomarker record by using the plant wax isotopic composition of modern suspended and riverbank organic matter to identify and quantify organic matter source regions in the Arun Valley. Topographic and geomorphic analysis, provided by the 10Be catchment-mean denudation rates, reveals that a combination of topographic steepness (as a proxy for denudation) and vegetation density is required to capture organic matter sourcing in the Arun River.
My studies highlight the importance of a rigorous and careful interpretation of denudation rates in tectonically active orogens that are furthermore characterized by strong climatic and biotic gradients. Unambiguous information about these issues is critical for correctly decoding and interpreting the possible tectonic and climatic forces that drive erosion and denudation, and the manifestation of the erosion products in sedimentary archives.
It is quite generally assumed that the overdamped Langevin equation provides a quantitative description of the dynamics of a classical Brownian particle in the long time limit. We establish and investigate a paradigm anomalous diffusion process governed by an underdamped Langevin equation with an explicit time dependence of the system temperature and thus of the diffusion and damping coefficients. We show that for this underdamped scaled Brownian motion (UDSBM) the overdamped limit fails to describe the long time behaviour of the system and may practically not even exist at all for a certain range of the parameter values. Thus persistent inertial effects play a non-negligible role even at significantly long times. From this study a general question arises on the applicability of the overdamped limit to describe the long time motion of an anomalously diffusing particle, with profound consequences for the relevance of overdamped anomalous diffusion models. We elucidate our results in view of analytical and simulation results for the anomalous diffusion of particles in free cooling granular gases.
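The underdamped Langevin equation with time-dependent temperature referred to above can be written in its standard form as follows; the power-law time dependence shown for the diffusivity is the scaling ansatz of scaled Brownian motion, sketched here rather than copied from the paper:

```latex
m\,\dot{v}(t) = -\gamma(t)\,v(t) + \sqrt{2\,\gamma(t)\,k_{B} T(t)}\;\xi(t),
\qquad
D(t) = \frac{k_{B} T(t)}{\gamma(t)} \simeq D_{0}\left(\frac{t}{t_{0}}\right)^{\alpha-1}
```

where $\xi(t)$ is white Gaussian noise with $\langle \xi(t)\xi(t')\rangle = \delta(t-t')$. The overdamped limit discards the inertial term $m\dot{v}$; the central claim above is that for UDSBM this truncation fails at long times.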
Himalayan water resources attract a rapidly growing number of hydroelectric power projects (HPP) to satisfy Asia's soaring energy demands. Yet HPP operating or planned in steep, glacier-fed mountain rivers face hazards of glacial lake outburst floods (GLOFs) that can damage hydropower infrastructure, alter water and sediment yields, and compromise livelihoods downstream. Detailed appraisals of such GLOF hazards are limited to case studies, however, and a more comprehensive, systematic analysis remains elusive. To this end we estimate the regional exposure of 257 Himalayan HPP to GLOFs, using a flood-wave propagation model fed by Monte Carlo-derived outburst volumes of >2300 glacial lakes. We interpret the spread of the peak discharges modeled in this way as a predictive uncertainty that arises mainly from outburst volumes and dam-breach rates that are difficult to assess before dams fail. We find that 66% of sampled HPP are on potential GLOF tracks, and up to one third of these HPP could experience GLOF discharges well above local design floods; this exposure is set to grow as hydropower development continues to seek higher sites closer to glacial lakes. We compute that this systematic push of HPP into headwaters effectively doubles the uncertainty about GLOF peak discharge in these locations. Peak discharges farther downstream, in contrast, are easier to predict because GLOF waves attenuate rapidly. Considering this systematic pattern of regional GLOF exposure might aid the site selection of future Himalayan HPP. Our method can augment, and help to regularly update, current hazard assessments, given that global warming is likely changing the number and size of Himalayan meltwater lakes.
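The Monte Carlo treatment of uncertain outburst volumes can be sketched as below. The lognormal volume distribution, its parameters, and the power-law coefficients of the peak-discharge relation are all illustrative assumptions for a single hypothetical lake, not the study's calibrated values:

```python
import numpy as np

rng = np.random.default_rng(42)

def peak_discharge(volume_m3, a=0.0048, b=0.896):
    """Empirical power-law breach peak discharge Q_p = a * V**b [m^3/s].
    Coefficients are placeholders for illustration only."""
    return a * volume_m3**b

# Monte Carlo sample of outburst volumes for one hypothetical glacial lake [m^3]
volumes = rng.lognormal(mean=np.log(5e6), sigma=0.8, size=10_000)
q_peak = peak_discharge(volumes)

# The spread of modeled peak discharges quantifies the predictive uncertainty
q5, q50, q95 = np.percentile(q_peak, [5, 50, 95])
```

In the study's framework, each sampled peak discharge would additionally be routed downstream with a flood-wave propagation model, which narrows the spread with distance as the waves attenuate.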
The molecular ability to selectively and efficiently convert sunlight into other forms of energy like heat, bond change, or charge separation is truly remarkable. The decisive steps in these transformations often happen on a femtosecond timescale and require transitions among different electronic states that violate the Born-Oppenheimer approximation (BOA). Non-BOA transitions pose challenges to both theory and experiment. From a theoretical point of view, excited state dynamics and nonadiabatic transitions both are difficult problems (see Figure 1(a)). However, the theory on non-BOA dynamics has advanced significantly over the last two decades. Full dynamical simulations for molecules of the size of nucleobases have been possible for a couple of years and allow predictions of experimental observables like photoelectron energy or ion yield. The availability of these calculations for isolated molecules has spurred new experimental efforts to develop methods that are sufficiently distinct from all-optical techniques. For the determination of transient molecular structure, femtosecond X-ray diffraction and electron diffraction have been implemented on optically excited molecules.
In this thesis, the two prototype catalysts Fe(CO)₅ and Cr(CO)₆ are investigated with time-resolved photoelectron spectroscopy at a high harmonic setup. In both of these metal carbonyls, a UV photon can induce the dissociation of one or more ligands of the complex. The mechanism of the dissociation has been debated over the last decades. The electronic dynamics of the first dissociation occur on the femtosecond timescale.
For the experiment, an existing high harmonic setup was moved to a new location, extended, and characterized. The modified setup can induce dynamics in gas phase samples with photon energies of 1.55 eV, 3.10 eV, and 4.65 eV. The valence electronic structure of the samples can be probed with photon energies between 20 eV and 40 eV. The temporal resolution is 111 fs to 262 fs, depending on the combination of the two photon energies.
The electronically excited intermediates of the two complexes, as well as of the reaction product Fe(CO)₄, could be observed with photoelectron spectroscopy in the gas phase for the first time. However, photoelectron spectroscopy gives access only to the final ionic states; corresponding calculations to simulate these spectra are still in development. The peak energies and their evolution in time with respect to the initiating pump pulse have been determined, and these peaks have been assigned based on literature data. The spectra of the two complexes show clear differences. The dynamics have been interpreted with the assumption that the motion of peaks in the spectra relates to the movement of the wave packet in the multidimensional energy landscape. The results largely confirm existing models for the reaction pathways. In both metal carbonyls, this pathway involves a direct excitation of the wave packet to a metal-to-ligand charge transfer state and the subsequent crossing to a dissociative ligand field state. The coupling of the electronic dynamics to the nuclear dynamics could explain the slower dissociation in Fe(CO)₅ as compared to Cr(CO)₆.
Udmurt as an OV language
(2016)
This is the first study to investigate Hubert Haider's (2000, 2010, 2013, 2014) proposed systematic differences between OV and VO languages in a family other than Germanic. Its aim is to gather evidence on whether basic word order is predictive of further properties of a language. The languages under investigation are the Finno-Ugric languages Udmurt (as an OV language) and Finnish (as a VO language). Counter to Kayne (1994), Haider proposes that the structure of a sentence with a head-final VP is fundamentally different from that of a sentence with a head-initial VP, e.g., OV languages do not exhibit a VP-shell structure, and they do not employ a TP layer with a structural subject position. Haider's proposed structural differences are said to result in the following empirically testable differences:
(a) VP: the availability of VP-internal adverbial intervention and scrambling only in OV-VPs;
(b) subjects: the lack of certain subject-object asymmetries in OV languages, i.e., lack of the subject condition and lack of superiority effects;
(c) V-complexes: the availability of partial predicate fronting only in OV languages; different orderings between selecting and selected verbs; the intervention of non-verbal material between verbs only in VO languages;
(d) V-particles: differences in the distribution of resultative phrases and verb particles.
Udmurt and Finnish behave in line with Haider's predictions with regard to the status of the subject, with regard to the order of selecting and selected verbs, and with regard to the availability of partial predicate fronting. Moreover, Udmurt allows for adverbial intervention and scrambling, as predicted, whereas the status of these properties in Finnish could not be reliably determined due to obligatory V-to-T. There is also counterevidence to Haider's predictions: Udmurt allows for non-verbal material between verbs, and the distribution of resultative phrases and verb particles is essentially as free as the distribution of adverbial phrases in both Finno-Ugric languages. As such, Haider's theory is not falsified by the data from Udmurt and Finnish (except for his theory on verb particles), but it is also not fully supported by the data.
The advantages of remote sensing using Unmanned Aerial Vehicles (UAVs) are a high spatial resolution of images, temporal flexibility, and narrow-band spectral data from different wavelength domains. This enables the detection of spatio-temporal dynamics of environmental variables, like plant-related carbon dynamics in agricultural landscapes. In this paper, we quantify spatial patterns of fresh phytomass and related carbon (C) export using imagery captured by a 12-band multispectral camera mounted on the fixed-wing UAV Carolo P360. The study was performed in 2014 at the experimental area CarboZALF-D in NE Germany. From radiometrically corrected and calibrated images of lucerne (Medicago sativa), the performance of four commonly used vegetation indices (VIs) was tested using band combinations of six near-infrared bands. The highest correlation between ground-based measurements of fresh phytomass of lucerne and VIs was obtained for the Enhanced Vegetation Index (EVI) using near-infrared band b(899). The resulting map was transformed into dry phytomass and finally upscaled to total C export by harvest. The observed spatial variability at field- and plot-scale could be attributed in part to small-scale soil heterogeneity.
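The EVI used above is computed from near-infrared, red, and blue reflectances. The sketch below uses the standard MODIS-style coefficients (G = 2.5, C1 = 6, C2 = 7.5, L = 1); which camera bands serve as NIR, red, and blue (e.g. the 899 nm band as NIR) is a choice of the study, not encoded here:

```python
import numpy as np

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    """Enhanced Vegetation Index from surface reflectances in [0, 1].
    Accepts scalars or arrays (e.g. whole reflectance rasters)."""
    nir, red, blue = (np.asarray(x, dtype=float) for x in (nir, red, blue))
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)

# Example: a densely vegetated pixel (illustrative reflectances)
value = evi(nir=0.45, red=0.05, blue=0.03)
```

Applied pixel-wise to the corrected multispectral mosaic, this yields the EVI map that is then regressed against the ground-based fresh phytomass measurements.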
In complement to the well-established zwitterionic monomers 3-((2-(methacryloyloxy)ethyl)dimethylammonio)propane-1-sulfonate ("SPE") and 3-((3-methacrylamidopropyl)dimethylammonio)propane-1-sulfonate ("SPP"), the closely related sulfobetaine monomers were synthesized and polymerized by reversible addition-fragmentation chain transfer (RAFT) polymerization, using a fluorophore-labeled RAFT agent. The polyzwitterions of systematically varied molar mass were characterized with respect to their solubility in water, deuterated water, and aqueous salt solutions. These poly(sulfobetaine)s show thermoresponsive behavior in water, exhibiting upper critical solution temperatures (UCST). Phase transition temperatures depend notably on the molar mass and polymer concentration, and are much higher in D2O than in H2O. Also, the phase transition temperatures are effectively modulated by the addition of salts. The individual effects can in part be correlated with the Hofmeister series for the anions studied. Still, they depend in a complex way on the concentration and the nature of the added electrolytes, on the one hand, and on the detailed structure of the zwitterionic side chain, on the other hand. For the polymers with the same zwitterionic side chain, it is found that methacrylamide-based poly(sulfobetaine)s exhibit higher UCST-type transition temperatures than their methacrylate analogs. The extension of the distance between the polymerizable unit and the zwitterionic groups from 2 to 3 methylene units decreases the UCST-type transition temperatures. Poly(sulfobetaine)s derived from aliphatic esters show higher UCST-type transition temperatures than their analogs featuring cyclic ammonium cations. The UCST-type transition temperatures increase markedly with the spacer length separating the cationic and anionic moieties from 3 to 4 methylene units.
Thus, apparently small variations of their chemical structure strongly affect the phase behavior of the polyzwitterions in specific aqueous environments.
Water-soluble block copolymers were prepared from the zwitterionic monomers and the non-ionic monomer N-isopropylmethacrylamide (“NIPMAM”) by the RAFT polymerization. Such block copolymers with two hydrophilic blocks exhibit twofold thermoresponsive behavior in water. The poly(sulfobetaine) block shows an UCST, whereas the poly(NIPMAM) block exhibits a lower critical solution temperature (LCST). This constellation induces a structure inversion of the solvophobic aggregate, called “schizophrenic micelle”. Depending on the relative positions of the two different phase transitions, the block copolymer passes through a molecularly dissolved or an insoluble intermediate regime, which can be modulated by the polymer concentration or by the addition of salt. Whereas, at low temperature, the poly(sulfobetaine) block forms polar aggregates that are kept in solution by the poly(NIPMAM) block, at high temperature, the poly(NIPMAM) block forms hydrophobic aggregates that are kept in solution by the poly(sulfobetaine) block. Thus, aggregates can be prepared in water, which switch reversibly their “inside” to the “outside”, and vice versa.
The synthesis and photophysical properties of two new FRET pairs based on coumarin as a donor and DBD dye as an acceptor are described. The introduction of a bromo atom dramatically increases the two-photon excitation (2PE) cross section providing a 2PE-FRET system, which is also suitable for 2PE-FLIM.
It is commonly recognized that soil moisture exhibits spatial heterogeneities occurring in a wide range of scales. These heterogeneities are caused by different factors ranging from soil structure at the plot scale to land use at the landscape scale. There is an urgent need for effi-cient approaches to deal with soil moisture heterogeneity at large scales, where manage-ment decisions are usually made. The aim of this dissertation was to test innovative ap-proaches for making efficient use of standard soil hydrological data in order to assess seep-age rates and main controls on observed hydrological behavior, including the role of soil het-erogeneities.
As a first step, the applicability of a simplified Buckingham-Darcy method for estimating deep seepage fluxes from point information on soil moisture dynamics was assessed. This was done in a numerical experiment considering a broad range of soil textures and textural heterogeneities. The method performed well for most soil texture classes. However, in pure sand, where seepage fluxes were dominated by heterogeneous flow fields, it turned out not to be applicable, because it simply neglects the effect of water flow heterogeneity. In this study, a need for new efficient approaches to handle heterogeneities in one-dimensional water flux models was identified.
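The simplified Buckingham-Darcy approach can be sketched as follows. Under a unit-gradient assumption the seepage flux reduces to q ≈ K(θ), with the unsaturated conductivity commonly taken from the Mualem–van Genuchten model. This is a minimal illustrative sketch, not the dissertation's code; all parameter values are hypothetical example values.

```python
# Illustrative sketch: unit-gradient Buckingham-Darcy seepage estimate,
# q ~ K(theta), with Mualem-van Genuchten unsaturated conductivity.

def effective_saturation(theta, theta_r, theta_s):
    """Normalize volumetric water content to effective saturation Se in (0, 1)."""
    return (theta - theta_r) / (theta_s - theta_r)

def hydraulic_conductivity(theta, theta_r, theta_s, K_s, n, l=0.5):
    """Mualem-van Genuchten unsaturated conductivity K(theta) [same units as K_s]."""
    m = 1.0 - 1.0 / n
    se = effective_saturation(theta, theta_r, theta_s)
    return K_s * se**l * (1.0 - (1.0 - se**(1.0 / m))**m) ** 2

def seepage_flux(theta_series, theta_r, theta_s, K_s, n):
    """Unit-gradient assumption: seepage flux equals K at the observed moisture."""
    return [hydraulic_conductivity(t, theta_r, theta_s, K_s, n) for t in theta_series]

# Example with loamy-sand-like parameters (hypothetical values, K_s in cm/d)
fluxes = seepage_flux([0.15, 0.20, 0.25], theta_r=0.05, theta_s=0.41, K_s=106.1, n=1.89)
```

Because K(θ) rises steeply with moisture content, the estimated seepage flux reacts strongly to small moisture changes, which is why point measurements can carry useful flux information at all.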
As a further step, an approach to turn the problem of soil moisture heterogeneity into a solution was presented: principal component analysis was applied to make use of the variability among soil moisture time series for analyzing apparently complex soil hydrological systems. It can be used for identifying the main controls on the hydrological behavior, quantifying their relevance, and describing their particular effects by functional averaged time series. The approach was first tested with soil moisture time series simulated for different texture classes in homogeneous and heterogeneous model domains. Afterwards, it was applied to 57 moisture time series measured in a multifactorial long-term field experiment in Northeast Germany.
The dimensionality of both data sets was rather low, because more than 85 % of the total moisture variance could already be explained by the hydrological input signal and by signal transformation with soil depth. The perspective of signal transformation, i.e., analyzing how hydrological input signals (e.g., rainfall, snow melt) propagate through the vadose zone, turned out to be a valuable supplement to the common mass flux considerations. Neither different textures nor spatial heterogeneities affected the general kind of signal transformation, showing that complex spatial structures do not necessarily evoke a complex hydrological behavior. In the case of the field-measured data, another 3.6 % of the total variance was unambiguously explained by different cropping systems. Additionally, it was shown that different soil tillage practices did not affect the soil moisture dynamics at all.
The presented approach does not require a priori assumptions about the nature of physical processes, and it is not restricted to specific scales. Thus, it opens various possibilities to incorporate the key information from monitoring data sets into the modeling exercise and thereby reduce model uncertainties.
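The core of the PCA step can be illustrated in a few lines: each moisture time series is treated as one row of a data matrix, and the leading components then correspond to the dominant controls (e.g., the shared hydrological input signal). This is an assumed workflow for illustration, not the dissertation's implementation.

```python
# Illustrative sketch: PCA of soil moisture time series via SVD.
import numpy as np

def moisture_pca(series_matrix):
    """series_matrix: rows = sensors/locations, columns = time steps.
    Returns explained variance ratios and component time series."""
    X = series_matrix - series_matrix.mean(axis=0, keepdims=True)  # center each time step
    # SVD of the centered data yields the principal components directly,
    # without explicitly forming a covariance matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    return explained, Vt  # rows of Vt: component time series ("functional averaged" signals)

# Synthetic example: 10 sensors sharing one common input signal plus noise
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)
common = np.sin(t)                          # stand-in for a shared hydrological input
X = np.outer(rng.uniform(0.5, 1.5, 10), common) + 0.05 * rng.normal(size=(10, 200))
explained, components = moisture_pca(X)
# The first component captures most of the variance shared among the sensors
```

The explained-variance ratios quantify the relevance of each control, mirroring the >85 % figure reported above for the first two signal-related components.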
Exploring generalisation following treatment of language deficits in aphasia can provide insights into the functional relation of the cognitive processing systems involved. In the present study, we first review treatment outcomes of interventions targeting sentence processing deficits and, second, report a treatment study examining the occurrence of practice effects and generalisation in sentence comprehension and production. In order to explore the potential linkage between processing systems involved in comprehending and producing sentences, we investigated whether improvements generalise within (i.e., uni-modal generalisation in comprehension or in production) and/or across modalities (i.e., cross-modal generalisation from comprehension to production or vice versa). Two individuals with aphasia displaying co-occurring deficits in sentence comprehension and production were trained on complex, non-canonical sentences in both modalities. Two evidence-based treatment protocols were applied in a crossover intervention study with the sequence of treatment phases being randomly allocated. Both participants benefited significantly from treatment, leading to uni-modal generalisation in both comprehension and production. However, cross-modal generalisation did not occur. The magnitude of uni-modal generalisation in sentence production was related to participants’ sentence comprehension performance prior to treatment. These findings support the assumption of modality-specific sub-systems for sentence comprehension and production, being linked uni-directionally from comprehension to production.
This article assesses the distance between the laws of stochastic differential equations with multiplicative Lévy noise on path space in terms of their characteristics. The notion of transportation distance on the set of Lévy kernels introduced by Kosenkova and Kulik yields a natural and statistically tractable upper bound on the noise sensitivity. This extends recent results for the additive case in terms of coupling distances to the multiplicative case. The strength of this notion is shown in a statistical implementation for simulations and the example of a benchmark time series in paleoclimate.
Transmorphic
(2016)
Defining Graphical User Interfaces (GUIs) through functional abstractions can reduce the complexity that arises from mutable abstractions. Recent examples, such as Facebook's React GUI framework, have shown how modelling the view as a functional projection from the application state to a visual representation can reduce the number of interacting objects and thus help to improve the reliability of the system. This, however, comes at the price of a more rigid functional framework in which programmers are forced to express visual entities with functional abstractions, detached from the way one intuitively thinks about the physical world.
In contrast, the GUI framework Morphic allows interactions in the graphical domain, such as grabbing, dragging, or resizing elements, to evolve an application at runtime, providing liveness and directness in the development workflow. Modelling each visual entity through mutable abstractions, however, makes it difficult to ensure correctness when GUIs start to grow more complex. Furthermore, by evolving morphs at runtime through direct manipulation, we diverge more and more from the symbolic description that corresponds to the morph. Given that both of these approaches have their merits and problems, is there a way to combine them in a meaningful way that preserves their respective benefits?
As a solution for this problem, we propose to lift Morphic's concept of direct manipulation from the mutation of state to the transformation of source code. In particular, we will explore the design, implementation, and integration of a bidirectional mapping between the graphical representation and a functional and declarative symbolic description of a graphical user interface within a self-hosted development environment. We will present Transmorphic, a functional take on the Morphic GUI framework, where the visual and structural properties of morphs are defined in a purely functional, declarative fashion. In Transmorphic, the developer is able to assemble different morphs at runtime through direct manipulation, which is automatically translated into changes in the code of the application. In this way, the comprehensiveness and predictability of direct manipulation can be used in the context of a purely functional GUI, while the effects of the manipulation are reflected in a medium that is always in reach for the programmer and can even be used to incorporate the source transformations into the source files of the application.
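The functional-projection idea behind this approach can be sketched briefly: the view is a pure function from application state to a declarative morph description, and a direct-manipulation gesture becomes a transformation of that description rather than a mutation of live objects. The API below is hypothetical and only illustrates the concept; it is not Transmorphic's actual interface.

```python
# Conceptual sketch (hypothetical API): view as a pure projection,
# direct manipulation as a transformation of the declarative description.

def view(state):
    """Pure projection: application state -> declarative morph tree."""
    return {
        "type": "rectangle",
        "position": state["position"],
        "extent": state["extent"],
        "submorphs": [
            {"type": "text", "string": state["label"], "position": (5, 5)},
        ],
    }

def apply_drag(description, delta):
    """Translate a drag gesture into a new description, leaving the old one
    untouched (standing in for the source-code transformation described above)."""
    x, y = description["position"]
    dx, dy = delta
    return {**description, "position": (x + dx, y + dy)}

state = {"position": (10, 10), "extent": (100, 50), "label": "hello"}
dragged = apply_drag(view(state), (5, -3))
# dragged carries the new position; view(state) itself is unchanged
```

Because the manipulation yields a new declarative description instead of mutating the morph, the result can be written back into the application's source, which is the key move the thesis proposes.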
Species can adjust their traits in response to selection which may strongly influence species coexistence. Nevertheless, current theory mainly assumes distinct and time-invariant trait values. We examined the combined effects of the range and the speed of trait adaptation on species coexistence using an innovative multispecies predator–prey model. It allows for temporal trait changes of all predator and prey species and thus simultaneous coadaptation within and among trophic levels. We show that very small or slow trait adaptation did not facilitate coexistence because the stabilizing niche differences were not sufficient to offset the fitness differences. In contrast, sufficiently large and fast trait adaptation jointly promoted stable or neutrally stable species coexistence. Continuous trait adjustments in response to selection enabled a temporally variable convergence and divergence of species traits; that is, species became temporally more similar (neutral theory) or dissimilar (niche theory) depending on the selection pressure, resulting over time in a balance between niche differences stabilizing coexistence and fitness differences promoting competitive exclusion. Furthermore, coadaptation allowed prey and predator species to cluster into different functional groups. This equalized the fitness of similar species while maintaining sufficient niche differences among functionally different species delaying or preventing competitive exclusion. In contrast to previous studies, the emergent feedback between biomass and trait dynamics enabled supersaturated coexistence for a broad range of potential trait adaptation and parameters. We conclude that accounting for trait adaptation may explain stable and supersaturated species coexistence for a broad range of environmental conditions in natural systems when the absence of such adaptive changes would preclude it. 
Small trait changes, coincident with those that may occur within many natural populations, greatly enlarged the number of coexisting species.
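The feedback between biomass and trait dynamics described above can be sketched with a deliberately minimal one-prey/one-predator model: a prey defense trait lowers the attack rate at a growth cost, and the trait follows the fitness gradient of the prey's per-capita growth rate (quantitative-genetics style). This is a schematic illustration, not the authors' multispecies model, and all parameter values are made up.

```python
# Schematic sketch of coupled biomass-trait dynamics (illustrative parameters).
import math

def simulate(steps=20000, dt=0.001, v=0.05):
    prey, pred, u = 1.0, 0.5, 0.1        # biomasses and prey defense trait
    r, K, a0, e, m = 1.0, 2.0, 1.5, 0.5, 0.3
    for _ in range(steps):
        a = a0 * math.exp(-u * u)        # stronger defense (larger u) lowers attack rate
        dprey = prey * (r * (1.0 - prey / K) - 0.2 * u * u - a * pred)
        dpred = pred * (e * a * prey - m)
        # trait follows the fitness gradient of the prey's per-capita growth
        # rate W(u) = r(1 - prey/K) - 0.2 u^2 - a0 exp(-u^2) pred  w.r.t. u
        dW_du = -0.4 * u + 2.0 * u * a * pred
        prey += dt * dprey
        pred += dt * dpred
        u += dt * v * dW_du
    return prey, pred, u

prey, pred, u = simulate()
```

The speed parameter v and the cost term control how far and how fast the trait can adapt, which is exactly the pair of knobs (range and speed of adaptation) whose joint effect on coexistence the study examines.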
Well-developed phonological awareness skills are a core prerequisite for early literacy development. Although effective phonological awareness training programs exist, children at risk often do not reach similar levels of phonological awareness after the intervention as children with normally developed skills. Based on theoretical considerations and first promising results, the present study explores effects of an early musical training in combination with a conventional phonological training in children with weak phonological awareness skills. Using a quasi-experimental pretest-posttest control group design and measurements across a period of 2 years, we tested the effects of two interventions: a consecutive combination of a musical and a phonological training and a phonological training alone. The design made it possible to disentangle effects of the musical training alone as well as the effects of its combination with the phonological training. The outcome measures of these groups were compared with the control group with multivariate analyses, controlling for a number of background variables. The sample included N = 424 German-speaking children aged 4–5 years at the beginning of the study. We found a positive relationship between musical abilities and phonological awareness. Yet, whereas the well-established phonological training produced the expected effects, adding a musical training did not contribute significantly to phonological awareness development. Training effects were partly dependent on the initial level of phonological awareness. Possible reasons for the lack of training effects in the musical part of the combination condition as well as practical implications for early literacy education are discussed.
When realizing a programming language as a VM, implementing behavior as part of the VM, as primitives, usually results in reduced execution times. But supporting and developing primitive functions requires more effort than maintaining and using code in the hosted language, since debugging is harder and the turn-around times for VM parts are higher. Furthermore, source artifacts of primitive functions are seldom reused in new implementations of the same language. And if they are reused, the existing API usually is emulated, reducing the performance gains. Because of recent results in tracing dynamic compilation, the trade-off between performance and ease of implementation, reuse, and changeability might now be decided differently.
In this work, we investigate the trade-offs when creating primitives, and in particular how large a difference remains between primitive and hosted function run times in VMs with a tracing just-in-time compiler. To that end, we implemented the algorithmic primitive BitBlt three times for RSqueak/VM. RSqueak/VM is a Smalltalk VM utilizing the PyPy RPython toolchain. We compare primitive implementations in C, RPython, and Smalltalk, showing that, due to the tracing just-in-time compiler, the performance gap has narrowed by one order of magnitude, to within one order of magnitude.
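For readers unfamiliar with the primitive being benchmarked: BitBlt ("bit block transfer") copies a rectangular region of pixels from a source form into a destination form, combining pixels under a selectable rule. The toy version below only illustrates the algorithm's core loop; it is not the RSqueak/VM implementation, which additionally handles clipping, pixel depths, and many more combination rules.

```python
# Toy sketch of the BitBlt primitive: copy a rectangular pixel region from a
# source form into a destination form, combining pixels with a rule.

def bitblt(dest, src, dest_x, dest_y, src_x, src_y, width, height, rule="copy"):
    """dest, src: 2-D lists of integer pixel words; modifies dest in place."""
    for row in range(height):
        for col in range(width):
            pixel = src[src_y + row][src_x + col]
            if rule == "copy":
                dest[dest_y + row][dest_x + col] = pixel
            elif rule == "or":
                dest[dest_y + row][dest_x + col] |= pixel

# Copy a 2x2 block from the top-left of src into dest at (1, 1)
src = [[1, 2, 0], [3, 4, 0], [0, 0, 0]]
dest = [[0] * 3 for _ in range(3)]
bitblt(dest, src, dest_x=1, dest_y=1, src_x=0, src_y=0, width=2, height=2)
# dest is now [[0, 0, 0], [0, 1, 2], [0, 3, 4]]
```

The tight, predictable inner loop is what makes BitBlt a good benchmark for tracing JIT compilers: once the trace is hot, the hosted Smalltalk version can approach the hand-written C primitive.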
This doctoral thesis seeks to elaborate how Wittgenstein’s very sparse writings on ethics and ethical thought, together with his later work on the more general problem of normativity and his approach to philosophical problems as a whole, can be applied to contemporary meta-ethical debates about the nature of moral thought and language and the sources of moral obligation. I begin with a discussion of Wittgenstein’s early “Lecture on Ethics”, distinguishing the thesis of a strict fact/value dichotomy that Wittgenstein defends there from the related thesis that all ethical discourse is essentially and intentionally nonsensical, an attempt to go beyond the limits of sense. The first chapter discusses and defends Wittgenstein’s argument that moral valuation always goes beyond any ascertaining of fact; the second chapter seeks to draw out the valuable insights from Wittgenstein’s (early) insistence that value discourse is nonsensical while also arguing that this thesis is ultimately untenable and also incompatible with later Wittgensteinian understanding of language. On the basis of this discussion I then take up the writings of the American philosopher Cora Diamond, who has worked out an ethical approach in a very closely Wittgensteinian spirit, and show how this approach shares many of the valuable insights of the moral expressivism and constructivism of contemporary authors such as Blackburn and Korsgaard while suggesting a way to avoid some of the problems and limitations of their approaches. Subsequently I turn to a criticism of the attempts by Lovibond and McDowell to enlist Wittgenstein in the support for a non-naturalist moral realism. A concluding chapter treats the ways that a broadly Wittgensteinian conception expands the subject of metaethics itself by questioning the primacy of discursive argument in moral thought and of moral propositions as the basic units of moral belief.
In experiments investigating sentence processing, eye movement measures such as fixation durations and regression proportions while reading are commonly used to draw conclusions about processing difficulties. However, these measures are the result of an interaction of multiple cognitive levels and processing strategies and thus are only indirect indicators of processing difficulty. In order to properly interpret an eye movement response, one has to understand the underlying principles of adaptive processing such as trade-off mechanisms between reading speed and depth of comprehension that interact with task demands and individual differences. Therefore, it is necessary to establish explicit models of the respective mechanisms as well as their causal relationship with observable behavior. There are models of lexical processing and eye movement control on the one side and models on sentence parsing and memory processes on the other. However, no model so far combines both sides with explicitly defined linking assumptions.
In this thesis, a model is developed that integrates oculomotor control with a parsing mechanism and a theory of cue-based memory retrieval. On the basis of previous empirical findings and independently motivated principles, adaptive, resource-preserving mechanisms of underspecification are proposed both on the level of memory access and on the level of syntactic parsing. The thesis first investigates the model of cue-based retrieval in sentence comprehension of Lewis & Vasishth (2005) with a comprehensive literature review and computational modeling of retrieval interference in dependency processing. The results reveal a great variability in the data that is not explained by the theory. Therefore, two principles, 'distractor prominence' and 'cue confusion', are proposed as an extension to the theory, thus providing a more adequate description of systematic variance in empirical results as a consequence of experimental design, linguistic environment, and individual differences. In the remainder of the thesis, four interfaces between parsing and eye movement control are defined: Time Out, Reanalysis, Underspecification, and Subvocalization. By comparing computationally derived predictions with experimental results from the literature, it is investigated to what extent these four interfaces constitute an appropriate elementary set of assumptions for explaining specific eye movement patterns during sentence processing. Through simulations, it is shown how this system of in itself simple assumptions results in predictions of complex, adaptive behavior.
In conclusion, it is argued that, on all levels, the sentence comprehension mechanism seeks a balance between necessary processing effort and reading speed on the basis of experience, task demands, and resource limitations. Theories of linguistic processing therefore need to be explicitly defined and implemented, in particular with respect to linking assumptions between observable behavior and underlying cognitive processes. The comprehensive model developed here integrates multiple levels of sentence processing that hitherto have only been studied in isolation. The model is made publicly available as an expandable framework for future studies of the interactions between parsing, memory access, and eye movement control.
Touring Katutura!
(2016)
Guided sightseeing tours of the former township of Katutura have been offered in Windhoek since the mid-1990s. City tourism in the Namibian capital had thus become, at quite an early point in time, part of the trend towards utilising poor urban areas for purposes of tourism – a trend that set in at the beginning of the same decade. Frequently referred to as “slum tourism” or “poverty tourism”, the phenomenon of guided tours around places of poverty has not only caused media sensation and much public outrage since its emergence; in the past few years, it has also developed into a vital field of scientific research. “Global Slumming” provides the grounds for a rethinking of the relationship between poverty and tourism in world society.
This book is the outcome of a study project of the Institute of Geography at the School of Cultural Studies and Social Science of the University of Osnabrueck, Germany. It represents the first empirical case study on township tourism in Namibia. It focuses on four aspects:
1. Emergence, development and (market) structure of township tourism in Windhoek
2. Expectations/imaginations, representations as well as perceptions of the township and its inhabitants from the tourist’s perspective
3. Perception and assessment of township tourism from the residents’ perspective
4. Local economic effects and the poverty-alleviating impact of township tourism
The aim is to make an empirical contribution to the discussion around the tourism-poverty nexus and to an understanding of the global phenomenon of urban poverty tourism.
Two experiments examined how individuals respond to a restriction presented within an approach versus an avoidance frame. In Study 1, working on a problem-solving task, participants were initially free to choose their strategy, but for a second task were told to change their strategy. The message to change was embedded in either an approach or avoidance frame. When confronted with an avoidance compared to an approach frame, the participants’ reactance toward the request was greater and, in turn, led to impaired performance. The role of reactance as a response to a threat to freedom was explicitly examined in Study 2, in which participants evaluated a potential change in policy affecting their program of study, explicitly varying whether a restriction was present or absent and whether the message was embedded in an approach versus avoidance frame. When communicated with an avoidance frame and as a restriction, participants showed the highest resistance in terms of reactance, message agreement, and evaluation of the communicator. The difference in agreement with the change was mediated by reactance only when a restriction was present. Overall, avoidance goal frames were associated with more resistance to change on different levels of experience (reactance, performance, and person perception). Reactance mediated the effect of goal frame on other outcomes only when a restriction was present.
Theses
(2016)
Thesen
(2016)
Thermophony in real gases
(2016)
A thermophone is an electrical device for sound generation. The advantages of thermophones over conventional sound transducers such as electromagnetic, electrostatic or piezoelectric transducers are their operational principle which does not require any moving parts, their resonance-free behavior, their simple construction and their low production costs.
In this PhD thesis, a novel theoretical model of thermophonic sound generation in real gases has been developed. The model is experimentally validated in a frequency range from 2 kHz to 1 MHz by testing more than fifty thermophones of different materials (including carbon nanowires, titanium, and indium tin oxide) and of different sizes and shapes, for sound generation in gases such as air, argon, helium, oxygen, nitrogen, and sulfur hexafluoride.
Unlike previous approaches, the presented model can be applied to different kinds of thermophones and various gases, taking into account the thermodynamic properties of thermophone materials and of adjacent gases, degrees of freedom and the volume occupied by the gas atoms and molecules, as well as sound attenuation effects, the shape and size of the thermophone surface, and the reduction of the generated acoustic power due to photonic emission. As a result, the model features better prediction accuracy than existing models by a factor of up to 100. Moreover, the new model explains previous experimental findings on thermophones which cannot be explained with the existing models.
The acoustic properties of the thermophones have been tested in several gases using unique, highly precise experimental setups comprising a laser Doppler vibrometer combined with a thin polyethylene film that acts as a broadband and resonance-free sound-pressure detector. Several outstanding properties of the thermophones have been demonstrated for the first time, including the ability to generate arbitrarily shaped acoustic signals, a greater acoustic efficiency compared to conventional piezoelectric and electrostatic airborne ultrasound transducers, and applicability as powerful and tunable sound sources with a bandwidth up to the megahertz range and beyond.
Additionally, new applications of thermophones, such as the study of physical properties of gases, thermo-acoustic gas spectroscopy, broadband characterization of transfer functions of sound and ultrasound detection systems, and applications in non-destructive materials testing, are discussed and experimentally demonstrated.