The link between cognitive scripts for consensual sexual interactions and attitudes towards sexual coercion was studied in 524 Polish high school students. We proposed that risky sexual scripts, containing risk elements linked to sexual aggression, would be associated with attitudes condoning sexual coercion. Pornography use and religiosity were included as predictors of participants’ risky sexual scripts and attitudes towards sexual coercion. Risky sexual scripts were linked to attitudes condoning sexual coercion. Pornography use was indirectly linked to attitudes condoning sexual coercion via risky sexual scripts. Religiosity showed a positive direct link with attitudes towards sexual coercion, but a negative indirect link through risky sexual scripts. The results are discussed regarding the significance of risky sexual scripts, pornography use, and religiosity in understanding attitudes towards sexual coercion as well as their implications for preventing sexually aggressive behaviour.
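The direct and indirect links reported above follow the product-of-coefficients decomposition commonly used in mediation analysis; a minimal sketch with purely hypothetical coefficients (not estimates from this study) illustrates how a positive direct link can coexist with a negative indirect link, as reported for religiosity:

```python
# Illustrative product-of-coefficients decomposition for a single mediator.
# All coefficients are hypothetical, not values estimated in the study.

def decompose(direct, a, b):
    """Return (indirect, total) effect from path coefficients.

    a: predictor -> mediator path
    b: mediator -> outcome path (controlling for the predictor)
    direct: predictor -> outcome path (controlling for the mediator)
    """
    indirect = a * b
    return indirect, direct + indirect

# religiosity-like case: positive direct link, negative indirect link
# via risky sexual scripts (a < 0: religiosity reduces risky scripts)
indirect, total = decompose(direct=0.20, a=-0.30, b=0.50)
```

Here the negative indirect path (-0.15) partially offsets the positive direct path, so the total effect (0.05) understates both component paths.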
In a network with a mixture of different electrophysiological types of neurons linked by excitatory and inhibitory connections, the temporal evolution passes through repeated epochs of intense global activity separated by intervals of low activity. This behavior mimics the "up" and "down" states observed experimentally in cortical tissue in the absence of external stimuli. We interpret the global dynamical features in terms of the individual dynamics of the neurons. In particular, we observe that the distribution of the membrane recovery variable within the network plays the crucial role both in the interruption and in the resumption of global activity. We also demonstrate that the behavior of neurons is influenced more by their presynaptic environment in the network than by their formal types, assigned in accordance with their response to constant current.
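The "membrane recovery variable" mentioned above is characteristic of Izhikevich-type neuron dynamics; a minimal sketch of one such neuron under constant current, assuming standard textbook parameters (the a, b, c, d values are illustrative defaults, not taken from the study):

```python
# Minimal sketch of a spiking neuron with a membrane recovery variable,
# in the spirit of the model referenced above (Izhikevich-type dynamics).
# Parameters a, b, c, d select the neuron's formal electrophysiological type.

def izhikevich_step(v, u, I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5):
    """Advance membrane potential v (mV) and recovery variable u by dt (ms)."""
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    spiked = v >= 30.0
    if spiked:                 # spike: reset v, increment recovery variable
        v, u = c, u + d
    return v, u, spiked

# drive one "regular spiking" neuron with constant current for 1 s
v, u, spikes = -65.0, -13.0, 0
for _ in range(2000):          # 2000 Euler steps of 0.5 ms
    v, u, spiked = izhikevich_step(v, u, I=10.0)
    spikes += spiked
```

In the network of the abstract, the values of u across all neurons (their distribution) gate when a global activity epoch can stop or restart.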
This contribution examines some central reports and motifs from the early sources of Islam concerning the military conflicts between the Prophet Muhammad and the Jews of Medina. The study is based on the biography of the Prophet by the scholar Muḥammad ibn Isḥāq (d. 150 AH), which remains authoritative to this day. Among other things, the contribution shows that there are numerous indications of alternative accounts of these conflicts, both within the genre of Sīra literature, to which Ibn Isḥāq's work belongs, and in the early traditions of Islamic jurisprudence and Qur'anic exegesis, as well as in the text of the Qur'an itself. In the first centuries of Islam, these accounts increasingly faded from view as a result of the triumph of Ibn Isḥāq's work, but they are of considerable interest for contemporary discourses on the relationship of Islam to non-Muslims. The aim of the study is to work out the normative significance of the different scenarios for fundamental questions, in particular for the relationship between Muslims and Jews. A particular focus of the contribution lies on different approaches to the famous report of the annihilation of the Jewish tribe of the Banū Qurayza following the Battle of the Trench.
The high energy intake from fats is a major factor in the development of obesity, which has led to worldwide efforts to reduce fat intake. Despite continuous improvement, however, fat-reduced foods do not match the palatability of their full-fat originals. The traditional view that the attractiveness of fats is determined solely by texture, odour, appearance, and post-ingestive effects is now being complemented by the concept of a gustatory perception of fat. In rodents, lipids were shown to be detected independently of the aforementioned properties; fatty acids released by lingual lipases act as gustatory stimuli, and fatty acid sensors are expressed in taste cells. Since the data for humans proved to be very limited, the aim of the present work was to investigate the molecular and histological prerequisites for a gustatory fat perception in humans.
First, human taste tissue was examined for the expression of fatty acid sensors using RT-PCR and immunohistochemical methods, and expressing cells were characterized and quantified in co-staining experiments. Expression was demonstrated for fatty-acid-sensitive receptors whose agonists cover the entire spectrum of short- to long-chain fatty acids (GPR43, GPR84, GPR120, CD36, KCNA5). Unambiguous detection of the protein was achieved for GPR120, the receptor specialized for long-chain fatty acids, in type I and type III taste cells of the circumvallate papillae. About 85 % of these GPR120-expressing cells contained none of the selected receptors for the taste qualities sweet (TAS1R2/3), umami (TAS1R1/3), or bitter (TAS2R38). Human taste papillae thus contain not only at least one fatty acid sensor but possibly also a specific, fatty-acid-sensitive cell population. Further RT-PCR experiments and in situ hybridization studies were carried out to clarify whether lipases exist in the von Ebner glands (VEG) that can release free fatty acids from triglycerides as a gustatory stimulus. Although the lipase F (LIPF) found in rodents was not expressed, the closely related lipases K, M, and N were detected in the serous cells of the VEG. In silico analyses of the secondary and tertiary structures showed high similarity to LIPF but also revealed differences in the binding pockets of the enzymes, pointing to a distinct substrate spectrum. The presence of a specific signal peptide makes secretion of the lipases into the saliva bathing the taste pores likely, and with it the provision of fatty acids as stimuli for fatty acid sensors.
The transmission of the signal evoked by these stimuli from taste cells to gustatory nerve fibres via P2X receptor multimers was investigated in short-term preference tests in the mouse as a model organism, following prior intervention with a P2X3/P2X2/3-specific antagonist. Perception of neither a fatty acid solution nor a sugar-containing control solution was impaired, whereas perception of a bitter solution was reduced. Based on these results, an involvement of the P2X3 homomer or the P2X2/3 heteromer is unlikely, although involvement of the P2X2 homomer, and thus of the gustatory nerve fibres, cannot be excluded.
The results of this work indicate that fundamental prerequisites for the gustatory perception of fat (or fatty acids) are met, and they contribute to our understanding of sensory fat perception and the regulation of fat intake. Knowledge of these regulatory mechanisms provides a basis for elucidating the causes of obesity and associated diseases, and thus for combating them.
In this contribution, we use first principles to study the co-adsorption and catalytic behavior of CO and O2 on a single gold atom deposited at defective magnesium oxide surfaces. Using cluster models and point charge embedding within a density functional theory framework, we simulate the CO oxidation reaction for Au1 on differently charged oxygen vacancies of MgO(001) to rationalize its experimentally observed lack of catalytic activity. Our results show that: (1) co-adsorption is weakly supported at F0 and F2+ defects but not at F1+ sites, (2) electron redistribution from the F0 vacancy via the Au1 cluster to the adsorbed molecular oxygen weakens the O2 bond, as required for a sustainable catalytic cycle, (3) a metastable carbonate intermediate can form on defects of the F0 type, (4) only a small activation barrier exists for the highly favorable dissociation of CO2 from F0, and (5) the moderate adsorption energy of the gold atom on the F0 defect cannot prevent insertion of molecular oxygen inside the defect. Because the color centers are not protected, the surface is invariably repaired by the surrounding oxygen and the catalytic cycle is irreversibly broken in the first oxidation step.
The consistently high sensory quality of baked goods, which is highly valued by consumers, is largely determined by the content of endogenous cereal enzymes. Since the appearance of breeding-related enzyme deficits, the use of technical enzymes to guarantee this required quality has been a fixture in the baking industry. Under food law, technical enzymes are not considered an ingredient, since they are theoretically converted during the baking process and no longer exert a technological effect in the end product. Especially in baked products, it must therefore be verified that the technical enzymes used are no longer present as an ingredient and thus escape a potential labelling obligation. To ensure economic efficiency, the quantitative use of technical enzymes in the baking industry must be controlled in order to achieve optimal effects and save costs. The aim of this work was therefore to develop an analytical method that permits the simultaneous detection of different technical enzymes and their quantification at trace levels, even in baked products.
To assess the effect of the technical enzymes Fungamyl (Novozymes), Amylase TXL (ASA Spezialenzyme GmbH), and Lipase FE-01 (ASA Spezialenzyme GmbH), baking trials were carried out, which showed that Fungamyl and Amylase TXL contributed to improved bread quality (volume yield, moisture content, sensory properties). The addition of Lipase FE-01 led to an increased formation of free fatty acids and had a negative effect on the sensory quality of the bread. This previously undescribed effect could be traced back to the use of a special oil as a baking ingredient, consisting exclusively of saturated fatty acids. This confirms the importance of selecting a suitable fat when adding technical lipase to the baking process.
To identify the enzymes contained in Fungamyl and Lipase FE-01, SDS-PAGE followed by in-gel digestion was applied to enable the analysis of proteolytically cleaved proteins by MALDI-TOF-MS. Fungamyl was shown to contain a mixture of 9.8 % alpha-amylase (Aspergillus oryzae) and 5.2 % endo-1,4-xylanase (Thermomyces lanuginosus). Lipase FE-01 consists of the lipase from Thermomyces lanuginosus, and Amylase TXL was identified as alpha-amylase (Aspergillus oryzae).
For the analysis of technical enzymes in baked goods, LC-MS/MS was chosen for its robustness and sensitivity. The development of such a method for the detection of specific peptides enabled the qualitative detection of the three enzymes alpha-amylase (Aspergillus oryzae), endo-1,4-xylanase (Thermomyces lanuginosus), and lipase (Thermomyces lanuginosus). In addition, quantitative determination in self-prepared reference materials (wheat flour, toast bread, and sponge biscuit) was achieved by linear calibration with synthetically produced peptides, incorporating a protein internal standard as well as isotope-labelled peptide standards. In a measurement time of less than 20 minutes, the enzyme alpha-amylase can be quantified from a concentration of 2.58 mg/kg (flour, biscuit) or 7.61 mg/kg (bread). At the same time, endo-1,4-xylanase can be quantified from a concentration of 7.75 mg/kg (bread), 3.64 mg/kg (biscuit), or 15.60 mg/kg (flour), and lipase from 1.26 mg/kg (flour, biscuit) or 2.68 mg/kg (bread). The method was statistically validated according to generally applied guidelines and delivered very robust and reproducible quantitative values with recovery rates between 50 % and 122 %. The primary goal of this work, the development of a quantitative multi-parameter method for the detection of technical enzymes in baked goods, was thus successfully achieved.
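The quantification step described above (linear calibration against synthetic peptides, normalised to an isotope-labelled internal standard) can be sketched as follows; all peak areas and concentrations are invented for illustration and are not values from the thesis:

```python
# Hypothetical sketch of peptide-based quantification: analyte peak areas are
# normalised to an isotope-labelled internal standard, a line is fitted to
# spiked calibration points, and unknowns are read off the fit.
# All numbers are illustrative, not values from the thesis.

def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
            / sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

conc  = [2.5, 5.0, 10.0, 20.0, 40.0]      # spiked enzyme level, mg/kg
ratio = [0.24, 0.51, 1.02, 1.98, 4.05]    # peak-area ratio analyte / standard
m, b = fit_line(conc, ratio)

def quantify(area_analyte, area_standard):
    """Enzyme concentration (mg/kg) from one sample measurement."""
    return (area_analyte / area_standard - b) / m

c = quantify(1520.0, 1500.0)               # sample with area ratio ~1.01
```

The isotope-labelled standard cancels out matrix and injection effects, which is why the calibration is built on area ratios rather than raw peak areas.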
Foam fractionation of surfactant and protein solutions is a process for separating surface-active molecules from each other on the basis of their differences in surface activity. The process is based on forming bubbles in a mixed solution, followed by the detachment and rise of the bubbles through a certain volume of this solution, and consequently the formation of a foam layer on top of the solution column. A systematic analysis of the whole process therefore comprises, first, investigations of the formation and growth of single bubbles in solution, which corresponds to the main principle of the well-known bubble pressure tensiometry. The second stage of the fractionation process includes the detachment of a single bubble from a pore or capillary tip and its rise through the respective aqueous solution. The third and final stage is the formation and stabilization of the foam created by these bubbles: the adsorption layer formed at the growing bubble surface is carried up and modified during the bubble's rise and finally ends up as part of the foam layer.
Bubble pressure tensiometry and bubble profile analysis tensiometry experiments were performed with protein solutions at different bulk concentrations, pH values, and ionic strengths in order to describe the accumulation of protein and surfactant molecules at the bubble surface. The results obtained from the two complementary methods allow an understanding of the adsorption mechanism, which is mainly governed by the diffusional transport of the adsorbing protein molecules to the bubble surface. This mechanism is the same as generally discussed for surfactant molecules. However, interesting peculiarities were observed in the protein adsorption kinetics at sufficiently short adsorption times. First, at short adsorption times the surface tension remains constant for a while before it decreases, as expected, due to the adsorption of proteins at the surface. This time interval is called the induction time, and it becomes shorter with increasing protein bulk concentration. Moreover, under special conditions the surface tension does not stay constant but even increases over a certain period of time. This so-called negative surface pressure was observed for BCS and BLG and discussed for the first time in terms of changes in the surface conformation of the adsorbing protein molecules. Usually, a negative surface pressure would correspond to a negative adsorption, which is of course impossible for the studied protein solutions. The phenomenon, which amounts to a few mN/m, was instead explained by simultaneous changes in the molar area required by the adsorbed proteins and the non-ideality of entropy of the interfacial layer. It is a transient phenomenon and exists only under dynamic conditions.
The experiments dedicated to the local velocity of rising air bubbles in solution were performed over a broad range of BLG concentrations, pH values, and ionic strengths. Additionally, rising-bubble experiments were done with surfactant solutions in order to validate the functionality of the instrument. It turns out that the velocity of a rising bubble is much more sensitive to adsorbing molecules than classical dynamic surface tension measurements. At very low BLG or surfactant concentrations, for example, the measured local velocity profile of an air bubble changes dramatically on time scales of seconds, while dynamic surface tensions still do not show any measurable changes on this time scale. The solution's pH and ionic strength are important parameters that govern the measured rising velocity for protein solutions. A general theoretical description of rising bubbles in surfactant and protein solutions is not available at present because of the complexity of the adsorption process at a bubble surface in a liquid flow field with simultaneous Marangoni effects. However, instead of modelling the complete velocity profile, new theoretical work has been started to evaluate the maximum of the profile as a characteristic parameter of the dynamic adsorption layer at the bubble surface in a more quantitative way.
The studies with protein-surfactant mixtures demonstrate impressively that the complexes formed by the two compounds change the surface activity relative to the original native protein molecules and therefore lead to a completely different retardation behavior of rising bubbles. Changes in the velocity profile can be interpreted qualitatively in terms of an increased or decreased surface activity of the formed protein-surfactant complexes. It was also observed that the pH and ionic strength of a protein solution have strong effects on the surface activity of the protein molecules, which, however, can differ between the rising bubble velocity and the equilibrium adsorption isotherms. These differences are not fully understood yet, but they give rise to discussions about the structure of the protein adsorption layer under dynamic conditions as opposed to the equilibrium state.
The third main stage of the discussed fractionation process is the formation and characterization of protein foams from BLG solutions at different pH values and ionic strengths. Of course, a minimum BLG concentration is required to form foams. This minimum protein concentration is again a function of solution pH and ionic strength, i.e. of the surface activity of the protein molecules. Although the hydrophobicity, and hence the surface activity, should be highest at the isoelectric point (at about pH 5 for BLG), the concentration and ionic strength effects on the rising velocity profile as well as on foamability and foam stability do not show a maximum there. This is another remarkable argument for the fact that the interfacial structure and behavior of BLG layers under dynamic conditions and at equilibrium are rather different. These differences are probably caused by the time required for BLG molecules to adopt the respective conformations once they are adsorbed at the surface.
All bubble studies described in this work refer to stages of the foam fractionation process. Experiments with different systems, mainly surfactant and protein solutions, were performed in order to form foams and finally recover a solution representing the foamed material. As foam consists to a large extent of foam lamellae (two adsorption layers with a liquid core), the foamate recovered from foaming experiments should be enriched in the stabilizing molecules. To determine the concentration of the foamate, the very sensitive bubble rising velocity profile method was applied again; it works for any type of surface-active material, including technical surfactants or protein isolates whose exact composition is unknown.
Proteins are natural polypeptides produced by cells; they can be found in both animals and plants, and possess a variety of functions. One of these functions is to provide structural support to the surrounding cells and tissues. For example, collagen (which is found in skin, cartilage, tendons and bones) and keratin (which is found in hair and nails) are structural proteins. When a tissue is damaged, however, the supporting matrix formed by structural proteins cannot always spontaneously regenerate. Tailor-made synthetic polypeptides can be used to help heal and restore tissue formation.
Synthetic polypeptides are typically synthesized by the so-called ring-opening polymerization (ROP) of α-amino acid N-carboxyanhydrides (NCA). Such synthetic polypeptides are generally non-sequence-controlled and thus less complex than proteins. As such, synthetic polypeptides are rarely as efficient as proteins in their ability to self-assemble into hierarchical or structural supramolecular assemblies in water, and thus often require rational design. In this doctoral work, two types of amino acids, γ-benzyl-L/D-glutamate (BLG / BDG) and allylglycine (AG), were selected to synthesize a series of (co)polypeptides of different compositions and molar masses.
A new and versatile synthetic route to prepare polypeptides was developed, and its mechanism and kinetics were investigated. The polypeptide properties were thoroughly studied and new materials were developed from them. In particular, these polypeptides were able to aggregate (or self-assemble) in solution into microscopic fibres, very similar to those formed by collagen. By doing so, they formed robust physical networks and organogels which could be processed into high water-content, pH-responsive hydrogels. Particles with highly regular and chiral spiral morphologies were also obtained by emulsifying these polypeptides. Such polypeptides and the materials derived from them are, therefore, promising candidates for biomedical applications.
Thesen (2016)
The Gradient Symbolic Computation (GSC) model presented in the keynote article (Goldrick, Putnam & Schwarz) constitutes a significant theoretical development, not only as a model of bilingual code-mixing, but also as a general framework that brings together symbolic grammars and graded representations. The authors are to be commended for successfully integrating a theory of grammatical knowledge with the voluminous research on lexical co-activation in bilinguals. It is, however, unfortunate that a certain conception of bilingualism was inherited from this latter research tradition, one in which the contrast between native and non-native language takes a back seat.
The present work deals with leadership behaviour in the public sector and with the factors influencing this behaviour. To this end, a taxonomy consisting of six meta-categories of leadership behaviour was developed; the meta-categories comprise task, relations, change, external, ethics, and casework orientation. An analysis of survey data collected for this work from employees and lower-level managers in three public agencies shows that this taxonomy is very well suited to capturing the reality of leadership in public administration.
A descriptive analysis of the data also shows a relatively large gap between the managers' self-assessments and the assessments of them by their staff. This gap is particularly pronounced for relations and change orientation.
The descriptive analysis is followed by an analysis of the factors influencing leadership behaviour. These factors can be assigned to four categories: "characteristics and traits of the managers", "expectations and interest of superiors", "characteristics and attitudes of subordinates", and "management instruments and framework conditions".
An analysis using hierarchical linear models shows that leadership behaviour is influenced above all by the managers' motivation to lead and their management orientation, by the public service orientation and the type of task of the subordinates, and by strategic manager selection and managers' measurement of performance against concrete targets.
The results of this work add the perspective of influencing factors to the literature on leadership behaviour in the public sector and, by means of the taxonomy employed, also contribute to the theoretical discussion of leadership behaviour in public management research. Beyond that, the findings offer administrative practice guidance on the relevant factors influencing leadership behaviour and on the considerable differences between self-perception and others' perception of leadership behaviour.
This article is a response to calls in prior research for more longitudinal analyses to better understand the foundations of PSM and related prosocial values. There is wide agreement that sorting out whether PSM-related values are stable or developable is crucial for theory-building, but also for tailoring hiring practices and human resource development programs. The article summarizes existing theoretical expectations, which turn out to be partially conflicting, and tests them against multiple waves of data from the German Socio-Economic Panel Study covering a period of sixteen years. It finds that PSM-related values of public employees are stable rather than dynamic, but tend to increase with age and decrease with organizational membership. The article also examines cohort effects, which have been neglected in prior work, and finds moderate evidence of differences between those born during the Second World War and later generations.
Introduction
Genes involved in body weight regulation that had previously been investigated in genome-wide association studies (GWAS) and in animal models were target-enriched and then subjected to massively parallel next-generation sequencing.
Methods
We enriched and re-sequenced continuous genomic regions comprising FTO, MC4R, TMEM18, SDCCAG8, TNKS, MSRA and TBC1D1 in a screening sample of 196 extremely obese children and adolescents with an age- and sex-specific body mass index (BMI) >= 99th percentile and 176 lean adults (BMI <= 15th percentile). 22 variants were confirmed by Sanger sequencing. Genotyping was performed in up to 705 independent obesity trios (an extremely obese child and both parents), 243 extremely obese cases and 261 lean adults.
Results and Conclusion
We detected 20 different non-synonymous variants, one frameshift and one nonsense mutation in the 7 continuous genomic regions in study groups of different weight extremes. For the SNP Arg695Cys (rs58983546) in TBC1D1 we detected nominal association with obesity (p(TDT) = 0.03 in 705 trios). Eleven of the variants were rare and thus detected only heterozygously, in up to ten individuals of the complete screening sample of 372 individuals. Two of them (in FTO and MSRA) were found in lean individuals, nine in extremely obese ones. In silico analyses of the 11 variants did not reveal functional implications of the mutations. Concordant with our hypothesis, we detected a rare variant that potentially leads to a loss of FTO function in a lean individual. For TBC1D1, contrary to our hypothesis, the loss-of-function variant (Arg443Stop) was found in an obese individual. Functional in vitro studies are warranted.
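The nominal association above was obtained with a transmission disequilibrium test (TDT); a hedged sketch of the usual McNemar-type statistic for trio designs, with illustrative transmission counts (not the counts underlying the reported p-value):

```python
# Hedged sketch of the transmission disequilibrium test (TDT) for trio
# designs such as the 705 obesity trios above. The transmission counts
# here are illustrative, not the counts behind p(TDT) = 0.03.

from math import erf, sqrt

def tdt(transmitted, untransmitted):
    """McNemar-type TDT: chi-square (1 df) and two-sided p-value.

    transmitted / untransmitted: counts of the candidate allele passed on
    (or not) to affected offspring by heterozygous parents.
    """
    chi2 = (transmitted - untransmitted) ** 2 / (transmitted + untransmitted)
    z = sqrt(chi2)
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0))))  # 1-df chi2 tail
    return chi2, p

chi2, p = tdt(transmitted=300, untransmitted=250)
```

Because only heterozygous parents are informative, the test is robust to population stratification, which is why trio designs are attractive for obesity candidate genes.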
The Qur'anic reception of the prophet Jonah essentially presupposes his biblical narrative and reinterprets it above all where a correction of his prophetic image is sought. The focus lies on the repentance, return, and redemption of Yūnus and his people. Post-Qur'anic tales of the prophets (qisas al-anbiyā') in turn fill the narrative gaps of the 'Jonah suras' with explanatory narrative material, drawing also on the extensive stock of biblical and rabbinic traditions, which they creatively appropriate within the outer frame of the Qur'anic Yūnus tradition. The result is narrative compositions that can be read as a dialogical engagement with religious themes of shared relevance. The article specifically reflects on the development and relationship of the receptions of Jonah in the Qur'an and in the tales of the prophets of Ibn-Muhammad at-Ta‛labī and Muhammad ibn ‛Abd Allāh al-Kisā'ī, in constant engagement with the Jewish Jonah tradition.
Lake Towuti is a tectonic basin surrounded by ultramafic rocks. Lateritic soils form through weathering and deliver abundant iron (oxy)hydroxides but very little sulfate to the lake and its sediment. To characterize the sediment biogeochemistry, we collected cores at three sites with increasing water depth and decreasing bottom-water oxygen concentrations. Microbial cell densities were highest at the shallow site, a feature we attribute to the availability of labile organic matter (OM) and the higher abundance of electron acceptors due to oxic bottom-water conditions. At the two other sites, OM degradation and reduction processes below the oxycline led to partial electron acceptor depletion. Genetic information preserved in the sediment as extracellular DNA (eDNA) provided information on aerobic and anaerobic heterotrophs related to Nitrospirae, Chloroflexi, and Thermoplasmatales. These taxa apparently played a significant role in the degradation of sinking OM. However, eDNA concentrations rapidly decreased with core depth. Despite very low sulfate concentrations, sulfate-reducing bacteria were present and viable in sediments at all three sites, as confirmed by measurements of potential sulfate reduction rates. Microbial community fingerprinting supported the presence of taxa related to Deltaproteobacteria and Firmicutes with a demonstrated capacity for iron and sulfate reduction. Concomitantly, sequences of Ruminococcaceae, Clostridiales, and Methanomicrobiales indicated a potential for fermentative hydrogen and methane production. These first insights into ferruginous sediments showed that microbial populations perform successive metabolisms related to sulfur, iron, and methane. In theory, iron reduction could reoxidize reduced sulfur compounds and desorb OM from iron minerals, allowing remineralization to methane.
Overall, we found that biogeochemical processes in the sediments can be linked to redox differences in the bottom waters of the three sites, such as oxidant concentrations and the supply of labile OM. At the scale of the lacustrine record, our geomicrobiological study should provide a means to link the extant subsurface biosphere to past environments.
The present work is a case study contributing to the major planning project "SuedLink". It is structured as follows: first, in a theoretical part, the underlying theories of social acceptance (Wüstenhagen et al., 2007), steps of participation (Münnich, 2014), and governance theory (Benz and Dose, 2011) are elaborated. Second, the relevant methods are discussed. Third, in a qualitative analytical part, the information gathered from the expert interviews is analyzed using the aforementioned theories. Fourth, an empirical quantitative analysis of data on public acceptance of SuedLink is presented.
In this case study, two questions are answered using qualitative and quantitative methods. First, which governance aspects were relevant for the priority given to underground cables in the construction of high-voltage direct-current transmission lines? For this question, intensive document analysis and several expert interviews were conducted. Second, the central question of the present work is whether local and/or individual factors affect public acceptance of SuedLink; in particular, it is interesting to analyze whether the priority given to underground cables affected people's acceptance of SuedLink. To answer both questions, an online survey was conducted among citizens' initiatives, district administrators, and individuals on social media between March and July 2016, and the data were then analyzed with descriptive quantitative methods. The data show that underground cables do not necessarily increase public acceptance (see also Menges and Beyer, 2013). Instead, individual and local criteria mattered to the survey respondents: criteria such as the quality of participation, the distance between home and transmission lines, and the additional financial burden (taxes, higher electricity prices) were important for their evaluation. In addition, respondents active in citizens' initiatives were more critical of the priority use of underground cables and of SuedLink in general. Likewise, residential homeowners rejected every form of transmission line.
We tested the influence of two light intensities [40 and 300 μmol PAR / (m2s)] on the fatty acid composition of three distinct lipid classes in four freshwater phytoplankton species. We chose species of different taxonomic classes in order to detect potentially similar reaction characteristics that might also be present in natural phytoplankton communities. From samples of the bacillariophyte Asterionella formosa, the chrysophyte Chromulina sp., the cryptophyte Cryptomonas ovata and the zygnematophyte Cosmarium botrytis we first separated glycolipids (monogalactosyldiacylglycerol, digalactosyldiacylglycerol, and sulfoquinovosyldiacylglycerol), phospholipids (phosphatidylcholine, phosphatidylethanolamine, phosphatidylglycerol, phosphatidylinositol, and phosphatidylserine) as well as non-polar lipids (triacylglycerols), before analyzing the fatty acid composition of each lipid class. High variation in fatty acid composition existed among the species, and the fatty acid compositions of the four species differed in their reaction to changing light intensities. Although no generalizations could be made for species across taxonomic classes, individual species showed clear but small responses in their ecologically relevant omega-3 and omega-6 polyunsaturated fatty acids (PUFA), both in proportions and in quotas per unit tissue carbon. Knowledge of how lipid constituents such as fatty acids change with environmental or culture conditions is of great interest in ecological food web studies, aquaculture, and biotechnology, since algal lipids are the most important sources of omega-3 long-chain PUFA for aquatic and terrestrial consumers, including humans.
This term paper compares the frequency of the imperative on posters from the 2016 Berlin Abgeordnetenhaus election with that on posters from the Weimar Republic. It pursues the thesis that this frequency has decreased, and confirms it: in 2016 the imperative occurs eight times less often (5.7 % versus 45.7 %). In addition, the paper provides an overview of the imperative and of other means of articulating a request in the German language.
Two corpora were used for the study; the corpus of slogans from the Abgeordnetenhaus election was compiled specifically for this paper and is included with it. In this corpus, as in the Weimar Republic corpus, all slogans are listed and the imperatives used are counted. This offers an insight into the political advertising language of the two periods.
Since 1998, elite athletes' sport injuries have been monitored at single-sport events, which led to the development of the first comprehensive injury surveillance system at the multi-sport Olympic Games in 2008. However, injuries and illnesses occurring during training phases have not been systematically studied, owing to their multi-faceted, potentially interacting risk factors. The present thesis aims to address the feasibility of establishing a validated measure of injury/illness, training environment, and psychosocial risk factors by creating an evaluation tool, the Risk of Injury Questionnaire (Risk-IQ), for elite athletes, based on the content of the preparticipation evaluation (PPE) and periodic health examination (PHE) recommended in the 2009 IOC consensus statement.
A total of 335 top-level athletes and 88 medical care providers (MCPs) from Germany and Taiwan participated in two "cross-sectional plus longitudinal" surveys, the Risk-IQ and the MCPQ, respectively. The Risk-IQ asked athletes four categories of injury/illness-related risk-factor questions, while the MCPQ asked the MCP cohorts about injury risk and related psychological issues. Answers were quantified scale-wise/subscale-wise before being analyzed together with other factors/scales. In addition, adapted variables such as sport format were introduced for different analysis tasks.
Validated by two-way translation and test-retest reliability, the Risk-IQ proved to be of good standard, which was further confirmed by the analyzed results of the official surveys in both Germany and Taiwan. The Risk-IQ results revealed that elite athletes' accumulated total injuries were, in general, multi-factor dependent; influencing factors included, but were not limited to, background experience, medical history, PPE and PHE medical resources, and stress from life events. Injuries to different body parts were sport-format and location specific. Additionally, medical support for PPE and PHE differed significantly between Germany and Taiwan.
The results of the present thesis confirm that it is feasible to construct a comprehensive evaluation instrument for risk-factor analysis of injuries/illnesses occurring in heterogeneous elite-athlete cohorts during non-competition periods. On average, and with many moderators involved, German elite athletes had superior medical care support yet suffered more severe injuries than their Taiwanese counterparts. Opinions on injury-related psychological issues differed across the various MCP groups irrespective of nationality. In general, influencing factors and interactions among relevant factors existed in both studies, implying that further investigation with multiple regression analysis is needed for a better understanding.
Molecular paleoclimate reconstructions over the last 9 ka from a peat sequence in South China
(2016)
To achieve a better understanding of Holocene climate change in the monsoon regions of China, we investigated the molecular distributions and carbon and hydrogen isotope compositions (δ13C and δD values) of long-chain n-alkanes in a peat core from the Shiwangutian (SWGT) peatland, south China, over the last 9 ka. By comparison with other climate records, we found that the δ13C values of the long-chain n-alkanes can serve as a proxy for humidity, while the δD values of the long-chain n-alkanes primarily recorded the moisture-source δD signal during 9-1.8 ka BP and responded to the dry climate during 1.8-0.3 ka BP. Together with the average chain length (ACL) and carbon preference index (CPI) data, the climate evolution over the last 9 ka in the SWGT peatland can be divided into three stages. During the first stage (9-5 ka BP), the δ13C values were depleted and CPI and Paq values were low, while ACL values were high. These reveal a period of warm and wet climate, which is regarded as the Holocene optimum. The second stage (5-1.8 ka BP) witnessed a shift to a relatively cool and dry climate, as indicated by the more positive δ13C values and lower ACL values. During the third stage (1.8-0.3 ka BP), the δ13C, δD, CPI and Paq values showed a marked increase and ACL values varied greatly, implying an abrupt change to cold and dry conditions. This climate pattern corresponds to the broad decline in Asian monsoon intensity through the latter part of the Holocene. Our results do not support a later Holocene optimum in south China as suggested by previous studies.
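The chain-length proxies used here have standard definitions, and as an illustration they can be computed directly from n-alkane abundances. The following sketch uses hypothetical peak areas, not data from the SWGT core, and assumes an odd-chain window of C25-C33 for the ACL (the window is a study-specific choice):

```python
# ACL: abundance-weighted mean chain length of the long-chain odd n-alkanes.
# CPI: odd-over-even carbon preference, averaged over two adjacent even windows.

# Hypothetical peak areas for n-alkanes (carbon number -> abundance)
abundance = {24: 2.0, 25: 8.0, 26: 2.5, 27: 14.0, 28: 3.0,
             29: 20.0, 30: 3.5, 31: 18.0, 32: 2.0, 33: 6.0}

odd = range(25, 34, 2)       # C25, C27, C29, C31, C33
even_lo = range(24, 33, 2)   # C24 .. C32
even_hi = range(26, 35, 2)   # C26 .. C34 (missing chains count as 0)

acl = (sum(n * abundance.get(n, 0.0) for n in odd)
       / sum(abundance.get(n, 0.0) for n in odd))

sum_odd = sum(abundance.get(n, 0.0) for n in odd)
sum_even_lo = sum(abundance.get(n, 0.0) for n in even_lo)
sum_even_hi = sum(abundance.get(n, 0.0) for n in even_hi)
cpi = 0.5 * (sum_odd / sum_even_lo + sum_odd / sum_even_hi)

print(f"ACL = {acl:.2f}, CPI = {cpi:.2f}")
```

High CPI values (strong odd-over-even predominance) indicate well-preserved plant-wax inputs, which is why CPI is read alongside ACL in the staging above.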
Loss to follow-up in a randomized controlled trial study for pediatric weight management (EPOC)
(2016)
Background
Attrition is a serious problem in intervention studies. The current study analyzed the attrition rate during follow-up in a randomized controlled pediatric weight management program (EPOC study) within a tertiary care setting.
Methods
Five hundred twenty-three parents and their 7–13-year-old children with obesity participated in the randomized controlled intervention trial. Follow-up data were assessed 6 and 12 months after the end of treatment. Attrition was defined as providing no objective weight data. Demographic and psychological baseline characteristics were used to predict attrition at 6- and 12-month follow-up using multivariate logistic regression analyses.
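A minimal sketch of the kind of logistic model used for these attrition analyses, fit by gradient ascent on a single standardized baseline predictor; the data and coefficients are synthetic, not the EPOC data:

```python
import math, random

# Synthetic example: predict dropout (1) vs. completion (0) from one
# standardized predictor. True coefficients below are hypothetical.
random.seed(0)
true_b0, true_b1 = -0.5, 1.2
data = []
for _ in range(500):
    x = random.gauss(0, 1)
    p = 1 / (1 + math.exp(-(true_b0 + true_b1 * x)))
    data.append((x, 1 if random.random() < p else 0))

# Fit by gradient ascent on the average log-likelihood
b0 = b1 = 0.0
lr = 0.5
for _ in range(400):
    g0 = g1 = 0.0
    for x, y in data:
        p = 1 / (1 + math.exp(-(b0 + b1 * x)))
        g0 += y - p            # gradient w.r.t. intercept
        g1 += (y - p) * x      # gradient w.r.t. slope
    b0 += lr * g0 / len(data)
    b1 += lr * g1 / len(data)

print(f"intercept {b0:.2f}, slope {b1:.2f}, odds ratio {math.exp(b1):.2f}")
```

The exponentiated slope is the odds ratio for dropout per one-unit change in the predictor, which is how multivariate results like those below are typically reported.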
Results
Objective weight data were available for 49.6 % of the children 6 months after the end of treatment and for 67.0 % after 12 months. Completers and non-completers at the 6- and 12-month follow-up differed in the amount of weight loss during their inpatient stay, their initial BMI-SDS, the educational level of the parents, and the child's quality of life and well-being. Additionally, completers supported their child more than non-completers, and at the 12-month follow-up, families with a more structured eating environment were less likely to drop out. On a multivariate level, only educational background and structure of the eating environment remained significant.
Conclusions
The minor differences between the completers and the non-completers suggest that our retention strategies were successful. Further research should focus on prevention of attrition in families with a lower educational background.
When realizing a programming language as a VM, implementing behavior as part of the VM, as a primitive, usually results in reduced execution times. But supporting and developing primitive functions requires more effort than maintaining and using code in the hosted language, since debugging is harder and the turn-around times for VM parts are longer. Furthermore, source artifacts of primitive functions are seldom reused in new implementations of the same language. And if they are reused, the existing API usually is emulated, reducing the performance gains. Because of recent results in tracing dynamic compilation, the trade-off between performance and ease of implementation, reuse, and changeability might now be decided differently.
In this work, we investigate the trade-offs when creating primitives, and in particular how large a difference remains between primitive and hosted function run times in VMs with a tracing just-in-time compiler. To that end, we implemented the algorithmic primitive BitBlt three times for RSqueak/VM, a Smalltalk VM utilizing the PyPy RPython toolchain. We compare primitive implementations in C, RPython, and Smalltalk, showing that, due to the tracing just-in-time compiler, the performance gap has narrowed by one order of magnitude, to within one order of magnitude.
The advantages of remote sensing using Unmanned Aerial Vehicles (UAVs) are a high spatial resolution of images, temporal flexibility and narrow-band spectral data from different wavelength domains. This enables the detection of spatio-temporal dynamics of environmental variables, like plant-related carbon dynamics in agricultural landscapes. In this paper, we quantify spatial patterns of fresh phytomass and related carbon (C) export using imagery captured by a 12-band multispectral camera mounted on the fixed-wing UAV Carolo P360. The study was performed in 2014 at the experimental area CarboZALF-D in NE Germany. From radiometrically corrected and calibrated images of lucerne (Medicago sativa), the performance of four commonly used vegetation indices (VIs) was tested using band combinations of six near-infrared bands. The highest correlation between ground-based measurements of fresh phytomass of lucerne and VIs was obtained for the Enhanced Vegetation Index (EVI) using near-infrared band b(899). The resulting map was transformed into dry phytomass and finally upscaled to total C export by harvest. The observed spatial variability at field and plot scale could be attributed in part to small-scale soil heterogeneity.
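The EVI referred to above has a standard formulation; a minimal sketch using the widely cited MODIS-style coefficients, with hypothetical reflectance values standing in for the calibrated camera bands:

```python
# Standard EVI formulation with MODIS-style coefficients. The reflectance
# values in the example call are hypothetical, not measurements from the
# 12-band camera described in the study.
def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    """Enhanced Vegetation Index from surface reflectances in [0, 1]."""
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)

# Dense lucerne canopy: high NIR, low red/blue reflectance
v = evi(nir=0.45, red=0.05, blue=0.03)
print(f"EVI = {v:.3f}")
```

Compared with the simpler NDVI, the blue-band and soil-adjustment terms reduce atmospheric and soil-background effects, one reason EVI can correlate better with phytomass over heterogeneous fields.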
Der Wunderbaum
(2016)
Today, as a result of Resolution 1325 (2000), the topics of women and peace are closely linked at the level of United Nations security policy. What legal and practical consequences has this development had, on the one hand, for the work of the United Nations itself and, on the other, for the member states, and what is the state of their implementation? The study traces the WPS agenda and discusses the related activities of the United Nations. Germany's implementation measures are then examined and assessed.
The coil-to-globule transition of poly(N-isopropylacrylamide) (PNIPAM) microgel particles suspended in water has been investigated in situ as a function of heating and cooling rate with four optical process analytical technologies (PAT), sensitive to structural changes of the polymer. Photon Density Wave (PDW) spectroscopy, Focused Beam Reflectance Measurements (FBRM), turbidity measurements, and Particle Vision Microscope (PVM) measurements are found to be powerful tools for the monitoring of the temperature-dependent transition of such thermo-responsive polymers. These in-line technologies allow for monitoring of either the reduced scattering coefficient and the absorption coefficient, the chord length distribution, the reflected intensities, or the relative backscatter index via in-process imaging, respectively. Varying heating and cooling rates result in rate-dependent lower critical solution temperatures (LCST), with different impact of cooling and heating. Particularly, the data obtained by PDW spectroscopy can be used to estimate the thermodynamic transition temperature of PNIPAM for infinitesimal heating or cooling rates. In addition, an inverse hysteresis and a reversible building of micrometer-sized agglomerates are observed for the PNIPAM transition process.
Herein we present an efficient synthesis of a biomimetic probe with modular construction that can be specifically bound by the mannose binding FimH protein – a surface adhesion protein of E. coli bacteria. The synthesis combines the new and interesting DBD dye with the carbohydrate ligand mannose via a Click reaction. We demonstrate the binding to E. coli bacteria over a large concentration range and also present some special characteristics of those molecules that are of particular interest for the application as a biosensor. In particular, the mix-and-measure ability and the very good photo-stability should be highlighted here.
The synthesis and photophysical properties of two new FRET pairs based on coumarin as a donor and DBD dye as an acceptor are described. The introduction of a bromo atom dramatically increases the two-photon excitation (2PE) cross section providing a 2PE-FRET system, which is also suitable for 2PE-FLIM.
At the end of September 1910, heavy street fighting between the population and the police shook the Berlin district of Moabit. The unrest, which lasted several days, resulted among other things in numerous injuries on both sides, two deaths, diplomatic complaints, and two mammoth trials before Berlin courts.
Starting from Wilhelm Mommsen's observation that "nothing transports one so easily into the atmosphere of an era as its newspapers, and nothing shows so well what occupied and chiefly interested contemporaries," this paper attempts to trace the discourse in the daily newspapers accompanying the Moabit unrest, in order to use the contemporary reception of this acute exceptional situation to make visible the nature and character of public discussion in Wilhelmine society.
The paper first briefly introduces the district of Moabit and its inhabitants, before giving an overview of the course of the strike and the confrontations. It then sketches a picture of the contemporaneous reporting in the German press landscape as the events unfolded.
To obtain a detailed picture of the discourse accompanying the events presented in this paper, the issues of the German daily newspapers listed below were systematically reviewed from the start of the strike that triggered the unrest on 19 September until the end of the trials in February 1911. The detailed discussion of the reporting, however, is limited to the period from the beginning of the strike until its end in October 1910.
Among the liberal papers, the liberal-bourgeois Berliner Tageblatt and the Frankfurter Zeitung were reviewed. The Germania was examined as a representative of the Catholic Centre Party. The Vorwärts was consulted as the leading organ of Social Democracy. From the conservative press, the Berliner Lokal-Anzeiger and the Protestant Neue Preußische Zeitung were examined.
In many statistical applications, the aim is to model the relationship between covariates and some outcomes. The choice of an appropriate model depends on the outcome and the research objectives, such as linear models for continuous outcomes, logistic models for binary outcomes and the Cox model for time-to-event data. In epidemiological, medical, biological, societal and economic studies, logistic regression is widely used to describe the relationship between a response variable as binary outcome and explanatory variables as a set of covariates. However, epidemiologic cohort studies are quite expensive in terms of data management, since following up a large number of individuals takes a long time. Therefore, the case-cohort design is applied to reduce cost and time for data collection. The case-cohort sampling collects a small random sample from the entire cohort, which is called the subcohort. The advantage of this design is that the covariate and follow-up data are recorded only for the subcohort and all cases (all members of the cohort who develop the event of interest during the follow-up process).
In this thesis, we investigate the estimation in the logistic model for case-cohort design. First, a model with a binary response and a binary covariate is considered. The maximum likelihood estimator (MLE) is described and its asymptotic properties are established. An estimator for the asymptotic variance of the estimator based on the maximum likelihood approach is proposed; this estimator differs slightly from the estimator introduced by Prentice (1986). Simulation results for several proportions of the subcohort show that the proposed estimator gives lower empirical bias and empirical variance than Prentice's estimator.
Then the MLE in the logistic regression with discrete covariate under case-cohort design is studied. Here the approach of the binary covariate model is extended. By proving asymptotic normality of the estimators, standard errors for the estimators can be derived. The simulation study demonstrates the estimation procedure of the logistic regression model with a one-dimensional discrete covariate. Simulation results for several proportions of the subcohort and different choices of the underlying parameters indicate that the estimator developed here performs reasonably well. Moreover, a comparison between theoretical values and simulation results for the asymptotic variance of the estimator is presented.
Clearly, the logistic regression is sufficient when the binary outcome is available for all subjects and for a fixed time interval. Nevertheless, in practice, the observations in clinical trials are frequently collected for different time periods, and subjects may drop out or relapse from other causes during follow-up. Hence, the logistic regression is not appropriate for incomplete follow-up data; for example, an individual drops out of the study before the end of data collection, or the event of interest has not occurred for an individual by the end of the study. These observations are called censored observations. Survival analysis is necessary to handle these problems; moreover, the time to the occurrence of the event of interest is taken into account. The Cox model has been widely used in survival analysis, as it can effectively handle censored data. Cox (1972) proposed the model, which focuses on the hazard function. The Cox model is assumed to be
λ(t | X) = λ0(t) exp(βᵀX),
where λ0(t) is an unspecified baseline hazard at time t, X is the vector of covariates, and β is a p-dimensional vector of coefficients.
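As a minimal numerical illustration of this formula (the baseline hazard and coefficients below are hypothetical, not estimates from the thesis), note that the unspecified baseline λ0(t) cancels in hazard ratios, which is the proportional-hazards property:

```python
import math

# Cox proportional hazards: lambda(t | X) = lambda0(t) * exp(beta^T X).
# Baseline hazard and coefficients are illustrative placeholders.
def hazard(t, x, beta, baseline=lambda t: 0.1):
    eta = sum(b * xi for b, xi in zip(beta, x))  # linear predictor beta^T x
    return baseline(t) * math.exp(eta)

beta = [0.7, -0.2]  # hypothetical coefficients
# Hazard ratio for a one-unit change in the first covariate:
hr = hazard(1.0, [1, 0], beta) / hazard(1.0, [0, 0], beta)
print(f"hazard ratio = {hr:.3f}")
```

Because the ratio equals exp(β1) regardless of λ0, the partial likelihood can estimate β without modeling the baseline hazard, which is why the thesis can focus on β0 and its estimability.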
In this thesis, the Cox model is considered from the viewpoint of experimental design. The estimability of the parameter β0 in the Cox model, where β0 denotes the true value of β, and the choice of optimal covariates are investigated. We give new representations of the observed information matrix In(β) and extend results for the Cox model of Andersen and Gill (1982). In this way, conditions for the estimability of β0 are formulated. Under some regularity conditions, Σ is the inverse of the asymptotic variance matrix of the maximum partial likelihood estimator (MPLE) of β0 in the Cox model, and some properties of this asymptotic variance matrix are highlighted. Based on the results on asymptotic estimability, the calculation of locally optimal covariates is considered and illustrated in examples. In a sensitivity analysis, the efficiency of given covariates is calculated; efficiencies are then obtained for neighborhoods of the exponential models. It appears that, for fixed parameters β0, the efficiencies do not change very much for different baseline hazard functions. Some proposals for applicable optimal covariates and a calculation procedure for finding optimal covariates are discussed.
Furthermore, the extension of the Cox model in which time-dependent coefficients are allowed is investigated. In this situation, the maximum local partial likelihood estimator for estimating the coefficient function β(·) is described. Based on this estimator, we formulate a new test procedure for testing whether a one-dimensional coefficient function β(·) has a prespecified parametric form, say β(·; ϑ). The score function derived from the local constant partial likelihood function at d distinct grid points is considered. It is shown that the distribution of the properly standardized quadratic form of this d-dimensional vector under the null hypothesis tends to a chi-squared distribution. Moreover, the limit statement remains true when the unknown ϑ0 is replaced by the MPLE in the hypothetical model, and an asymptotic α-test is given by the quantiles or p-values of the limiting chi-squared distribution. Finally, we propose a bootstrap version of this test, which is defined only for the special case of testing whether the coefficient function is constant. A simulation study illustrates the behavior of the bootstrap test under the null hypothesis and a particular alternative, and gives quite good results for the chosen underlying model.
References
P. K. Andersen and R. D. Gill. Cox's regression model for counting processes: a large sample study. Ann. Statist., 10(4):1100-1120, 1982.
D. R. Cox. Regression models and life-tables. J. Roy. Statist. Soc. Ser. B, 34:187-220, 1972.
R. L. Prentice. A case-cohort design for epidemiologic cohort studies and disease prevention trials. Biometrika, 73(1):1-11, 1986.
Isostasy is one of the oldest and most widely applied concepts in the geosciences, but the geoscientific community lacks a coherent, easy-to-use tool to simulate flexure of a realistic (i.e., laterally heterogeneous) lithosphere under an arbitrary set of surface loads. Such a model is needed for studies of mountain building, sedimentary basin formation, glaciation, sea-level change, and other tectonic, geodynamic, and surface processes. Here I present gFlex (for GNU flexure), an open-source model that can produce analytical and finite difference solutions for lithospheric flexure in one (profile) and two (map view) dimensions. To simulate the flexural isostatic response to an imposed load, it can be used by itself or within GRASS GIS for better integration with field data. gFlex is also a component of the Community Surface Dynamics Modeling System (CSDMS) and Landlab modeling frameworks for coupling with a wide range of Earth-surface-related models, and can be coupled to additional models within Python scripts. As an example of this in-script coupling, I simulate the effects of spatially variable lithospheric thickness on a modeled Iceland ice cap. Finite difference solutions in gFlex can use any of five types of boundary conditions: 0-displacement, 0-slope (i.e., clamped); 0-slope, 0-shear; 0-moment, 0-shear (i.e., broken plate); mirror symmetry; and periodic. Typical calculations with gFlex require << 1 s to ~1 min on a personal laptop computer. These characteristics (multiple ways to run the model, multiple solution methods, multiple boundary conditions, and short compute times) make gFlex an effective tool for flexural isostatic modeling across the geosciences.
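As an illustration of the kind of result gFlex computes, the classic closed-form solution for 1-D flexure of a uniform infinite elastic plate under a line load can be sketched as follows. The parameter values are hypothetical, and this sketch does not use gFlex's actual API:

```python
import math

# Classic analytical solution for 1-D flexure of a uniform elastic plate
# under a line load. All parameter values are illustrative placeholders.
E, nu, Te = 65e9, 0.25, 35e3            # Young's modulus [Pa], Poisson ratio, elastic thickness [m]
rho_m, rho_fill, g = 3300.0, 0.0, 9.8   # mantle / infill density [kg/m^3], gravity [m/s^2]
q = 1e12                                # line load [N/m]

D = E * Te**3 / (12 * (1 - nu**2))                   # flexural rigidity [N m]
alpha = (4 * D / ((rho_m - rho_fill) * g)) ** 0.25   # flexural parameter [m]

def deflection(x):
    """Downward deflection at distance x from the line load."""
    xa = abs(x) / alpha
    return (q * alpha**3 / (8 * D)) * math.exp(-xa) * (math.cos(xa) + math.sin(xa))

print(f"alpha = {alpha / 1e3:.0f} km, w(0) = {deflection(0):.1f} m")
```

The deflection decays over the flexural parameter alpha and changes sign beyond it (the flexural forebulge); laterally variable Te, as in the Iceland example above, is exactly what breaks this uniform-plate assumption and motivates the finite difference solutions.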
Previous research on the interplay between static manual postures and visual attention revealed enhanced visual selection near the hands (near-hand effect). During active movements there is also superior visual performance when moving toward compared to away from the stimulus (direction effect). The "modulated visual pathways" hypothesis argues that differential involvement of magno- and parvocellular visual processing streams causes the near-hand effect. The key finding supporting this hypothesis is an increase in temporal and a reduction in spatial processing in near-hand space (Gozli et al., 2012). Since this hypothesis has, so far, only been tested with static hand postures, we provide a conceptual replication of Gozli et al.'s (2012) result with moving hands, thus also probing the generality of the direction effect. Participants performed temporal or spatial gap discriminations while their right hand was moving below the display. In contrast to Gozli et al. (2012), temporal gap discrimination was superior at intermediate rather than near hand proximity. In spatial gap discrimination, a direction effect without a hand proximity effect suggests that pragmatic attentional maps overshadowed temporal/spatial processing biases for far/near-hand space.
Regen
(2016)
„Wir schaffen das!“
(2016)
Background
Doping presents a potential health risk for young athletes. Prevention programs are intended to prevent doping by educating athletes about banned substances. However, such programs have their limitations in practice. This led Germany to introduce the National Doping Prevention Plan (NDPP), in hopes of ameliorating the situation among young elite athletes. Two studies examined 1) the degree to which the NDPP led to improved prevention efforts in elite sport schools, and 2) the extent to which newly developed prevention activities of the national anti-doping agency (NADA) based on the NDPP have improved knowledge among young athletes within elite sports schools.
Methods
The first objective was investigated in a longitudinal study (Study I: t0 = baseline, t1 = follow-up 4 years after NDPP introduction) with N = 22 teachers engaged in doping prevention in elite sports schools. The second objective was evaluated in a cross-sectional comparison study (Study II) with N = 213 elite sports school students (54.5 % male, 45.5 % female; age M = 16.7 ± 1.3 years). All students had received the improved NDPP measures in school; one student group had additionally received NADA anti-doping activities, while a control group had not. Descriptive statistics were calculated, followed by McNemar tests, Wilcoxon tests and analysis of covariance (ANCOVA).
Results
Results indicate that 4 years after the introduction of the NDPP there have been limited structural changes with regard to the frequency, type, and scope of doping prevention in elite sport schools. On the other hand, in Study II, elite sport school students who received further NADA anti-doping activities performed better on an anti-doping knowledge test than students who did not take part (F(1, 207) = 33.99, p < 0.001), although this difference was small.
Conclusion
The integration of doping prevention in elite sport schools as part of the NDPP was only partially successful. The results of the evaluation indicate that the introduction of the NDPP has contributed more to a change in the content of doping prevention activities than to a structural transformation in anti-doping education in elite sport schools. Moreover, while students who did receive additional education in the form of the NDPP "booster sessions" had significantly more knowledge about doping than students who did not receive such education, this difference was only small and may not translate to actual behavior.
The interaction of water with α-alumina (i.e. α-Al2O3) surfaces is important in a variety of applications and a useful model for the interaction of water with environmentally abundant aluminosilicate phases. Despite its significance, studies of water interaction with α-Al2O3 surfaces other than the (0001) are extremely limited. Here we characterize the interaction of water (D2O) with a well-defined α-Al2O3(1-102) surface in UHV both experimentally, using temperature programmed desorption and surface-specific vibrational spectroscopy, and theoretically, using periodic-slab density functional theory calculations. This combined approach makes it possible to demonstrate that water adsorption occurs only at a single well-defined surface site (the so-called 1-4 configuration) and that at this site the barrier between the molecularly and dissociatively adsorbed forms is very low: 0.06 eV. A subset of OD stretch vibrations are parallel to this dissociation coordinate, and thus would be expected to be shifted to low frequencies relative to an uncoupled harmonic oscillator. To quantify this effect we solve the vibrational Schrödinger equation along the dissociation coordinate and find fundamental frequencies red-shifted by more than 1500 cm−1. Within the context of this model, at moderate temperatures, we further find that some fraction of surface deuterons are likely delocalized: dissociatively and molecularly adsorbed states are no longer distinguishable.
This dissertation uses a common grammatical phenomenon, light verb constructions (LVCs) in English and German, to investigate how syntax-semantics mapping defaults influence the relationships between language processing, representation and conceptualization. LVCs are analyzed as a phenomenon of mismatch in the argument structure. The processing implications of this mismatch are experimentally investigated using ERPs and a dual task. Data from these experiments point to an increase in working memory load. Representational questions are investigated using structural priming. Data from this study suggest that while the syntax of LVCs does not differ from that of other structures, the semantics and mapping are represented differently. This hypothesis is tested with a new categorization paradigm, which reveals that the conceptual structures that LVCs evoke differ in interesting, and predictable, ways from those of non-mismatching structures.
The UN sustainable development goals contain environmental, economic, and social objectives. They may only be reached, or at least would be easier to reach, if there are synergies to be reaped between these objectives instead of a trade-off that implies a need for balancing them. This paper discusses how the structures of economic models typically used in policy analysis influence whether win-win strategies for the environment and the economy can be conceptualised and analysed. With a focus on climate policy modelling, the paper points out how, by construction, commonly used model structures find mitigation costs rather than benefits. This paper describes mechanisms that, when added to these model structures, can bring win-win options into a model's solution horizon, and which provide a spectrum of alternative modelling approaches that allow for the identification of such options.