Background: Obesity is not only a highly prevalent disease but also poses a considerable burden on children and their families. Evidence is increasing that a lack of self-regulation skills may play a role in the etiology and maintenance of obesity. Our goal with this currently ongoing trial is to examine whether training that focuses on the enhancement of self-regulation skills may increase the sustainability of a complex lifestyle intervention.
Methods/Design: In a multicenter, prospective, parallel-group, randomized controlled superiority trial, 226 obese children and adolescents aged 8 to 16 years will be allocated either to a newly developed computer-training program to improve their self-regulation abilities or to a placebo control group. Randomization occurs centrally and blockwise at a 1:1 allocation ratio for each center. The study is performed in pediatric inpatient rehabilitation facilities specialized in the treatment of obesity. Observer-blind assessments of outcome variables take place at four time points: at the beginning of the rehabilitation (pre), at the end of the training in the rehabilitation (post), and 6 and 12 months after the rehabilitation intervention. The primary outcome is the course of BMI-SDS over the year after the end of the inpatient rehabilitation. Secondary endpoints are self-regulation skills; in addition, health-related quality of life and snack intake will be analyzed.
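The central, blockwise 1:1 randomization per center described above can be sketched as follows. This is a minimal illustration only: the block size of 4, the group labels, and the seed are assumptions, not part of the published protocol.

```python
import random

def block_randomize(n_participants, block_size=4, seed=42):
    """Blockwise 1:1 allocation: every block holds equal numbers of
    'training' and 'placebo' assignments in a shuffled order, so the
    two arms stay balanced throughout recruitment."""
    rng = random.Random(seed)  # fixed seed for a reproducible list
    allocations = []
    while len(allocations) < n_participants:
        block = (["training"] * (block_size // 2)
                 + ["placebo"] * (block_size // 2))
        rng.shuffle(block)
        allocations.extend(block)
    return allocations[:n_participants]

# one hypothetical allocation list for the planned 226 participants
alloc = block_randomize(226)
```

In practice each center would receive its own such list (or draw from a central service), which is what keeps the 1:1 ratio within every center rather than only overall.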
Discussion: The computer-based training program might be a feasible and attractive tool to increase the sustainability of the weight loss achieved during inpatient rehabilitation.
The article presents first results of a pilot study on syntactic change in Polish as a contact language in Germany. On the basis of experimental test data, the study examines syntactic change in the Polish of two diaspora generations: the so-called forgetters and the incomplete learners. The article focuses on two questions: how the language-contact situation influences syntactic change in the heritage language (Polish), and what status these syntactic transfers have. Other linguistic and sociolinguistic factors capable of causing language change in a contact situation are also discussed.
On his Russian journey of 1829, Humboldt received as gifts a number of books and writings in Mongolian, Kalmyk, Armenian, Chinese, Tibetan, and Manchu. In addition, he purchased three Persian manuscripts. The most extensive item is the Chinese novel Geschichte der Drei Reiche (the Romance of the Three Kingdoms). Humboldt had asked Carl Friedrich Neumann, a classical philologist and expert in Armenian and Chinese, to catalogue these titles. When the list appeared in print (while Neumann was on a journey to China), the criticisms it raised led to a scholarly feud. Afterwards the books, a motley collection, soon fell into oblivion. The present article provides a list based on the current holdings of the Staatsbibliothek zu Berlin, for which the collector had intended the books from the outset, as several inscriptions attest.
Background
Foot orthoses are usually assumed to be effective by mechanically optimizing dynamic rearfoot configuration. However, only marginal effects of foot orthoses on kinematics have been demonstrated scientifically. The aim of this study was to examine the effect of different heights of medial arch-supported foot orthoses on rearfoot motion during gait.
Methods
Nineteen asymptomatic runners (36 ± 11 years, 180 ± 5 cm, 79 ± 10 kg; 41 ± 22 km/week) participated in the study. Trials were recorded at 3.1 mph (5 km/h) on a treadmill. Athletes walked barefoot and with 4 different non-customized medial arch-supported foot orthoses of various arch heights (N: 0 mm, M: 30 mm, H: 35 mm, E: 40 mm). Six infrared cameras and the 'Oxford Foot Model' were used to capture motion. The average stride in each condition was calculated from 50 gait cycles. Eversion excursion and internal tibia rotation were analyzed. Descriptive statistics included mean ± SD and 95% CIs. Differences between conditions were analyzed by a one-factor (foot orthosis) repeated-measures ANOVA (α = 0.05).
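The descriptive statistics used in this study (mean ± SD with a 95% CI) can be reproduced with a short sketch. The sample values below are hypothetical, and the normal-approximation interval (1.96 · SD / √n) is an assumption; the study does not state how its intervals were computed.

```python
import math
import statistics

def mean_sd_ci95(values):
    """Return the sample mean, sample SD, and a normal-approximation
    95% confidence interval for the mean."""
    m = statistics.mean(values)
    sd = statistics.stdev(values)           # sample SD (n - 1 denominator)
    half = 1.96 * sd / math.sqrt(len(values))
    return m, sd, (m - half, m + half)

# hypothetical eversion-excursion values (degrees) for one condition
ev = [4.6, 2.9, 6.1, 4.0, 5.5, 3.8, 4.9, 5.2, 4.4, 3.6]
m, sd, ci = mean_sd_ci95(ev)
```

With n = 19 per condition, as here, a t-based interval (t ≈ 2.10 instead of 1.96) would be slightly wider; the sketch uses the normal quantile purely for simplicity.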
Results
Eversion excursion revealed the lowest values for N and highest for H (B:4.6°±2.2°; 95% CI [3.1;6.2]/N:4.0°±1.7°; [2.9;5.2]/M:5.2°±2.6°; [3.6;6.8]/H:6.2°±3.3°; [4.0;8.5]/E:5.1°±3.5°; [2.8;7.5]) (p>0.05). Range of internal tibia rotation was lowest with orthosis H and highest with E (B:13.3°±3.2°; 95% CI [11.0;15.6]/N:14.5°±7.2°; [9.2;19.6]/M:13.8°±5.0°; [10.8;16.8]/H:12.3°±4.3°; [9.0;15.6]/E:14.9°±5.0°; [11.5;18.3]) (p>0.05). Differences between conditions were small and the intrasubject variation high.
Conclusion
Our results indicate that different arch support heights have no systematic effect on eversion excursion or the range of internal tibia rotation and therefore might not exert a crucial influence on rear foot alignment during gait.
The aim of this work is to develop a heuristic frame of reference for explaining complexity in the context of Industrie 4.0 and demographic change from the perspective of structuration theory. With regard to the cognitive demands employees can expect in the future, the essential questions are which challenges companies face concerning the attitudes, behaviour, and experiential knowledge of their employees, and which approaches have so far proven helpful in practice for dealing with these challenges.
Chapter 1 describes the initial situation. The terms Industrie 4.0 and demographic change are discussed and placed in a theoretical context.
Chapter 2 provides the theoretical foundation of the work. A structuration-theoretical view of companies as socio-technical systems is adopted. This 'non-deterministic' perspective creates a processual view of organisational change that makes it possible to include employees as actively 'organising' agents when explaining possible links between Industrie 4.0 and demographic change. In this sense, the socio-technical systems approach and structuration theory form the 'core' of the heuristic frame of reference to be developed.
The content of the theory-based heuristic frame of reference is developed in Chapters 3 and 4.
Chapter 3 describes selected aspects of future work requirements, identified through a systematic review of the current state of research on Industrie 4.0. They form the 'design boundaries' within which, depending on the operational situation, different new or changed demands on employees can be derived when implementing Industrie 4.0.
Chapter 4 describes selected aspects of human action, using older employees as an example, in two focal areas.
The first focus concerns possible factors influencing the attitudes and behaviour of older employees in the change process due to a prevailing image of age within the company. The basis for this was stigmatisation theory as an interactionist approach of social theory.
The second focus, selected action-theoretical aspects of ageing research from developmental psychology, adopts a lifespan perspective. It systematises the complexity-inducing factors that, from an action-theoretical perspective, concern the adaptation of older employees to changed external and personal living conditions.
A first theory-based frame of reference is then derived from these theoretical considerations.
Chapters 5 and 6 describe the empirical part of the work, the conduct of semi-structured interviews. The aim of the empirical study was to substantiate the theory-based heuristic frame of reference with practical experience and, where appropriate, to extend it. To this end, the experiential knowledge of 23 experts was elicited in personal interviews on the basis of the theory-based heuristic frame of reference.
After Chapter 5 describes the methodology of the empirical study, Chapter 6 presents the results of the qualitative survey. Central factors influencing the design and implementation of Industrie 4.0 in the context of demographic change are extracted from the interviews and clustered into the overarching categories of action competencies, attitude/behaviour, and experiential knowledge.
The theory-based heuristic frame of reference is then concretised and extended with the overarching categories and factors from the expert interviews.
In Chapter 7, practical implications are derived by way of example on the basis of the heuristic frame of reference and the recommendations from the expert interviews. Possible interventions to support a positive readiness for change and positive change behaviour in structural transformation are presented. These include adapting leadership behaviour in the change process, dealing with the paradox of stability and flexibility, dealing with age stereotypes in companies, supporting strategies of selection, optimisation, and compensation, and measures for aligning activities with the potential risks of employees.
The work concludes in Chapter 8 with a summary, conclusions, and an outlook.
Frailty is a geriatric syndrome characterised by a vulnerability status associated with declining function of multiple physiological systems and loss of physiological reserves. Two main models of frailty have been advanced, the phenotypic model (primary frailty) and the deficit-accumulation model (secondary frailty), and different instruments have been proposed and validated to measure frailty. However measured, frailty correlates with medical outcomes in the elderly and has been shown to have prognostic value for patients in different clinical settings, such as patients with coronary artery disease, patients after cardiac surgery or transcatheter aortic valve replacement, patients with chronic heart failure, and patients after left ventricular assist device implantation.
The prevalence and the clinical and prognostic relevance of frailty in the cardiac rehabilitation setting have not yet been well characterised, despite the increasing frequency of elderly patients in cardiac rehabilitation, where frailty is likely to influence the onset, type and intensity of the exercise training programme and the design of tailored rehabilitative interventions for these patients.
Therefore, we need to start looking for frailty in elderly patients entering cardiac rehabilitation programmes and become more familiar with some of the tools to recognise and evaluate the severity of this condition. Furthermore, we need to better understand whether exercise-based cardiac rehabilitation may change the course and the prognosis of frailty in cardiovascular patients.
Is there an ideal time window for language acquisition after which nativelike representation and processing are unattainable? Although this question has been heavily debated, no consensus has been reached. Here, we present evidence for a sensitive period in language development and show that it is specific to grammar. We conducted a masked priming task with a group of Turkish-German bilinguals and examined age of acquisition (AoA) effects on the processing of complex words. We compared a subtle but meaningful linguistic contrast, that between grammatical inflection and lexical-based derivation. The results showed a highly selective AoA effect on inflectional (but not derivational) priming. In addition, the effect displayed a discontinuity indicative of a sensitive period: Priming from inflected forms was nativelike when acquisition started before the age of 5 but declined with increasing AoA. We conclude that the acquisition of morphological rules expressing morphosyntactic properties is constrained by maturational factors.
In this combined theoretical and experimental study we report a full analysis of the resonant inelastic X-ray scattering (RIXS) spectra of H2O, D2O and HDO. We demonstrate that electronically-elastic RIXS has an inherent capability to map the potential energy surface and to perform vibrational analysis of the electronic ground state in multimode systems. We show that the control and selection of vibrational excitation can be performed by tuning the X-ray frequency across core-excited molecular bands and that this is clearly reflected in the RIXS spectra. Using high level ab initio electronic structure and quantum nuclear wave packet calculations together with high resolution RIXS measurements, we discuss in detail the mode coupling, mode localization and anharmonicity in the studied systems.
The interdisciplinary workshop STOCHASTIC PROCESSES WITH APPLICATIONS IN THE NATURAL SCIENCES was held at Universidad de los Andes in Bogotá from December 5 to December 9, 2016. It brought together researchers from Colombia, Germany, France, Italy, and Ukraine, who presented recent progress in mathematical research on stochastic processes with applications in biophysics.
The present volume collects three of the four courses held at this meeting by Angelo Valleriani, Sylvie Rœlly and Alexei Kulik.
A particular aim of this collection is to inspire young scientists in setting up research goals within the wide scope of fields represented in this volume.
Angelo Valleriani, PhD in high energy physics, is group leader of the team "Stochastic processes in complex and biological systems" at the Max Planck Institute of Colloids and Interfaces, Potsdam.
Sylvie Rœlly, Docteur en Mathématiques, is the head of the chair of Probability at the University of Potsdam.
Alexei Kulik, Doctor of Sciences, is a leading researcher at the Institute of Mathematics of the Ukrainian National Academy of Sciences.
Water infiltration in soil is affected not only by the inherent heterogeneities of soil but even more by the interaction with plant roots and their water uptake. Neutron tomography is a unique non-invasive 3D tool to visualize plant root systems together with the soil water distribution in situ. So far, acquisition times in the range of hours have been the major limitation for imaging 3D water dynamics. By implementing an alternative acquisition procedure, we boosted the acquisition speed, capturing an entire tomogram within 10 s. This allows, for the first time, time-resolved 3D tracking of a water front ascending in a rooted soil column upon infiltration of deuterated water. Image quality and resolution could be sustained at a level that captures the root system in high detail. A good signal-to-noise ratio and contrast were the key to visualizing dynamic changes in water content and to localizing root uptake. We demonstrated the ability of ultra-fast tomography to quantitatively image rapid changes of water content in the rhizosphere and outlined the value of such imaging data for 3D water-uptake modelling. The presented method paves the way for time-resolved studies of various 3D flow and transport phenomena in porous systems.
Researchers have taken many approaches to studying the complexities of the mammalian taste system; however, the molecular mechanisms of taste processing in the early structures of the central taste pathway remain unclear. More recently, the Arc catFISH (cellular compartment analysis of temporal activity by fluorescent in situ hybridisation) method has been used in our lab to study neural activation following taste stimulation in the first central structure of the taste pathway, the nucleus of the solitary tract (NTS). This method uses the immediate early gene Arc as a neural activity marker to identify taste-responsive neurons. Arc plays a critical role in memory formation and is necessary for the formation of conditioned taste aversion memory. In the NTS, only bitter taste stimulation resulted in increased Arc expression; stimulation with tastants of other taste qualities did not. The primary target of gustatory NTS neurons is the parabrachial nucleus (PbN), and, like Arc, the PbN plays an important role in conditioned taste aversion learning.
The aim of this thesis is to investigate Arc expression in the PbN following taste stimulation to elucidate the molecular identity and function of Arc-expressing, taste-responsive neurons. Naïve and taste-conditioned mice were stimulated with tastants from each of the five basic taste qualities (sweet, salty, sour, umami, and bitter), with additional bitter compounds included for comparison. The expression patterns of Arc and marker genes were analysed using in situ hybridisation (ISH). The Arc catFISH method was used to observe taste-responsive neurons following each taste stimulation. A double fluorescent in situ hybridisation protocol was then established to investigate possible neuropeptide genes involved in neural responses to taste stimulation.
The results showed that bitter taste stimulation induces increased Arc expression in the PbN in naïve mice. This was not true for other taste qualities. In mice conditioned to find an umami tastant aversive, subsequent umami taste stimulation resulted in an increase in Arc expression similar to that seen in bitter-stimulated mice. Taste-responsive Arc expression was denser in the lateral than in the medial PbN. In mice that received two temporally separated taste stimulations, each stimulation time point showed a distinct population of Arc-expressing neurons, with only a small population (10–18%) of neurons responding to both stimulations. This suggests either that each stimulation event activates a different population of neurons, or that Arc marks something other than simple cellular activation, such as long-term cellular changes that do not occur twice within a 25-minute time frame. Investigation using the newly established double-FISH protocol revealed that, of the bitter-responsive Arc-expressing neuron population, 16% co-expressed calcitonin RNA, 17% glucagon-like peptide 1 receptor RNA, 17% hypocretin receptor 1 RNA, 9% gastrin-releasing peptide RNA, and 20% neurotensin RNA. This co-expression with multiple different neuropeptides suggests that bitter-activated Arc expression mediates multiple neural responses to the taste event, such as taste aversion learning, suppression of food intake, and increased heart rate, and involves multiple brain structures such as the lateral hypothalamus, amygdala, bed nucleus of the stria terminalis, and thalamus.
The increase in Arc expression suggests that bitter taste stimulation, and umami taste stimulation in umami-averse animals, may result in an enhanced state of Arc-dependent synaptic plasticity in the PbN, allowing animals to form taste-relevant memories of these aversive compounds more readily. The results on neuropeptide RNA co-expression suggest the amygdala, bed nucleus of the stria terminalis, and thalamus as possible targets of bitter-responsive Arc-expressing PbN neurons.
Introduction: Peanut allergy is one of the most common food allergies in childhood. Even small amounts of peanut can trigger severe allergic reactions, and peanut is the most frequent trigger of life-threatening anaphylaxis in children and adolescents. In contrast to other early-childhood food allergies, patients with peanut allergy rarely develop natural tolerance. Causal treatment options for peanut-allergic patients, in particular oral immunotherapy (OIT), have therefore been investigated for several years, and initial small studies of OIT for peanut allergy showed promising results. In the present work, the clinical efficacy and safety of this treatment option in children with peanut allergy are evaluated in a randomised, double-blind, placebo-controlled trial with a larger sample. In addition, immunological changes as well as quality of life and treatment burden under OIT are examined.
Methods: Children aged 3-18 years with IgE-mediated peanut allergy were enrolled. Before the start of OIT, an oral food challenge with peanut was performed. Patients were randomised 1:1 to the active or placebo group. Dosing started at 2-120 mg peanut or placebo per day, depending on the reaction threshold in the oral challenge. The daily OIT dose was then slowly increased every two weeks over about 14 months up to a maintenance dose of at least 500 mg peanut (= 125 mg peanut protein, roughly one small peanut) or placebo. The maximum dose reached was administered daily at home for two months, followed by a second oral peanut challenge. The primary endpoint was the number of patients in the active and placebo groups who tolerated ≥1200 mg peanut in the oral challenge after OIT ('partial desensitisation'). Both before and after OIT, skin prick tests with peanut were performed and peanut-specific IgE and IgG4 in serum were measured. Basophil activation and the release of T-cell-specific cytokines after stimulation with peanut were also measured in vitro. Quality of life before and after OIT and treatment burden during OIT were assessed by questionnaires.
Results: 62 patients were enrolled and randomised. After about 16 months of OIT, 74.2% (23/31) of patients in the active group but only 16.1% (5/31) of the placebo group showed 'partial desensitisation' to peanut (p<0.001). In the challenge after OIT, patients in the active group tolerated a median of 4000 mg peanut (~8 small peanuts), whereas patients in the placebo group tolerated only 80 mg peanut (~1/6 of a small peanut) (p<0.001). Almost half of the active group (41.9%) tolerated the maximum dose of 18 g peanut in the challenge ('complete desensitisation'). The safety profile of active and placebo OIT was comparable with respect to objective adverse events; however, subjective side effects such as oral itching or abdominal pain were significantly more frequent under active OIT (3.7% of active doses vs. 0.5% of placebo doses, p<0.001). Three children in the active group (9.7%) and seven in the placebo group (22.6%) discontinued the study prematurely, two patients in each group because of side effects. In contrast to placebo, active OIT produced significant immunological changes: a decrease in the peanut-specific wheal diameter in the skin prick test, an increase in peanut-specific serum IgG4, and reduced peanut-specific cytokine secretion, in particular of the Th2 cytokines IL-4 and IL-5. Peanut-specific IgE and peanut-specific basophil activation, by contrast, did not change under OIT. Quality of life improved significantly after OIT in children of the active group but not of the placebo group. During OIT, the therapy was rated positively by almost all children (82%) and mothers (82%) (= low treatment burden).
Discussion: Peanut OIT led to desensitisation and a markedly increased reaction threshold in the majority of peanut-allergic children. The children are thus protected in everyday life against accidental reactions to peanut, which clearly improves their quality of life. Under the controlled study conditions, an acceptable safety profile with predominantly mild symptoms was observed. Clinical desensitisation was accompanied by changes at the immunological level. However, long-term studies of peanut OIT are needed to examine clinical and immunological efficacy with regard to possible long-term induction of oral tolerance, as well as the safety of long-term OIT, before this treatment concept can be transferred into practice.
This cumulative doctoral dissertation, based on three publications, is devoted to the investigation of several aspects of azobenzene molecular switches, with the aid of computational chemistry.
In the first paper, the rates of thermal cis → trans isomerization of azobenzenes for species formed upon integer electron transfer, i.e., with an added or removed electron, are calculated from Eyring's transition state theory and activation energy barriers computed by means of density functional theory. The results are discussed in connection with an experimental study of the thermal cis → trans isomerization of azobenzene derivatives in the presence of gold nanoparticles, which is shown to be greatly accelerated compared with the same isomerization reaction in the absence of nanoparticles.
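The rate calculation described above applies Eyring's transition state theory, k = (kB·T/h)·exp(-ΔG‡/RT), to a computed activation barrier. The sketch below illustrates the relation only; the barrier value is a hypothetical input, not a result from the paper.

```python
import math

# physical constants (SI / CODATA values)
KB = 1.380649e-23    # Boltzmann constant, J/K
H  = 6.62607015e-34  # Planck constant, J*s
R  = 8.314462618     # molar gas constant, J/(mol*K)

def eyring_rate(delta_g_kj_mol, temperature=298.15):
    """Eyring rate constant (1/s) from a free-energy barrier in kJ/mol:
    k = (kB*T/h) * exp(-dG/(R*T))."""
    prefactor = KB * temperature / H          # attempt frequency, ~6e12 1/s
    exponent = -delta_g_kj_mol * 1e3 / (R * temperature)
    return prefactor * math.exp(exponent)

# hypothetical barrier of 95 kJ/mol for a thermal cis -> trans isomerization
k = eyring_rate(95.0)
half_life = math.log(2) / k  # first-order half-life in seconds
```

With this assumed barrier, the rate comes out in the range of 1e-4 per second (a half-life of roughly an hour), and a barrier lowered by electron transfer would raise k exponentially, which is the mechanism the paper invokes for the nanoparticle-accelerated reaction.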
The second paper is concerned with electronically excited states of (i) dimers composed of two photoswitchable units placed closely side by side and (ii) monomers and dimers adsorbed on a silicon cluster. A variety of quantum chemistry methods capable of calculating molecular electronic absorption spectra, based on density functional and wave function theories, is employed to quantify changes in optical absorption upon dimerization and covalent grafting to a surface. Specifically, the exciton (Davydov) splitting between the states of interest is determined from first-principles calculations with the help of natural transition orbital analysis, allowing insight into the nature of the excited states.
In the third paper, nonadiabatic molecular dynamics with trajectory surface hopping is applied to model the photoisomerization of azobenzene dimers, (i) for the isolated case (exhibiting the exciton coupling between two molecules) and (ii) for the constrained case (adding the van der Waals interaction with the environment to the exciton coupling between the two monomers). For the latter, additional azobenzene molecules surrounding the dimer are introduced, mimicking a densely packed self-assembled monolayer. From the results obtained it is concluded that the isolated dimer is capable of isomerization just as the monomer is, whereas steric hindrance considerably suppresses trans → cis photoisomerization.
Furthermore, the present dissertation comprises a general introduction describing the main features of the azobenzene photoswitch and the objectives of this work, the theoretical basis of the employed methods, and a discussion of the findings in the light of the existing literature. Additional results are also presented on (i) activation parameters of the thermal cis → trans isomerization of azobenzenes, (ii) an approximate scheme to account for the anharmonicity of molecular vibrations in the calculation of the activation entropy, and (iii) absorption spectra of photoswitch–silicon composites obtained from time-demanding wave-function-based methods.
More than a dozen technologies already exist to combat the increasing theft of digital identities. They carry specific drawbacks, above all with password-based authentication, but each also offers particular advantages. The authors of this study analysed how such communication standards and protocols can be combined effectively to achieve greater security. They argue for novel identity-management systems that can adapt flexibly to the different roles of an individual user and are more convenient to use than existing methods. As a first step towards such an identity-management platform, they describe the possibilities of an analysis based on the individual behaviour of a user or a thing.
For this purpose, sensor data are evaluated from mobile devices that users often carry and use extensively, e.g. internet-enabled mobile phones, fitness trackers, and smart watches. The researchers describe how such small computers can continuously compute a 'trust level' based, for example, on the analysis of movement patterns, position, and network-connection data alone. With this computed 'trust level', each device can constantly state the probability that its current user is also the actual owner, whose typical behaviour patterns it 'knows' in detail.
If the current value of the trust level (but not the individual biometric data) is transmitted to an external entity such as an identity provider, that provider can make the trust level available to all services the user employs and wishes to keep informed. Each service can decide for itself at which trust level it considers a user authenticated. If the trust level falls below the limit, the identity provider can deny use of that service and of others.
The particular advantages of this identity-management approach are that it requires no specific, expensive hardware to evaluate specific data, only smartphones and so-called wearables. Even things such as machines that transmit data about their own behaviour to the internet via sensor chips can be included. The data are collected continuously in the background without anyone having to attend to them; they are relevant only for computing a probability measure and never leave the device. When internet users log in to a service, they need not first recall a previously defined secret, e.g. a password, but merely approve the transmission of their current trust value with an 'OK'.
If usage behaviour changes, for instance through different movements or logins from locations other than the usual ones, this is quickly detected, and unauthorised persons can immediately be denied access to the smartphone or to internet services. In the future, the evaluation of behavioural factors can be extended further, e.g. by capturing routines on weekdays, at weekends, or on holiday. Comparison with the live data then indicates whether the behaviour fits the usual pattern, i.e. whether the user is very probably also the registered owner of the device.
This study provides a comprehensive overview of the techniques of digital identity management and the challenges involved. It first describes the kinds of attacks by which digital identities can be stolen. It then presents the various methods of identity verification. Finally, the study gives a summary overview of the 15 most important protocols and technical standards for communication between the three actors involved: service provider, identity provider, and user. It concludes by presenting current research on identity management at the Hasso Plattner Institute.
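A continuously computed trust level of the kind described here can be sketched minimally as follows, assuming a simple per-feature Gaussian similarity against a stored owner profile. The study does not specify the scoring function, and the profile features and values below are hypothetical.

```python
import math

def trust_level(observed, profile):
    """Compare current sensor features to the stored owner profile.
    Each feature contributes a similarity score in (0, 1] that decays
    with the z-distance from the owner's typical value; the trust
    level is the average over all features."""
    scores = []
    for name, (mean, sd) in profile.items():
        z = abs(observed[name] - mean) / sd
        scores.append(math.exp(-0.5 * z * z))  # Gaussian-shaped similarity
    return sum(scores) / len(scores)

# hypothetical owner profile: feature -> (typical mean, standard deviation)
profile = {
    "step_cadence_hz": (1.8, 0.2),  # walking rhythm from accelerometer
    "login_hour": (9.0, 2.0),       # usual time of day for logins
}

t_owner = trust_level({"step_cadence_hz": 1.75, "login_hour": 10.0}, profile)
t_stranger = trust_level({"step_cadence_hz": 2.6, "login_hour": 3.0}, profile)
```

Only the aggregate score would ever leave the device; a service comparing it against its own threshold (say 0.7) would accept the first user and reject the second, which mirrors the threshold decision each service makes in the proposed architecture.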
Swearing in a public place
(2017)
The paper deals with the usage of swear words on the online forum "reddit". Three research questions are addressed:
How often are swear words used?
How are these swear words received by other users?
Does the topic of the conversation have an influence on the reception and amount of usage of swear words?
The corpus from which the results are drawn comprises almost 900 million words, taken from February 2017. Compared to other, similar studies, the corpus is considerably larger and more contemporary.
In addition, the theoretical part discusses the linguistic basics of swear words, including concepts such as politeness theory, taboos and their associated words, and censorship. This is done to explain the factors that influence the use of swear words and why swear words are so special in comparison to other word groups. Furthermore, research results from other corpora are presented and later compared with the present results. These include corpora that are likewise composed of online communication, as well as corpora that reproduce spoken language. All the corpora presented cover the English language.
The results of this study indicate that the swear words on "reddit" are used approximately as often as they are on other platforms. The perception of these swear words is mostly positive, which suggests that the use of swear words on "reddit" is not perceived as impolite. In addition, an influence of the discussion topic on the frequency and reception of swear words could be determined.
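The frequency measure underlying such a corpus study can be illustrated in a few lines. This is only a sketch: the word list, the tokenizer, and the per-million-words normalization are illustrative choices, not the study's actual lexicon or pipeline:

```python
from collections import Counter
import re

SWEAR_WORDS = {"damn", "hell", "crap"}  # illustrative list, not the study's lexicon

def swear_rate(comments):
    """Return swear-word frequency per million words and per-word counts."""
    tokens = [t for c in comments for t in re.findall(r"[a-z']+", c.lower())]
    counts = Counter(t for t in tokens if t in SWEAR_WORDS)
    total = len(tokens)
    rate = 1_000_000 * sum(counts.values()) / total if total else 0.0
    return rate, counts

rate, counts = swear_rate(["Damn, that was close!",
                           "What the hell happened?",
                           "Nice play."])
# rate -> 200000.0 (2 swear tokens among 10 tokens)
```

Normalizing per million words is what makes the counts comparable across corpora of very different sizes.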
In preparation for participating in the Times Higher Education Ranking, the University of Potsdam measured its publication output in an output analysis. It turned out that the affiliation stated by researchers is an important lever for determining the baseline figure. This article reflects the challenging initial situation.
Water deficit (drought stress) massively restricts plant growth and the yield of crops; reducing the deleterious effects of drought is therefore of high agricultural relevance. Drought triggers diverse cellular processes including the inhibition of photosynthesis, the accumulation of cell‐damaging reactive oxygen species and gene expression reprogramming, besides others. Transcription factors (TF) are central regulators of transcriptional reprogramming and expression of many TF genes is affected by drought, including members of the NAC family. Here, we identify the NAC factor JUNGBRUNNEN1 (JUB1) as a regulator of drought tolerance in tomato (Solanum lycopersicum). Expression of tomato JUB1 (SlJUB1) is enhanced by various abiotic stresses, including drought. Inhibiting SlJUB1 by virus‐induced gene silencing drastically lowers drought tolerance concomitant with an increase in ion leakage, an elevation of hydrogen peroxide (H2O2) levels and a decrease in the expression of various drought‐responsive genes. In contrast, overexpression of AtJUB1 from Arabidopsis thaliana increases drought tolerance in tomato, alongside with a higher relative leaf water content during drought and reduced H2O2 levels. AtJUB1 was previously shown to stimulate expression of DREB2A, a TF involved in drought responses, and of the DELLA genes GAI and RGL1. We show here that SlJUB1 similarly controls the expression of the tomato orthologs SlDREB1, SlDREB2 and SlDELLA. Furthermore, AtJUB1 directly binds to the promoters of SlDREB1, SlDREB2 and SlDELLA in tomato. Our study highlights JUB1 as a transcriptional regulator of drought tolerance and suggests considerable conservation of the abiotic stress‐related gene regulatory networks controlled by this NAC factor between Arabidopsis and tomato.
Technological change confronts organizations with the challenge of putting innovations to productive use as quickly as possible and thereby gaining a competitive advantage. The success of a technology introduction depends strongly on creating acceptance among employees. Existing approaches such as diffusion theory (Rogers, 2003) or the Technology Acceptance Model (Davis, 1989; Venkatesh and Davis, 1996; Venkatesh and Davis, 2000; Venkatesh, Morris et al., 2003) address the organizational context only marginally. Their models target the adoption of a technology by free choice and in a market context. Furthermore, they do not examine the resistance to innovations that can form under mandatory adoption. They are therefore only of limited use for studying technology introduction and acceptance-formation processes in organizations.
The goal of this thesis is therefore to work out the specific influence of the organizational context on acceptance and usage behavior. More concretely, the research question to be answered is what influence different organization types have on the acceptance and usage dynamics within organizations. To this end, existing models from acceptance research are extended and synthesized with organization-specific attributes. The resulting model captures the dynamic development within the organization and thus makes it possible to observe change. The functioning of the developed model is demonstrated in a simulation experiment, and the effect of different organizational forms is illustrated.
The model therefore unites two perspectives: the personal perspective understands acceptance as a cognitive-psychological process at the individual level, based on the calculations and decisions of individual persons. Central here are the contributions of diffusion theory (Rogers, 2003) and the Technology Acceptance Model in its various refinements and modifications (Davis, 1989; Venkatesh and Davis, 1996; Venkatesh and Davis, 2000; Venkatesh, Morris et al., 2003). Individual factors from different fit theories (Goodhue and Thompson, 1995; Floyd, 1986; Liu, Lee and Chen, 2011; Parkes, 2013) are used to enrich these models. Besides the development of a positive, supportive attitude, however, rejection of and open opposition to the innovation must also be taken into account (Patsiotis, Hughes and Webber, 2012).
The organizational perspective, in contrast, sees acceptance decisions as embedded in the social context of the organization. Mutual influence is based on observing the environment and internalizing social pressure. In organizations, this is contrasted with intended influence in the form of steering. Both processes shape the acceptance and usage behavior of employees. Starting from a systems-theoretical concept of organization, different steering media (Luhmann, 1997; Fischer, 2009) are presented. These can be deployed intentionally by steering actors (change agents, management) to shape the acceptance and usage process through interventions.
The effect of these media differs across organization types. To analyze different organization types, Mintzberg's (1979) configurations are used. These are characterized by different coordination mechanisms, which in turn rest on the use of steering media.
The functioning and analytical possibilities of the developed model are demonstrated in a simulation experiment using the simulation platform AnyLogic. The range of validity is examined with a sensitivity analysis.
Specific patterns of usage and acceptance development can be demonstrated in the simulation. Acceptance is characterized by an initial decline followed by damped growth. Usage, in contrast, is enforced quickly in the organization and then remains at a stable level. Different effects were observed for the organization types: the bureaucratic form of steering is suited to increasing usage but fails to raise acceptance. Organizations that rely more on mutual adjustment for coordination increase acceptance, but not usage. Moreover, the development of acceptance in this organization type is very uncertain and shows a wide range of fluctuation.
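The qualitative pattern reported here, with usage enforced quickly while acceptance first dips and then shows damped growth, can be reproduced with a toy difference-equation model. This sketch is not the thesis's AnyLogic model; all update rules and parameter values are illustrative assumptions:

```python
import random

def simulate(steps=60, n=100, mandate=0.3, social=0.05, seed=1):
    """Toy dynamics: usage is pushed up by a managerial mandate; acceptance
    first dips (reactance against forced use) and then grows with peer
    influence. All parameters are illustrative, not taken from the thesis."""
    rng = random.Random(seed)
    acceptance = [rng.uniform(0.4, 0.6) for _ in range(n)]
    usage = [0.0] * n
    mean_acc, mean_use = [], []
    for t in range(steps):
        avg = sum(acceptance) / n
        for i in range(n):
            # the mandate forces usage upward regardless of attitude
            usage[i] = min(1.0, usage[i] + mandate * (1.0 - usage[i]))
            # reactance: early forced use lowers acceptance, fading out by t = 10
            reactance = 0.1 * usage[i] * max(0.0, 1.0 - t / 10)
            delta = social * (avg - acceptance[i]) + 0.01 * avg - reactance
            acceptance[i] = min(1.0, max(0.0, acceptance[i] + delta))
        mean_acc.append(sum(acceptance) / n)
        mean_use.append(sum(usage) / n)
    return mean_acc, mean_use

mean_acc, mean_use = simulate()
```

Plotting `mean_acc` and `mean_use` over the 60 steps shows the initial acceptance dip, the subsequent damped growth, and the fast saturation of usage described in the abstract.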
The approaching physical limits in speed and size of today's magnetic storage and processing technologies demand new concepts for controlling magnetization and motivate research on optically induced magnetic dynamics. Studies on photoinduced magnetization dynamics and their underlying mechanisms have primarily been performed on ferromagnetic metals. Ferromagnetic dynamics is based on the transfer of the conserved angular momentum connected with atomic magnetic moments out of the parallel-aligned magnetic system into other degrees of freedom.
In this thesis the so far rarely studied response of antiferromagnetic order in a metal to ultra-short optical laser pulses is investigated. The experiments were performed at the FemtoSpex slicing facility at the storage ring BESSY II, a unique source of ultra-short elliptically polarized x-ray pulses. Laser-induced changes of the 4f magnetic order parameter in ferro- and antiferromagnetic dysprosium (Dy) were studied by x-ray methods, which yield directly comparable quantities. The discovered fundamental differences in the temporal and spatial behavior of ferro- and antiferromagnetic dynamics are assigned to an additional channel for angular momentum transfer, which reduces the antiferromagnetic order by redistributing angular momentum within the non-parallel aligned magnetic system and hence conserves the zero net magnetization. It is shown that antiferromagnetic dynamics proceeds considerably faster and more energy-efficiently than demagnetization in ferromagnets. By probing antiferromagnetic order in time and space, it is found to be affected along the whole sample depth of an in situ grown 73 nm thick Dy film. Interatomic transfer of angular momentum via fast diffusion of laser-excited 5d electrons is held responsible for this remarkably long-ranging effect. Ultrafast ferromagnetic dynamics can be expected to have the same origin, which however leads to demagnetization only in regions close to interfaces, caused by super-diffusive spin transport. Dynamics due to local scattering processes of excited but less mobile electrons occur in both magnetic alignments only in directly excited regions of the sample and on slower picosecond timescales. The thesis provides fundamental insights into photoinduced magnetic dynamics by directly comparing ferro- and antiferromagnetic dynamics in the same material and by considering the laser-induced magnetic depth profile.
During the practical semester, student teachers gather their first extended practical experience as teachers. It can be assumed that these first experiences can already shape their later teaching, be it as professional orientation or as a small step toward forming their own teaching personality. In his thesis, Mr. Ingmar Thews addresses the important question of the extent to which students experience school as a space for "expansive" or "defensive" learning. Following the subject-scientific theory of Klaus Holzkamp, he conducts a content analysis of audio recordings made during the practical semester. The results offer deeper insight into how students experience school during their practical phases. That defensive learning still makes up a large part of students' experience and is part of everyday school life is a finding that gives pause for thought. With the seminar concept "Fostering expansive learning through process-oriented didactics", Mr. Thews has created a framework that gives students the opportunity to talk about their negative and positive experiences while also reflecting on the design of expansive learning spaces. In the spirit of process-oriented didactics, we would be pleased if Mr. Thews's guide found wide use and extension in other practical-semester seminars.
Anthropogenically amplified erosion leads to increased fine-grained sediment input into the fluvial system in the 15,000 km² Kharaa River catchment in northern Mongolia and constitutes a major stressing factor for the aquatic ecosystem. This study uniquely combines the application of intensive monitoring, source fingerprinting and catchment modelling techniques to allow for the comparison of the credibility and accuracy of each single method. High-resolution discharge data were used in combination with daily suspended solid measurements to calculate the suspended sediment budget and compare it with estimations of the sediment budget model SedNet. The comparison of both techniques showed that the development of an overall sediment budget with SedNet was possible, yielding results in the same order of magnitude (20.3 kt a⁻¹ and 16.2 kt a⁻¹).
Radionuclide sediment tracing, using Be-7, Cs-137 and Pb-210, was applied to differentiate sediment sources for particles < 10 μm from hillslope and riverbank erosion and showed that riverbank erosion generates 74.5% of the suspended sediment load, whereas surface erosion contributes 21.7% and gully erosion only 3.8%. The contribution of the individual sub-catchments of the Kharaa to the suspended sediment load was assessed based on their variation in geochemical composition (e.g. in Ti, Sn, Mo, Mn, As, Sr, B, U, Ca and Sb). These variations were used for sediment source discrimination with geochemical composite fingerprints based on Genetic Algorithm driven Discriminant Function Analysis, the Kruskal–Wallis H-test and Principal Component Analysis. The contributions of the individual sub-catchments varied from 6.4% to 36.2%, generally showing higher contributions from the sub-catchments in the middle, rather than the upstream portions of the study area.
The results indicate that river bank erosion generated by existing grazing practices of livestock is the main cause of elevated fine sediment input. Actions towards the protection of the headwaters and the stabilization of the river banks within the middle reaches were identified as the highest priority. Deforestation by logging, as well as forest fires, should be prevented to avoid increased hillslope erosion in the mountainous areas. Mining activities are of minor importance for the overall catchment sediment load but can constitute locally important point sources for particular heavy metals in the fluvial system.
The effect of cellulose-based polyelectrolytes on biomimetic calcium phosphate mineralization is described. Three cellulose derivatives, a polyanion, a polycation, and a polyzwitterion were used as additives. Scanning electron microscopy, X-ray diffraction, IR and Raman spectroscopy show that, depending on the composition of the starting solution, hydroxyapatite or brushite precipitates form. Infrared and Raman spectroscopy also show that significant amounts of nitrate ions are incorporated in the precipitates. Energy dispersive X-ray spectroscopy shows that the Ca/P ratio varies throughout the samples and resembles that of other bioinspired calcium phosphate hybrid materials. Elemental analysis shows that the carbon (i.e., polymer) contents reach 10% in some samples, clearly illustrating the formation of a true hybrid material. Overall, the data indicate that a higher polymer concentration in the reaction mixture favors the formation of polymer-enriched materials, while lower polymer concentrations or high precursor concentrations favor the formation of products that are closely related to the control samples precipitated in the absence of polymer. The results thus highlight the potential of (water-soluble) cellulose derivatives for the synthesis and design of bioinspired and bio-based hybrid materials.
The present work will introduce a Finite State Machine (FSM) that processes any Collatz Sequence; further, we will endeavor to investigate its behavior in relationship to transformations of a special infinite input. Moreover, we will prove that the machine’s word transformation is equivalent to the standard Collatz number transformation and subsequently discuss the possibilities for use of this approach at solving similar problems. The benefit of this approach is that the investigation of the word transformation performed by the Finite State Machine is less complicated than the traditional number-theoretical transformation.
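The number transformation that the paper's Finite State Machine mirrors is easy to state in code. The sketch below shows only the standard Collatz number transformation, not the FSM word transformation developed in the paper:

```python
def collatz_step(n):
    """The standard Collatz number transformation: n/2 if even, 3n+1 if odd."""
    return n // 2 if n % 2 == 0 else 3 * n + 1

def collatz_sequence(n):
    """Iterate the transformation until the trivial cycle value 1 is reached."""
    seq = [n]
    while n != 1:
        n = collatz_step(n)
        seq.append(n)
    return seq

# collatz_sequence(6) -> [6, 3, 10, 5, 16, 8, 4, 2, 1]
```

The paper's claim is that an FSM operating on a suitable word encoding of n performs a transformation equivalent to `collatz_step`, which makes the iteration amenable to automata-theoretic rather than number-theoretic analysis.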
Lafora disease (LD, OMIM #254780) is a rare, recessively inherited neurodegenerative disease with adolescent onset, resulting in progressive myoclonus epilepsy which is fatal usually within ten years of symptom onset. The disease is caused by loss-of-function mutations in either of the two genes EPM2A (laforin) or EPM2B (malin). It characteristically involves the accumulation of insoluble glycogen-derived particles, named Lafora bodies (LBs), which are considered neurotoxic and causative of the disease. The pathogenesis of LD is therefore centred on the question of how insoluble LBs emerge from soluble glycogen. Recent data clearly show that an abnormal glycogen chain length distribution, but neither hyperphosphorylation nor impairment of general autophagy, strictly correlates with glycogen accumulation and the presence of LBs. This review summarizes results obtained with patients, mouse models, and cell lines and consolidates apparent paradoxes in the LD literature. Based on the growing body of evidence, it proposes that LD is predominantly caused by an impairment in chain-length regulation affecting only a small proportion of the cellular glycogen. A better grasp of LD pathogenesis will further develop our understanding of glycogen metabolism and structure. It will also facilitate the development of clinical interventions that appropriately target the underlying cause of LD.
Age of Empires 3
(2017)
As an emerging sub-field of music information retrieval (MIR), music imagery information retrieval (MIIR) aims to retrieve information from brain activity recorded during music cognition, such as listening to or imagining music pieces. This is a highly inter-disciplinary endeavor that requires expertise in MIR as well as cognitive neuroscience and psychology. The OpenMIIR initiative strives to foster collaborations between these fields to advance the state of the art in MIIR. As a first step, electroencephalography (EEG) recordings of music perception and imagination have been made publicly available, enabling MIR researchers to easily test and adapt their existing approaches for music analysis like fingerprinting, beat tracking or tempo estimation on this new kind of data. This paper reports on first results of MIIR experiments using these OpenMIIR datasets and points out how these findings could drive new research in cognitive neuroscience.
Age-related decline in executive functions and postural control due to degenerative processes in the central nervous system have been related to increased fall-risk in old age. Many studies have shown cognitive-postural dual-task interference in old adults, but research on the role of specific executive functions in this context has just begun. In this study, we addressed the question whether postural control is impaired depending on the coordination of concurrent response-selection processes related to the compatibility of input and output modality mappings as compared to impairments related to working-memory load in the comparison of cognitive dual and single tasks. Specifically, we measured total center of pressure (CoP) displacements in healthy female participants aged 19–30 and 66–84 years while they performed different versions of a spatial one-back working memory task during semi-tandem stance on an unstable surface (i.e., balance pad) while standing on a force plate. The specific working-memory tasks comprised: (i) modality compatible single tasks (i.e., visual-manual or auditory-vocal tasks), (ii) modality compatible dual tasks (i.e., visual-manual and auditory-vocal tasks), (iii) modality incompatible single tasks (i.e., visual-vocal or auditory-manual tasks), and (iv) modality incompatible dual tasks (i.e., visual-vocal and auditory-manual tasks). In addition, participants performed the same tasks while sitting. As expected from previous research, old adults showed generally impaired performance under high working-memory load (i.e., dual vs. single one-back task). In addition, modality compatibility affected one-back performance in dual-task but not in single-task conditions with strikingly pronounced impairments in old adults. Notably, the modality incompatible dual task also resulted in a selective increase in total CoP displacements compared to the modality compatible dual task in the old but not in the young participants. 
These results suggest that in addition to effects of working-memory load, processes related to simultaneously overcoming special linkages between input- and output modalities interfere with postural control in old but not in young female adults. Our preliminary data provide further evidence for the involvement of cognitive control processes in postural tasks.
Long-term policy issues are a particularly vexing class of environmental policy issues which merit increasing attention due to the long time horizons involved, the incongruity with political cycles, and the challenges for collective action. Following a definition of long-term environmental policy challenges, I pose three questions as challenges for future research: (1) Are present democracies well suited to cope with long-term policy challenges? (2) Are top-down or bottom-up solutions to long-term environmental policy challenges advisable? (3) Will mitigation and adaptation of environmental challenges suffice? In concluding, the contribution raises the issue of credible commitment for long-term policy issues and potential design options.
Flooding is assessed as the most important natural hazard in Europe, causing thousands of deaths, affecting millions of people and accounting for large economic losses in the past decade. Little is known about the damage processes associated with extreme rainfall in cities, due to a lack of accurate, comparable and consistent damage data. The objective of this study is to investigate the impacts of extreme rainfall on residential buildings and how affected households coped with these impacts in terms of precautionary and emergency actions. Analyses are based on a unique dataset of damage characteristics and a wide range of potential damage explaining variables at the household level, collected through computer-aided telephone interviews (CATI) and an online survey. Exploratory data analyses based on a total of 859 completed questionnaires in the cities of Munster (Germany) and Amsterdam (the Netherlands) revealed that the uptake of emergency measures is related to characteristics of the hazardous event. In case of high water levels, more efforts are made to reduce damage, while emergency response that aims to prevent damage is less likely to be effective. The difference in magnitude of the events in Munster and Amsterdam, in terms of rainfall intensity and water depth, is probably also the most important cause for the differences between the cities in terms of the suffered financial losses. Factors that significantly contributed to damage in at least one of the case studies are water contamination, the presence of a basement in the building and people's awareness of the upcoming event. Moreover, this study confirms conclusions by previous studies that people's experience with damaging events positively correlates with precautionary behaviour. For improving future damage data acquisition, we recommend the inclusion of cell phones in a CATI survey to avoid biased sampling towards certain age groups.
During the drug discovery & development process, several phases encompassing a number of preclinical and clinical studies have to be successfully passed to demonstrate the safety and efficacy of a new drug candidate. As part of these studies, the characterization of the drug's pharmacokinetics (PK) is an important aspect, since the PK is assumed to strongly impact safety and efficacy. To this end, drug concentrations are measured repeatedly over time in a study population. The objectives of such studies are to describe the typical PK time course and the associated variability between subjects, and to identify the underlying sources that contribute significantly to this variability, e.g. the use of comedication. The most commonly used statistical framework to analyse repeated measurement data is the nonlinear mixed effects (NLME) approach. At the same time, ample knowledge about the drug's properties already exists and has been accumulating during the discovery & development process: before any drug is tested in humans, detailed knowledge about its PK in different animal species has to be collected. This drug-specific knowledge, together with general knowledge about the species' physiology, is exploited in mechanistic physiologically based PK (PBPK) modeling approaches; it is, however, ignored in the classical NLME modeling approach.
Mechanistic physiologically based models aim to incorporate the relevant known physiological processes that contribute to the overall process of interest. In comparison to data-driven models, they are usually more complex from a mathematical perspective. For example, in many situations the number of model parameters exceeds the number of measurements, making reliable parameter estimation more difficult and partly impossible. As a consequence, the integration of powerful statistical estimation approaches such as the NLME modeling approach, which is widely used in data-driven modeling, with the mechanistic modeling approach is not well established; the observed data are rather used as a confirming instead of a model-informing and model-building input.
Another aggravating circumstance for an integrated approach is the inaccessibility of the details of the NLME methodology, which would be needed to adapt these approaches to the specifics and needs of mechanistic modeling. Although the NLME modeling approach has existed for several decades, the details of the mathematical methodology are scattered across a wide range of literature, and a comprehensive, rigorous derivation is lacking. The available literature usually covers only selected parts of the mathematical methodology. Sometimes important steps are not described or are only heuristically motivated, e.g. the iterative algorithm that finally determines the parameter estimates.
Thus, in the present thesis the mathematical methodology of NLME modeling is systematically described and complemented into a comprehensive account, comprising the common thread from ideas and motivation to the final parameter estimation. New insights into the interpretation of the different approximation methods used in the context of the NLME modeling approach are given and illustrated, and similarities and differences between them are outlined. Based on these findings, an expectation-maximization (EM) algorithm to determine the estimates of an NLME model is described.
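The structure of such an EM algorithm can be illustrated on the simplest mixed-effects model, a linear random-intercept model rather than a full NLME model. The model, the closed-form E-step, and the M-step updates below are textbook material, not the specific algorithm derived in this thesis:

```python
import numpy as np

def em_random_intercept(y, iters=200):
    """EM estimation for the linear mixed model y_ij = mu + b_i + eps_ij,
    with b_i ~ N(0, omega2) and eps_ij ~ N(0, sigma2). 'y' is a list of
    1-D arrays, one per subject. A minimal sketch of the E- and M-steps."""
    mu = np.mean(np.concatenate(y))
    omega2, sigma2 = 1.0, 1.0
    N = sum(len(yi) for yi in y)
    for _ in range(iters):
        # E-step: posterior mean and variance of each random effect b_i
        b_hat, b_var = [], []
        for yi in y:
            n_i = len(yi)
            b_var.append(omega2 * sigma2 / (sigma2 + n_i * omega2))
            b_hat.append(omega2 * n_i / (sigma2 + n_i * omega2) * (yi.mean() - mu))
        # M-step: maximize the expected complete-data log-likelihood
        mu = np.mean(np.concatenate([yi - bh for yi, bh in zip(y, b_hat)]))
        omega2 = float(np.mean([bh**2 + v for bh, v in zip(b_hat, b_var)]))
        sigma2 = sum(float(np.sum((yi - mu - bh)**2) + len(yi) * v)
                     for yi, bh, v in zip(y, b_hat, b_var)) / N
    return mu, omega2, sigma2
```

The E-step replaces the unobserved random effects by their conditional moments given the data; the M-step then updates the population parameters as if those moments were observed, which is exactly the pattern the full NLME algorithm follows with approximations in place of the closed forms.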
Using the EM algorithm and the lumping methodology of Pilari (2010), a new approach to combining PBPK and NLME modeling is presented and exemplified for the antibiotic levofloxacin. The lumping identifies which processes are informed by the available data, and the respective model reduction improves the robustness of parameter estimation. Furthermore, it is shown how a priori known factors influencing the variability, and a priori known unexplained variability, are incorporated to further mechanistically drive the model development. In conclusion, correlations between parameters and between covariates are automatically accounted for due to the mechanistic derivation of the lumping and the covariate relationships.
A useful feature of PBPK models compared to classical data-driven PK models is the possibility of predicting drug concentrations within all organs and tissues of the body. Thus, the resulting PBPK model for levofloxacin is used to predict drug concentrations and their variability within soft tissues, the site of action of levofloxacin. These predictions are compared with data from muscle and adipose tissue obtained by microdialysis, an invasive technique that measures a proportion of drug in the tissue and thereby approximates the concentrations in the interstitial fluid. Because comparisons of human in vivo tissue PK with PBPK predictions have so far not been established, a new conceptual framework is derived. The comparison of PBPK model predictions and microdialysis measurements shows adequate agreement and reveals further strengths of the presented new approach.
We demonstrated how mechanistic PBPK models, which are usually developed in the early stages of drug development, can be used as a basis for model building in the analysis of later stages, i.e. in clinical studies. As a consequence, the extensively collected and accumulated knowledge about species and drug is utilized and updated with specific volunteer or patient data. The NLME approach combined with mechanistic modeling reveals new insights for the mechanistic model, for example the identification and quantification of variability in mechanistic processes. This represents a further contribution to the learn & confirm paradigm across the different stages of drug development.
Finally, the applicability of mechanism-driven model development is demonstrated on an example from the field of quantitative psycholinguistics, analysing repeated eye-movement data. Our approach gives new insight into the interpretation of these experiments and the processes behind them.
High Mountain Asia (HMA) - encompassing the Tibetan Plateau and surrounding mountain ranges - is the primary water source for much of Asia, serving more than a billion downstream users. Many catchments receive the majority of their yearly water budget in the form of snow, which is poorly monitored by sparse in situ weather networks. Both the timing and volume of snowmelt play critical roles in downstream water provision, as many applications - such as agriculture, drinking-water generation, and hydropower - rely on consistent and predictable snowmelt runoff. Here, we examine passive microwave data across HMA with five sensors (SSMI, SSMIS, AMSR-E, AMSR2, and GPM) from 1987 to 2016 to track the timing of the snowmelt season - defined here as the time between maximum passive microwave signal separation and snow clearance. We validated our method against climate model surface temperatures, optical remote-sensing snow-cover data, and a manual control dataset (n = 2100, 3 variables at 25 locations over 28 years); our algorithm is generally accurate within 3-5 days. Using the algorithm-generated snowmelt dates, we examine the spatiotemporal patterns of the snowmelt season across HMA. The climatically short (29-year) time series, along with complex interannual snowfall variations, makes determining trends in snowmelt dates at a single point difficult. We instead identify trends in snowmelt timing by using hierarchical clustering of the passive microwave data to determine trends in self-similar regions. We make the following four key observations. (1) The end of the snowmelt season is trending almost universally earlier in HMA (negative trends). Changes in the end of the snowmelt season are generally between 2 and 8 days per decade over the 29-year study period (5-25 days total). The length of the snowmelt season is thus shrinking in many, though not all, regions of HMA.
Some areas exhibit later peak signal separation (positive trends), but with generally smaller magnitudes than trends in snowmelt end. (2) Areas with long snowmelt periods, such as the Tibetan Plateau, show the strongest compression of the snowmelt season (negative trends). These trends are apparent regardless of the time period over which the regression is performed. (3) While trends averaged over 3 decades indicate generally earlier snowmelt seasons, data from the last 14 years (2002-2016) exhibit positive trends in many regions, such as parts of the Pamir and Kunlun Shan. Due to the short nature of the time series, it is not clear whether this change is a reversal of a long-term trend or simply interannual variability. (4) Some regions with stable or growing glaciers - such as the Karakoram and Kunlun Shan - see slightly later snowmelt seasons and longer snowmelt periods. It is likely that changes in the snowmelt regime of HMA account for some of the observed heterogeneity in glacier response to climate change. While the decadal increases in regional temperature have in general led to earlier and shortened melt seasons, changes in HMA's cryosphere have been spatially and temporally heterogeneous.
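The per-region trend estimates discussed above reduce, in essence, to fitting a line to melt-season timing over the years. A minimal sketch, with synthetic data standing in for the passive microwave record:

```python
import numpy as np

def melt_end_trend(years, doy):
    """Least-squares trend of snowmelt-end timing, in days per decade.
    'years' and 'doy' (day of year of snow clearance) are equal-length
    1-D sequences; negative values mean the melt season ends earlier."""
    slope, _ = np.polyfit(np.asarray(years, float), np.asarray(doy, float), 1)
    return 10.0 * slope  # days/yr -> days per decade

# Synthetic region: melt end shifting ~0.4 days/yr earlier over 1987-2016,
# with interannual noise of a few days
years = np.arange(1987, 2017)
doy = 160 - 0.4 * (years - 1987) + np.random.default_rng(42).normal(0, 3, years.size)
trend = melt_end_trend(years, doy)  # roughly -4 days per decade
```

Averaging such slopes over self-similar clusters rather than single pixels is what makes the trend detectable despite the short, noisy record.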