Background: Inferring regulatory interactions between genes from time-resolved transcriptomics data, yielding reverse-engineered gene regulatory networks, is of paramount importance to systems biology and bioinformatics studies. Accurate methods to address this problem can ultimately provide deeper insight into the complexity, behavior, and functions of the underlying biological systems. However, the large number of interacting genes, coupled with short and often noisy time-resolved read-outs of the system, renders reverse engineering a challenging task. Therefore, the development and assessment of methods which are computationally efficient, robust against noise, applicable to short time series data, and preferably capable of reconstructing the directionality of the regulatory interactions remains a pressing research problem with valuable applications.
Results: Here we perform the largest systematic analysis of a set of similarity measures and scoring schemes within the scope of the relevance network approach, which are commonly used for gene regulatory network reconstruction from time series data. In addition, we define and analyze several novel measures and schemes which are particularly suitable for short transcriptomics time series. We also compare the 21 considered measures and 6 scoring schemes according to their ability to correctly reconstruct such networks from short time series data by calculating summary statistics based on the corresponding specificity and sensitivity. Our results demonstrate that rank- and symbol-based measures have the highest performance in inferring regulatory interactions. In addition, the proposed asymmetric weighting scoring scheme proved valuable in reducing the number of false positive interactions. On the other hand, Granger causality as well as information-theoretic measures, frequently used in the inference of regulatory networks, show low performance on the short time series analyzed in this study.
Conclusions: Our study is intended to serve as a guide for choosing a particular combination of similarity measures and scoring schemes suitable for the reconstruction of gene regulatory networks from short time series data. We show that further improvement of algorithms for reverse engineering can be obtained if one considers measures rooted in the study of symbolic dynamics or ranks, in contrast to common similarity measures which do not consider the temporal character of the employed data. Moreover, we establish that the asymmetric weighting scoring scheme together with symbol-based measures (for low noise levels) and rank-based measures (for high noise levels) are the most suitable choices.
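As a minimal illustration of the relevance network approach discussed above, the sketch below computes a rank-based similarity (Spearman correlation, one representative of the rank measures the study favours) between short expression profiles and keeps edges whose absolute score exceeds a threshold. Gene names, data, and the 0.8 threshold are illustrative assumptions, not the authors' implementation.

```python
from itertools import combinations

def ranks(series):
    """Ordinal ranks (1-based) of a time series; ties broken by position,
    which is adequate for the continuous-valued toy data below."""
    order = sorted(range(len(series)), key=lambda i: series[i])
    r = [0.0] * len(series)
    for rank, idx in enumerate(order, start=1):
        r[idx] = float(rank)
    return r

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def spearman(x, y):
    """Rank-based similarity: Pearson correlation of the ranks."""
    return pearson(ranks(x), ranks(y))

def relevance_network(expr, threshold=0.8):
    """Relevance network: keep an undirected edge between two genes if the
    absolute rank correlation of their profiles exceeds the threshold."""
    scores = {(g1, g2): spearman(expr[g1], expr[g2])
              for g1, g2 in combinations(sorted(expr), 2)}
    return {pair: s for pair, s in scores.items() if abs(s) >= threshold}

# Toy example: three genes, five time points (illustrative data only).
expr = {"geneA": [1.0, 2.1, 3.0, 4.2, 5.1],
        "geneB": [0.9, 2.0, 3.2, 4.0, 5.3],   # co-varies with geneA
        "geneC": [5.0, 1.2, 4.1, 0.8, 2.2]}   # unrelated profile
net = relevance_network(expr)  # only the geneA-geneB edge survives
```

Rank-based scores are attractive for short, noisy series because they are invariant under monotone distortions of the read-out and less sensitive to single outlying time points than value-based correlation.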
The femtosecond excited-state dynamics following resonant photoexcitation enable the selective deformation of N-H and N-C chemical bonds in 2-thiopyridone in aqueous solution with optical or X-ray pulses. In combination with multiconfigurational quantum-chemical calculations, the orbital-specific electronic structure and its ultrafast dynamics accessed with resonant inelastic X-ray scattering at the N 1s level using synchrotron radiation and the soft X-ray free-electron laser LCLS provide direct evidence for this controlled photoinduced molecular deformation and its ultrashort time-scale.
Many studies have demonstrated interactions between number processing and either spatial codes (effects of spatial-numerical associations) or visual size-related codes (size-congruity effect). However, the interrelatedness of these two number couplings is still unclear. The present study examines the simultaneous occurrence of space- and size-numerical congruency effects and their interactions, both within and across trials, in a magnitude judgment task in which physically small or large digits were presented left or right of screen center. The reaction time analysis revealed that space- and size-congruency effects coexisted in parallel and combined additively. Moreover, a selective sequential modulation of the two congruency effects was found. The size-congruency effect was reduced after size-incongruent trials. The space-congruency effect, however, was only affected by the previous space congruency. The observed independence of spatial-numerical and within-magnitude associations is interpreted as evidence that the two couplings reflect different attributes of numerical meaning, possibly related to ordinality and cardinality.
In the context of back pain, great emphasis has been placed on the importance of trunk stability, especially in situations requiring compensation of repetitive, intense loading induced during high-performance activities, e.g., jumping or landing. This study aims to evaluate trunk muscle activity during drop jumps in adolescent athletes with back pain (BP) compared to athletes without back pain (NBP). Eleven adolescent athletes suffering from back pain (BP: m/f: n = 4/7; 15.9 ± 1.3 y; 176 ± 11 cm; 68 ± 11 kg; 12.4 ± 10.5 h/week training) and 11 matched athletes without back pain (NBP: m/f: n = 4/7; 15.5 ± 1.3 y; 174 ± 7 cm; 67 ± 8 kg; 14.9 ± 9.5 h/week training) were evaluated. Subjects conducted 3 drop jumps onto a force plate (ground reaction force). Bilateral 12-lead SEMG (surface electromyography) was applied to assess trunk muscle activity. Ground contact time [ms], maximum vertical jump force [N], jump time [ms] and the jump performance index [m/s] were calculated for the drop jumps. SEMG amplitudes (RMS: root mean square [%]) for all 12 single muscles were normalized to MIVC (maximum isometric voluntary contraction) and analyzed in 4 time windows (100 ms pre- and 200 ms post-initial ground contact, 100 ms pre- and 200 ms post-landing) as outcome variables. In addition, muscles were grouped and analyzed as ventral and dorsal muscles, as well as straight and transverse trunk muscles. Drop jump ground reaction force variables did not differ between NBP and BP (p > 0.05). Mm obliquus externus and internus abdominis presented higher SEMG amplitudes (1.3–1.9-fold) for BP (p < 0.05). Mm rectus abdominis, erector spinae thoracic/lumbar and latissimus dorsi did not differ (p > 0.05). The muscle group analysis over the whole jumping cycle showed statistically significantly higher SEMG amplitudes for BP in the ventral (p = 0.031) and transverse muscles (p = 0.020) compared to NBP.
Higher activity of transverse, but not straight, trunk muscles might indicate a specific compensation strategy to support trunk stability in athletes with back pain during drop jumps. Therefore, exercises favoring the transverse trunk muscles could be recommended for back pain treatment.
Trunk loading and back pain
(2017)
An essential function of the trunk is the compensation of external forces and loads in order to guarantee stability. Stabilising the trunk during sudden, repetitive loading in everyday tasks, as well as during performance, is important in order to protect against injury. Hence, reduced trunk stability is accepted as a risk factor for the development of back pain (BP). Back pain patients (BPP) show an altered activity pattern, including extended response and activation times and increased co-contraction of the trunk muscles, as well as a reduced range of motion and increased movement variability of the trunk. These differences to healthy controls (H) have been evaluated primarily in quasi-static test situations involving isolated loading applied directly to the trunk. Nevertheless, transferability to everyday, dynamic situations is under debate. Therefore, the aim of this project is to analyse 3-dimensional motion and neuromuscular reflex activity of the trunk in response to dynamic trunk loading in healthy subjects (H) and back pain patients (BPP).
A measurement tool was developed to assess trunk stability, consisting of dynamic test situations. During these tests, loading of the trunk is generated by the upper and lower limbs with and without additional perturbation; lifting of objects and stumbling while walking serve as adequate representations of such loading. With the help of a 12-lead EMG, neuromuscular activity of the muscles encompassing the trunk was assessed. In addition, three-dimensional trunk motion was analysed using a newly developed multi-segmental trunk model. The set-up was checked for reproducibility as well as validity. Afterwards, the defined measurement set-up was applied to assess trunk stability in comparisons of healthy subjects and back pain patients.
Clinically acceptable to excellent reliability could be shown for the methods (EMG/kinematics) used in the test situations. No changes in trunk motion pattern could be observed in healthy adults during continuous loading (lifting of objects) with different weights. In contrast, sudden loading of the trunk through perturbations to the lower limbs during walking led to increased neuromuscular activity and range of motion (ROM) of the trunk. Moreover, BPP showed a delayed muscle response time and an extended duration until maximum neuromuscular activity in response to sudden walking perturbations compared to healthy controls. In addition, reduced lateral flexion of the trunk during perturbation could be shown in BPP.
It is concluded that perturbed gait is suitable to provoke higher demands on trunk stability in adults. The altered neuromuscular and kinematic compensation pattern in back pain patients (BPP) can be interpreted as increased spine loading and reduced trunk stability in patients. This novel assessment of trunk stability is therefore suitable to identify deficits in BPP. Affected BPP should be assigned to therapy interventions that focus on stabilising the trunk and aim to improve neuromuscular control in dynamic situations. Hence, sensorimotor training (SMT) to enhance trunk stability and the compensation of unexpected sudden loading should be preferred.
Trends in precipitation over Germany and the Rhine basin related to changes in weather patterns
(2017)
Precipitation, as the central meteorological feature for agriculture, water security, and human well-being, among others, has long received special attention. Lack of precipitation may have devastating effects such as crop failure and water scarcity. Abundance of precipitation, on the other hand, may likewise result in hazardous events such as flooding and, again, crop failure. Thus, great effort has been spent on tracking changes in precipitation and relating them to underlying processes. Particularly in the face of global warming, and given the link between temperature and atmospheric water-holding capacity, research is needed to understand the effect of climate change on precipitation.
The present work aims at understanding past changes in precipitation and other meteorological variables. Trends were detected for various time periods and related to associated changes in large-scale atmospheric circulation. The results derived in this thesis may be used as the foundation for attributing changes in floods to climate change. Assumptions needed for the downscaling of large-scale circulation model output to local climate stations are tested and verified here.
In a first step, changes in precipitation over Germany were detected, focussing not only on precipitation totals, but also on properties of the statistical distribution, transition probabilities as a measure for wet/dry spells, and extreme precipitation events.
Shifting the spatial focus to the Rhine catchment as one of the major water lifelines of Europe and the largest river basin in Germany, detected trends in precipitation and other meteorological variables were analysed in relation to states of an "optimal" weather pattern classification. The weather pattern classification was developed seeking the best skill in explaining the variance of local climate variables.
The last question addressed whether observed changes in local climate variables are attributable to changes in the frequency of weather patterns or rather to changes within the patterns themselves. A common assumption for a downscaling approach using weather patterns and a stochastic weather generator is that climate change is expressed only as a changed occurrence of patterns, with the pattern properties remaining constant. This assumption was validated and the ability of the latest generation of general circulation models to reproduce the weather patterns was evaluated.
Paper 1
Precipitation changes in Germany in the period 1951-2006 can be summarised briefly as negative in summer and positive in all other seasons. Different precipitation characteristics confirm the trends in total precipitation: while winter mean and extreme precipitation have increased, wet spells tend to be longer as well (expressed as increased probability for a wet day followed by another wet day). For summer the opposite was observed: reduced total precipitation, supported by decreasing mean and extreme precipitation and reflected in an increasing length of dry spells.
Apart from this general summary for the whole of Germany, the spatial distribution within the country is much more differentiated. Increases in winter precipitation are most pronounced in the north-west and south-east of Germany, while precipitation increases are highest in the west for spring and in the south for autumn. Decreasing summer precipitation was observed in most regions of Germany, with particular focus on the south and west.
The seasonal picture, however, was again differently represented in the contributing months, e.g. increasing autumn precipitation in the south of Germany is formed by strong trends in the south-west in October and in the south-east in November. These results emphasise the high spatial and temporal organisation of precipitation changes.
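The wet-spell measure used in the summary above, the probability of a wet day being followed by another wet day, can be estimated from a daily series by simple counting. The sketch below is a generic illustration, not the thesis code, and the 1 mm wet-day threshold is an assumption:

```python
def wet_wet_probability(precip, wet_threshold=1.0):
    """P(wet day followed by wet day): among all wet days that have a
    successor in the record, the fraction whose next day is also wet.
    precip is a daily series in mm; wet_threshold (mm) is an assumption."""
    wet = [p >= wet_threshold for p in precip]
    pairs = zip(wet[:-1], wet[1:])
    successors = [nxt for cur, nxt in pairs if cur]
    if not successors:
        return float("nan")
    return sum(successors) / len(successors)

# Toy daily series (mm): a three-day wet spell, then an isolated wet day.
series = [0.0, 3.2, 5.1, 2.4, 0.0, 0.0, 1.5, 0.0]
p_ww = wet_wet_probability(series)
# 4 wet days have a successor; 2 of those successors are wet -> 0.5
```

An increasing p_ww over the years corresponds to lengthening wet spells, the interpretation given for winter in the abstract.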
Paper 2
The next step towards attributing precipitation trends to changes in large-scale atmospheric patterns was the derivation of a weather pattern classification that sufficiently stratifies the local climate variables under investigation. Focussing on temperature, radiation, and humidity in addition to precipitation, a classification based on mean sea level pressure, near-surface temperature, and specific humidity was found to have the best skill in explaining the variance of the local variables. A rather high number of 40 patterns was selected, allowing typical pressure patterns to be assigned to specific seasons by the associated temperature patterns. While the skill in explaining precipitation variance is rather low, better skill was achieved for radiation and, of course, temperature.
Most of the recent GCMs from the CMIP5 ensemble were found to reproduce these weather patterns sufficiently well in terms of frequency, seasonality, and persistence.
Paper 3
Finally, the weather patterns were analysed for trends in pattern frequency, seasonality, persistence, and trends in pattern-specific precipitation and temperature. To overcome uncertainties in trend detection resulting from the selected time period, all possible periods in 1901-2010 with a minimum length of 31 years were considered. Thus, the assumption of a constant link between patterns and local weather was tested rigorously. This assumption was found to hold true only partly. While changes in temperature are mainly attributable to changes in pattern frequency, for precipitation a substantial amount of change was detected within individual patterns.
Magnitude and even sign of trends depend highly on the selected time period. The frequency of certain patterns is related to the long-term variability of large-scale circulation modes.
Changes in precipitation were found to be heterogeneous not only in space, but also in time - statements on trends are only valid for the specific time period under investigation. While some part of the trends can be attributed to changes in the large-scale circulation, distinct changes were found within single weather patterns as well.
The results emphasise the need to analyse multiple periods for thorough trend detection wherever possible and add some note of caution to the application of downscaling approaches based on weather patterns, as they might misinterpret the effect of climate change due to neglecting within-type trends.
Translating innovation
(2017)
This doctoral thesis studies the process of innovation adoption in public administrations, addressing the research question of how an innovation is translated to a local context. The study empirically explores Design Thinking as a new problem-solving approach introduced by a federal government organisation in Singapore. With a focus on user-centeredness, collaboration, and iteration, Design Thinking seems to offer a new way to engage recipients and other stakeholders of public services as well as to re-think the policy design process from a user's point of view. Pioneered in the private sector, early adopters of the methodology include civil services in Australia, Denmark, the United Kingdom, the United States as well as Singapore. Hitherto, there is not much evidence on how and for which purposes Design Thinking is used in the public sector.
For the purpose of this study, innovation adoption is framed in an institutionalist perspective addressing how concepts are translated to local contexts. The study rejects simplistic views of the innovation adoption process, in which an idea diffuses to another setting without adaptation. The translation perspective is fruitful because it captures the multidimensionality and ‘messiness’ of innovation adoption. More specifically, the overall research question addressed in this study is: How has Design Thinking been translated to the local context of the public sector organisation under investigation? And from a theoretical point of view: What can we learn from translation theory about innovation adoption processes?
Moreover, there are only a few empirical studies of organisations adopting Design Thinking, and most of them focus on private organisations. We know very little about how Design Thinking is embedded in public sector organisations. This study therefore provides further empirical evidence of how Design Thinking is used in a public sector organisation, especially with regard to its application to policy work, which has so far been under-researched.
An exploratory single case study approach was chosen to provide an in-depth analysis of the innovation adoption process. Based on a purposive, theory-driven sampling approach, a Singaporean Ministry was selected because it represented an organisational setting in which Design Thinking had been embedded for several years, making it a relevant case with regard to the research question. Following a qualitative research design, 28 semi-structured interviews (45-100 minutes) with employees and managers were conducted. The interview data were triangulated with observations and documents collected during a field research stay in Singapore.
The empirical study of innovation adoption in a single organisation focused on the intra-organisational perspective, with the aim of capturing the variations of translation that occur during the adoption process. In so doing, this study first opened the black box often assumed in implementation studies. Second, this research advances translation studies not only by showing variance, but also by deriving explanatory factors. The main differences in the translation of Design Thinking occurred between service delivery and policy divisions, as well as between the first adopter and the rest of the organisation. Five factors played a role in the intra-organisational translation of Design Thinking in the Singaporean Ministry: task type, mode of adoption, type of expertise, sequence of adoption, and the adoption of similar practices.
Working memory (WM) performance declines with age. However, several studies have shown that WM training may lead to performance increases not only in the trained task, but also in untrained cognitive transfer tasks. It has been suggested that transfer effects occur if training task and transfer task share specific processing components that are supposedly processed in the same brain areas. In the current study, we investigated whether single-task WM training and training-related alterations in neural activity might support performance in a dual-task setting, thus assessing transfer effects to higher-order control processes in the context of dual-task coordination. A sample of older adults (age 60–72) was assigned to either a training or a control group. The training group participated in 12 sessions of an adaptive n-back training. At pre- and post-measurement, all participants performed a multimodal dual-task to assess transfer effects. This task consisted of two simultaneous delayed match-to-sample WM tasks using two different stimulus modalities (visual and auditory) that were performed either in isolation (single-task) or in conjunction (dual-task). A subgroup also participated in functional magnetic resonance imaging (fMRI) during the performance of the n-back task before and after training. While no transfer to single-task performance was found, dual-task costs in both the visual modality (p < 0.05) and the auditory modality (p < 0.05) decreased at post-measurement in the training but not in the control group. In the fMRI subgroup of the training participants, neural activity changes in left dorsolateral prefrontal cortex (DLPFC) during one-back predicted post-training auditory dual-task costs, while neural activity changes in right DLPFC during three-back predicted visual dual-task costs. Results might indicate an improvement in central executive processing that could facilitate both WM and dual-task coordination.
Standard Indonesian, a language with a transparent orthography, is spoken by over 160 million people and is the primary language of instruction in education and government in Indonesia. An assessment battery of reading and reading-related skills was developed as a starting point for the diagnosis of dyslexia in beginner learners. Founded on the International Dyslexia Association's definition of dyslexia, the test battery comprises nine empirically motivated reading and reading-related tasks assessing word reading, pseudoword reading, arithmetic, rapid automatized naming, phoneme deletion, forward and backward digit span, verbal fluency, orthographic choice (spelling), and writing. The test was validated by computing the relationships between the outcomes on the reading-skill and reading-related measures by means of correlation and factor analyses. External variables, i.e., school grades and teacher ratings of the reading and learning abilities of individual students, were also utilized to provide evidence of its construct validity. Four variables were found to be significantly related to reading-skill measures: phonological awareness, rapid naming, spelling, and digit span. The current study on reading development in Standard Indonesian confirms findings from other languages with transparent orthographies and suggests a test battery, including preliminary norm scores, for screening and assessment of elementary school children learning to read Standard Indonesian.
As an emerging sub-field of music information retrieval (MIR), music imagery information retrieval (MIIR) aims to retrieve information from brain activity recorded during music cognition, such as listening to or imagining music pieces. This is a highly inter-disciplinary endeavor that requires expertise in MIR as well as cognitive neuroscience and psychology. The OpenMIIR initiative strives to foster collaborations between these fields to advance the state of the art in MIIR. As a first step, electroencephalography (EEG) recordings of music perception and imagination have been made publicly available, enabling MIR researchers to easily test and adapt their existing approaches for music analysis, like fingerprinting, beat tracking or tempo estimation, on this new kind of data. This paper reports on first results of MIIR experiments using these OpenMIIR datasets and points out how these findings could drive new research in cognitive neuroscience.
We present a set-up combining liquid flatjet sample delivery and an MHz laser system for time-resolved soft X-ray absorption measurements of liquid samples at the high-brilliance undulator beamline UE52-SGM at BESSY II, yielding unprecedented statistics in this spectral range. We demonstrate that the efficient detection of transient absorption changes in transmission mode enables the identification of photoexcited species in dilute samples. With iron(II)-trisbipyridine in aqueous solution as a benchmark system, we present absorption measurements at various edges in the soft X-ray regime. In combination with the wavelength tunability of the laser system, the set-up opens up opportunities to study the photochemistry of many systems at low concentrations, relevant to materials sciences, chemistry, and biology.
We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged observables for geometric Brownian motion, underlying the famed Black–Scholes–Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.
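The time averaged MSD referred to above is commonly defined as δ²(Δ) = 1/(T−Δ) Σ_t [x(t+Δ) − x(t)]², the squared increment at lag Δ averaged along a single trajectory. A minimal sketch for a discrete series is given below; the geometric-Brownian-motion surrogate data and its parameters are illustrative assumptions, not the paper's datasets:

```python
import random

def time_averaged_msd(x, lag):
    """Time averaged mean squared displacement of series x at a given lag:
    the squared increment x[t+lag] - x[t], averaged over the trajectory."""
    n = len(x) - lag
    return sum((x[t + lag] - x[t]) ** 2 for t in range(n)) / n

# For geometric Brownian motion one works with log-prices, whose increments
# are ordinary Brownian, so the TAMSD grows roughly linearly in the lag.
random.seed(1)
log_price = [0.0]
for _ in range(5000):
    log_price.append(log_price[-1] + random.gauss(0.0, 0.01))

tamsd = {lag: time_averaged_msd(log_price, lag) for lag in (1, 2, 4, 8)}
```

For a single finite trajectory the estimate fluctuates around the ensemble value, which is exactly why the paper's ageing and delay time variants, taken over varying fractions of the series, carry additional information.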
In October 2016, following a campaign led by Labour Peer Lord Alfred Dubs, the first child asylum-seekers allowed entry to the UK under new legislation (the 'Dubs amendment') arrived in England. Their arrival was captured by a heavy media presence, and very quickly doubts were raised by right-wing tabloids and politicians about their age. In this article, I explore the arguments underpinning the Dubs campaign and the media coverage of the children's arrival as a starting point for interrogating representational practices around children who seek asylum. I illustrate how the campaign was premised on a universal politics of childhood that inadvertently laid down the terms on which these children would be given protection, namely their innocence. The universality of childhood fuels public sympathy for child asylum-seekers, underlies the 'child first, migrant second' approach advocated by humanitarian organisations, and was a key argument in the 'Dubs amendment'. Yet the campaign highlights how representations of child asylum-seekers rely on codes that operate to identify 'unchildlike' children. As I show, in the context of the criminalisation of undocumented migrants, childhood is no longer a stable category which guarantees protection, but is subject to scrutiny and suspicion and can, ultimately, be disproved.
Thermal cis-trans isomerization of azobenzene studied by path sampling and QM/MM stochastic dynamics
(2017)
Azobenzene-based molecular photoswitches have been applied extensively to biological systems, involving photo-control of peptides, lipids and nucleic acids. The isomerization between the stable trans and the metastable cis state of the azo moieties leads to pronounced changes in shape and other physico-chemical properties of the molecules into which they are incorporated. Fast switching can be induced via transitions to excited electronic states and fine-tuned by a large number of different substituents at the phenyl rings. But a rational design of tailor-made azo groups also requires control of their stability in the dark, i.e., the half-life of the cis isomer. In computational chemistry, thermally activated barrier crossing on the ground-state Born-Oppenheimer surface can be estimated efficiently with Eyring's transition state theory (TST) approach; the growing complexity of the azo moiety and a rather heterogeneous environment, however, may render some of the underlying simplifying assumptions problematic.
In this dissertation, a computational approach is established to remove two restrictions at once: the environment is modeled explicitly by employing a quantum mechanical/molecular mechanics (QM/MM) description, and the isomerization process is tracked by analyzing complete dynamical pathways between stable states. The suitability of this description is validated using two test systems, pure azobenzene and a derivative with electron-donating and electron-withdrawing substituents ("push-pull" azobenzene). Each system is studied in the gas phase, in toluene and in polar DMSO solvent. The azo molecules are treated at the QM level using a very recent, semi-empirical approximation to density functional theory (the density functional tight binding approximation). Reactive pathways are sampled by implementing a version of the so-called transition path sampling method (TPS), without introducing any bias into the system dynamics. By analyzing ensembles of reactive trajectories, the change in isomerization pathway from linear inversion to rotation in going from apolar to polar solvent, predicted by the TST approach, could be verified for the push-pull derivative. At the same time, the mere presence of explicit solvation is seen to broaden the distribution of isomerization pathways, an effect TST cannot account for.
Using likelihood maximization based on the TPS shooting history, an improved reaction coordinate was identified as a sine-cosine combination of the central bend angles and the rotation dihedral, r(ω,α,α′). A computational van't Hoff analysis of the activation entropies was performed to gain further insight into the differential role of the solvent for the unsubstituted and the push-pull azobenzene. In agreement with experiment, it yielded positive activation entropies for azobenzene in DMSO solvent and negative ones for the push-pull derivative, reflecting the induced ordering of solvent around the more dipolar transition state associated with the latter compound. Also, the dynamically corrected rate constants were evaluated using the reactive flux approach, where an increase comparable to the experimental one was observed in the high-polarity medium for both azobenzene derivatives.
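For context, the Eyring TST expression invoked above, k = (k_B T / h) exp(−ΔG‡/(RT)) with ΔG‡ = ΔH‡ − TΔS‡, can be evaluated directly; the activation parameters below are hypothetical placeholders chosen only to show the mechanics, not the thesis results:

```python
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H  = 6.62607015e-34  # Planck constant, J*s
R  = 8.314462618     # molar gas constant, J/(mol*K)

def eyring_rate(dH, dS, T):
    """Eyring TST rate constant k = (kB*T/h) * exp(-(dH - T*dS)/(R*T)).
    dH in J/mol, dS in J/(mol*K), T in K; returns k in 1/s."""
    dG = dH - T * dS
    return (KB * T / H) * math.exp(-dG / (R * T))

# Hypothetical activation parameters (NOT fitted to the thesis data):
# a barrier of ~100 kJ/mol with a small positive activation entropy,
# the sign found for unsubstituted azobenzene in DMSO.
dH = 100e3   # J/mol
dS = 10.0    # J/(mol*K)
k298 = eyring_rate(dH, dS, 298.15)
half_life = math.log(2) / k298  # cis half-life in the dark, seconds
```

The exponential dependence on ΔG‡ is why the half-life in the dark is so sensitive to substituents and solvent: a few kJ/mol shift the rate by orders of magnitude.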
Reviewed work
Theresa Biberauer and George Walkden (eds.): Syntax over Time: Lexical, Morphological, and Information-Structural Interactions. Oxford: Oxford University Press, 2015, 418 pp.
Introduction: Peanut allergy is among the most common food allergies in childhood. Even small amounts of peanut (PN) can trigger severe allergic reactions. PN is the most frequent cause of life-threatening anaphylaxis in children and adolescents. In contrast to other early-childhood food allergies, patients with PN allergy only rarely develop natural tolerance. For several years, research has therefore focused on causal treatment options for PN-allergic patients, in particular oral immunotherapy (OIT). Initial smaller studies on OIT for PN allergy showed promising results. In the present work, a randomized, double-blind, placebo-controlled trial with a larger sample size evaluates the clinical efficacy and safety of this treatment option in children with PN allergy in greater detail. In addition, immunological changes as well as quality of life and treatment burden under OIT are examined.
Methods: Children aged 3-18 years with IgE-mediated PN allergy were enrolled in the study. Before the start of OIT, an oral food challenge with PN was performed. Patients were randomized 1:1 and assigned to the verum or placebo group. Treatment started with 2-120 mg PN or placebo per day, depending on the eliciting dose at the oral challenge. The daily OIT dose was first increased slowly every two weeks over about 14 months up to a maintenance dose of at least 500 mg PN (= 125 mg PN protein, ~1 small peanut) or placebo. The maximum dose reached was then administered daily at home for two months. Subsequently, another oral challenge with PN was performed. The primary endpoint of the study was the number of patients in the verum and placebo groups who tolerated ≥1200 mg PN at the oral challenge after OIT (= “partial desensitization”). Both before and after OIT, a skin prick test with PN was performed and PN-specific IgE and IgG4 were measured in serum. In addition, basophil activation and the release of T-cell-specific cytokines after stimulation with PN were measured in vitro. Questionnaires were used to assess quality of life before and after OIT as well as treatment burden during OIT.
Results: 62 patients were enrolled and randomized. After about 16 months of OIT, 74.2% (23/31) of patients in the verum group but only 16.1% (5/31) of the placebo group showed “partial desensitization” to PN (p<0.001). At the challenge after OIT, patients in the verum group tolerated a median of 4000 mg PN (~8 small peanuts), whereas patients in the placebo group tolerated only 80 mg PN (~1/6 of a small peanut) (p<0.001). Almost half of the verum group (41.9%) tolerated the maximum challenge dose of 18 g PN (“complete desensitization”). The safety profiles of verum and placebo OIT were comparable with respect to objective side effects. However, subjective side effects such as oral itching or abdominal pain occurred significantly more often under verum OIT than under placebo (3.7% of verum OIT doses vs. 0.5% of placebo OIT doses, p<0.001). Three children in the verum group (9.7%) and seven children in the placebo group (22.6%) discontinued the study prematurely, two patients in each group because of side effects. In contrast to placebo, significant immunological changes were observed under verum OIT: a decrease in the PN-specific wheal diameter in the skin prick test, an increase in PN-specific serum IgG4 levels, and reduced PN-specific cytokine secretion, in particular of the Th2-specific cytokines IL-4 and IL-5. By contrast, PN-specific IgE levels and PN-specific basophil activation showed no changes under OIT. Quality of life was significantly improved after OIT in children of the verum group, but not in children of the placebo group. During OIT, the therapy was rated positively by almost all children (82%) and mothers (82%) (= low treatment burden).
Discussion: PN OIT led to desensitization and a markedly increased reaction threshold to PN in the majority of PN-allergic children. The children are thus protected in everyday life against accidental reactions to PN, which clearly improves their quality of life. Under the controlled study conditions, an acceptable safety profile with predominantly mild symptoms was observed. Clinical desensitization was accompanied by changes at the immunological level. However, long-term studies on PN OIT are needed to investigate clinical and immunological efficacy with regard to a possible long-term induction of oral tolerance, as well as the safety of long-term OIT, before this treatment concept can be transferred into practice.
Classical physics and chemistry distinguish three types of bonding: the covalent bond, the ionic bond, and the metallic bond. Molecules, by contrast, are held together by weak interactions; despite the small forces involved, these interactions are less well understood, but no less important. They are of elementary importance in forward-looking fields such as nanotechnology, supramolecular chemistry, and biochemistry.
In order to describe, predict, and understand weak intermolecular interactions, they must first be captured theoretically. This involves various quantum-chemical methods, which in this work are presented, compared, further developed, and finally applied to exemplary problems in chemistry. Building on a hierarchy of methods of different accuracy, they are employed, elaborated, and combined for these goals.
What is computed is the electronic structure, i.e. the distribution and energy of the electrons that essentially hold the atoms together. Since the inaccuracies in the description of the electronic structure depend on the methods used, these effects can be examined in detail, described, and used as a basis for further development, and subsequently tested on various model systems. The speed of the calculations on modern computers is an essential component to be taken into account, since in general the accuracy grows exponentially with the computing time and thus inevitably runs up against the limits of what is feasible.
The most accurate of the methods used is based on coupled-cluster theory, which enables very good predictions. With it, so-called spectroscopic accuracy with deviations of only a few wavenumbers is achieved, as comparisons with experimental data show. One way to approximate highly accurate methods is based on density functional theory: here the “Boese-Martin for Kinetics” (BMK) functional was developed, whose functional form recurs in many density functionals published after 2010.
With the help of the more accurate methods, semi-empirical force fields for describing intermolecular interactions can finally be parameterized for individual systems; these require far less computing time than the methods based on an accurate calculation of the electronic structure of molecules.
For larger systems, different methods can also be combined. In this context, embedding schemes were refined and proposed together with new methodological approaches. They employ both symmetry-adapted perturbation theory and the quantum-chemical embedding of fragments into larger, quantum-chemically treated systems.
New methods derive their value essentially from their application:
This work initially focused on hydrogen bonds. They are among the stronger intermolecular interactions and remain a challenge. Van der Waals interactions, in contrast, are relatively easy to describe with force fields. For this reason, many of the methods in use today perform comparatively poorly for systems dominated by hydrogen bonds.
This is followed by a study of molecular aggregates addressing the effects of intermolecular interactions on the vibrational frequencies of molecules. Here we also go beyond the so-called rigid-rotor/harmonic-oscillator approximation.
A far-reaching application concerns adsorbates, here molecules on ionic and metallic surfaces. They can be treated with methods similar to those for intermolecular interactions and can be described very accurately with dedicated embedding schemes. The results of these theoretical calculations prompted a re-evaluation of the previously known experimental results.
Molecular crystals are an extremely important field of research. They are held together by weak interactions ranging from van der Waals forces to hydrogen bonds. Here, too, newly developed methods were employed that constitute an interesting alternative, at least as accurate as the currently common methods.
The methods developed, as well as their applications, are therefore extremely diverse. The electronic-structure calculations treated here range from so-called post-Hartree-Fock methods through the use of density functional theory to semi-empirical force fields and their combinations. The applications extend from individual molecules in the gas phase through adsorption on surfaces to the molecular solid state.
This article considers Isabella Bird’s representation of medicine in Unbeaten Tracks in Japan (1880) and Journeys in Persia and Kurdistan (1891), the two books in which she engages most extensively with both local (Chinese/Islamic) and Western medical science and practice. I explore how Bird uses medicine to assert her narrative authority and define her travelling persona in opposition to local medical practitioners. I argue that her ambivalence and the unease she frequently expresses concerning medical practice (expressed particularly in her later adoption of the Persian appellation “Feringhi Hakīm” [European physician] to describe her work) serve as a means for her to negotiate the colonial and gendered pressures on Victorian medicine. While in Japan this attitude works to destabilise her hierarchical understanding of science and results in some acknowledgement of traditional Japanese medical practice, in Persia it functions more to disguise her increasing collusion with overt British colonial ambitions.
White adipose tissue (WAT) is actively involved in the regulation of whole-body energy homeostasis via storage/release of lipids and adipokine secretion. Current research links WAT dysfunction to the development of metabolic syndrome (MetS) and type 2 diabetes (T2D). The expansion of WAT during oversupply of nutrients prevents ectopic fat accumulation and requires proper preadipocyte-to-adipocyte differentiation. An assumed link between excess levels of reactive oxygen species (ROS), WAT dysfunction and T2D has been discussed controversially. While oxidative stress conditions have conclusively been detected in WAT of T2D patients and related animal models, clinical trials with antioxidants failed to prevent T2D or to improve glucose homeostasis. Furthermore, animal studies have yielded inconsistent results regarding the role of oxidative stress in the development of diabetes. Here, we discuss the contribution of ROS to the (patho)physiology of adipocyte function and differentiation, with particular emphasis on sources and nutritional modulators of adipocyte ROS and their functions in signaling mechanisms controlling adipogenesis and the functions of mature fat cells. We propose a concept of ROS balance that is required for normal functioning of WAT. We explain how both excessive and diminished levels of ROS, e.g. resulting from over-supplementation with antioxidants, contribute to WAT dysfunction and subsequently insulin resistance.
Meaning-making in the brain has become one of the most intensely discussed topics in cognitive science. Traditional theories of cognition that emphasize abstract symbol manipulation often face a dead end: the symbol grounding problem. The embodiment idea tries to overcome this barrier by assuming that the mind is grounded in sensorimotor experiences. A recent surge in behavioral and brain-imaging studies has therefore focused on the role of the motor cortex in language processing. There is convincing evidence that the processing of concrete, action-related words relies on sensorimotor activation. Abstract concepts, however, still pose a distinct challenge for embodied theories of cognition. Fully embodied abstraction mechanisms have been formulated, but sensorimotor activation alone seems unlikely to close the explanatory gap. In this respect, the idea of integration areas, such as convergence zones or the ‘hub and spoke’ model, not only appears to be the most promising candidate to account for the discrepancies between concrete and abstract concepts but could also help to unite the field of cognitive science again. The current review identifies milestones in cognitive science research and recent achievements that highlight fundamental challenges, key questions and directions for future research.
Human development has far-reaching impacts on the surface of the globe. The transformation of natural land cover occurs in different forms, and urban growth is one of the most prominent transformative processes. We analyze global land cover data and extract cities as defined by maximally connected urban clusters. The analysis of the city size distribution for all cities on the globe confirms Zipf’s law. Moreover, by investigating the percolation properties of the clustering of urban areas we assess the closeness to criticality for various countries. At the critical thresholds, the urban land cover of the countries undergoes a transition from separated clusters to a giant component on the country scale. We study the Zipf exponents as a function of the closeness to percolation and find a systematic dependence, which could be the reason for the deviating exponents reported in the literature. Moreover, we investigate the average size of the clusters as a function of the proximity to percolation and find country-specific behavior. By relating the standard deviation and the average of cluster sizes, analogous to Taylor’s law, we suggest an alternative way to identify the percolation transition. We calculate spatial correlations of the urban land cover and find long-range correlations. Finally, by relating the areas of cities with population figures we address the global aspect of the allometry of cities, finding an exponent δ ≈ 0.85, i.e., large cities have lower densities.
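As a hedged illustration of how a Zipf (power-law tail) exponent like the one discussed above can be estimated from a set of cluster sizes, the following sketch applies the Hill maximum-likelihood estimator to synthetic, Pareto-distributed sizes; the data and function name are hypothetical, not the study’s:

```python
import math
import random

def zipf_exponent(sizes):
    """Hill maximum-likelihood estimator for a power-law tail.

    For sizes s_i >= s_min drawn from P(S > s) ~ s^(-zeta),
    the estimator is zeta_hat = n / sum(ln(s_i / s_min)).
    """
    s_min = min(sizes)
    n = len(sizes)
    return n / sum(math.log(s / s_min) for s in sizes)

random.seed(0)
# Hypothetical "city sizes": Pareto-distributed with tail exponent 1,
# i.e. the classical Zipf regime.
sizes = [random.paretovariate(1.0) for _ in range(50_000)]
zeta = zipf_exponent(sizes)
```

For a pure Zipf regime the tail exponent is 1; a rank-size plot carries the same information, with the rank-size slope being the reciprocal of the tail exponent.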
The rule of law is the cornerstone of the international legal system. This paper shows, through analysis of intergovernmental instruments, statements made by representatives of States, and negotiation records, that the rule of law at the United Nations has become increasingly contested in the past years. More precisely, the argument builds on the process of integrating the notion of the rule of law into the Sustainable Development Goals, adopted in September 2015 in the document Transforming our world: the 2030 Agenda for Sustainable Development. The main sections set out the background of the rule of law debate at the UN and the elements of the rule of law at the goal- and target-levels in the 2030 Agenda – especially in SDG 16 –, and evaluate whether the rule of law in this context may be viewed as a normative and universal foundation of international law. The paper concludes, with reflections drawn from the process leading up to the 2030 Agenda and the final outcome document, that the rule of law – or at least strong and precise formulations of the concept – may be in decline in institutional and normative settings. This can be perceived as symptomatic of a broader crisis of the international legal order.
Reaching the Sustainable Development Goals requires a fundamental socio-economic transformation accompanied by substantial investment in low-carbon infrastructure. Such a sustainability transition represents a non-marginal change, driven by behavioral factors and systemic interactions. However, typical economic models used to assess a sustainability transition focus on marginal changes around a local optimum, which, by construction, lead to negative effects. Thus, these models do not allow evaluating a sustainability transition that might have substantial positive effects. This paper examines which mechanisms need to be included in a standard computable general equilibrium model to overcome these limitations and to give a more comprehensive view of the effects of climate change mitigation. Simulation results show that, given an ambitious greenhouse gas emission constraint and a price on carbon, positive economic effects are possible if (1) technical progress results (partly) endogenously from the model and (2) a policy intervention triggering an increase in investment is introduced. Additionally, if (3) the investment behavior of firms is influenced by their sales expectations, the effects are amplified. The results provide suggestions for policy-makers, because the outcome indicates that investment-oriented climate policies can lead to more desirable outcomes in economic, social and environmental terms.
The role of serum amyloid A and sphingosine-1-phosphate on high-density lipoprotein functionality
(2017)
The high-density lipoprotein (HDL) is one of the most important endogenous cardiovascular protective markers. HDL is an attractive target in the search for new pharmaceutical therapies and in the prevention of cardiovascular events. Some of HDL’s anti-atherogenic properties are related to the signaling molecule sphingosine-1-phosphate (S1P), which plays an important role in vascular homeostasis. In certain patient populations, however, the picture is more complicated: HDL’s protective potency is significantly reduced under pathologic conditions, and HDL may even act as a pro-atherogenic particle. Under uremic conditions especially, the compounds associated with HDL change: S1P is reduced, and acute-phase proteins such as serum amyloid A (SAA) are found to be elevated in HDL. This conversion of HDL during inflammation changes its functional properties. High amounts of SAA are associated with the occurrence of cardiovascular diseases such as atherosclerosis. SAA has potent pro-atherogenic properties, which may affect HDL’s biological functions, including cholesterol efflux capacity and antioxidative and anti-inflammatory activities. This review focuses on two molecules that affect the functionality of HDL: the balance between functional and dysfunctional HDL is disturbed after the loss of the protective sphingolipid S1P and the accumulation of the acute-phase protein SAA. The review also summarizes the biological activities of lipid-free and lipid-bound SAA and its impact on HDL function.
The role of alcohol and victim sexual interest in Spanish students' perceptions of sexual assault
(2017)
Two studies investigated the effects of information related to rape myths on Spanish college students’ perceptions of sexual assault. In Study 1, 92 participants read a vignette about a nonconsensual sexual encounter and rated whether it was a sexual assault and how much the woman was to blame. In the scenario, the man either used physical force or offered alcohol to the woman to overcome her resistance. Rape myth acceptance (RMA) was measured as an individual difference variable. Participants were more convinced that the incident was a sexual assault and blamed the woman less when the man had used force rather than offering her alcohol. In Study 2, 164 college students read a scenario in which the woman rejected a man’s sexual advances after having either accepted or turned down his offer of alcohol. In addition, the woman was either portrayed as being sexually attracted to him or there was no mention of her sexual interest. Participants’ RMA was again included. High RMA participants blamed the victim more than low RMA participants and were less certain that the incident was a sexual assault, especially when the victim had accepted alcohol and was described as being sexually attracted to the man. The findings are discussed in terms of their implications for the prevention and legal prosecution of sexual assault.
Over the last few decades, the methodology for the identification of customary international law (CIL) has been changing. Both elements of CIL – practice and opinio juris – have assumed novel and broader forms, as noted in the Reports of the Special Rapporteur of the International Law Commission (2013, 2014, 2015, 2016). This paper discusses these Reports and the draft conclusions, and reaction by States in the Sixth Committee of the United Nations General Assembly (UNGA), highlighting the areas of consensus and contestation. This ties to the analysis of the main doctrinal positions, with special attention being given to the two elements of CIL, and the role of the UNGA resolutions. The underlying motivation is to assess the real or perceived crisis of CIL, and the author develops the broader argument maintaining that in order to retain unity within international law, the internal limits of CIL must be carefully asserted.
Background: The relative dose response (RDR) test, which quantifies the increase in serum retinol after vitamin A administration, is a qualitative measure of liver vitamin A stores. Particularly in preterm infants, the feasibility of the RDR test involving blood is critically dependent on small sample volumes. Objectives: This study aimed to assess whether the RDR calculated with retinol-binding protein 4 (RBP4) might be a substitute for the classical retinol-based RDR test for assessing vitamin A status in very preterm infants. Methods: This study included preterm infants with a birth weight below 1,500 g (n = 63, median birth weight 985 g, median gestational age 27.4 weeks) who were treated with 5,000 IU retinyl palmitate intramuscularly 3 times a week for 4 weeks. On day 3 (first vitamin A injection) and day 28 of life (last vitamin A injection), the RDR was calculated and compared using serum retinol and RBP4 concentrations. Results: The concentrations of retinol (p < 0.001) and RBP4 (p < 0.01) increased significantly from day 3 to day 28. On day 3, the median (IQR) retinol-RDR was 27% (8.4-42.5) and the median RBP4-RDR was 8.4% (-3.4 to 27.9), compared to 7.5% (-10.6 to 20.8) and -0.61% (-19.7 to 15.3) on day 28. The results for retinol-RDR and RBP4-RDR revealed no significant correlation. The agreement between retinol-RDR and RBP4-RDR was poor (day 3: Cohen's κ = 0.12; day 28: Cohen's κ = 0.18). Conclusion: The RDR test based on circulating RBP4 is unlikely to reflect the hepatic vitamin A status in preterm infants.
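For orientation, the RDR is conventionally computed as the relative increase of the marker over its post-dose value; the following is a minimal sketch assuming that standard formula (the study’s exact sampling times and cut-offs are not restated here), with hypothetical concentration values:

```python
def rdr_percent(baseline, post_dose):
    """Relative dose response in percent.

    Standard definition: 100 * (post_dose - baseline) / post_dose.
    Markedly positive values are conventionally taken to indicate
    low liver vitamin A stores; negative values arise when the
    post-dose concentration falls below baseline, as in some of
    the interquartile ranges reported above.
    """
    return 100.0 * (post_dose - baseline) / post_dose

# Illustrative (hypothetical) serum retinol values in umol/L:
print(round(rdr_percent(0.70, 0.96), 1))  # → 27.1, a clearly positive RDR
print(round(rdr_percent(0.80, 0.74), 1))  # → -8.1, post-dose below baseline
```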
The predictions of two contrasting approaches to the acquisition of transitive relative clauses were tested within the same groups of German-speaking participants aged from 3 to 5 years. The input frequency approach predicts that object relative clauses with inanimate heads (e.g., the pullover that the man is scratching) are comprehended earlier and more accurately than those with an animate head (e.g., the man that the boy is scratching). In contrast, the structural intervention approach predicts that object relative clauses with two full NP arguments mismatching in number (e.g., the man that the boys are scratching) are comprehended earlier and more accurately than those with number-matching NPs (e.g., the man that the boy is scratching). These approaches were tested in two steps. First, we ran a corpus analysis to ensure that object relative clauses with number-mismatching NPs are not more frequent than object relative clauses with number-matching NPs in child-directed speech. Next, the comprehension of these structures was tested experimentally in 3-, 4-, and 5-year-olds by means of a color naming task. By comparing the predictions of the two approaches within the same participant groups, we were able to show that the effects predicted by the input frequency and the structural intervention approaches co-exist and that both influence children’s performance on transitive relative clauses, in a manner that is modulated by age. The results reveal a sensitivity to animacy mismatch already in 3-year-olds and show that animacy is initially deployed more reliably than number to interpret relative clauses correctly. In all age groups, the animacy mismatch appears to explain children’s performance, showing that the comprehension of frequent object relative clauses is enhanced compared to the other conditions.
From age 4, and especially in 5-year-olds, the number mismatch also supported comprehension, a facilitation that is unlikely to be driven by input frequency. Once children fine-tune their sensitivity to verb agreement information around the age of four, they are also able to deploy number marking to overcome the intervention effects. This study highlights the importance of experimentally testing contrasting theoretical approaches in order to characterize the multifaceted, developmental nature of language acquisition.
A particular form of social pain is invalidation. This study therefore (a) investigates whether patients with chronic low back pain experience invalidation, (b) examines whether it influences their pain, and (c) explores whether various social sources (e.g. partner and work) influence physical pain differentially. A total of 92 patients completed questionnaires; for the analysis, Pearson’s correlation coefficients and hierarchical linear regression analyses were computed. They indicated a significant association between discounting and disability due to pain (coefficient = .29, p < .05). In particular, discounting by the partner was linked to higher disability (coefficient = .28, p < .05).
The classical Navier-Stokes equations of hydrodynamics are usually written in terms of vector analysis. More promising is the formulation of these equations in the language of differential forms of degree one. In this way the study of the Navier-Stokes equations includes the analysis of the de Rham complex. In particular, the Hodge theory for the de Rham complex enables one to eliminate the pressure from the equations. The Navier-Stokes equations constitute a parabolic system with a nonlinear term which makes sense only for one-forms. A simpler model of the dynamics of an incompressible viscous fluid is given by Burgers’ equation. This work is aimed at the study of the invariant structure of the Navier-Stokes equations, which is closely related to the algebraic structure of the de Rham complex at step 1. To this end we introduce Navier-Stokes equations related to any elliptic quasicomplex of first-order differential operators. These equations are quite similar to the classical Navier-Stokes equations, including generalised velocity and pressure vectors. Elimination of the pressure from the generalised Navier-Stokes equations gives a good motivation for the study of the Neumann problem after Spencer for elliptic quasicomplexes. Such a study is also included in the work. We start this work with a discussion of the Lamé equations within the context of elliptic quasicomplexes on compact manifolds with boundary. The non-stationary Lamé equations form a hyperbolic system. However, the study of the first mixed problem for them gives good experience for attacking the linearised Navier-Stokes equations. On this basis we describe a class of non-linear perturbations of the Navier-Stokes equations for which the solvability results still hold.
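The pressure elimination referred to above can be sketched schematically via the Hodge decomposition: the exact form dp is L²-orthogonal to divergence-free one-forms, so applying the Leray-Hodge projection P onto ker d* removes it. This is the standard de Rham picture, not the thesis’s generalised quasicomplex setting:

```latex
% Incompressible Navier-Stokes for a velocity one-form u:
\[
  \partial_t u + \nabla_u u - \mu\,\Delta u + dp = f,
  \qquad d^{*}u = 0 .
\]
% Applying the projection P onto \ker d^{*} annihilates the exact
% form dp, eliminating the pressure from the evolution equation:
\[
  \partial_t u + P\bigl(\nabla_u u\bigr) - \mu\,\Delta u = Pf .
\]
```

Here Δ denotes the Hodge Laplacian; on suitable domains P commutes with Δ for divergence-free u, which is why the viscous term survives unprojected in this schematic form.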
The Kenya rift revisited
(2017)
We present three-dimensional (3-D) models that describe the present-day thermal and rheological state of the lithosphere of the greater Kenya rift region aiming at a better understanding of the rift evolution, with a particular focus on plume-lithosphere interactions. The key methodology applied is the 3-D integration of diverse geological and geophysical observations using gravity modelling. Accordingly, the resulting lithospheric-scale 3-D density model is consistent with (i) reviewed descriptions of lithological variations in the sedimentary and volcanic cover, (ii) known trends in crust and mantle seismic velocities as revealed by seismic and seismological data and (iii) the observed gravity field. This data-based model is the first to image a 3-D density configuration of the crystalline crust for the entire region of Kenya and northern Tanzania. An upper and a basal crustal layer are differentiated, each composed of several domains of different average densities. We interpret these domains to trace back to the Precambrian terrane amalgamation associated with the East African Orogeny and to magmatic processes during Mesozoic and Cenozoic rifting phases. In combination with seismic velocities, the densities of these crustal domains indicate compositional differences. The derived lithological trends have been used to parameterise steady-state thermal and rheological models. These models indicate that crustal and mantle temperatures decrease from the Kenya rift in the west to eastern Kenya, while the integrated strength of the lithosphere increases. Thereby, the detailed strength configuration appears strongly controlled by the complex inherited crustal structure, which may have been decisive for the onset, localisation and propagation of rifting.
The paper looks at community interests in international law from the perspective of the International Law Commission. As the topics of the Commission are diverse, the outcome of its work is often seen as providing a sense of direction regarding general aspects of international law. After defining what he understands by “community interests”, the author looks at both secondary and primary rules of international law, as they have been articulated by the Commission, as well as their relevance for the recognition and implementation of community interests. The picture which emerges only partly fits the widespread narrative of “from self-interest to community interest”. Whereas the Commission has recognized, or developed, certain primary rules which more fully articulate community interests, it has been reluctant to reformulate secondary rules of international law, with the exception of jus cogens. The Commission has more recently rather insisted that the traditional State-consent-oriented secondary rules concerning the formation of customary international law and regarding the interpretation of treaties continue to be valid in the face of other actors and forms of action which push towards the recognition of more and thicker community interests.
The El Niño-Southern Oscillation (ENSO) is the main driver of the interannual variability in eastern African rainfall, with a significant impact on vegetation and agriculture and dire consequences for food and social security. In this study, we identify and quantify the ENSO contribution to eastern African rainfall variability in order to forecast the future response of eastern African vegetation to rainfall variability under a predicted intensified ENSO. To isolate the vegetation variability due to ENSO, we removed the ENSO signal from the climate data using empirical orthogonal teleconnection (EOT) analysis. Then, we simulated the ecosystem carbon and water fluxes under the historical climate without the components related to ENSO teleconnections. We found ENSO-driven patterns in the vegetation response and confirmed that EOT analysis can successfully recover the coupled tropical Pacific sea surface temperature-eastern African rainfall teleconnection from observed datasets. We further simulated the eastern African vegetation response under future climate change as projected by climate models, and under future climate change combined with a predicted increase in ENSO intensity. Our EOT analysis highlights that climate simulations still do not capture rainfall variability due to ENSO well, and, as we show here, the future vegetation would differ from what is simulated under these climate model outputs lacking an accurate ENSO contribution. We simulated considerable differences in eastern African vegetation growth under the influence of an intensified ENSO regime, which will bring further environmental stress to a region with a reduced capacity to adapt to the effects of global climate change, with consequences for food security.
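The idea of removing an ENSO-related component from a rainfall series can be illustrated, in a much simpler form than the study’s EOT analysis, by subtracting the least-squares fit of rainfall on an ENSO index; the data here are synthetic and the function name is hypothetical:

```python
import random

def remove_linear_signal(series, index):
    """Subtract the least-squares linear fit of `series` on `index`.

    A minimal stand-in for the idea behind EOT-based signal removal:
    the returned residual series is uncorrelated with `index`.
    """
    n = len(series)
    mx = sum(index) / n
    my = sum(series) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(index, series))
    var = sum((x - mx) ** 2 for x in index)
    slope = cov / var
    return [y - slope * (x - mx) for x, y in zip(index, series)]

random.seed(1)
# Hypothetical monthly ENSO index and a rainfall series containing an
# ENSO-driven component plus noise.
enso = [random.gauss(0, 1) for _ in range(240)]
rain = [50 + 8 * e + random.gauss(0, 5) for e in enso]
residual = remove_linear_signal(rain, enso)
```

By construction the residual is orthogonal to the index, so any remaining variability can be attributed to non-ENSO drivers under this simple linear model.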
The right to privacy in the digital age generates new challenges for international jurisdiction, which this article addresses. It first defines the term privacy in general and presents the international legal framework. The revelations of whistleblower Snowden initiated a far-reaching political discourse, and the article gives insights into its further development. In 2015, the Human Rights Council for the first time appointed a Special Rapporteur on the right to privacy. However, the discourse is not only taking place on a political level; civil society organizations also advocate more stringent regulations and the prosecution of violations of the right to privacy. Moreover, the importance of the technology sector becomes clear. Companies like Microsoft are increasingly taking responsibility for protecting digital media against unjustified data misuse, surveillance, collection and storage. But whereas the IT sector develops very quickly, legislative processes do so rather slowly. Lastly, the individual is also held to account: protecting oneself against data misuse is to a great extent a matter of self-responsibility. Information on protection must therefore be clear and accessible to everyone.
West German anticommunism and the SED’s Westarbeit were to some extent interrelated. From the beginning, each German state had attempted to stabilise its own social system while trying to discredit its political opponent. The claim to sole representation and the refusal to acknowledge each other delineated governmental action on both sides. Anticommunism in West Germany re-developed under the conditions of the Cold War, which allowed it to become virtually the reason of state and to serve as a tool for the exclusion of KPD supporters. In its turn, the SED branded the West German state as ‘revanchist’ and instrumentalised its anticommunism to persecute and eliminate opponents within the GDR. Both phenomena had an integrative and an exclusionary element.
In littoral zones of lakes, multiple processes determine lake ecology and water quality. Lacustrine groundwater discharge (LGD), most frequently taking place in littoral zones, can transport or mobilize nutrients from the sediments and thus contribute significantly to lake eutrophication. Furthermore, lake littoral zones are the habitat of benthic primary producers, namely submerged macrophytes and periphyton, which play a key role in lake food webs and influence lake water quality. Groundwater-mediated nutrient influx can potentially affect the asymmetric competition between submerged macrophytes and periphyton for light and nutrients. While rooted macrophytes have superior access to sediment nutrients, periphyton can negatively affect macrophytes by shading. LGD may thus facilitate periphyton production at the expense of macrophyte production, although studies on this hypothesized effect are missing.
The research presented in this thesis is aimed at determining how LGD influences periphyton, macrophytes, and the interactions between these benthic producers. Laboratory experiments were combined with field experiments and measurements in an oligo-mesotrophic hard water lake.
In the first study, a general concept was developed based on a literature review of the existing knowledge regarding the potential effects of LGD on nutrients and inorganic and organic carbon loads to lakes, and the effect of these loads on periphyton and macrophytes. The second study includes a field survey and experiment examining the effects of LGD on periphyton in an oligotrophic, stratified hard water lake (Lake Stechlin). This study shows that LGD, by mobilizing phosphorus from the sediments, significantly promotes epiphyton growth, especially at the end of the summer season when epilimnetic phosphorus concentrations are low. The third study focuses on the potential effects of LGD on submerged macrophytes in Lake Stechlin. This study revealed that LGD may have contributed to an observed change in macrophyte community composition and abundance in the shallow littoral areas of the lake. Finally, a laboratory experiment was conducted which mimicked the conditions of a seepage lake. Groundwater circulation was shown to mobilize nutrients from the sediments, which significantly promoted periphyton growth. Macrophyte growth was negatively affected at high periphyton biomasses, confirming the initial hypothesis.
More generally, this thesis shows that groundwater flowing into nutrient-limited lakes may import or mobilize nutrients. These nutrients first promote periphyton, and subsequently provoke radical changes in macrophyte populations before finally having a possible influence on the lake’s trophic state. Hence, the eutrophying effect of groundwater is delayed and, at moderate nutrient loading rates, partly dampened by benthic primary producers. The present research emphasizes the importance and complexity of littoral processes, and the need to further investigate and monitor the benthic environment. As present and future global changes can significantly affect LGD, the understanding of these complex interactions is required for the sustainable management of lake water quality.
The Cauchy problem for the linearised Einstein equation and the Goursat problem for wave equations
(2017)
In this thesis, we study two initial value problems arising in general relativity. The first is the Cauchy problem for the linearised Einstein equation on general globally hyperbolic spacetimes, with smooth and distributional initial data. We extend well-known results by showing that given a solution to the linearised constraint equations of arbitrary real Sobolev regularity, there is a globally defined solution, which is unique up to addition of gauge solutions. Two solutions are considered equivalent if they differ by a gauge solution. Our main result is that the equivalence class of solutions depends continuously on the corresponding equivalence class of initial data. We also solve the linearised constraint equations in certain cases and show that there exist arbitrarily irregular (non-gauge) solutions to the linearised Einstein equation on Minkowski spacetime and Kasner spacetime.
In the second part, we study the Goursat problem (the characteristic Cauchy problem) for wave equations. We specify initial data on a smooth compact Cauchy horizon, which is a lightlike hypersurface. This problem has not been studied much, since it is an initial value problem on a non-globally hyperbolic spacetime. Our main result is that given a smooth function on a non-empty, smooth, compact, totally geodesic and non-degenerate Cauchy horizon and a so-called admissible linear wave equation, there exists a unique solution that is defined on the globally hyperbolic region and restricts to the given function on the Cauchy horizon. Moreover, the solution depends continuously on the initial data. A linear wave equation is called admissible if the first order part satisfies a certain condition on the Cauchy horizon, for example if it vanishes. Interestingly, both existence and uniqueness of solutions fail for general wave equations, as examples show. If we drop the non-degeneracy assumption, examples show that existence of solutions fails even for the simplest wave equation. The proof requires precise energy estimates for the wave equation close to the Cauchy horizon. In case the Ricci curvature vanishes on the Cauchy horizon, we show that the energy estimates are strong enough to prove local existence and uniqueness for a class of non-linear wave equations. Our results apply in particular to the Taub-NUT spacetime and the Misner spacetime. It has recently been shown that compact Cauchy horizons in spacetimes satisfying the null energy condition are necessarily smooth and totally geodesic. Our results therefore apply if the spacetime satisfies the null energy condition and the Cauchy horizon is compact and non-degenerate.
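For orientation, the objects involved in the first part can be written in their standard textbook form on Minkowski spacetime, with metric perturbation g = η + h (this is the conventional presentation, not necessarily the conventions used in the thesis):

```latex
% Trace-reversed perturbation:
\bar h_{\mu\nu} = h_{\mu\nu} - \tfrac{1}{2}\,\eta_{\mu\nu}\,\eta^{\alpha\beta}h_{\alpha\beta}
% In Lorenz gauge, \partial^\mu \bar h_{\mu\nu} = 0, the vacuum linearised
% Einstein equation reduces to a wave equation:
\Box\,\bar h_{\mu\nu} = 0
% Gauge solutions: for any vector field \xi,
h_{\mu\nu} \;\mapsto\; h_{\mu\nu} + \partial_\mu \xi_\nu + \partial_\nu \xi_\mu
```

The "unique up to addition of gauge solutions" statement in the abstract refers to this last freedom: two perturbations related by such a transformation describe the same physical geometry.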
The Bruce effect revisited
(2017)
Pregnancy termination after encountering a strange male, the Bruce effect, is regarded as a counterstrategy of female mammals towards anticipated infanticide. While confirmed in caged rodent pairs, no verification of the Bruce effect existed from experimental field populations of small rodents. We suggest that the effect may be adaptive for breeding rodent females only under specific conditions related to populations with cyclically fluctuating densities. We investigated the occurrence of a delay in birth date after experimental turnover of the breeding male under different population compositions in bank voles (Myodes glareolus) in large outdoor enclosures: one-male–multiple-females (n = 6 populations/18 females), multiple-males–multiple-females (n = 15/45), and single-male–single-female (MF treatment, n = 74/74). Most delays were observed in the MF treatment after turnover. In parallel, we showed in a laboratory experiment (n = 205 females) that overwintered and primiparous females, the most abundant cohort during population lows in the increase phase of cyclic rodent populations, were more likely to delay births after turnover of the male than year-born and multiparous females. Taken together, our results suggest that the Bruce effect may be an adaptive breeding strategy for rodent females in cyclic populations specifically at low densities in the increase phase, when isolated, overwintered animals associate in MF pairs. During population lows, infanticide risk and inbreeding risk may then be higher than during population highs, while the fitness value of a litter in an increasing population is also higher. Therefore, the Bruce effect may be adaptive for females during annual population lows in the increase phases, even at the cost of delaying reproduction.
Recent research has called into question the current practice to estimate individual usual food intake in large-scale studies. In such studies, usual food intake has been defined as diet over the past year. The aim of this review is to summarise the concepts of dietary assessment methods providing food intake data over this time period. A conceptualised framework is given to help researchers to understand the more recent developments to improve dietary assessment in large-scale prospective studies, and also to help to spot the gaps that need to be addressed in future methodological research. The conceptual framework illustrates the current options for the assessment of an individual’s food consumption over 1 year. Ideally, a person’s food intake on each day of this year should be assessed. Due to participants’ burden, and organisational and financial constraints, however, the options are limited to directly requesting the long-term average (e.g. food frequency questionnaires), or selecting a few days with detailed food consumption measurements (e.g. 24-hour dietary recalls) or using snapshot techniques (e.g. barcode scanning of purchases). It seems necessary and important to further evaluate the performance of statistical modelling of the individual usual food intake from all available sources. Future dietary assessment might profit from the growing prominence of internet and telecommunication technologies to further enhance the available data on food consumption for each study participant. Research is crucial to investigate the performance of innovative assessment tools. However, the self-reported nature of the data itself will always lead to bias.
This research was designed to adapt and investigate the psychometric properties of the Short Dark Triad measure (Jones and Paulhus Assessment, 21(1), 28-41, 2014) in a German sample within four studies (total N = 1463); the measure evaluates three personality dimensions: narcissism, psychopathy, and Machiavellianism. The structure of the instrument was analysed by means of confirmatory factor analysis, which indicated that the three-factor structure had the best fit to the data. Next, the Short Dark Triad measure was evaluated in terms of construct, convergent and discriminant validity, internal consistency (≥ .72), and test-retest reliability over a 4-week period (≥ .73). Concurrent validity of the SD3 was supported by relating its subscales to measures of the Big Five concept, aggression, and self-esteem. We concluded that the Short Dark Triad instrument showed high cross-language replicability. The use of this short inventory in the investigation of the Dark Triad personality model in the German language context is suggested.
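Internal consistency figures such as the ≥ .72 reported above are conventionally Cronbach's alpha. A minimal sketch of that computation (the item scores below are invented for illustration, not data from the study):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    # Sum of the individual item variances
    item_vars = items.var(axis=0, ddof=1).sum()
    # Variance of the respondents' total scores
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 5-point responses of 6 people to a 3-item subscale
scores = [[4, 5, 4], [2, 2, 3], [5, 5, 5], [3, 4, 3], [1, 2, 1], [4, 4, 5]]
alpha = cronbach_alpha(scores)
print(0.7 < alpha <= 1.0)   # prints True
```

Alpha rises when the items co-vary strongly relative to their individual variances, which is why it is read as a measure of how consistently the items of a subscale tap the same construct.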
Information on the contemporary in-situ stress state of the earth’s crust is essential for geotechnical applications and physics-based seismic hazard assessment. Yet, stress data records for a data point are incomplete and their availability is usually not dense enough to allow conclusive statements. This demands a thorough examination of the in-situ stress field, which is achieved by 3D geomechanical-numerical models. However, the models’ spatial resolution is limited and the resulting local stress state is subject to large uncertainties that confine the significance of the findings. In addition, temporal variations of the in-situ stress field are naturally or anthropogenically induced. In my thesis I address these challenges in three manuscripts that investigate (1) the current crustal stress field orientation, (2) the 3D geomechanical-numerical modelling of the in-situ stress state, and (3) the phenomenon of injection-induced temporal stress tensor rotations. In the first manuscript I present the first comprehensive stress data compilation of Iceland with 495 data records. To this end, I analysed image logs from 57 boreholes in Iceland for indicators of the orientation of the maximum horizontal stress component. The study is the first stress survey from different kinds of stress indicators in a geologically very young and tectonically active area of an onshore spreading ridge. It reveals a distinct stress field with a depth-independent stress orientation even very close to the spreading centre. In the second manuscript I present a calibrated 3D geomechanical-numerical modelling approach to the in-situ stress state of the Bavarian Molasse Basin that investigates the regional (70x70x10km³) and local (10x10x10km³) stress state. To link these two models I develop a multi-stage modelling approach that provides a reliable and efficient method to derive initial and boundary conditions for the smaller scale model from the larger scale model.
Furthermore, I quantify the uncertainties in the model results which are inherent to geomechanical-numerical modelling in general and to the multi-stage approach in particular. I show that the significance of the model results is mainly reduced by the uncertainties in the material properties and the low number of available stress magnitude data records for calibration. In the third manuscript I investigate the phenomenon of injection-induced temporal stress tensor rotation and its controlling factors. I conduct a sensitivity study with a 3D generic thermo-hydro-mechanical model. I show that the key control factors for the stress tensor rotation are the permeability as the decisive factor, the injection rate, and the initial differential stress. In particular for enhanced geothermal systems with a low permeability, large rotations of the stress tensor are indicated. According to these findings, the estimation of the initial differential stress in a reservoir is possible provided the permeability is known and the angle of stress rotation is observed. I propose that stress tensor rotations can be a key factor for the potential of induced seismicity on pre-existing faults, because the reorientation of the stress field changes the optimal orientation of faults.
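The notion of a stress tensor rotation can be illustrated generically: the azimuth of the maximum horizontal stress is the direction of the eigenvector belonging to the largest eigenvalue of the horizontal stress tensor, and the rotation is the difference of azimuths before and after the perturbation. A minimal sketch with invented values (not data or code from the thesis):

```python
import numpy as np

def shmax_azimuth(stress):
    """Azimuth (degrees from the x-axis, folded into [0, 180)) of the
    maximum principal stress of a symmetric 2x2 horizontal stress tensor
    (compression taken as positive)."""
    vals, vecs = np.linalg.eigh(stress)     # eigenvalues in ascending order
    v = vecs[:, np.argmax(vals)]            # eigenvector of the largest one
    return np.degrees(np.arctan2(v[1], v[0])) % 180.0

# Illustrative tensors (MPa): an initial state and a perturbed state after
# injection (both made up for the example)
initial = np.array([[30.0, 0.0], [0.0, 20.0]])
perturbed = np.array([[28.0, 3.0], [3.0, 22.0]])
rotation = shmax_azimuth(perturbed) - shmax_azimuth(initial)
print(round(rotation, 1))   # prints 22.5
```

The example also shows why low differential stress favours large rotations: the closer the two eigenvalues are, the more a small induced shear stress tilts the principal directions.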
Foreign direct investment (FDI) plays an important role in the industrialisation process of developing and emerging countries. It can increase the industrial output of the target country and act as a carrier of technological knowledge. New knowledge can flow to the recipient countries of FDI through spillover effects and technology transfers from foreign subsidiaries. This thesis addresses the questions of which mechanisms trigger spillover effects and technology transfers, and how developing and emerging countries can use this inflow of knowledge to accelerate their industrialisation process. To this end, a concept for promoting spillover effects is developed. Furthermore, a theoretical model is developed in which the technology transfer of foreign export platforms is examined, for the first time, as a function of the share of intermediate inputs sourced in the host country. The results of the theoretical model and of the developed concept are illustrated in case studies of Ireland and Malaysia.
Technological change confronts organisations with the challenge of putting innovations to productive use as quickly as possible in order to gain a competitive advantage. The success of a technology introduction depends strongly on creating acceptance among employees. However, existing approaches such as diffusion theory (Rogers, 2003) or the Technology Acceptance Model (Davis, 1989; Venkatesh and Davis, 1996; Venkatesh and Davis, 2000; Venkatesh, Morris et al., 2003) address the organisational context only marginally. Their models target the adoption of a technology by free choice and in a market context. Furthermore, they do not examine the resistance to innovations that can form under mandatory adoption. They are therefore of only limited use for studying technology introduction and acceptance-formation processes in organisations.
The aim of this thesis is therefore to work out the specific influence of the organisational context on acceptance and usage behaviour. More concretely, the research question to be answered is what influence different organisation types have on the acceptance and usage dynamics within organisations. To this end, existing models from acceptance research are extended and synthesised with organisation-specific attributes. The resulting model captures the dynamic development within the organisation and thus makes it possible to observe the change. The functioning of the developed model is demonstrated in a simulation experiment, and the effect of different organisational forms is illustrated.
The model therefore combines two perspectives. The personal perspective understands acceptance as a cognitive-psychological process at the individual level, based on the calculations and decisions of individual persons. Central to this are the contributions of diffusion theory (Rogers, 2003) and the Technology Acceptance Model in its various extensions and modifications (Davis, 1989; Venkatesh and Davis, 1996; Venkatesh and Davis, 2000; Venkatesh, Morris et al., 2003). Individual factors from various fit theories (Goodhue and Thompson, 1995; Floyd, 1986; Liu, Lee and Chen, 2011; Parkes, 2013) are used to enrich these models. In addition to the development of a positive, supportive attitude, however, rejection of and open opposition to the innovation must also be taken into account (Patsiotis, Hughes and Webber, 2012).
The organisational perspective, by contrast, sees acceptance decisions as embedded in the social context of the organisation. Mutual influence is based on observing one's surroundings and internalising social pressure. In organisations, this is contrasted with intended influence in the form of steering. Both processes shape the acceptance and usage behaviour of employees. Starting from a systems-theoretical concept of the organisation, different steering media (Luhmann, 1997; Fischer, 2009) are presented. These can be deployed intentionally by steering actors (change agents, management) to shape the acceptance and usage process through interventions.
The effect of these media differs across organisation types. To analyse different organisation types, the configurations of Mintzberg (1979) are used. These are characterised by different coordination mechanisms, which in turn rest on the use of steering media.
The functioning and analytical possibilities of the developed model are demonstrated in a simulation experiment using the simulation platform AnyLogic. The range of validity is tested by means of a sensitivity analysis.
Specific patterns of usage and acceptance development can be demonstrated in the simulation. Acceptance is characterised by an initial decline followed by damped growth. Usage, by contrast, is quickly enforced in the organisation and then remains at a stable level. Different effects were observed for the organisation types. The bureaucratic form of steering is suitable for increasing usage but fails to raise acceptance. Organisations that rely more on mutual adjustment for coordination increase acceptance but not usage. Furthermore, the development of acceptance in this organisation type is highly uncertain and shows a wide range of fluctuation.
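The damped-growth pattern after an initial shock can be reproduced qualitatively with a tiny agent-based sketch in plain Python (an illustrative stand-in, not the thesis's AnyLogic model; all parameters are made up): each agent's acceptance relaxes toward a mix of its own attitude and the organisation's mean acceptance, i.e. internalised social pressure.

```python
import random

def simulate(n_agents=100, steps=50, social_weight=0.3, shock=-0.4, seed=1):
    """Toy acceptance dynamics: each agent's acceptance relaxes toward a
    mix of its own attitude and the organisation's mean acceptance."""
    rng = random.Random(seed)
    attitude = [rng.uniform(0.4, 0.9) for _ in range(n_agents)]
    # Mandatory introduction triggers an initial negative reaction (shock)
    acceptance = [max(0.0, a + shock) for a in attitude]
    history = [sum(acceptance) / n_agents]
    for _ in range(steps):
        mean = sum(acceptance) / n_agents      # observed social norm
        acceptance = [
            acc + 0.2 * ((1 - social_weight) * att + social_weight * mean - acc)
            for acc, att in zip(acceptance, attitude)
        ]
        history.append(sum(acceptance) / n_agents)
    return history

history = simulate()
# Mean acceptance recovers from the post-shock level in a damped fashion
print(history[0] < history[-1])   # prints True
```

Varying `social_weight` is a crude analogue of the organisation types discussed above: a high weight lets the social norm dominate individual attitudes, a low weight leaves acceptance driven by each employee's own calculus.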