The effect of cellulose-based polyelectrolytes on biomimetic calcium phosphate mineralization is described. Three cellulose derivatives (a polyanion, a polycation, and a polyzwitterion) were used as additives. Scanning electron microscopy, X-ray diffraction, and IR and Raman spectroscopy show that, depending on the composition of the starting solution, hydroxyapatite or brushite precipitates form. Infrared and Raman spectroscopy also show that significant amounts of nitrate ions are incorporated in the precipitates. Energy-dispersive X-ray spectroscopy shows that the Ca/P ratio varies throughout the samples and resembles that of other bioinspired calcium phosphate hybrid materials. Elemental analysis shows that the carbon (i.e., polymer) content reaches 10% in some samples, clearly illustrating the formation of a true hybrid material. Overall, the data indicate that a higher polymer concentration in the reaction mixture favors the formation of polymer-enriched materials, while lower polymer concentrations or high precursor concentrations favor the formation of products that are closely related to the control samples precipitated in the absence of polymer. The results thus highlight the potential of (water-soluble) cellulose derivatives for the synthesis and design of bioinspired and bio-based hybrid materials.
The functioning of the surface water-groundwater interface as a buffer, filter and reactive zone is important for the water quality, ecological health and resilience of streams and riparian ecosystems. Solute and heat exchange across this interface is driven by the advection of water. Characterizing the flow conditions in the streambed is challenging, as flow patterns are often complex and multidimensional, driven by surface hydraulic gradients and groundwater discharge. This thesis presents the results of an integrated set of studies, ranging from the acquisition of field data and the development of analytical and numerical approaches for analysing vertical temperature profiles to detailed, fully-integrated 3D numerical modelling of water and heat flux at the reach scale. All techniques were applied in order to characterize the exchange flux between stream and groundwater, hyporheic flow paths and temperature patterns.
The study was conducted at a reach-scale section of the lowland Selke River, characterized by distinctive pool-riffle sequences, fluvial islands and gravel bars. Continuous time series of hydraulic heads and temperatures were measured at different depths in the river bank, the hyporheic zone and within the river. The analyses of the measured diurnal temperature variation in riverbed sediments provided detailed information about the exchange flux between river and groundwater. Beyond the one-dimensional vertical water flow in the riverbed sediment, hyporheic and parafluvial flow patterns were identified. Subsurface flow direction and magnitude around the fluvial islands and gravel bars at the study site strongly depended on the position around the geomorphological structures and on the river stage. Horizontal water flux in the streambed substantially impacted temperature patterns in the streambed. At locations with substantial horizontal fluxes, the penetration depth of daily temperature fluctuations was reduced in comparison to purely vertical exchange conditions.
The calibrated and validated 3D fully-integrated model of reach-scale water and heat fluxes across the river-groundwater interface was able to accurately represent the real system. The magnitude and variations of the simulated temperatures matched the observed ones, with an average mean absolute error of 0.7 °C and an average Nash-Sutcliffe Efficiency of 0.87. The simulation results showed that the water and heat exchange at the surface water-groundwater interface is highly variable in space and time, with zones of daily temperature oscillations penetrating deep into the sediment and spots of constant daily temperature following the average groundwater temperature. The average hyporheic flow path temperature was found to correlate strongly with the flow path residence time (flow path length) and the temperature gradient between river and groundwater. Despite the complexity of these processes, the simulation results allowed the derivation of a general empirical relationship between hyporheic residence times and temperature patterns. The presented results improve our understanding of the complex spatial and temporal dynamics of water flux and thermal processes within the shallow streambed. Understanding these links provides a general basis from which to assess hyporheic temperature conditions in river reaches.
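The two model-fit metrics named above, mean absolute error (MAE) and Nash-Sutcliffe Efficiency (NSE), follow standard definitions and can be sketched in a few lines. The temperature series below are hypothetical placeholders, not data from the study.

```python
# Mean absolute error (MAE) and Nash-Sutcliffe Efficiency (NSE).
# The temperature series are hypothetical, for illustration only.

def mean_absolute_error(obs, sim):
    """Average absolute deviation between observed and simulated values."""
    return sum(abs(o - s) for o, s in zip(obs, sim)) / len(obs)

def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs-sim)^2) / sum((obs-mean(obs))^2); 1 is a perfect fit."""
    mean_obs = sum(obs) / len(obs)
    sq_err = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var_obs = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sq_err / var_obs

observed = [10.2, 11.5, 12.8, 12.1, 10.9]   # hypothetical streambed temperatures (degC)
simulated = [10.0, 11.9, 12.5, 12.4, 10.6]
print(mean_absolute_error(observed, simulated))
print(nash_sutcliffe(observed, simulated))
```

An NSE of 0 means the model predicts no better than the mean of the observations, which makes the reported 0.87 a strong fit.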
VS30, slope, H800 and f0
(2017)
The aim of this paper is to investigate the ability of various site-condition proxies (SCPs) to reduce ground-motion aleatory variability and to evaluate how SCPs capture nonlinear site effects. The SCPs used here are the time-averaged shear-wave velocity in the top 30 m (VS30), the topographical slope (slope), the fundamental resonance frequency (f0) and the depth beyond which Vs exceeds 800 m/s (H800). We first considered the performance of each SCP taken alone and then the combined performance of the six SCP pairs [VS30–f0], [VS30–H800], [f0–slope], [H800–slope], [VS30–slope] and [f0–H800]. This analysis is performed using a neural network approach including a random effect, applied to a KiK-net subset, to derive ground-motion prediction equations relating various ground-motion parameters such as peak ground acceleration, peak ground velocity and pseudo-spectral acceleration PSA(T) to Mw, RJB, focal depth and the SCPs. While the choice of SCP is found to have almost no impact on the median ground-motion prediction, it does impact the level of aleatory uncertainty. VS30 is found to perform best of the single proxies at short periods (T < 0.6 s), while f0 and H800 perform better at longer periods; considering SCP pairs leads to significant improvements, with particular emphasis on the [VS30–H800] and [f0–slope] pairs. The results also indicate significant nonlinearity in the site terms for soft sites and that the most relevant loading parameter for characterising nonlinear site response is the “stiff” spectral ordinate at the considered period.
Background
The kidneys are essential for the metabolism of vitamin A (retinol) and its transport proteins retinol-binding protein 4 (RBP4) and transthyretin. Little is known about changes in their serum concentrations after living donor kidney transplantation (LDKT) as a consequence of unilateral nephrectomy, although an association of these parameters with the risk of cardiovascular diseases and insulin resistance has been suggested. Therefore, we analyzed the concentrations of retinol, RBP4, apoRBP4 and transthyretin in the serum of 20 living-kidney donors and the respective recipients at baseline as well as 6 weeks and 6 months after LDKT.
Results
As a consequence of LDKT, the kidney function of recipients was improved while the kidney function of donors was moderately reduced within 6 weeks after LDKT. With regard to vitamin A metabolism, the recipients revealed higher levels of retinol, RBP4, transthyretin and apoRBP4 before LDKT in comparison to donors. After LDKT, the levels of all four parameters decreased in serum of the recipients, while retinol, RBP4 as well as apoRBP4 serum levels of donors increased and remained increased during the follow-up period of 6 months.
Conclusion
LDKT is generally regarded as beneficial for allograft recipients and not particularly detrimental for the donors. However, this study demonstrated that a moderate reduction of kidney function by unilateral nephrectomy resulted in an imbalance of components of vitamin A metabolism, with a significant increase of retinol, RBP4, and apoRBP4 concentrations in the serum of donors.
In this study, we validate and compare elevation accuracy and geomorphic metrics of satellite-derived digital elevation models (DEMs) on the southern Central Andean Plateau. The plateau has an average elevation of 3.7 km and is characterized by diverse topography and relief, lack of vegetation, and clear skies that create ideal conditions for remote sensing. At 30 m resolution, SRTM-C, ASTER GDEM2, stacked ASTER L1A stereopair DEM, ALOS World 3D, and TanDEM-X have been analyzed. The higher-resolution datasets include 12 m TanDEM-X, 10 m single-CoSSC TerraSAR-X/TanDEM-X DEMs, and 5 m ALOS World 3D. These DEMs are state of the art for optical (ASTER and ALOS) and radar (SRTM-C and TanDEM-X) spaceborne sensors. We assessed vertical accuracy by comparing standard deviations of the DEM elevations versus 307,509 differential GPS measurements across 4000 m of elevation. For the 30 m DEMs, the ASTER datasets had the highest vertical standard deviation at > 6.5 m, whereas the SRTM-C, ALOS World 3D, and TanDEM-X were all < 3.5 m. Higher-resolution DEMs generally had lower uncertainty, with both the 12 m TanDEM-X and 5 m ALOS World 3D having < 2 m vertical standard deviation. Analysis of vertical uncertainty with respect to terrain elevation, slope, and aspect revealed low uncertainty across these attributes for SRTM-C (30 m), TanDEM-X (12–30 m), and ALOS World 3D (5–30 m). Single-CoSSC TerraSAR-X/TanDEM-X 10 m DEMs and the 30 m ASTER GDEM2 displayed slight aspect biases, which were removed in their stacked counterparts (TanDEM-X and ASTER Stack). Based on low vertical standard deviations and visual inspection alongside optical satellite data, we selected the 30 m SRTM-C, 12–30 m TanDEM-X, 10 m single-CoSSC TerraSAR-X/TanDEM-X, and 5 m ALOS World 3D for geomorphic metric comparison in a 66 km2 catchment with a distinct river knickpoint. Consistent m/n values were found using chi plot channel profile analysis, regardless of DEM type and spatial resolution.
Slope, curvature, and drainage area were calculated, and plotting schemes were used to assess basin-wide differences in the hillslope-to-valley transition related to the knickpoint. While slope and hillslope length measurements vary little between datasets, curvature displays higher-magnitude measurements as resolution becomes finer. This is especially true for the optical 5 m ALOS World 3D DEM, which demonstrated high-frequency noise in 2–8 pixel steps in a Fourier frequency analysis. The improvements in accurate space-radar DEMs (e.g., TanDEM-X) for geomorphometry are promising, but airborne or terrestrial data are still necessary for meter-scale analysis.
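The vertical-accuracy measure used throughout the comparison above is simply the standard deviation of DEM-minus-dGPS elevation differences. A generic sketch with made-up elevations, not the study's data or processing chain:

```python
# Vertical accuracy as the standard deviation of elevation differences
# between DEM pixels and co-located dGPS control points.
# Elevations below are made up for illustration.
import statistics

def vertical_std(dem_elev, gps_elev):
    """Standard deviation of DEM-minus-dGPS differences (same units as input)."""
    diffs = [d - g for d, g in zip(dem_elev, gps_elev)]
    return statistics.stdev(diffs)

dem = [3701.2, 3824.9, 3950.4, 4102.8]  # hypothetical DEM elevations (m)
gps = [3700.0, 3826.5, 3949.0, 4101.0]  # hypothetical co-located dGPS elevations (m)
print(round(vertical_std(dem, gps), 2))
```

Note that a constant offset between DEM and dGPS does not affect this measure; it captures scatter, not bias.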
Background: Plasma concentration of retinol is an accepted indicator for assessing the vitamin A (retinol) status of cattle. However, the determination of vitamin A requires a time-consuming multi-step procedure, which needs specific equipment to perform extraction, centrifugation or saponification prior to high-performance liquid chromatography (HPLC).
Methods: The concentrations of retinol in whole blood (n = 10), plasma (n = 132) and serum (n = 61) were measured by a new rapid cow-side test (iCheck™ FLUORO) and compared with those by HPLC in two independent laboratories in Germany (DE) and Japan (JP).
Results: Retinol concentrations in plasma ranged from 0.033 to 0.532 mg/L, and in serum from 0.043 to 0.360 mg/L (HPLC method). No significant differences in retinol levels were observed between the new rapid cow-side test and HPLC performed in different laboratories (HPLC vs. iCheck™ FLUORO: 0.320 ± 0.047 mg/L vs. 0.333 ± 0.044 mg/L, and 0.240 ± 0.096 mg/L vs. 0.241 ± 0.069 mg/L, lab DE and lab JP, respectively). A similar comparability was observed when whole blood was used (HPLC vs. iCheck™ FLUORO: 0.353 ± 0.084 mg/L vs. 0.341 ± 0.064 mg/L). Results showed good agreement between both methods, with a correlation coefficient of r2 = 0.87 (P < 0.001), and Bland-Altman plots revealed no significant bias for any comparison.
Conclusions: With the new rapid cow-side test (iCheck™ FLUORO), retinol concentrations in cattle can be reliably assessed within a few minutes and directly in the barn, even using whole blood without prior centrifugation. The ease of application of the new rapid cow-side test and its portability can improve the diagnostics of vitamin A status and will help to control vitamin A supplementation in specific vitamin A feeding regimes, such as those used to optimize health status in calves or meat marbling in Japanese Black cattle.
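The Bland-Altman agreement check mentioned in the Results reduces to the mean bias between paired measurements and its 95% limits of agreement. A minimal sketch, using hypothetical paired retinol values rather than the study data:

```python
# Bland-Altman agreement: mean bias and 95% limits of agreement
# between two measurement methods. Paired values are hypothetical.
import statistics

def bland_altman(method_a, method_b):
    """Return (bias, (lower, upper)) where limits are bias +/- 1.96 SD of differences."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# hypothetical paired retinol measurements (mg/L), not the study data
hplc = [0.320, 0.240, 0.353, 0.190, 0.410]
icheck = [0.333, 0.241, 0.341, 0.200, 0.398]
bias, (low, high) = bland_altman(hplc, icheck)
print(round(bias, 4), round(low, 4), round(high, 4))
```

"No significant bias" in the abstract corresponds to the limits of agreement enclosing zero with a bias close to zero.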
Statistics Canada, Canada’s national statistics agency, offers a suite of spatial files for mapping and analysis of its various population data products. The following article showcases possibilities and shortfalls of the existing spatial files for mapping population data, and provides an overview of the structure of the available boundary files from the regional to the dissemination block level. Due to Canada’s highly dispersed population, mapping its distribution and density can be challenging. Common mapping techniques such as the choropleth method are suitable only for mapping spatially high-resolution data, such as data at the dissemination area level. To allow for mapping of population data at less detailed levels such as census divisions or provinces, Statistics Canada has created a so-called ecumene boundary file which outlines the inhabited area of Canada and can be used to more accurately visualize Canada’s population distribution and density.
Using behavioral observation for the longitudinal study of anger regulation in middle childhood
(2017)
Assessing anger regulation via self-reports is fraught with problems, especially among children. Behavioral observation provides an ecologically valid alternative for measuring anger regulation. The present study uses data from two waves of a longitudinal study to present a behavioral observation approach for measuring anger regulation in middle childhood. At T1, 599 children from Germany (6–10 years old) were observed during an anger-eliciting task, and the use of anger regulation strategies was coded. At T2, 3 years later, the observation was repeated with an age-appropriate version of the same task. Partial metric measurement invariance over time demonstrated the structural equivalence of the two versions. Maladaptive anger regulation between the two time points showed moderate stability. Validity was established by showing correlations with aggressive behavior, peer problems, and conduct problems (concurrent and predictive criterion validity). The study presents an ecologically valid and economical approach to assessing anger regulation strategies in anger-eliciting situations.
Background: Although the health benefits of physical activity (PA) are well documented, the majority of the population is unable to incorporate current recommendations into their daily routine. Mobile health (mHealth) apps could help increase the level of PA. However, this is contingent on the interest of potential users.
Objective: The aim of this study was the explorative, nuanced determination of the interest in mHealth apps with respect to PA among students and staff of a university.
Methods: We conducted a Web-based survey from June to July 2015 in which students and employees from the University of Potsdam were asked about their activity level, interest in mHealth fitness apps, chronic diseases, and sociodemographic parameters.
Results: A total of 1217 students (67.30% [819/1217] female; mean age 26.0 years [SD 4.9]) and 485 employees (67.5% [327/485] female; mean age 42.7 years [SD 11.7]) participated in the survey. The recommendation for PA (3 times per week) was not met by 70.1% (340/485) of employees and 52.67% (641/1217) of students. Within these groups, 53.2% (341/641) of students and 44.2% (150/340) of employees, independent of age, sex, body mass index (BMI), and level of education or professional qualification, indicated an interest in mHealth fitness apps.
Conclusions: Even in a younger, highly educated population, the majority of respondents reported an insufficient level of PA. About half of them indicated their interest in training support. This suggests that the use of personalized mobile fitness apps may become increasingly significant for a positive change of lifestyle.
Background: Inferring regulatory interactions between genes from transcriptomics time-resolved data, yielding reverse engineered gene regulatory networks, is of paramount importance to systems biology and bioinformatics studies. Accurate methods to address this problem can ultimately provide a deeper insight into the complexity, behavior, and functions of the underlying biological systems. However, the large number of interacting genes coupled with short and often noisy time-resolved read-outs of the system renders the reverse engineering a challenging task. Therefore, the development and assessment of methods which are computationally efficient, robust against noise, applicable to short time series data, and preferably capable of reconstructing the directionality of the regulatory interactions remains a pressing research problem with valuable applications.
Results: Here we perform the largest systematic analysis of a set of similarity measures and scoring schemes within the scope of the relevance network approach, which are commonly used for gene regulatory network reconstruction from time series data. In addition, we define and analyze several novel measures and schemes which are particularly suitable for short transcriptomics time series. We also compare the 21 considered measures and 6 scoring schemes according to their ability to correctly reconstruct such networks from short time series data by calculating summary statistics based on the corresponding specificity and sensitivity. Our results demonstrate that rank- and symbol-based measures have the highest performance in inferring regulatory interactions. In addition, the proposed scoring scheme based on asymmetric weighting has been shown to be valuable in reducing the number of false positive interactions. On the other hand, Granger causality as well as information-theoretic measures, frequently used in the inference of regulatory networks, show low performance on the short time series analyzed in this study.
Conclusions: Our study is intended to serve as a guide for choosing a particular combination of similarity measures and scoring schemes suitable for reconstruction of gene regulatory networks from short time series data. We show that further improvement of algorithms for reverse engineering can be obtained if one considers measures that are rooted in the study of symbolic dynamics or ranks, in contrast to the application of common similarity measures which do not consider the temporal character of the employed data. Moreover, we establish that the asymmetric weighting scoring scheme together with symbol based measures (for low noise level) and rank based measures (for high noise level) are the most suitable choices.
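As an illustration of the rank-based measures the study favors, a Spearman rank correlation between two short expression profiles can be computed directly from rank differences. This toy sketch (without tie handling) is generic and not the authors' implementation:

```python
# Spearman rank correlation as a simple rank-based similarity measure
# between two short gene expression time series (toy data, no ties).

def ranks(series):
    """Ranks starting at 1; assumes no ties for simplicity."""
    order = sorted(range(len(series)), key=lambda i: series[i])
    r = [0] * len(series)
    for rank, idx in enumerate(order, start=1):
        r[idx] = rank
    return r

def spearman(x, y):
    """Spearman rank correlation: 1 - 6*sum(d^2) / (n*(n^2-1))."""
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# toy expression time series for two hypothetical genes
gene_a = [0.1, 0.4, 0.9, 0.7, 1.2]
gene_b = [0.2, 0.5, 1.1, 0.8, 1.5]
print(spearman(gene_a, gene_b))  # identical rank ordering -> 1.0
```

Because only the ordering of values enters the measure, it is robust to the monotone distortions and noise levels that hurt value-based correlations on short series.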
The femtosecond excited-state dynamics following resonant photoexcitation enable the selective deformation of N-H and N-C chemical bonds in 2-thiopyridone in aqueous solution with optical or X-ray pulses. In combination with multiconfigurational quantum-chemical calculations, the orbital-specific electronic structure and its ultrafast dynamics accessed with resonant inelastic X-ray scattering at the N 1s level using synchrotron radiation and the soft X-ray free-electron laser LCLS provide direct evidence for this controlled photoinduced molecular deformation and its ultrashort time-scale.
Many studies have demonstrated interactions between number processing and either spatial codes (effects of spatial-numerical associations) or visual size-related codes (size-congruity effect). However, the interrelatedness of these two number couplings is still unclear. The present study examines the simultaneous occurrence of space- and size-numerical congruency effects and their interactions, both within and across trials, in a magnitude judgment task in which physically small or large digits were presented left or right of the screen center. The reaction time analysis revealed that space- and size-congruency effects coexisted in parallel and combined additively. Moreover, a selective sequential modulation of the two congruency effects was found. The size-congruency effect was reduced after size-incongruent trials. The space-congruency effect, however, was only affected by the previous space congruency. The observed independence of spatial-numerical and within-magnitude associations is interpreted as evidence that the two couplings reflect different attributes of numerical meaning, possibly related to ordinality and cardinality.
In the context of back pain, great emphasis has been placed on the importance of trunk stability, especially in situations requiring compensation of repetitive, intense loading induced during high-performance activities, e.g., jumping or landing. This study aims to evaluate trunk muscle activity during drop jump in adolescent athletes with back pain (BP) compared to athletes without back pain (NBP). Eleven adolescent athletes suffering from back pain (BP: m/f: n = 4/7; 15.9 ± 1.3 y; 176 ± 11 cm; 68 ± 11 kg; 12.4 ± 10.5 h/week training) and 11 matched athletes without back pain (NBP: m/f: n = 4/7; 15.5 ± 1.3 y; 174 ± 7 cm; 67 ± 8 kg; 14.9 ± 9.5 h/week training) were evaluated. Subjects conducted 3 drop jumps onto a force plate (ground reaction force). Bilateral 12-lead SEMG (surface electromyography) was applied to assess trunk muscle activity. Ground contact time [ms], maximum vertical jump force [N], jump time [ms] and the jump performance index [m/s] were calculated for drop jumps. SEMG amplitudes (RMS: root mean square [%]) for all 12 single muscles were normalized to MIVC (maximum isometric voluntary contraction) and analyzed in 4 time windows (100 ms pre- and 200 ms post-initial ground contact, 100 ms pre- and 200 ms post-landing) as outcome variables. In addition, muscles were grouped and analyzed in ventral and dorsal muscles, as well as straight and transverse trunk muscles. Drop jump ground reaction force variables did not differ between NBP and BP (p > 0.05). Mm obliquus externus and internus abdominis presented higher SEMG amplitudes (1.3–1.9-fold) for BP (p < 0.05). Mm rectus abdominis, erector spinae thoracic/lumbar and latissimus dorsi did not differ (p > 0.05). The muscle group analysis over the whole jumping cycle showed statistically significantly higher SEMG amplitudes for BP in the ventral (p = 0.031) and transverse muscles (p = 0.020) compared to NBP.
Higher activity of transverse, but not straight, trunk muscles might indicate a specific compensation strategy to support trunk stability in athletes with back pain during drop jumps. Therefore, exercises favoring the transverse trunk muscles could be recommended for back pain treatment.
Trunk loading and back pain
(2017)
An essential function of the trunk is the compensation of external forces and loads in order to guarantee stability. Stabilising the trunk during sudden, repetitive loading in everyday tasks, as well as during performance, is important in order to protect against injury. Hence, reduced trunk stability is accepted as a risk factor for the development of back pain (BP). An altered activity pattern, including extended response and activation times and increased co-contraction of the trunk muscles, as well as a reduced range of motion and increased movement variability of the trunk, is evident in back pain patients (BPP). These differences from healthy controls (H) have been evaluated primarily in quasi-static test situations involving isolated loading applied directly to the trunk. Nevertheless, transferability to everyday, dynamic situations is under debate. Therefore, the aim of this project is to analyse the 3-dimensional motion and neuromuscular reflex activity of the trunk in response to dynamic trunk loading in healthy subjects (H) and back pain patients (BPP).
A measurement tool was developed to assess trunk stability, consisting of dynamic test situations. During these tests, loading of the trunk is generated by the upper and lower limbs, with and without additional perturbation; lifting of objects and stumbling while walking serve as adequate representative tasks. With the help of a 12-lead EMG, the neuromuscular activity of the muscles encompassing the trunk was assessed. In addition, three-dimensional trunk motion was analysed using a newly developed multi-segmental trunk model. The set-up was checked for reproducibility as well as validity. Afterwards, the defined measurement set-up was applied to assess trunk stability in comparisons of healthy subjects and back pain patients.
Clinically acceptable to excellent reliability could be shown for the methods (EMG/kinematics) used in the test situations. No changes in trunk motion pattern could be observed in healthy adults during continuous loading (lifting of objects) of different weights. In contrast, sudden loading of the trunk through perturbations to the lower limbs during walking led to an increased neuromuscular activity and ROM of the trunk. Moreover, BPP showed a delayed muscle response time and extended duration until maximum neuromuscular activity in response to sudden walking perturbations compared to healthy controls. In addition, a reduced lateral flexion of the trunk during perturbation could be shown in BPP.
It is concluded that perturbed gait seems suitable to provoke higher demands on trunk stability in adults. The altered neuromuscular and kinematic compensation pattern in back pain patients (BPP) can be interpreted as increased spine loading and reduced trunk stability in patients. Therefore, this novel assessment of trunk stability is suitable to identify deficits in BPP. Assignment of affected BPP to therapy interventions with focus on stabilisation of the trunk aiming to improve neuromuscular control in dynamic situations is implied. Hence, sensorimotor training (SMT) to enhance trunk stability and compensation of unexpected sudden loading should be preferred.
Trends in precipitation over Germany and the Rhine basin related to changes in weather patterns
(2017)
Precipitation, as a central meteorological driver of agriculture, water security, and human well-being, among others, has long received special attention. Lack of precipitation may have devastating effects such as crop failure and water scarcity. Abundance of precipitation, on the other hand, may likewise result in hazardous events such as flooding and, again, crop failure. Thus, great effort has been spent on tracking changes in precipitation and relating them to underlying processes. Particularly in the face of global warming, and given the link between temperature and atmospheric water holding capacity, research is needed to understand the effect of climate change on precipitation.
The present work aims at understanding past changes in precipitation and other meteorological variables. Trends were detected for various time periods and related to associated changes in large-scale atmospheric circulation. The results derived in this thesis may be used as the foundation for attributing changes in floods to climate change. Assumptions needed for the downscaling of large-scale circulation model output to local climate stations are tested and verified here.
In a first step, changes in precipitation over Germany were detected, focussing not only on precipitation totals, but also on properties of the statistical distribution, transition probabilities as a measure for wet/dry spells, and extreme precipitation events.
Shifting the spatial focus to the Rhine catchment as one of the major water lifelines of Europe and the largest river basin in Germany, detected trends in precipitation and other meteorological variables were analysed in relation to the states of an "optimal" weather pattern classification. The weather pattern classification was developed seeking the best skill in explaining the variance of local climate variables.
The last question addressed whether observed changes in local climate variables are attributable to changes in the frequency of weather patterns or rather to changes within the patterns themselves. A common assumption for a downscaling approach using weather patterns and a stochastic weather generator is that climate change is expressed only as a changed occurrence of patterns, with the pattern properties remaining constant. This assumption was validated, and the ability of the latest generation of general circulation models to reproduce the weather patterns was evaluated.
Paper 1
Precipitation changes in Germany in the period 1951-2006 can be summarised briefly as negative in summer and positive in all other seasons. Different precipitation characteristics confirm the trends in total precipitation: while winter mean and extreme precipitation have increased, wet spells tend to be longer as well (expressed as increased probability for a wet day followed by another wet day). For summer the opposite was observed: reduced total precipitation, supported by decreasing mean and extreme precipitation and reflected in an increasing length of dry spells.
Apart from this general summary for the whole of Germany, the spatial distribution within the country is much more differentiated. Increases in winter precipitation are most pronounced in the north-west and south-east of Germany, while precipitation increases are highest in the west for spring and in the south for autumn. Decreasing summer precipitation was observed in most regions of Germany, with particular focus on the south and west.
The seasonal picture, however, was again represented differently in the contributing months; e.g. increasing autumn precipitation in the south of Germany is formed by strong trends in the south-west in October and in the south-east in November. These results emphasise the high spatial and temporal organisation of precipitation changes.
Paper 2
The next step towards attributing precipitation trends to changes in large-scale atmospheric patterns was the derivation of a weather pattern classification that sufficiently stratifies the local climate variables under investigation. Focussing on temperature, radiation, and humidity in addition to precipitation, a classification based on mean sea level pressure, near-surface temperature, and specific humidity was found to have the best skill in explaining the variance of the local variables. A rather high number of 40 patterns was selected, allowing typical pressure patterns to be assigned to specific seasons by the associated temperature patterns. While the skill in explaining precipitation variance is rather low, better skill was achieved for radiation and, of course, temperature.
Most of the recent GCMs from the CMIP5 ensemble were found to reproduce these weather patterns sufficiently well in terms of frequency, seasonality, and persistence.
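The "skill in explaining the variance of local variables" used to select the classification can be illustrated as one minus the ratio of within-pattern variance to total variance. This is an assumed form of the skill score, not necessarily the thesis's exact definition, and the precipitation values and pattern labels below are toy data:

```python
# Explained-variance skill of a weather pattern classification:
# 1 - within-pattern variance / total variance of a local variable.
# Assumed form of the score; toy data throughout.
import statistics

def explained_variance(values, labels):
    """Fraction of a local variable's variance explained by pattern membership."""
    total = statistics.pvariance(values)
    groups = {}
    for v, lab in zip(values, labels):
        groups.setdefault(lab, []).append(v)
    n = len(values)
    within = sum(statistics.pvariance(g) * len(g) for g in groups.values()) / n
    return 1 - within / total

# toy daily precipitation (mm) grouped by hypothetical weather patterns
precip = [0.0, 0.2, 5.1, 4.8, 0.1, 6.0]
pattern = ["dry", "dry", "wet", "wet", "dry", "wet"]
print(round(explained_variance(precip, pattern), 2))
```

A classification with many patterns almost always scores higher on this measure, which is one reason a rather high number of 40 patterns could be justified.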
Paper 3
Finally, the weather patterns were analysed for trends in pattern frequency, seasonality, persistence, and trends in pattern-specific precipitation and temperature. To overcome uncertainties in trend detection resulting from the selected time period, all possible periods in 1901-2010 with a minimum length of 31 years were considered. Thus, the assumption of a constant link between patterns and local weather was tested rigorously. This assumption was found to hold true only partly. While changes in temperature are mainly attributable to changes in pattern frequency, for precipitation a substantial amount of change was detected within individual patterns.
Magnitude and even sign of trends depend highly on the selected time period. The frequency of certain patterns is related to the long-term variability of large-scale circulation modes.
Changes in precipitation were found to be heterogeneous not only in space, but also in time: statements on trends are only valid for the specific time period under investigation. While some part of the trends can be attributed to changes in the large-scale circulation, distinct changes were found within single weather patterns as well.
The results emphasise the need to analyse multiple periods for thorough trend detection wherever possible and add some note of caution to the application of downscaling approaches based on weather patterns, as they might misinterpret the effect of climate change due to neglecting within-type trends.
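The multi-period trend screening described above can be sketched as follows; the synthetic series and the least-squares trend estimator are illustrative assumptions, not the study's actual data or method:

```python
import numpy as np

def all_period_trends(years, values, min_length=31):
    """Least-squares linear trend for every sub-period of at least
    `min_length` years, in the spirit of multi-period trend screening."""
    trends = {}
    n = len(years)
    for i in range(n):
        for j in range(i + min_length, n + 1):
            y, v = years[i:j], values[i:j]
            slope = np.polyfit(y, v, 1)[0]  # units per year
            trends[(years[i], years[j - 1])] = slope
    return trends

# Synthetic series with multidecadal variability: the sign of the
# fitted trend flips depending on the window chosen.
years = np.arange(1901, 2011)
rng = np.random.default_rng(0)
series = 10 * np.sin(2 * np.pi * (years - 1901) / 70) + rng.normal(0, 2, years.size)
trends = all_period_trends(years, series)
slopes = np.array(list(trends.values()))
print(slopes.min() < 0 < slopes.max())  # both signs occur
```

For 110 years and a 31-year minimum this yields 3240 sub-periods, which illustrates why single-period trend statements can be misleading.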
Translating innovation
(2017)
This doctoral thesis studies the process of innovation adoption in public administrations, addressing the research question of how an innovation is translated to a local context. The study empirically explores Design Thinking as a new problem-solving approach introduced by a federal government organisation in Singapore. With a focus on user-centeredness, collaboration, and iteration, Design Thinking seems to offer a new way to engage recipients and other stakeholders of public services as well as to re-think the policy design process from a user’s point of view. The methodology was pioneered in the private sector; early adopters in government include civil services in Australia, Denmark, the United Kingdom, the United States, and Singapore. Hitherto, there is not much evidence on how and for which purposes Design Thinking is used in the public sector.
For the purpose of this study, innovation adoption is framed in an institutionalist perspective addressing how concepts are translated to local contexts. The study rejects simplistic views of the innovation adoption process, in which an idea diffuses to another setting without adaptation. The translation perspective is fruitful because it captures the multidimensionality and ‘messiness’ of innovation adoption. More specifically, the overall research question addressed in this study is: How has Design Thinking been translated to the local context of the public sector organisation under investigation? And from a theoretical point of view: What can we learn from translation theory about innovation adoption processes?
Moreover, there are only a few empirical studies of organisations adopting Design Thinking, and most of them focus on private organisations. We know very little about how Design Thinking is embedded in public sector organisations. This study therefore provides further empirical evidence of how Design Thinking is used in a public sector organisation, especially with regard to its application to policy work, which has so far been under-researched.
An exploratory single case study approach was chosen to provide an in-depth analysis of the innovation adoption process. Based on a purposive, theory-driven sampling approach, a Singaporean Ministry was selected because it represented an organisational setting in which Design Thinking had been embedded for several years, making it a relevant case with regard to the research question. Following a qualitative research design, 28 semi-structured interviews (45-100 minutes) with employees and managers were conducted. The interview data was triangulated with observations and documents collected during a field research stay in Singapore.
The empirical study of innovation adoption in a single organisation focused on the intra-organisational perspective, with the aim of capturing the variations of translation that occur during the adoption process. In so doing, this study first opened the black box often assumed in implementation studies. Second, this research advances translation studies not only by showing variance, but also by deriving explanatory factors. The main differences in the translation of Design Thinking occurred between service delivery and policy divisions, as well as between the first adopter and the rest of the organisation. For the intra-organisational translation of Design Thinking in the Singaporean Ministry, the following five factors played a role: task type, mode of adoption, type of expertise, sequence of adoption, and the adoption of similar practices.
Working memory (WM) performance declines with age. However, several studies have shown that WM training may lead to performance increases not only in the trained task, but also in untrained cognitive transfer tasks. It has been suggested that transfer effects occur if training task and transfer task share specific processing components that are supposedly processed in the same brain areas. In the current study, we investigated whether single-task WM training and training-related alterations in neural activity might support performance in a dual-task setting, thus assessing transfer effects to higher-order control processes in the context of dual-task coordination. A sample of older adults (age 60–72) was assigned to either a training or control group. The training group participated in 12 sessions of an adaptive n-back training. At pre- and post-measurement, all participants performed a multimodal dual-task to assess transfer effects. This task consisted of two simultaneous delayed match-to-sample WM tasks using two different stimulus modalities (visual and auditory) that were performed either in isolation (single-task) or in conjunction (dual-task). A subgroup also participated in functional magnetic resonance imaging (fMRI) during the performance of the n-back task before and after training. While no transfer to single-task performance was found, dual-task costs in both the visual modality (p < 0.05) and the auditory modality (p < 0.05) decreased at post-measurement in the training but not in the control group. In the fMRI subgroup of the training participants, neural activity changes in left dorsolateral prefrontal cortex (DLPFC) during one-back predicted post-training auditory dual-task costs, while neural activity changes in right DLPFC during three-back predicted visual dual-task costs. Results might indicate an improvement in central executive processing that could facilitate both WM and dual-task coordination.
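Dual-task costs of the kind reported above are commonly quantified as the relative performance decrement from single- to dual-task conditions; the study does not spell out its formula, so this is a sketch of the standard convention:

```python
def dual_task_cost(single_accuracy, dual_accuracy):
    """Relative performance decrement (proportional dual-task cost).

    A common convention: cost = (single - dual) / single, so higher
    values mean a larger coordination cost and 0 means no cost.
    """
    if single_accuracy <= 0:
        raise ValueError("single-task accuracy must be positive")
    return (single_accuracy - dual_accuracy) / single_accuracy

# Hypothetical example: visual WM accuracy drops from 0.90 (single-task)
# to 0.72 (dual-task), a 20% relative cost.
cost = dual_task_cost(0.90, 0.72)
print(round(cost, 2))  # 0.2
```

A training-related reduction of this quantity, as found here, would then indicate improved dual-task coordination rather than improved single-task performance.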
With its transparent orthography, Standard Indonesian is spoken by over 160 million inhabitants and is the primary language of instruction in education and the government in Indonesia. An assessment battery of reading and reading-related skills was developed as a starting point for the diagnosis of dyslexia in beginner learners. Founded on the International Dyslexia Association’s definition of dyslexia, the test battery comprises nine empirically motivated reading and reading-related tasks assessing word reading, pseudoword reading, arithmetic, rapid automatized naming, phoneme deletion, forward and backward digit span, verbal fluency, orthographic choice (spelling), and writing. The test was validated by computing the relationships between the outcomes on the reading-skills and reading-related measures by means of correlation and factor analyses. External variables, i.e., school grades and teacher ratings of the reading and learning abilities of individual students, were also utilized to provide evidence of its construct validity. Four variables were found to be significantly related with reading-skill measures: phonological awareness, rapid naming, spelling, and digit span. The current study on reading development in Standard Indonesian confirms findings from other languages with transparent orthographies and suggests a test battery including preliminary norm scores for screening and assessment of elementary school children learning to read Standard Indonesian.
As an emerging sub-field of music information retrieval (MIR), music imagery information retrieval (MIIR) aims to retrieve information from brain activity recorded during music cognition–such as listening to or imagining music pieces. This is a highly inter-disciplinary endeavor that requires expertise in MIR as well as cognitive neuroscience and psychology. The OpenMIIR initiative strives to foster collaborations between these fields to advance the state of the art in MIIR. As a first step, electroencephalography (EEG) recordings of music perception and imagination have been made publicly available, enabling MIR researchers to easily test and adapt their existing approaches for music analysis like fingerprinting, beat tracking or tempo estimation on this new kind of data. This paper reports on first results of MIIR experiments using these OpenMIIR datasets and points out how these findings could drive new research in cognitive neuroscience.
We present a setup combining a liquid flatjet sample delivery and a MHz laser system for time-resolved soft X-ray absorption measurements of liquid samples at the high brilliance undulator beamline UE52-SGM at Bessy II yielding unprecedented statistics in this spectral range. We demonstrate that the efficient detection of transient absorption changes in transmission mode enables the identification of photoexcited species in dilute samples. With iron(II)-trisbipyridine in aqueous solution as a benchmark system, we present absorption measurements at various edges in the soft X-ray regime. In combination with the wavelength tunability of the laser system, the set-up opens up opportunities to study the photochemistry of many systems at low concentrations, relevant to materials sciences, chemistry, and biology.
We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged observables for geometric Brownian motion, underlying the famed Black–Scholes–Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.
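The central observable, the time averaged MSD, is the lag-dependent average of squared increments along a single trajectory; a minimal sketch on simulated geometric Brownian motion (an assumption mirroring the Black–Scholes–Merton benchmark, not the Dow Jones data analysed in the paper):

```python
import numpy as np

def time_averaged_msd(x, lag):
    """Time averaged mean squared displacement at a given lag:
    the mean of (x[t+lag] - x[t])**2 along the trajectory."""
    d = x[lag:] - x[:-lag]
    return np.mean(d ** 2)

# Geometric Brownian motion: the log-price is Brownian motion with
# drift, so its time averaged MSD grows roughly linearly in the lag.
rng = np.random.default_rng(1)
n, mu, sigma, dt = 20_000, 0.0, 0.2, 1 / 252
log_price = np.cumsum((mu - sigma**2 / 2) * dt
                      + sigma * np.sqrt(dt) * rng.normal(size=n))
tamsd = [time_averaged_msd(log_price, lag) for lag in (10, 20, 40)]
# Doubling the lag roughly doubles the TAMSD for Brownian motion.
print(all(t2 / t1 > 1.5 for t1, t2 in zip(tamsd, tamsd[1:])))
```

Ageing and delay variants restrict the averaging window to different fractions of the series; the simple version above averages over the whole trajectory.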
In October 2016, following a campaign led by Labour Peer Lord Alfred Dubs, the first child asylum-seekers allowed entry to the UK under new legislation (the ‘Dubs amendment’) arrived in England. Their arrival was captured by a heavy media presence, and very quickly doubts were raised by right-wing tabloids and politicians about their age. In this article, I explore the arguments underpinning the Dubs campaign and the media coverage of the children’s arrival as a starting point for interrogating representational practices around children who seek asylum. I illustrate how the campaign was premised on a universal politics of childhood that inadvertently laid down the terms on which these children would be given protection, namely their innocence. The universality of childhood fuels public sympathy for child asylum-seekers, underlies the ‘child first, migrant second’ approach advocated by humanitarian organisations, and it was a key argument in the ‘Dubs amendment’. Yet the campaign highlights how representations of child asylum-seekers rely on codes that operate to identify ‘unchildlike’ children. As I show, in the context of the criminalisation of undocumented migrants, childhood is no longer a stable category which guarantees protection, but is subject to scrutiny and suspicion and can, ultimately, be disproved.
Thermal cis-trans isomerization of azobenzene studied by path sampling and QM/MM stochastic dynamics
(2017)
Azobenzene-based molecular photoswitches have extensively been applied to biological systems, involving photo-control of peptides, lipids and nucleic acids. The isomerization between the stable trans and the metastable cis state of the azo moieties leads to pronounced changes in shape and other physico-chemical properties of the molecules into which they are incorporated. Fast switching can be induced via transitions to excited electronic states and fine-tuned by a large number of different substituents at the phenyl rings. But a rational design of tailor-made azo groups also requires control of their stability in the dark, the half-lifetime of the cis isomer. In computational chemistry, thermally activated barrier crossing on the ground state Born-Oppenheimer surface can efficiently be estimated with Eyring’s transition state theory (TST) approach; the growing complexity of the azo moiety and a rather heterogeneous environment, however, may render some of the underlying simplifying assumptions problematic.
In this dissertation, a computational approach is established to remove two restrictions at once: the environment is modeled explicitly by employing a quantum mechanical/molecular mechanics (QM/MM) description; and the isomerization process is tracked by analyzing complete dynamical pathways between stable states. The suitability of this description is validated using two test systems, pure azobenzene and a derivative with electron donating and electron withdrawing substituents (“push-pull” azobenzene). Each system is studied in the gas phase, in toluene and in polar DMSO solvent. The azo molecules are treated at the QM level using a very recent, semi-empirical approximation to density functional theory (density functional tight binding approximation). Reactive pathways are sampled by implementing a version of the so-called transition path sampling method (TPS), without introducing any bias into the system dynamics. By analyzing ensembles of reactive trajectories, the change in isomerization pathway from linear inversion to rotation in going from apolar to polar solvent, predicted by the TST approach, could be verified for the push-pull derivative. At the same time, the mere presence of explicit solvation is seen to broaden the distribution of isomerization pathways, an effect TST cannot account for.
Using likelihood maximization based on the TPS shooting history, an improved reaction coordinate was identified as a sine-cosine combination of the central bend angles and the rotation dihedral, r (ω,α,α′). A computational van’t Hoff analysis of the activation entropies was performed to gain further insight into the differential role of solvent for the unsubstituted and the push-pull azobenzene. In agreement with experiment, it yielded positive activation entropies for azobenzene in DMSO solvent and negative ones for the push-pull derivative, reflecting the induced ordering of solvent around the more dipolar transition state associated with the latter compound. In addition, dynamically corrected rate constants were evaluated using the reactive flux approach, for which an increase comparable to the experimental one was observed in the high polarity medium for both azobenzene derivatives.
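The activation parameters discussed above enter through Eyring's TST rate expression; written out (a textbook relation, included here only to make the van't Hoff-type analysis explicit):

```latex
k(T) \;=\; \frac{k_B T}{h}\,
  \exp\!\left(\frac{\Delta S^{\ddagger}}{R}\right)
  \exp\!\left(-\frac{\Delta H^{\ddagger}}{R T}\right),
\qquad\text{so}\qquad
\ln\frac{k}{T} \;=\; -\frac{\Delta H^{\ddagger}}{R}\cdot\frac{1}{T}
  \;+\; \ln\frac{k_B}{h} \;+\; \frac{\Delta S^{\ddagger}}{R}.
```

A linear fit of ln(k/T) against 1/T thus yields the activation enthalpy from the slope and the activation entropy from the intercept; the sign of the activation entropy distinguishes the two solvation scenarios described above.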
Reviewed work
Theresa Biberauer and George Walkden (eds.): Syntax over Time: Lexical, Morphological, and Information-Structural Interactions. Oxford: Oxford University Press, 2015, 418 pp.
This article considers Isabella Bird’s representation of medicine in Unbeaten Tracks in Japan (1880) and Journeys in Persia and Kurdistan (1891), the two books in which she engages most extensively with both local (Chinese/Islamic) and Western medical science and practice. I explore how Bird uses medicine to assert her narrative authority and define her travelling persona in opposition to local medical practitioners. I argue that her ambivalence and the unease she frequently expresses concerning medical practice (expressed particularly in her later adoption of the Persian appellation “Feringhi Hakīm” [European physician] to describe her work) serve as a means for her to negotiate the colonial and gendered pressures on Victorian medicine. While in Japan this attitude works to destabilise her hierarchical understanding of science and results in some acknowledgement of traditional Japanese medicine, in Persia it functions more to disguise her increasing collusion with overt British colonial ambitions.
White adipose tissue (WAT) is actively involved in the regulation of whole-body energy homeostasis via storage/release of lipids and adipokine secretion. Current research links WAT dysfunction to the development of metabolic syndrome (MetS) and type 2 diabetes (T2D). The expansion of WAT during oversupply of nutrients prevents ectopic fat accumulation and requires proper preadipocyte-to-adipocyte differentiation. An assumed link between excess levels of reactive oxygen species (ROS), WAT dysfunction and T2D has been discussed controversially. While oxidative stress conditions have conclusively been detected in WAT of T2D patients and related animal models, clinical trials with antioxidants failed to prevent T2D or to improve glucose homeostasis. Furthermore, animal studies yielded inconsistent results regarding the role of oxidative stress in the development of diabetes. Here, we discuss the contribution of ROS to the (patho)physiology of adipocyte function and differentiation, with particular emphasis on sources and nutritional modulators of adipocyte ROS and their functions in signaling mechanisms controlling adipogenesis and functions of mature fat cells. We propose a concept of ROS balance that is required for normal functioning of WAT. We explain how both excessive and diminished levels of ROS, e.g. resulting from oversupplementation with antioxidants, contribute to WAT dysfunction and, subsequently, insulin resistance.
Meaning-making in the brain has become one of the most intensely discussed topics in cognitive science. Traditional theories on cognition that emphasize abstract symbol manipulations often face a dead end: the symbol grounding problem. The embodiment idea tries to overcome this barrier by assuming that the mind is grounded in sensorimotor experiences. A recent surge in behavioral and brain-imaging studies has therefore focused on the role of the motor cortex in language processing. Concrete, action-related words have received convincing evidence to rely on sensorimotor activation. Abstract concepts, however, still pose a distinct challenge for embodied theories on cognition. Fully embodied abstraction mechanisms were formulated, but sensorimotor activation alone seems unlikely to close the explanatory gap. In this respect, ideas of integration areas, such as convergence zones or the ‘hub and spoke’ model, not only appear to be the most promising candidates to account for the discrepancies between concrete and abstract concepts but could also help to unite the field of cognitive science again. The current review identifies milestones in cognitive science research and recent achievements that highlight fundamental challenges, key questions and directions for future research.
Human development has far-reaching impacts on the surface of the globe. The transformation of natural land cover occurs in different forms, and urban growth is one of the most eminent transformative processes. We analyze global land cover data and extract cities as defined by maximally connected urban clusters. The analysis of the city size distribution for all cities on the globe confirms Zipf’s law. Moreover, by investigating the percolation properties of the clustering of urban areas we assess the closeness to criticality for various countries. At the critical thresholds, the urban land cover of the countries undergoes a transition from separated clusters to a gigantic component on the country scale. We study the Zipf-exponents as a function of the closeness to percolation and find a systematic dependence, which could be the reason for deviating exponents reported in the literature. Moreover, we investigate the average size of the clusters as a function of the proximity to percolation and find country specific behavior. By relating the standard deviation and the average of cluster sizes—analogous to Taylor’s law—we suggest an alternative way to identify the percolation transition. We calculate spatial correlations of the urban land cover and find long-range correlations. Finally, by relating the areas of cities with population figures we address the global aspect of the allometry of cities, finding an exponent δ ≈ 0.85, i.e., large cities have lower densities.
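The rank-size estimate of the Zipf exponent used in such analyses can be sketched as follows; the Pareto sample below stands in for the real city-size data, which the study derived from global land-cover clusters:

```python
import numpy as np

def zipf_exponent(sizes):
    """Estimate the Zipf exponent zeta from the rank-size relation
    size ~ rank**(-zeta), via least squares in log-log space."""
    s = np.sort(np.asarray(sizes, dtype=float))[::-1]
    ranks = np.arange(1, s.size + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(s), 1)
    return -slope

# Synthetic city sizes from a Pareto law with tail exponent 1
# (classical Zipf's law), sampled by inversion; the estimator
# should recover zeta close to 1.
rng = np.random.default_rng(2)
u = 1.0 - rng.random(50_000)   # uniform on (0, 1], avoids division by 0
sizes = 1.0 / u
zeta = zipf_exponent(sizes)
print(0.8 < zeta < 1.2)
```

Deviations of the fitted exponent from 1, as discussed in the paper, can then be related to how close a country's urban clusters are to the percolation transition.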
The rule of law is the cornerstone of the international legal system. This paper shows, through analysis of intergovernmental instruments, statements made by representatives of States, and negotiation records, that the rule of law at the United Nations has become increasingly contested in the past years. More precisely, the argument builds on the process of integrating the notion of the rule of law into the Sustainable Development Goals, adopted in September 2015 in the document Transforming our world: the 2030 Agenda for Sustainable Development. The main sections set out the background of the rule of law debate at the UN and the elements of the rule of law at the goal- and target-levels in the 2030 Agenda, especially in SDG 16, and evaluate whether the rule of law in this context may be viewed as a normative and universal foundation of international law. The paper concludes, with reflections drawn from the process leading up to the 2030 Agenda and the final outcome document, that the rule of law, or at least strong and precise formulations of the concept, may be in decline in institutional and normative settings. This can be perceived as symptomatic of a broader crisis of the international legal order.
Reaching the Sustainable Development Goals requires a fundamental socio-economic transformation accompanied by substantial investment in low-carbon infrastructure. Such a sustainability transition represents a non-marginal change, driven by behavioral factors and systemic interactions. However, typical economic models used to assess a sustainability transition focus on marginal changes around a local optimum, which, by construction, lead to negative effects. Thus, these models cannot evaluate a sustainability transition that might have substantial positive effects. This paper examines which mechanisms need to be included in a standard computable general equilibrium model to overcome these limitations and to give a more comprehensive view of the effects of climate change mitigation. Simulation results show that, given an ambitious greenhouse gas emission constraint and a price of carbon, positive economic effects are possible if (1) technical progress results (partly) endogenously from the model and (2) a policy intervention triggering an increase of investment is introduced. Additionally, if (3) the investment behavior of firms is influenced by their sales expectations, the effects are amplified. The results provide suggestions for policy-makers, because the outcome indicates that investment-oriented climate policies can lead to more desirable outcomes in economic, social and environmental terms.
The role of serum amyloid A and sphingosine-1-phosphate on high-density lipoprotein functionality
(2017)
The high-density lipoprotein (HDL) is one of the most important endogenous cardiovascular protective markers. HDL is an attractive target in the search for new pharmaceutical therapies and in the prevention of cardiovascular events. Some of HDL’s anti-atherogenic properties are related to the signaling molecule sphingosine-1-phosphate (S1P), which plays an important role in vascular homeostasis. However, for different patient populations the picture is more complicated. HDL’s protective potency is significantly reduced under pathologic conditions, and HDL might even serve as a proatherogenic particle. Especially under uremic conditions, there is a change in the compounds associated with HDL: S1P is reduced, and acute phase proteins such as serum amyloid A (SAA) are found to be elevated in HDL. The conversion of HDL in inflammation changes the functional properties of HDL. High amounts of SAA are associated with the occurrence of cardiovascular diseases such as atherosclerosis. SAA has potent pro-atherogenic properties, which may have an impact on HDL’s biological functions, including cholesterol efflux capacity and antioxidative and anti-inflammatory activities. This review focuses on two molecules that affect the functionality of HDL. The balance between functional and dysfunctional HDL is disturbed after the loss of the protective sphingolipid molecule S1P and the accumulation of the acute-phase protein SAA. This review also summarizes the biological activities of lipid-free and lipid-bound SAA and its impact on HDL function.
The role of alcohol and victim sexual interest in Spanish students' perceptions of sexual assault
(2017)
Two studies investigated the effects of information related to rape myths on Spanish college students’ perceptions of sexual assault. In Study 1, 92 participants read a vignette about a nonconsensual sexual encounter and rated whether it was a sexual assault and how much the woman was to blame. In the scenario, the man either used physical force or offered alcohol to the woman to overcome her resistance. Rape myth acceptance (RMA) was measured as an individual difference variable. Participants were more convinced that the incident was a sexual assault and blamed the woman less when the man had used force rather than offering her alcohol. In Study 2, 164 college students read a scenario in which the woman rejected a man’s sexual advances after having either accepted or turned down his offer of alcohol. In addition, the woman was either portrayed as being sexually attracted to him or there was no mention of her sexual interest. Participants’ RMA was again included. High RMA participants blamed the victim more than low RMA participants and were less certain that the incident was a sexual assault, especially when the victim had accepted alcohol and was described as being sexually attracted to the man. The findings are discussed in terms of their implications for the prevention and legal prosecution of sexual assault.
Over the last few decades, the methodology for the identification of customary international law (CIL) has been changing. Both elements of CIL – practice and opinio juris – have assumed novel and broader forms, as noted in the Reports of the Special Rapporteur of the International Law Commission (2013, 2014, 2015, 2016). This paper discusses these Reports and the draft conclusions, and reaction by States in the Sixth Committee of the United Nations General Assembly (UNGA), highlighting the areas of consensus and contestation. This ties to the analysis of the main doctrinal positions, with special attention being given to the two elements of CIL, and the role of the UNGA resolutions. The underlying motivation is to assess the real or perceived crisis of CIL, and the author develops the broader argument maintaining that in order to retain unity within international law, the internal limits of CIL must be carefully asserted.
Background: The relative dose response (RDR) test, which quantifies the increase in serum retinol after vitamin A administration, is a qualitative measure of liver vitamin A stores. Particularly in preterm infants, the feasibility of the RDR test involving blood is critically dependent on small sample volumes. Objectives: This study aimed to assess whether the RDR calculated with retinol-binding protein 4 (RBP4) might be a substitute for the classical retinol-based RDR test for assessing vitamin A status in very preterm infants. Methods: This study included preterm infants with a birth weight below 1,500 g (n = 63, median birth weight 985 g, median gestational age 27.4 weeks) who were treated with 5,000 IU retinyl palmitate intramuscularly 3 times a week for 4 weeks. On day 3 (first vitamin A injection) and day 28 of life (last vitamin A injection), the RDR was calculated and compared using serum retinol and RBP4 concentrations. Results: The concentrations of retinol (p < 0.001) and RBP4 (p < 0.01) increased significantly from day 3 to day 28. On day 3, the median (IQR) retinol-RDR was 27% (8.4-42.5) and the median RBP4-RDR was 8.4% (-3.4 to 27.9), compared to 7.5% (-10.6 to 20.8) and -0.61% (-19.7 to 15.3) on day 28. The results for retinol-RDR and RBP4-RDR revealed no significant correlation. The agreement between retinol-RDR and RBP4-RDR was poor (day 3: Cohen's κ = 0.12; day 28: Cohen's κ = 0.18). Conclusion: The RDR test based on circulating RBP4 is unlikely to reflect the hepatic vitamin A status in preterm infants.
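The RDR itself is conventionally computed as the relative increase in the serum concentration after the vitamin A dose; a minimal sketch of that formula (the example values are illustrative and not taken from the study):

```python
def relative_dose_response(pre, post):
    """RDR (%) = 100 * (post - pre) / post, the conventional formula.

    `pre` and `post` are serum concentrations (e.g. retinol or RBP4)
    before and after the vitamin A dose; the result can be negative
    when the concentration falls.
    """
    if post == 0:
        raise ValueError("post-dose concentration must be non-zero")
    return 100.0 * (post - pre) / post

# Hypothetical example: serum retinol rises from 0.58 to 0.80 umol/L.
rdr = relative_dose_response(0.58, 0.80)
print(round(rdr, 1))  # 27.5
```

The same formula applies to retinol and RBP4 concentrations alike, which is what makes the direct comparison of retinol-RDR and RBP4-RDR in the study possible.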
The predictions of two contrasting approaches to the acquisition of transitive relative clauses were tested within the same groups of German-speaking participants aged from 3 to 5 years old. The input frequency approach predicts that object relative clauses with inanimate heads (e.g., the pullover that the man is scratching) are comprehended earlier and more accurately than those with an animate head (e.g., the man that the boy is scratching). In contrast, the structural intervention approach predicts that object relative clauses with two full NP arguments mismatching in number (e.g., the man that the boys are scratching) are comprehended earlier and more accurately than those with number-matching NPs (e.g., the man that the boy is scratching). These approaches were tested in two steps. First, we ran a corpus analysis to ensure that object relative clauses with number-mismatching NPs are not more frequent than object relative clauses with number-matching NPs in child-directed speech. Next, the comprehension of these structures was tested experimentally in 3-, 4-, and 5-year-olds respectively by means of a color naming task. By comparing the predictions of the two approaches within the same participant groups, we were able to uncover that the effects predicted by the input frequency and by the structural intervention approaches co-exist and that they both influence the performance of children on transitive relative clauses, but in a manner that is modulated by age. These results reveal that a sensitivity to animacy mismatch is already demonstrated by 3-year-olds and show that animacy is initially deployed more reliably than number to interpret relative clauses correctly. In all age groups, the animacy mismatch appears to explain the performance of children, thus showing that the comprehension of frequent object relative clauses is enhanced compared to the other conditions.
Starting with 4-year-olds but especially in 5-year-olds, the number mismatch supported comprehension—a facilitation that is unlikely to be driven by input frequency. Once children fine-tune their sensitivity to verb agreement information around the age of four, they are also able to deploy number marking to overcome the intervention effects. This study highlights the importance of testing experimentally contrasting theoretical approaches in order to characterize the multifaceted, developmental nature of language acquisition.
A particular form of social pain is invalidation. This study therefore (a) investigates whether patients with chronic low back pain experience invalidation, (b) whether it has an influence on their pain, and (c) explores whether various social sources (e.g. partner and work) influence physical pain differentially. A total of 92 patients completed questionnaires; for analysis, Pearson's correlation coefficients and hierarchical linear regression analyses were conducted. They indicated a significant association between discounting and disability due to pain (r = .29, p < .05). In particular, discounting by the partner was linked to higher disability (β = .28, p < .05).
The classical Navier-Stokes equations of hydrodynamics are usually written in terms of vector analysis. More promising is the formulation of these equations in the language of differential forms of degree one. In this way the study of the Navier-Stokes equations includes the analysis of the de Rham complex. In particular, the Hodge theory for the de Rham complex enables one to eliminate the pressure from the equations. The Navier-Stokes equations constitute a parabolic system with a nonlinear term which makes sense only for one-forms. A simpler model of the dynamics of incompressible viscous fluid is given by Burgers' equation. This work is aimed at the study of the invariant structure of the Navier-Stokes equations, which is closely related to the algebraic structure of the de Rham complex at step 1. To this end we introduce Navier-Stokes equations related to any elliptic quasicomplex of first order differential operators. These equations are quite similar to the classical Navier-Stokes equations, including generalised velocity and pressure vectors. Elimination of the pressure from the generalised Navier-Stokes equations gives a good motivation for the study of the Neumann problem after Spencer for elliptic quasicomplexes. Such a study is also included in the work.

We start this work with a discussion of the Lamé equations within the context of elliptic quasicomplexes on compact manifolds with boundary. The non-stationary Lamé equations form a hyperbolic system. However, the study of the first mixed problem for them gives good experience for attacking the linearised Navier-Stokes equations. On this basis we describe a class of non-linear perturbations of the Navier-Stokes equations for which the solvability results still hold.
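The pressure elimination via Hodge theory works, schematically, through the Leray projection P onto divergence-free fields; a standard sketch for the classical equations (independent of the quasicomplex generalisation developed in the work):

```latex
% Incompressible Navier-Stokes system:
\partial_t u \;-\; \nu \Delta u \;+\; (u \cdot \nabla) u \;+\; \nabla p \;=\; f,
\qquad \nabla \cdot u = 0.
% Hodge/Helmholtz decomposition of a vector field v:
v \;=\; P v + \nabla q, \qquad \nabla \cdot (P v) = 0.
% Applying P annihilates the gradient term \nabla p:
\partial_t u \;-\; \nu\, P \Delta u \;+\; P\big[(u \cdot \nabla) u\big] \;=\; P f.
```

The pressure is then recovered afterwards from the gradient part of the momentum balance, which is the mechanism the quasicomplex setting generalises.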
The Kenya rift revisited
(2017)
We present three-dimensional (3-D) models that describe the present-day thermal and rheological state of the lithosphere of the greater Kenya rift region aiming at a better understanding of the rift evolution, with a particular focus on plume-lithosphere interactions. The key methodology applied is the 3-D integration of diverse geological and geophysical observations using gravity modelling. Accordingly, the resulting lithospheric-scale 3-D density model is consistent with (i) reviewed descriptions of lithological variations in the sedimentary and volcanic cover, (ii) known trends in crust and mantle seismic velocities as revealed by seismic and seismological data and (iii) the observed gravity field. This data-based model is the first to image a 3-D density configuration of the crystalline crust for the entire region of Kenya and northern Tanzania. An upper and a basal crustal layer are differentiated, each composed of several domains of different average densities. We interpret these domains to trace back to the Precambrian terrane amalgamation associated with the East African Orogeny and to magmatic processes during Mesozoic and Cenozoic rifting phases. In combination with seismic velocities, the densities of these crustal domains indicate compositional differences. The derived lithological trends have been used to parameterise steady-state thermal and rheological models. These models indicate that crustal and mantle temperatures decrease from the Kenya rift in the west to eastern Kenya, while the integrated strength of the lithosphere increases. Thereby, the detailed strength configuration appears strongly controlled by the complex inherited crustal structure, which may have been decisive for the onset, localisation and propagation of rifting.
The paper looks at community interests in international law from the perspective of the International Law Commission. As the topics of the Commission are diverse, the outcome of its work is often seen as providing a sense of direction regarding general aspects of international law. After defining what he understands by “community interests”, the author looks at both secondary and primary rules of international law, as they have been articulated by the Commission, as well as their relevance for the recognition and implementation of community interests. The picture which emerges only partly fits the widespread narrative of “from self-interest to community interest”. Whereas the Commission has recognized, or developed, certain primary rules which more fully articulate community interests, it has been reluctant to reformulate secondary rules of international law, with the exception of jus cogens. The Commission has more recently rather insisted that the traditional State-consent-oriented secondary rules concerning the formation of customary international law and regarding the interpretation of treaties continue to be valid in the face of other actors and forms of action which push towards the recognition of more and thicker community interests.
The El Nino-Southern Oscillation (ENSO) is the main driver of the interannual variability in eastern African rainfall, with a significant impact on vegetation and agriculture and dire consequences for food and social security. In this study, we identify and quantify the ENSO contribution to the eastern African rainfall variability in order to forecast the future eastern African vegetation response to rainfall variability under a predicted intensified ENSO. To isolate the vegetation variability due to ENSO, we removed the ENSO signal from the climate data using empirical orthogonal teleconnection (EOT) analysis. Then, we simulated the ecosystem carbon and water fluxes under the historical climate without the components related to ENSO teleconnections. We found ENSO-driven patterns in vegetation response and confirmed that EOT analysis can successfully extract the coupled tropical Pacific sea surface temperature-eastern African rainfall teleconnection from observed datasets. We further simulated the eastern African vegetation response under future climate change as projected by climate models, and under future climate change combined with a predicted increased ENSO intensity. Our EOT analysis highlights that climate simulations are still not good at capturing rainfall variability due to ENSO, and, as we show here, the future vegetation would differ from what is simulated with climate model outputs lacking an accurate ENSO contribution. We simulated considerable differences in eastern African vegetation growth under the influence of an intensified ENSO regime, which will bring further environmental stress to a region with a reduced capacity to adapt to the effects of global climate change and to maintain food security.
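The signal-removal step can be sketched as a simple linear regression against a climate index. This is a deliberately simplified stand-in for the EOT analysis used in the study, and the series below are invented for illustration.

```python
def remove_index_signal(series, index):
    """Remove the linearly index-correlated component from a time series.

    A crude stand-in for EOT-based signal removal: regress the series on
    the (e.g. ENSO) index and keep the residuals.
    """
    n = len(series)
    mi = sum(index) / n
    ms = sum(series) / n
    beta = sum((i - mi) * (s - ms) for i, s in zip(index, series)) \
        / sum((i - mi) ** 2 for i in index)
    # Subtract the fitted index contribution, keeping the series mean
    return [s - beta * (i - mi) for s, i in zip(series, index)]

# Hypothetical monthly rainfall anomalies and an ENSO index (e.g. Nino3.4)
rain = [1.0, 2.0, 0.5, 3.0, 1.5]
nino = [0.2, 0.8, -0.1, 1.2, 0.4]
rain_no_enso = remove_index_signal(rain, nino)
```

If the series were entirely index-driven, the residual series would be flat, which is the limiting case of a perfect teleconnection.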
The right to privacy in the digital age generates new challenges for international jurisdiction, which this article addresses. It first defines the term privacy in general and presents the international legal framework. The whistleblower Snowden initiated a major political discourse, and the article gives insights into its further development. In 2015, the Human Rights Council for the first time appointed a special rapporteur on the right to privacy. The discourse, however, is not only taking place on a political level; civil society organizations also advocate more stringent regulations and the prosecution of violations of the right to privacy. Moreover, the importance of the technology sector becomes clear: companies like Microsoft are increasingly taking responsibility for protecting digital media against unjustified data misuse, surveillance, collection and storage. But whereas the IT sector develops very quickly, legislative processes do so rather slowly. Lastly, the individual is also held to account: protecting oneself against data misuse is to a great extent a matter of acting self-responsibly. For this, information on protection must be clear and accessible to everyone.
West German anticommunism and the SED's Westarbeit were to some extent interrelated. From the beginning, each German state had attempted to stabilise its own social system while trying to discredit its political opponent. The claim to sole representation and the refusal to acknowledge each other delineated governmental action on both sides. Anticommunism in West Germany re-developed under the conditions of the Cold War, which allowed it to become virtually the reason of state and to serve as a tool for the exclusion of KPD supporters. In its turn, the SED branded the West German state as 'revanchist' and instrumentalised its anticommunism to persecute and eliminate opponents within the GDR. Both phenomena had an integrative and an exclusionary element.
In littoral zones of lakes, multiple processes determine lake ecology and water quality. Lacustrine groundwater discharge (LGD), most frequently taking place in littoral zones, can transport or mobilize nutrients from the sediments and thus contribute significantly to lake eutrophication. Furthermore, lake littoral zones are the habitat of benthic primary producers, namely submerged macrophytes and periphyton, which play a key role in lake food webs and influence lake water quality. Groundwater-mediated nutrient-influx can potentially affect the asymmetric competition between submerged macrophytes and periphyton for light and nutrients. While rooted macrophytes have superior access to sediment nutrients, periphyton can negatively affect macrophytes by shading. LGD may thus facilitate periphyton production at the expense of macrophyte production, although studies on this hypothesized effect are missing.
The research presented in this thesis is aimed at determining how LGD influences periphyton, macrophytes, and the interactions between these benthic producers. Laboratory experiments were combined with field experiments and measurements in an oligo-mesotrophic hard water lake.
In the first study, a general concept was developed based on a literature review of the existing knowledge regarding the potential effects of LGD on nutrients and inorganic and organic carbon loads to lakes, and the effect of these loads on periphyton and macrophytes. The second study includes a field survey and experiment examining the effects of LGD on periphyton in an oligotrophic, stratified hard water lake (Lake Stechlin). This study shows that LGD, by mobilizing phosphorus from the sediments, significantly promotes epiphyton growth, especially at the end of the summer season when epilimnetic phosphorus concentrations are low. The third study focuses on the potential effects of LGD on submerged macrophytes in Lake Stechlin. This study revealed that LGD may have contributed to an observed change in macrophyte community composition and abundance in the shallow littoral areas of the lake. Finally, a laboratory experiment was conducted which mimicked the conditions of a seepage lake. Groundwater circulation was shown to mobilize nutrients from the sediments, which significantly promoted periphyton growth. Macrophyte growth was negatively affected at high periphyton biomasses, confirming the initial hypothesis.
More generally, this thesis shows that groundwater flowing into nutrient-limited lakes may import or mobilize nutrients. These nutrients first promote periphyton, and subsequently provoke radical changes in macrophyte populations before finally having a possible influence on the lake’s trophic state. Hence, the eutrophying effect of groundwater is delayed and, at moderate nutrient loading rates, partly dampened by benthic primary producers. The present research emphasizes the importance and complexity of littoral processes, and the need to further investigate and monitor the benthic environment. As present and future global changes can significantly affect LGD, the understanding of these complex interactions is required for the sustainable management of lake water quality.
The Cauchy problem for the linearised Einstein equation and the Goursat problem for wave equations
(2017)
In this thesis, we study two initial value problems arising in general relativity. The first is the Cauchy problem for the linearised Einstein equation on general globally hyperbolic spacetimes, with smooth and distributional initial data. We extend well-known results by showing that given a solution to the linearised constraint equations of arbitrary real Sobolev regularity, there is a globally defined solution, which is unique up to addition of gauge solutions. Two solutions are considered equivalent if they differ by a gauge solution. Our main result is that the equivalence class of solutions depends continuously on the corresponding equivalence class of initial data. We also solve the linearised constraint equations in certain cases and show that there exist arbitrarily irregular (non-gauge) solutions to the linearised Einstein equation on Minkowski spacetime and Kasner spacetime.
In the second part, we study the Goursat problem (the characteristic Cauchy problem) for wave equations. We specify initial data on a smooth compact Cauchy horizon, which is a lightlike hypersurface. This problem has not been studied much, since it is an initial value problem on a non-globally hyperbolic spacetime. Our main result is that given a smooth function on a non-empty, smooth, compact, totally geodesic and non-degenerate Cauchy horizon and a so-called admissible linear wave equation, there exists a unique solution that is defined on the globally hyperbolic region and restricts to the given function on the Cauchy horizon. Moreover, the solution depends continuously on the initial data. A linear wave equation is called admissible if the first order part satisfies a certain condition on the Cauchy horizon, for example if it vanishes. Interestingly, both existence and uniqueness of solutions fail for general wave equations, as examples show. If we drop the non-degeneracy assumption, examples show that existence fails even for the simplest wave equation. The proof requires precise energy estimates for the wave equation close to the Cauchy horizon. In case the Ricci curvature vanishes on the Cauchy horizon, we show that the energy estimates are strong enough to prove local existence and uniqueness for a class of non-linear wave equations. Our results apply in particular to the Taub-NUT spacetime and the Misner spacetime. It has recently been shown that compact Cauchy horizons in spacetimes satisfying the null energy condition are necessarily smooth and totally geodesic. Our results therefore apply whenever the spacetime satisfies the null energy condition and the Cauchy horizon is compact and non-degenerate.
The Bruce effect revisited
(2017)
Pregnancy termination after encountering a strange male, the Bruce effect, is regarded as a counterstrategy of female mammals towards anticipated infanticide. While confirmed in caged rodent pairs, no verification of the Bruce effect existed from experimental field populations of small rodents. We suggest that the effect may be adaptive for breeding rodent females only under specific conditions related to populations with cyclically fluctuating densities. We investigated the occurrence of a delay in birth date after experimental turnover of the breeding male under different population compositions in bank voles (Myodes glareolus) in large outdoor enclosures: one male-multiple females (n = 6 populations/18 females), multiple males-multiple females (n = 15/45), and single male-single female (MF treatment, n = 74/74). Most delays were observed in the MF treatment after turnover. In parallel, we showed in a laboratory experiment (n = 205 females) that overwintered and primiparous females, the most abundant cohort during population lows in the increase phase of cyclic rodent populations, were more likely to delay births after turnover of the male than year-born and multiparous females. Taken together, our results suggest that the Bruce effect may be an adaptive breeding strategy for rodent females in cyclic populations, specifically at low densities in the increase phase, when isolated, overwintered animals associate in MF pairs. During population lows, infanticide risk and inbreeding risk may be higher than during population highs, while the fitness value of a litter in an increasing population is also higher. Therefore, the Bruce effect may be adaptive for females during annual population lows in the increase phase, even at the cost of delaying reproduction.
Recent research has called into question the current practice to estimate individual usual food intake in large-scale studies. In such studies, usual food intake has been defined as diet over the past year. The aim of this review is to summarise the concepts of dietary assessment methods providing food intake data over this time period. A conceptualised framework is given to help researchers to understand the more recent developments to improve dietary assessment in large-scale prospective studies, and also to help to spot the gaps that need to be addressed in future methodological research. The conceptual framework illustrates the current options for the assessment of an individual’s food consumption over 1 year. Ideally, a person’s food intake on each day of this year should be assessed. Due to participants’ burden, and organisational and financial constraints, however, the options are limited to directly requesting the long-term average (e.g. food frequency questionnaires), or selecting a few days with detailed food consumption measurements (e.g. 24-hour dietary recalls) or using snapshot techniques (e.g. barcode scanning of purchases). It seems necessary and important to further evaluate the performance of statistical modelling of the individual usual food intake from all available sources. Future dietary assessment might profit from the growing prominence of internet and telecommunication technologies to further enhance the available data on food consumption for each study participant. Research is crucial to investigate the performance of innovative assessment tools. However, the self-reported nature of the data itself will always lead to bias.
This research was designed to adapt and investigate the psychometric properties of the Short Dark Triad measure (Jones and Paulhus, Assessment, 21(1), 28-41, 2014) in a German sample across four studies (total N = 1463); the measure evaluates three personality dimensions: narcissism, psychopathy, and Machiavellianism. The structure of the instrument was analysed by a confirmatory factor analysis procedure, which indicated that the three-factor structure had the best fit to the data. Next, the Short Dark Triad measure was evaluated in terms of construct, convergent and discriminant validity, internal consistency (α ≥ .72), and test-retest reliability over a 4-week period (r ≥ .73). Concurrent validity of the SD3 was supported by relating its subscales to measures of the Big Five concept, aggression, and self-esteem. We conclude that the Short Dark Triad instrument shows high cross-language replicability. The use of this short inventory in the investigation of the Dark Triad personality model in the German language context is suggested.
Information on the contemporary in-situ stress state of the earth's crust is essential for geotechnical applications and physics-based seismic hazard assessment. Yet, stress data records for a data point are incomplete and their availability is usually not dense enough to allow conclusive statements. This demands a thorough examination of the in-situ stress field, which is achieved by 3D geomechanical-numerical models. However, the models' spatial resolution is limited and the resulting local stress state is subject to large uncertainties that limit the significance of the findings. In addition, temporal variations of the in-situ stress field are naturally or anthropogenically induced. In my thesis I address these challenges in three manuscripts that investigate (1) the current crustal stress field orientation, (2) the 3D geomechanical-numerical modelling of the in-situ stress state, and (3) the phenomenon of injection-induced temporal stress tensor rotations. In the first manuscript I present the first comprehensive stress data compilation for Iceland, with 495 data records. To this end, I analysed image logs from 57 boreholes in Iceland for indicators of the orientation of the maximum horizontal stress component. The study is the first stress survey based on different kinds of stress indicators in a geologically very young and tectonically active area of an onshore spreading ridge. It reveals a distinct stress field with a depth-independent stress orientation even very close to the spreading centre. In the second manuscript I present a calibrated 3D geomechanical-numerical modelling approach for the in-situ stress state of the Bavarian Molasse Basin that investigates the regional (70x70x10 km³) and local (10x10x10 km³) stress state. To link these two models I develop a multi-stage modelling approach that provides a reliable and efficient method to derive initial and boundary conditions for the smaller-scale model from the larger-scale model.
Furthermore, I quantify the uncertainties in the model results, which are inherent to geomechanical-numerical modelling in general and the multi-stage approach in particular. I show that the significance of the model results is mainly reduced by the uncertainties in the material properties and the low number of available stress magnitude data records for calibration. In the third manuscript I investigate the phenomenon of injection-induced temporal stress tensor rotation and its controlling factors. I conduct a sensitivity study with a 3D generic thermo-hydro-mechanical model and show that the key controls on the stress tensor rotation are the permeability, as the decisive factor, the injection rate, and the initial differential stress. In particular, for enhanced geothermal systems with low permeability, large rotations of the stress tensor are indicated. According to these findings, the estimation of the initial differential stress in a reservoir is possible provided that the permeability is known and the angle of stress rotation is observed. I propose that stress tensor rotations can be a key factor for the potential of induced seismicity on pre-existing faults, since the reorientation of the stress field changes the optimal orientation of faults.
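As a minimal numerical illustration of the mechanism behind such stress tensor rotations, the orientation of a principal stress axis in 2-D plane stress follows the standard Mohr's-circle relation; the stress values below are invented and are not taken from the thesis.

```python
from math import atan2, degrees

def principal_angle(sxx, syy, txy):
    """Orientation (degrees from the x-axis) of a principal stress axis
    for a 2-D stress state, via the standard Mohr's-circle relation
    tan(2*theta) = 2*txy / (sxx - syy)."""
    return degrees(0.5 * atan2(2.0 * txy, sxx - syy))

# Hypothetical stress states before and after fluid injection (MPa)
before = principal_angle(30.0, 20.0, 0.0)   # no shear: axes aligned with x/y
after = principal_angle(30.0, 20.0, 5.0)    # induced shear rotates the axes
rotation = after - before
```

Here an injection-induced change in the shear component alone reorients the principal axes, which is the qualitative effect the sensitivity study quantifies in 3-D.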
Orthogonal systems for heterologous protein expression as well as for the engineering of synthetic gene regulatory circuits in hosts like Saccharomyces cerevisiae depend on synthetic transcription factors (synTFs) and corresponding cis-regulatory binding sites. We have constructed and characterized a set of synTFs based on either transcription activator-like effectors or CRISPR/Cas9, and corresponding small synthetic promoters (synPs) with minimal sequence identity to the host’s endogenous promoters. The resulting collection of functional synTF/synP pairs confers very low background expression under uninduced conditions, while expression output upon induction of the various synTFs covers a wide range and reaches induction factors of up to 400. The broad spectrum of expression strengths that is achieved will be useful for various experimental setups, e.g., the transcriptional balancing of expression levels within heterologous pathways or the construction of artificial regulatory networks. Furthermore, our analyses reveal simple rules that enable the tuning of synTF expression output, thereby allowing easy modification of a given synTF/synP pair. This will make it easier for researchers to construct tailored transcriptional control systems.
In the present work, side-chain polystyrenes were synthesized and characterized in order to be applied in multilayer OLEDs fabricated by solution-processing techniques. Manufacturing optoelectronic devices by solution processing is expected to significantly decrease fabrication costs and to allow large-scale production of such devices.
This dissertation focuses on three series of materials belonging to two material classes. The two classes differ in the type of charge transport they exhibit: either ambipolar transport or electron transport. All materials were applied in all-organic, solution-processed, green Ir-based devices.
In the first part, a series of ambipolar host materials was developed to transport both charge types, holes and electrons, and to serve especially as a matrix for green Ir-based emitters. It was possible to increase device efficacy by modulating the predominant charge-transport type. This was achieved by modifying the electron-transport part of the molecules with more electron-deficient heterocycles or by extending the delocalization of the LUMO. Efficiencies of up to 28.9 cd/A were observed for all-organic, solution-processed three-layer devices.
In the second part, the suitability of triarylboranes and tetraphenylsilanes as electron-transport materials was studied. High triplet energies of up to 2.95 eV were obtained by the rational combination of both molecular structures. Although combining both elements had little effect on the materials' electron-transport properties, high efficiencies of around 24 cd/A were obtained for the series in all-organic, solution-processed two-layer devices.
In the last part, benzene and pyridine were chosen as the electron-transport motifs of the series. By controlling the relative pyridine content (RPC), solubility in methanol was induced for polystyrenes with bulky side chains. Materials with RPC ≥ 0.5 could be deposited orthogonally from solution without harming the underlying layers. To the best of our knowledge, this is the first time such materials have been applied in this architecture, showing moderate efficiencies of around 10 cd/A in all-organic, solution-processed OLEDs.
Overall, the outcome of these studies will actively contribute to the current research on materials for all-solution processed OLEDs.
I. Ceric ammonium nitrate (CAN) mediated thiocyanate radical additions to glycals
In this dissertation, a facile entry to the synthesis of 2-thiocarbohydrates and their transformations was developed. Initially, CAN-mediated thiocyanation of carbohydrates was carried out to obtain the basic building blocks (2-thiocyanates) for the entire study. Subsequently, the 2-thiocyanates were reduced to the corresponding thiols using appropriate reagents and reaction conditions. The screening of substrates, the stereochemical outcome and the reaction mechanism are discussed briefly (Scheme I).
Scheme I. Synthesis of the 2-thiocyanates II and reductions to 2-thiols III & IV.
An interesting mechanism was proposed for the reduction of the 2-thiocyanates II to the 2-thiols III via formation of a disulfide intermediate. The water-soluble free thiols IV were obtained by cleaving the thiocyanate and benzyl groups in a single step. In the subsequent part of the studies, the synthetic potential of the 2-thiols was successfully expanded by simple synthetic transformations.
II. Transformations of the 2-thiocarbohydrates
The 2-thiols were utilized for convenient transformations including sulfa-Michael additions, nucleophilic substitutions, oxidation to disulfides and functionalization at the anomeric position. The diverse functionalizations of the carbohydrates at the C-2 position by means of the sulfur linkage are the highlight of these studies. They create an opportunity to expand the utility of 2-thiocarbohydrates for biological studies.
Reagents and conditions: a) I2, pyridine, THF, rt, 15 min; b) K2CO3, MeCN, rt, 1 h; c) MeI, K2CO3, DMF, 0 °C, 5 min; d) Ac2O, H2SO4 (1 drop), rt, 10 min; e) CAN, MeCN/H2O, NH4SCN, rt, 1 h; f) NaN3, ZnBr2, iPrOH/H2O, reflux, 15 h; g) NaOH (1 M), TBAI, benzene, rt, 2 h; h) ZnCl2, CHCl3, reflux, 3 h.
Scheme II. Functionalization of 2-thiocarbohydrates.
These transformations have enhanced the synthetic value of 2-thiocarbohydrates on the preparative scale. Worth mentioning are the Lewis-acid-catalyzed replacement of the methoxy group by other nucleophiles and the synthesis of the (2→1)-thiodisaccharides, which were obtained with complete β-selectivity. Additionally, for the first time, a carbohydrate-linked thiotetrazole was synthesized by a (3 + 2) cycloaddition approach at the C-2 position.
III. Synthesis of thiodisaccharides by thiol-ene coupling.
In the final part of the studies, the synthesis of thiodisaccharides by a classical photoinduced thiol-ene coupling was successfully achieved.
Reagents and conditions: 2,2-Dimethoxy-2-phenylacetophenone (DPAP), CH2Cl2/EtOH, hv, rt.
Scheme III. Thiol-ene coupling between 2-thiols and exo-glycals.
During the course of the investigations, it was found that steric hindrance plays an important role in the addition of bulky thiols to endo-glycals. Thus, we successfully screened suitable substrates for the addition of various thiols to sterically less hindered alkenes (Scheme III). The photochemical addition of 2-thiols to three different exo-glycals proceeded with excellent regio- and diastereoselectivities as well as yields, which underlines the synthetic potential of this convenient methodology.
The title compound was prepared by the reaction of 1,4,10,13-tetraoxa-7,16-diazacyclooctadecane with 4-chloro-2-methylphenoxyacetic acid in a ratio of 1:2. The structure was confirmed by elemental analysis, IR spectroscopy, NMR (1H, 13C) spectroscopy and X-ray diffraction analysis. Intermolecular hydrogen bonds between the azonium protons and the oxygen atoms of the carboxylate groups were found. The immunoactive properties of the title compound were screened. The compound has the ability to suppress spontaneous and Con A-stimulated cell proliferation in vitro and can therefore be considered an immunodepressant.
Graphs are ubiquitous in Computer Science. For this reason, in many areas, it is very important to have the means to express and reason about graph properties. In particular, we want to be able to check automatically if a given graph property is satisfiable. Actually, in most application scenarios it is desirable to be able to explore graphs satisfying the graph property if they exist or even to get a complete and compact overview of the graphs satisfying the graph property.
We show that the tableau-based reasoning method for graph properties as introduced by Lambers and Orejas paves the way for a symbolic model generation algorithm for graph properties. Graph properties are formulated in a dedicated logic making use of graphs and graph morphisms, which is equivalent to first-order logic on graphs as introduced by Courcelle. Our parallelizable algorithm gradually generates a finite set of so-called symbolic models, where each symbolic model describes a set of finite graphs (i.e., finite models) satisfying the graph property. The set of symbolic models jointly describes all finite models for the graph property (complete) and does not describe any finite graph violating the graph property (sound). Moreover, no symbolic model is already covered by another one (compact). Finally, the algorithm is able to generate from each symbolic model a minimal finite model immediately and allows for an exploration of further finite models. The algorithm is implemented in the new tool AutoGraph.
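The notion of exploring the finite models of a graph property can be illustrated with a brute-force enumeration sketch. The actual tool AutoGraph uses a tableau-based symbolic algorithm rather than enumeration; the property and size bound below are invented for illustration.

```python
from itertools import combinations, chain

def graphs_up_to(n):
    """Yield all simple undirected graphs on node sets {0..k-1}, k <= n,
    as (node_count, frozenset_of_edges) pairs."""
    for k in range(n + 1):
        possible_edges = list(combinations(range(k), 2))
        for r in range(len(possible_edges) + 1):
            for edges in combinations(possible_edges, r):
                yield k, frozenset(edges)

def satisfies(prop, n):
    """Collect all finite models up to n nodes of a property given as a
    predicate on (node_count, edges)."""
    return [g for g in graphs_up_to(n) if prop(*g)]

# Example property: "every node has at least one incident edge"
def no_isolated_node(k, edges):
    touched = set(chain.from_iterable(edges))
    return all(v in touched for v in range(k))

models = satisfies(no_isolated_node, 3)
```

A symbolic algorithm avoids this exponential enumeration by describing whole sets of such models at once, which is exactly what the symbolic models above achieve.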
Swearing in a public place
(2017)
The paper deals with the usage of swear words on the online forum "reddit". Three research questions are dealt with:
How often are swear words used?
How are these swear words received by other users?
Does the topic of the conversation have an influence on the reception and amount of usage of swear words?
The corpus from which the results are taken comprises almost 900 million words, collected in February 2017. Compared to other, similar studies, the corpus is considerably larger and more contemporary.
In addition, the theoretical part discusses the linguistic basics of swear words, including concepts such as politeness theory, taboos and their corresponding words, and censorship. This is done to explain the factors that influence the use of swear words and why swear words are so special in comparison to other word groups. Furthermore, research results from other corpora are presented and later compared with the present results. These include corpora composed of online communication as well as corpora reproducing spoken language. All results from the corpora presented concern the English language.
The results of this study indicate that the swear words on "reddit" are used approximately as often as they are on other platforms. The perception of these swear words is mostly positive, which suggests that the use of swear words on "reddit" is not perceived as impolite. In addition, an influence of the discussion topic on the frequency and reception of swear words could be determined.
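A frequency analysis of this kind reduces to a normalised count over a token stream, commonly reported per million words. The tiny lexicon and token stream below are illustrative assumptions, not the thesis's actual word list or data.

```python
from collections import Counter

def swear_rate(tokens, lexicon):
    """Count occurrences of lexicon items in a token stream and
    normalise the total to a per-million-tokens rate."""
    counts = Counter(t.lower() for t in tokens if t.lower() in lexicon)
    per_million = sum(counts.values()) / len(tokens) * 1_000_000
    return counts, per_million

# Hypothetical token stream and a tiny illustrative lexicon
lexicon = {"damn", "hell"}
tokens = "Well damn that was one hell of a damn good game".split()
counts, rate = swear_rate(tokens, lexicon)
```

Normalising per million tokens is what makes counts comparable across corpora of very different sizes, such as a 900-million-word web corpus and a much smaller spoken-language corpus.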
Nowadays, the need to protect the environment has become more urgent than ever. In the field of chemistry, this translates into practices such as waste prevention, the use of renewable feedstocks, and catalysis, all concepts based on the principles of green chemistry. Polymers are an important product of the chemical industry and are also in the focus of these changes. In this thesis, more sustainable approaches to making two classes of polymers, polypeptoids and polyesters, are described.
Polypeptoids or poly(alkyl-N-glycines) are isomers of polypeptides and are biocompatible as well as degradable under biologically relevant conditions. In addition, they can have interesting properties such as lower critical solution temperature (LCST) behavior. They are usually synthesized by the ring-opening polymerization (ROP) of N-carboxyanhydrides (NCAs), which are produced with the use of toxic compounds (e.g. phosgene) and which are highly sensitive to humidity. In order to avoid the direct synthesis and isolation of the NCAs, N-phenoxycarbonyl-protected N-substituted glycines are prepared, which can yield the NCAs in situ. The conditions for the NCA synthesis and its direct polymerization are investigated and optimized for the simplest N-substituted glycine, sarcosine. The use of a tertiary amine in less than stoichiometric amounts compared to the N-phenoxycarbonyl-sarcosine seems to drastically accelerate NCA formation and does not affect the efficiency of the polymerization. In fact, well-defined polysarcosines that comply with the monomer-to-initiator ratio can be produced by this method. This approach was also applied to other N-substituted glycines.
Dihydroxyacetone is a sustainable diol produced from glycerol and has already been used for the synthesis of polycarbonates. Here, it was used as a comonomer for the synthesis of polyesters. However, the polymerization of dihydroxyacetone presented difficulties, probably due to the insolubility of the macromolecular chains. To circumvent this problem, the dimethyl acetal-protected dihydroxyacetone was polymerized with terephthaloyl chloride to yield a soluble polymer. When the carbonyl was recovered after deprotection, the product was insoluble in all solvents, showing that the carbonyl in the main chain hinders the dissolution of the polymers. The solubility issue can be avoided when a 1:1 mixture of dihydroxyacetone/ethylene glycol is used, yielding a soluble copolyester.
Anthropogenically amplified erosion leads to increased fine-grained sediment input into the fluvial system in the 15,000 km² Kharaa River catchment in northern Mongolia and constitutes a major stress factor for the aquatic ecosystem. This study uniquely combines intensive monitoring, source fingerprinting and catchment modelling techniques, allowing the credibility and accuracy of each method to be compared. High-resolution discharge data were used in combination with daily suspended solid measurements to calculate the suspended sediment budget and compare it with estimates from the sediment budget model SedNet. The comparison of both techniques showed that the development of an overall sediment budget with SedNet was possible, yielding results of the same order of magnitude (20.3 kt a⁻¹ and 16.2 kt a⁻¹).
Radionuclide sediment tracing, using Be-7, Cs-137 and Pb-210, was applied to differentiate sediment sources for particles <10 µm from hillslope and riverbank erosion and showed that riverbank erosion generates 74.5% of the suspended sediment load, whereas surface erosion contributes 21.7% and gully erosion only 3.8%. The contribution of the individual sub-catchments of the Kharaa to the suspended sediment load was assessed based on their variation in geochemical composition (e.g. in Ti, Sn, Mo, Mn, As, Sr, B, U, Ca and Sb). These variations were used for sediment source discrimination with geochemical composite fingerprints based on Genetic Algorithm driven Discriminant Function Analysis, the Kruskal–Wallis H-test and Principal Component Analysis. The contributions of the individual sub-catchments varied from 6.4% to 36.2%, generally showing higher contributions from the sub-catchments in the middle, rather than the upstream, portions of the study area.
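As a rough illustration of how composite fingerprinting apportions sediment between sources, the sketch below un-mixes a tracer signature by least squares with a soft sum-to-one constraint. All tracer values and the three-source setup are invented for illustration; the study's actual workflow (tracer selection via Genetic Algorithm driven Discriminant Function Analysis, etc.) is considerably more involved.

```python
import numpy as np

# Hypothetical tracer concentrations (rows: Ti, Sr, U, Sb) for three
# candidate sub-catchment sources -- invented numbers, not study data.
sources = np.array([
    [4.1, 2.0, 0.8],       # Ti
    [120.0, 95.0, 150.0],  # Sr
    [1.2, 2.5, 0.6],       # U
    [0.3, 0.9, 0.4],       # Sb
])

true_props = np.array([0.2, 0.5, 0.3])
mixture = sources @ true_props  # tracer signature of the suspended sediment


def unmix(sources, mixture, weight=1e3):
    """Estimate source proportions by least squares, with the sum-to-one
    constraint appended as a heavily weighted extra equation."""
    A = np.vstack([sources, weight * np.ones(sources.shape[1])])
    b = np.append(mixture, weight)
    props, *_ = np.linalg.lstsq(A, b, rcond=None)
    props = np.clip(props, 0.0, None)  # proportions cannot be negative
    return props / props.sum()


props = unmix(sources, mixture)
print(np.round(props, 3))  # recovers the assumed source proportions
```

In practice the mixture signature carries measurement error, so the recovered proportions come with uncertainty; the soft constraint is one simple way to keep the estimates physically interpretable.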
The results indicate that riverbank erosion generated by existing livestock grazing practices is the main cause of elevated fine sediment input. Actions towards the protection of the headwaters and the stabilization of the riverbanks within the middle reaches were identified as the highest priority. Deforestation by logging and forest fires should be prevented to avoid increased hillslope erosion in the mountainous areas. Mining activities are of minor importance for the overall catchment sediment load but can constitute locally important point sources of particular heavy metals in the fluvial system.
This review analyzes the potential role and long-term effects of field perennial polycultures (mixtures) in agricultural systems, with the aim of reducing the trade-offs between provisioning and regulating ecosystem services. First, crop rotations are identified as a suitable tool for the assessment of the long-term effects of perennial polycultures on ecosystem services, which are not visible at the single-crop level. Second, the ability of perennial polycultures to support ecosystem services when used in crop rotations is quantified through eight agricultural ecosystem services. Legume-grass mixtures and wildflower mixtures are used as examples of perennial polycultures, and compared with silage maize as a typical crop for biomass production. Perennial polycultures enhance soil fertility, soil protection, climate regulation, pollination, pest and weed control, and landscape aesthetics compared with maize. They also score lower for biomass production compared with maize, which confirms the trade-off between provisioning and regulating ecosystem services. However, the additional positive factors provided by perennial polycultures, such as reduced costs for mineral fertilizer, pesticides, and soil tillage, and a significant preceding crop effect that increases the yields of subsequent crops, should be taken into account. Nevertheless, a full assessment of agricultural ecosystem services requires a more holistic analysis that is beyond the capabilities of current frameworks.
Decades of research have demonstrated that physical stress (PS) stimulates bone remodeling and affects bone structure and function through complex mechanotransduction mechanisms. Recent research has laid the groundwork for the hypothesis that mental stress (MS) also influences bone biology, eventually leading to osteoporosis and increased bone fracture risk. These effects are likely exerted by modulation of hypothalamic–pituitary–adrenal axis activity, resulting in an altered release of growth hormones, glucocorticoids and cytokines, as demonstrated in human and animal studies. Furthermore, molecular cross talk between mental and physical stress is thought to exist, with either synergistic or preventative effects on bone disease progression depending on the characteristics of the applied stressor. This mini review will explain the emerging concept of MS as an important player in bone adaptation and its potential cross talk with PS by summarizing the current state of knowledge, highlighting newly evolving notions (such as intergenerational transmission of stress and its epigenetic modifications affecting bone) and proposing new research directions.
Background: Functional abdominal pain (FAP) is not only a highly prevalent disease but also poses a considerable burden on children and their families. Untreated, FAP frequently persists into adulthood, also leading to an increased risk of psychiatric disorders. Intervention studies underscore the efficacy of cognitive behavioral treatment approaches but are limited in terms of sample size, long-term follow-up data, controls, and inclusion of psychosocial outcome data.
Methods/Design: In a multicenter randomized controlled trial, 112 children aged 7 to 12 years who fulfill the Rome III criteria for FAP will be allocated to an established cognitive behavioral training program for children with FAP (n = 56) or to an active control group (focusing on age-appropriate information delivery; n = 56). Randomization occurs centrally, blockwise and is stratified by center. This study is performed in five pediatric gastroenterology outpatient departments. Observer-blind assessments of outcome variables take place four times: pre-, post-, 3- and 12-months post-treatment. The primary outcome is the course of pain intensity and frequency. Secondary endpoints are health-related quality of life, pain-related coping and cognitions, as well as self-efficacy.
Discussion: This confirmatory randomized controlled clinical trial evaluates the efficacy of a cognitive behavioral intervention for children with FAP. By applying an active control group, time and attention processes can be controlled, and long-term follow-up data over the course of one year can be explored.
The interdisciplinary workshop STOCHASTIC PROCESSES WITH APPLICATIONS IN THE NATURAL SCIENCES was held in Bogotá, at Universidad de los Andes, from December 5 to December 9, 2016. It brought together researchers from Colombia, Germany, France, Italy, and Ukraine, who presented recent progress in mathematical research related to stochastic processes with applications in biophysics.
The present volume collects three of the four courses held at this meeting by Angelo Valleriani, Sylvie Rœlly and Alexei Kulik.
A particular aim of this collection is to inspire young scientists in setting up research goals within the wide scope of fields represented in this volume.
Angelo Valleriani, PhD in high energy physics, is group leader of the team "Stochastic processes in complex and biological systems" from the Max-Planck-Institute of Colloids and Interfaces, Potsdam.
Sylvie Rœlly, Docteur en Mathématiques, is the head of the chair of Probability at the University of Potsdam.
Alexei Kulik, Doctor of Sciences, is a leading researcher at the Institute of Mathematics of the Ukrainian National Academy of Sciences.
In this thesis, stochastic dynamics modelling collective motions of populations, one of the most mysterious types of biological phenomena, are considered. For a system of N particle-like individuals, two kinds of asymptotic behaviour are studied: ergodicity and flocking properties in long time, and propagation of chaos when the number N of agents goes to infinity. Cucker and Smale's deterministic, mean-field kinetic model for a population without a hierarchical structure is the starting point of our journey: the first two chapters are dedicated to understanding the various stochastic dynamics it inspires, with random noise added in different ways. The third chapter, an attempt to improve those results, is built upon the cluster expansion method, a technique from statistical mechanics. Exponential ergodicity is obtained for a class of non-Markovian processes with non-regular drift. In the final part, the focus shifts to a stochastic system of interacting particles derived from Keller and Segel's 2-D parabolic-elliptic model for chemotaxis. Existence and weak uniqueness are proven.
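The Cucker-Smale alignment mechanism that the first chapters perturb with noise can be sketched numerically. The following Euler-Maruyama simulation of a 1-D system with additive noise is a minimal illustration only; the communication rate ψ and all parameters are generic textbook choices, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D Cucker-Smale alignment with additive noise, Euler-Maruyama scheme.
# All parameters are arbitrary illustrative choices.
N, dt, steps = 50, 0.01, 2000
lam, gamma, sigma = 1.0, 0.5, 0.05

x = rng.normal(0.0, 1.0, N)  # positions
v = rng.normal(0.0, 1.0, N)  # velocities


def psi(r):
    """Communication rate: alignment weakens with distance."""
    return 1.0 / (1.0 + r ** 2) ** gamma


v0_spread = v.std()
for _ in range(steps):
    w = psi(np.abs(x[None, :] - x[:, None]))  # pairwise rates
    drift = lam * (w @ v - w.sum(axis=1) * v) / N
    v = v + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=N)
    x = x + v * dt

print(v.std() < v0_spread)  # velocity spread shrinks: flocking
```

With small noise the velocities contract towards a common value (flocking); increasing sigma keeps a residual velocity spread, which is exactly the kind of competition between alignment and randomness the stochastic models make precise.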
Understanding the distribution of species is fundamental for biodiversity conservation, ecosystem management, and increasingly also for climate impact assessment. The presence of a species in a given site depends on physiological limitations (abiotic factors), interactions with other species (biotic factors), migratory or dispersal processes (site accessibility) as well as the continuing
effects of past events, e.g. disturbances (site legacy). Existing approaches to predict species distributions either (i) correlate observed species occurrences with environmental variables describing abiotic limitations, thus ignoring biotic interactions, dispersal and legacy effects (statistical species distribution model, SDM); or (ii) mechanistically model the variety of processes determining species distributions (process-based model, PBM). SDMs are widely used due to their easy applicability and ability to handle varied data qualities. But they fail to reproduce the dynamic response of species distributions to changing conditions. PBMs are expected to be superior in this respect, but they need very specific data unavailable for many species, and are often more complex and require more computational effort. More recently, hybrid models link the two approaches to combine their respective strengths.
In this thesis, I apply and compare statistical and process-based approaches to predict species distributions, and I discuss their respective limitations, specifically for applications in changing environments. Detailed analyses of SDMs for boreal tree species in Finland reveal that nonclimatic predictors - edaphic properties and biotic interactions - are important limitations at the treeline, contesting the assumption of unrestricted, climatically induced range expansion. While the estimated SDMs are successful within their training data range, spatial and temporal model transfer fails. Mapping and comparing sampled predictor space among data subsets identifies spurious extrapolation as the plausible explanation for limited model transferability. Using these findings, I analyze the limited success of an established PBM (LPJ-GUESS) applied to the same problem. Examination of process representation and parameterization in the PBM identifies implemented processes to adjust (competition between species, disturbance) and missing processes that are crucial in boreal forests (nutrient limitation, forest management). Based on climatic correlations shifting over time, I stress the restricted temporal transferability of bioclimatic limits used in LPJ-GUESS and similar PBMs. By critically assessing the performance of SDM and PBM in this application, I demonstrate the importance of understanding the limitations of the
applied methods.
As a potential solution, I add a novel approach to the repertoire of existing hybrid models. By simulation experiments with an individual-based PBM which reproduces community dynamics resulting from biotic factors, dispersal and legacy effects, I assess the resilience of coastal vegetation to abrupt hydrological changes. According to the results of the resilience analysis, I then modify temporal SDM predictions, thereby transferring relevant process detail from PBM to
SDM. The direction of knowledge transfer from PBM to SDM avoids disadvantages of current hybrid models and increases the applicability of the resulting model in long-term, large-scale applications. A further advantage of the proposed framework is its flexibility, as it is readily extended to other model types, disturbance definitions and response characteristics.
Concluding, I argue that we already have a diverse range of promising modelling tools at hand, which can be refined further. But most importantly, they need to be applied more thoughtfully. Bearing their limitations in mind, combining their strengths and openly reporting underlying assumptions and uncertainties is the way forward.
Start-up incentives targeted at unemployed individuals have become an important tool of the Active Labor Market Policy (ALMP) to fight unemployment in many countries in recent years. In contrast to traditional ALMP instruments like training measures, wage subsidies, or job creation schemes, which are aimed at reintegrating unemployed individuals into dependent employment, start-up incentives are a fundamentally different approach to ALMP, in that they intend to encourage and help unemployed individuals to exit unemployment by entering self-employment and, thus, by creating their own jobs. In this sense, start-up incentives for unemployed individuals serve not only as employment and social policy to activate job seekers and combat unemployment but also as business policy to promote entrepreneurship. The corresponding empirical literature on this topic so far has been mainly focused on the individual labor market perspective, however. The main part of the thesis at hand examines the new start-up subsidy (“Gründungszuschuss”) in Germany and consists of four empirical analyses that extend the existing evidence on start-up incentives for unemployed individuals from multiple perspectives and in the following directions:
First, it provides the first impact evaluation of the new start-up subsidy in Germany. The results indicate that participation in the new start-up subsidy has significant positive and persistent effects on both reintegration into the labor market and the income profiles of participants, in line with previous evidence on comparable German and international programs, which emphasizes the general potential of start-up incentives as part of the broader ALMP toolset. Furthermore, a new innovative sensitivity analysis of the applied propensity score matching approach integrates findings from entrepreneurship and labor market research about the key role of an individual's personality in the start-up decision, business performance, and general labor market outcomes into the impact evaluation of start-up incentives. The sensitivity analysis with regard to the inclusion and exclusion of usually unobserved personality variables reveals that differences in the estimated treatment effects are small in magnitude and mostly insignificant. Consequently, concerns about potential overestimation of treatment effects in previous evaluation studies of similar start-up incentives due to usually unobservable personality variables are less justified, as long as the set of observed control variables is sufficiently informative (Chapter 2).
Second, the thesis expands our knowledge about the longer-term business performance and potential of subsidized businesses arising from the start-up subsidy program. In absolute terms, the analysis shows that a relatively high share of subsidized founders successfully survives in the market with their original businesses in the medium to long run. The subsidy also yields a “double dividend” to a certain extent in terms of additional job creation. Compared to “regular”, i.e., non-subsidized new businesses founded by non-unemployed individuals in the same quarter, however, the economic and growth-related impulses set by participants of the subsidy program are only limited with regard to employment growth, innovation activity, or investment. Further investigations of possible reasons for these differences show that differential business growth paths of subsidized founders in the longer run seem to be mainly limited by higher restrictions to access capital and by unobserved factors, such as less growth-oriented business strategies and intentions, as well as lower (subjective) entrepreneurial persistence. Taken together, the program has only limited potential as a business and entrepreneurship policy intended to induce innovation and economic growth (Chapters 3 and 4).
And third, an empirical analysis on the level of German regional labor markets yields that there is a high regional variation in subsidized start-up activity relative to overall new business formation. The positive correlation between regular start-up intensity and the share among all unemployed individuals who participate in the start-up subsidy program suggests that (nascent) unemployed founders also profit from the beneficial effects of regional entrepreneurship capital. Moreover, the analysis of potential deadweight and displacement effects from an aggregated regional perspective emphasizes that the start-up subsidy for unemployed individuals represents a market intervention into existing markets, which affects incumbents and potentially produces inefficiencies and market distortions. This macro perspective deserves more attention and research in the future (Chapter 5).
The Star Excursion Balance Test (SEBT) is effective in measuring dynamic postural control (DPC). This research aimed to determine whether DPC measured by the SEBT differs between young athletes (YA) with back pain (BP) and those without (NBP). 53 BP YA and 53 NBP YA matched for age, height, weight, training years, training sessions/week and training minutes/session were studied. Participants performed 4 practice trials, after which 3 measurements in the anterior, posteromedial and posterolateral SEBT reach directions were recorded. Normalized reach distance was analyzed using the mean of all 3 measurements. There was no statistically significant difference (p > 0.05) between the reach distances of BP (87.2 ± 5.3, 82.4 ± 8.2, 78.7 ± 8.1) and NBP (87.8 ± 5.6, 82.4 ± 8.0, 80.0 ± 8.8) in the anterior, posteromedial and posterolateral directions, respectively. DPC in YA with BP, as assessed by the SEBT, was not different from that of NBP YA.
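For readers unfamiliar with SEBT scoring, the normalization step can be sketched as follows. The sketch assumes the common convention of expressing reach distance as a percentage of leg length and averaging the three recorded trials; the abstract does not state the exact normalization variable used.

```python
def normalized_reach(reach_cm, leg_length_cm):
    """Reach distance as a percentage of leg length (assumed
    normalization variable; not stated in the abstract)."""
    return 100.0 * reach_cm / leg_length_cm


def direction_score(trials_cm, leg_length_cm):
    """Mean of the three recorded trials for one reach direction."""
    return sum(normalized_reach(t, leg_length_cm) for t in trials_cm) / len(trials_cm)


# Hypothetical athlete: leg length 80 cm, three anterior-direction trials
print(round(direction_score([70.0, 69.0, 71.0], 80.0), 1))  # -> 87.5
```

Normalizing removes body-size differences, which is what makes group means such as 87.2 ± 5.3 comparable across matched athletes.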
Stable isotopes in precipitation: Modelling intra-event variations using meteorological parameters
(2017)
The short-term variability of the isotopic composition of precipitation in Golm, Germany, was assessed and modelled. Isotopic data (D/H and 18O/16O) on intra-event timescales, as well as meteorological data from a weather station and a micro rain radar, were used. After data preparation and the combination of all three data sets, a multivariate linear regression analysis was conducted. This was done for four different isotopic response variables, and for the entire data set as well as for the two subsets Summer and Winter. The response variables used are the δ18O values as the difference to the corresponding event-based mean and as the difference to the median, and the deuterium excess values as the difference to both the mean and the median. The models were evaluated by comparing the modelled values with the observed ones. This showed that the observations could not be reproduced in a satisfactory way. Therefore, several suggestions on how to possibly improve the methods, and thus the modelling results, are given in the end.
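The modelling step is an ordinary multivariate linear regression of an isotopic deviation on meteorological predictors. A minimal sketch with synthetic data follows; the three predictors merely stand in for the meteorological variables, which are not itemized in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: three predictors playing the role of the
# meteorological variables (illustrative only).
n = 200
X = rng.normal(size=(n, 3))
beta_true = np.array([0.8, -0.3, 0.1])
# Response: deviation of d18O from the event-based mean, plus noise
y = X @ beta_true + 0.1 * rng.normal(size=n)

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X])
coef, _, rank, _ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef
r2 = 1.0 - ((y - y_hat) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(rank, round(r2, 2))  # full rank; high R^2 on this synthetic data
```

On real intra-event data the R² is what decides whether the observations are reproduced "in a satisfactory way"; the study found it was not.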
Background: Obesity is not only a highly prevalent disease but also poses a considerable burden on children and their families. Evidence is increasing that a lack of self-regulation skills may play a role in the etiology and maintenance of obesity. Our goal with this currently ongoing trial is to examine whether training that focuses on the enhancement of self-regulation skills may increase the sustainability of a complex lifestyle intervention.
Methods/Design: In a multicenter, prospective, parallel group, randomized controlled superiority trial, 226 obese children and adolescents aged 8 to 16 years will be allocated either to a newly developed computer-training program to improve their self-regulation abilities or to a placebo control group. Randomization occurs centrally and blockwise at a 1:1 allocation ratio for each center. This study is performed in pediatric inpatient rehabilitation facilities specialized in the treatment of obesity. Observer-blind assessments of outcome variables take place at four times: at the beginning of the rehabilitation (pre), at the end of the training in the rehabilitation (post), and 6 and 12 months post-rehabilitation intervention. The primary outcome is the course of BMI-SDS over 1 year after the end of the inpatient rehabilitation. Secondary endpoints are the self-regulation skills. In addition, health-related quality of life, and snack intake will be analyzed.
Discussion: The computer-based training programs might be a feasible and attractive tool to increase the sustainability of the weight loss reached during inpatient rehabilitation.
Squimera
(2017)
Software development tools that work and behave consistently across different programming languages are helpful for developers, because they do not have to familiarize themselves with new tooling whenever they decide to use a new language. Also, being able to combine multiple programming languages in a program increases reusability, as developers do not have to recreate software frameworks and libraries in the language they develop in and can reuse existing software instead.
However, developers often have a broad choice with regard to tools, some of which are designed for only one specific programming language. Various Integrated Development Environments have support for multiple languages, but are usually unable to provide a consistent programming experience due to different features of language runtimes. Furthermore, common mechanisms that allow reuse of software written in other languages usually use the operating system or a network connection as the abstract layer. Tools, however, often cannot support such indirections well and are therefore less useful in debugging scenarios for example.
In this report, we present a novel approach that aims to improve the programming experience with regard to working with multiple high-level programming languages. As part of this approach, we reuse the tools of a Smalltalk programming environment for other languages and build a multi-language virtual execution environment which is able to provide the same runtime capabilities for all languages.
The prototype system Squimera is an implementation of our approach and demonstrates that it is possible to reuse development tools, so that they behave in the same way across all supported programming languages. In addition, it provides convenient means to reuse and even mix software libraries and frameworks written in different languages without breaking the debugging experience.
Spotlight on the underdogs
(2017)
Alternaria (A.) is a genus of widespread fungi capable of producing numerous, possibly health-endangering Alternaria toxins (ATs), which are usually not the focus of attention. The formation of ATs depends on the species and on complex interactions of various environmental factors and is not fully understood. In this study the influence of temperature (7 °C, 25 °C), substrate (rice, wheat kernels) and incubation time (4, 7, and 14 days) on the production of thirteen ATs and three sulfoconjugated ATs by three different Alternaria isolates from the species groups A. tenuissima and A. infectoria was determined. High-performance liquid chromatography coupled with tandem mass spectrometry was used for quantification. Under nearly all conditions, tenuazonic acid was the most extensively produced toxin. At 25 °C and with increasing incubation time all toxins were formed in high amounts by the two A. tenuissima strains on both substrates, with comparable mycotoxin profiles. However, for some of the toxins, stagnation or a decrease in production was observed from day 7 to 14. As opposed to the A. tenuissima strains, the A. infectoria strain only produced low amounts of ATs, but high concentrations of stemphyltoxin III. The results provide an essential insight into the quantitative in vitro AT formation under different environmental conditions, potentially transferable to different field and storage conditions.
Background: Female sperm storage has evolved independently multiple times among vertebrates to control reproduction in response to the environment. In internally fertilising amphibians, female salamanders store sperm in cloacal spermathecae, whereas among anurans sperm storage in oviducts is known only in tailed frogs. Facilitated through extensive field sampling following historical observations, we tested for sperm-storing structures in the female urogenital tract of fossorial, tropical caecilian amphibians.
Findings: In the oviparous Ichthyophis cf. kohtaoensis, aggregated sperm were present in a distinct region of the posterior oviduct, but not in the cloaca, in six out of seven vitellogenic females prior to oviposition. Spermatozoa were found most abundantly between the mucosal folds. In relation to reproductive status, decreased amounts of sperm were present in gravid females compared with pre-ovulatory females. Sperm were absent in females past oviposition.
Conclusions: Our findings indicate short-term oviductal sperm storage in the oviparous Ichthyophis cf. kohtaoensis. We assume that in female caecilians exhibiting high levels of parental investment, sperm storage has evolved to optimally coordinate reproductive events and to increase fitness.
Galaxies evolve on cosmological timescales, and to study this evolution we can either study the stellar populations, tracing star formation and chemical enrichment, or the dynamics, tracing interactions and mergers of galaxies as well as accretion. In the last decades this field has become one of the most active research areas in modern astrophysics, and especially the use of integral field spectrographs has furthered our understanding. This work is based on data of NGC 5102 obtained with the panoramic integral field spectrograph MUSE. The data are analysed with two separate and complementary approaches: in the first part, standard methods are used to measure the kinematics and then model the gravitational potential using these exceptionally high-quality data. In the second part I develop the new method of surface brightness fluctuation spectroscopy and quantitatively explore its potential to investigate the bright evolved stellar population.
Measuring the kinematics of NGC 5102, I discover that this low-luminosity S0 galaxy hosts two counter-rotating discs. The more central stellar component co-rotates with the large amount of HI gas. Investigating the populations, I find strong central age and metallicity gradients, with a younger and more metal-rich central population. The spectral resolution of MUSE does not allow these population gradients to be connected with the two counter-rotating discs.
The kinematic measurements are modelled with Jeans anisotropic models to infer the gravitational potential of NGC 5102. Under the self-consistent mass-follows-light assumption, none of the Jeans models is able to reproduce the observed kinematics. To my knowledge this is the strongest evidence for a dark-matter-dominated system obtained with this approach so far. Including a Navarro, Frenk & White dark matter halo immediately resolves the discrepancies. A very robust result is the logarithmic slope of the total matter density. For this low-mass galaxy I find a value of -1.75 ± 0.04, shallower than an isothermal halo and even shallower than published values for more massive galaxies. This confirms a tentative relation between the total mass slope and the stellar mass of galaxies.
The Surface Brightness Fluctuation (SBF) method is a well-established distance measure, but due to its sensitivity to bright stars it is also used to study evolved stars in unresolved stellar populations. The wide-field spectrograph MUSE offers the possibility to apply this technique to spectroscopic data for the first time. In this thesis I develop the spectroscopic SBF technique and measure the first SBF spectrum of any galaxy. I discuss the challenges in measuring SBF spectra that arise from the complexity of integral field spectrographs compared to imaging instruments.
For decades, stellar population models have indicated that SBFs in intermediate-to-old stellar systems are dominated by red giant branch and asymptotic giant branch stars. Especially the latter carry significant model uncertainties, making these stars a scientifically interesting target. Comparing the NGC 5102 SBF spectrum with stellar spectra, I show for the first time that M-type giants cause the fluctuations. Stellar evolution models suggest that carbon-rich thermally pulsating asymptotic giant branch stars should also leave a detectable signal in the SBF spectrum. I cannot detect a significant contribution from these stars in the NGC 5102 SBF spectrum.
I have written a stellar population synthesis tool that, for the first time, predicts SBF spectra. I compute two sets of population models: one based on observed and one on theoretical stellar spectra. Comparing the two, I find that the models based on observed spectra predict weaker molecular features. The comparison with the NGC 5102 spectrum reveals that these models are in better agreement with the data.
High Mountain Asia (HMA) - encompassing the Tibetan Plateau and surrounding mountain ranges - is the primary water source for much of Asia, serving more than a billion downstream users. Many catchments receive the majority of their yearly water budget in the form of snow, which is poorly monitored by sparse in situ weather networks. Both the timing and volume of snowmelt play critical roles in downstream water provision, as many applications - such as agriculture, drinking-water generation, and hydropower - rely on consistent and predictable snowmelt runoff. Here, we examine passive microwave data across HMA with five sensors (SSMI, SSMIS, AMSR-E, AMSR2, and GPM) from 1987 to 2016 to track the timing of the snowmelt season - defined here as the time between maximum passive microwave signal separation and snow clearance. We validated our method against climate model surface temperatures, optical remote-sensing snow-cover data, and a manual control dataset (n = 2100; 3 variables at 25 locations over 28 years); our algorithm is generally accurate within 3-5 days. Using the algorithm-generated snowmelt dates, we examine the spatiotemporal patterns of the snowmelt season across HMA. The climatically short (29-year) time series, along with complex interannual snowfall variations, makes determining trends in snowmelt dates at a single point difficult. We instead identify trends in snowmelt timing by using hierarchical clustering of the passive microwave data to determine trends in self-similar regions. We make the following four key observations. (1) The end of the snowmelt season is trending almost universally earlier in HMA (negative trends). Changes in the end of the snowmelt season are generally between 2 and 8 days per decade over the 29-year study period (5-25 days in total). The length of the snowmelt season is thus shrinking in many, though not all, regions of HMA.
Some areas exhibit later peak signal separation (positive trends), but with generally smaller magnitudes than trends in snowmelt end. (2) Areas with long snowmelt periods, such as the Tibetan Plateau, show the strongest compression of the snowmelt season (negative trends). These trends are apparent regardless of the time period over which the regression is performed. (3) While trends averaged over 3 decades indicate generally earlier snowmelt seasons, data from the last 14 years (2002-2016) exhibit positive trends in many regions, such as parts of the Pamir and Kunlun Shan. Due to the short nature of the time series, it is not clear whether this change is a reversal of a long-term trend or simply interannual variability. (4) Some regions with stable or growing glaciers - such as the Karakoram and Kunlun Shan - see slightly later snowmelt seasons and longer snowmelt periods. It is likely that changes in the snowmelt regime of HMA account for some of the observed heterogeneity in glacier response to climate change. While the decadal increases in regional temperature have in general led to earlier and shortened melt seasons, changes in HMA's cryosphere have been spatially and temporally heterogeneous.
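The cluster-then-trend approach described above can be sketched as follows. This is a minimal illustration, not the study's algorithm or data: the melt-date series, the two "regions", and the imposed trend of about 4 days per decade are all synthetic, and SciPy's hierarchical clustering is assumed as a stand-in for the clustering actually used.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
years = np.arange(1988, 2017)          # a 29-year record, as in the study

# Synthetic melt-end dates (day of year) for 20 grid cells:
# two "regions" with different mean timing and a shared negative trend.
base = np.where(np.arange(20) < 10, 160.0, 190.0)[:, None]
trend = -0.4 * (years - years[0])      # ~4 days per decade earlier
series = base + trend + rng.normal(0, 3, (20, years.size))

# Hierarchical clustering groups self-similar cells into regions.
z = linkage(series, method="ward")
labels = fcluster(z, t=2, criterion="maxclust")

# Fit a linear trend to each cluster's mean series (days per decade).
for c in (1, 2):
    mean_series = series[labels == c].mean(axis=0)
    slope = np.polyfit(years, mean_series, 1)[0] * 10
    print(f"cluster {c}: trend = {slope:.1f} days/decade")
```

Averaging within a cluster before fitting suppresses the single-point interannual noise that, as noted above, makes per-pixel trends hard to detect.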
The Vogtland, located in the border region between the Czech Republic and Germany, is known for Holocene volcanism, gas and fluid emissions, as well as recurring earthquake swarms, pointing towards high geodynamic activity. During the earthquake swarm in 2008/2009, a temporary array was installed close to Rohrbach (Germany), at an epicentral distance of about 10 km from the Nový Kostel focal zone (aperture ~0.75 km).
22 events of the recorded swarm were selected to set up a source array. Source arrays are spatially clustered earthquakes, which can be used in a similar manner as receiver array recordings of single events (reciprocity of Green's functions). The application of array seismology techniques like beam forming requires similar waveforms and precisely known origin times and locations. The resemblance of waveforms was ensured by visual selection of events and quantified by calculating cross-correlation coefficients. We observed that the different events recorded at a single station generally show greater resemblance than the recordings of one event at all stations of the receiver array. This indicates a heterogeneous subsurface beneath the receiver array and a comparably homogeneous source array volume with respect to the frequency-dependent resolution of both arrays.
Beam forming was applied to the Z, N and E component recordings of the source array events at 11 stations, and the results were analysed with respect to converted or reflected crustal phases. While the theoretical back azimuths of the direct phases match the beam forming results in the case of the source array analysis, deviations of 15°-25° are observed in the case of receiver array beam forming.
PS phases closely following the direct P phase, and presumably SP phases arriving shortly before the direct S phase, can be observed at several stations. Based on the time differences to the direct P and S phases, we inferred a conversion depth of about 0.6-0.9 km. A second, deeper source array was set up in order to interpret a structural phase arriving 0.85 s after the direct P phase on records of deeper events only.
In addition to the source array beam forming method, an analytical method with a fixed medium velocity and a grid search method were developed, both for determining conversion/reflection locations of phases traveling off the direct line between source and receiver array, and were applied to other observed phases.
In conclusion, we think that the distinct beam forming results, along with the striking waveform resemblance, demonstrate the potential of using source arrays consisting of small swarm events for the analysis of crustal structures.
Shifts among Eukaryota, Bacteria, and Archaea define the vertical organization of a lake sediment
(2017)
Background
Lake sediments harbor diverse microbial communities that cycle carbon and nutrients while being constantly colonized and potentially buried by organic matter sinking from the water column. The interaction of activity and burial remained largely unexplored in aquatic sediments. We aimed to relate taxonomic composition to sediment biogeochemical parameters, test whether community turnover with depth resulted from taxonomic replacement or from richness effects, and to provide a basic model for the vertical community structure in sediments.
Methods
We analyzed four replicate sediment cores taken from 30-m depth in oligo-mesotrophic Lake Stechlin in northern Germany. Each 30-cm core spanned ca. 170 years of sediment accumulation according to 137Cs dating and was sectioned into layers 1–4 cm thick. We examined a full suite of biogeochemical parameters and used DNA metabarcoding to examine community composition of microbial Archaea, Bacteria, and Eukaryota.
Results
Community β-diversity indicated nearly complete turnover within the uppermost 30 cm. We observed a pronounced shift from Eukaryota- and Bacteria-dominated upper layers (<5 cm) to Bacteria-dominated intermediate layers (5–14 cm) and to deep layers (>14 cm) dominated by enigmatic Archaea that typically occur in deep-sea sediments. Taxonomic replacement was the prevalent mechanism in structuring the community composition and was linked to parameters indicative of microbial activity (e.g., CO2 and CH4 concentration, bacterial protein production). Richness loss played a lesser role but was linked to conservative parameters (e.g., C, N, P) indicative of past conditions.
Conclusions
By including all three domains, we were able to directly link the exponential decay of eukaryotes with the active sediment microbial community. The dominance of Archaea in deeper layers confirms earlier findings from marine systems and establishes freshwater sediments as a potential low-energy environment, similar to deep sea sediments. We propose a general model of sediment structure and function based on microbial characteristics and burial processes. An upper “replacement horizon” is dominated by rapid taxonomic turnover with depth, high microbial activity, and biotic interactions. A lower “depauperate horizon” is characterized by low taxonomic richness, more stable “low-energy” conditions, and a dominance of enigmatic Archaea.
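The distinction drawn above between taxonomic replacement and richness loss is commonly quantified by partitioning pairwise dissimilarity into turnover and nestedness components (Baselga's Jaccard-based scheme). The sketch below illustrates that partition with invented presence/absence layers, not the study's data or its exact method.

```python
def jaccard_partition(x, y):
    """Partition Jaccard dissimilarity into turnover and nestedness
    components (Baselga's scheme) for two presence/absence vectors."""
    a = sum(1 for i, j in zip(x, y) if i and j)        # shared taxa
    b = sum(1 for i, j in zip(x, y) if i and not j)    # unique to x
    c = sum(1 for i, j in zip(x, y) if not i and j)    # unique to y
    total = (b + c) / (a + b + c)                      # Jaccard dissimilarity
    turnover = 2 * min(b, c) / (a + 2 * min(b, c))     # taxon replacement
    return total, turnover, total - turnover           # nestedness = remainder

# Hypothetical layers: an upper and a deep layer sharing few taxa.
upper = [1, 1, 1, 1, 0, 0]
deep  = [1, 0, 0, 0, 1, 1]
beta, repl, nest = jaccard_partition(upper, deep)
```

Here the high turnover component relative to the nestedness component would indicate that communities change mainly by replacement rather than by richness loss, the pattern reported above for the upper sediment layers.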
Modifications of transfer RNA (tRNA) have been shown to play critical roles in the biogenesis, metabolism, structural stability and function of RNA molecules, and the specific modifications of nucleobases with sulfur atoms in tRNA are present in pro- and eukaryotes. In particular, the thiomodifications xm(5)s(2)U at the wobble position 34 in tRNAs for Lys, Gln and Glu have been suggested to play an important role during translation by ensuring accurate deciphering of the genetic code and by stabilizing the tRNA structure. The trafficking and delivery of sulfur nucleosides is a complex process carried out by sulfur relay systems involving numerous proteins, which deliver sulfur not only to the specific tRNAs but also to other sulfur-containing molecules including iron-sulfur clusters, thiamin, biotin, lipoic acid and molybdopterin (MPT). Among these sulfur-containing molecules, the biosynthesis of the molybdenum cofactor (Moco) and the synthesis of thio-modified tRNAs show a surprising link by sharing protein components for sulfur mobilization in pro- and eukaryotes.
Although it has become common practice to build applications based on the reuse of existing components or services, technical complexity and semantic challenges constitute barriers to ensuring a successful and wide reuse of components and services. In the geospatial application domain, the barriers are self-evident due to heterogeneous geographic data, a lack of interoperability and complex analysis processes.
Constructing workflows manually and discovering proper services and data that match user intents and preferences is difficult and time-consuming especially for users who are not trained in software development. Furthermore, considering the multi-objective nature of environmental modeling for the assessment of climate change impacts and the various types of geospatial data (e.g., formats, scales, and georeferencing systems) increases the complexity challenges.
Automatic service composition approaches that provide semantics-based assistance in the process of workflow design have proven to be a solution to overcome these challenges and have become a frequent demand especially by end users who are not IT experts. In this light, the major contributions of this thesis are:
(i) Simplification of service reuse and workflow design of applications for climate impact analysis by following the eXtreme Model-Driven Development (XMDD) paradigm.
(ii) Design of a semantic domain model for climate impact analysis applications that comprises specifically designed services, ontologies that provide domain-specific vocabulary for referring to types and services, and the input/output annotation of the services using the terms defined in the ontologies.
(iii) Application of a constraint-driven method for the automatic composition of workflows for analyzing the impacts of sea-level rise. The application scenario demonstrates the impact of domain modeling decisions on the results and the performance of the synthesis algorithm.
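As a rough illustration of type-based workflow synthesis of the kind described above (not the thesis' actual synthesis algorithm, ontology, or service catalogue), a shortest service chain can be found by searching over the types each service consumes and produces. All service and type names below are invented.

```python
from collections import deque

# Toy service catalogue: name -> (input types, output type). All invented.
SERVICES = {
    "LoadDEM":          ({"region"}, "elevation_grid"),
    "SeaLevelScenario": ({"region"}, "sea_level"),
    "FloodMask":        ({"elevation_grid", "sea_level"}, "flood_extent"),
    "ImpactReport":     ({"flood_extent"}, "report"),
}

def synthesize(available, goal):
    """Breadth-first search for a shortest service chain deriving
    `goal` from the `available` types (simple linear synthesis)."""
    queue = deque([(frozenset(available), [])])
    seen = {frozenset(available)}
    while queue:
        types, plan = queue.popleft()
        if goal in types:
            return plan
        for name, (inputs, output) in SERVICES.items():
            if inputs <= types and output not in types:
                nxt = types | {output}
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, plan + [name]))
    return None

plan = synthesize({"region"}, "report")
```

Semantic input/output annotations, as in contribution (ii), are what make this kind of automatic chaining possible; constraints would further prune which chains are acceptable.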
The motivation of this work was to investigate the self-assembly of a class of block copolymers that has attracted little attention so far: double hydrophilic block copolymers (DHBCs). DHBCs consist of two linear hydrophilic polymer blocks. The self-assembly of DHBCs into suprastructures such as particles and vesicles is driven by a strong difference in hydrophilicity between the corresponding blocks, leading to a microphase separation due to immiscibility. The benefits of DHBCs and the corresponding particles and vesicles, such as biocompatibility, high permeability towards water and hydrophilic compounds, as well as the large number of possible functionalizations of the block copolymers, make DHBC-based structures a viable choice in biomedicine. In order to establish a route towards self-assembled structures from DHBCs with the potential to act as cargo carriers for future applications, several block copolymers containing two hydrophilic polymer blocks were synthesized. Poly(ethylene oxide)-b-poly(N-vinylpyrrolidone) (PEO-b-PVP) and poly(ethylene oxide)-b-poly(N-vinylpyrrolidone-co-N-vinylimidazole) (PEO-b-P(VP-co-VIm)) block copolymers were synthesized via reversible deactivation radical polymerization (RDRP) techniques starting from a PEO macro chain transfer agent. The block copolymers displayed a concentration-dependent self-assembly behavior in water, which was determined via dynamic light scattering (DLS). It was possible to observe spherical particles via laser scanning confocal microscopy (LSCM) and cryogenic scanning electron microscopy (cryo SEM) in highly concentrated solutions of PEO-b-PVP. Furthermore, a crosslinking strategy for PEO-b-P(VP-co-VIm) was developed, applying the diiodo-derived crosslinker diethylene glycol bis(2-iodoethyl) ether to form quaternary amines at the VIm units. The formed crosslinked structures remained stable upon dilution and transfer into organic solvents.
Moreover, self-assembly and crosslinking in DMF proved to be more advantageous, and the crosslinked structures could be successfully transferred to aqueous solution. The obtained spherical submicron particles could be visualized via LSCM, cryo SEM and cryo TEM.
Double hydrophilic pullulan-b-poly(acrylamide) block copolymers were synthesized via copper-catalyzed alkyne-azide cycloaddition (CuAAC) starting from a suitable pullulan alkyne and azide-functionalized poly(N,N-dimethylacrylamide) (PDMA) and poly(N-ethylacrylamide) (PEA) homopolymers. The conjugation reaction was confirmed via SEC and 1H-NMR measurements. The self-assembly of the block copolymers was monitored with DLS and static light scattering (SLS) measurements, indicating the presence of hollow spherical structures. Cryo SEM measurements confirmed the presence of vesicular structures for Pull-b-PEA block copolymers. Solutions of Pull-b-PDMA displayed particles in cryo SEM. Moreover, an end-group functionalization of Pull-b-PDMA with Rhodamine B allowed the structures to be assessed via LSCM; hollow spherical structures were observed, likewise indicating the presence of vesicles.
An exemplary pathway towards a DHBC-based drug delivery vehicle was demonstrated with the block copolymer Pull-b-PVP. The block copolymer was synthesized via RAFT/MADIX techniques starting from a pullulan chain transfer agent. Pull-b-PVP displayed a concentration-dependent self-assembly in water with an efficiency superior to the PEO-b-PVP system, as observed via DLS. Cryo SEM and LSCM displayed the presence of spherical structures. In order to apply a reversible crosslinking strategy to the synthesized block copolymer, the pullulan block was selectively oxidized to dialdehydes with NaIO4. The oxidation of the block copolymer was confirmed via SEC and 1H-NMR measurements. The self-assembled and oxidized structures were subsequently crosslinked with cystamine dihydrochloride, a pH- and redox-responsive crosslinker, resulting in crosslinked vesicles which were observed via cryo SEM. The vesicular structures of crosslinked Pull-b-PVP could be disassembled by acid treatment or by application of the redox agent tris(2-carboxyethyl)phosphine hydrochloride. The successful disassembly was monitored with DLS measurements.
To conclude, self-assembled structures from DHBCs such as particles and vesicles display a strong potential to generate an impact on biomedicine and nanotechnologies. The variety of DHBC compositions and functionalities are very promising features for future applications.
Self-adaptive data quality
(2017)
Carrying out business processes successfully is closely linked to the quality of the data inventory in an organization. Deficiencies in data quality lead to problems: Incorrect address data prevents (timely) shipments to customers. Erroneous orders lead to returns and thus to unnecessary effort. Wrong pricing forces companies to forgo revenue or impairs customer satisfaction. If orders or customer records cannot be retrieved, complaint management takes longer. Due to erroneous inventories, too few or too many supplies might be reordered.
A special problem with data quality, and the reason for many of the issues mentioned above, are duplicates in databases. Duplicates are different representations of the same real-world objects in a dataset. These representations differ from each other and are therefore hard for a computer to match. Moreover, the number of comparisons required to find those duplicates grows with the square of the dataset size. To cleanse the data, these duplicates must be detected and removed. Duplicate detection is a very laborious process. To achieve satisfactory results, appropriate software must be created and configured (similarity measures, partitioning keys, thresholds, etc.). Both require much manual effort and experience.
This thesis addresses automation of parameter selection for duplicate detection and presents several novel approaches that eliminate the need for human experience in parts of the duplicate detection process.
A pre-processing step is introduced that analyzes the datasets in question and classifies their attributes semantically. Not only do these annotations help in understanding the respective datasets, but they also facilitate subsequent steps, for example, by selecting appropriate similarity measures or normalizing the data upfront. This approach works without schema information.
Following that, we show a partitioning technique that strongly reduces the number of pair comparisons for the duplicate detection process. The approach automatically finds particularly suitable partitioning keys that simultaneously allow for effective and efficient duplicate retrieval. By means of a user study, we demonstrate that this technique finds partitioning keys that outperform expert suggestions and additionally does not need manual configuration. Furthermore, this approach can be applied independently of the attribute types.
To measure the success of a duplicate detection process and to execute the described partitioning approach, a gold standard is required that provides information about the actual duplicates in a training dataset. This thesis presents a technique that uses existing duplicate detection results and crowdsourcing to create a near-gold standard that can be used for the purposes above. Another part of the thesis describes and evaluates strategies for reducing these crowdsourcing costs and for achieving a consensus with less effort.
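The quadratic growth of pair comparisons, and how a partitioning key reduces it, can be illustrated with a toy dataset. The key used below (first letter of the name plus city) is an arbitrary stand-in for the automatically selected partitioning keys described above.

```python
from itertools import combinations
from collections import defaultdict

records = [
    {"id": 1, "name": "Miller, Anna", "city": "Potsdam"},
    {"id": 2, "name": "Miler, Anna",  "city": "Potsdam"},  # duplicate of 1
    {"id": 3, "name": "Schmidt, Jan", "city": "Berlin"},
    {"id": 4, "name": "Schmit, Jan",  "city": "Berlin"},   # duplicate of 3
    {"id": 5, "name": "Weber, Lena",  "city": "Potsdam"},
]

# Naive approach: n*(n-1)/2 comparisons.
naive_pairs = list(combinations(records, 2))

# Partitioning: compare only records sharing a partitioning key.
blocks = defaultdict(list)
for r in records:
    blocks[(r["name"][0], r["city"])].append(r)
blocked_pairs = [p for grp in blocks.values() for p in combinations(grp, 2)]
```

Here the partitioning reduces 10 candidate pairs to 2 while retaining both true duplicate pairs; a poorly chosen key could instead split duplicates across partitions, which is why the automatic key selection described above aims at both efficiency and effectiveness.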
This project was focused on generating ultrathin stimuli-responsive membranes with an embedded transmembrane protein acting as the pore. The membranes were formed by crosslinking of transmembrane protein-polymer conjugates. The conjugates were self-assembled at the air-water interface and the polymer chains were crosslinked using a UV-crosslinkable comonomer to engender the membrane. The protein used for the studies reported herein was one of the largest transmembrane channel proteins, ferric hydroxamate uptake protein component A (FhuA), found in the outer membrane of Escherichia coli (E. coli). The wild-type protein and three genetic variants of FhuA were provided by the group of Prof. Schwaneberg in Aachen. The well-known thermoresponsive poly(N-isopropylacrylamide) (PNIPAAm) and the pH- and thermoresponsive polymer poly((2-dimethylamino)ethyl methacrylate) (PDMAEMA) were conjugated to FhuA and the genetic variants via controlled radical polymerization (CRP) using the grafting-from technique. These polymers were chosen because they would provide stimuli handles in the resulting membranes. The reported polymerization was the first attempt to attach polymer chains onto a membrane protein using site-specific modification.
The conjugate synthesis was carried out in two steps: a) FhuA was first converted into a macroinitiator by covalently linking a water-soluble functional CRP initiator to the lysine residues. b) Copper-mediated CRP was then carried out in pure buffer conditions with and without sacrificial initiator to generate the conjugates.
The challenge was carrying out the modifications on FhuA without denaturing it. FhuA, being a transmembrane protein, requires amphiphilic species to stabilize its highly hydrophobic transmembrane region. For the experiments reported in this thesis, the stabilizing agent was 2-methyl-2,4-pentanediol (MPD). Since the buffer containing MPD cannot be considered a purely aqueous system, and also because MPD might interfere with the polymerization procedure, the reaction conditions were first optimized using a model globular protein, bovine serum albumin (BSA). The optimum conditions were then used for the generation of conjugates with FhuA.
The generated conjugates were shown to be highly interfacially active, and this property was exploited to let them self-assemble at polar-apolar interfaces. Emulsions stabilized by particles or conjugates are referred to as Pickering emulsions. Crosslinking the conjugates with a UV-crosslinkable comonomer afforded nanothin microcompartments. Interfacial self-assembly at the air-water interface and subsequent UV crosslinking also yielded nanothin, stimuli-responsive membranes which were shown to be mechanically robust. Initial characterization of the flux and permeation of water through these membranes is also reported herein. The generated nanothin membranes with PNIPAAm showed reduced permeation at elevated temperatures owing to the resistance of the hydrophobic and thus water-impermeable polymer matrix, hence confirming the stimulus responsivity.
Additionally, as part of a collaborative work with Dr. Changzhu Wu, TU Dresden, conjugates of three enzymes with current or potential industrial relevance (Candida antarctica lipase B, benzaldehyde lyase and glucose oxidase) with stimuli-responsive polymers were synthesized. This work aims at carrying out cascade reactions in the Pickering emulsions generated by the self-assembled enzyme-polymer conjugates.
The dynamics of fragmentation and vibration of molecular systems with a large number of coupled degrees of freedom are key aspects for understanding chemical reactivity and properties. Here we present a resonant inelastic X-ray scattering (RIXS) study to show how it is possible to break down such a complex multidimensional problem into elementary components. Local multimode nuclear wave packets created by X-ray excitation to different core-excited potential energy surfaces (PESs) will act as spatial gates to selectively probe the particular ground-state vibrational modes and, hence, the PES along these modes. We demonstrate this principle by combining ultra-high resolution RIXS measurements for gas-phase water with state-of-the-art simulations.
Is there an ideal time window for language acquisition after which nativelike representation and processing are unattainable? Although this question has been heavily debated, no consensus has been reached. Here, we present evidence for a sensitive period in language development and show that it is specific to grammar. We conducted a masked priming task with a group of Turkish-German bilinguals and examined age of acquisition (AoA) effects on the processing of complex words. We compared a subtle but meaningful linguistic contrast, that between grammatical inflection and lexical-based derivation. The results showed a highly selective AoA effect on inflectional (but not derivational) priming. In addition, the effect displayed a discontinuity indicative of a sensitive period: Priming from inflected forms was nativelike when acquisition started before the age of 5 but declined with increasing AoA. We conclude that the acquisition of morphological rules expressing morphosyntactic properties is constrained by maturational factors.
Organic matter deposited in ancient, ice-rich permafrost sediments is vulnerable to climate change and may contribute to the future release of greenhouse gases; it is thus important to get a better characterization of the plant organic matter within such sediments. From a Late Quaternary permafrost sediment core from the Buor Khaya Peninsula, we analysed plant-derived sedimentary ancient DNA (sedaDNA) to identify the taxonomic composition of plant organic matter, and undertook palynological analysis to assess the environmental conditions during deposition. Using sedaDNA, we identified 154 taxa and from pollen and non-pollen palynomorphs we identified 83 taxa. In the deposits dated between 54 and 51 kyr BP, sedaDNA records a diverse low-centred polygon plant community including recurring aquatic pond vegetation while from the pollen record we infer terrestrial open-land vegetation with relatively dry environmental conditions at a regional scale. A fluctuating dominance of either terrestrial or swamp and aquatic taxa in both proxies allowed the local hydrological development of the polygon to be traced. In deposits dated between 11.4 and 9.7 kyr BP (13.4-11.1 cal kyr BP), sedaDNA shows a taxonomic turnover to moist shrub tundra and a lower taxonomic richness compared to the older samples. Pollen also records a shrub tundra community, mostly seen as changes in relative proportions of the most dominant taxa, while a decrease in taxonomic richness was less pronounced compared to sedaDNA. Our results show the advantages of using sedaDNA in combination with palynological analyses when macrofossils are rarely preserved. The high resolution of the sedaDNA record provides a detailed picture of the taxonomic composition of plant-derived organic matter throughout the core, and palynological analyses prove valuable by allowing for inferences of regional environmental conditions.
The Limpopo Basin in southern Africa is prone to droughts which affect the livelihood of millions of people in South Africa, Botswana, Zimbabwe and Mozambique. Seasonal drought early warning is thus vital for the whole region. In this study, the predictability of hydrological droughts during the main runoff period from December to May is assessed using statistical approaches. Three methods (multiple linear models, artificial neural networks, random forest regression trees) are compared in terms of their ability to forecast streamflow with up to 12 months of lead time. The following four main findings result from the study.
1. There are stations in the basin at which standardised streamflow is predictable with lead times of up to 12 months. The results show high inter-station differences in forecast skill but reach a cross-validated coefficient of determination as high as 0.73.
2. A large range of potential predictors is considered in this study, comprising well-established climate indices, customised teleconnection indices derived from sea surface temperatures, and antecedent streamflow as a proxy of catchment conditions. El Niño and customised indices, representing sea surface temperature in the Atlantic and Indian oceans, prove to be important teleconnection predictors for the region. Antecedent streamflow is a strong predictor in small catchments (with median 42% explained variance), whereas teleconnections exert a stronger influence in large catchments.
3. Multiple linear models show the best forecast skill in this study and the greatest robustness compared to artificial neural networks and random forest regression trees, despite their capabilities to represent nonlinear relationships.
4. Employed in early warning, the models can be used to forecast a specific drought level. Even if the coefficient of determination is low, the forecast models have a skill better than a climatological forecast, which is shown by analysis of receiver operating characteristics (ROCs). Seasonal statistical forecasts in the Limpopo show promising results, and thus it is recommended to employ them as complementary to existing forecasts in order to strengthen preparedness for droughts.
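A cross-validated multiple linear forecast model of the kind compared above can be sketched as follows. Everything here is synthetic: the predictors (an ENSO-like index, an SST index, antecedent flow) and their coefficients are invented stand-ins, and the skill score is the cross-validated coefficient of determination reported in finding 1.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30                                      # ~30 forecast seasons
# Invented predictors: ENSO-like index, SST index, antecedent flow.
X = rng.normal(size=(n, 3))
beta_true = np.array([0.6, 0.3, 0.5])
y = X @ beta_true + rng.normal(0, 0.5, n)   # standardised streamflow

# Leave-one-out cross-validated predictions from a multiple linear model.
yhat = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    A = np.column_stack([np.ones(mask.sum()), X[mask]])
    coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
    yhat[i] = np.concatenate(([1.0], X[i])) @ coef

# Cross-validated coefficient of determination (forecast skill).
r2_cv = 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)
```

Holding each season out before fitting, as here, is what makes the reported skill an honest out-of-sample estimate rather than an in-sample fit.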
The SusKat-ABC (Sustainable Atmosphere for the Kathmandu Valley-Atmospheric Brown Clouds) international air pollution measurement campaign was carried out from December 2012 to June 2013 in the Kathmandu Valley and surrounding regions in Nepal. The Kathmandu Valley is a bowl-shaped basin with a severe air pollution problem. This paper reports measurements of two major greenhouse gases (GHGs), methane (CH4) and carbon dioxide (CO2), along with the pollutant CO, that began during the campaign and were extended for 1 year at the SusKat-ABC supersite in Bode, a semi-urban location in the Kathmandu Valley. Simultaneous measurements were also made during 2015 in Bode and a nearby rural site (Chanban) ~25 km (aerial distance) to the southwest of Bode on the other side of a tall ridge. The ambient mixing ratios of methane (CH4), carbon dioxide (CO2), water vapor, and carbon monoxide (CO) were measured with a cavity ring-down spectrometer (G2401; Picarro, USA) along with meteorological parameters for 1 year (March 2013-March 2014). These measurements are the first of their kind in the central Himalayan foothills. At Bode, the annual average mixing ratios of CO2 and CH4 were 419.3 (± 6.0) ppm and 2.192 (± 0.066) ppm, respectively. These values are higher than the levels observed at background sites such as Mauna Loa, USA (CO2: 396.8 ± 2.0 ppm, CH4: 1.831 ± 0.110 ppm) and Waliguan, China (CO2: 397.7 ± 3.6 ppm, CH4: 1.879 ± 0.009 ppm) during the same period and at other urban and semi-urban sites in the region, such as Ahmedabad and Shadnagar (India). They varied slightly across the seasons at Bode, with seasonal average CH4 mixing ratios of 2.157 (± 0.230) ppm in the pre-monsoon season, 2.199 (± 0.241) ppm in the monsoon, 2.210 (± 0.200) ppm in the post-monsoon, and 2.214 (± 0.209) ppm in the winter season.
The average CO2 mixing ratios were 426.2 (± 25.5) ppm in the pre-monsoon, 413.5 (± 24.2) ppm in the monsoon, 417.3 (± 23.1) ppm in the post-monsoon, and 421.9 (± 20.3) ppm in the winter season. The maximum seasonal mean mixing ratio of CH4 in winter was only 0.057 ppm or 2.6% higher than the seasonal minimum during the pre-monsoon period, while CO2 was 12.8 ppm or 3.1% higher during the pre-monsoon period (seasonal maximum) than during the monsoon (seasonal minimum). On the other hand, the CO mixing ratio at Bode was 191% higher during the winter than during the monsoon season. The enhancement in CO2 mixing ratios during the pre-monsoon season is associated with additional CO2 emissions from forest fires and agro-residue burning in northern South Asia in addition to local emissions in the Kathmandu Valley. Published CO/CO2 ratios of different emission sources in Nepal and India were compared with the observed CO/CO2 ratios in this study. This comparison suggested that the major sources in the Kathmandu Valley were residential cooking and vehicle exhaust in all seasons except winter. In winter, brick kiln emissions were a major source. Simultaneous measurements in Bode and Chanban (15 July-3 October 2015) revealed that the mixing ratios of CO2, CH4, and CO were 3.8, 12, and 64% higher in Bode than at Chanban.
The Kathmandu Valley thus has significant emissions from local sources, which can also be attributed to its bowl-shaped geography that is conducive to pollution build-up. At Bode, all three gas species (CO2, CH4, and CO) showed strong diurnal patterns in their mixing ratios with a pronounced morning peak (ca. 08:00), a dip in the afternoon, and a gradual increase again through the night until the next morning. CH4 and CO at Chanban, however, did not show any noticeable diurnal variations.
These measurements provide the first insights into the diurnal and seasonal variation in key greenhouse gases and air pollutants and their local and regional sources, which is important information for atmospheric research in the region.
This essay reads Sam Selvon’s novel The Lonely Londoners (1956) as a milestone in the decolonisation of British fiction. After an introduction to Selvon and the core composition of the novel, it discusses the ways in which the narrative takes on issues of race and racism, how it in the tradition of the Trinidadian carnival confronts audiences with sexual profanation and black masculine swagger, and not least how the novel, especially through its elaborate use of creole Englishes, reimagines London as a West Indian metropolis. The essay then turns more systematically to the ways in which Selvon translates Western literary models and their isolated subject positions into collective modes of narrative performance taken from Caribbean orature and the calypsonian tradition. The Lonely Londoners breathes entirely new life into the ossified conventions of the English novel, and imbues it with unforeseen aesthetic, ethical, political and epistemological possibilities.
Objectives: Chronic back pain (CBP) can lead to disability and burden. In addition to its medical causes, its development is influenced by psychosocial risk factors, the so-called flag factors, which are categorized and integrated into many treatment guidelines. Currently, most studies investigate single flag factors, which limit the estimation of individual factor significance in the development of chronic pain. Furthermore, factors concerning patients’ lifestyle, biography and treatment history are often neglected. Therefore, the objectives of the present study are to identify commonly neglected factors of CBP and integrate them into an analysis model comparing their significance with established flag factors.
Methods: A total of 24 patients and therapists were cross-sectionally interviewed to identify commonly neglected factors of CBP. Subsequently, the impact of these factors was surveyed in a longitudinal study. In two rehabilitation clinics, CBP patients (n = 145) were examined before and 6 months after a 3-week inpatient rehabilitation. Outcome variables, chronification factor pain experience (CF-PE) and chronification factor disability (CF-D), were ascertained with confirmatory factor analysis (CFA) of standardized questionnaires. Predictors were evaluated using stepwise calculations of simple and multiple regression models.
Results: Through interviews, medical history, iatrogenic factors, poor compliance, critical life events (LEs), social support (SS) type and effort–reward were identified as commonly neglected factors. However, only the final three held significance in comparison to established factors such as depression and pain-related cognitions. Longitudinally, lifestyle factors found to influence future pain were initial pain, physically demanding work, nicotine consumption, gender and rehabilitation clinic. LEs were unexpectedly found to be a strong predictor of future pain, as were the protective factors, reward at work and perceived SS.
Discussion: These findings shed light on often-overlooked factors in the development of CBP, suggesting that more detailed operationalization and superordinate frameworks would benefit further research.
Conclusion: In particular, LEs should be taken into account in future research. Protective factors should be integrated in therapeutic settings.
In this thesis, I develop a theoretical implementation of prosodic reconstruction and apply it to the empirical domain of German sentences in which part of a focus or contrastive topic is fronted.
Prosodic reconstruction refers to the idea that sentences involving syntactic movement show prosodic parallels with corresponding simpler structures without movement. I propose to model this recurrent observation by ordering syntax-prosody mapping before copy deletion.
In order to account for the partial fronting data, the idea is extended to the mapping between prosody and information structure. This assumption helps to explain why object-initial sentences containing a broad focus or broad contrastive topic show prosodic and interpretative restrictions similar to those of sentences with canonical word order.
The empirical adequacy of the model is tested against a set of gradient acceptability judgments.
Rethinking Music Education
(2017)
Background: Athletes may differ in their resting metabolic rate (RMR) from the general population. However, to estimate the RMR in athletes, prediction equations that have not been validated in athletes are often used. The purpose of this study was therefore to verify the applicability of commonly used RMR predictions for use in athletes. Methods: The RMR was measured by indirect calorimetry in 17 highly trained rowers and canoeists of the German national teams (BMI 24 ± 2 kg/m2, fat-free mass 69 ± 15 kg). In addition, the RMR was predicted using Cunningham (CUN) and Harris-Benedict (HB) equations. A two-way repeated measures ANOVA was calculated to test for differences between predicted and measured RMR (α = 0.05). The root mean square percentage error (RMSPE) was calculated and the Bland-Altman procedure was used to quantify the bias for each prediction. Results: Prediction equations significantly underestimated the RMR in males (p < 0.001). The RMSPE was calculated to be 18.4% (CUN) and 20.9% (HB) in the entire group. The bias was 133 kcal/24 h for CUN and 202 kcal/24 h for HB. Conclusions: Predictions significantly underestimate the RMR in male heavyweight endurance athletes but not in females. In athletes with a high fat-free mass, prediction equations might therefore not be applicable to estimate energy requirements. Instead, measurement of the resting energy expenditure or specific prediction equations might be needed for the individual heavyweight athlete.
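The two agreement statistics reported above can be sketched in a few lines. The Cunningham equation shown (RMR = 500 + 22·FFM, in kcal/24 h) is the published formula; the measured values and fat-free masses below are purely illustrative and are not the study’s data.

```python
import math

def cunningham(ffm_kg):
    """Cunningham prediction equation: RMR in kcal/24 h from fat-free mass (kg)."""
    return 500 + 22 * ffm_kg

# Hypothetical measured RMR values (indirect calorimetry) and fat-free masses;
# NOT the study's data.
measured = [2900, 3100, 2700]
ffm = [80, 85, 72]
predicted = [cunningham(f) for f in ffm]

# Root mean square percentage error (RMSPE), in percent
rmspe = math.sqrt(
    sum(((m - p) / m) ** 2 for m, p in zip(measured, predicted)) / len(measured)
) * 100

# Bland-Altman bias: mean difference (measured - predicted), kcal/24 h
bias = sum(m - p for m, p in zip(measured, predicted)) / len(measured)
```

A positive bias, as in the study, means the prediction equation underestimates the measured RMR on average.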
This study aimed to determine the relative and absolute reliability of ultrasound (US) measurements of the thickness and echogenicity of the plantar fascia (PF) at different measurement stations along its length using a standardized protocol. Twelve healthy subjects (24 feet) were enrolled. The PF was imaged in the longitudinal plane. Subjects were assessed twice to evaluate intra-rater reliability. A quantitative evaluation of the thickness and echogenicity of the PF was performed using ImageJ, a digital image analysis and viewer software. Sonographic evaluation of the thickness and echogenicity of the PF showed high relative reliability, with an intraclass correlation coefficient (ICC) of 0.88 at all measurement stations. However, the measurement stations with the highest ICCs for PF thickness and echogenicity did not have the highest absolute reliability. Compared to other measurement stations, measuring the PF thickness at 3 cm distal, and the echogenicity at a region of interest 1 cm to 2 cm distal, from the PF insertion at the medial calcaneal tubercle showed the highest absolute reliability, with the least systematic bias and random error. The reliability was also higher when using the mean of three measurements rather than a single measurement. To reduce discrepancies in the interpretation of PF thickness and echogenicity measurements, the absolute reliability of the different measurement stations should be considered in clinical practice and research, rather than relative reliability (the ICC) alone.
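The relative-reliability statistic used above, an intraclass correlation coefficient, can be sketched as ICC(2,1) (two-way random effects, absolute agreement, single measurement) computed from a subjects × raters table. The exact ICC form the study used is not stated in the abstract, so this implementation and its sample data are illustrative only.

```python
def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.
    `data` is a list of rows (subjects), each a list of ratings (sessions/raters)."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(row[j] for row in data) / n for j in range(k)]
    ssr = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ssc = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    sst = sum((x - grand) ** 2 for row in data for x in row)
    mse = (sst - ssr - ssc) / ((n - 1) * (k - 1))        # residual mean square
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Because ICC(2,1) penalises systematic offsets between raters, a constant bias between the two sessions lowers the coefficient even when the rank ordering of subjects is perfectly preserved, which is why a high ICC alone does not guarantee high absolute reliability.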
Background: Deficits in strength, power and balance represent important intrinsic risk factors for falls in seniors. Objective: The purpose of this study was to investigate the relationship between variables of lower extremity muscle strength/power and balance, assessed under various task conditions. Methods: Twenty-four healthy and physically active older adults (mean age: 70 ± 5 years) were tested for their isometric strength (i.e. maximal isometric force of the leg extensors) and muscle power (i.e. countermovement jump height and power) as well as for their steady-state (i.e. unperturbed standing, 10-meter walk), proactive (i.e. Timed Up & Go test, Functional Reach Test) and reactive (i.e. perturbed standing) balance. Balance tests were conducted under single (i.e. standing or walking alone) and dual task conditions (i.e. standing or walking plus cognitive and motor interference tasks). Results: Significant positive correlations were found between measures of isometric strength and muscle power of the lower extremities (r values ranged between 0.608 and 0.720, p < 0.01). Hardly any significant associations were found between variables of strength, power and balance (i.e. no significant association in 20 out of 21 cases). Additionally, no significant correlations were found between measures of steady-state, proactive and reactive balance or between balance tests performed under single and dual task conditions (all p > 0.05). Conclusion: The predominantly nonsignificant correlations between different types of balance imply that balance performance is task specific in healthy and physically active seniors. Further, strength, power and balance, as well as balance under single and dual task conditions, seem to be independent of each other and may have to be tested and trained complementarily.
Rehabilitation after autologous chondrocyte implantation for isolated cartilage defects of the knee
(2017)
Autologous chondrocyte implantation for treatment of isolated cartilage defects of the knee has become well established. Although various publications report technical modifications, clinical results, and cell-related issues, little is known about appropriate and optimal rehabilitation after autologous chondrocyte implantation. This article reviews the literature on rehabilitation after autologous chondrocyte implantation and presents a rehabilitation protocol that has been developed considering the best available evidence and has been successfully used for several years in a large number of patients who underwent autologous chondrocyte implantation for cartilage defects of the knee.
Regional snow-avalanche detection using object-based image analysis of near-infrared aerial imagery
(2017)
Snow avalanches are destructive mass movements in mountain regions that continue to claim lives and cause infrastructural damage and traffic detours. Given that avalanches often occur in remote and poorly accessible steep terrain, their detection and mapping is laborious and time consuming. Nonetheless, systematic avalanche detection over large areas could help to generate more complete and up-to-date inventories (cadastres) necessary for validating avalanche forecasting and hazard mapping. In this study, we focused on automatically detecting avalanches and classifying them into release zones, tracks, and run-out zones based on 0.25 m near-infrared (NIR) ADS80-SH92 aerial imagery using an object-based image analysis (OBIA) approach. Our algorithm takes into account the brightness, the normalised difference vegetation index (NDVI), the normalised difference water index (NDWI), and its standard deviation (SDNDWI) to distinguish avalanches from other land-surface elements. Using normalised parameters allows applying this method across large areas. We trained the method by analysing the properties of snow avalanches in three 4 km² areas near Davos, Switzerland. We compared the results with manually mapped avalanche polygons and obtained a user's accuracy of > 0.9 and a Cohen's kappa of 0.79–0.85. Testing the method on a larger area of 226.3 km², we estimated producer's and user's accuracies of 0.61 and 0.78, respectively, with a Cohen's kappa of 0.67. Detected avalanches that overlapped with reference data by > 80 % occurred randomly throughout the testing area, showing that our method avoids overfitting. Our method has potential for large-scale avalanche mapping, although further investigations into other regions are desirable to verify the robustness of the selected thresholds and the transferability of the method.
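The two normalised-difference indices named above follow the standard band-ratio form. The abstract does not specify which band combination the ADS80 workflow uses for the NDWI, so the green/NIR (McFeeters-style) variant shown here, and the reflectance values, are illustrative assumptions.

```python
def ndvi(nir, red):
    """Normalised difference vegetation index from NIR and red reflectances."""
    return (nir - red) / (nir + red)

def ndwi(green, nir):
    """Normalised difference water index, green/NIR variant (an assumption;
    the study's exact band combination is not given in the abstract)."""
    return (green - nir) / (green + nir)
```

Both indices are bounded in [−1, 1] regardless of illumination level, which is what makes such normalised parameters transferable across large areas, as the abstract notes.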
Objective: A key challenge in ancient DNA research is massive microbial DNA contamination from the deposition site, which accumulates post mortem in the study organism’s remains. Two simple and cost-effective methods to enrich the relative endogenous fraction of DNA in ancient samples involve treatment of sample powder with either bleach or Proteinase K pre-digestion prior to DNA extraction. Both approaches have yielded promising but varying results in other studies. Here, we contribute data on the performance of these methods using a comprehensive and systematic series of experiments applied to a single ancient bone fragment from a giant panda (Ailuropoda melanoleuca). Results: Bleach and pre-digestion treatments increased the endogenous DNA content up to ninefold. However, the absolute amount of DNA retrieved was dramatically reduced by all treatments. We also observed reduced DNA damage patterns in pre-treated libraries compared to untreated ones, resulting in longer mean fragment lengths and reduced thymine over-representation at fragment ends. Guanine–cytosine (GC) contents of both mapped and total reads are consistent between treatments and conform to general expectations, indicating no obvious biasing effect of the applied methods. Our results therefore confirm the value of bleach and pre-digestion as tools in palaeogenomic studies, provided sufficient material is available.
We present an approach for reconstructing networks of pulse-coupled neuronlike oscillators from passive observation of pulse trains of all nodes. It is assumed that units are described by their phase response curves and that their phases are instantaneously reset by incoming pulses. Using an iterative procedure, we recover the properties of all nodes, namely their phase response curves and natural frequencies, as well as strengths of all directed connections.
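The forward model assumed above (units with natural frequencies whose phases are instantaneously reset by incoming pulses via a phase response curve) can be sketched as a minimal event-driven simulation. The function names, the sinusoidal PRC, and the parameter values are illustrative, not the authors’ implementation of the reconstruction procedure itself.

```python
import math

def next_firing(phases, freqs):
    """Index of the unit that next reaches phase 1, and the time until then."""
    times = [(1.0 - p) / w for p, w in zip(phases, freqs)]
    i = min(range(len(times)), key=times.__getitem__)
    return i, times[i]

def step(phases, freqs, coupling, prc):
    """Advance all phases to the next pulse; the firing unit resets to 0,
    and every other unit's phase is instantaneously shifted by
    coupling-strength * PRC(phase), as assumed in the abstract."""
    i, dt = next_firing(phases, freqs)
    phases = [p + w * dt for p, w in zip(phases, freqs)]
    phases[i] = 0.0  # unit i emits a pulse and resets
    for j in range(len(phases)):
        if j != i:
            phases[j] = (phases[j] + coupling[j][i] * prc(phases[j])) % 1.0
    return i, phases

# Example: two units, weak coupling, sinusoidal PRC (all values illustrative)
prc = lambda p: math.sin(2 * math.pi * p)
fired, new_phases = step([0.5, 0.9], [1.0, 1.0], [[0.0, 0.1], [0.1, 0.0]], prc)
```

Repeated application of `step` yields the pulse trains of all nodes; the reconstruction problem described in the abstract is the inverse task of recovering the PRCs, natural frequencies, and coupling strengths from such observed pulse times.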
The reaction of the pharmacologically active protic ionic liquid tris(2-hydroxyethyl)ammonium 4-chlorophenylsulfanylacetate, [HN(CH2CH2OH)3]+·[OOCCH2SC6H4Cl-4]− (1), with zinc or nickel chloride in a 2:1 ratio affords the powder-like adducts [HN(CH2CH2OH)3+]2·[M(OOCCH2SC6H4Cl-4)2Cl2]2−, M = Zn (2), Ni (3), which are stable at room temperature. Upon recrystallization from aqueous alcohol, compound 2 unexpectedly gives Zn(OOCCH2SC6H4Cl-4)2·2H2O (4). Unlike 2, compound 3 gives crystals of [N(CH2CH2OH)3]2Ni2+·[OOCCH2SC6H4Cl-4−]2 (5), which have the structure of a metallated ionic liquid. The structure of 5 was confirmed by X-ray diffraction analysis. This is the first example of the conversion of a protic ionic liquid into a potentially biologically active metallated ionic liquid (1 → 3 → 5).
Purpose: To compare the dissociation kinetics of the rapid-acting insulins lispro, aspart and glulisine with those of human insulin under physiologically relevant conditions.
Methods: Dissociation kinetics after dilution were monitored directly in terms of the average molecular mass using combined static and dynamic light scattering. Changes in tertiary structure were detected by near-UV circular dichroism.
Results: Glulisine forms compact hexamers in formulation even in the absence of Zn2+. Upon severe dilution, these rapidly dissociate into monomers in less than 10 s. In contrast, in formulations of lispro and aspart, the presence of Zn2+ and phenolic compounds is essential for formation of compact R6 hexamers. These slowly dissociate in times ranging from seconds to one hour depending on the concentration of phenolic additives. The disadvantage of the long dissociation times of lispro and aspart can be diminished by a rapid depletion of the concentration of phenolic additives independent of the insulin dilution. This is especially important in conditions similar to those after subcutaneous injection, where only minor dilution of the insulins occurs.
Conclusion: Knowledge of the diverging dissociation mechanisms of lispro and aspart compared to glulisine will be helpful for optimizing formulation conditions of rapid-acting insulins.