Vermessung im Sonnensystem (Surveying the solar system)
(2017)
The missions to the solar system so far have delivered an enormous wealth of data in different formats, both as images and as digital measurement results. The surface processes of planetary bodies that can be studied with these data are extremely diverse, ranging from impact craters through volcanism and tectonics to all forms of erosion and sedimentation. To understand these processes, methods originally developed for data analysis on Earth are applied. However, all of these methods must be adapted, in part at considerable effort, to the respective physical boundary conditions. The development of cartographic methods for abstracting this information, that is, the acquisition, geomorphological analysis, and visualization of planetary surfaces and processes, has only just begun. To advance these developments, the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt), in cooperation with the University of Potsdam (Institute of Geography, Geoinformatics Research Group, Prof. Dr. Asche), has in a first step developed cartographic analysis methods for Mars and the asteroids Ceres and Vesta within dissertations and research projects.
In this study, we validate and compare the elevation accuracy and geomorphic metrics of satellite-derived digital elevation models (DEMs) on the southern Central Andean Plateau. The plateau has an average elevation of 3.7 km and is characterized by diverse topography and relief, a lack of vegetation, and clear skies that create ideal conditions for remote sensing. At 30 m resolution, we analyzed the SRTM-C, ASTER GDEM2, a stacked ASTER L1A stereopair DEM, ALOS World 3D, and TanDEM-X. The higher-resolution datasets include the 12 m TanDEM-X, 10 m single-CoSSC TerraSAR-X/TanDEM-X DEMs, and the 5 m ALOS World 3D. These DEMs are state of the art for optical (ASTER and ALOS) and radar (SRTM-C and TanDEM-X) spaceborne sensors. We assessed vertical accuracy by comparing standard deviations of DEM elevations against 307,509 differential GPS measurements spanning 4000 m of elevation. For the 30 m DEMs, the ASTER datasets had the highest vertical standard deviation at > 6.5 m, whereas the SRTM-C, ALOS World 3D, and TanDEM-X were all < 3.5 m. Higher-resolution DEMs generally had lower uncertainty, with both the 12 m TanDEM-X and the 5 m ALOS World 3D having < 2 m vertical standard deviation. Analysis of vertical uncertainty with respect to terrain elevation, slope, and aspect revealed low uncertainty across these attributes for the SRTM-C (30 m), TanDEM-X (12–30 m), and ALOS World 3D (5–30 m). The single-CoSSC TerraSAR-X/TanDEM-X 10 m DEMs and the 30 m ASTER GDEM2 displayed slight aspect biases, which were removed in their stacked counterparts (TanDEM-X and ASTER Stack). Based on low vertical standard deviations and visual inspection alongside optical satellite data, we selected the 30 m SRTM-C, 12–30 m TanDEM-X, 10 m single-CoSSC TerraSAR-X/TanDEM-X, and 5 m ALOS World 3D for geomorphic metric comparison in a 66 km² catchment with a distinct river knickpoint. Consistent m/n (concavity) values were found using chi plot channel profile analysis, regardless of DEM type and spatial resolution.
Slope, curvature, and drainage area were calculated, and plotting schemes were used to assess basin-wide differences in the hillslope-to-valley transition related to the knickpoint. While slope and hillslope-length measurements vary little between datasets, curvature shows higher-magnitude measurements at finer resolutions. This is especially true for the optical 5 m ALOS World 3D DEM, for which a Fourier frequency analysis revealed high-frequency noise at wavelengths of 2–8 pixels. The improvements in accurate spaceborne radar DEMs (e.g., TanDEM-X) for geomorphometry are promising, but airborne or terrestrial data are still necessary for meter-scale analysis.
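The vertical-accuracy assessment described above reduces, in essence, to computing the standard deviation of DEM-minus-dGPS elevation differences, optionally binned by a terrain attribute such as slope. A minimal sketch of that idea (function and variable names are illustrative, not taken from the study):

```python
import numpy as np

def vertical_uncertainty(dem_elev, gps_elev, slope, n_bins=6):
    """Vertical accuracy of a DEM against dGPS control points.

    dem_elev, gps_elev : elevations (m) sampled at the same locations
    slope              : terrain slope (degrees) at those locations
    Returns the overall standard deviation of the elevation
    differences and the per-slope-bin standard deviations.
    """
    diff = np.asarray(dem_elev, float) - np.asarray(gps_elev, float)
    overall_std = float(np.std(diff))

    # Bin the differences by terrain slope to reveal slope-dependent biases
    edges = np.linspace(slope.min(), slope.max(), n_bins + 1)
    binned_std = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        d = diff[(slope >= lo) & (slope < hi)]
        binned_std.append(float(np.std(d)) if d.size else float("nan"))
    return overall_std, binned_std
```

The same binning can be repeated for elevation and aspect to reproduce the attribute-wise analysis sketched in the abstract.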
Background: The plasma concentration of retinol is an accepted indicator for assessing the vitamin A (retinol) status of cattle. However, the determination of vitamin A requires a time-consuming, multi-step procedure that needs specific equipment to perform extraction, centrifugation, or saponification prior to high-performance liquid chromatography (HPLC).
Methods: The concentrations of retinol in whole blood (n = 10), plasma (n = 132) and serum (n = 61) were measured by a new rapid cow-side test (iCheck™ FLUORO) and compared with those by HPLC in two independent laboratories in Germany (DE) and Japan (JP).
Results: Retinol concentrations in plasma ranged from 0.033 to 0.532 mg/L, and in serum from 0.043 to 0.360 mg/L (HPLC method). No significant differences in retinol levels were observed between the new rapid cow-side test and HPLC performed in different laboratories (HPLC vs. iCheck™ FLUORO: 0.320 ± 0.047 mg/L vs. 0.333 ± 0.044 mg/L, and 0.240 ± 0.096 mg/L vs. 0.241 ± 0.069 mg/L, lab DE and lab JP, respectively). Similar comparability was observed when whole blood was used (HPLC vs. iCheck™ FLUORO: 0.353 ± 0.084 mg/L vs. 0.341 ± 0.064 mg/L). The results showed good agreement between the two methods, with a correlation coefficient of r² = 0.87 (P < 0.001), and Bland-Altman plots revealed no significant bias for any comparison.
Conclusions: With the new rapid cow-side test (iCheck™ FLUORO), retinol concentrations in cattle can be reliably assessed within a few minutes, directly in the barn, even using whole blood without prior centrifugation. The ease of application of the new rapid cow-side test and its portability can improve the diagnosis of vitamin A status and will help to control vitamin A supplementation in specific vitamin A feeding regimes, such as those used to optimize health status in calves or meat marbling in Japanese Black cattle.
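The Bland-Altman comparison reported above summarizes method agreement via the mean difference (bias) and its limits of agreement. A minimal sketch of that computation (names are illustrative; the 1.96·SD limits are the conventional choice, not stated explicitly in the abstract):

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bland-Altman agreement statistics for two measurement methods.

    Returns the mean difference (bias) and the 95% limits of
    agreement (bias +/- 1.96 * SD of the paired differences).
    """
    a = np.asarray(method_a, float)
    b = np.asarray(method_b, float)
    diff = a - b
    bias = float(diff.mean())
    sd = float(diff.std(ddof=1))  # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

If the limits of agreement comfortably contain zero and are narrow relative to the measurement range, the two methods can be used interchangeably, which is the conclusion drawn in the study.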
Statistics Canada, Canada's national statistics agency, offers a suite of spatial files for mapping and analysis of its various population data products. The following article showcases possibilities and shortfalls of the existing spatial files for mapping population data, and provides an overview of the structure of the available boundary files from the regional down to the dissemination block level. Due to Canada's highly dispersed population, mapping its distribution and density can be challenging. Common mapping techniques such as the choropleth method are suitable only for spatially high-resolution data, such as data at the dissemination area level. To allow for mapping of population data at less detailed levels such as census divisions or provinces, Statistics Canada has created a so-called ecumene boundary file, which outlines the inhabited area of Canada and can be used to visualize Canada's population distribution and density more accurately.
Using behavioral observation for the longitudinal study of anger regulation in middle childhood
(2017)
Assessing anger regulation via self-reports is fraught with problems, especially among children. Behavioral observation provides an ecologically valid alternative for measuring anger regulation. The present study uses data from two waves of a longitudinal study to present a behavioral observation approach for measuring anger regulation in middle childhood. At T1, 599 children from Germany (6–10 years old) were observed during an anger-eliciting task, and their use of anger regulation strategies was coded. At T2, 3 years later, the observation was repeated with an age-appropriate version of the same task. Partial metric measurement invariance over time demonstrated the structural equivalence of the two versions. Maladaptive anger regulation showed moderate stability between the two time points. Validity was established through correlations with aggressive behavior, peer problems, and conduct problems (concurrent and predictive criterion validity). The study presents an ecologically valid and economical approach to assessing anger regulation strategies in situ.
Background: Although the benefits for health of physical activity (PA) are well documented, the majority of the population is unable to implement present recommendations into daily routine. Mobile health (mHealth) apps could help increase the level of PA. However, this is contingent on the interest of potential users.
Objective: The aim of this study was the explorative, nuanced determination of the interest in mHealth apps with respect to PA among students and staff of a university.
Methods: We conducted a Web-based survey from June to July 2015 in which students and employees from the University of Potsdam were asked about their activity level, interest in mHealth fitness apps, chronic diseases, and sociodemographic parameters.
Results: A total of 1217 students (67.3% [819/1217] female; mean age 26.0 years [SD 4.9]) and 485 employees (67.5% [327/485] female; mean age 42.7 years [SD 11.7]) participated in the survey. The recommendation for PA (3 times per week) was not met by 70.1% (340/485) of employees and 52.7% (641/1217) of students. Within these groups, 53.2% of students (341/641) and 44.2% of employees (150/340), independent of age, sex, body mass index (BMI), and level of education or professional qualification, indicated an interest in mHealth fitness apps.
Conclusions: Even in a younger, highly educated population, the majority of respondents reported an insufficient level of PA. About half of them indicated their interest in training support. This suggests that the use of personalized mobile fitness apps may become increasingly significant for a positive change of lifestyle.
Background: Inferring regulatory interactions between genes from transcriptomics time-resolved data, yielding reverse engineered gene regulatory networks, is of paramount importance to systems biology and bioinformatics studies. Accurate methods to address this problem can ultimately provide a deeper insight into the complexity, behavior, and functions of the underlying biological systems. However, the large number of interacting genes coupled with short and often noisy time-resolved read-outs of the system renders the reverse engineering a challenging task. Therefore, the development and assessment of methods which are computationally efficient, robust against noise, applicable to short time series data, and preferably capable of reconstructing the directionality of the regulatory interactions remains a pressing research problem with valuable applications.
Results: Here we perform the largest systematic analysis of a set of similarity measures and scoring schemes within the scope of the relevance network approach which are commonly used for gene regulatory network reconstruction from time series data. In addition, we define and analyze several novel measures and schemes which are particularly suitable for short transcriptomics time series. We also compare the considered 21 measures and 6 scoring schemes according to their ability to correctly reconstruct such networks from short time series data by calculating summary statistics based on the corresponding specificity and sensitivity. Our results demonstrate that rank and symbol based measures have the highest performance in inferring regulatory interactions. In addition, the proposed scoring scheme by asymmetric weighting has shown to be valuable in reducing the number of false positive interactions. On the other hand, Granger causality as well as information-theoretic measures, frequently used in inference of regulatory networks, show low performance on the short time series analyzed in this study.
Conclusions: Our study is intended to serve as a guide for choosing a particular combination of similarity measures and scoring schemes suitable for reconstruction of gene regulatory networks from short time series data. We show that further improvement of algorithms for reverse engineering can be obtained if one considers measures that are rooted in the study of symbolic dynamics or ranks, in contrast to the application of common similarity measures which do not consider the temporal character of the employed data. Moreover, we establish that the asymmetric weighting scoring scheme together with symbol based measures (for low noise level) and rank based measures (for high noise level) are the most suitable choices.
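As an illustration of the relevance-network approach discussed above, the following sketch infers undirected interactions using a single rank-based measure (Spearman correlation) and a hard threshold. The study's 21 measures and its asymmetric weighting scoring scheme are not reproduced here, and all names and the threshold value are illustrative:

```python
import numpy as np
from scipy.stats import spearmanr

def relevance_network(expr, threshold=0.8):
    """Relevance-network reconstruction from short time series.

    expr : array of shape (n_genes, n_timepoints)
    A rank-based similarity (Spearman correlation) is computed for
    every gene pair; pairs whose absolute similarity meets the
    threshold are kept as predicted (undirected) interactions.
    """
    # axis=1: rows are variables (genes), columns are time points
    rho, _ = spearmanr(expr, axis=1)
    rho = np.asarray(rho)
    np.fill_diagonal(rho, 0.0)  # ignore trivial self-correlations
    n = rho.shape[0]
    return [(i, j, float(rho[i, j]))
            for i in range(n)
            for j in range(i + 1, n)
            if abs(rho[i, j]) >= threshold]
```

Because rank-based measures depend only on the ordering of the read-outs, they are comparatively robust to the noise and short length of transcriptomics time series, which matches the study's finding that rank and symbol based measures perform best.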
The femtosecond excited-state dynamics following resonant photoexcitation enable the selective deformation of N-H and N-C chemical bonds in 2-thiopyridone in aqueous solution with optical or X-ray pulses. In combination with multiconfigurational quantum-chemical calculations, the orbital-specific electronic structure and its ultrafast dynamics accessed with resonant inelastic X-ray scattering at the N 1s level using synchrotron radiation and the soft X-ray free-electron laser LCLS provide direct evidence for this controlled photoinduced molecular deformation and its ultrashort time-scale.
Many studies have demonstrated interactions between number processing and either spatial codes (effects of spatial-numerical associations) or visual size-related codes (size-congruity effect). However, the interrelatedness of these two number couplings is still unclear. The present study examines the simultaneous occurrence of space- and size-numerical congruency effects and their interactions, both within and across trials. In a magnitude judgment task, physically small or large digits were presented left or right of the screen center. The reaction time analysis revealed that the space- and size-congruency effects coexisted in parallel and combined additively. Moreover, a selective sequential modulation of the two congruency effects was found: the size-congruency effect was reduced after size-incongruent trials, whereas the space-congruency effect was only affected by the previous space congruency. The observed independence of spatial-numerical and within-magnitude associations is interpreted as evidence that the two couplings reflect different attributes of numerical meaning, possibly related to ordinality and cardinality.
In the context of back pain, great emphasis has been placed on the importance of trunk stability, especially in situations requiring compensation of the repetitive, intense loading induced during high-performance activities, e.g., jumping or landing. This study aims to evaluate trunk muscle activity during drop jumps in adolescent athletes with back pain (BP) compared to athletes without back pain (NBP). Eleven adolescent athletes suffering from back pain (BP: m/f: n = 4/7; 15.9 ± 1.3 y; 176 ± 11 cm; 68 ± 11 kg; 12.4 ± 10.5 h/week training) and 11 matched athletes without back pain (NBP: m/f: n = 4/7; 15.5 ± 1.3 y; 174 ± 7 cm; 67 ± 8 kg; 14.9 ± 9.5 h/week training) were evaluated. Subjects performed 3 drop jumps onto a force plate measuring ground reaction force. Bilateral 12-lead SEMG (surface electromyography) was applied to assess trunk muscle activity. Ground contact time [ms], maximum vertical jump force [N], jump time [ms], and the jump performance index [m/s] were calculated for the drop jumps. SEMG amplitudes (RMS: root mean square [%]) for all 12 single muscles were normalized to MIVC (maximum isometric voluntary contraction) and analyzed in 4 time windows (100 ms pre- and 200 ms post-initial ground contact, 100 ms pre- and 200 ms post-landing) as outcome variables. In addition, muscles were grouped and analyzed as ventral and dorsal muscles, as well as straight and transverse trunk muscles. Drop jump ground reaction force variables did not differ between NBP and BP (p > 0.05). Mm. obliquus externus and internus abdominis presented higher SEMG amplitudes (1.3–1.9-fold) for BP (p < 0.05). Mm. rectus abdominis, erector spinae thoracic/lumbar, and latissimus dorsi did not differ (p > 0.05). The muscle group analysis over the whole jumping cycle showed statistically significantly higher SEMG amplitudes for BP in the ventral (p = 0.031) and transverse muscles (p = 0.020) compared to NBP.
Higher activity of transverse, but not straight, trunk muscles might indicate a specific compensation strategy to support trunk stability in athletes with back pain during drop jumps. Therefore, exercises favoring the transverse trunk muscles could be recommended for back pain treatment.
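The SEMG outcome described above, RMS amplitude in fixed time windows around an event and normalized to MIVC, can be sketched as follows (sampling rate, window lengths, and names are illustrative; the study's actual processing pipeline is not reproduced here):

```python
import numpy as np

def rms_amplitude(emg, fs, t_event, pre_ms=100, post_ms=200, mivc_rms=1.0):
    """RMS of one EMG channel in windows around an event, as %MIVC.

    emg      : raw EMG samples (one channel)
    fs       : sampling rate in Hz
    t_event  : event time in seconds (e.g. initial ground contact)
    mivc_rms : RMS recorded during a maximum isometric voluntary
               contraction, used for normalization
    Returns the RMS amplitude in the pre- and post-event windows,
    expressed in percent of MIVC.
    """
    i = int(round(t_event * fs))
    pre = emg[i - int(pre_ms / 1000 * fs): i]
    post = emg[i: i + int(post_ms / 1000 * fs)]

    def rms(x):
        return np.sqrt(np.mean(np.square(x)))

    return 100 * rms(pre) / mivc_rms, 100 * rms(post) / mivc_rms
```

Repeating this per muscle and per event (ground contact, landing) yields the 4-window outcome variables listed in the abstract.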
Trunk loading and back pain
(2017)
An essential function of the trunk is the compensation of external forces and loads in order to guarantee stability. Stabilising the trunk during sudden, repetitive loading in everyday tasks, as well as during athletic performance, is important to protect against injury. Hence, reduced trunk stability is accepted as a risk factor for the development of back pain (BP). An altered activity pattern, including extended response and activation times, increased co-contraction of the trunk muscles, a reduced range of motion, and increased movement variability of the trunk, is evident in back pain patients (BPP). These differences from healthy controls (H) have been evaluated primarily in quasi-static test situations involving isolated loading applied directly to the trunk. Nevertheless, transferability to everyday, dynamic situations is under debate. Therefore, the aim of this project is to analyse the 3-dimensional motion and neuromuscular reflex activity of the trunk in response to dynamic trunk loading in healthy subjects (H) and back pain patients (BPP).
A measurement tool consisting of dynamic test situations was developed to assess trunk stability. During these tests, loading of the trunk is generated by the upper and lower limbs, with and without additional perturbation; lifting objects and stumbling while walking are suitable representative tasks. Neuromuscular activity of the muscles encompassing the trunk was assessed with a 12-lead EMG. In addition, three-dimensional trunk motion was analysed using a newly developed multi-segmental trunk model. The set-up was checked for reproducibility as well as validity. Afterwards, the defined measurement set-up was applied to compare trunk stability in healthy subjects and back pain patients.
Clinically acceptable to excellent reliability could be shown for the methods (EMG/kinematics) used in the test situations. No changes in trunk motion pattern could be observed in healthy adults during continuous loading (lifting of objects) of different weights. In contrast, sudden loading of the trunk through perturbations to the lower limbs during walking led to an increased neuromuscular activity and ROM of the trunk. Moreover, BPP showed a delayed muscle response time and extended duration until maximum neuromuscular activity in response to sudden walking perturbations compared to healthy controls. In addition, a reduced lateral flexion of the trunk during perturbation could be shown in BPP.
It is concluded that perturbed gait seems suitable to provoke higher demands on trunk stability in adults. The altered neuromuscular and kinematic compensation pattern in back pain patients (BPP) can be interpreted as increased spine loading and reduced trunk stability in patients. Therefore, this novel assessment of trunk stability is suitable to identify deficits in BPP. Assignment of affected BPP to therapy interventions with focus on stabilisation of the trunk aiming to improve neuromuscular control in dynamic situations is implied. Hence, sensorimotor training (SMT) to enhance trunk stability and compensation of unexpected sudden loading should be preferred.
Trends in precipitation over Germany and the Rhine basin related to changes in weather patterns
(2017)
Precipitation, as a central meteorological driver of agriculture, water security, and human well-being, among others, has long received special attention. Lack of precipitation may have devastating effects such as crop failure and water scarcity. Abundance of precipitation, on the other hand, may likewise result in hazardous events such as flooding, and again crop failure. Thus, great effort has been spent on tracking changes in precipitation and relating them to underlying processes. Particularly in the face of global warming, and given the link between temperature and the atmospheric water-holding capacity, research is needed to understand the effect of climate change on precipitation.
The present work aims at understanding past changes in precipitation and other meteorological variables. Trends were detected for various time periods and related to associated changes in large-scale atmospheric circulation. The results derived in this thesis may be used as the foundation for attributing changes in floods to climate change. Assumptions needed for the downscaling of large-scale circulation model output to local climate stations are tested and verified here.
In a first step, changes in precipitation over Germany were detected, focussing not only on precipitation totals, but also on properties of the statistical distribution, transition probabilities as a measure for wet/dry spells, and extreme precipitation events.
Shifting the spatial focus to the Rhine catchment as one of the major water lifelines of Europe and the largest river basin in Germany, detected trends in precipitation and other meteorological variables were analysed in relation to states of an "optimal" weather pattern classification. The weather pattern classification was developed seeking the best skill in explaining the variance of local climate variables.
The last question addressed whether observed changes in local climate variables are attributable to changes in the frequency of weather patterns or rather to changes within the patterns itself. A common assumption for a downscaling approach using weather patterns and a stochastic weather generator is that climate change is expressed only as a changed occurrence of patterns with the pattern properties remaining constant. This assumption was validated and the ability of the latest generation of general circulation models to reproduce the weather patterns was evaluated.
Paper 1:
Precipitation changes in Germany in the period 1951-2006 can be summarised briefly as negative in summer and positive in all other seasons. Different precipitation characteristics confirm the trends in total precipitation: while winter mean and extreme precipitation have increased, wet spells tend to be longer as well (expressed as increased probability for a wet day followed by another wet day). For summer the opposite was observed: reduced total precipitation, supported by decreasing mean and extreme precipitation and reflected in an increasing length of dry spells.
Apart from this general summary for the whole of Germany, the spatial distribution within the country is much more differentiated. Increases in winter precipitation are most pronounced in the north-west and south-east of Germany, while precipitation increases are highest in the west for spring and in the south for autumn. Decreasing summer precipitation was observed in most regions of Germany, with particular focus on the south and west.
The seasonal picture, however, is composed differently in the contributing months; e.g., increasing autumn precipitation in the south of Germany is formed by strong trends in the south-west in October and in the south-east in November. These results emphasise the high spatial and temporal organisation of precipitation changes.
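The wet/dry-spell measure used above, the probability that a wet day is followed by another wet day, is a first-order Markov transition probability. A minimal sketch (the 1 mm wet-day threshold and the names are illustrative, not taken from the thesis):

```python
import numpy as np

def wet_transition_prob(precip, wet_threshold=1.0):
    """P(wet day | previous day wet) from a daily precipitation series.

    A day counts as wet if its precipitation (mm) meets the threshold.
    An increase of this probability over time corresponds to
    lengthening wet spells.
    """
    wet = np.asarray(precip, float) >= wet_threshold
    prev, curr = wet[:-1], wet[1:]
    n_wet_prev = int(prev.sum())
    if n_wet_prev == 0:
        return float("nan")  # no wet days to condition on
    return float((prev & curr).sum() / n_wet_prev)
```

The complementary dry-to-dry probability, obtained by negating the wet mask, captures the lengthening summer dry spells described above.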
Paper 2:
The next step towards attributing precipitation trends to changes in large-scale atmospheric patterns was the derivation of a weather pattern classification that sufficiently stratifies the local climate variables under investigation. Focussing on temperature, radiation, and humidity in addition to precipitation, a classification based on mean sea level pressure, near-surface temperature, and specific humidity was found to have the best skill in explaining the variance of the local variables. A rather high number of 40 patterns was selected, allowing typical pressure patterns being assigned to specific seasons by the associated temperature patterns. While the skill in explaining precipitation variance is rather low, better skill was achieved for radiation and, of course, temperature.
Most of the recent GCMs from the CMIP5 ensemble were found to reproduce these weather patterns sufficiently well in terms of frequency, seasonality, and persistence.
Paper 3:
Finally, the weather patterns were analysed for trends in pattern frequency, seasonality, persistence, and trends in pattern-specific precipitation and temperature. To overcome uncertainties in trend detection resulting from the selected time period, all possible periods in 1901-2010 with a minimum length of 31 years were considered. Thus, the assumption of a constant link between patterns and local weather was tested rigorously. This assumption was found to hold true only partly. While changes in temperature are mainly attributable to changes in pattern frequency, for precipitation a substantial amount of change was detected within individual patterns.
Magnitude and even sign of trends depend highly on the selected time period. The frequency of certain patterns is related to the long-term variability of large-scale circulation modes.
Changes in precipitation were found to be heterogeneous not only in space, but also in time - statements on trends are only valid for the specific time period under investigation. While some part of the trends can be attributed to changes in the large-scale circulation, distinct changes were found within single weather patterns as well.
The results emphasise the need to analyse multiple periods for thorough trend detection wherever possible and add some note of caution to the application of downscaling approaches based on weather patterns, as they might misinterpret the effect of climate change due to neglecting within-type trends.
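The multi-period trend screening described above, computing a trend for every sub-period of at least 31 years, can be sketched as follows. A simple least-squares slope stands in for the thesis's trend test, which is not specified in this summary; names are illustrative:

```python
import numpy as np

def all_period_trends(years, values, min_len=31):
    """Linear trend (slope per year) for every sub-period of at least
    min_len consecutive years.

    Returns a list of (start_year, end_year, slope) tuples, one per
    sub-period, so that the spread of slopes (and their signs) across
    periods can be inspected.
    """
    years = np.asarray(years)
    values = np.asarray(values, float)
    trends = []
    for a in range(len(years) - min_len + 1):
        for b in range(a + min_len, len(years) + 1):
            slope = np.polyfit(years[a:b], values[a:b], 1)[0]
            trends.append((int(years[a]), int(years[b - 1]), float(slope)))
    return trends
```

Plotting the slopes against start and end year makes the period dependence of trend magnitude, and even sign, directly visible.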
Translating innovation
(2017)
This doctoral thesis studies the process of innovation adoption in public administrations, addressing the research question of how an innovation is translated to a local context. The study empirically explores Design Thinking as a new problem-solving approach introduced by a federal government organisation in Singapore. With its focus on user-centredness, collaboration, and iteration, Design Thinking seems to offer a new way to engage recipients and other stakeholders of public services, as well as to re-think the policy design process from a user's point of view. The methodology was pioneered in the private sector; early public-sector adopters include the civil services of Australia, Denmark, the United Kingdom, the United States, and Singapore. To date, there is little evidence on how, and for which purposes, Design Thinking is used in the public sector.
For the purpose of this study, innovation adoption is framed in an institutionalist perspective addressing how concepts are translated to local contexts. The study rejects simplistic views of the innovation adoption process, in which an idea diffuses to another setting without adaptation. The translation perspective is fruitful because it captures the multidimensionality and ‘messiness’ of innovation adoption. More specifically, the overall research question addressed in this study is: How has Design Thinking been translated to the local context of the public sector organisation under investigation? And from a theoretical point of view: What can we learn from translation theory about innovation adoption processes?
Moreover, there are only few empirical studies of organisations adopting Design Thinking and most of them focus on private organisations. We know very little about how Design Thinking is embedded in public sector organisations. This study therefore provides further empirical evidence of how Design Thinking is used in a public sector organisation, especially with regards to its application to policy work which has so far been under-researched.
An exploratory single case study approach was chosen to provide an in-depth analysis of the innovation adoption process. Based on a purposive, theory-driven sampling approach, a Singaporean Ministry was selected because it represented an organisational setting in which Design Thinking had been embedded for several years, making it a relevant case with regard to the research question. Following a qualitative research design, 28 semi-structured interviews (45-100 minutes) with employees and managers were conducted. The interview data was triangulated with observations and documents collected during a field research stay in Singapore.
The empirical study of innovation adoption in a single organisation focused on the intra-organisational perspective, with the aim of capturing the variations of translation that occur during the adoption process. First, in doing so, this study opens the black box often assumed in implementation studies. Second, this research advances translation studies not only by showing variance, but also by deriving explanatory factors. The main differences in the translation of Design Thinking occurred between service delivery and policy divisions, as well as between the first adopter and the rest of the organisation. For the intra-organisational translation of Design Thinking in the Singaporean Ministry, the following five factors played a role: task type, mode of adoption, type of expertise, sequence of adoption, and the adoption of similar practices.
Working memory (WM) performance declines with age. However, several studies have shown that WM training may lead to performance increases not only in the trained task, but also in untrained cognitive transfer tasks. It has been suggested that transfer effects occur if the training task and the transfer task share specific processing components that are supposedly processed in the same brain areas. In the current study, we investigated whether single-task WM training and training-related alterations in neural activity might support performance in a dual-task setting, thus assessing transfer effects to higher-order control processes in the context of dual-task coordination. A sample of older adults (age 60–72) was assigned to either a training or a control group. The training group participated in 12 sessions of an adaptive n-back training. At pre- and post-measurement, all participants performed a multimodal dual-task to assess transfer effects. This task consisted of two simultaneous delayed match-to-sample WM tasks using two different stimulus modalities (visual and auditory) that were performed either in isolation (single-task) or in conjunction (dual-task). A subgroup also participated in functional magnetic resonance imaging (fMRI) during performance of the n-back task before and after training. While no transfer to single-task performance was found, dual-task costs in both the visual modality (p < 0.05) and the auditory modality (p < 0.05) decreased at post-measurement in the training but not in the control group. In the fMRI subgroup of the training participants, neural activity changes in the left dorsolateral prefrontal cortex (DLPFC) during one-back predicted post-training auditory dual-task costs, while neural activity changes in the right DLPFC during three-back predicted visual dual-task costs. These results might indicate an improvement in central executive processing that could facilitate both WM and dual-task coordination.
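Dual-task costs, as used above, are commonly quantified as the relative performance decrement of a task performed in conjunction with another versus in isolation. The abstract does not give the study's exact formula, so the following is one common formulation, with illustrative names:

```python
def dual_task_cost(single_task_score, dual_task_score):
    """Proportional dual-task cost.

    Performance lost when a task is performed together with a second
    task, relative to performing it alone. Positive values mean worse
    dual-task performance; a decrease over training sessions (as in
    the study above) indicates improved dual-task coordination.
    """
    return (single_task_score - dual_task_score) / single_task_score
```

For example, accuracy dropping from 0.90 alone to 0.72 under dual-task conditions corresponds to a cost of 0.20, i.e. a 20% decrement.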
Standard Indonesian, a language with a transparent orthography, is spoken by over 160 million people and is the primary language of instruction in education and government in Indonesia. An assessment battery of reading and reading-related skills was developed as a starting point for the diagnosis of dyslexia in beginner learners. Founded on the International Dyslexia Association’s definition of dyslexia, the test battery comprises nine empirically motivated reading and reading-related tasks assessing word reading, pseudoword reading, arithmetic, rapid automatized naming, phoneme deletion, forward and backward digit span, verbal fluency, orthographic choice (spelling), and writing. The test was validated by computing the relationships between the outcomes on the reading-skill and reading-related measures by means of correlation and factor analyses. External variables, i.e., school grades and teacher ratings of the reading and learning abilities of individual students, were also utilized to provide evidence of its construct validity. Four variables were found to be significantly related with reading-skill measures: phonological awareness, rapid naming, spelling, and digit span. The current study on reading development in Standard Indonesian confirms findings from other languages with transparent orthographies and suggests a test battery including preliminary norm scores for screening and assessment of elementary school children learning to read Standard Indonesian.
As an emerging sub-field of music information retrieval (MIR), music imagery information retrieval (MIIR) aims to retrieve information from brain activity recorded during music cognition, such as listening to or imagining music pieces. This is a highly inter-disciplinary endeavor that requires expertise in MIR as well as in cognitive neuroscience and psychology. The OpenMIIR initiative strives to foster collaborations between these fields to advance the state of the art in MIIR. As a first step, electroencephalography (EEG) recordings of music perception and imagination have been made publicly available, enabling MIR researchers to easily test and adapt their existing approaches for music analysis, such as fingerprinting, beat tracking or tempo estimation, on this new kind of data. This paper reports on first results of MIIR experiments using these OpenMIIR datasets and points out how these findings could drive new research in cognitive neuroscience.
We present a setup combining a liquid flatjet sample delivery and a MHz laser system for time-resolved soft X-ray absorption measurements of liquid samples at the high-brilliance undulator beamline UE52-SGM at BESSY II, yielding unprecedented statistics in this spectral range. We demonstrate that the efficient detection of transient absorption changes in transmission mode enables the identification of photoexcited species in dilute samples. With iron(II)-trisbipyridine in aqueous solution as a benchmark system, we present absorption measurements at various edges in the soft X-ray regime. In combination with the wavelength tunability of the laser system, the setup opens up opportunities to study the photochemistry of many systems at low concentrations, relevant to materials sciences, chemistry, and biology.
We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged observables for geometric Brownian motion, underlying the famed Black–Scholes–Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.
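The time averaged MSD mentioned above can be sketched numerically. The following minimal example simulates a geometric Brownian motion trajectory (the drift, volatility and length are illustrative choices, not values from the study) and evaluates its time averaged MSD at several lag times:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_gbm(n_steps, dt=1.0, mu=1e-4, sigma=0.01, x0=100.0):
    # Geometric Brownian motion: dX = mu*X dt + sigma*X dW,
    # integrated exactly via the log-price increments.
    increments = (mu - 0.5 * sigma**2) * dt \
        + sigma * np.sqrt(dt) * rng.standard_normal(n_steps)
    return x0 * np.exp(np.concatenate(([0.0], np.cumsum(increments))))

def tamsd(x, lag):
    # Time averaged MSD at lag D over one trajectory of length T:
    # delta^2(D) = (1/(T-D)) * sum_t [x(t+D) - x(t)]^2
    disp = x[lag:] - x[:-lag]
    return np.mean(disp**2)

x = simulate_gbm(10_000)
lags = [1, 10, 100]
curve = [tamsd(x, L) for L in lags]
```

For a single GBM trajectory the curve grows with the lag time; comparing such single-trajectory time averages with ensemble averages is the core of the ageing and delay analyses described above.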
In October 2016, following a campaign led by Labour Peer Lord Alfred Dubs, the first child asylum-seekers allowed entry to the UK under new legislation (the ‘Dubs amendment’) arrived in England. Their arrival was captured by a heavy media presence, and very quickly doubts were raised by right-wing tabloids and politicians about their age. In this article, I explore the arguments underpinning the Dubs campaign and the media coverage of the children’s arrival as a starting point for interrogating representational practices around children who seek asylum. I illustrate how the campaign was premised on a universal politics of childhood that inadvertently laid down the terms on which these children would be given protection, namely their innocence. The universality of childhood fuels public sympathy for child asylum-seekers, underlies the ‘child first, migrant second’ approach advocated by humanitarian organisations, and was a key argument in the ‘Dubs amendment’. Yet the campaign highlights how representations of child asylum-seekers rely on codes that operate to identify ‘unchildlike’ children. As I show, in the context of the criminalisation of undocumented migrants, childhood is no longer a stable category which guarantees protection, but is subject to scrutiny and suspicion and can, ultimately, be disproved.
Thermal cis-trans isomerization of azobenzene studied by path sampling and QM/MM stochastic dynamics
(2017)
Azobenzene-based molecular photoswitches have been applied extensively to biological systems, involving photo-control of peptides, lipids and nucleic acids. The isomerization between the stable trans and the metastable cis state of the azo moieties leads to pronounced changes in shape and other physico-chemical properties of the molecules into which they are incorporated. Fast switching can be induced via transitions to excited electronic states and fine-tuned by a large number of different substituents at the phenyl rings. A rational design of tailor-made azo groups, however, also requires control of their stability in the dark, i.e. the half-life of the cis isomer. In computational chemistry, thermally activated barrier crossing on the ground-state Born-Oppenheimer surface can be estimated efficiently with Eyring’s transition state theory (TST) approach; the growing complexity of the azo moiety and a rather heterogeneous environment, however, may render some of the underlying simplifying assumptions problematic.
In this dissertation, a computational approach is established that removes two restrictions at once: the environment is modeled explicitly by employing a quantum mechanics/molecular mechanics (QM/MM) description, and the isomerization process is tracked by analyzing complete dynamical pathways between the stable states. The suitability of this description is validated on two test systems, pure azobenzene and a derivative with electron-donating and electron-withdrawing substituents (“push-pull” azobenzene). Each system is studied in the gas phase, in toluene and in polar DMSO solvent. The azo molecules are treated at the QM level using a recent semi-empirical approximation to density functional theory (the density functional tight binding approximation). Reactive pathways are sampled by implementing a version of the transition path sampling (TPS) method, without introducing any bias into the system dynamics. By analyzing ensembles of reactive trajectories, the change in isomerization pathway from linear inversion to rotation on going from apolar to polar solvent, predicted by the TST approach, could be verified for the push-pull derivative. At the same time, the mere presence of explicit solvation is seen to broaden the distribution of isomerization pathways, an effect TST cannot account for.
Using likelihood maximization based on the TPS shooting history, an improved reaction coordinate was identified as a sine-cosine combination of the central bend angles and the rotation dihedral, r(ω, α, α′). A computational van’t Hoff analysis of the activation entropies was performed to gain further insight into the differential role of the solvent for the unsubstituted and the push-pull azobenzene. In agreement with experiment, it yielded positive activation entropies for azobenzene in DMSO and negative ones for the push-pull derivative, reflecting the induced ordering of solvent around the more dipolar transition state associated with the latter compound. In addition, dynamically corrected rate constants were evaluated using the reactive flux approach; for both azobenzene derivatives, an increase comparable to the experimental one was observed in the high-polarity medium.
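The TST estimate referred to in this abstract can be illustrated with the Eyring equation; the barrier height in the example below is a hypothetical value chosen for illustration, not a result from this work:

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
H   = 6.62607015e-34  # Planck constant, J*s
R   = 8.314462618     # gas constant, J/(mol*K)

def eyring_rate(delta_g_kj_per_mol, temperature=298.15):
    """Eyring TST rate constant k = (k_B*T/h) * exp(-dG/(R*T)), in 1/s."""
    return (K_B * temperature / H) * math.exp(
        -delta_g_kj_per_mol * 1e3 / (R * temperature)
    )

# A hypothetical free-energy barrier of ~100 kJ/mol at room temperature
# gives a rate on the order of 1e-5 1/s, i.e. a half-life of hours --
# the "stability in the dark" regime discussed for the cis isomer.
k = eyring_rate(100.0)
half_life_hours = math.log(2) / k / 3600.0
```

This is exactly the kind of estimate whose assumptions (a single well-defined barrier, no recrossing) the TPS/reactive-flux analysis above goes beyond.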
Reviewed work
Theresa Biberauer and George Walkden (eds.): Syntax over Time: Lexical, Morphological, and Information-Structural Interactions. Oxford: Oxford University Press, 2015, 418 pp.
Introduction: Peanut allergy is among the most common food allergies in childhood. Even small amounts of peanut can trigger severe allergic reactions, and peanut is the most frequent cause of life-threatening anaphylaxis in children and adolescents. In contrast to other early-childhood food allergies, patients with peanut allergy only rarely develop natural tolerance. For several years, research has therefore focused on causal treatment options for peanut-allergic patients, in particular oral immunotherapy (OIT). Initial smaller studies on OIT for peanut allergy showed promising results. In the present work, the clinical efficacy and safety of this treatment option in children with peanut allergy are evaluated in a randomized, double-blind, placebo-controlled trial with a larger number of patients. In addition, immunological changes as well as quality of life and treatment burden under OIT are examined.
Methods: Children aged 3–18 years with IgE-mediated peanut allergy were included in the study. Before the start of OIT, an oral peanut challenge was performed. Patients were randomized 1:1 and assigned to a verum or placebo group. Treatment started with 2–120 mg peanut or placebo per day, depending on the reaction dose in the oral challenge. The daily OIT dose was first increased slowly, every two weeks over about 14 months, up to a maintenance dose of at least 500 mg peanut (= 125 mg peanut protein, roughly one small peanut) or placebo. The maximum dose reached was then administered daily at home for two months, followed by another oral peanut challenge. The primary endpoint of the study was the number of patients in the verum and placebo groups who tolerated ≥1200 mg peanut in the oral challenge after OIT (“partial desensitization”). Both before and after OIT, a skin prick test with peanut was performed and peanut-specific IgE and IgG4 were determined in serum. In addition, basophil activation and the release of T-cell-specific cytokines after stimulation with peanut were measured in vitro. Questionnaires were used to assess quality of life before and after OIT as well as treatment burden during OIT.
Results: 62 patients were included in the study and randomized. After about 16 months of OIT, 74.2% (23/31) of the patients in the verum group but only 16.1% (5/31) in the placebo group showed “partial desensitization” to peanut (p<0.001). In the challenge after OIT, patients in the verum group tolerated a median of 4000 mg peanut (roughly 8 small peanuts), whereas patients in the placebo group tolerated only 80 mg (roughly one sixth of a small peanut) (p<0.001). Almost half of the patients in the verum group (41.9%) tolerated the maximum dose of 18 g peanut in the challenge (“complete desensitization”). The safety profile with respect to objective side effects was comparable under verum and placebo OIT. However, subjective side effects such as oral itching or abdominal pain occurred significantly more often under verum OIT than under placebo (3.7% of verum doses vs. 0.5% of placebo doses, p<0.001). Three children in the verum group (9.7%) and seven in the placebo group (22.6%) left the study prematurely, two patients in each group because of side effects. In contrast to placebo, significant immunological changes were observed under verum OIT: a decrease in the peanut-specific wheal diameter in the skin prick test, an increase in peanut-specific serum IgG4, and reduced peanut-specific cytokine secretion, in particular of the Th2-specific cytokines IL-4 and IL-5. Peanut-specific IgE levels and peanut-specific basophil activation, by contrast, did not change under OIT. The quality of life of children in the verum group was significantly improved after OIT, but not that of children in the placebo group. During OIT, the treatment was rated positively by almost all children (82%) and mothers (82%) (i.e., a low treatment burden).
Discussion: Peanut OIT led to desensitization and a markedly increased reaction threshold in the majority of peanut-allergic children. The children are thus protected against accidental reactions to peanut in everyday life, which clearly improves their quality of life. Under the controlled study conditions, an acceptable safety profile with predominantly mild symptoms was observed, and clinical desensitization was accompanied by changes at the immunological level. However, long-term studies of peanut OIT are needed to assess its clinical and immunological efficacy with regard to possible long-term induction of oral tolerance, as well as the safety of long-term OIT, before this treatment concept can be transferred into practice.
Classical physics and chemistry distinguish three types of bonding: the covalent bond, the ionic bond, and the metallic bond. Molecules, by contrast, are held together by weak interactions; despite the weakness of these forces they are less well understood, yet no less important. They are of fundamental significance in forward-looking fields such as nanotechnology, supramolecular chemistry, and biochemistry.
In order to describe, predict, and understand weak intermolecular interactions, they first have to be captured theoretically. This involves various quantum chemical methods, which in this work are presented, compared, further developed, and finally applied to exemplary problems in chemistry. Building on a hierarchy of methods of differing accuracy, these are employed, elaborated, and combined for these purposes.
What is computed is the electronic structure, that is, the distribution and energy of the electrons that essentially hold the atoms together. Since the inaccuracies in the description of the electronic structure depend on the methods used, their effects can be examined and characterized in detail, developed further on this basis, and then tested on various model systems. The speed of the calculations on modern computers is an essential component to be taken into account, since in general higher accuracy comes at exponentially increasing computing time, which inevitably runs into the limits of what is feasible.
The most accurate of the methods used is based on coupled-cluster theory, which enables very good predictions. With it, so-called spectroscopic accuracy with deviations of only a few wavenumbers is achieved, as comparisons with experimental data show. One way of approximating highly accurate methods is based on density functional theory: here the “Boese-Martin for Kinetics” (BMK) functional was developed, whose functional form reappears in many density functionals published after 2010.
With the help of the more accurate methods, semi-empirical force fields for describing intermolecular interactions can in turn be parameterized for individual systems; these require far less computing time than the methods based on the exact calculation of the electronic structure of molecules.
For larger systems, different methods can also be combined. To this end, embedding schemes were refined and new methodological approaches proposed. They use both symmetry-adapted perturbation theory and the quantum chemical embedding of fragments in larger, quantum chemically treated systems.
The development of new methods derives its value essentially from their application:
In this work, the focus was first on hydrogen bonds. They are among the stronger intermolecular interactions and remain a challenge to this day. Van der Waals interactions, in contrast, are relatively easy to describe with force fields. As a result, many of the methods in use today perform comparatively poorly for systems dominated by hydrogen bonds.
This is followed by an investigation of molecular aggregates and the effects of intermolecular interactions on the vibrational frequencies of molecules, going beyond the so-called rigid-rotor harmonic-oscillator approximation.
A far-reaching application concerns adsorbates, here molecules on ionic/metallic surfaces. They can be treated with methods similar to those for intermolecular interactions and can be described very accurately with special embedding schemes. The results of these theoretical calculations prompted a re-evaluation of the previously known experimental results.
Molecular crystals are an extremely important field of research. They are held together by weak interactions ranging from van der Waals forces to hydrogen bonds. Here, too, newly developed methods were applied, offering an interesting alternative that is at least as accurate as the currently established methods.
The methods developed, as well as their applications, are thus extremely diverse. The electronic-structure calculations treated here range from so-called post-Hartree-Fock methods through the use of density functional theory to semi-empirical force fields and combinations thereof. The applications range from individual molecules in the gas phase through adsorption on surfaces to the molecular solid state.
This article considers Isabella Bird’s representation of medicine in Unbeaten Tracks in Japan (1880) and Journeys in Persia and Kurdistan (1891), the two books in which she engages most extensively with both local (Chinese/Islamic) and Western medical science and practice. I explore how Bird uses medicine to assert her narrative authority and define her travelling persona in opposition to local medical practitioners. I argue that her ambivalence and the unease she frequently expresses concerning medical practice (particularly evident in her later adoption of the Persian appellation “Feringhi Hakīm” [European physician] to describe her work) serve as a means for her to negotiate the colonial and gendered pressures on Victorian medicine. While in Japan this attitude works to destabilise her hierarchical understanding of science and results in some acknowledgement of traditional Japanese medicine, in Persia it functions more to disguise her increasing collusion with overt British colonial ambitions.
White adipose tissue (WAT) is actively involved in the regulation of whole-body energy homeostasis via storage/release of lipids and adipokine secretion. Current research links WAT dysfunction to the development of metabolic syndrome (MetS) and type 2 diabetes (T2D). The expansion of WAT during oversupply of nutrients prevents ectopic fat accumulation and requires proper preadipocyte-to-adipocyte differentiation. An assumed link between excess levels of reactive oxygen species (ROS), WAT dysfunction and T2D has been discussed controversially. While oxidative stress conditions have conclusively been detected in WAT of T2D patients and related animal models, clinical trials with antioxidants failed to prevent T2D or to improve glucose homeostasis. Furthermore, animal studies yielded inconsistent results regarding the role of oxidative stress in the development of diabetes. Here, we discuss the contribution of ROS to the (patho)physiology of adipocyte function and differentiation, with particular emphasis on sources and nutritional modulators of adipocyte ROS and their functions in signaling mechanisms controlling adipogenesis and functions of mature fat cells. We propose a concept of ROS balance that is required for normal functioning of WAT. We explain how both excessive and diminished levels of ROS, e.g. resulting from oversupplementation with antioxidants, contribute to WAT dysfunction and subsequently insulin resistance.
Meaning-making in the brain has become one of the most intensely discussed topics in cognitive science. Traditional theories on cognition that emphasize abstract symbol manipulations often face a dead end: the symbol grounding problem. The embodiment idea tries to overcome this barrier by assuming that the mind is grounded in sensorimotor experiences. A recent surge in behavioral and brain-imaging studies has therefore focused on the role of the motor cortex in language processing. There is convincing evidence that concrete, action-related words rely on sensorimotor activation. Abstract concepts, however, still pose a distinct challenge for embodied theories of cognition. Fully embodied abstraction mechanisms have been formulated, but sensorimotor activation alone seems unlikely to close the explanatory gap. In this respect, the idea of integration areas, such as convergence zones or the ‘hub-and-spoke’ model, not only appears to be the most promising candidate to account for the discrepancies between concrete and abstract concepts but could also help to unite the field of cognitive science again. The current review identifies milestones in cognitive science research and recent achievements that highlight fundamental challenges, key questions and directions for future research.
Human development has far-reaching impacts on the surface of the globe. The transformation of natural land cover occurs in different forms, and urban growth is one of the most prominent transformative processes. We analyze global land cover data and extract cities as defined by maximally connected urban clusters. The analysis of the city size distribution for all cities on the globe confirms Zipf’s law. Moreover, by investigating the percolation properties of the clustering of urban areas we assess the closeness to criticality for various countries. At the critical thresholds, the urban land cover of the countries undergoes a transition from separated clusters to a gigantic component on the country scale. We study the Zipf exponents as a function of the closeness to percolation and find a systematic dependence, which could explain the deviating exponents reported in the literature. Moreover, we investigate the average size of the clusters as a function of the proximity to percolation and find country-specific behavior. By relating the standard deviation and the average of cluster sizes, analogous to Taylor’s law, we suggest an alternative way to identify the percolation transition. We calculate spatial correlations of the urban land cover and find long-range correlations. Finally, by relating the areas of cities to population figures we address the global aspect of the allometry of cities, finding an exponent δ ≈ 0.85, i.e., large cities have lower densities.
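A rank-size fit of the Zipf exponent, of the kind underlying analyses like the one above, can be sketched as follows; synthetic Pareto-distributed sizes stand in here for real urban cluster areas:

```python
import numpy as np

def zipf_exponent(sizes):
    # Rank-size regression: log(size) ~ const - zeta * log(rank).
    # Returns the fitted Zipf exponent zeta.
    s = np.sort(np.asarray(sizes, dtype=float))[::-1]
    ranks = np.arange(1, len(s) + 1)
    slope, _intercept = np.polyfit(np.log(ranks), np.log(s), 1)
    return -slope

# Synthetic "city sizes": a Pareto tail with exponent 1,
# which corresponds to Zipf's law with zeta close to 1.
rng = np.random.default_rng(1)
sizes = 1.0 / rng.random(5000)
zeta = zipf_exponent(sizes)
```

For real data, the fit would be applied to the areas (or populations) of the maximally connected urban clusters of each country, allowing the exponent to be tracked as a function of the closeness to the percolation threshold.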
The rule of law is the cornerstone of the international legal system. This paper shows, through an analysis of intergovernmental instruments, statements made by representatives of States, and negotiation records, that the rule of law at the United Nations has become increasingly contested in recent years. More precisely, the argument builds on the process of integrating the notion of the rule of law into the Sustainable Development Goals, adopted in September 2015 in the document Transforming our world: the 2030 Agenda for Sustainable Development. The main sections set out the background of the rule of law debate at the UN and the elements of the rule of law at the goal and target levels of the 2030 Agenda – especially in SDG 16 – and evaluate whether the rule of law in this context may be viewed as a normative and universal foundation of international law. The paper concludes, with reflections drawn from the process leading up to the 2030 Agenda and the final outcome document, that the rule of law – or at least strong and precise formulations of the concept – may be in decline in institutional and normative settings. This can be perceived as symptomatic of a broader crisis of the international legal order.
Reaching the Sustainable Development Goals requires a fundamental socio-economic transformation accompanied by substantial investment in low-carbon infrastructure. Such a sustainability transition represents a non-marginal change, driven by behavioral factors and systemic interactions. However, typical economic models used to assess a sustainability transition focus on marginal changes around a local optimum, which, by construction, lead to negative effects. Thus, these models do not allow evaluating a sustainability transition that might have substantial positive effects. This paper examines which mechanisms need to be included in a standard computable general equilibrium model to overcome these limitations and to give a more comprehensive view of the effects of climate change mitigation. Simulation results show that, given an ambitious greenhouse gas emission constraint and a price of carbon, positive economic effects are possible if (1) technical progress results (partly) endogenously from the model and (2) a policy intervention triggering an increase of investment is introduced. Additionally, if (3) the investment behavior of firms is influenced by their sales expectations, the effects are amplified. The results provide suggestions for policy-makers, because the outcome indicates that investment-oriented climate policies can lead to more desirable outcomes in economic, social and environmental terms.
The role of serum amyloid A and sphingosine-1-phosphate on high-density lipoprotein functionality
(2017)
The high-density lipoprotein (HDL) is one of the most important endogenous cardiovascular protective markers and an attractive target in the search for new pharmaceutical therapies and in the prevention of cardiovascular events. Some of HDL’s anti-atherogenic properties are related to the signaling molecule sphingosine-1-phosphate (S1P), which plays an important role in vascular homeostasis. In certain patient populations, however, the picture is more complicated: HDL’s protective potency is significantly reduced under pathologic conditions, and HDL might even serve as a proatherogenic particle. Under uremic conditions especially, the compounds associated with HDL change: S1P is reduced, and acute-phase proteins such as serum amyloid A (SAA) are found to be elevated in HDL. This conversion of HDL in inflammation changes its functional properties. High amounts of SAA are associated with the occurrence of cardiovascular diseases such as atherosclerosis. SAA has potent pro-atherogenic properties, which may affect HDL’s biological functions, including cholesterol efflux capacity and antioxidative and anti-inflammatory activities. This review focuses on two molecules that affect the functionality of HDL: the balance between functional and dysfunctional HDL is disturbed after the loss of the protective sphingolipid molecule S1P and the accumulation of the acute-phase protein SAA. The review also summarizes the biological activities of lipid-free and lipid-bound SAA and its impact on HDL function.
The role of alcohol and victim sexual interest in Spanish students' perceptions of sexual assault
(2017)
Two studies investigated the effects of information related to rape myths on Spanish college students’ perceptions of sexual assault. In Study 1, 92 participants read a vignette about a nonconsensual sexual encounter and rated whether it was a sexual assault and how much the woman was to blame. In the scenario, the man either used physical force or offered alcohol to the woman to overcome her resistance. Rape myth acceptance (RMA) was measured as an individual difference variable. Participants were more convinced that the incident was a sexual assault and blamed the woman less when the man had used force rather than offering her alcohol. In Study 2, 164 college students read a scenario in which the woman rejected a man’s sexual advances after having either accepted or turned down his offer of alcohol. In addition, the woman was either portrayed as being sexually attracted to him or there was no mention of her sexual interest. Participants’ RMA was again included. High RMA participants blamed the victim more than low RMA participants and were less certain that the incident was a sexual assault, especially when the victim had accepted alcohol and was described as being sexually attracted to the man. The findings are discussed in terms of their implications for the prevention and legal prosecution of sexual assault.
Over the last few decades, the methodology for the identification of customary international law (CIL) has been changing. Both elements of CIL – practice and opinio juris – have assumed novel and broader forms, as noted in the Reports of the Special Rapporteur of the International Law Commission (2013, 2014, 2015, 2016). This paper discusses these Reports and the draft conclusions, as well as the reactions of States in the Sixth Committee of the United Nations General Assembly (UNGA), highlighting the areas of consensus and contestation. This ties in with an analysis of the main doctrinal positions, with special attention given to the two elements of CIL and the role of UNGA resolutions. The underlying motivation is to assess the real or perceived crisis of CIL, and the author develops the broader argument that, in order to retain unity within international law, the internal limits of CIL must be carefully asserted.
Background: The relative dose response (RDR) test, which quantifies the increase in serum retinol after vitamin A administration, is a qualitative measure of liver vitamin A stores. Particularly in preterm infants, the feasibility of the RDR test involving blood is critically dependent on small sample volumes. Objectives: This study aimed to assess whether the RDR calculated with retinol-binding protein 4 (RBP4) might be a substitute for the classical retinol-based RDR test for assessing vitamin A status in very preterm infants. Methods: This study included preterm infants with a birth weight below 1,500 g (n = 63, median birth weight 985 g, median gestational age 27.4 weeks) who were treated with 5,000 IU retinyl palmitate intramuscularly 3 times a week for 4 weeks. On day 3 (first vitamin A injection) and day 28 of life (last vitamin A injection), the RDR was calculated and compared using serum retinol and RBP4 concentrations. Results: The concentrations of retinol (p < 0.001) and RBP4 (p < 0.01) increased significantly from day 3 to day 28. On day 3, the median (IQR) retinol-RDR was 27% (8.4-42.5) and the median RBP4-RDR was 8.4% (-3.4 to 27.9), compared to 7.5% (-10.6 to 20.8) and -0.61% (-19.7 to 15.3) on day 28. The results for retinol-RDR and RBP4-RDR revealed no significant correlation. The agreement between retinol-RDR and RBP4-RDR was poor (day 3: Cohen's κ = 0.12; day 28: Cohen's κ = 0.18). Conclusion: The RDR test based on circulating RBP4 is unlikely to reflect the hepatic vitamin A status in preterm infants.
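The RDR itself is a simple percentage; the sketch below assumes the standard formula, which the abstract does not spell out:

```python
def rdr_percent(baseline, post_dose):
    """Relative dose response in percent.

    Standard form (assumed here): RDR = (A_post - A_0) / A_post * 100,
    where A_0 is the serum retinol (or RBP4) concentration before and
    A_post the concentration after vitamin A administration. Values
    above roughly 20% are conventionally read as indicating depleted
    liver vitamin A stores.
    """
    return (post_dose - baseline) / post_dose * 100.0

# e.g. serum retinol rising from 0.70 to 1.00 umol/L gives an RDR of 30%
example = rdr_percent(0.70, 1.00)
```

Applied to both analytes, the formula yields the retinol-RDR and RBP4-RDR values whose poor agreement (Cohen's κ of 0.12 and 0.18) the study reports.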
The predictions of two contrasting approaches to the acquisition of transitive relative clauses were tested in the same groups of German-speaking participants aged 3 to 5 years. The input frequency approach predicts that object relative clauses with inanimate heads (e.g., the pullover that the man is scratching) are comprehended earlier and more accurately than those with an animate head (e.g., the man that the boy is scratching). In contrast, the structural intervention approach predicts that object relative clauses with two full NP arguments mismatching in number (e.g., the man that the boys are scratching) are comprehended earlier and more accurately than those with number-matching NPs (e.g., the man that the boy is scratching). These approaches were tested in two steps. First, we ran a corpus analysis to ensure that object relative clauses with number-mismatching NPs are not more frequent than those with number-matching NPs in child-directed speech. Next, the comprehension of these structures was tested experimentally in 3-, 4-, and 5-year-olds by means of a color naming task. By comparing the predictions of the two approaches within the same participant groups, we found that the effects predicted by the input frequency and the structural intervention approaches co-exist and that both influence children's performance on transitive relative clauses, in a manner modulated by age. The results show that a sensitivity to the animacy mismatch is already present in 3-year-olds, and that animacy is initially deployed more reliably than number to interpret relative clauses correctly. In all age groups, the animacy mismatch appears to explain children's performance, with comprehension of the frequent object relative clauses enhanced relative to the other conditions.
Starting with 4-year-olds, but especially in 5-year-olds, the number mismatch also supported comprehension, a facilitation that is unlikely to be driven by input frequency. Once children fine-tune their sensitivity to verb agreement information around the age of four, they can also deploy number marking to overcome intervention effects. This study highlights the importance of experimentally testing contrasting theoretical approaches in order to characterize the multifaceted, developmental nature of language acquisition.
A particular form of social pain is invalidation. This study therefore (a) investigates whether patients with chronic low back pain experience invalidation, (b) examines whether it has an influence on their pain, and (c) explores whether various social sources (e.g. partner and work) influence physical pain differentially. A total of 92 patients completed questionnaires, and Pearson's correlation coefficients and hierarchical linear regression analyses were computed. The analyses indicated a significant association between discounting and disability due to pain (β = .29, p < .05). In particular, discounting by the partner was linked to higher disability (β = .28, p < .05).
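For reference, Pearson's correlation coefficient used in analyses like the one above is defined as the covariance of two variables divided by the product of their standard deviations. A minimal sketch with hypothetical data (not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))   # unnormalised covariance
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))          # unnormalised std of x
    sy = math.sqrt(sum((b - my) ** 2 for b in y))          # unnormalised std of y
    return cov / (sx * sy)

# Perfectly linear example data give r = 1.
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0
```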
The classical Navier-Stokes equations of hydrodynamics are usually written in terms of vector analysis. More promising is the formulation of these equations in the language of differential forms of degree one. In this way the study of the Navier-Stokes equations includes the analysis of the de Rham complex. In particular, the Hodge theory for the de Rham complex enables one to eliminate the pressure from the equations. The Navier-Stokes equations constitute a parabolic system with a nonlinear term which makes sense only for one-forms. A simpler model of the dynamics of an incompressible viscous fluid is given by Burgers' equation. This work is aimed at the study of the invariant structure of the Navier-Stokes equations, which is closely related to the algebraic structure of the de Rham complex at step 1. To this end we introduce Navier-Stokes equations related to any elliptic quasicomplex of first-order differential operators. These equations are quite similar to the classical Navier-Stokes equations, including generalised velocity and pressure vectors. Elimination of the pressure from the generalised Navier-Stokes equations gives a good motivation for the study of the Neumann problem after Spencer for elliptic quasicomplexes. Such a study is also included in the work. We begin with a discussion of the Lamé equations within the context of elliptic quasicomplexes on compact manifolds with boundary. The non-stationary Lamé equations form a hyperbolic system. However, the study of the first mixed problem for them provides useful experience for attacking the linearised Navier-Stokes equations. On this basis we describe a class of non-linear perturbations of the Navier-Stokes equations for which the solvability results still hold.
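For reference, the classical incompressible Navier-Stokes system mentioned above reads, in standard vector notation (with velocity $u$, pressure $p$, viscosity $\nu$ and forcing $f$):

```latex
\partial_t u + (u \cdot \nabla)u - \nu \Delta u + \nabla p = f,
\qquad \nabla \cdot u = 0 .
```

Applying the orthogonal (Leray) projection $P$ onto divergence-free fields annihilates the gradient term, $P\nabla p = 0$, leaving the pressure-free evolution equation $\partial_t u = P\bigl(f + \nu \Delta u - (u \cdot \nabla)u\bigr)$. This Helmholtz decomposition of vector fields is the simplest instance of the Hodge-theoretic pressure elimination the abstract alludes to.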
The Kenya rift revisited
(2017)
We present three-dimensional (3-D) models that describe the present-day thermal and rheological state of the lithosphere of the greater Kenya rift region aiming at a better understanding of the rift evolution, with a particular focus on plume-lithosphere interactions. The key methodology applied is the 3-D integration of diverse geological and geophysical observations using gravity modelling. Accordingly, the resulting lithospheric-scale 3-D density model is consistent with (i) reviewed descriptions of lithological variations in the sedimentary and volcanic cover, (ii) known trends in crust and mantle seismic velocities as revealed by seismic and seismological data and (iii) the observed gravity field. This data-based model is the first to image a 3-D density configuration of the crystalline crust for the entire region of Kenya and northern Tanzania. An upper and a basal crustal layer are differentiated, each composed of several domains of different average densities. We interpret these domains to trace back to the Precambrian terrane amalgamation associated with the East African Orogeny and to magmatic processes during Mesozoic and Cenozoic rifting phases. In combination with seismic velocities, the densities of these crustal domains indicate compositional differences. The derived lithological trends have been used to parameterise steady-state thermal and rheological models. These models indicate that crustal and mantle temperatures decrease from the Kenya rift in the west to eastern Kenya, while the integrated strength of the lithosphere increases. Thereby, the detailed strength configuration appears strongly controlled by the complex inherited crustal structure, which may have been decisive for the onset, localisation and propagation of rifting.
The paper looks at community interests in international law from the perspective of the International Law Commission. As the topics of the Commission are diverse, the outcome of its work is often seen as providing a sense of direction regarding general aspects of international law. After defining what he understands by “community interests”, the author looks at both secondary and primary rules of international law, as they have been articulated by the Commission, as well as their relevance for the recognition and implementation of community interests. The picture which emerges only partly fits the widespread narrative of “from self-interest to community interest”. Whereas the Commission has recognized, or developed, certain primary rules which more fully articulate community interests, it has been reluctant to reformulate secondary rules of international law, with the exception of jus cogens. The Commission has more recently rather insisted that the traditional State-consent-oriented secondary rules concerning the formation of customary international law and regarding the interpretation of treaties continue to be valid in the face of other actors and forms of action which push towards the recognition of more and thicker community interests.
The El Nino-Southern Oscillation (ENSO) is the main driver of the interannual variability in eastern African rainfall, with a significant impact on vegetation and agriculture and dire consequences for food and social security. In this study, we identify and quantify the ENSO contribution to eastern African rainfall variability in order to forecast the future response of eastern African vegetation to rainfall variability under a predicted intensified ENSO. To isolate the vegetation variability due to ENSO, we removed the ENSO signal from the climate data using empirical orthogonal teleconnection (EOT) analysis. Then, we simulated the ecosystem carbon and water fluxes under the historical climate without the components related to ENSO teleconnections. We found ENSO-driven patterns in the vegetation response and confirmed that EOT analysis can successfully recover the coupled tropical Pacific sea surface temperature-eastern African rainfall teleconnection from observed datasets. We further simulated the eastern African vegetation response under future climate change as projected by climate models, and under future climate change combined with a predicted increase in ENSO intensity. Our EOT analysis highlights that climate simulations still do not capture ENSO-related rainfall variability well, and, as we show here, future vegetation would differ from what is simulated with climate model outputs that lack an accurate ENSO contribution. We simulated considerable differences in eastern African vegetation growth under an intensified ENSO regime, which will bring further environmental stress to a region with a reduced capacity to adapt to the effects of global climate change and to safeguard food security.
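The idea of removing an ENSO-related component from a rainfall series can be sketched with a simple single-index regression; this is a deliberately simplified stand-in for the EOT analysis used in the study (EOT removes the full coupled spatial pattern, not just one index), and all data here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly series over 20 years: an ENSO index (e.g. a Nino3.4
# SST anomaly) and a rainfall anomaly that partly co-varies with it.
enso = rng.standard_normal(240)
rainfall = 0.6 * enso + rng.standard_normal(240)

# Least-squares fit of rainfall on the ENSO index; the residual is the
# rainfall variability with the linearly ENSO-congruent part removed.
slope, intercept = np.polyfit(enso, rainfall, 1)
residual = rainfall - (slope * enso + intercept)

# By construction, the residual is uncorrelated with the index.
print(abs(np.corrcoef(enso, residual)[0, 1]) < 1e-8)  # True
```

Driving a vegetation model once with `rainfall` and once with `residual` (plus the climatological mean) is the conceptual analogue of the study's simulations with and without the ENSO teleconnection.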
The right to privacy in the digital age generates new challenges for international jurisdiction, and this article deals with them. It first defines the term privacy in general and presents the international legal framework. The revelations of whistleblower Snowden initiated a major political discourse, and the article gives insights into its further development. In 2015, the Human Rights Council for the first time appointed a Special Rapporteur on the right to privacy. However, the discourse is not only taking place at the political level; civil society organizations also advocate more stringent regulations and the prosecution of violations of the right to privacy. Moreover, the importance of the technology sector becomes clear. Companies like Microsoft are increasingly taking responsibility for protecting digital media against unjustified data misuse, surveillance, collection and storage. But whereas the IT sector develops very quickly, legislative processes do so rather slowly. Lastly, the individual is also held to account: protecting oneself against data misuse largely depends on acting self-responsibly, so information on protection must be clear and accessible to everyone.
West German anticommunism and the SED's Westarbeit were to some extent interrelated. From the beginning, each German state had attempted to stabilise its own social system while trying to discredit its political opponent. The claim to sole representation and the refusal to acknowledge each other delineated governmental action on both sides. Anticommunism in West Germany re-developed under the conditions of the Cold War, which allowed it to become virtually the reason of state and to serve as a tool for the exclusion of KPD supporters. In its turn, the SED branded the West German state as 'revanchist' and instrumentalised its anticommunism to persecute and eliminate opponents within the GDR. Both phenomena had an integrative and an exclusionary element.