Zeit-Not / Not-Zeit
(2017)
Theoretical background: The effects of therapist-oriented competence feedback in psychotherapy training have received little research attention so far.
Research questions: How do trainee therapists deal with feedback? What influence does regular competence feedback have on the quality of psychotherapeutic treatment (in particular therapy sessions, the therapeutic relationship, the therapist as a person, and supervision)?
Method: Eleven therapists were interviewed using a semi-structured interview guide. The responses were analysed by qualitative content analysis following Mayring (2015).
Results: The category system derived from the interviews comprised the categories "expectations of the feedback", "perception of the feedback", "processing of and dealing with feedback", "consequences, effects and changes brought about by feedback", and "suggestions for improvement".
Conclusions: Therapists strive to implement the feedback, which affects the treatment, supervision, their own person, and the therapeutic relationship.
Shortly before its sixtieth anniversary, the European Union faces internal and external challenges and finds itself in a deep crisis. On 1 March 2017, Commission President Juncker presented the "White Paper on the Future of the European Union", in which he outlines several scenarios and invites discussion of the decisions ahead. This paper is intended as one such contribution to that discussion.
In A. T. Beck's cognitive vulnerability-stress model of depression (1967, 1976), dysfunctional attitudes play a central role in the development of depression following experienced stress. This theory has shaped etiological research on depression for decades, yet the role of dysfunctional attitudes in the process of developing depression, particularly in childhood and adolescence, remains unclear. The present work addresses several questions that have received little attention in previous research, including the possibility of nonlinear effects of dysfunctional attitudes, the consequences of sample selection, developmental effects, and the specificity of any associations for depressive symptoms.
To answer these questions, data from two measurement waves of the PIER study, a large longitudinal project on developmental risks in childhood and adolescence, were used. Children and adolescents aged 9 to 18 years reported twice, about 20 months apart, on their dysfunctional attitudes, symptoms from various disorder domains, and life events, using self-report instruments.
The results provide evidence for a threshold model in which dysfunctional attitudes, independent of age and sex, act as a vulnerability factor only in the upper range of their distribution, whereas in the lower range they show no association with later depressive symptoms. Moreover, an effect as a vulnerability factor was observed only in the subgroup of children and adolescents who were largely symptom-free at baseline. The threshold model proved specific to depressive symptoms, but dysfunctional attitudes also showed (in part likewise nonlinear) effects on the development of eating-disorder symptoms and aggressive behavior. In boys aged 9 to 13, dysfunctional attitudes were additionally associated with a tendency to generate stress in achievement contexts.
Together with the findings from the PIER study reported by Sahyazici-Knaak (2015), these results indicate that dysfunctional attitudes in childhood and adolescence can be a cause, a symptom, and a consequence of depression, depending on the subgroup considered. The nonlinear effects of dysfunctional attitudes and the sample-selection effects demonstrated here offer at least a partial explanation for the heterogeneity of earlier findings. Overall, they point to complex, and not exclusively negative, consequences of dysfunctional attitudes. An adequate judgement of the "dysfunctionality" of the attitudes so labelled by A. T. Beck therefore requires taking into account the group of persons considered, the absolute score levels, and the symptom domains in question.
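The threshold model described above, in which dysfunctional attitudes contribute to later symptoms only above a cutoff, can be sketched as a piecewise-linear predictor. The cutoff, slope, and score scale below are illustrative assumptions, not values from the study:

```python
import numpy as np

def threshold_effect(da, cutoff, slope):
    """Vulnerability effect active only above a cutoff (threshold model).

    da     : dysfunctional-attitude scores (hypothetical scale)
    cutoff : threshold below which no association with later symptoms exists
    slope  : strength of the association above the threshold
    """
    da = np.asarray(da, dtype=float)
    # Below the cutoff the contribution is exactly zero; above it, linear.
    return slope * np.maximum(da - cutoff, 0.0)

# Illustrative scores spanning both sides of an assumed cutoff of 25
scores = np.array([10.0, 20.0, 30.0, 40.0])
effect = threshold_effect(scores, cutoff=25.0, slope=0.5)
```

In a regression context, such a term can be entered alongside linear predictors to test whether the piecewise form fits better than a purely linear effect.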
The Arctic is warming faster than the rest of the Earth. Among other effects, this manifests itself in an amplified warming of the Arctic boundary layer. This thesis addresses interactions between synoptic cyclones and the Arctic atmosphere on local to supra-regional scales. The starting point is a set of measurements and model simulations covering the period of the N-ICE2015 expedition, which took place in the Atlantic sector of the Arctic from early January to late June 2015.
Radiosonde measurements show the impact of synoptic cyclones most clearly in winter, when the advection of warm, moist air masses into the Arctic switches the state of the atmosphere from radiatively clear to radiatively opaque. Although this sharp contrast exists only in winter, the analysis shows that integrated water vapour is a suitable indicator of air-mass advection from lower latitudes into the Arctic in spring as well. Besides air-mass advection, the influence of the cyclones on static stability is characterized. A comparison of the N-ICE2015 observations with the SHEBA campaign (1997/1998), which took place over thicker ice, reveals similarities in the static stability of the atmosphere despite the different sea-ice regimes. The observed differences in stability can be traced back to differences in synoptic activity. This suggests that, on seasonal time scales, the thinner ice cover has only a minor influence on the thermodynamic structure of the Arctic troposphere as long as a thick snow layer covers it. A further comparison with radiosondes launched in parallel to the N-ICE2015 campaign at the AWIPEV station in Ny-Ålesund, Svalbard, makes clear that on seasonal time scales the synoptic cyclones control the weather above the orography.
Furthermore, for February 2015 the effects of nudging applied over different vertical extents on cyclone development are examined with the hydrostatic regional climate model HIRHAM5. The differences between the eight model simulations increase as the number of nudged levels decreases. The largest differences arise primarily from temporal offsets in the development of synoptic cyclones. To correct the timing of cyclone initiation, it is already sufficient to apply nudging in the lowest 250 m of the troposphere. The nudged HIRHAM5 simulations also exhibit the same positive temperature bias relative to the in situ measurements as ERA-Interim. The free-running HIRHAM, by contrast, reproduces the warm end of the N-ICE2015 temperature distribution well but has a strong negative bias, most likely resulting from an underestimation of the moisture content. Using one cyclone as an example, it is shown that nudging influences the position of upper-level lows, which in turn affect cyclone development at the surface. In addition, a variance measure suited to small ensemble sizes provides a statistical assessment of the vertical reach of the nudging: the similarity of the model simulations is generally higher in the lower troposphere than above it and has a local minimum at 500 hPa.
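Nudging restricted to the lowest model levels, as discussed above, amounts to a Newtonian relaxation of the model state toward reference data below a height limit. The explicit update form, level heights, time step, and relaxation time below are illustrative assumptions, not the HIRHAM5 implementation:

```python
import numpy as np

def nudge_profile(model, reference, level_height, max_height, dt, tau):
    """Newtonian relaxation (nudging) of a model profile toward reference
    data, applied only to levels at or below max_height (e.g. lowest 250 m).

    Uses the simple explicit form x += dt/tau * (ref - x); this and all
    parameter values are illustrative, not the operational scheme.
    """
    model = np.asarray(model, dtype=float).copy()
    reference = np.asarray(reference, dtype=float)
    mask = np.asarray(level_height) <= max_height   # only the lowest levels
    model[mask] += dt / tau * (reference[mask] - model[mask])
    return model

heights = np.array([50.0, 150.0, 250.0, 500.0, 1000.0])   # level heights (m)
model_t = np.array([270.0, 269.0, 268.0, 266.0, 262.0])   # model temperatures (K)
ref_t   = np.array([272.0, 270.0, 268.5, 267.0, 263.0])   # reference, e.g. reanalysis
nudged  = nudge_profile(model_t, ref_t, heights, max_height=250.0,
                        dt=600.0, tau=3600.0)
```

Levels above 250 m remain untouched, while the lowest levels are pulled a fraction dt/tau of the way toward the reference each time step.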
The final part of the analysis examines the interaction between the stratosphere and the troposphere for the previously considered cyclones using ERA-Interim reanalysis data. From early February 2015 onward, the position and orientation of the polar vortex produced an unusually large meridional component of the tropopause jet, favouring cyclone tracks into the central Arctic. Using one cyclone as an example, the agreement of the synoptic development with theoretical expectations for a downward influence of the stratosphere on the troposphere is highlighted. A key role is played by the nonlinear interaction between the orography of Greenland, an intrusion of stratospheric air into the troposphere, and a Rossby wave propagating toward the Arctic. Horizontal signatures of alternating rising and sinking air within the troposphere are identified as an indicator of this interaction.
The effect of cellulose-based polyelectrolytes on biomimetic calcium phosphate mineralization is described. Three cellulose derivatives, a polyanion, a polycation, and a polyzwitterion were used as additives. Scanning electron microscopy, X-ray diffraction, IR and Raman spectroscopy show that, depending on the composition of the starting solution, hydroxyapatite or brushite precipitates form. Infrared and Raman spectroscopy also show that significant amounts of nitrate ions are incorporated in the precipitates. Energy dispersive X-ray spectroscopy shows that the Ca/P ratio varies throughout the samples and resembles that of other bioinspired calcium phosphate hybrid materials. Elemental analysis shows that the carbon (i.e., polymer) contents reach 10% in some samples, clearly illustrating the formation of a true hybrid material. Overall, the data indicate that a higher polymer concentration in the reaction mixture favors the formation of polymer-enriched materials, while lower polymer concentrations or high precursor concentrations favor the formation of products that are closely related to the control samples precipitated in the absence of polymer. The results thus highlight the potential of (water-soluble) cellulose derivatives for the synthesis and design of bioinspired and bio-based hybrid materials.
The functioning of the surface water-groundwater interface as buffer, filter and reactive zone is important for water quality, ecological health and resilience of streams and riparian ecosystems. Solute and heat exchange across this interface is driven by the advection of water. Characterizing the flow conditions in the streambed is challenging as flow patterns are often complex and multidimensional, driven by surface hydraulic gradients and groundwater discharge. This thesis presents the results of an integrated approach of studies, ranging from the acquisition of field data, the development of analytical and numerical approaches to analyse vertical temperature profiles to the detailed, fully-integrated 3D numerical modelling of water and heat flux at the reach scale. All techniques were applied in order to characterize exchange flux between stream and groundwater, hyporheic flow paths and temperature patterns.
The study was conducted at a reach-scale section of the lowland Selke River, characterized by distinctive pool-riffle sequences, fluvial islands and gravel bars. Continuous time series of hydraulic heads and temperatures were measured at different depths in the river bank, the hyporheic zone and within the river. The analysis of the measured diurnal temperature variation in riverbed sediments provided detailed information about the exchange flux between river and groundwater. Beyond the one-dimensional vertical water flow in the riverbed sediment, hyporheic and parafluvial flow patterns were identified. Subsurface flow direction and magnitude around fluvial islands and gravel bars at the study site strongly depended on the position around the geomorphological structures and on the river stage. Horizontal water flux in the streambed substantially impacted temperature patterns in the streambed. At locations with substantial horizontal fluxes, the penetration depth of daily temperature fluctuations was reduced in comparison to purely vertical exchange conditions.
The calibrated and validated 3D fully-integrated model of reach-scale water and heat fluxes across the river-groundwater interface was able to accurately represent the real system. The magnitude and variations of the simulated temperatures matched the observed ones, with an average mean absolute error of 0.7 °C and an average Nash Sutcliffe Efficiency of 0.87. The simulation results showed that the water and heat exchange at the surface water-groundwater interface is highly variable in space and time with zones of daily temperature oscillations penetrating deep into the sediment and spots of daily constant temperature following the average groundwater temperature. The average hyporheic flow path temperature was found to strongly correlate with the flow path residence time (flow path length) and the temperature gradient between river and groundwater. Despite the complexity of these processes, the simulation results allowed the derivation of a general empirical relationship between the hyporheic residence times and temperature patterns. The presented results improve our understanding of the complex spatial and temporal dynamics of water flux and thermal processes within the shallow streambed. Understanding these links provides a general basis from which to assess hyporheic temperature conditions in river reaches.
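For purely conductive sediment, the penetration of the daily temperature wave discussed above follows a simple exponential damping law; deviations from it are what make streambed temperature a tracer of water flux. The diffusivity and surface amplitude below are assumed typical values for illustration, and advective transport by hyporheic flow (which the thesis quantifies) is deliberately not modelled:

```python
import math

def diurnal_amplitude(z, surface_amplitude, kappa):
    """Amplitude of the daily temperature wave at depth z (m) under pure
    conduction: A(z) = A0 * exp(-z / d), damping depth d = sqrt(2*kappa/omega).

    kappa : thermal diffusivity of the sediment (m^2/s). Up- or downwelling
    water shifts this penetration depth, which is the basis of heat-tracing
    methods for exchange flux.
    """
    omega = 2.0 * math.pi / 86400.0        # angular frequency of the daily cycle
    d = math.sqrt(2.0 * kappa / omega)     # damping depth in metres
    return surface_amplitude * math.exp(-z / d)

# Assumed saturated-sediment diffusivity ~1e-6 m^2/s, 5 K surface amplitude
a_10cm = diurnal_amplitude(0.10, surface_amplitude=5.0, kappa=1e-6)
a_50cm = diurnal_amplitude(0.50, surface_amplitude=5.0, kappa=1e-6)
```

With these values the daily signal is already strongly damped at 0.5 m depth, consistent with the observation that lateral flow further reduces the penetration depth.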
Wahrnehmung und Heterogenität von Fach- und Lehramtsstudierenden im Kontext von Lehrveranstaltungen
(2017)
VS30, slope, H800 and f0
(2017)
The aim of this paper is to investigate the ability of various site-condition proxies (SCPs) to reduce ground-motion aleatory variability and to evaluate how well SCPs capture nonlinear site effects. The SCPs used here are the time-averaged shear-wave velocity in the top 30 m (VS30), the topographical slope (slope), the fundamental resonance frequency (f0) and the depth beyond which Vs exceeds 800 m/s (H800). We first considered the performance of each SCP taken alone and then the combined performance of the six SCP pairs [VS30–f0], [VS30–H800], [f0–slope], [H800–slope], [VS30–slope] and [f0–H800]. The analysis uses a neural-network approach including a random effect, applied to a KiK-net subset, to derive ground-motion prediction equations relating ground-motion parameters such as peak ground acceleration, peak ground velocity and pseudo-spectral acceleration PSA(T) to Mw, RJB, focal depth and the SCPs. While the choice of SCP is found to have almost no impact on the median ground-motion prediction, it does impact the level of aleatory uncertainty. VS30 performs best among the single proxies at short periods (T < 0.6 s), while f0 and H800 perform better at longer periods; considering SCP pairs leads to significant improvements, in particular for the [VS30–H800] and [f0–slope] pairs. The results also indicate significant nonlinearity in the site terms for soft sites, and that the most relevant loading parameter for characterising nonlinear site response is the "stiff" spectral ordinate at the considered period.
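The idea of reducing aleatory variability with a site-condition proxy can be illustrated with a plain linear site term fitted to synthetic residuals. The paper's actual model is a neural network with random effects, so the data, the choice of log10(VS30) as proxy, and the regression form below are all illustrative assumptions:

```python
import numpy as np

def sigma_reduction(residuals, proxy):
    """Reduction in standard deviation obtained by regressing ground-motion
    residuals on a site-condition proxy (simple linear site term).

    Returns (sigma_before, sigma_after).
    """
    residuals = np.asarray(residuals, dtype=float)
    x = np.asarray(proxy, dtype=float)
    slope, intercept = np.polyfit(x, residuals, 1)   # fitted linear site term
    corrected = residuals - (slope * x + intercept)  # remove the site term
    return residuals.std(), corrected.std()

# Synthetic residuals partly explained by log10(VS30) (invented data)
rng = np.random.default_rng(0)
log_vs30 = rng.uniform(2.0, 3.0, 200)
resid = -0.8 * (log_vs30 - 2.5) + rng.normal(0.0, 0.2, 200)
sigma0, sigma1 = sigma_reduction(resid, log_vs30)
```

The drop from sigma0 to sigma1 mirrors, in miniature, how a good SCP lowers the aleatory uncertainty while leaving the median prediction largely unchanged.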
Vom Handeln und Schmusen
(2017)
Background
The kidneys are essential for the metabolism of vitamin A (retinol) and its transport proteins retinol-binding protein 4 (RBP4) and transthyretin. Little is known about changes in serum concentration after living donor kidney transplantation (LDKT) as a consequence of unilateral nephrectomy; although an association of these parameters with the risk of cardiovascular diseases and insulin resistance has been suggested. Therefore we analyzed the concentration of retinol, RBP4, apoRBP4 and transthyretin in serum of 20 living-kidney donors and respective recipients at baseline as well as 6 weeks and 6 months after LDKT.
Results
As a consequence of LDKT, the kidney function of recipients was improved while the kidney function of donors was moderately reduced within 6 weeks after LDKT. With regard to vitamin A metabolism, the recipients revealed higher levels of retinol, RBP4, transthyretin and apoRBP4 before LDKT in comparison to donors. After LDKT, the levels of all four parameters decreased in serum of the recipients, while retinol, RBP4 as well as apoRBP4 serum levels of donors increased and remained increased during the follow-up period of 6 months.
Conclusion
LDKT is generally regarded as beneficial for allograft recipients and not particularly detrimental for the donors. However, this study demonstrates that a moderate reduction of kidney function following unilateral nephrectomy resulted in an imbalance of components of vitamin A metabolism, with a significant increase of retinol, RBP4 and apoRBP4 concentrations in the serum of donors.
Vermessung im Sonnensystem
(2017)
The missions flown into the solar system to date have delivered an enormous wealth of data in different formats, in the form of images and digital measurements. The surface processes of planetary bodies that can be investigated with these data are extremely diverse, ranging from impact craters to volcanism and tectonics to all forms of erosion and sedimentation. To understand these processes, methods originally developed for data analysis on Earth are applied. However, all of these methods must be adapted, in part at considerable effort, to the physical conditions of each body. The development of cartographic methods for abstracting this information, that is, for capturing, geomorphologically analysing and visualizing planetary surfaces and processes, has only just begun. To advance these developments, the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt), in cooperation with the University of Potsdam (Institute of Geography, Geoinformatics Research Group, Prof. Dr. Asche), has in a first step developed cartographic analysis methods for Mars and the asteroids Ceres and Vesta within dissertations and research projects.
In this study, we validate and compare elevation accuracy and geomorphic metrics of satellite-derived digital elevation models (DEMs) on the southern Central Andean Plateau. The plateau has an average elevation of 3.7 km and is characterized by diverse topography and relief, lack of vegetation, and clear skies that create ideal conditions for remote sensing. At 30 m resolution, SRTM-C, ASTER GDEM2, stacked ASTER L1A stereopair DEM, ALOS World 3D, and TanDEM-X have been analyzed. The higher-resolution datasets include 12 m TanDEM-X, 10 m single-CoSSC TerraSAR-X/TanDEM-X DEMs, and 5 m ALOS World 3D. These DEMs are state of the art for optical (ASTER and ALOS) and radar (SRTM-C and TanDEM-X) spaceborne sensors. We assessed vertical accuracy by comparing standard deviations of the DEM elevation versus 307,509 differential GPS measurements across 4000 m of elevation. For the 30 m DEMs, the ASTER datasets had the highest vertical standard deviation at > 6.5 m, whereas the SRTM-C, ALOS World 3D, and TanDEM-X were all < 3.5 m. Higher-resolution DEMs generally had lower uncertainty, with both the 12 m TanDEM-X and 5 m ALOS World 3D having < 2 m vertical standard deviation. Analysis of vertical uncertainty with respect to terrain elevation, slope, and aspect revealed low uncertainty across these attributes for SRTM-C (30 m), TanDEM-X (12–30 m), and ALOS World 3D (5–30 m). Single-CoSSC TerraSAR-X/TanDEM-X 10 m DEMs and the 30 m ASTER GDEM2 displayed slight aspect biases, which were removed in their stacked counterparts (TanDEM-X and ASTER Stack). Based on low vertical standard deviations and visual inspection alongside optical satellite data, we selected the 30 m SRTM-C, 12–30 m TanDEM-X, 10 m single-CoSSC TerraSAR-X/TanDEM-X, and 5 m ALOS World 3D for geomorphic metric comparison in a 66 km² catchment with a distinct river knickpoint. Consistent m/n values were found using chi plot channel profile analysis, regardless of DEM type and spatial resolution.
Slope, curvature, and drainage area were calculated and plotting schemes were used to assess basin-wide differences in the hillslope-to-valley transition related to the knickpoint. While slope and hillslope length measurements vary little between datasets, curvature displays higher magnitude measurements with fining resolution. This is especially true for the optical 5m ALOS World 3D DEM, which demonstrated high-frequency noise in 2–8 pixel steps through a Fourier frequency analysis. The improvements in accurate space-radar DEMs (e.g., TanDEM-X) for geomorphometry are promising, but airborne or terrestrial data are still necessary for meter-scale analysis.
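Slope and curvature maps of the kind compared above can be derived from a gridded DEM with centred finite differences. This minimal numpy sketch (a tilted-plane test surface with assumed 30 m cells) is a stand-in for the study's geomorphometric toolchain, not its exact algorithm:

```python
import numpy as np

def slope_and_curvature(dem, cell_size):
    """Slope (gradient magnitude, rise over run) and Laplacian curvature of a
    gridded DEM via centred finite differences."""
    dzdy, dzdx = np.gradient(dem, cell_size)       # first derivatives
    slope = np.sqrt(dzdx**2 + dzdy**2)
    d2y = np.gradient(dzdy, cell_size, axis=0)     # second derivatives
    d2x = np.gradient(dzdx, cell_size, axis=1)
    curvature = d2x + d2y                          # Laplacian of elevation
    return slope, curvature

# Tilted plane: constant slope, zero curvature everywhere (illustrative DEM)
y, x = np.mgrid[0:5, 0:5]
dem = 2.0 * x                      # rises 2 m per cell eastward
slope, curv = slope_and_curvature(dem, cell_size=30.0)
```

On real DEMs the same derivatives amplify high-frequency noise at fine resolutions, which is exactly the effect reported above for the 5 m optical dataset.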
Background: The plasma concentration of retinol is an accepted indicator for assessing the vitamin A (retinol) status of cattle. However, the determination of vitamin A requires a time-consuming multi-step procedure, which needs specific equipment to perform extraction, centrifugation or saponification prior to high-performance liquid chromatography (HPLC).
Methods: The concentrations of retinol in whole blood (n = 10), plasma (n = 132) and serum (n = 61) were measured by a new rapid cow-side test (iCheck™ FLUORO) and compared with those by HPLC in two independent laboratories in Germany (DE) and Japan (JP).
Results: Retinol concentrations in plasma ranged from 0.033 to 0.532 mg/L, and in serum from 0.043 to 0.360 mg/L (HPLC method). No significant differences in retinol levels were observed between the new rapid cow-side test and HPLC performed in different laboratories (HPLC vs. iCheck™ FLUORO: 0.320 ± 0.047 mg/L vs. 0.333 ± 0.044 mg/L, and 0.240 ± 0.096 mg/L vs. 0.241 ± 0.069 mg/L, lab DE and lab JP, respectively). A similar comparability was observed when whole blood was used (HPLC vs. iCheck™ FLUORO: 0.353 ± 0.084 mg/L vs. 0.341 ± 0.064 mg/L). Results showed good agreement between both methods, with a correlation coefficient of r² = 0.87 (P < 0.001), and Bland-Altman plots revealed no significant bias for any comparison.
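The Bland-Altman method-agreement analysis mentioned above reduces to a mean difference (bias) and 1.96-SD limits of agreement for paired measurements. The paired retinol readings below are invented for illustration, not the study's data:

```python
import math

def bland_altman(method_a, method_b):
    """Bland-Altman agreement statistics for two measurement methods:
    mean difference (bias) and 95% limits of agreement (bias +/- 1.96 SD)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Illustrative paired retinol readings (mg/L)
hplc   = [0.32, 0.24, 0.35, 0.28, 0.30]
icheck = [0.33, 0.24, 0.34, 0.29, 0.31]
bias, lo, hi = bland_altman(hplc, icheck)
```

A bias close to zero with narrow limits of agreement, as reported for the cow-side test, indicates that the two methods can be used interchangeably.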
Conclusions: With the new rapid cow-side test (iCheck™ FLUORO) retinol concentrations in cattle can be reliably assessed within a few minutes and directly in the barn using even whole blood without the necessity of prior centrifugation. The ease of the application of the new rapid cow-side test and its portability can improve the diagnostic of vitamin A status and will help to control vitamin A supplementation in specific vitamin A feeding regimes such as used to optimize health status in calves or meat marbling in Japanese Black cattle.
Statistics Canada, Canada's national statistics agency, offers a suite of spatial files for mapping and analysis of its various population data products. The following article showcases possibilities and shortfalls of the existing spatial files for mapping population data, and provides an overview of the structure of the available boundary files from the regional to the dissemination block level. Due to Canada's highly dispersed population, mapping its distribution and density can be challenging. Common mapping techniques such as the choropleth method are suitable only for mapping spatially high-resolution data such as data at the dissemination area level. To allow for mapping of population data at less detailed levels such as census divisions or provinces, Statistics Canada has created a so-called ecumene boundary file which outlines the inhabited area of Canada and can be used to more accurately visualize Canada's population distribution and density.
Using behavioral observation for the longitudinal study of anger regulation in middle childhood
(2017)
Assessing anger regulation via self-reports is fraught with problems, especially among children. Behavioral observation provides an ecologically valid alternative for measuring anger regulation. The present study uses data from two waves of a longitudinal study to present a behavioral observation approach for measuring anger regulation in middle childhood. At T1, 599 children from Germany (6–10 years old) were observed during an anger eliciting task, and the use of anger regulation strategies was coded. At T2, 3 years later, the observation was repeated with an age-appropriate version of the same task. Partial metric measurement invariance over time demonstrated the structural equivalence of the two versions. Maladaptive anger regulation between the two time points showed moderate stability. Validity was established by showing correlations with aggressive behavior, peer problems, and conduct problems (concurrent and predictive criterion validity). The study presents an ecologically valid and economic approach to assessing anger regulation strategies in situations.
Background: Although the benefits for health of physical activity (PA) are well documented, the majority of the population is unable to implement present recommendations into daily routine. Mobile health (mHealth) apps could help increase the level of PA. However, this is contingent on the interest of potential users.
Objective: The aim of this study was the explorative, nuanced determination of the interest in mHealth apps with respect to PA among students and staff of a university.
Methods: We conducted a Web-based survey from June to July 2015 in which students and employees from the University of Potsdam were asked about their activity level, interest in mHealth fitness apps, chronic diseases, and sociodemographic parameters.
Results: A total of 1217 students (67.30% [819/1217] female; mean age 26.0 years [SD 4.9]) and 485 employees (67.5% [327/485] female; mean age 42.7 years [SD 11.7]) participated in the survey. The recommendation for PA (3 times per week) was not met by 70.1% (340/485) of employees and 52.67% (641/1217) of students. Within these groups, 53.2% of students (341/641) and 44.2% of employees (150/340), independent of age, sex, body mass index (BMI), and level of education or professional qualification, indicated an interest in mHealth fitness apps.
Conclusions: Even in a younger, highly educated population, the majority of respondents reported an insufficient level of PA. About half of them indicated their interest in training support. This suggests that the use of personalized mobile fitness apps may become increasingly significant for a positive change of lifestyle.
Background: Inferring regulatory interactions between genes from transcriptomics time-resolved data, yielding reverse engineered gene regulatory networks, is of paramount importance to systems biology and bioinformatics studies. Accurate methods to address this problem can ultimately provide a deeper insight into the complexity, behavior, and functions of the underlying biological systems. However, the large number of interacting genes coupled with short and often noisy time-resolved read-outs of the system renders the reverse engineering a challenging task. Therefore, the development and assessment of methods which are computationally efficient, robust against noise, applicable to short time series data, and preferably capable of reconstructing the directionality of the regulatory interactions remains a pressing research problem with valuable applications.
Results: Here we perform the largest systematic analysis of a set of similarity measures and scoring schemes within the scope of the relevance network approach, which are commonly used for gene regulatory network reconstruction from time series data. In addition, we define and analyze several novel measures and schemes which are particularly suitable for short transcriptomics time series. We also compare the 21 considered measures and 6 scoring schemes according to their ability to correctly reconstruct such networks from short time series data by calculating summary statistics based on the corresponding specificity and sensitivity. Our results demonstrate that rank- and symbol-based measures have the highest performance in inferring regulatory interactions. In addition, the proposed asymmetric-weighting scoring scheme proved valuable in reducing the number of false positive interactions. On the other hand, Granger causality as well as information-theoretic measures, frequently used in the inference of regulatory networks, show low performance on the short time series analyzed in this study.
Conclusions: Our study is intended to serve as a guide for choosing a particular combination of similarity measures and scoring schemes suitable for reconstruction of gene regulatory networks from short time series data. We show that further improvement of algorithms for reverse engineering can be obtained if one considers measures that are rooted in the study of symbolic dynamics or ranks, in contrast to the application of common similarity measures which do not consider the temporal character of the employed data. Moreover, we establish that the asymmetric weighting scoring scheme together with symbol based measures (for low noise level) and rank based measures (for high noise level) are the most suitable choices.
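A rank-based relevance network of the kind favoured above can be sketched by thresholding pairwise Spearman correlations between expression time series. The expression matrix and threshold are invented for illustration, and the study's asymmetric-weighting scoring scheme is not reproduced here, only plain thresholding:

```python
import numpy as np

def spearman_matrix(expr):
    """Pairwise Spearman correlations between expression time series
    (rows = genes, columns = time points). Ranks are computed per gene;
    ties are not handled, so values per row must be distinct."""
    ranks = expr.argsort(axis=1).argsort(axis=1).astype(float)
    return np.corrcoef(ranks)

def relevance_network(expr, threshold):
    """Undirected relevance network: connect gene pairs whose absolute
    rank correlation exceeds the threshold."""
    corr = spearman_matrix(expr)
    adj = np.abs(corr) > threshold
    np.fill_diagonal(adj, False)   # no self-edges
    return adj

# Three genes over five time points: g0 and g1 co-regulated, g2 unrelated
expr = np.array([
    [1.0, 2.0, 3.0, 4.0, 5.0],
    [2.1, 2.9, 4.2, 4.8, 6.0],
    [3.0, 1.0, 4.0, 1.5, 2.0],
])
adj = relevance_network(expr, threshold=0.8)
```

Because only the ordering of values enters the score, the measure is robust to monotone noise, which is one reason rank-based measures do well on short, noisy series.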
The femtosecond excited-state dynamics following resonant photoexcitation enable the selective deformation of N-H and N-C chemical bonds in 2-thiopyridone in aqueous solution with optical or X-ray pulses. In combination with multiconfigurational quantum-chemical calculations, the orbital-specific electronic structure and its ultrafast dynamics accessed with resonant inelastic X-ray scattering at the N 1s level using synchrotron radiation and the soft X-ray free-electron laser LCLS provide direct evidence for this controlled photoinduced molecular deformation and its ultrashort time-scale.
Many studies have demonstrated interactions between number processing and either spatial codes (effects of spatial-numerical associations) or visual size-related codes (size-congruity effect). However, the interrelatedness of these two number couplings is still unclear. The present study examines the simultaneous occurrence of space- and size-numerical congruency effects and their interactions both within and across trials. In a magnitude judgment task, physically small or large digits were presented left or right of the screen center. The reaction time analysis revealed that space- and size-congruency effects coexisted in parallel and combined additively. Moreover, a selective sequential modulation of the two congruency effects was found. The size-congruency effect was reduced after size-incongruent trials. The space-congruency effect, however, was only affected by the previous space congruency. The observed independence of spatial-numerical and within-magnitude associations is interpreted as evidence that the two couplings reflect different attributes of numerical meaning, possibly related to ordinality and cardinality.
In the context of back pain, great emphasis has been placed on the importance of trunk stability, especially in situations requiring compensation of the repetitive, intense loading induced during high-performance activities, e.g., jumping or landing. This study aims to evaluate trunk muscle activity during drop jumps in adolescent athletes with back pain (BP) compared to athletes without back pain (NBP). Eleven adolescent athletes suffering from back pain (BP: m/f: n = 4/7; 15.9 ± 1.3 y; 176 ± 11 cm; 68 ± 11 kg; 12.4 ± 10.5 h/week training) and 11 matched athletes without back pain (NBP: m/f: n = 4/7; 15.5 ± 1.3 y; 174 ± 7 cm; 67 ± 8 kg; 14.9 ± 9.5 h/week training) were evaluated. Subjects performed 3 drop jumps onto a force plate (ground reaction force). Bilateral 12-lead SEMG (surface electromyography) was applied to assess trunk muscle activity. Ground contact time [ms], maximum vertical jump force [N], jump time [ms] and the jump performance index [m/s] were calculated for the drop jumps. SEMG amplitudes (RMS: root mean square [%]) for all 12 single muscles were normalized to MIVC (maximum isometric voluntary contraction) and analyzed in 4 time windows (100 ms pre- and 200 ms post-initial ground contact, 100 ms pre- and 200 ms post-landing) as outcome variables. In addition, muscles were grouped and analyzed as ventral and dorsal muscles, as well as straight and transverse trunk muscles. Drop jump ground reaction force variables did not differ between NBP and BP (p > 0.05). Mm. obliquus externus and internus abdominis presented higher SEMG amplitudes (1.3–1.9-fold) for BP (p < 0.05). Mm. rectus abdominis, erector spinae thoracic/lumbar and latissimus dorsi did not differ (p > 0.05). The muscle group analysis over the whole jumping cycle showed statistically significantly higher SEMG amplitudes for BP in the ventral (p = 0.031) and transverse muscles (p = 0.020) compared to NBP.
Higher activity of transverse, but not straight, trunk muscles might indicate a specific compensation strategy to support trunk stability in athletes with back pain during drop jumps. Therefore, exercises favoring the transverse trunk muscles could be recommended for back pain treatment.
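The SEMG processing described above (RMS amplitudes normalized to MIVC) can be sketched as follows. The sampling rate, window length, and synthetic signals are assumptions for illustration, not the study's data.

```python
import numpy as np

def rms(signal):
    # Root mean square amplitude of an EMG window.
    return np.sqrt(np.mean(np.square(signal)))

def normalized_rms(window, mvic_signal):
    # Express the window's RMS as a percentage of the RMS recorded during a
    # maximum isometric voluntary contraction (%MIVC).
    return 100.0 * rms(window) / rms(mvic_signal)

fs = 1000  # assumed sampling rate in Hz
rng = np.random.default_rng(1)
mvic = rng.normal(0.0, 1.0, 3 * fs)     # 3 s reference contraction (synthetic)
window = rng.normal(0.0, 0.4, fs // 5)  # 200 ms post ground-contact window (synthetic)

print(round(normalized_rms(window, mvic), 1))  # ~40 %MIVC for these synthetic amplitudes
```

In practice each of the 12 muscles would be processed this way in each of the 4 analysis windows, yielding the normalized amplitude outcome variables.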
Trunk loading and back pain
(2017)
An essential function of the trunk is the compensation of external forces and loads in order to guarantee stability. Stabilising the trunk during sudden, repetitive loading in everyday tasks, as well as during performance, is important in order to protect against injury. Hence, reduced trunk stability is accepted as a risk factor for the development of back pain (BP). An altered activity pattern, including extended response and activation times and increased co-contraction of the trunk muscles, as well as a reduced range of motion and increased movement variability of the trunk, is evident in back pain patients (BPP). These differences from healthy controls (H) have been evaluated primarily in quasi-static test situations involving isolated loading applied directly to the trunk. Nevertheless, transferability to everyday, dynamic situations is under debate. Therefore, the aim of this project is to analyse the 3-dimensional motion and neuromuscular reflex activity of the trunk in response to dynamic trunk loading in healthy controls (H) and back pain patients (BPP).
A measurement tool consisting of dynamic test situations was developed to assess trunk stability. During these tests, loading of the trunk is generated by the upper and lower limbs, with and without additional perturbation; lifting of objects and stumbling while walking serve as adequate representative tasks. Neuromuscular activity of the muscles encompassing the trunk was assessed with a 12-lead EMG. In addition, three-dimensional trunk motion was analysed using a newly developed multi-segmental trunk model. The set-up was checked for reproducibility as well as validity. Afterwards, the defined measurement set-up was applied to compare trunk stability in healthy subjects and back pain patients.
Clinically acceptable to excellent reliability could be shown for the methods (EMG/kinematics) used in the test situations. No changes in trunk motion pattern could be observed in healthy adults during continuous loading (lifting of objects) of different weights. In contrast, sudden loading of the trunk through perturbations to the lower limbs during walking led to an increased neuromuscular activity and ROM of the trunk. Moreover, BPP showed a delayed muscle response time and extended duration until maximum neuromuscular activity in response to sudden walking perturbations compared to healthy controls. In addition, a reduced lateral flexion of the trunk during perturbation could be shown in BPP.
It is concluded that perturbed gait is suitable to provoke higher demands on trunk stability in adults. The altered neuromuscular and kinematic compensation pattern in back pain patients can be interpreted as increased spine loading and reduced trunk stability in patients. This novel assessment of trunk stability is therefore suitable to identify deficits in BPP. The findings imply that affected BPP should be assigned to therapy interventions that focus on stabilisation of the trunk and aim to improve neuromuscular control in dynamic situations. Hence, sensorimotor training (SMT) to enhance trunk stability and the compensation of unexpected sudden loading should be preferred.
Trends in precipitation over Germany and the Rhine basin related to changes in weather patterns
(2017)
Precipitation, as a central meteorological driver of agriculture, water security, and human well-being, has long received special attention. Lack of precipitation may have devastating effects such as crop failure and water scarcity. Abundance of precipitation, on the other hand, may result in hazardous events such as flooding and, again, crop failure. Thus, great effort has been spent on tracking changes in precipitation and relating them to the underlying processes. Particularly in the face of global warming, and given the link between temperature and the atmospheric water holding capacity, research is needed to understand the effect of climate change on precipitation.
The present work aims at understanding past changes in precipitation and other meteorological variables. Trends were detected for various time periods and related to associated changes in large-scale atmospheric circulation. The results derived in this thesis may be used as the foundation for attributing changes in floods to climate change. Assumptions needed for the downscaling of large-scale circulation model output to local climate stations are tested and verified here.
In a first step, changes in precipitation over Germany were detected, focussing not only on precipitation totals, but also on properties of the statistical distribution, transition probabilities as a measure for wet/dry spells, and extreme precipitation events.
Shifting the spatial focus to the Rhine catchment, one of the major water lifelines of Europe and the largest river basin in Germany, detected trends in precipitation and other meteorological variables were analysed in relation to the states of an "optimal" weather pattern classification. The weather pattern classification was developed seeking the best skill in explaining the variance of local climate variables.
The last question addressed whether observed changes in local climate variables are attributable to changes in the frequency of weather patterns or rather to changes within the patterns themselves. A common assumption for a downscaling approach using weather patterns and a stochastic weather generator is that climate change is expressed only as a changed occurrence of patterns, with the pattern properties remaining constant. This assumption was validated, and the ability of the latest generation of general circulation models to reproduce the weather patterns was evaluated.
Paper 1
Precipitation changes in Germany in the period 1951-2006 can be summarised briefly as negative in summer and positive in all other seasons. Different precipitation characteristics confirm the trends in total precipitation: while winter mean and extreme precipitation have increased, wet spells tend to be longer as well (expressed as increased probability for a wet day followed by another wet day). For summer the opposite was observed: reduced total precipitation, supported by decreasing mean and extreme precipitation and reflected in an increasing length of dry spells.
Apart from this general summary for the whole of Germany, the spatial distribution within the country is much more differentiated. Increases in winter precipitation are most pronounced in the north-west and south-east of Germany, while precipitation increases are highest in the west for spring and in the south for autumn. Decreasing summer precipitation was observed in most regions of Germany, with particular focus on the south and west.
The seasonal picture, however, was again represented differently in the contributing months: e.g. increasing autumn precipitation in the south of Germany is formed by strong trends in the south-west in October and in the south-east in November. These results emphasise the high spatial and temporal organisation of precipitation changes.
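The wet-spell measure used in the summary above (an increased probability of a wet day being followed by another wet day) is a first-order Markov transition probability. A minimal sketch of its estimation from a daily series, with an assumed wet-day threshold, could look like this:

```python
import numpy as np

def wet_wet_probability(precip, wet_threshold=1.0):
    # Estimate P(wet tomorrow | wet today) from a daily precipitation series.
    # A day counts as wet at or above `wet_threshold` (mm; threshold assumed).
    wet = np.asarray(precip) >= wet_threshold
    today, tomorrow = wet[:-1], wet[1:]
    n_wet_today = today.sum()
    if n_wet_today == 0:
        return float("nan")
    return (today & tomorrow).sum() / n_wet_today

# Toy series (mm/day): a 3-day wet spell, a dry spell, a 2-day wet spell.
series = [5.2, 3.1, 2.4, 0.0, 0.0, 0.2, 4.0, 6.5, 0.0]
print(wet_wet_probability(series))  # 3 of 5 wet days are followed by a wet day: 0.6
```

An increasing trend in this probability over winters corresponds to the lengthening wet spells reported above; the analogous dry-to-dry probability captures the lengthening summer dry spells.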
Paper 2
The next step towards attributing precipitation trends to changes in large-scale atmospheric patterns was the derivation of a weather pattern classification that sufficiently stratifies the local climate variables under investigation. Focussing on temperature, radiation, and humidity in addition to precipitation, a classification based on mean sea level pressure, near-surface temperature, and specific humidity was found to have the best skill in explaining the variance of the local variables. A rather high number of 40 patterns was selected, allowing typical pressure patterns being assigned to specific seasons by the associated temperature patterns. While the skill in explaining precipitation variance is rather low, better skill was achieved for radiation and, of course, temperature.
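One simple way to quantify the "skill in explaining the variance" of a local variable by a pattern classification is the fraction of total variance removed by conditioning on the pattern. The sketch below uses synthetic labels and temperatures; the thesis's actual classification method and skill score may differ.

```python
import numpy as np

def explained_variance(labels, values):
    # Fraction of the variance of a local variable explained by a weather
    # pattern classification: 1 - (within-pattern variance / total variance).
    values = np.asarray(values, dtype=float)
    total = np.var(values)
    within = 0.0
    for lab in np.unique(labels):
        members = values[labels == lab]
        within += members.size * np.var(members)
    within /= values.size
    return 1.0 - within / total

rng = np.random.default_rng(2)
labels = rng.integers(0, 4, 2000)                # 4 hypothetical patterns
pattern_means = np.array([0.0, 2.0, 5.0, 9.0])   # pattern-specific temperature level
temps = pattern_means[labels] + rng.normal(0.0, 1.0, 2000)

print(round(explained_variance(labels, temps), 2))
```

A classification is preferred when this fraction is high for the target variables; as noted above, it is typically much higher for temperature than for precipitation.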
Most of the recent GCMs from the CMIP5 ensemble were found to reproduce these weather patterns sufficiently well in terms of frequency, seasonality, and persistence.
Paper 3
Finally, the weather patterns were analysed for trends in pattern frequency, seasonality, persistence, and trends in pattern-specific precipitation and temperature. To overcome uncertainties in trend detection resulting from the selected time period, all possible periods in 1901-2010 with a minimum length of 31 years were considered. Thus, the assumption of a constant link between patterns and local weather was tested rigorously. This assumption was found to hold true only partly. While changes in temperature are mainly attributable to changes in pattern frequency, for precipitation a substantial amount of change was detected within individual patterns.
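The multi-period trend scan described above can be sketched as follows: compute a linear trend for every contiguous sub-period of at least 31 years. The synthetic series with multidecadal variability is an assumption for illustration; it reproduces the observation that the magnitude and even the sign of a trend depend on the chosen period.

```python
import numpy as np

def trend_over_all_periods(years, values, min_length=31):
    # Linear trend (slope per year) for every contiguous sub-period of at
    # least `min_length` years, as in the multi-period scan described above.
    trends = {}
    n = len(years)
    for start in range(n):
        for end in range(start + min_length, n + 1):
            y, v = years[start:end], values[start:end]
            slope = np.polyfit(y, v, 1)[0]
            trends[(years[start], years[end - 1])] = slope
    return trends

# Synthetic "precipitation" with multidecadal variability (60-year cycle):
# sub-period trends then differ in magnitude and even in sign.
years = np.arange(1901, 2011)
values = 800.0 + 40.0 * np.sin(2 * np.pi * (years - 1901) / 60.0)
trends = trend_over_all_periods(years, values)
print(min(trends.values()) < 0 < max(trends.values()))  # True: sign depends on period
```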
Magnitude and even sign of trends depend highly on the selected time period. The frequency of certain patterns is related to the long-term variability of large-scale circulation modes.
Changes in precipitation were found to be heterogeneous not only in space, but also in time: statements on trends are valid only for the specific time period under investigation. While part of the trends can be attributed to changes in the large-scale circulation, distinct changes were found within single weather patterns as well.
The results emphasise the need to analyse multiple periods for thorough trend detection wherever possible and add some note of caution to the application of downscaling approaches based on weather patterns, as they might misinterpret the effect of climate change due to neglecting within-type trends.
Translating innovation
(2017)
This doctoral thesis studies the process of innovation adoption in public administrations, addressing the research question of how an innovation is translated to a local context. The study empirically explores Design Thinking as a new problem-solving approach introduced by a federal government organisation in Singapore. With its focus on user-centeredness, collaboration and iteration, Design Thinking seems to offer a new way to engage recipients and other stakeholders of public services as well as to re-think the policy design process from a user’s point of view. Pioneered in the private sector, early adopters of the methodology include civil services in Australia, Denmark, the United Kingdom, the United States as well as Singapore. Hitherto, there is not much evidence on how and for which purposes Design Thinking is used in the public sector.
For the purpose of this study, innovation adoption is framed in an institutionalist perspective addressing how concepts are translated to local contexts. The study rejects simplistic views of the innovation adoption process, in which an idea diffuses to another setting without adaptation. The translation perspective is fruitful because it captures the multidimensionality and ‘messiness’ of innovation adoption. More specifically, the overall research question addressed in this study is: How has Design Thinking been translated to the local context of the public sector organisation under investigation? And from a theoretical point of view: What can we learn from translation theory about innovation adoption processes?
Moreover, there are only a few empirical studies of organisations adopting Design Thinking, and most of them focus on private organisations. We know very little about how Design Thinking is embedded in public sector organisations. This study therefore provides further empirical evidence of how Design Thinking is used in a public sector organisation, especially with regard to its application to policy work, which has so far been under-researched.
An exploratory single case study approach was chosen to provide an in-depth analysis of the innovation adoption process. Based on a purposive, theory-driven sampling approach, a Singaporean Ministry was selected because it represented an organisational setting in which Design Thinking had been embedded for several years, making it a relevant case with regard to the research question. Following a qualitative research design, 28 semi-structured interviews (45-100 minutes) with employees and managers were conducted. The interview data was triangulated with observations and documents collected during a field research stay in Singapore.
The empirical study of innovation adoption in a single organisation focused on the intra-organisational perspective, with the aim of capturing the variations in translation that occur during the adoption process. In so doing, this study first opened the black box often assumed in implementation studies. Second, this research advances translation studies not only by showing variance, but also by deriving explanatory factors. The main differences in the translation of Design Thinking occurred between service delivery and policy divisions, as well as between the first adopter and the rest of the organisation. Five factors played a role in the intra-organisational translation of Design Thinking in the Singaporean Ministry: task type, mode of adoption, type of expertise, sequence of adoption, and the adoption of similar practices.
Working memory (WM) performance declines with age. However, several studies have shown that WM training may lead to performance increases not only in the trained task, but also in untrained cognitive transfer tasks. It has been suggested that transfer effects occur if the training task and the transfer task share specific processing components that are supposedly processed in the same brain areas. In the current study, we investigated whether single-task WM training and training-related alterations in neural activity might support performance in a dual-task setting, thus assessing transfer effects to higher-order control processes in the context of dual-task coordination. A sample of older adults (age 60–72) was assigned to either a training or a control group. The training group participated in 12 sessions of an adaptive n-back training. At pre- and post-measurement, a multimodal dual-task was performed by all participants to assess transfer effects. This task consisted of two simultaneous delayed match-to-sample WM tasks using two different stimulus modalities (visual and auditory) that were performed either in isolation (single-task) or in conjunction (dual-task). A subgroup also participated in functional magnetic resonance imaging (fMRI) during the performance of the n-back task before and after training. While no transfer to single-task performance was found, dual-task costs in both the visual modality (p < 0.05) and the auditory modality (p < 0.05) decreased at post-measurement in the training but not in the control group. In the fMRI subgroup of the training participants, neural activity changes in the left dorsolateral prefrontal cortex (DLPFC) during one-back predicted post-training auditory dual-task costs, while neural activity changes in the right DLPFC during three-back predicted visual dual-task costs. These results might indicate an improvement in central executive processing that could facilitate both WM and dual-task coordination.
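Dual-task costs as discussed above are commonly computed as the relative performance decline from the single-task to the dual-task condition. Whether the study used exactly this convention is an assumption, and the numbers below are hypothetical.

```python
def dual_task_cost(single_task_accuracy, dual_task_accuracy):
    # Relative performance decline from single- to dual-task condition, in
    # percent: one common convention for quantifying dual-task costs.
    return 100.0 * (single_task_accuracy - dual_task_accuracy) / single_task_accuracy

# Hypothetical pre/post values for the visual modality of one participant:
pre_cost = dual_task_cost(0.90, 0.72)   # ~20 % cost before training
post_cost = dual_task_cost(0.90, 0.81)  # ~10 % cost after training
print(pre_cost, post_cost)
```

A training-related reduction in costs (post below pre), with single-task performance unchanged, is the pattern the study interprets as transfer to dual-task coordination.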
Standard Indonesian, a language with a transparent orthography, is spoken by over 160 million people and is the primary language of instruction in education and government in Indonesia. An assessment battery of reading and reading-related skills was developed as a starting point for the diagnosis of dyslexia in beginner learners. Founded on the International Dyslexia Association’s definition of dyslexia, the test battery comprises nine empirically motivated reading and reading-related tasks assessing word reading, pseudoword reading, arithmetic, rapid automatized naming, phoneme deletion, forward and backward digit span, verbal fluency, orthographic choice (spelling), and writing. The test was validated by computing the relationships between the outcomes on the reading-skill and reading-related measures by means of correlation and factor analyses. External variables, i.e., school grades and teacher ratings of the reading and learning abilities of individual students, were also utilized to provide evidence of its construct validity. Four variables were found to be significantly related to the reading-skill measures: phonological awareness, rapid naming, spelling, and digit span. The current study on reading development in Standard Indonesian confirms findings from other languages with transparent orthographies and suggests a test battery, including preliminary norm scores, for the screening and assessment of elementary school children learning to read Standard Indonesian.
As an emerging sub-field of music information retrieval (MIR), music imagery information retrieval (MIIR) aims to retrieve information from brain activity recorded during music cognition, such as listening to or imagining music pieces. This is a highly inter-disciplinary endeavor that requires expertise in MIR as well as cognitive neuroscience and psychology. The OpenMIIR initiative strives to foster collaborations between these fields to advance the state of the art in MIIR. As a first step, electroencephalography (EEG) recordings of music perception and imagination have been made publicly available, enabling MIR researchers to easily test and adapt their existing approaches for music analysis, like fingerprinting, beat tracking or tempo estimation, on this new kind of data. This paper reports on first results of MIIR experiments using these OpenMIIR datasets and points out how these findings could drive new research in cognitive neuroscience.
We present a setup combining a liquid flatjet sample delivery and a MHz laser system for time-resolved soft X-ray absorption measurements of liquid samples at the high brilliance undulator beamline UE52-SGM at Bessy II yielding unprecedented statistics in this spectral range. We demonstrate that the efficient detection of transient absorption changes in transmission mode enables the identification of photoexcited species in dilute samples. With iron(II)-trisbipyridine in aqueous solution as a benchmark system, we present absorption measurements at various edges in the soft X-ray regime. In combination with the wavelength tunability of the laser system, the set-up opens up opportunities to study the photochemistry of many systems at low concentrations, relevant to materials sciences, chemistry, and biology.
We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged measurables for geometric Brownian motion, underlying the famed Black–Scholes–Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.
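A minimal sketch of the central observable, the time averaged MSD, applied to a simulated log-price trajectory (drift omitted; parameters invented): for Brownian-type dynamics, as in the log price under geometric Brownian motion, it grows linearly with the lag time.

```python
import numpy as np

def time_averaged_msd(x, lag):
    # Time averaged mean squared displacement at a given lag:
    # mean over t of [x(t + lag) - x(t)]^2 along a single trajectory.
    x = np.asarray(x, dtype=float)
    disp = x[lag:] - x[:-lag]
    return np.mean(disp ** 2)

# Log price under geometric Brownian motion (Black-Scholes-Merton model) is
# Brownian motion; drift is omitted and parameters are invented.
rng = np.random.default_rng(3)
sigma, n = 0.01, 100_000
log_price = np.cumsum(rng.normal(0.0, sigma, n))

msd_10 = time_averaged_msd(log_price, 10)
msd_100 = time_averaged_msd(log_price, 100)
print(msd_100 / msd_10)  # close to 10: linear scaling in the lag
```

The ageing and delay time variants restrict or shift the averaging window along the trajectory; the same displacement statistic is then computed over varying fractions of the series.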
In October 2016, following a campaign led by Labour Peer Lord Alfred Dubs, the first child asylum-seekers allowed entry to the UK under new legislation (the ‘Dubs amendment’) arrived in England. Their arrival was captured by a heavy media presence, and very quickly doubts were raised by right-wing tabloids and politicians about their age. In this article, I explore the arguments underpinning the Dubs campaign and the media coverage of the children’s arrival as a starting point for interrogating representational practices around children who seek asylum. I illustrate how the campaign was premised on a universal politics of childhood that inadvertently laid down the terms on which these children would be given protection, namely their innocence. The universality of childhood fuels public sympathy for child asylum-seekers, underlies the ‘child first, migrant second’ approach advocated by humanitarian organisations, and it was a key argument in the ‘Dubs amendment’. Yet the campaign highlights how representations of child asylum-seekers rely on codes that operate to identify ‘unchildlike’ children. As I show, in the context of the criminalisation of undocumented migrants, childhood is no longer a stable category which guarantees protection, but is subject to scrutiny and suspicion and can, ultimately, be disproved.
Thermal cis-trans isomerization of azobenzene studied by path sampling and QM/MM stochastic dynamics
(2017)
Azobenzene-based molecular photoswitches have been applied extensively to biological systems, including the photo-control of peptides, lipids and nucleic acids. The isomerization between the stable trans and the metastable cis state of the azo moieties leads to pronounced changes in shape and other physico-chemical properties of the molecules into which they are incorporated. Fast switching can be induced via transitions to excited electronic states and fine-tuned by a large number of different substituents at the phenyl rings. A rational design of tailor-made azo groups, however, also requires control of their stability in the dark, i.e. the half-life of the cis isomer. In computational chemistry, thermally activated barrier crossing on the ground-state Born-Oppenheimer surface can be estimated efficiently with Eyring’s transition state theory (TST) approach; the growing complexity of the azo moiety and a rather heterogeneous environment, however, may render some of the underlying simplifying assumptions problematic.
In this dissertation, a computational approach is established to remove two restrictions at once: the environment is modeled explicitly by employing a quantum mechanical/molecular mechanics (QM/MM) description, and the isomerization process is tracked by analyzing complete dynamical pathways between the stable states. The suitability of this description is validated using two test systems, pure azobenzene and a derivative with electron-donating and electron-withdrawing substituents (“push-pull” azobenzene). Each system is studied in the gas phase, in toluene and in polar DMSO solvent. The azo molecules are treated at the QM level using a very recent, semi-empirical approximation to density functional theory (the density functional tight binding approximation). Reactive pathways are sampled by implementing a version of the so-called transition path sampling method (TPS), without introducing any bias into the system dynamics. By analyzing ensembles of reactive trajectories, the change in isomerization pathway from linear inversion to rotation in going from apolar to polar solvent, predicted by the TST approach, could be verified for the push-pull derivative. At the same time, the mere presence of explicit solvation is seen to broaden the distribution of isomerization pathways, an effect TST cannot account for.
Using likelihood maximization based on the TPS shooting history, an improved reaction coordinate was identified as a sine-cosine combination of the central bend angles and the rotation dihedral, r (ω,α,α′). A computational van’t Hoff analysis of the activation entropies was performed to gain further insight into the differential role of the solvent for the unsubstituted and the push-pull azobenzene. In agreement with experiment, it yielded positive activation entropies for azobenzene in DMSO solvent and negative ones for the push-pull derivative, reflecting the induced ordering of solvent around the more dipolar transition state associated with the latter compound. In addition, dynamically corrected rate constants were evaluated using the reactive flux approach, where an increase comparable to the experimental one was observed in the high-polarity medium for both azobenzene derivatives.
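The van’t Hoff-type extraction of activation parameters mentioned above can be sketched as a linear fit of ln(k/T) against 1/T (the Eyring plot). The rate constants below are synthetic, generated from assumed activation parameters that the fit then recovers; they are not values from the thesis.

```python
import numpy as np

# Physical constants (SI units)
kB = 1.380649e-23   # Boltzmann constant, J/K
h = 6.62607015e-34  # Planck constant, J*s
R = 8.314462618     # gas constant, J/(mol*K)

def eyring_fit(temperatures, rate_constants):
    # Linearized Eyring plot: ln(k/T) = ln(kB/h) + dS/R - dH/(R*T).
    # The slope gives the activation enthalpy, the intercept the entropy.
    T = np.asarray(temperatures, dtype=float)
    k = np.asarray(rate_constants, dtype=float)
    slope, intercept = np.polyfit(1.0 / T, np.log(k / T), 1)
    dH = -slope * R                          # activation enthalpy, J/mol
    dS = (intercept - np.log(kB / h)) * R    # activation entropy, J/(mol*K)
    return dH, dS

# Synthetic rate constants from assumed parameters (dH = 90 kJ/mol,
# dS = +20 J/(mol*K)); the fit should recover them.
T = np.array([280.0, 300.0, 320.0, 340.0])
k = (kB * T / h) * np.exp(20.0 / R) * np.exp(-90000.0 / (R * T))
dH, dS = eyring_fit(T, k)
print(round(dH / 1000), round(dS))  # 90 20
```

The sign of the fitted dS distinguishes the two cases discussed above: positive activation entropy (solvent disordering at the transition state) versus negative (solvent ordering around a more dipolar transition state).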
Reviewed work
Theresa Biberauer and George Walkden (eds.): Syntax over Time: Lexical, Morphological, and Information-Structural Interactions. Oxford: Oxford University Press, 2015, 418 pp.
Introduction: Peanut allergy is among the most common food allergies in childhood. Even small amounts of peanut can trigger severe allergic reactions, and peanut is the most frequent trigger of life-threatening anaphylaxis in children and adolescents. In contrast to other early-childhood food allergies, patients with peanut allergy only rarely develop natural tolerance. Causal treatment options for peanut-allergic patients, in particular oral immunotherapy (OIT), have therefore been under investigation for several years, and first smaller studies of OIT for peanut allergy showed promising results. In a randomised, double-blind, placebo-controlled study with a larger sample, the present work evaluates the clinical efficacy and safety of this treatment option in children with peanut allergy in more detail. In addition, immunological changes as well as quality of life and treatment burden under OIT are examined.
Methods: Children aged 3-18 years with an IgE-mediated peanut allergy were enrolled. Before the start of OIT, an oral food challenge with peanut was performed. Patients were randomised 1:1 to the verum or placebo group. Dosing started at 2-120 mg peanut or placebo per day, depending on the reaction dose at the oral challenge. The daily OIT dose was first increased slowly every two weeks over about 14 months up to a maintenance dose of at least 500 mg peanut (= 125 mg peanut protein, ~1 small peanut) or placebo. The maximum dose reached was then administered daily at home for two months, followed by another oral peanut challenge. The primary endpoint was the number of patients in the verum and placebo groups who tolerated ≥1200 mg peanut at the challenge after OIT ("partial desensitisation"). Both before and after OIT, a skin prick test with peanut was performed and peanut-specific IgE and IgG4 were determined in serum. In addition, basophil activation and the release of T-cell-specific cytokines after stimulation with peanut were measured in vitro. Questionnaires recorded quality of life before and after OIT as well as treatment burden during OIT.
Results: 62 patients were enrolled and randomised. After about 16 months of OIT, 74.2% (23/31) of patients in the verum group but only 16.1% (5/31) in the placebo group showed "partial desensitisation" to peanut (p<0.001). At the challenge after OIT, patients in the verum group tolerated a median of 4000 mg peanut (~8 small peanuts), whereas patients in the placebo group tolerated only 80 mg (~1/6 of a small peanut) (p<0.001). Almost half of the verum group (41.9%) tolerated the maximum dose of 18 g peanut at the challenge ("complete desensitisation"). The safety profile with regard to objective side effects was comparable between verum and placebo OIT. However, subjective side effects such as oral itching or abdominal pain occurred significantly more often under verum OIT than under placebo (3.7% of verum doses vs. 0.5% of placebo doses, p<0.001). Three children in the verum group (9.7%) and seven in the placebo group (22.6%) discontinued the study early, two patients in each group because of side effects. In contrast to placebo, significant immunological changes were observed under verum OIT: the peanut-specific wheal diameter in the skin prick test decreased, serum peanut-specific IgG4 increased, and peanut-specific cytokine secretion, in particular of the Th2-specific cytokines IL-4 and IL-5, was reduced. Peanut-specific IgE and peanut-specific basophil activation, by contrast, did not change under OIT. Quality of life improved significantly after OIT in children of the verum group, but not in children of the placebo group. During OIT, the therapy was rated positively by almost all children (82%) and mothers (82%) (= low treatment burden).
Discussion: Peanut OIT led to desensitisation and a markedly increased reaction threshold in the majority of peanut-allergic children. The children are thus protected against accidental reactions to peanut in everyday life, which clearly improves their quality of life. Under the controlled study conditions, an acceptable safety profile with predominantly mild symptoms was observed. Clinical desensitisation was accompanied by changes at the immunological level. However, long-term studies of peanut OIT are needed to examine clinical and immunological efficacy with regard to a possible long-term induction of oral tolerance, as well as safety under long-term OIT, before this treatment concept can be transferred into practice.
Classical physics and chemistry distinguish three types of chemical bonding: the covalent bond, the ionic bond and the metallic bond. Molecules, in contrast, are held together by weak intermolecular interactions; despite the weakness of these forces they are less well understood, yet no less important. In forward-looking fields such as nanotechnology, supramolecular chemistry and biochemistry, they are of elementary importance.
In order to describe, predict and understand weak intermolecular interactions, they must first be captured theoretically. This involves a range of quantum-chemical methods, which in this work are presented, compared, further developed and finally applied to exemplary problems in chemistry. Building on a hierarchy of methods of differing accuracy, they are employed, elaborated and combined for these purposes.
What is computed is the electronic structure, that is, the distribution and energy of the electrons that essentially hold the atoms together. Since the inaccuracies in the description of the electronic structure depend on the methods used, these effects can be investigated in detail, described, developed further on this basis, and subsequently tested on various model systems. The speed of the calculations on modern computers is an essential component to be taken into account, since the accuracy generally grows exponentially with the computing time, which must therefore eventually reach the limits of what is feasible.
The most accurate of the methods employed is based on coupled-cluster theory, which enables very good predictions. With it, so-called spectroscopic accuracy, with deviations of only a few wavenumbers, is achieved, as comparisons with experimental data show. One way of approximating highly accurate methods is based on density functional theory: here the "Boese-Martin for Kinetics" (BMK) functional was developed, whose functional form reappears in many density functionals published after 2010.
With the help of the more accurate methods, semi-empirical force fields for describing intermolecular interactions can finally be parameterized for individual systems; these require far less computing time than methods that rest on an accurate calculation of the electronic structure of molecules.
For larger systems, different methods can also be combined. In this context, embedding schemes were refined and proposed together with new methodological approaches. They make use both of symmetry-adapted perturbation theory and of the quantum-chemical embedding of fragments into larger, quantum-chemically computed systems.
The development of new methods derives its value essentially from their application:
In this work, hydrogen bonds were considered first. They are among the stronger intermolecular interactions and remain a challenge. Van der Waals interactions, in contrast, are comparatively easy to describe with force fields. For this reason, many of the methods in use today perform comparatively poorly for systems dominated by hydrogen bonds.
This is followed by a study of molecular aggregates and of the effects of intermolecular interactions on the vibrational frequencies of molecules. Here, the analysis also goes beyond the so-called rigid-rotor/harmonic-oscillator approximation.
A far-reaching application concerns adsorbates, in this case molecules on ionic and metallic surfaces. They can be treated with methods similar to those used for intermolecular interactions and can be described very accurately with special embedding schemes. The results of these theoretical calculations stimulated a re-evaluation of the previously known experimental results.
Molecular crystals are an extremely important field of research. They are held together by weak interactions ranging from van der Waals forces to hydrogen bonds. Here too, newly developed methods were employed, which represent an interesting alternative that is at least as accurate as the currently established methods.
The developed methods, as well as their applications, are thus extremely diverse. The electronic-structure calculations treated range from so-called post-Hartree-Fock methods, through the use of density functional theory, to semi-empirical force fields and their combinations. The applications extend from single molecules in the gas phase, through adsorption on surfaces, to the molecular solid state.
This article considers Isabella Bird’s representation of medicine in Unbeaten Tracks in Japan (1880) and Journeys in Persia and Kurdistan (1891), the two books in which she engages most extensively with both local (Chinese/Islamic) and Western medical science and practice. I explore how Bird uses medicine to assert her narrative authority and define her travelling persona in opposition to local medical practitioners. I argue that her ambivalence and the unease she frequently expresses concerning medical practice (expressed particularly in her later adoption of the Persian appellation “Feringhi Hakīm” [European physician] to describe her work) serve as a means for her to negotiate the colonial and gendered pressures on Victorian medicine. While in Japan this attitude works to destabilise her hierarchical understanding of science and results in some acknowledgement of traditional Japanese medical practice, in Persia it functions more to disguise her increasing collusion with overt British colonial ambitions.
White adipose tissue (WAT) is actively involved in the regulation of whole-body energy homeostasis via storage/release of lipids and adipokine secretion. Current research links WAT dysfunction to the development of metabolic syndrome (MetS) and type 2 diabetes (T2D). The expansion of WAT during oversupply of nutrients prevents ectopic fat accumulation and requires proper preadipocyte-to-adipocyte differentiation. An assumed link between excess levels of reactive oxygen species (ROS), WAT dysfunction and T2D has been discussed controversially. While oxidative stress conditions have conclusively been detected in WAT of T2D patients and related animal models, clinical trials with antioxidants failed to prevent T2D or to improve glucose homeostasis. Furthermore, animal studies yielded inconsistent results regarding the role of oxidative stress in the development of diabetes. Here, we discuss the contribution of ROS to the (patho)physiology of adipocyte function and differentiation, with particular emphasis on sources and nutritional modulators of adipocyte ROS and their functions in signaling mechanisms controlling adipogenesis and functions of mature fat cells. We propose a concept of ROS balance that is required for normal functioning of WAT. We explain how both excessive and diminished levels of ROS, e.g. resulting from over-supplementation with antioxidants, contribute to WAT dysfunction and subsequently insulin resistance.
Meaning-making in the brain has become one of the most intensely discussed topics in cognitive science. Traditional theories of cognition that emphasize abstract symbol manipulation often face a dead end: the symbol grounding problem. The embodiment idea tries to overcome this barrier by assuming that the mind is grounded in sensorimotor experiences. A recent surge in behavioral and brain-imaging studies has therefore focused on the role of the motor cortex in language processing. There is convincing evidence that the processing of concrete, action-related words relies on sensorimotor activation. Abstract concepts, however, still pose a distinct challenge for embodied theories of cognition. Fully embodied abstraction mechanisms have been formulated, but sensorimotor activation alone seems unlikely to close the explanatory gap. In this respect, the ideas of integration areas, such as convergence zones or the ‘hub and spoke’ model, not only appear to be the most promising candidates to account for the discrepancies between concrete and abstract concepts but could also help to reunite the field of cognitive science. The current review identifies milestones in cognitive science research and recent achievements that highlight fundamental challenges, key questions and directions for future research.
Human development has far-reaching impacts on the surface of the globe. The transformation of natural land cover occurs in different forms, and urban growth is one of the most prominent transformative processes. We analyze global land cover data and extract cities as defined by maximally connected urban clusters. The analysis of the city size distribution for all cities on the globe confirms Zipf’s law. Moreover, by investigating the percolation properties of the clustering of urban areas we assess the closeness to criticality for various countries. At the critical thresholds, the urban land cover of the countries undergoes a transition from separated clusters to a gigantic component on the country scale. We study the Zipf exponents as a function of the closeness to percolation and find a systematic dependence, which could be the reason for the deviating exponents reported in the literature. Moreover, we investigate the average size of the clusters as a function of the proximity to percolation and find country-specific behavior. By relating the standard deviation and the average of cluster sizes, analogous to Taylor’s law, we suggest an alternative way to identify the percolation transition. We calculate spatial correlations of the urban land cover and find long-range correlations. Finally, by relating the areas of cities with population figures we address the global aspect of the allometry of cities, finding an exponent δ ≈ 0.85, i.e., large cities have lower densities.
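The rank-size analysis behind Zipf's law can be illustrated with a minimal sketch. This is not the authors' actual pipeline (which works on maximally connected urban clusters from global land cover data); the function name and the synthetic Pareto-distributed "city sizes" below are purely illustrative. The Zipf exponent is estimated as the negative slope of the log-log rank-size relation.

```python
import numpy as np

def zipf_exponent(sizes):
    """Estimate the Zipf (rank-size) exponent by least squares on the
    log-log rank-size relation: log(rank) ~ -alpha * log(size)."""
    sizes = np.sort(np.asarray(sizes, dtype=float))[::-1]  # descending order
    ranks = np.arange(1, len(sizes) + 1)
    slope, _intercept = np.polyfit(np.log(sizes), np.log(ranks), 1)
    return -slope

# Synthetic "city sizes" drawn from a Pareto distribution with tail exponent 1,
# for which the rank-size relation follows Zipf's law with exponent ~1.
rng = np.random.default_rng(42)
sizes = (1.0 - rng.random(10_000)) ** (-1.0)
print(round(zipf_exponent(sizes), 2))  # close to 1 for Zipf's law
```

A plain least-squares fit on the rank-size plot is the simplest estimator; more robust alternatives (e.g. maximum likelihood on the tail) exist but are beyond the scope of this sketch.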
The rule of law is the cornerstone of the international legal system. This paper shows, through an analysis of intergovernmental instruments, statements made by representatives of States, and negotiation records, that the rule of law at the United Nations has become increasingly contested in recent years. More precisely, the argument builds on the process of integrating the notion of the rule of law into the Sustainable Development Goals, adopted in September 2015 in the document Transforming our world: the 2030 Agenda for Sustainable Development. The main sections set out the background of the rule of law debate at the UN and the elements of the rule of law at the goal and target levels of the 2030 Agenda, especially in SDG 16, and evaluate whether the rule of law in this context may be viewed as a normative and universal foundation of international law. The paper concludes, with reflections drawn from the process leading up to the 2030 Agenda and the final outcome document, that the rule of law, or at least strong and precise formulations of the concept, may be in decline in institutional and normative settings. This can be perceived as symptomatic of a broader crisis of the international legal order.
Reaching the Sustainable Development Goals requires a fundamental socio-economic transformation accompanied by substantial investment in low-carbon infrastructure. Such a sustainability transition represents a non-marginal change, driven by behavioral factors and systemic interactions. However, typical economic models used to assess a sustainability transition focus on marginal changes around a local optimum, which, by construction, lead to negative effects. Thus, these models do not allow evaluating a sustainability transition that might have substantial positive effects. This paper examines which mechanisms need to be included in a standard computable general equilibrium model to overcome these limitations and to give a more comprehensive view of the effects of climate change mitigation. Simulation results show that, given an ambitious greenhouse gas emission constraint and a price of carbon, positive economic effects are possible if (1) technical progress arises (partly) endogenously within the model and (2) a policy intervention triggering an increase in investment is introduced. Additionally, if (3) the investment behavior of firms is influenced by their sales expectations, the effects are amplified. The results provide suggestions for policy-makers, because the outcome indicates that investment-oriented climate policies can lead to more desirable outcomes in economic, social and environmental terms.
The role of serum amyloid A and sphingosine-1-phosphate on high-density lipoprotein functionality
(2017)
The high-density lipoprotein (HDL) is one of the most important endogenous cardiovascular protective markers. HDL is an attractive target in the search for new pharmaceutical therapies and in the prevention of cardiovascular events. Some of HDL’s anti-atherogenic properties are related to the signaling molecule sphingosine-1-phosphate (S1P), which plays an important role in vascular homeostasis. In certain patient populations, however, the picture is more complicated: HDL’s protective potency is significantly reduced under pathologic conditions, and HDL might even serve as a proatherogenic particle. Especially under uremic conditions, the compounds associated with HDL change: S1P is reduced, and acute-phase proteins such as serum amyloid A (SAA) are found to be elevated in HDL. The conversion of HDL during inflammation changes its functional properties. High amounts of SAA are associated with the occurrence of cardiovascular diseases such as atherosclerosis. SAA has potent pro-atherogenic properties, which may have an impact on HDL’s biological functions, including cholesterol efflux capacity and antioxidative and anti-inflammatory activities. This review focuses on two molecules that affect the functionality of HDL. The balance between functional and dysfunctional HDL is disturbed after the loss of the protective sphingolipid molecule S1P and the accumulation of the acute-phase protein SAA. This review also summarizes the biological activities of lipid-free and lipid-bound SAA and its impact on HDL function.
The role of alcohol and victim sexual interest in Spanish students' perceptions of sexual assault
(2017)
Two studies investigated the effects of information related to rape myths on Spanish college students’ perceptions of sexual assault. In Study 1, 92 participants read a vignette about a nonconsensual sexual encounter and rated whether it was a sexual assault and how much the woman was to blame. In the scenario, the man either used physical force or offered alcohol to the woman to overcome her resistance. Rape myth acceptance (RMA) was measured as an individual difference variable. Participants were more convinced that the incident was a sexual assault and blamed the woman less when the man had used force rather than offering her alcohol. In Study 2, 164 college students read a scenario in which the woman rejected a man’s sexual advances after having either accepted or turned down his offer of alcohol. In addition, the woman was either portrayed as being sexually attracted to him or there was no mention of her sexual interest. Participants’ RMA was again included. High RMA participants blamed the victim more than low RMA participants and were less certain that the incident was a sexual assault, especially when the victim had accepted alcohol and was described as being sexually attracted to the man. The findings are discussed in terms of their implications for the prevention and legal prosecution of sexual assault.
Over the last few decades, the methodology for the identification of customary international law (CIL) has been changing. Both elements of CIL – practice and opinio juris – have assumed novel and broader forms, as noted in the Reports of the Special Rapporteur of the International Law Commission (2013, 2014, 2015, 2016). This paper discusses these Reports and the draft conclusions, as well as the reactions by States in the Sixth Committee of the United Nations General Assembly (UNGA), highlighting the areas of consensus and contestation. This ties in with an analysis of the main doctrinal positions, with special attention given to the two elements of CIL and the role of UNGA resolutions. The underlying motivation is to assess the real or perceived crisis of CIL, and the author develops the broader argument that, in order to retain unity within international law, the internal limits of CIL must be carefully asserted.
Background: The relative dose response (RDR) test, which quantifies the increase in serum retinol after vitamin A administration, is a qualitative measure of liver vitamin A stores. Particularly in preterm infants, the feasibility of the RDR test involving blood is critically dependent on small sample volumes. Objectives: This study aimed to assess whether the RDR calculated with retinol-binding protein 4 (RBP4) might be a substitute for the classical retinol-based RDR test for assessing vitamin A status in very preterm infants. Methods: This study included preterm infants with a birth weight below 1,500 g (n = 63, median birth weight 985 g, median gestational age 27.4 weeks) who were treated with 5,000 IU retinyl palmitate intramuscularly 3 times a week for 4 weeks. On day 3 (first vitamin A injection) and day 28 of life (last vitamin A injection), the RDR was calculated and compared using serum retinol and RBP4 concentrations. Results: The concentrations of retinol (p < 0.001) and RBP4 (p < 0.01) increased significantly from day 3 to day 28. On day 3, the median (IQR) retinol-RDR was 27% (8.4-42.5) and the median RBP4-RDR was 8.4% (-3.4 to 27.9), compared to 7.5% (-10.6 to 20.8) and -0.61% (-19.7 to 15.3) on day 28. The results for retinol-RDR and RBP4-RDR revealed no significant correlation. The agreement between retinol-RDR and RBP4-RDR was poor (day 3: Cohen's κ = 0.12; day 28: Cohen's κ = 0.18). Conclusion: The RDR test based on circulating RBP4 is unlikely to reflect the hepatic vitamin A status in preterm infants.
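The RDR referred to in the abstract is commonly computed as the increase of the serum marker after vitamin A administration, relative to the post-dose value. A minimal sketch of this calculation follows; the function name and the example concentrations are hypothetical, and the same formula is applied to retinol and to RBP4 in the study.

```python
def relative_dose_response(baseline, post_dose):
    """Relative dose response (RDR) in percent: the rise of a serum marker
    (retinol or RBP4) after vitamin A administration, relative to the
    post-dose value. High RDR values suggest depleted liver vitamin A stores."""
    return (post_dose - baseline) / post_dose * 100.0

# Hypothetical example: serum retinol rises from 0.60 to 0.80 (arbitrary units)
print(round(relative_dose_response(0.60, 0.80), 1))  # → 25.0
```

Note that the RDR can be negative when the post-dose value falls below baseline, which is consistent with the negative medians reported in the abstract.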
The predictions of two contrasting approaches to the acquisition of transitive relative clauses were tested within the same groups of German-speaking participants aged from 3 to 5 years old. The input frequency approach predicts that object relative clauses with inanimate heads (e.g., the pullover that the man is scratching) are comprehended earlier and more accurately than those with an animate head (e.g., the man that the boy is scratching). In contrast, the structural intervention approach predicts that object relative clauses with two full NP arguments mismatching in number (e.g., the man that the boys are scratching) are comprehended earlier and more accurately than those with number-matching NPs (e.g., the man that the boy is scratching). These approaches were tested in two steps. First, we ran a corpus analysis to ensure that object relative clauses with number-mismatching NPs are not more frequent than object relative clauses with number-matching NPs in child-directed speech. Next, the comprehension of these structures was tested experimentally in 3-, 4-, and 5-year-olds respectively by means of a color naming task. By comparing the predictions of the two approaches within the same participant groups, we were able to uncover that the effects predicted by the input frequency and by the structural intervention approaches co-exist and that they both influence the performance of children on transitive relative clauses, but in a manner that is modulated by age. These results reveal a sensitivity to the animacy mismatch already in 3-year-olds and show that animacy is initially deployed more reliably than number to interpret relative clauses correctly. In all age groups, the animacy mismatch appears to explain children's performance, showing that the comprehension of frequent object relative clauses is enhanced compared to the other conditions.
Starting with 4-year-olds but especially in 5-year-olds, the number mismatch supported comprehension—a facilitation that is unlikely to be driven by input frequency. Once children fine-tune their sensitivity to verb agreement information around the age of four, they are also able to deploy number marking to overcome the intervention effects. This study highlights the importance of testing experimentally contrasting theoretical approaches in order to characterize the multifaceted, developmental nature of language acquisition.
A particular form of social pain is invalidation. This study therefore (a) investigates whether patients with chronic low back pain experience invalidation, (b) examines whether it has an influence on their pain, and (c) explores whether various social sources (e.g. partner and work) influence physical pain differentially. A total of 92 patients completed questionnaires; for the analysis, Pearson's correlation coefficients and hierarchical linear regression analyses were computed. They indicated a significant association between discounting and disability due to pain (β = .29, p < .05). In particular, discounting by the partner was linked to higher disability (β = .28, p < .05).
The classical Navier-Stokes equations of hydrodynamics are usually written in terms of vector analysis. More promising is the formulation of these equations in the language of differential forms of degree one. In this way the study of the Navier-Stokes equations includes the analysis of the de Rham complex. In particular, the Hodge theory for the de Rham complex enables one to eliminate the pressure from the equations. The Navier-Stokes equations constitute a parabolic system with a nonlinear term which makes sense only for one-forms. A simpler model of the dynamics of incompressible viscous fluid is given by Burgers' equation. This work is aimed at the study of the invariant structure of the Navier-Stokes equations, which is closely related to the algebraic structure of the de Rham complex at step 1. To this end we introduce Navier-Stokes equations related to any elliptic quasicomplex of first order differential operators. These equations are quite similar to the classical Navier-Stokes equations, including generalised velocity and pressure vectors. Elimination of the pressure from the generalised Navier-Stokes equations gives a good motivation for the study of the Neumann problem after Spencer for elliptic quasicomplexes. Such a study is also included in the work. We start this work with a discussion of the Lamé equations within the context of elliptic quasicomplexes on compact manifolds with boundary. The non-stationary Lamé equations form a hyperbolic system. However, the study of the first mixed problem for them provides good experience for attacking the linearised Navier-Stokes equations. On this basis we describe a class of non-linear perturbations of the Navier-Stokes equations for which the solvability results still hold.
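For reference, the classical incompressible Navier-Stokes system that the thesis recasts in the language of differential one-forms reads, in standard vector notation:

```latex
\partial_t u + (u \cdot \nabla)\, u - \mu \, \Delta u + \nabla p = f,
\qquad
\nabla \cdot u = 0,
```

where $u$ is the velocity field, $p$ the pressure, $\mu$ the viscosity and $f$ an external force. Identifying $u$ with a one-form makes the nonlinear term meaningful precisely at step 1 of the de Rham complex, and the Hodge decomposition then allows the pressure gradient to be projected out.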
The Kenya rift revisited
(2017)
We present three-dimensional (3-D) models that describe the present-day thermal and rheological state of the lithosphere of the greater Kenya rift region aiming at a better understanding of the rift evolution, with a particular focus on plume-lithosphere interactions. The key methodology applied is the 3-D integration of diverse geological and geophysical observations using gravity modelling. Accordingly, the resulting lithospheric-scale 3-D density model is consistent with (i) reviewed descriptions of lithological variations in the sedimentary and volcanic cover, (ii) known trends in crust and mantle seismic velocities as revealed by seismic and seismological data and (iii) the observed gravity field. This data-based model is the first to image a 3-D density configuration of the crystalline crust for the entire region of Kenya and northern Tanzania. An upper and a basal crustal layer are differentiated, each composed of several domains of different average densities. We interpret these domains to trace back to the Precambrian terrane amalgamation associated with the East African Orogeny and to magmatic processes during Mesozoic and Cenozoic rifting phases. In combination with seismic velocities, the densities of these crustal domains indicate compositional differences. The derived lithological trends have been used to parameterise steady-state thermal and rheological models. These models indicate that crustal and mantle temperatures decrease from the Kenya rift in the west to eastern Kenya, while the integrated strength of the lithosphere increases. Thereby, the detailed strength configuration appears strongly controlled by the complex inherited crustal structure, which may have been decisive for the onset, localisation and propagation of rifting.
The paper looks at community interests in international law from the perspective of the International Law Commission. As the topics of the Commission are diverse, the outcome of its work is often seen as providing a sense of direction regarding general aspects of international law. After defining what he understands by “community interests”, the author looks at both secondary and primary rules of international law, as they have been articulated by the Commission, as well as their relevance for the recognition and implementation of community interests. The picture which emerges only partly fits the widespread narrative of “from self-interest to community interest”. Whereas the Commission has recognized, or developed, certain primary rules which more fully articulate community interests, it has been reluctant to reformulate secondary rules of international law, with the exception of jus cogens. The Commission has more recently rather insisted that the traditional State-consent-oriented secondary rules concerning the formation of customary international law and regarding the interpretation of treaties continue to be valid in the face of other actors and forms of action which push towards the recognition of more and thicker community interests.
The El Nino-Southern Oscillation (ENSO) is the main driver of the interannual variability in eastern African rainfall, with a significant impact on vegetation and agriculture and dire consequences for food and social security. In this study, we identify and quantify the ENSO contribution to the eastern African rainfall variability to forecast future eastern African vegetation response to rainfall variability related to a predicted intensified ENSO. To differentiate the vegetation variability due to ENSO, we removed the ENSO signal from the climate data using empirical orthogonal teleconnection (EOT) analysis. Then, we simulated the ecosystem carbon and water fluxes under the historical climate without components related to ENSO teleconnections. We found ENSO-driven patterns in vegetation response and confirmed that EOT analysis can successfully produce coupled tropical Pacific sea surface temperature-eastern African rainfall teleconnection from observed datasets. We further simulated eastern African vegetation response under future climate change as it is projected by climate models and under future climate change combined with a predicted increased ENSO intensity. Our EOT analysis highlights that climate simulations are still not good at capturing rainfall variability due to ENSO, and as we show here the future vegetation would be different from what is simulated under these climate model outputs lacking accurate ENSO contribution. We simulated considerable differences in eastern African vegetation growth under the influence of an intensified ENSO regime which will bring further environmental stress to a region with a reduced capacity to adapt effects of global climate change and food security.
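The removal of the ENSO-congruent part of the rainfall signal can be caricatured by a one-predictor linear filter. The EOT analysis used in the study is more elaborate (it extracts coupled sea surface temperature-rainfall teleconnection patterns), so the following is only a hedged sketch with synthetic data; all names and numbers are illustrative.

```python
import numpy as np

def remove_index_signal(series, index):
    """Subtract the component of a time series linearly congruent with a
    climate index (e.g. an ENSO index) -- a one-predictor stand-in for
    EOT-based filtering of teleconnection signals."""
    index = (index - index.mean()) / index.std()   # standardize the index
    slope = np.dot(series - series.mean(), index) / len(index)  # regression slope
    return series - slope * index

rng = np.random.default_rng(0)
enso = rng.standard_normal(240)                   # synthetic monthly ENSO-like index
rain = 2.0 * enso + rng.standard_normal(240)      # rainfall with an ENSO signal
residual = remove_index_signal(rain, enso)
# The residual is uncorrelated with the index: the ENSO-congruent part is gone.
print(abs(np.corrcoef(residual, enso)[0, 1]) < 0.1)  # True
```

The residual series can then drive a vegetation or carbon-flux model to isolate the non-ENSO part of the variability, mirroring the logic of the simulation experiment described above.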
The right to privacy in the digital age generates new challenges for international jurisdiction. The following article deals with these challenges. It first defines the term privacy in general and presents the international legal framework. The whistleblower Edward Snowden initiated a major political discourse, and the article gives insights into its further development. In 2015, the Human Rights Council for the first time appointed a Special Rapporteur on the right to privacy. However, the discourse is not only taking place on the political level; civil society organizations also advocate more stringent regulations and the prosecution of violations of the right to privacy. Moreover, the importance of the technology sector becomes clear. Companies like Microsoft are increasingly taking responsibility for protecting digital media against unjustified data misuse, surveillance, collection and storage. But whereas the IT sector develops very quickly, legislative processes do so rather slowly. Lastly, the individual is also held to account: protecting oneself against data misuse remains to a great extent a matter of self-responsibility. Information on protective measures must therefore be clear and accessible to everyone.
West German anticommunism and the SED’s Westarbeit were to some extent interrelated. From the beginning, each German state had attempted to stabilise its own social system while trying to discredit its political opponent. The claim to sole representation and the refusal to acknowledge each other delineated governmental action on both sides. Anticommunism in West Germany re-developed under the conditions of the Cold War, which allowed it to become virtually the reason of state and to serve as a tool for the exclusion of KPD supporters. In its turn, the SED branded the West German state as ‘revanchist’ and instrumentalised its anticommunism to persecute and eliminate opponents within the GDR. Both phenomena had an integrative and an exclusionary element.
In littoral zones of lakes, multiple processes determine lake ecology and water quality. Lacustrine groundwater discharge (LGD), most frequently taking place in littoral zones, can transport or mobilize nutrients from the sediments and thus contribute significantly to lake eutrophication. Furthermore, lake littoral zones are the habitat of benthic primary producers, namely submerged macrophytes and periphyton, which play a key role in lake food webs and influence lake water quality. Groundwater-mediated nutrient-influx can potentially affect the asymmetric competition between submerged macrophytes and periphyton for light and nutrients. While rooted macrophytes have superior access to sediment nutrients, periphyton can negatively affect macrophytes by shading. LGD may thus facilitate periphyton production at the expense of macrophyte production, although studies on this hypothesized effect are missing.
The research presented in this thesis is aimed at determining how LGD influences periphyton, macrophytes, and the interactions between these benthic producers. Laboratory experiments were combined with field experiments and measurements in an oligo-mesotrophic hard water lake.
In the first study, a general concept was developed based on a literature review of the existing knowledge regarding the potential effects of LGD on nutrients and inorganic and organic carbon loads to lakes, and the effect of these loads on periphyton and macrophytes. The second study includes a field survey and experiment examining the effects of LGD on periphyton in an oligotrophic, stratified hard water lake (Lake Stechlin). This study shows that LGD, by mobilizing phosphorus from the sediments, significantly promotes epiphyton growth, especially at the end of the summer season when epilimnetic phosphorus concentrations are low. The third study focuses on the potential effects of LGD on submerged macrophytes in Lake Stechlin. This study revealed that LGD may have contributed to an observed change in macrophyte community composition and abundance in the shallow littoral areas of the lake. Finally, a laboratory experiment was conducted which mimicked the conditions of a seepage lake. Groundwater circulation was shown to mobilize nutrients from the sediments, which significantly promoted periphyton growth. Macrophyte growth was negatively affected at high periphyton biomasses, confirming the initial hypothesis.
More generally, this thesis shows that groundwater flowing into nutrient-limited lakes may import or mobilize nutrients. These nutrients first promote periphyton, and subsequently provoke radical changes in macrophyte populations before finally having a possible influence on the lake’s trophic state. Hence, the eutrophying effect of groundwater is delayed and, at moderate nutrient loading rates, partly dampened by benthic primary producers. The present research emphasizes the importance and complexity of littoral processes, and the need to further investigate and monitor the benthic environment. As present and future global changes can significantly affect LGD, the understanding of these complex interactions is required for the sustainable management of lake water quality.
The Cauchy problem for the linearised Einstein equation and the Goursat problem for wave equations
(2017)
In this thesis, we study two initial value problems arising in general relativity. The first is the Cauchy problem for the linearised Einstein equation on general globally hyperbolic spacetimes, with smooth and distributional initial data. We extend well-known results by showing that given a solution to the linearised constraint equations of arbitrary real Sobolev regularity, there is a globally defined solution, which is unique up to addition of gauge solutions. Two solutions are considered equivalent if they differ by a gauge solution. Our main result is that the equivalence class of solutions depends continuously on the corresponding equivalence class of initial data. We also solve the linearised constraint equations in certain cases and show that there exist arbitrarily irregular (non-gauge) solutions to the linearised Einstein equation on Minkowski spacetime and Kasner spacetime.
In the second part, we study the Goursat problem (the characteristic Cauchy problem) for wave equations. We specify initial data on a smooth compact Cauchy horizon, which is a lightlike hypersurface. This problem has not been studied much, since it is an initial value problem on a non-globally hyperbolic spacetime. Our main result is that given a smooth function on a non-empty, smooth, compact, totally geodesic and non-degenerate Cauchy horizon and a so-called admissible linear wave equation, there exists a unique solution that is defined on the globally hyperbolic region and restricts to the given function on the Cauchy horizon. Moreover, the solution depends continuously on the initial data. A linear wave equation is called admissible if its first-order part satisfies a certain condition on the Cauchy horizon, for example if it vanishes. Interestingly, both existence and uniqueness of solutions fail for general wave equations, as examples show. If we drop the non-degeneracy assumption, examples show that existence of solutions fails even for the simplest wave equation. The proof requires precise energy estimates for the wave equation close to the Cauchy horizon. In case the Ricci curvature vanishes on the Cauchy horizon, we show that the energy estimates are strong enough to prove local existence and uniqueness for a class of non-linear wave equations. Our results apply in particular to the Taub-NUT spacetime and the Misner spacetime. It has recently been shown that compact Cauchy horizons in spacetimes satisfying the null energy condition are necessarily smooth and totally geodesic. Our results therefore apply whenever the spacetime satisfies the null energy condition and the Cauchy horizon is compact and non-degenerate.
The Bruce effect revisited
(2017)
Pregnancy termination after encountering a strange male, the Bruce effect, is regarded as a counterstrategy of female mammals towards anticipated infanticide. While confirmed in caged rodent pairs, no verification of the Bruce effect existed from experimental field populations of small rodents. We suggest that the effect may be adaptive for breeding rodent females only under specific conditions related to populations with cyclically fluctuating densities. We investigated the occurrence of delayed birth dates after experimental turnover of the breeding male under different population compositions in bank voles (Myodes glareolus) in large outdoor enclosures: one-male–multiple-females (n = 6 populations/18 females), multiple-males–multiple-females (n = 15/45), and single-male–single-female (MF treatment, n = 74/74). Most delays were observed in the MF treatment after turnover. In parallel, we showed in a laboratory experiment (n = 205 females) that overwintered and primiparous females, the most abundant cohort during population lows in the increase phase of cyclic rodent populations, were more likely to delay births after turnover of the male than year-born and multiparous females. Taken together, our results suggest that the Bruce effect may be an adaptive breeding strategy for rodent females in cyclic populations specifically at low densities in the increase phase, when isolated, overwintered animals associate in MF pairs. During population lows, infanticide risk and inbreeding risk may then be higher than during population highs, while the fitness value of a litter in an increasing population is also higher. Therefore, the Bruce effect may be adaptive for females during annual population lows in the increase phases, even at the cost of delaying reproduction.
Recent research has called into question the current practice of estimating individual usual food intake in large-scale studies. In such studies, usual food intake has been defined as diet over the past year. The aim of this review is to summarise the concepts of dietary assessment methods that provide food intake data over this time period. A conceptual framework is given to help researchers understand the more recent developments to improve dietary assessment in large-scale prospective studies, and also to help spot the gaps that need to be addressed in future methodological research. The conceptual framework illustrates the current options for the assessment of an individual’s food consumption over 1 year. Ideally, a person’s food intake on each day of this year should be assessed. Due to participant burden and organisational and financial constraints, however, the options are limited to directly requesting the long-term average (e.g. food frequency questionnaires), selecting a few days with detailed food consumption measurements (e.g. 24-hour dietary recalls), or using snapshot techniques (e.g. barcode scanning of purchases). It seems necessary and important to further evaluate the performance of statistical modelling of individual usual food intake from all available sources. Future dietary assessment might profit from the growing prominence of internet and telecommunication technologies to further enhance the available data on food consumption for each study participant. Research is crucial to investigate the performance of innovative assessment tools. However, the self-reported nature of the data itself will always lead to bias.
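One common ingredient of such statistical modelling is shrinking a person's mean of a few 24-hour recalls toward the group mean, in proportion to the between- to within-person variance ratio. A minimal sketch of this best-linear-predictor idea follows; the variance components and intake values are invented for illustration, and real methods estimate them from the full study data with measurement-error models.

```python
# Minimal shrinkage estimator for usual intake from repeated 24-h recalls.
# All numbers are invented placeholders, not from any actual study.

def usual_intake(recalls, group_mean, var_between, var_within):
    """Shrink the person's observed mean toward the group mean; with few
    recalls (large within-person noise) the estimate leans on the group."""
    n = len(recalls)
    person_mean = sum(recalls) / n
    shrink = var_between / (var_between + var_within / n)
    return group_mean + shrink * (person_mean - group_mean)

# Two hypothetical 24-h recalls of vegetable intake in g/day:
est = usual_intake([180.0, 260.0], group_mean=200.0,
                   var_between=900.0, var_within=3600.0)
print(round(est, 1))  # → 206.7
```

With only two noisy recalls the shrinkage factor is 900/(900 + 1800) = 1/3, so only a third of the person's deviation from the group mean (here +20 g/day) is kept; more recall days would move the estimate closer to the person's own mean.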
This research was designed to adapt and investigate the psychometric properties of the Short Dark Triad measure (Jones and Paulhus, Assessment, 21(1), 28–41, 2014) in a German sample across four studies (total N = 1463); the measure evaluates three personality dimensions: narcissism, psychopathy, and Machiavellianism. The structure of the instrument was analysed with confirmatory factor analyses, which indicated that the three-factor structure had the best fit to the data. Next, the Short Dark Triad measure was evaluated in terms of construct, convergent and discriminant validity, internal consistency (≥ .72), and test-retest reliability over a 4-week period (≥ .73). Concurrent validity of the SD3 was supported by relating its subscales to measures of the Big Five concept, aggression, and self-esteem. We conclude that the Short Dark Triad instrument shows high cross-language replicability. The use of this short inventory for investigating the Dark Triad personality model in the German-language context is suggested.
Information on the contemporary in-situ stress state of the earth’s crust is essential for geotechnical applications and physics-based seismic hazard assessment. Yet, stress data records for a data point are incomplete, and their availability is usually not dense enough to allow conclusive statements. This demands a thorough examination of the in-situ stress field, which is achieved by 3D geomechanical-numerical models. However, the models’ spatial resolution is limited, and the resulting local stress state is subject to large uncertainties that confine the significance of the findings. In addition, temporal variations of the in-situ stress field are naturally or anthropogenically induced. In my thesis I address these challenges in three manuscripts that investigate (1) the current crustal stress field orientation, (2) the 3D geomechanical-numerical modelling of the in-situ stress state, and (3) the phenomenon of injection-induced temporal stress tensor rotations. In the first manuscript I present the first comprehensive stress data compilation of Iceland, with 495 data records. To this end, I analysed image logs from 57 boreholes in Iceland for indicators of the orientation of the maximum horizontal stress component. The study is the first stress survey from different kinds of stress indicators in a geologically very young and tectonically active area of an onshore spreading ridge. It reveals a distinct stress field with a depth-independent stress orientation even very close to the spreading centre. In the second manuscript I present a calibrated 3D geomechanical-numerical modelling approach to the in-situ stress state of the Bavarian Molasse Basin that investigates the regional (70 × 70 × 10 km³) and local (10 × 10 × 10 km³) stress state. To link these two models, I develop a multi-stage modelling approach that provides a reliable and efficient method to derive initial and boundary conditions for the smaller-scale model from the larger-scale model.
Furthermore, I quantify the uncertainties in the model results, which are inherent to geomechanical-numerical modelling in general and to the multi-stage approach in particular. I show that the significance of the model results is mainly reduced by the uncertainties in the material properties and the low number of stress magnitude data records available for calibration. In the third manuscript I investigate the phenomenon of injection-induced temporal stress tensor rotation and its controlling factors. I conduct a sensitivity study with a 3D generic thermo-hydro-mechanical model. I show that the key controls on the stress tensor rotation are the permeability, as the decisive factor, the injection rate, and the initial differential stress. Large rotations of the stress tensor are indicated in particular for enhanced geothermal systems with low permeability. According to these findings, the initial differential stress in a reservoir can be estimated provided the permeability is known and the angle of stress rotation is observed. I propose that stress tensor rotations can be a key factor for the potential of induced seismicity on pre-existing faults, because the reorientation of the stress field changes the optimal orientation of faults.
Foreign direct investment (FDI) plays an important role in the industrialisation process of developing and emerging countries. It can increase the industrial output of the target country and also act as a carrier of technological knowledge. New knowledge can flow to the recipient countries of FDI through spillover effects and technology transfers by foreign subsidiaries. This thesis addresses the questions of which mechanisms trigger spillover effects and technology transfers, and how developing and emerging countries can use this inflow of knowledge to accelerate their industrialisation process. To this end, a concept for promoting spillover effects is developed. Furthermore, a theoretical model is developed in which the technology transfer of foreign export platforms is, for the first time, examined as a function of the share of intermediate products sourced in the host country. The results of the theoretical model and of the developed concept are illustrated in case studies of Ireland and Malaysia.
Technological change confronts organisations with the challenge of putting innovations to productive use as quickly as possible in order to gain a competitive advantage. The success of a technology introduction depends strongly on creating acceptance among employees. Existing approaches such as diffusion theory (Rogers, 2003) or the Technology Acceptance Model (Davis, 1989; Venkatesh and Davis, 1996; Venkatesh and Davis, 2000; Venkatesh, Morris et al., 2003) address the organisational context only marginally. Their models are aimed at the adoption of a technology by free choice and in a market context. Furthermore, they do not examine the resistance to innovations that can arise under mandatory adoption. They are therefore only of limited use for studying technology introduction and acceptance formation processes in organisations.
The aim of this thesis is therefore to work out the specific influence of the organisational context on acceptance and usage behaviour. More concretely, the research question is what influence different organisation types have on the dynamics of acceptance and usage within organisations. To this end, existing models from acceptance research are extended and synthesised with organisation-specific attributes. The resulting model captures the dynamic development within the organisation and thus enables the observation of change. The functioning of the developed model is demonstrated in a simulation experiment, illustrating the effects of different organisational forms.
The model therefore combines two perspectives. The personal perspective understands acceptance as a cognitive-psychological process at the individual level, based on the calculations and decisions of individual persons. Central here are the contributions of diffusion theory (Rogers, 2003) and the Technology Acceptance Model in its various extensions and modifications (Davis, 1989; Venkatesh and Davis, 1996; Venkatesh and Davis, 2000; Venkatesh, Morris et al., 2003). Individual factors from different fit theories (Goodhue and Thompson, 1995; Floyd, 1986; Liu, Lee and Chen, 2011; Parkes, 2013) are used to enrich these models. Besides the development of a positive, supportive attitude, however, rejection of and open opposition to the innovation must also be taken into account (Patsiotis, Hughes and Webber, 2012).
The organisational perspective, in contrast, sees acceptance decisions as embedded in the social context of the organisation. Mutual influence is based on the observation of the environment and the internalisation of social pressure. In organisations, this is contrasted with intended influence in the form of governance. Both processes shape the acceptance and usage behaviour of employees. Starting from a systems-theoretical concept of organisation, different steering media (Luhmann, 1997; Fischer, 2009) are presented. These can be deployed intentionally by steering actors (change agents, management) to shape the acceptance and usage process through interventions.
The effect of these media differs across organisation types. To analyse different organisation types, the configurations according to Mintzberg (1979) are used. These are characterised by different coordination mechanisms, which in turn rest on the use of steering media.
The functioning and analytical possibilities of the developed model are demonstrated in a simulation experiment using the simulation platform AnyLogic. The range of validity is examined by means of a sensitivity analysis.
Specific patterns of usage and acceptance development can be demonstrated in the simulation. Acceptance is characterised by an initial decline followed by dampened growth. Usage, in contrast, is enforced quickly in the organisation and then remains at a stable level. Different effects were observed for the organisation types: the bureaucratic form of governance is suited to increasing usage but fails to raise acceptance, whereas organisations designed more for coordination by mutual adjustment increase acceptance but not usage. Moreover, the development of acceptance in this organisation type is very uncertain and shows a wide range of fluctuation.
In times of a rapidly changing and diverse energy market, carbon materials must be applicable to a variety of requirements. This calls for flexibly synthesisable carbon materials, preferably from inexpensive and sustainable carbon sources. It is not easy, however, to identify precursor compounds that are suited to different production processes and whose carbon products can at the same time be tuned in specific properties such as structure, nitrogen content, surface area and pore size. In this context, natural polyphenols, such as surplus tannins from wine production, can open a new world of highly functional and versatilely tunable carbon materials with high yields.
The main goal of this thesis was to synthesise and characterise new functional, tunable and scalable nanostructured carbon materials from tannins (in particular tannic acid) for different electrochemical purposes. This was made possible by different synthetic approaches, such as polymeric structure direction, ionothermal templating and soft templating. Instead of the widely used but carcinogenic crosslinking agent formaldehyde, urea and thiourea were chosen for the syntheses presented, which at the same time allowed the synthesised carbon materials to be doped in a variable manner.
In the first part of the work, the interactions, reactions and thermal behaviour of tannic acid and of mixtures of tannic acid with urea or thiourea were therefore investigated in order to gain important insights for the various carbon syntheses.
In a first carbon synthesis, using the polymeric structuring agent Pluronic P123, sustainable and dopable carbon particles with diameters in the nanometre range could be produced from tannic acid and urea. It was shown that, by modifying the various synthesis parameters, the carbon nanoparticles can be tuned with respect to their average particle diameter, BET surface area, composition, conductivity and chemical stability. This opened up the possibility of using these carbon particles as an alternative, sustainable carbon black material.
Furthermore, ionothermal templating made it possible to synthesise porous, doped and controllable carbon particles with high specific surface areas from the chosen precursor compounds, which are suitable for use in supercapacitors.
Building on these findings, porous, binder-free and structured carbon films exhibiting a spinodal structure could be synthesised by spin coating. By modifying the stock solution concentration, the rotation speed and the substrates used, the film thickness (100–1000 nm), the morphology and the total surface area could be influenced in a targeted manner. Extended electrochemical analysis also showed that the pore system of the porous carbon films is very well accessible.
Overall, various synthesis routes for carbon materials from tannins could thus be demonstrated, which can be structured and controlled in various ways and are suitable for diverse fields of application.
Orthogonal systems for heterologous protein expression as well as for the engineering of synthetic gene regulatory circuits in hosts like Saccharomyces cerevisiae depend on synthetic transcription factors (synTFs) and corresponding cis-regulatory binding sites. We have constructed and characterized a set of synTFs based on either transcription activator-like effectors or CRISPR/Cas9, and corresponding small synthetic promoters (synPs) with minimal sequence identity to the host’s endogenous promoters. The resulting collection of functional synTF/synP pairs confers very low background expression under uninduced conditions, while expression output upon induction of the various synTFs covers a wide range and reaches induction factors of up to 400. The broad spectrum of expression strengths that is achieved will be useful for various experimental setups, e.g., the transcriptional balancing of expression levels within heterologous pathways or the construction of artificial regulatory networks. Furthermore, our analyses reveal simple rules that enable the tuning of synTF expression output, thereby allowing easy modification of a given synTF/synP pair. This will make it easier for researchers to construct tailored transcriptional control systems.
In the present work, side-chain polystyrenes were synthesized and characterized for application in multilayer OLEDs fabricated by solution processing. Manufacturing optoelectronic devices by solution processing is expected to decrease fabrication costs significantly and to allow large-scale production of such devices.
This dissertation focuses on three series of materials, grouped into two material classes. The two classes differ in the type of charge transport they exhibit: either ambipolar transport or electron transport. All materials were applied in all-organic, solution-processed, green Ir-based devices.
In the first part, a series of ambipolar host materials was developed to transport both charge types, holes and electrons, and to serve especially as a matrix for green Ir-based emitters. It was possible to increase device efficiency by modulating the predominant charge transport type. This was achieved by modifying the electron-transport part of the molecules with more electron-deficient heterocycles or by extending the delocalization of the LUMO. Efficiencies of up to 28.9 cd/A were observed for all-organic, solution-processed three-layer devices.
In the second part, the suitability of triarylboranes and tetraphenylsilanes as electron transport materials was studied. High triplet energies of up to 2.95 eV were obtained by rational combination of both molecular structures. Although combining both elements had little effect on the materials' electron transport properties, high efficiencies of around 24 cd/A were obtained for the series in all-organic, solution-processed two-layer devices.
In the last part, benzene and pyridine were chosen as the series' electron-transport motif. By controlling the relative pyridine content (RPC), solubility in methanol was induced for polystyrenes with bulky side chains. Materials with RPC ≥ 0.5 could be deposited orthogonally from solution without damaging the underlying layers. To the best of our knowledge, this is the first time such materials have been applied in this architecture, showing moderate efficiencies of around 10 cd/A in all-organic, solution-processed OLEDs.
Overall, the outcome of these studies will actively contribute to the current research on materials for all-solution processed OLEDs.
I. Ceric ammonium nitrate (CAN) mediated thiocyanate radical additions to glycals
In this dissertation, a facile entry was developed for the synthesis of 2-thiocarbohydrates and their transformations. Initially, CAN mediated thiocyanation of carbohydrates was carried out to obtain the basic building blocks (2-thiocyanates) for the entire studies. Subsequently, 2-thiocyanates were reduced to the corresponding thiols using appropriate reagents and reaction conditions. The screening of substrates, stereochemical outcome and the reaction mechanism are discussed briefly (Scheme I).
Scheme I. Synthesis of the 2-thiocyanates II and reductions to 2-thiols III & IV.
An interesting mechanism was proposed for the reduction of 2-thiocyanates II to 2-thiols III via formation of a disulfide intermediate. The water-soluble free thiols IV were obtained by cleaving the thiocyanate and benzyl groups in a single step. In the subsequent part of the studies, the synthetic potential of the 2-thiols was successfully expanded by simple synthetic transformations.
II. Transformations of the 2-thiocarbohydrates
The 2-thiols were utilized for convenient transformations including sulfa-Michael additions, nucleophilic substitutions, oxidation to disulfides, and functionalization at the anomeric position. The diverse functionalizations of the carbohydrates at the C-2 position by means of the sulfur linkage are the salient feature of these studies and create an opportunity to expand the utility of 2-thiocarbohydrates for biological studies.
Reagents and conditions: a) I2, pyridine, THF, rt, 15 min; b) K2CO3, MeCN, rt, 1 h; c) MeI, K2CO3, DMF, 0 °C, 5 min; d) Ac2O, H2SO4 (1 drop), rt, 10 min; e) CAN, MeCN/H2O, NH4SCN, rt, 1 h; f) NaN3, ZnBr2, iPrOH/H2O, reflux, 15 h; g) NaOH (1 M), TBAI, benzene, rt, 2 h; h) ZnCl2, CHCl3, reflux, 3 h.
Scheme II. Functionalization of 2-thiocarbohydrates.
These transformations have enhanced the synthetic value of 2-thiocarbohydrates on the preparative scale. Worth mentioning are the Lewis acid-catalyzed replacement of the methoxy group by other nucleophiles and the synthesis of the (2→1)-thiodisaccharides, which were obtained with complete β-selectivity. Additionally, for the first time, a carbohydrate-linked thiotetrazole was synthesized by a (3 + 2) cycloaddition approach at the C-2 position.
III. Synthesis of thiodisaccharides by thiol-ene coupling.
In the final part of the studies, the synthesis of thiodisaccharides by classical photoinduced thiol-ene coupling was successfully achieved.
Reagents and conditions: 2,2-Dimethoxy-2-phenylacetophenone (DPAP), CH2Cl2/EtOH, hv, rt.
Scheme III. Thiol-ene coupling between 2-thiols and exo-glycals.
During the course of the investigations, it was found that steric hindrance plays an important role in the addition of bulky thiols to endo-glycals. We therefore screened suitable substrates for the addition of various thiols to sterically less hindered alkenes (Scheme III). The photochemical addition of 2-thiols to three different exo-glycals delivered excellent regio- and diastereoselectivities as well as yields, which underlines the synthetic potential of this convenient methodology.
The title compound was prepared by the reaction of 1,4,10,13-tetraoxa-7,16-diazacyclooctadecane with 4-chloro-2-methylphenoxyacetic acid in a 1:2 ratio. The structure was established by elemental analysis, IR spectroscopy, NMR (¹H, ¹³C) spectroscopy and X-ray diffraction analysis. Intermolecular hydrogen bonds between the azonium protons and the oxygen atoms of the carboxylate groups were found. The immunoactive properties of the title compound were screened. The compound suppresses spontaneous and Con A-stimulated cell proliferation in vitro and can therefore be considered an immunodepressant.
Graphs are ubiquitous in Computer Science. For this reason, in many areas, it is very important to have the means to express and reason about graph properties. In particular, we want to be able to check automatically if a given graph property is satisfiable. Actually, in most application scenarios it is desirable to be able to explore graphs satisfying the graph property if they exist or even to get a complete and compact overview of the graphs satisfying the graph property.
We show that the tableau-based reasoning method for graph properties as introduced by Lambers and Orejas paves the way for a symbolic model generation algorithm for graph properties. Graph properties are formulated in a dedicated logic making use of graphs and graph morphisms, which is equivalent to first-order logic on graphs as introduced by Courcelle. Our parallelizable algorithm gradually generates a finite set of so-called symbolic models, where each symbolic model describes a set of finite graphs (i.e., finite models) satisfying the graph property. The set of symbolic models jointly describes all finite models for the graph property (complete) and does not describe any finite graph violating the graph property (sound). Moreover, no symbolic model is already covered by another one (compact). Finally, the algorithm is able to generate a minimal finite model from each symbolic model immediately, and it allows for an exploration of further finite models. The algorithm is implemented in the new tool AutoGraph.
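The generate-check-discard loop described above can be sketched generically. This is an illustrative worklist skeleton, not the AutoGraph implementation: `refine`, `covers`, and `is_final` are hypothetical placeholders, instantiated here with a toy interval domain instead of graphs and graph morphisms.

```python
# Abstract sketch of a symbolic-model-generation loop (illustrative only).
# A "symbolic model" is any object supporting coverage tests and refinement.

def generate_symbolic_models(initial, refine, covers, is_final):
    """Worklist loop: refine non-final states, keep final symbolic models,
    and drop any symbolic model already covered by another (compactness)."""
    worklist = [initial]
    results = []
    while worklist:
        state = worklist.pop()
        if is_final(state):
            if not any(covers(r, state) for r in results):
                # also drop previously kept models now covered by `state`
                results = [r for r in results if not covers(state, r)]
                results.append(state)
        else:
            worklist.extend(refine(state))
    return results

# Toy instance: "symbolic models" are integer intervals; refinement splits
# an interval in half, coverage is interval containment.
def refine(iv):
    lo, hi = iv
    mid = (lo + hi) // 2
    return [(lo, mid), (mid + 1, hi)]

def covers(a, b):
    return a[0] <= b[0] and b[1] <= a[1]

def is_final(iv):
    return iv[1] - iv[0] <= 1  # small enough to report

models = generate_symbolic_models((0, 7), refine, covers, is_final)
print(sorted(models))  # → [(0, 1), (2, 3), (4, 5), (6, 7)]
```

Because the worklist entries are independent, the refinement steps could be processed in parallel, mirroring the parallelizability claimed for the actual algorithm.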
Swearing in a public place
(2017)
The paper deals with the usage of swear words on the online forum "reddit". Three research questions are addressed:
How often are swear words used?
How are these swear words received by other users?
Does the topic of the conversation have an influence on the reception and amount of usage of swear words?
The corpus from which the results are taken comprises almost 900 million words, drawn from February 2017. Compared to other, similar studies, the corpus is considerably larger and more contemporary.
In addition, the theoretical part discusses the linguistic basics of swear words. These include concepts such as the theory of politeness, taboos and their corresponding words, and censorship. This is done to explain the factors that influence the use of swear words and why swear words are so special in comparison to other word groups. Furthermore, research results from other corpora are presented and later compared with the present results. These include corpora composed of online communication as well as corpora that reproduce spoken language. All corpora presented concern the English language.
The results of this study indicate that the swear words on "reddit" are used approximately as often as they are on other platforms. The perception of these swear words is mostly positive, which suggests that the use of swear words on "reddit" is not perceived as impolite. In addition, an influence of the discussion topic on the frequency and reception of swear words could be determined.
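The underlying frequency measure in such corpus comparisons (swear tokens per million words, broken down by discussion topic) can be sketched as follows; the comments and the swear-word list below are invented placeholders, not data from the reddit corpus.

```python
# Sketch of a corpus frequency count: swear tokens per million words,
# with a per-topic breakdown. All data here are invented placeholders.
from collections import Counter
import re

comments = [
    ("gaming", "that boss fight was damn hard"),
    ("gaming", "hell yes, finally beat it"),
    ("science", "interesting result, well explained"),
    ("gaming", "damn, respawned again"),
]
SWEAR_WORDS = {"damn", "hell"}  # illustrative list only

def tokens(text):
    return re.findall(r"[a-z']+", text.lower())

total = 0          # all tokens in the corpus
swears = 0         # all swear tokens
per_topic = Counter()
topic_totals = Counter()
for topic, text in comments:
    toks = tokens(text)
    total += len(toks)
    topic_totals[topic] += len(toks)
    hits = sum(1 for t in toks if t in SWEAR_WORDS)
    swears += hits
    per_topic[topic] += hits

rate_pmw = swears / total * 1_000_000  # swear tokens per million words
print(round(rate_pmw))
for topic in per_topic:
    print(topic, per_topic[topic] / topic_totals[topic])
```

Normalising to a per-million-words rate is what makes a toy corpus like this comparable to a 900-million-word one; the per-topic ratios correspond to the topic-influence question of the study.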
Nowadays, the need to protect the environment is more urgent than ever. In the field of chemistry, this translates into practices such as waste prevention, the use of renewable feedstocks, and catalysis: concepts based on the principles of green chemistry. Polymers are an important product of the chemical industry and are also in the focus of these changes. In this thesis, more sustainable approaches to making two classes of polymers, polypeptoids and polyesters, are described.
Polypeptoids or poly(alkyl-N-glycines) are isomers of polypeptides and are biocompatible as well as degradable under biologically relevant conditions. In addition, they can have interesting properties such as lower critical solution temperature (LCST) behavior. They are usually synthesized by the ring-opening polymerization (ROP) of N-carboxyanhydrides (NCAs), which are produced with the use of toxic compounds (e.g. phosgene) and are highly sensitive to humidity. In order to avoid the direct synthesis and isolation of the NCAs, N-phenoxycarbonyl-protected N-substituted glycines are prepared, which can yield the NCAs in situ. The conditions for the NCA synthesis and its direct polymerization are investigated and optimized for the simplest N-substituted glycine, sarcosine. The use of a tertiary amine in less than stoichiometric amounts relative to the N-phenoxycarbonyl-sarcosine seems to accelerate NCA formation drastically and does not affect the efficiency of the polymerization. In fact, well-defined polysarcosines that comply with the monomer-to-initiator ratio can be produced by this method. This approach was also applied to other N-substituted glycines.
Dihydroxyacetone is a sustainable diol produced from glycerol and has already been used for the synthesis of polycarbonates. Here, it was used as a comonomer for the synthesis of polyesters. However, the polymerization of dihydroxyacetone presented difficulties, probably due to the insolubility of the macromolecular chains. To circumvent the problem, the dimethyl acetal-protected dihydroxyacetone was polymerized with terephthaloyl chloride to yield a soluble polymer. When the carbonyl was recovered after deprotection, the product was insoluble in all solvents, showing that the carbonyl in the main chain hinders the dissolution of the polymers. The solubility issue can be avoided when a 1:1 mixture of dihydroxyacetone/ethylene glycol is used, yielding a soluble copolyester.
Anthropogenically amplified erosion leads to increased fine-grained sediment input into the fluvial system of the 15,000 km² Kharaa River catchment in northern Mongolia and constitutes a major stressor for the aquatic ecosystem. This study uniquely combines intensive monitoring, sediment source fingerprinting and catchment modelling techniques, allowing the credibility and accuracy of each method to be compared. High-resolution discharge data were used in combination with daily suspended solids measurements to calculate the suspended sediment budget and to compare it with estimates from the sediment budget model SedNet. The comparison of both techniques showed that the development of an overall sediment budget with SedNet was possible, yielding results in the same order of magnitude (20.3 kt a⁻¹ and 16.2 kt a⁻¹).
Radionuclide sediment tracing using Be-7, Cs-137 and Pb-210 was applied to differentiate sediment sources for particles < 10 μm from hillslope and riverbank erosion, and showed that riverbank erosion generates 74.5% of the suspended sediment load, whereas surface erosion contributes 21.7% and gully erosion only 3.8%. The contribution of the individual sub-catchments of the Kharaa to the suspended sediment load was assessed based on their variation in geochemical composition (e.g. in Ti, Sn, Mo, Mn, As, Sr, B, U, Ca and Sb). These variations were used for sediment source discrimination with geochemical composite fingerprints based on Discriminant Function Analysis driven by a Genetic Algorithm, the Kruskal–Wallis H-test and Principal Component Analysis. The contributions of the individual sub-catchments varied from 6.4% to 36.2%, generally with higher contributions from the sub-catchments in the middle rather than the upstream portions of the study area.
The results indicate that riverbank erosion induced by existing livestock grazing practices is the main cause of elevated fine sediment input. Protecting the headwaters and stabilizing the river banks within the middle reaches were identified as the highest priorities. Deforestation by logging and forest fires should be prevented to avoid increased hillslope erosion in the mountainous areas. Mining activities are of minor importance for the overall catchment sediment load but can constitute locally important point sources for particular heavy metals in the fluvial system.
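The monitoring-based budget described above rests on a simple mass balance: daily suspended sediment load is discharge times concentration, summed over the year. A minimal sketch of that computation, assuming discharge in m³/s and concentration in mg/L (the function name and toy values are illustrative, not data from the study):

```python
def annual_sediment_load_kt(discharge_m3s, ssc_mg_per_l):
    """Sum daily suspended sediment loads over one year and return kt/a."""
    total_tonnes = 0.0
    for q, c in zip(discharge_m3s, ssc_mg_per_l):
        # daily load (t) = Q (m^3/s) * C (mg/L = g/m^3) * 86400 s / 1e6 g per tonne
        total_tonnes += q * c * 86400 / 1e6
    return total_tonnes / 1000.0  # kilotonnes per year

# toy series: a constant 50 m^3/s and 15 mg/L for 365 days
load_kt = annual_sediment_load_kt([50.0] * 365, [15.0] * 365)
```

A real budget would of course use the measured daily series, for which the unit conversion and summation are identical.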
This review analyzes the potential role and long-term effects of field perennial polycultures (mixtures) in agricultural systems, with the aim of reducing the trade-offs between provisioning and regulating ecosystem services. First, crop rotations are identified as a suitable tool for assessing the long-term effects of perennial polycultures on ecosystem services, which are not visible at the single-crop level. Second, the ability of perennial polycultures to support ecosystem services when used in crop rotations is quantified through eight agricultural ecosystem services. Legume-grass mixtures and wildflower mixtures are used as examples of perennial polycultures and compared with silage maize as a typical crop for biomass production. Perennial polycultures enhance soil fertility, soil protection, climate regulation, pollination, pest and weed control, and landscape aesthetics compared with maize. They also score lower for biomass production, which confirms the trade-off between provisioning and regulating ecosystem services. However, the additional benefits provided by perennial polycultures, such as reduced costs for mineral fertilizer, pesticides, and soil tillage, and a significant preceding-crop effect that increases the yields of subsequent crops, should be taken into account. A full assessment of agricultural ecosystem services nevertheless requires a more holistic analysis that is beyond the capabilities of current frameworks.
Theoretical background: Supervision plays a central role in the acquisition of knowledge and competencies as well as in quality assurance.
Research question: The aim was to map the current state of research on supervision in cognitive behavioral therapy and to derive conclusions for future research.
Method: A scoping review was conducted for evidence synthesis, allowing central concepts, current evidence, and potential research needs to be mapped. In addition to a systematic literature search, forward and backward search strategies were employed.
Results: Twelve publications based on ten empirical studies were included. All studies described training settings, but only a few examined practice-based interventions (e.g., role-plays). Effects were often assessed subjectively, and the methodological quality of the accompanying studies varied.
Conclusions: Further methodologically rigorous studies, whether experimentally oriented or conducted in clinical practice, are needed to enrich supervision research.
Studium nach Bologna
(2017)
The aim of this third volume of the Potsdamer Beiträge zur Hochschulforschung is to examine selected aspects of the higher-education debate on studying and teaching and to deepen them with empirical findings. The focus is on current debates such as the design of the entry phase of study programs, the enhancement of employability, the quality of internships, and problems of teacher education. The German higher-education debate is thereby broadened by relevant contributions from other Western and Eastern European countries.
The series sees itself as a forum for various actors in higher-education research, whose analyses and empirical results are intended to stimulate the discussion on quality development in teaching and studying. The volume is addressed to everyone interested in developments at universities.
Decades of research have demonstrated that physical stress (PS) stimulates bone remodeling and affects bone structure and function through complex mechanotransduction mechanisms. Recent research has laid the groundwork for the hypothesis that mental stress (MS) also influences bone biology, eventually leading to osteoporosis and increased bone fracture risk. These effects are likely exerted through modulation of hypothalamic-pituitary-adrenal axis activity, resulting in an altered release of growth hormones, glucocorticoids and cytokines, as demonstrated in human and animal studies. Furthermore, molecular cross talk between mental and physical stress is thought to exist, with either synergistic or preventative effects on bone disease progression depending on the characteristics of the applied stressor. This mini review explains the emerging concept of MS as an important player in bone adaptation and its potential cross talk with PS by summarizing the current state of knowledge, highlighting newly evolving notions (such as intergenerational transmission of stress and its epigenetic modifications affecting bone) and proposing new research directions.
Background: Functional abdominal pain (FAP) is not only a highly prevalent disease but also poses a considerable burden on children and their families. Untreated, FAP is highly persistent until adulthood, also leading to an increased risk of psychiatric disorders. Intervention studies underscore the efficacy of cognitive behavioral treatment approaches but are limited in terms of sample size, long-term follow-up data, controls and inclusion of psychosocial outcome data.
Methods/Design: In a multicenter randomized controlled trial, 112 children aged 7 to 12 years who fulfill the Rome III criteria for FAP will be allocated to an established cognitive behavioral training program for children with FAP (n = 56) or to an active control group (focusing on age-appropriate information delivery; n = 56). Randomization occurs centrally, blockwise, and is stratified by center. This study is performed in five pediatric gastroenterology outpatient departments. Observer-blind assessments of outcome variables take place four times: pre-treatment, post-treatment, and 3 and 12 months post-treatment. The primary outcome is the course of pain intensity and frequency. Secondary endpoints are health-related quality of life, pain-related coping and cognitions, as well as self-efficacy.
Discussion: This confirmatory randomized controlled clinical trial evaluates the efficacy of a cognitive behavioral intervention for children with FAP. By applying an active control group, time and attention processes can be controlled, and long-term follow-up data over the course of one year can be explored.
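The central, blockwise, center-stratified randomization described in the design can be sketched as follows. This is a hypothetical illustration of the scheme, not the trial's actual randomization software; the block size, group labels, and seed are assumptions:

```python
import random

def block_randomize(n_per_center, centers, block_size=4, seed=42):
    """Blockwise 1:1 allocation, stratified by center: within each center,
    assignments are drawn in shuffled blocks containing equal numbers of
    'training' and 'control' slots, keeping group sizes balanced."""
    rng = random.Random(seed)
    allocation = {}
    for center in centers:
        assignments = []
        while len(assignments) < n_per_center:
            block = ["training"] * (block_size // 2) + ["control"] * (block_size // 2)
            rng.shuffle(block)
            assignments.extend(block)
        # truncation stays balanced when n_per_center is a multiple of block_size
        allocation[center] = assignments[:n_per_center]
    return allocation

# toy run: 24 children in each of two hypothetical centers
alloc = block_randomize(24, ["center_A", "center_B"])
```

Stratifying by center ensures that no single site ends up with a lopsided share of one arm, which matters in a five-site multicenter design.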
The interdisciplinary workshop STOCHASTIC PROCESSES WITH APPLICATIONS IN THE NATURAL SCIENCES was held in Bogotá, at Universidad de los Andes, from December 5 to December 9, 2016. It brought together researchers from Colombia, Germany, France, Italy, and Ukraine, who presented recent progress in mathematical research related to stochastic processes with applications in biophysics.
The present volume collects three of the four courses held at this meeting by Angelo Valleriani, Sylvie Rœlly and Alexei Kulik.
A particular aim of this collection is to inspire young scientists in setting up research goals within the wide scope of fields represented in this volume.
Angelo Valleriani, PhD in high energy physics, is group leader of the team "Stochastic processes in complex and biological systems" from the Max-Planck-Institute of Colloids and Interfaces, Potsdam.
Sylvie Rœlly, Docteur en Mathématiques, is the head of the chair of Probability at the University of Potsdam.
Alexei Kulik, Doctor of Sciences, is a leading researcher at the Institute of Mathematics of the Ukrainian National Academy of Sciences.
In this thesis, stochastic dynamics modelling collective motions of populations, one of the most mysterious types of biological phenomena, are considered. For a system of N particle-like individuals, two kinds of asymptotic behaviour are studied: ergodicity and flocking properties in long time, and propagation of chaos when the number N of agents goes to infinity. Cucker and Smale's deterministic mean-field kinetic model for a population without a hierarchical structure is the starting point of our journey: the first two chapters are dedicated to understanding the various stochastic dynamics it inspires, with random noise added in different ways. The third chapter, an attempt to improve those results, is built upon the cluster expansion method, a technique from statistical mechanics. Exponential ergodicity is obtained for a class of non-Markovian processes with non-regular drift. In the final part, the focus shifts to a stochastic system of interacting particles derived from Keller and Segel's 2-D parabolic-elliptic model for chemotaxis. Existence and weak uniqueness are proven.
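The Cucker-Smale dynamics at the heart of the first chapters can be simulated numerically. Below is an Euler-Maruyama sketch of one common stochastic variant with additive noise on the velocities, using the standard communication rate ψ(r) = K/(1 + r²)^β; the way noise enters and all parameter values are illustrative assumptions, not the thesis's exact models:

```python
import math
import random

def simulate_cucker_smale(n=5, steps=2000, dt=0.01, sigma=0.1, k=1.0, beta=0.5, seed=1):
    """Euler-Maruyama integration of a 1-D Cucker-Smale system with
    additive velocity noise: dv_i = (1/n) sum_j psi(|x_j - x_i|)(v_j - v_i) dt
    + sigma dW_i, dx_i = v_i dt."""
    rng = random.Random(seed)
    x = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    v = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    for _ in range(steps):
        dv = []
        for i in range(n):
            acc = 0.0
            for j in range(n):
                # communication rate psi(r) = k / (1 + r^2)^beta
                psi = k / (1.0 + (x[j] - x[i]) ** 2) ** beta
                acc += psi * (v[j] - v[i])
            dv.append(acc / n * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0))
        x = [xi + vi * dt for xi, vi in zip(x, v)]
        v = [vi + d for vi, d in zip(v, dv)]
    return x, v

x_final, v_final = simulate_cucker_smale()
# spread of velocities; alignment drives it down, noise keeps it nonzero
velocity_spread = max(v_final) - min(v_final)
```

In the deterministic case (sigma = 0) the velocity spread decays and the flock aligns; the stochastic variants studied in the thesis ask when, and in what sense, this flocking survives the noise.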
Understanding the distribution of species is fundamental for biodiversity conservation, ecosystem management, and increasingly also for climate impact assessment. The presence of a species in a given site depends on physiological limitations (abiotic factors), interactions with other species (biotic factors), migratory or dispersal processes (site accessibility), as well as the continuing effects of past events, e.g. disturbances (site legacy). Existing approaches to predict species distributions either (i) correlate observed species occurrences with environmental variables describing abiotic limitations, thus ignoring biotic interactions, dispersal and legacy effects (statistical species distribution model, SDM); or (ii) mechanistically model the variety of processes determining species distributions (process-based model, PBM). SDMs are widely used due to their easy applicability and ability to handle varied data qualities, but they fail to reproduce the dynamic response of species distributions to changing conditions. PBMs are expected to be superior in this respect, but they need very specific data unavailable for many species, and they are often more complex and computationally demanding. More recently, hybrid models have linked the two approaches to combine their respective strengths.
In this thesis, I apply and compare statistical and process-based approaches to predict species distributions, and I discuss their respective limitations, specifically for applications in changing environments. Detailed analyses of SDMs for boreal tree species in Finland reveal that non-climatic predictors - edaphic properties and biotic interactions - are important limitations at the treeline, contesting the assumption of unrestricted, climatically induced range expansion. While the estimated SDMs are successful within their training data range, spatial and temporal model transfer fails. Mapping and comparing the sampled predictor space among data subsets identifies spurious extrapolation as the plausible explanation for the limited model transferability. Using these findings, I analyze the limited success of an established PBM (LPJ-GUESS) applied to the same problem. Examination of process representation and parameterization in the PBM identifies implemented processes that need adjustment (competition between species, disturbance) and missing processes that are crucial in boreal forests (nutrient limitation, forest management). Based on climatic correlations that shift over time, I stress the restricted temporal transferability of the bioclimatic limits used in LPJ-GUESS and similar PBMs. By critically assessing the performance of SDM and PBM in this application, I demonstrate the importance of understanding the limitations of the applied methods.
As a potential solution, I add a novel approach to the repertoire of existing hybrid models. Through simulation experiments with an individual-based PBM that reproduces community dynamics resulting from biotic factors, dispersal and legacy effects, I assess the resilience of coastal vegetation to abrupt hydrological changes. Based on the results of the resilience analysis, I then modify temporal SDM predictions, thereby transferring relevant process detail from PBM to SDM. This direction of knowledge transfer, from PBM to SDM, avoids disadvantages of current hybrid models and increases the applicability of the resulting model in long-term, large-scale applications. A further advantage of the proposed framework is its flexibility, as it is readily extended to other model types, disturbance definitions and response characteristics.
Concluding, I argue that we already have a diverse range of promising modelling tools at hand, which can be refined further. But most importantly, they need to be applied more thoughtfully. Bearing their limitations in mind, combining their strengths and openly reporting underlying assumptions and uncertainties is the way forward.
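At their core, the correlative SDMs discussed above regress observed presences and absences on environmental predictors. A toy sketch of that idea, using pure-Python logistic regression on a single hypothetical climate variable (the data, learning rate, and epoch count are invented for illustration and are far simpler than the models used in the thesis):

```python
import math

def fit_sdm(temps, presences, lr=0.1, epochs=2000):
    """Toy correlative SDM: logistic regression of presence/absence on a
    single climate predictor, fitted by batch gradient descent."""
    w, b = 0.0, 0.0
    n = len(temps)
    for _ in range(epochs):
        gw = gb = 0.0
        for t, y in zip(temps, presences):
            p = 1.0 / (1.0 + math.exp(-(w * t + b)))  # predicted occurrence probability
            gw += (p - y) * t
            gb += (p - y)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

# invented data: the species occurs only at the warmer sites
temps = [-5.0, -3.0, -1.0, 0.0, 1.0, 3.0, 5.0, 7.0]
pres = [0, 0, 0, 0, 1, 1, 1, 1]
w, b = fit_sdm(temps, pres)

def predict(t):
    """Occurrence probability at temperature t under the fitted toy model."""
    return 1.0 / (1.0 + math.exp(-(w * t + b)))
```

The transferability problem discussed above appears directly in such a model: `predict` is only trustworthy for temperatures inside the range the model was trained on, and extrapolating beyond it is exactly the spurious extrapolation identified in the thesis.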
Start-up incentives targeted at unemployed individuals have become an important tool of the Active Labor Market Policy (ALMP) to fight unemployment in many countries in recent years. In contrast to traditional ALMP instruments like training measures, wage subsidies, or job creation schemes, which are aimed at reintegrating unemployed individuals into dependent employment, start-up incentives are a fundamentally different approach to ALMP, in that they intend to encourage and help unemployed individuals to exit unemployment by entering self-employment and, thus, by creating their own jobs. In this sense, start-up incentives for unemployed individuals serve not only as employment and social policy to activate job seekers and combat unemployment but also as business policy to promote entrepreneurship. The corresponding empirical literature on this topic so far has been mainly focused on the individual labor market perspective, however. The main part of the thesis at hand examines the new start-up subsidy (“Gründungszuschuss”) in Germany and consists of four empirical analyses that extend the existing evidence on start-up incentives for unemployed individuals from multiple perspectives and in the following directions:
First, it provides the first impact evaluation of the new start-up subsidy in Germany. The results indicate that participation in the new start-up subsidy has significant positive and persistent effects on both reintegration into the labor market and the income profiles of participants, in line with previous evidence on comparable German and international programs, which emphasizes the general potential of start-up incentives as part of the broader ALMP toolset. Furthermore, a new sensitivity analysis of the applied propensity score matching approach integrates findings from entrepreneurship and labor market research about the key role of an individual’s personality in the start-up decision, business performance, and general labor market outcomes into the impact evaluation of start-up incentives. The sensitivity analysis with regard to the inclusion and exclusion of usually unobserved personality variables reveals that differences in the estimated treatment effects are small in magnitude and mostly insignificant. Consequently, concerns about potential overestimation of treatment effects in previous evaluation studies of similar start-up incentives due to usually unobservable personality variables are less justified, as long as the set of observed control variables is sufficiently informative (Chapter 2).
Second, the thesis expands our knowledge about the longer-term business performance and potential of subsidized businesses arising from the start-up subsidy program. In absolute terms, the analysis shows that a relatively high share of subsidized founders successfully survives in the market with their original businesses in the medium to long run. The subsidy also yields a “double dividend” to a certain extent in terms of additional job creation. Compared to “regular”, i.e., non-subsidized new businesses founded by non-unemployed individuals in the same quarter, however, the economic and growth-related impulses set by participants of the subsidy program are only limited with regard to employment growth, innovation activity, or investment. Further investigations of possible reasons for these differences show that differential business growth paths of subsidized founders in the longer run seem to be mainly limited by higher restrictions to access capital and by unobserved factors, such as less growth-oriented business strategies and intentions, as well as lower (subjective) entrepreneurial persistence. Taken together, the program has only limited potential as a business and entrepreneurship policy intended to induce innovation and economic growth (Chapters 3 and 4).
And third, an empirical analysis at the level of German regional labor markets shows that there is high regional variation in subsidized start-up activity relative to overall new business formation. The positive correlation between regular start-up intensity and the share of all unemployed individuals who participate in the start-up subsidy program suggests that (nascent) unemployed founders also profit from the beneficial effects of regional entrepreneurship capital. Moreover, the analysis of potential deadweight and displacement effects from an aggregated regional perspective emphasizes that the start-up subsidy for unemployed individuals represents an intervention into existing markets, which affects incumbents and potentially produces inefficiencies and market distortions. This macro perspective deserves more attention and research in the future (Chapter 5).
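The evaluation in Chapter 2 rests on propensity score matching. Its core matching step can be sketched as nearest-neighbor matching on estimated scores; this is a textbook simplification with invented numbers, not the study's actual estimator:

```python
def nearest_neighbor_match(treated_scores, control_scores):
    """1-nearest-neighbor propensity score matching with replacement:
    each treated unit is paired with the control whose estimated
    propensity score is closest."""
    matches = []
    for score in treated_scores:
        j = min(range(len(control_scores)),
                key=lambda k: abs(control_scores[k] - score))
        matches.append(j)
    return matches

# invented propensity scores for two treated and three control units
pairs = nearest_neighbor_match([0.8, 0.6], [0.55, 0.79, 0.3])  # → [1, 0]
```

The sensitivity analysis described above then asks how these matches, and the resulting treatment effect estimates, change when normally unobserved personality variables enter the score estimation.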
The Star Excursion Balance Test (SEBT) is effective in measuring dynamic postural control (DPC). This study aimed to determine whether DPC measured by the SEBT differs between young athletes (YA) with back pain (BP) and those without (NBP). 53 BP YA and 53 NBP YA matched for age, height, weight, training years, training sessions per week and training minutes per session were studied. Participants performed 4 practice trials, after which 3 measurements in the anterior, posteromedial and posterolateral SEBT reach directions were recorded. Normalized reach distance was analyzed using the mean of all 3 measurements. There was no statistically significant difference (p > 0.05) between the reach distances of BP (87.2 ± 5.3, 82.4 ± 8.2, 78.7 ± 8.1) and NBP (87.8 ± 5.6, 82.4 ± 8.0, 80.0 ± 8.8) athletes in the anterior, posteromedial and posterolateral directions, respectively. DPC in YA with BP, as assessed by the SEBT, did not differ from that of NBP YA.
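The normalized reach distance used above is conventionally computed as the mean of the three trials expressed as a percentage of a reference length. A minimal sketch, assuming leg length as the reference (common in SEBT studies, but not stated in the abstract) and invented trial values:

```python
def normalized_reach_pct(trials_cm, reference_length_cm):
    """Mean of the reach trials expressed as a percentage of a reference
    length (leg length is assumed here; the abstract does not state the
    reference used)."""
    return sum(trials_cm) / len(trials_cm) / reference_length_cm * 100.0

# invented trial values for one athlete in one reach direction
anterior_score = normalized_reach_pct([78.0, 80.0, 82.0], 90.0)
```

Normalizing by a body dimension makes reach distances comparable across athletes of different statures, which is what allows group means like 87.2 ± 5.3 to be contrasted directly.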