Numerous reports of relatively rapid climate changes over the past century make a clear case for the impact of aerosols and clouds, which have been identified as the largest sources of uncertainty in climate projections. Aerosols alter Earth's radiation balance depending on their size, morphology and chemical composition. Competing effects in the atmosphere can be further studied by investigating the evolution of aerosol microphysical properties, which are the focus of the present work.
The aerosol size distribution, the refractive index and the single-scattering albedo are commonly used microphysical properties linked to aerosol type and radiative forcing. Highly advanced lidars (light detection and ranging) have turned aerosol monitoring and optical profiling into a routine process. Lidar data have been widely used to retrieve the size distribution through inversion of the so-called Lorenz-Mie model (LMM). This model offers a reasonable treatment of particles approximated as spheres, but it no longer provides a viable description of other naturally occurring, arbitrarily shaped particles such as dust. Non-spherical geometries as simple as spheroids, on the other hand, reproduce certain optical properties with enhanced accuracy. Motivated by this, we adapt the LMM to accommodate the spheroid-particle approximation by introducing the notion of a two-dimensional (2D) shape-size distribution.
Inverting only a few optical data points to retrieve the shape-size distribution is a non-linear ill-posed problem. A brief mathematical analysis is presented which reveals the inherent tendency towards highly oscillatory solutions, explores the available options for a generalized solution through regularization methods and quantifies the ill-posedness. The latter improves our understanding of the main cause of instability in the produced solution spaces. The new approach facilitates the exploitation of additional lidar data points from depolarization measurements, which are associated with particle non-sphericity. However, the generalization of the LMM vastly increases the complexity of the problem. The underlying theory for the calculation of the optical cross sections involved (T-matrix theory) is so computationally costly that it would render a retrieval analysis impractical. Moreover, the discretization of the model equation by a 2D collocation method, proposed in this work, involves double integrations which are additionally time-consuming. We overcome these difficulties by using precalculated databases and a dedicated retrieval software package (SphInX: Spheroidal Inversion eXperiments) developed especially for our purposes, capable of performing multiple-dataset inversions and producing a wide range of microphysical retrieval outputs.
Hybrid regularization in conjunction with minimization processes forms the basis of our algorithms. Synthetic-data retrievals simulating various atmospheric scenarios are performed in order to test the efficiency of different regularization methods. A major concern here is the gap in the contemporary literature, which rarely provides full sets of uncertainties across a wide variety of numerical instances. To this end, the most appropriate methods are identified through a thorough analysis of their overall behavior with regard to accuracy and stability. Our numerical experiments capture the general trend of the initial size distributions, and the reconstruction quality depends on the data error level. Moreover, the need for more or fewer depolarization points is explored for the first time from the point of view of the microphysical retrieval. Finally, our approach is tested on various measurement cases, giving further insight for future algorithm improvements.
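To make the regularization idea concrete, the following is a minimal, self-contained sketch of Tikhonov regularization for a generic discretized linear inverse problem Af = g. It is not the SphInX code; the smoothing kernel, the second-difference operator and the regularization parameters shown here are purely illustrative assumptions.

```python
import numpy as np

# Illustrative forward model: a smoothing kernel A mapping a size
# distribution f (m bins) to a few optical data points g (n values).
rng = np.random.default_rng(0)
m, n = 50, 8
radii = np.linspace(0.1, 5.0, m)                       # hypothetical size grid
A = np.exp(-((np.linspace(0.5, 4.5, n)[:, None] - radii[None, :]) ** 2))

f_true = np.exp(-((radii - 2.0) ** 2) / 0.5)           # synthetic "true" distribution
g = A @ f_true + 0.02 * rng.standard_normal(n)         # noisy synthetic data

# Second-difference operator L penalizes oscillatory solutions.
L = np.diff(np.eye(m), n=2, axis=0)

def tikhonov(A, g, L, lam):
    """Solve min ||A f - g||^2 + lam * ||L f||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ g)

# With lam close to zero the solution oscillates wildly (the ill-posedness
# described above); a moderate lam recovers the broad shape of f_true.
for lam in (1e-6, 1e-3, 1e-1):
    f_hat = tikhonov(A, g, L, lam)
    print(lam, np.linalg.norm(f_hat - f_true))
```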
In the study of relativistic jets, one of the key open questions is their interaction with the environment on the microscopic level. Here, we study the initial evolution of both electron-proton (e⁻-p⁺) and electron-positron (e±) relativistic jets containing helical magnetic fields, focusing on their interaction with an ambient plasma. We have performed simulations of "global" jets containing helical magnetic fields in order to examine how helical magnetic fields affect kinetic instabilities such as the Weibel instability, the kinetic Kelvin-Helmholtz instability (kKHI) and the Mushroom instability (MI). In our initial simulation study these kinetic instabilities are suppressed and new types of instabilities can grow. In the e⁻-p⁺ jet simulation a recollimation-like instability occurs and jet electrons are strongly perturbed. In the e± jet simulation a recollimation-like instability occurs at early times, followed by a kinetic instability, and the general structure is similar to that of a simulation without a helical magnetic field. Simulations using much larger systems are required in order to thoroughly follow the evolution of global jets containing helical magnetic fields.
This thesis examines the influence of the organizational form of microfinance institutions (MFIs) on their social performance. In this context, social performance is understood as the direct translation of the MFIs' social mission into practice. Specifically, the social performance of two organizational forms, the shareholder-owned firm (SOF) and the non-governmental organisation (NGO), is examined and compared. This comparison rests on the assumption, widely held among experts, that NGO MFIs show higher social performance than SOF MFIs because they can be attributed to the nonprofit sector. The relevant empirical research available to date is analyzed in a literature review. So far, only a relatively small number of empirical studies exist in this field, since research on the social performance of MFIs is a rather new area of inquiry. The results of the literature review further show that no clear statement can be made regarding the influence of the organizational form of MFIs on their social performance, as the studies considered reach different conclusions. The results do suggest, however, that in some geographic regions of the world NGO MFIs show better social performance than SOF MFIs. Overall, the thesis provides an important overview of the state of research in this field and identifies several research gaps that should be addressed in future studies.
Mit Korbmachern zum Sieg
(2016)
This study addresses the relationship between myth and modernity in the literary work (poems, short stories, novels and chronicles) of the Chilean author Rosamel del Valle (Curacaví, 1901 – Santiago de Chile, 1965). Across his texts there is a tension between a poetic project based on a mythical vision of the world and a historical context that privileges more rationalist positions, relegating the poetic and the mythical. Already in the nineteenth century, modernity and the associated phenomena of modernization displaced poetry as discourse, and the poet as a person, into a deficient position within society, a situation that continued into the twentieth century. Because of this conflict, Rosamel del Valle questions in his work the scope of his own aesthetic and vital postulates. This entails a vacillation between the reaffirmation of his poetic-vital project and the consciousness of failure. For this reason the Chilean poet's work contains a Lebenswissen, a knowledge of life that conceives of poetry as a privileged form of living. Given the difficult historical conditions, however, this Lebenswissen can also be understood as an ÜberLebenswissen, a knowledge of survival (Ette).
The first part of the text studies del Valle's mythical conception of poetry and analyzes the different levels at which myth appears in his literary work: as thought, as language and as traditional narrative. The identification of poetry with myth reveals the main characteristics of del Valle's poetics: an ontological conception that distinguishes between a visible and an invisible dimension of reality; the mystical tendency of poetry; a cyclical conception of time that establishes a relationship with the discourses of memory and death, as well as the idea of a utopian past that could be revived in the present through poetry; and, finally, the figure of the woman as a symbol of love and of poetry.
The second part investigates the relationship and the consequences of this "mythical poetry" in the context of modernity. It addresses in particular the effect on Rosamel del Valle's poetics of the Entzauberung der Welt (Weber), the disenchantment of the world, as a specific experience of the epoch. To this end, his impressions of New York are highlighted above all. This city, where he lived and worked between 1946 and 1963, becomes in his texts a place in which the "poetic dwelling of man" would be possible in modernity.
The relationship between climate and forest productivity is an intensively studied subject in forest science. This thesis is embedded within the general framework of future forest growth under climate change and its implications for the ongoing forest conversion. My objective is to investigate future forest productivity at different spatial scales (from a single specific forest stand to aggregated information across Germany), with a focus on oak-pine forests in the federal state of Brandenburg. The overarching question is how oak-pine forests are affected by climate change as described by a variety of climate scenarios. I answer this question using a model-based analysis of tree growth processes and responses to different climate scenarios, with emphasis on drought events. In addition, a method is developed which considers climate change uncertainty in forest management planning.
As a first 'screening' of climate change impacts on forest productivity, I calculated the change in net primary production on the basis of a large set of climate scenarios for different tree species and the total area of Germany. Temperature increases of up to 3 K lead to positive effects on the net primary production of all selected tree species. However, in water-limited regions this positive net primary production trend depends on the length of drought periods, which results in a larger uncertainty regarding future forest productivity. One of the regions with the highest uncertainty in net primary production development is the federal state of Brandenburg.
To enhance the understanding and capability of model-based analysis of tree growth sensitivity to drought stress, two water uptake approaches are contrasted in pure pine and mixed oak-pine stands. The first water uptake approach consists of an empirical function for root water uptake. The second approach is more mechanistic and calculates the differences in soil water potential along a soil-plant-atmosphere continuum. I assumed the total root resistance to vary between low, medium and high levels. For validation purposes, three data sets on different tree-growth-relevant time scales are used. The results show that, except for the mechanistic water uptake approach with high total root resistance, all transpiration outputs exceeded observed values. On the other hand, high transpiration led to a better match with the observed soil water content. The strongest correlation between simulated and observed annual tree-ring width occurred with the mechanistic water uptake approach and high total root resistance. The findings highlight the importance of severe drought as a main reason for small diameter increment, best supported by the mechanistic water uptake approach with high root resistance. However, if all aspects of the data sets are considered, no approach can be judged superior to the other. I conclude that the uncertainty of future productivity of water-limited forest ecosystems under changing environmental conditions is linked to simulated root water uptake.
Finally, my study addressed the impacts of climate change combined with management scenarios on an oak-pine forest in order to evaluate growth, biomass and the amount of harvested timber. The pine and the oak trees are 104 and 9 years old, respectively. Three management scenarios with different thinning intensities and different climate scenarios are used to simulate the performance of management strategies which explicitly account for the risks associated with achieving three predefined objectives (maximum carbon storage, maximum harvested timber, intermediate). I found that in most cases there is no single management strategy which best fits the different objectives. The analysis of variance in the growth-related model outputs showed an increase in climate-induced uncertainty with increasing climate warming. Interestingly, the increase in climate-induced uncertainty is much greater from 2 to 3 K than from 0 to 2 K.
Recently, driven by increasing demands on functionality and flexibility, previously isolated systems have been interconnected to yield powerful adaptive System-of-Systems (SoS) solutions with an overall robust, flexible and emergent behavior. An adaptive SoS comprises a variety of system types, ranging from small embedded systems to adaptive cyber-physical systems. On the one hand, each system is independent, follows a local strategy and optimizes its behavior to reach its own goals. On the other hand, systems must cooperate with each other to enrich the overall functionality and jointly perform at the SoS level, reaching global goals that cannot be satisfied by any one system alone. Because local and global behavior optimizations can be at odds, conflicts may arise between systems that have to be resolved by the adaptive SoS.
This thesis proposes a modeling language that facilitates the description of an adaptive SoS by treating adaptation capabilities, in the form of feedback loops, as first-class entities. Moreover, the thesis adopts the Models@runtime approach to integrate the knowledge available in the systems as runtime models into the modeled adaptation logic. Furthermore, the modeling language focuses on the description of system interactions within the adaptive SoS in order to reason about individual system functionality and how it emerges via collaborations into an overall joint SoS behavior. The modeling language approach thus enables the specification of local adaptive system behavior, the integration of knowledge in the form of runtime models, and the joint interactions via collaborations, placing the available adaptive behavior in an overall layered, adaptive SoS architecture.
Besides the modeling language, this thesis proposes analysis rules to investigate the modeled adaptive SoS, which enable the detection of architectural patterns as well as design flaws and point to possible system threats. Moreover, a simulation framework is presented which allows the direct execution of the modeled SoS architecture. The analysis rules and the simulation framework can therefore be used to verify the interplay between systems as well as the modeled adaptation effects within the SoS. The thesis realizes the proposed concepts of the modeling language by mapping them to a state-of-the-art standard from the automotive domain, thus showing their applicability to actual systems. Finally, the modeling language approach is evaluated by remodeling current research scenarios from different domains, which demonstrates that the modeling language concepts are powerful enough to cope with a broad range of existing research problems.
Decreasing groundwater levels in many parts of Germany and decreasing low flows in Central Europe have created a need for adaptation measures to stabilize the water balance and to increase low flows. The objective of our study was to estimate the impact of ditch water level management on stream-aquifer interactions in small lowland catchments of the mid-latitudes. The water balance of a ditch-irrigated area and fluxes between the subsurface and the adjacent stream were modeled for three runoff recession periods using the Hydrus-2D software package. The results showed that the subsurface flow to the stream was closely related to the difference between the water level in the ditch system and the stream. Evapotranspiration during the growing season additionally reduced base flow. It was crucial to stop irrigation during a recession period to decrease water withdrawal from the stream and enhance the base flow by draining the irrigated area. Mean fluxes to the stream were between 0.04 and 0.64 L s⁻¹ for the first 20 days of the low-flow periods. This only slightly increased the flow in the stream, whose mean was 57 L s⁻¹ during the period with the lowest flows. Larger areas would be necessary to effectively increase flows in mesoscale catchments.
The planning and execution as well as the static and dynamic analysis of business processes in the field of administration and government at the municipal, state and federal level, supported by information and communication technologies, have long occupied politicians and IT strategists as well as the public.
The resulting term e-government has subsequently been examined from a wide variety of technical, political and semantic perspectives.
The present thesis concentrates on two main topics:
• The first topic concerns the design of a hierarchical architecture model, for which seven hierarchical layers can be identified. These appear necessary, but also sufficient, to describe the general case.
The background for this is many years of process and administrative experience as head of the IT department of the city administration of Landshut, an independent city with about 69,000 inhabitants to the north-east of Munich. It is representative of many administrative procedures in the Federal Republic of Germany and yet, as an object of analysis, remains manageable in its overall complexity and number of processes.
From the analysis of all core workflows, static and dynamic structures can thus be extracted and modeled in abstract form.
The focus lies on the representation of the existing service workflows in a municipality. The transformation of a service request in a hierarchical system, the representation of the control and operation states in all layers, and the strategy for error detection and error recovery create a transparent basis for comprehensive restructuring and optimization.
FMC-eCS was used for the modeling, a methodology developed at the Hasso-Plattner-Institut für Softwaresystemtechnik GmbH (HPI) in the Communication Systems group for modeling discrete-state systems while taking possible inconsistencies into account (advisor: Prof. Dr.-Ing. Werner Zorn [ZW07a, ZW07b]).
• The second topic is devoted to the quantitative modeling and optimization of e-government service systems, carried out using the example of the citizens' office (Bürgerbüro) of the city of Landshut in the period 2008 to 2015. This is based on continuous collection of operational data with extensive preprocessing to extract probability distributions that can be described mathematically.
The duty roster developed from this was verified with respect to the achievable optimizations in permanent real-world operation.
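Purely as an illustration of this kind of quantitative service-system modeling (the actual FMC-QE models and the Landshut operational data are not reproduced here; the arrival rate, mean service time and counter numbers below are invented), one can compare staffing levels with a simple M/M/c queueing sketch:

```python
from math import factorial

def erlang_c(arrival_rate, service_rate, servers):
    """Probability that an arriving customer has to wait (M/M/c queue)."""
    a = arrival_rate / service_rate            # offered load in Erlangs
    if servers <= a:
        return 1.0                             # unstable system: everyone waits
    top = (a ** servers / factorial(servers)) * (servers / (servers - a))
    bottom = sum(a ** k / factorial(k) for k in range(servers)) + top
    return top / bottom

# Hypothetical figures: 40 customers per hour, 5.5 minutes mean service time.
lam, mu = 40.0, 60.0 / 5.5
for c in range(4, 9):
    p_wait = erlang_c(lam, mu, c)
    wq = p_wait / (c * mu - lam)               # mean waiting time in the queue (hours)
    print(c, "counters:", round(p_wait, 3), "waiting prob.,", round(60 * wq, 2), "min")
```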
[ZW07a] Zorn, Werner: «FMC-QE A New Approach in Quantitative Modeling», talk at MSV'07 – The 2007 International Conference on Modeling, Simulation and Visualization Methods, WorldComp2007, Las Vegas, 28 June 2007.
[ZW07b] Zorn, Werner: «FMC-QE, A New Approach in Quantitative Modeling», publication, Hasso-Plattner-Institut für Softwaresystemtechnik at the University of Potsdam, 28 June 2007.
In many statistical applications, the aim is to model the relationship between covariates and some outcomes. The choice of an appropriate model depends on the outcome and the research objectives, such as linear models for continuous outcomes, logistic models for binary outcomes and the Cox model for time-to-event data. In epidemiological, medical, biological, societal and economic studies, logistic regression is widely used to describe the relationship between a response variable as binary outcome and explanatory variables as a set of covariates. However, epidemiologic cohort studies are quite expensive with regard to data management, since following up a large number of individuals takes a long time. Therefore, the case-cohort design is applied to reduce the cost and time of data collection. Case-cohort sampling draws a small random sample from the entire cohort, which is called the subcohort. The advantage of this design is that the covariate and follow-up data are recorded only on the subcohort and on all cases (all members of the cohort who develop the event of interest during the follow-up process).
In this thesis, we investigate the estimation in the logistic model for case-cohort design. First, a model with a binary response and a binary covariate is considered. The maximum likelihood estimator (MLE) is described and its asymptotic properties are established. An estimator for the asymptotic variance of the estimator based on the maximum likelihood approach is proposed; this estimator differs slightly from the estimator introduced by Prentice (1986). Simulation results for several proportions of the subcohort show that the proposed estimator gives lower empirical bias and empirical variance than Prentice's estimator.
Then the MLE in the logistic regression with a discrete covariate under the case-cohort design is studied. Here the approach of the binary covariate model is extended. By proving asymptotic normality of the estimators, standard errors for the estimators can be derived. The simulation study demonstrates the estimation procedure of the logistic regression model with a one-dimensional discrete covariate. Simulation results for several proportions of the subcohort and different choices of the underlying parameters indicate that the estimator developed here performs reasonably well. Moreover, a comparison between theoretical values and simulation results for the asymptotic variance of the estimator is presented.
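To illustrate the sampling design itself (not the specific MLE and variance estimator derived in the thesis), the following sketch simulates a cohort with a binary covariate, draws a case-cohort sample, and fits the logistic model by a generic inverse-probability-weighted pseudo-log-likelihood in which non-case subcohort members are up-weighted by the inverse sampling fraction; all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Full cohort with a binary covariate and a logistic outcome model.
N, beta0, beta1 = 20000, -3.0, 1.0
x = rng.binomial(1, 0.4, size=N)
p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x)))
y = rng.binomial(1, p)

# Case-cohort sample: a random subcohort plus all cases.
frac = 0.1
subcohort = rng.random(N) < frac
selected = subcohort | (y == 1)

# Inverse-probability weights: cases weight 1, non-case subcohort members 1/frac.
w = np.where(y[selected] == 1, 1.0, 1.0 / frac)
xs, ys = x[selected], y[selected]

def neg_loglik(beta):
    eta = beta[0] + beta[1] * xs
    # Weighted Bernoulli log-likelihood in a numerically stable form.
    return -np.sum(w * (ys * eta - np.logaddexp(0.0, eta)))

fit = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
print("estimated (beta0, beta1):", fit.x)   # should lie near (-3, 1)
```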
Clearly, logistic regression is sufficient when the binary outcome is available for all subjects and refers to a fixed time interval. Nevertheless, in practice, observations in clinical trials are frequently collected over different time periods, and subjects may drop out or relapse from other causes during follow-up. Hence, logistic regression is not appropriate for incomplete follow-up data; for example, when an individual drops out of the study before the end of data collection or has not experienced the event of interest by the end of the study. Such observations are called censored observations. Survival analysis is necessary to handle these problems; moreover, the time to the occurrence of the event of interest is taken into account. The Cox model, which can effectively handle censored data, has been widely used in survival analysis. Cox (1972) proposed a model focused on the hazard function. The Cox model is assumed to be
λ(t | X) = λ₀(t) exp(βᵀX),
where λ₀(t) is an unspecified baseline hazard at time t, X is the vector of covariates, and β is a p-dimensional vector of coefficients.
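For context (this is the standard textbook expression, not one stated in the abstract), estimation of β in this model typically maximizes Cox's partial likelihood; assuming no tied event times it reads

L(β) = ∏_{i : δᵢ = 1} exp(βᵀxᵢ) / Σ_{j ∈ R(tᵢ)} exp(βᵀxⱼ),

where δᵢ indicates an observed event, tᵢ is the event time and R(tᵢ) is the risk set of subjects still under observation just before tᵢ; the maximum partial likelihood estimator (MPLE) referred to below maximizes this expression.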
In this thesis, the Cox model is considered from the viewpoint of experimental design. The estimability of the parameter β₀ in the Cox model, where β₀ denotes the true value of β, and the choice of optimal covariates are investigated. We give new representations of the observed information matrix Iₙ(β) and extend results of Andersen and Gill (1982) for the Cox model. In this way, conditions for the estimability of β₀ are formulated. Under some regularity conditions, Σ is the inverse of the asymptotic variance matrix of the MPLE of β₀ in the Cox model, and some properties of this asymptotic variance matrix are highlighted. Based on the results on asymptotic estimability, the calculation of locally optimal covariates is considered and illustrated in examples. In a sensitivity analysis, the efficiency of given covariates is calculated; the efficiencies are then determined for neighborhoods of the exponential models. It appears that, for fixed parameters β₀, the efficiencies do not change very much for different baseline hazard functions. Some proposals for applicable optimal covariates and a calculation procedure for finding optimal covariates are discussed.
Furthermore, an extension of the Cox model in which time-dependent coefficients are allowed is investigated. In this situation, the maximum local partial likelihood estimator for estimating the coefficient function β(·) is described. Based on this estimator, we formulate a new test procedure for testing whether a one-dimensional coefficient function β(·) has a prespecified parametric form, say β(·; ϑ). The score function derived from the local constant partial likelihood function at d distinct grid points is considered. It is shown that the distribution of the properly standardized quadratic form of this d-dimensional vector under the null hypothesis tends to a chi-squared distribution. Moreover, the limit statement remains true when the unknown ϑ₀ is replaced by the MPLE in the hypothetical model, and an asymptotic α-test is given by the quantiles or p-values of the limiting chi-squared distribution. Finally, we propose a bootstrap version of this test. The bootstrap test is only defined for the special case of testing whether the coefficient function is constant. A simulation study illustrates the behavior of the bootstrap test under the null hypothesis and a particular alternative, and gives quite good results for the chosen underlying model.
References
P. K. Andersen and R. D. Gill. Cox's regression model for counting processes: a large sample study. Ann. Statist., 10(4):1100-1120, 1982.
D. R. Cox. Regression models and life-tables. J. Roy. Statist. Soc. Ser. B, 34:187-220, 1972.
R. L. Prentice. A case-cohort design for epidemiologic cohort studies and disease prevention trials. Biometrika, 73(1):1-11, 1986.
The LEA (late embryogenesis abundant) proteins COR15A and COR15B from Arabidopsis thaliana are intrinsically disordered under fully hydrated conditions, but obtain α-helical structure during dehydration, which is reversible upon rehydration. To understand this unusual structural transition, both proteins were investigated by circular dichroism (CD) and molecular dynamics (MD) approaches. MD simulations showed unfolding of the proteins in water, in agreement with CD data obtained with both HIS-tagged and untagged recombinant proteins. Mainly intramolecular hydrogen bonds (H-bonds) formed by the protein backbone were replaced by H-bonds with water molecules. As COR15 proteins function in vivo as protectants in leaves partially dehydrated by freezing, unfolding was further assessed under crowded conditions. Glycerol reduced (40%) or prevented (100%) unfolding during MD simulations, in agreement with CD spectroscopy results. H-bonding analysis indicated that preferential exclusion of glycerol from the protein backbone increased stability of the folded state.
Foam fractionation of surfactant and protein solutions is a process dedicated to separating surface-active molecules from each other on the basis of their differences in surface activity. The process is based on forming bubbles in a given mixed solution, followed by the detachment and rise of these bubbles through a certain volume of the solution and, consequently, the formation of a foam layer on top of the solution column. A systematic analysis of this whole process therefore comprises, first, investigations dedicated to the formation and growth of single bubbles in solution, which corresponds to the main principles of the well-known bubble pressure tensiometry. The second stage of the fractionation process includes the detachment of a single bubble from a pore or capillary tip and its rise through the respective aqueous solution. The third and final stage is the formation and stabilization of the foam created by these bubbles, which contains the adsorption layers formed at the growing bubble surface; these layers are carried upwards, become modified during the bubble's rise, and finally end up as part of the foam layer.
Bubble pressure tensiometry and bubble profile analysis tensiometry experiments were performed with protein solutions at different bulk concentrations, solution pH and ionic strength in order to describe the process of accumulation of protein and surfactant molecules at the bubble surface. The results obtained from the two complementary methods allow understanding the mechanism of adsorption, which is mainly governed by the diffusional transport of the adsorbing protein molecules to the bubble surface. This mechanism is the same as generally discussed for surfactant molecules. However, interesting peculiarities have been observed for protein adsorption kinetics at sufficiently short adsorption times. First of all, at short adsorption times the surface tension remains constant for a while before it decreases as expected due to the adsorption of proteins at the surface. This time interval is called induction time and it becomes shorter with increasing protein bulk concentration. Moreover, under special conditions, the surface tension does not stay constant but even increases over a certain period of time. This so-called negative surface pressure was observed for BCS and BLG and discussed for the first time in terms of changes in the surface conformation of the adsorbing protein molecules. Usually, a negative surface pressure would correspond to a negative adsorption, which is of course impossible for the studied protein solutions. The phenomenon, which amounts to some mN/m, was rather explained by simultaneous changes in the molar area required by the adsorbed proteins and the non-ideality of entropy of the interfacial layer. It is a transient phenomenon and exists only under dynamic conditions.
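For orientation only (this relation is not quoted in the text, but it is the standard short-time limit of the Ward-Tordai description of purely diffusion-controlled adsorption; c₀ denotes the bulk concentration and D the diffusion coefficient of the adsorbing species), the initial surface coverage grows as

Γ(t) = 2 c₀ (D t / π)^(1/2),

which is consistent with the observation above that the induction time shortens as the protein bulk concentration increases.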
The experiments dedicated to the local velocity of rising air bubbles in solutions were performed in a broad range of BLG concentration, pH and ionic strength. Additionally, rising bubble experiments were done for surfactant solutions in order to validate the functionality of the instrument. It turns out that the velocity of a rising bubble is much more sensitive to adsorbing molecules than classical dynamic surface tension measurements. At very low BLG or surfactant concentrations, for example, the measured local velocity profile of an air bubble is changing dramatically in time scales of seconds while dynamic surface tensions still do not show any measurable changes at this time scale. The solution’s pH and ionic strength are important parameters that govern the measured rising velocity for protein solutions. A general theoretical description of rising bubbles in surfactant and protein solutions is not available at present due to the complex situation of the adsorption process at a bubble surface in a liquid flow field with simultaneous Marangoni effects. However, instead of modelling the complete velocity profile, new theoretical work has been started to evaluate the maximum values in the profile as characteristic parameter for dynamic adsorption layers at the bubble surface more quantitatively.
The studies with protein-surfactant mixtures demonstrate in an impressive way that the complexes formed by the two compounds change the surface activity as compared to the original native protein molecules and therefore lead to a completely different retardation behavior of rising bubbles. Changes in the velocity profile can be interpreted qualitatively in terms of increased or decreased surface activity of the formed protein-surfactant complexes. It was also observed that the pH and ionic strength of a protein solution have strong effects on the surface activity of the protein molecules, which however, could be different on the rising bubble velocity and the equilibrium adsorption isotherms. These differences are not fully understood yet but give rise to discussions about the structure of protein adsorption layer under dynamic conditions or in the equilibrium state.
The third main stage of the discussed process of fractionation is the formation and characterization of protein foams from BLG solutions at different pH and ionic strength. Of course a minimum BLG concentration is required to form foams. This minimum protein concentration is a function again of solution pH and ionic strength, i.e. of the surface activity of the protein molecules. Although at the isoelectric point, at about pH 5 for BLG, the hydrophobicity and hence the surface activity should be the highest, the concentration and ionic strength effects on the rising velocity profile as well as on the foamability and foam stability do not show a maximum. This is another remarkable argument for the fact that the interfacial structure and behavior of BLG layers under dynamic conditions and at equilibrium are rather different. These differences are probably caused by the time required for BLG molecules to adapt respective conformations once they are adsorbed at the surface.
All bubble studies described in this work refer to stages of the foam fractionation process. Experiments with different systems, mainly surfactant and protein solutions, were performed in order to form foams and finally recover a solution representing the foamed material. As foam consists to a large extent of foam lamella – two adsorption layers with a liquid core – the concentration in a foamate taken from foaming experiments should be enriched in the stabilizing molecules. For determining the concentration of the foamate, again the very sensitive bubble rising velocity profile method was applied, which works for any type of surface active materials. This also includes technical surfactants or protein isolates for which an accurate composition is unknown.
Molecular paleoclimate reconstructions over the last 9 ka from a peat sequence in South China
(2016)
To achieve a better understanding of Holocene climate change in the monsoon regions of China, we investigated the molecular distributions and the carbon and hydrogen isotope compositions (delta C-13 and delta D values) of long-chain n-alkanes in a peat core from the Shiwangutian (SWGT) peatland, south China, over the last 9 ka. By comparison with other climate records, we found that the delta C-13 values of the long-chain n-alkanes can be a proxy for humidity, while the delta D values of the long-chain n-alkanes primarily recorded the moisture source delta D signal during 9-1.8 ka BP and responded to the dry climate during 1.8-0.3 ka BP. Together with the average chain length (ACL) and carbon preference index (CPI) data, the climate evolution over the last 9 ka in the SWGT peatland can be divided into three stages. During the first stage (9-5 ka BP), the delta C-13 values were depleted and CPI and Paq values were low, while ACL values were high. They reveal a period of warm and wet climate, which is regarded as the Holocene optimum. The second stage (5-1.8 ka BP) witnessed a shift to a relatively cool and dry climate, as indicated by the more positive delta C-13 values and lower ACL values. During the third stage (1.8-0.3 ka BP), the delta C-13, delta D, CPI and Paq values showed a marked increase and ACL values varied greatly, implying an abrupt change to cold and dry conditions. This climate pattern corresponds to the broad decline in Asian monsoon intensity through the latter part of the Holocene. Our results do not support a later Holocene optimum in south China as suggested by previous studies.
The speciation of 2-Mercaptopyridine in aqueous solution has been investigated with nitrogen 1s Near Edge X-ray Absorption Fine Structure spectroscopy and time-dependent Density Functional Theory. The prevalence of distinct species as a function of the solvent basicity is established. No indications of dimerization towards high concentrations are found. The determination of different molecular structures of 2-Mercaptopyridine in aqueous solution is put into the context of proton transfer in keto-enol and thione-thiol tautomerisms.
Molekulare Charakterisierung des Centrosom-assoziierten Proteins CP91 in Dictyostelium discoideum
(2016)
The Dictyostelium centrosome is a model for acentriolar centrosomes. It consists of a three-layered core structure and is surrounded by a corona containing microtubule nucleation complexes. Duplication of the core structure is initiated once per cell cycle at the G2/M transition. A proteomic analysis of isolated centrosomes identified CP91, a 91 kDa coiled-coil protein that localizes to the centrosomal core structure. GFP-CP91 showed almost no mobility in FRAP experiments during interphase, indicating that CP91 is a structural component of the centrosome. In mitosis, by contrast, both GFP-CP91 and endogenous CP91 dissociate and are absent from the spindle poles from late prophase until anaphase. This behavior correlates with the disappearance of the central layer of the core structure at the onset of centrosome duplication; CP91 is therefore most likely a component of this layer. CP91 fragments of the N-terminal and C-terminal domains (GFP-CP91 N-terminus, GFP-CP91 C-terminus), expressed as GFP fusion proteins, also localize to the centrosome but do not show the same mitotic distribution as the full-length protein. The CP91 fragment of the central coiled-coil domain (GFP-CP91cc), expressed as a GFP fusion protein, localizes as a diffuse cytosolic cluster close to the centrosome and shows a mitotic distribution partially similar to that of the full-length protein, suggesting a regulatory domain within the coiled-coil domain. Expression of the GFP fusion proteins suppresses expression of endogenous CP91 and gives rise to supernumerary centrosomes, which was also a striking feature after knockdown of CP91 by RNAi. In addition, CP91-RNAi cells showed strongly increased ploidy, caused by severe defects in chromosome segregation, together with increased cell size and defects in the abscission process during cytokinesis. Knockdown of CP91 by RNAi also had a direct effect on the amounts of the centrosomal proteins CP39, CP55 and CEP192 and of the centromere protein Cenp68 in interphase. The results indicate that CP91 is a central centrosomal core component required for the cohesion of the two outer layers of the core structure. Moreover, CP91 plays an important role in proper centrosome biogenesis and, independently of this, in the abscission of daughter cells during cytokinesis.
Molekulare Charakterisierung von CP75, einem neuen centrosomalen Protein in Dictyostelium discoideum
(2016)
The centrosome is a nucleus-associated organelle that is not enclosed by a membrane. It plays an important role in many microtubule-dependent processes such as organelle positioning, cell polarity and the organization of the mitotic spindle. The Dictyostelium centrosome consists of a three-layered core structure surrounded by a corona containing microtubule-nucleating complexes. Centrosome duplication in Dictyostelium takes place at the onset of mitosis. In prophase, the layered core structure enlarges and the corona dissolves. The two outer layers of the core structure then separate and form the two spindle poles in metaphase, which mature into two complete centrosomes in telophase. The protein CP75, identified by a proteomic analysis, localizes to the centrosome in a mitosis-phase-dependent manner. It dissociates from the core structure in prometaphase and reappears at the spindle poles in telophase. This behavior correlates with that of the central layer of the core structure in mitosis, indicating that CP75 could be a component of this layer. FRAP experiments at the interphase centrosome show that GFP-CP75 is not mobile there, suggesting that the protein may perform important functions in maintaining the structure of the centrosomal core. Both the C- and the N-terminal domains of CP75 contain centrosomal targeting domains. Expressed as GFP fusion proteins (GFP-CP75-N and -C), both fragments localize to the centrosome in interphase. While GFP-CP75-C remains at the centrosome in mitosis, GFP-CP75-N disappears in metaphase and only returns in late telophase. GFP-CP75-C and overexpressed GFP-CP75 colocalize with F-actin at the cell cortex but show no interaction with actin in the BioID assay. The N-terminal domain of CP75 contains a potential Plk1 phosphorylation site. Overexpression of the non-phosphorylatable point mutant (GFP-CP75-Plk-S143A) causes various phenotypes such as elongated or supernumerary centrosomes, enlarged nuclei and accumulation of detyrosinated microtubules; similar phenotypes were also observed with GFP-CP75-N and CP75 RNAi. The detyrosinated-microtubule phenotype provides the first evidence that post-translational modification of tubulins takes place in Dictyostelium. In addition, CP75-RNAi cells showed defects in the organization of the mitotic spindle. Using the BioID method, three potential interaction partners of CP75 were identified; these three proteins, CP39, CP91 and Cep192, are also components of the centrosome.
The high energy intake from fats is a major factor in the development of obesity, which has led to worldwide efforts to reduce fat intake. Despite ongoing improvement, however, fat-reduced foods do not reach the palatability of the originals. The traditional view that the attractiveness of fats is determined solely by texture, odor, appearance and post-ingestive effects is now being complemented by the concept of gustatory perception. In rodents it has been shown that lipids are detected independently of the aforementioned properties, that fatty acids released by lingual lipases act as gustatory stimuli, and that fatty acid sensors are expressed in taste cells. The data for humans, however, proved to be very limited; the aim of the present work was therefore to investigate the molecular and histological prerequisites for gustatory fat perception in humans.
First, human taste tissue was examined for the expression of fatty acid sensors by RT-PCR and immunohistochemical methods, and expressing cells were characterized and quantified in co-staining experiments. Expression of fatty-acid-sensitive receptors whose agonists cover the entire spectrum of short- to long-chain fatty acids (GPR43, GPR84, GPR120, CD36, KCNA5) was demonstrated. Unambiguous evidence for the protein was obtained for GPR120, a receptor specialized for long-chain fatty acids, in type I and type III taste cells of the circumvallate papillae. About 85% of these GPR120-expressing cells contained none of the selected receptors of the taste qualities sweet (TAS1R2/3), umami (TAS1R1/3) or bitter (TAS2R38). Human taste papillae therefore contain not only at least one sensor but possibly also a specific, fatty-acid-sensitive cell population. Further RT-PCR experiments and in situ hybridization were carried out to clarify whether lipases exist in the von Ebner glands (VEG) that can release free fatty acids from triglycerides as gustatory stimuli. The lipase F (LIPF) found in rodents was not expressed, but the closely related lipases K, M and N were detected in the serous cells of the VEG. In silico analyses of the secondary and tertiary structures showed high similarity to LIPF, but also revealed differences in the binding pockets of the enzymes, pointing to a different substrate spectrum. The presence of a specific signal peptide makes secretion of the lipases into the saliva bathing the taste pores likely, and thus also the provision of fatty acids as stimuli for fatty acid sensors. The transmission of the signal elicited by these stimuli from taste cells to gustatory nerve fibers via P2X receptor multimers was investigated in short-term preference tests in the mouse as a model organism, after prior intervention with a P2X3/P2X2/3-specific antagonist. Perception of neither a fatty acid solution nor a sugar-containing control solution was impaired, whereas perception of a bitter solution was reduced. Based on the results of this work, involvement of the P2X3 homomer or the P2X2/3 heteromer is therefore unlikely, although involvement of the P2X2 homomer, and thus of the gustatory nerve fibers, cannot be excluded.
The results of this work point to the fulfillment of basic prerequisites for gustatory fat (fatty acid) perception and contribute to the understanding of sensory fat perception and the regulation of fat intake. Knowledge of the regulation of these mechanisms provides a basis for elucidating the causes of, and thus combating, obesity and associated diseases.
More effort — more results
(2016)
The development of 'omics' technologies has progressed to address complex biological questions that underlie various plant functions thereby producing copious amounts of data. The need to assimilate large amounts of data into biologically meaningful interpretations has necessitated the development of statistical methods to integrate multidimensional information. Throughout this review, we provide examples of recent outcomes of 'omics' data integration together with an overview of available statistical methods and tools.
Background: Low back pain (LBP) is one of the leading causes of limited activity and disability worldwide. Impaired motor control has been found to be one of the possible factors related to the development or persistence of LBP. In particular, motor control strategies seem to be altered in situations requiring reactive trunk responses to counteract sudden external forces. However, muscular responses have mostly been assessed in (quasi-)static testing situations under simplified laboratory conditions, and comprehensive investigations of motor control strategies during dynamic everyday situations are lacking. The present research project aimed to investigate muscular compensation strategies following unexpected gait perturbations in people with and without LBP. A novel treadmill stumbling protocol was tested for its validity and reliability in provoking muscular reflex responses at the trunk and the lower extremities (study 1). Thereafter, motor control strategies in response to sudden perturbations were compared between people with LBP and asymptomatic controls (CTRL) (study 2). In accordance with more recent concepts of motor adaptation to pain, it was hypothesized that pain may have profound consequences for motor control strategies in LBP. It was therefore investigated whether differences in compensation strategies consisted of changes local to the painful area at the trunk, or were also present in remote areas such as the lower extremities.
Methods: All investigations were performed on a custom-built split-belt treadmill simulating trip-like events by unexpected rapid deceleration impulses (amplitude: 2 m/s; duration: 100 ms; 200 ms after heel contact) at a baseline velocity of 1 m/s. A total of 5 (study 1) and 15 (study 2) right-sided perturbations were applied during the walking trials. Muscular activities were assessed by surface electromyography (EMG), recorded at 12 trunk muscles and 10 (study 1) or 5 (study 2) leg muscles. EMG onset latencies [ms] were retrieved by a semi-automatic detection method. EMG amplitudes (root mean square, RMS) were assessed within 200 ms post perturbation and normalized to full strides prior to any perturbation [RMS%]. Latency and amplitude analyses were performed for each muscle individually, as well as for pooled data of muscles grouped by location. Characteristic pain intensity scores (CPIS; 0-100 points, von Korff), based on mean intensity ratings reported for current, worst and average pain over the last three months, were used to allocate participants to LBP (≥30 points) or CTRL (≤10 points). Test-retest reproducibility between measurements was determined by a compilation of measures of reliability. Differences in muscular activities between LBP and CTRL were analysed descriptively for individual muscles; differences based on grouped muscles were tested statistically using a multivariate analysis of variance (MANOVA, α = 0.05).
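A minimal sketch of the amplitude normalization described above (the variable names, the assumed 1 kHz sampling rate and the windowing are illustrative, not the study's actual processing pipeline):

```python
import numpy as np

FS = 1000  # assumed sampling rate in Hz

def rms(signal):
    """Root mean square of an EMG segment."""
    return np.sqrt(np.mean(np.square(signal)))

def rms_percent(emg, perturbation_idx, stride_segments, window_ms=200):
    """EMG amplitude in the 200 ms post-perturbation window, expressed as a
    percentage of the mean RMS of unperturbed full strides (hypothetical
    'stride_segments' is a list of per-stride EMG arrays)."""
    window = emg[perturbation_idx:perturbation_idx + int(window_ms * FS / 1000)]
    baseline = np.mean([rms(s) for s in stride_segments])
    return 100.0 * rms(window) / baseline
```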
Results: Thirteen individuals were included in the analysis of study 1. EMG latencies revealed reflex muscle activities following the perturbation (mean: 89 ms). The corresponding EMG amplitudes were on average five times those assessed in unperturbed strides, though characterized by high inter-subject variability. Test-retest reliability of muscle latencies showed high reproducibility, both for trunk and leg muscles. In contrast, the reproducibility of amplitudes was only weak to moderate for individual muscles, but increased when assessed as a location-specific summary of grouped muscles. Seventy-six individuals were eligible for data analysis in study 2. Group allocation according to CPIS resulted in n=25 for LBP and n=29 for CTRL. Descriptive analysis of activity onsets revealed longer delays for all muscles in LBP compared to CTRL (trunk muscles: mean 10 ms; leg muscles: mean 3 ms). Onset latencies of grouped muscles revealed statistically significant differences between LBP and CTRL for the right (p=0.009) and left (p=0.007) abdominal muscle groups. EMG amplitude analysis showed high variability in activation levels between individuals, independent of group assignment or location. Statistical testing of grouped muscles indicated no significant difference in amplitudes between LBP and CTRL.
Discussion: The present research project showed that perturbed treadmill walking is suitable for provoking comprehensive reflex responses at the trunk and lower extremities, both in terms of onsets and amplitudes of reflex activity. Moreover, it demonstrated that sudden loading under dynamic conditions provokes altered reflex timing of the muscles surrounding the trunk in people with LBP compared to CTRL. In line with previous investigations, compensation strategies seemed to be deployed in a task-specific manner, with differences between LBP and CTRL evident predominantly on the ventral side. No muscular alterations beyond the trunk were found when assessed during the automated task of locomotion. While rehabilitation programs tailored to LBP are still under debate, it is tempting to urge the inclusion of dynamic sudden trunk loading incidents to enhance motor control and thereby improve spinal protection. Moreover, with respect to the consistently observed task specificity of muscular compensation strategies, such a rehabilitation program should be rich in variety.
Mujeres de apocalipsis
(2016)
Mujeres del Apocalipsis proposes new gender readings of the pious women who inhabited New Spain in the eighteenth century. The study is based on a corpus drawn from the archives of their Inquisition trials, many of them still unpublished. These women gained freedom and attained a partial autonomy in the colonial world. Reading these records reveals the tactical strategies through which the beatas negotiated a new way of being a woman in that era.
Different systems for habitual versus goal-directed control are thought to underlie human decision-making. Working memory is known to shape these decision-making systems and their interplay, and is known to support goal-directed decision making even under stress. Here, we investigated if and how decision systems are differentially influenced by breaks filled with diverse everyday life activities known to modulate working memory performance. We used a within-subject design where young adults listened to music and played a video game during breaks interleaved with trials of a sequential two-step Markov decision task, designed to assess habitual as well as goal-directed decision making. Based on a neurocomputational model of task performance, we observed that for individuals with a rather limited working memory capacity video gaming as compared to music reduced reliance on the goal-directed decision-making system, while a rather large working memory capacity prevented such a decline. Our findings suggest differential effects of everyday activities on key decision-making processes.
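As a rough illustration of the kind of hybrid valuation such neurocomputational models use (a generic sketch in the spirit of hybrid model-based/model-free accounts of the two-step task, not the specific model fitted in this study; the weighting parameter w and the inverse temperature are free parameters we introduce here):

```python
import numpy as np

def softmax(values, inv_temp):
    """Choice probabilities from action values."""
    z = inv_temp * (values - np.max(values))
    e = np.exp(z)
    return e / e.sum()

def first_stage_choice_probs(q_mf, q_mb, w, inv_temp):
    """Blend model-free and model-based values for the two first-stage actions;
    w = 1 means purely goal-directed (model-based), w = 0 purely habitual."""
    q_net = w * np.asarray(q_mb) + (1.0 - w) * np.asarray(q_mf)
    return softmax(q_net, inv_temp)

# Example: model-based values favor action 1, model-free values favor action 0.
print(first_stage_choice_probs(q_mf=[0.6, 0.4], q_mb=[0.3, 0.7], w=0.8, inv_temp=3.0))
```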
Möglichkeiten der Mittelstandsförderung durch Vergaberechtsgestaltung und Vergaberechtspraxis
(2016)
The worthiness and eligibility of small and medium-sized enterprises (SMEs) for support is a pan-European economic-policy concern. This is evidenced, on the one hand, by numerous provisions in primary, secondary, constitutional and ordinary law and, on the other hand, by the importance of SMEs in the economic, societal and social fabric. Within the European Union not only does the slogan "Vorfahrt für KMU" ("priority for SMEs") prevail, but the procurement directives adopted in spring 2014 also paid particular attention to improving SMEs' access to the public procurement market. For measured against the steering and guidance potential of public procurement, its influence on the innovative activity of the economy and its effects on economic activity and competition on the one hand, and the overall economic importance of SMEs on the other, SMEs remain underrepresented in procurement procedures despite numerous European and national initiatives. In addition to the opaque regulatory structure of German procurement law, SMEs face particular difficulties from the beginning to the end of the procurement procedure. This initial finding was taken as an occasion to put the possibilities of supporting SMEs through the design and practice of procurement law to the test once again.
Reviewed work:
Naphtali Herz Wessely: Worte des Friedens und der Wahrheit. Dokumente einer Kontroverse über Erziehung in der europäischen Spätaufklärung. Edited, introduced and annotated by Ingrid Lohmann, co-edited by Rainer Wenzel / Uta Lohmann. Translated from the Hebrew and annotated by Rainer Wenzel. Jüdische Bildungsgeschichte in Deutschland, vol. 8, Münster: Waxmann 2014. 800 pp.
School shooters are often described as narcissistic, but empirical evidence is scant. To provide more reliable and detailed information, we conducted an exploratory study, analyzing police investigation files on seven school shootings in Germany, looking for symptoms of narcissistic personality disorder as defined by the Diagnostic and Statistical Manual of Mental Disorders (4th ed.; DSM-IV) in witnesses' and offenders' reports and expert psychological evaluations. Three out of four offenders who had been treated for mental disorders prior to the offenses displayed detached symptoms of narcissism, but none was diagnosed with narcissistic personality disorder. Of the other three, two displayed narcissistic traits. In one case, the number of symptoms would have justified a diagnosis of narcissistic personality disorder. Offenders showed low and high self-esteem and a range of other mental disorders. Thus, narcissism is not a common characteristic of school shooters, but possibly more frequent than in the general population. This should be considered in developing adequate preventive and intervention measures.
Background
Doping presents a potential health risk for young athletes. Prevention programs are intended to prevent doping by educating athletes about banned substances. However, such programs have their limitations in practice. This led Germany to introduce the National Doping Prevention Plan (NDPP), in hopes of ameliorating the situation among young elite athletes. Two studies examined 1) the degree to which the NDPP led to improved prevention efforts in elite sport schools, and 2) the extent to which newly developed prevention activities of the national anti-doping agency (NADA) based on the NDPP have improved knowledge among young athletes within elite sports schools.
Methods
The first objective was investigated in a longitudinal study (Study I: t0 = baseline, t1 = follow-up 4 years after NDPP introduction) with N = 22 teachers engaged in doping prevention in elite sports schools. The second objective was evaluated in a cross-sectional comparison study (Study II) in N = 213 elite sports school students (54.5 % male, 45.5 % female, age M = 16.7 ± 1.3 years); all students had received the improved NDPP measures in school, one student group had additionally received NADA anti-doping activities, and a control group had not. Descriptive statistics were calculated, followed by McNemar tests, Wilcoxon tests, and analysis of covariance (ANCOVA).
Results
Results indicate that 4 years after the introduction of the NDPP there have been limited structural changes with regard to the frequency, type, and scope of doping prevention in elite sport schools. In Study II, on the other hand, elite sport school students who received further NADA anti-doping activities performed better on an anti-doping knowledge test than students who did not take part (F(1, 207) = 33.99, p < 0.001), although this difference was small.
Conclusion
The integration of doping prevention in elite sport schools as part of the NDPP was only partially successful. The results of the evaluation indicate that the introduction of the NDPP has contributed more to a change in the content of doping prevention activities than to a structural transformation of anti-doping education in elite sport schools. Moreover, while students who received additional education in the form of the NDPP "booster sessions" had significantly more knowledge about doping than students who did not receive such education, this difference was only small and may not translate into actual behavior.
Naturaleza y cultura
(2016)
This paper revolves around the inextricable link between nature and culture and the 'non-naturalness' of the former, a product of millennia of human intervention subsumed under the term 'Anthropocene'. The French philosophers Bruno Latour and Philippe Descola, albeit by different paths, highlighted the importance of this nexus for securing human survival: Bruno Latour centers his reflections on the politics of nature, while Philippe Descola emphasizes the ecological character of nature and culture. Both, however, leave aside the literatures of the world and their capacity to treasure the diverse designs of knowing how to live together with nature and the notions of sustainability. Also notable is the inspiration Descola finds in the figure of the great scholar Alexander von Humboldt, who in the nineteenth century already attested to the inextricable relationship between nature and culture in countless testimonies, among them the Chimborazo, which, as a global picture, is representative for understanding that nature has always been culture and that culture is unimaginable without nature.
The collision of bathymetric anomalies, such as oceanic spreading centers, at convergent plate margins can profoundly affect subduction dynamics, magmatism, and the structural and geomorphic evolution of the overriding plate. The Southern Patagonian Andes of South America are a prime example for sustained oceanic ridge collision and the successive formation and widening of an extensive asthenospheric slab window since the Middle Miocene. Several of the predicted upper-plate geologic manifestations of such deep-seated geodynamic processes have been studied in this region, but many topics remain highly debated. One of the main controversial topics is the interpretation of the regional low-temperature thermochronology exhumational record and its relationship with tectonic and/or climate-driven processes, ultimately manifested and recorded in the landscape evolution of the Patagonian Andes. The prominent along-strike variance in the topographic characteristics of the Andes, combined with coupled trends in low-temperature thermochronometer cooling ages have been interpreted in very contrasting ways, considering either purely climatic (i.e. glacial erosion) or geodynamic (slab-window related) controlling factors.
This thesis focuses on two main aspects of these controversial topics. First, based on field observations and bedrock low-temperature thermochronology data, the thesis addresses an existing research gap with respect to the neotectonic activity of the upper plate in response to ridge collision - a mechanism that has been shown to affect the upper plate topography and exhumational patterns in similar tectonic settings. Secondly, the qualitative interpretation of my new and existing thermochronological data from this region is extended by inverse thermal modelling to define thermal histories recorded in the data and evaluate the relative importance of surface vs. geodynamic factors and their possible relationship with the regional cooling record.
My research is centered on the Northern Patagonian Icefield (NPI) region of the Southern Patagonian Andes. This site is located inboard of the present-day location of the Chile Triple Junction - the juncture between the colliding Chile Rise spreading center and the Nazca and Antarctic Plates along the South American convergent margin. As such this study area represents the region of most recent oceanic-ridge collision and associated slab window formation. Importantly, this location also coincides with the abrupt rise in summit elevations and relief characteristics in the Southern Patagonian Andes. Field observations, based on geological, structural and geomorphic mapping, are combined with bedrock apatite (U-Th)/He and apatite fission track (AHe and AFT) cooling ages sampled along elevation transects across the orogen. This new data reveals the existence of hitherto unrecognized neotectonic deformation along the flanks of the range capped by the NPI.
This deformation is associated with the closely spaced oblique collision of successive oceanic-ridge segments in this region over the past 6 Ma. I interpret that this has caused a crustal-scale partitioning of deformation and the decoupling, margin-parallel migration, and localized uplift of a large crustal sliver (the NPI block) along the subduction margin. The location of this uplift coincides with a major increase of summit elevations and relief at the northern edge of the NPI massif. This mechanism is compatible with possible extensional processes along the topographically subdued trailing edge of the NPI block as documented by very recent and possibly still active normal faulting. Taken together, these findings suggest a major structural control on short-wavelength variations in topography in the Southern Patagonian Andes - the region affected by ridge collision and slab window formation.
The second research topic addressed here focuses on using my new and existing bedrock low-temperature cooling ages in forward and inverse thermal modeling. The data were implemented in the HeFTy and QTQt modeling platforms to constrain the late Cenozoic thermal history of the Southern Patagonian Andes in the region of the most recent upper-plate sectors of ridge collision. The data set combines AHe and AFT data from three elevation transects in the region of the Northern Patagonian Icefield. Previous similar studies invoked far-reaching thermal effects of the approaching ridge collision and slab window to explain patterns of Late Miocene reheating in the modelled thermal histories. In contrast, my results show that the currently available data can be explained with a simpler thermal history than previously proposed. Accordingly, a reheating event is not needed to reproduce the observations. Instead, the analyzed ensemble of modelled thermal histories defines protracted Late Miocene cooling and Pliocene-to-recent stepwise exhumation. These findings agree with the geological record of this region. Specifically, this record indicates an Early Miocene phase of active mountain building associated with surface uplift and an active fold-and-thrust belt, followed by a period of stagnating deformation, peneplanation, and lack of synorogenic deposition in the Patagonian foreland. The subsequent period of stepwise exhumation likely resulted from a combination of pulsed glacial erosion and coeval neotectonic activity. The differences between the present and previously published interpretations of the cooling record can be traced back to important inconsistencies in the previously used model setups. These include mainly the insufficient convergence of the models and improper assumptions regarding the geothermal conditions in the region. This analysis puts a methodological emphasis on the prime importance of the model setup and the need for its thorough examination to evaluate the robustness of the final outcome.
Neue Freunde und Geschenke
(2016)
The research underlying this thesis aimed to develop new melt-processable acrylonitrile copolymers. These were subsequently to be shaped into man-made fibres by a melt-spinning process and, in a final step, converted into carbon fibres. To this end, exploratory investigations were first carried out on various acrylonitrile copolymers obtained by solution polymerization. These investigations showed that electrostatic interactions are better suited than steric shielding to induce meltability below the decomposition temperature of polyacrylonitrile. Of the many copolymers investigated, those with methoxyethyl acrylate (MEA) proved to be the most effective. For these copolymers, the copolymerization parameters were determined and the basic kinetics of the solution polymerization were investigated. The copolymers with MEA were formed into fibres by melt spinning and these fibres were then characterized. The influence of various parameters, such as molar mass, on fibre properties and fibre production was also examined. Finally, a heterophase polymerization process for the preparation of AN/MEA copolymers was developed, which further improved the material properties. A suitable process was developed to suppress the thermoplastic properties of the fibres, and the conversion to carbon fibres was then carried out.
Walking while concurrently performing cognitive and/or motor interference tasks is the norm rather than the exception during everyday life and there is evidence from behavioral studies that it negatively affects human locomotion. However, there is hardly any information available regarding the underlying neural correlates of single- and dual-task walking. We had 12 young adults (23.8 ± 2.8 years) walk while concurrently performing a cognitive interference (CI) or a motor interference (MI) task. Simultaneously, neural activation in frontal, central, and parietal brain areas was registered using a mobile EEG system. Results showed that the MI task but not the CI task affected walking performance in terms of significantly decreased gait velocity and stride length and significantly increased stride time and tempo-spatial variability. Average activity in alpha and beta frequencies was significantly modulated during both CI and MI walking conditions in frontal and central brain regions, indicating an increased cognitive load during dual-task walking. Our results suggest that impaired motor performance during dual-task walking is mirrored in neural activation patterns of the brain. This finding is in line with established cognitive theories arguing that dual-task situations overstrain cognitive capabilities resulting in motor performance decrements.
Meter and syntax have overlapping elements in music and speech domains, and individual differences have been documented in both meter perception and syntactic comprehension paradigms. Previous evidence hinted at, but never fully explored, the relationship between metrical structure and syntactic comprehension, the comparability of these processes across music and language domains, and the respective role of individual differences. This dissertation aimed to investigate neurocognitive entrainment to meter in music and language, the impact that neurocognitive entrainment has on syntactic comprehension, and whether individual differences in musical expertise, temporal perception and working memory play a role in these processes.
A theoretical framework was developed, which linked neural entrainment, cognitive entrainment, and syntactic comprehension while detailing previously documented effects of individual differences on meter perception and syntactic comprehension. The framework was developed in both music and language domains and was tested using behavioral and EEG methods across three studies (seven experiments). In order to satisfy empirical evaluation of neurocognitive entrainment and syntactic aspects of the framework, original melodies and sentences were composed. Each item had four permutations: regular and irregular metricality, based on the hierarchical organization of strong and weak notes and syllables, and preferred and non-preferred syntax, based on structurally alternate endings. The framework predicted — for both music and language domains — greater neurocognitive entrainment in regular compared to irregular metricality conditions, and accordingly, better syntactic integration in regular compared to irregular metricality conditions. Individual differences among participants were expected for both entrainment and syntactic processes.
Altogether, the dissertation was able to support a holistic account of neurocognitive entrainment to musical meter and its subsequent influence on syntactic integration of melodies in musician participants. The theoretical predictions were not upheld in the language domain with musician participants, but initial behavioral evidence in combination with previous EEG evidence suggests that non-musician language EEG data might support the framework's predictions. Musicians' deviation from the hypothesized results in the language domain was suspected to reflect a heightened perception of acoustic features stemming from musical training, which caused the current 'overly' regular stimuli to distract the cognitive system. The individual-differences approach was vindicated by the emergence of two factor scores, Verbal Working Memory and Time and Pitch Discrimination, which in turn correlated with multiple experimental data across the three studies.
During the last decade, high-intensity interval training (HIIT) has been used as an alternative to endurance (END) exercise, since it requires less time to produce similar physiological adaptations. Previous literature has focused on HIIT-induced changes in aerobic metabolism and cardiorespiratory fitness; however, there are currently no studies focusing on its neuromuscular adaptations.
Therefore, this thesis aimed to compare the neuromuscular adaptations to HIIT and END after a two-week training intervention, using a novel technology called high-density surface electromyography (HDEMG) motor unit decomposition. This project consisted of two experiments in which healthy young men (aged between 18 and 35 years) were recruited. In experiment one, the reliability of HDEMG motor unit variables (mean discharge rate, peak-to-peak amplitude, conduction velocity and discharge rate variability) was tested (Study 1), a new method to track the same motor units longitudinally was proposed (Study 2), and the level of low (<5 Hz) and high (>5 Hz) frequency motor unit coherence between the vastus medialis (VM) and lateralis (VL) knee extensor muscles was measured (Study 4). In experiment two, a two-week HIIT and END intervention was conducted in which cardiorespiratory fitness parameters (e.g. peak oxygen uptake) and motor unit variables from the VM and VL muscles were assessed before and after the intervention (Study 3).
The results showed that HDEMG is reliable for monitoring changes in motor unit activity and also allows the same motor units to be tracked across different testing sessions. As expected, both HIIT and END improved cardiorespiratory fitness parameters to a similar extent. However, the neuromuscular adaptations to the two types of training differed after the intervention: HIIT produced a significant increase in knee extensor muscle strength that was accompanied by increased VM and VL motor unit discharge rates and HDEMG amplitude at the highest force levels (50 and 70 % of the maximum voluntary contraction force, MVC), whereas END training induced a marked increase in time to task failure at lower force levels (30 % MVC), without any influence on HDEMG amplitude and discharge rates. Additionally, the results showed that the VM and VL muscles share most of their synaptic input, since they present a large amount of low- and high-frequency motor unit coherence, which can explain the findings of the training intervention, where both muscles showed similar changes in HDEMG amplitude and discharge rates.
Taken together, the findings of the current thesis show that despite similar improvements in cardiopulmonary fitness, HIIT and END induced opposite adjustments in motor unit behavior. These results suggest that HIIT and END show specific neuromuscular adaptations, possibly related to their differences in exercise load intensity and training volume.
In this thesis, a route to temperature-, pH-, solvent-, 1,2-diol-, and protein-responsive sensors made of biocompatible and low-fouling materials is established. These sensor devices are based on the sensitive modulation of the visible band gap of a photonic crystal (PhC), which is induced by the selective binding of analytes, triggering a volume phase transition.
The PhCs introduced by this work show a high sensitivity not only for small biomolecules, but also for large analytes, such as glycopolymers or proteins. This enables the PhC to act as a sensor that detects analytes without the need of complex equipment.
Due to their periodic dielectric structure, PhCs prevent the propagation of specific wavelengths. A change of the periodicity parameters is thus indicated by a change in the reflected wavelengths. In the case explored here, the PhC sensors are implemented as periodically structured responsive hydrogels in the form of an inverse opal.
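A standard point of reference here (not stated in the abstract, but commonly used for opal-type photonic crystals) is the Bragg-Snell relation, which makes explicit how a swelling-induced change of the lattice periodicity shifts the reflected stop band; the quantities below are generic placeholders:

```latex
\lambda_{\max} \approx 2\, d_{111} \sqrt{\, n_{\mathrm{eff}}^{2} - \sin^{2}\theta\,},
\qquad
n_{\mathrm{eff}}^{2} = \sum_{i} \phi_{i}\, n_{i}^{2},
```

where d_111 is the spacing of the (111) planes of the (inverse) opal, θ the angle of incidence, and n_eff an effective refractive index built from the volume fractions φ_i and refractive indices n_i of the hydrogel skeleton and the pore-filling water. Analyte-induced swelling increases d_111 and thereby red-shifts the reflected wavelength.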
The stimuli-sensitive inverse opal hydrogels (IOHs) were prepared using a sacrificial opal template of monodisperse silica particles. First, monodisperse silica particles were assembled into a hexagonally packed structure via vertical deposition onto glass slides. The obtained silica crystals, also named colloidal crystals (CCs), exhibit structural color. Subsequently, the CC templates were embedded in a polymer matrix with low-fouling properties. The polymer matrices were composed of oligo(ethylene glycol) methacrylate derivatives (OEGMAs) that render the hydrogels thermoresponsive. Finally, the silica particles were etched away to produce highly porous hydrogel replicas of the CC. Importantly, the inner structure, and thus the ability for light diffraction, of the IOHs formed was maintained.
The IOH membranes were shown to have interconnected pores whose diameters, as well as the interconnections between them, measure several hundred nanometers. This enables not only the detection of small analytes but also of large analytes that can diffuse into the nanostructured IOH membrane. Various recognition unit – analyte model systems, such as benzoboroxole – 1,2-diols, biotin – avidin and mannose – concanavalin A, were studied by incorporating functional comonomers of benzoboroxole, biotin and mannose into the copolymers. The incorporated recognition units specifically bind to certain low and high molar mass biomolecules, namely to certain saccharides, catechols, glycopolymers or proteins.
Their specific binding strongly changes the overall hydrophilicity, thus modulating the swelling of the IOH matrices and, in consequence, drastically changing their internal periodicity. This swelling is amplified by the thermoresponsive properties of the polymer matrix. The shift of the interference band gap due to the specific molecular recognition is easily visible by the naked eye (shifts of up to 150 nm). Moreover, preliminary trials were undertaken to detect even larger entities. To this end, antibodies were immobilized on hydrogel platforms via polymer-analogous esterification. These platforms incorporate comonomers made of tri(ethylene glycol) methacrylate end-functionalized with a carboxylic acid. In these model systems, the bacterial analytes are too big to penetrate into the IOH membranes and can only interact with their surfaces. The selected model bacteria, such as Escherichia coli, show a specific affinity to the antibody-functionalized hydrogels. Surprisingly, the functionalized IOHs produced only weak color shifts in this study; this possibly opens a path to the direct detection of living organisms, which will require further investigation.
In recent years, entire industries and their participants have been affected by disruptive technologies, resulting in dramatic market changes and challenges to firms' business logic and thus their business models (BMs). Firms from mature industries are increasingly realizing that BMs that worked successfully for years have become insufficient to stay on track in today's "move fast and break things" economy. Firms must scrutinize the core logic that informs how they do business, which means exploring novel ways to engage customers and get them to pay. This can lead to a complete renewal of existing BMs or to the innovation of entirely new BMs.
BMs have emerged as a popular object of research within the last decade. Despite the popularity of the BM, the theoretical and empirical foundation underlying the concept is still weak. In particular, the innovation process for BMs has been developed and implemented in firms, but understanding of the mechanisms behind it is still lacking. Business model innovation (BMI) is a complex and challenging management task that requires more than just novel ideas. Systematic studies to generate a better understanding of BMI and support incumbents with appropriate concepts to improve BMI development are in short supply. Further, there is a lack of knowledge about appropriate research practices for studying BMI and generating valid data sets in order to meet expectations in both practice and academia.
This paper-based dissertation aims to contribute to research practice in the field of BM and BMI and foster better understanding of the BM concept and BMI processes in incumbent firms from mature industries. The overall dissertation presents three main results. The first result is a new perspective, or the systems thinking view, on the BM and BMI. With the systems thinking view, the fuzzy BM concept is clearly structured and a BMI framework is proposed. The second result is a new research strategy for studying BMI. After analyzing current research practice in the areas of BMs and BMI, it is obvious that there is a need for better research on BMs and BMI in terms of accuracy, transparency, and practical orientation. Thus, the action case study approach combined with abductive methodology is proposed and proven in the research setting of this thesis. The third result stems from three action case studies in incumbent firms from mature industries employed to study how BMI occurs in practice. The new insights and knowledge gained from the action case studies help to explain BMI in such industries and increase understanding of the core of these processes.
By studying these issues, the articles compiled in this thesis contribute conceptually and empirically to the recently consolidated but still growing literature on the BM and BMI. The conclusions and implications are intended to foster further research and improve managerial practices for achieving BMI in a dramatically changing business environment.
Proteins are natural polypeptides produced by cells; they can be found in both animals and plants, and possess a variety of functions. One of these functions is to provide structural support to the surrounding cells and tissues. For example, collagen (which is found in skin, cartilage, tendons and bones) and keratin (which is found in hair and nails) are structural proteins. When a tissue is damaged, however, the supporting matrix formed by structural proteins cannot always spontaneously regenerate. Tailor-made synthetic polypeptides can be used to help heal and restore tissue formation.
Synthetic polypeptides are typically synthesized by the so-called ring opening polymerization (ROP) of α-amino acid N-carboxyanhydrides (NCA). Such synthetic polypeptides are generally non-sequence-controlled and thus less complex than proteins. As such, synthetic polypeptides are rarely as efficient as proteins in their ability to self-assemble and form hierarchical or structural supramolecular assemblies in water, and thus, often require rational designing. In this doctoral work, two types of amino acids, γ-benzyl-L/D-glutamate (BLG / BDG) and allylglycine (AG), were selected to synthesize a series of (co)polypeptides of different compositions and molar masses.
A new and versatile synthetic route to prepare polypeptides was developed, and its mechanism and kinetics were investigated. The polypeptide properties were thoroughly studied and new materials were developed from them. In particular, these polypeptides were able to aggregate (or self-assemble) in solution into microscopic fibres, very similar to those formed by collagen. By doing so, they formed robust physical networks and organogels which could be processed into high water-content, pH-responsive hydrogels. Particles with highly regular and chiral spiral morphologies were also obtained by emulsifying these polypeptides. Such polypeptides and the materials derived from them are, therefore, promising candidates for biomedical applications.
This article provides a comparative insight into Jewish responsa literature and Muslim fatwa literature and identifies the questions that arise for further study. Both religions have a normative frame of reference (halakha and fiqh) that extends to all areas of life. The classical position of both religions sees in the practice of these norms the most authentic way of coming closer to God's will. According to traditional understanding, religious people require permanent supervision by trustworthy scholars whom they can consult when needed. The large number of questions put to scholars (via the internet, by letter, or by telephone) shows that the demand for expert guidance in the field of religious norms remains unbroken to this day. This article presents a comparative outline of this process of religious legal consultation in Judaism and Islam. Only the most significant aspects can be recorded here and examined for commonalities and differences. The method used is historical analysis, in which fatwa and responsa literature are presented in their classical form and in broad outline, as they appeared from the 7th to the 19th century.
We examined the spontaneous association between numbers and space by documenting attention deployment and the time course of the associated spatial-numerical mapping with and without overt oculomotor responses. In Experiment 1, participants maintained central fixation while listening to number names. In Experiment 2, they made horizontal target-directed saccades following auditory number presentation. In both experiments, we continuously measured spontaneous ocular drift in horizontal space during and after number presentation. Experiment 2 also measured visual-probe-directed saccades following number presentation. Reliable ocular drift congruent with a horizontal mental number line emerged during and after number presentation in both experiments. Our results provide new evidence for the implicit and automatic nature of the oculomotor resonance effect associated with the horizontal spatial-numerical mapping mechanism.
This thesis is focused on the study and the exact simulation of two classes of real-valued Brownian diffusions: multi-skew Brownian motions with constant drift and Brownian diffusions whose drift admits a finite number of jumps.
The skew Brownian motion was introduced in the sixties by Itô and McKean, who constructed it from the reflected Brownian motion, flipping its excursions from the origin with a given probability. Such a process behaves like ordinary Brownian motion except at the point 0, which plays the role of a semipermeable barrier. More generally, a skew diffusion with several semipermeable barriers, called a multi-skew diffusion, behaves like an ordinary diffusion everywhere except when it reaches one of the barriers, where it is partially reflected with a probability depending on that particular barrier. Such a multi-skew diffusion can be characterized either as the solution of a stochastic differential equation involving weighted local times (these terms providing the semi-permeability) or by its infinitesimal generator as a Markov process.
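For orientation, a standard fact not spelled out above: in the simplest case of a single semipermeable barrier at the origin, the skew Brownian motion with skewness parameter α ∈ (0, 1) solves a stochastic differential equation driven by its own symmetric local time at 0 (Harrison-Shepp); the multi-skew case carries one such local-time term per barrier:

```latex
X_{t} = x_{0} + W_{t} + (2\alpha - 1)\, L_{t}^{0}(X),
```

where W is a standard Brownian motion and L^0(X) denotes the symmetric local time of X at 0; the choice α = 1/2 recovers ordinary Brownian motion.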
In this thesis we first obtain a contour integral representation for the transition semigroup of the multi-skew Brownian motion with constant drift, based on a fine analysis of its complex-analytic properties. Thanks to this representation, we write the transition densities of the two-skew Brownian motion with constant drift explicitly as an infinite series involving, in particular, Gaussian functions and their tails.
Then we propose a new and useful application of a generalization of the well-known rejection sampling method. Recall that this basic algorithm allows one to sample from a target density as soon as one finds an easy-to-sample instrumental density such that the ratio between the target and the instrumental density is a bounded function. The generalized rejection sampling method allows one to sample exactly from densities for which only an approximation is known. The originality of the algorithm lies in the fact that one finally samples directly from the target law without any approximation error, except for the machine's floating-point precision.
As an application, we sample from the transition density of the two-skew Brownian motion with or without constant drift. The instrumental density is the transition density of the Brownian motion with constant drift, and we provide a useful uniform bound for the ratio of the densities. We also present numerical simulations to study the efficiency of the algorithm.
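The following sketch illustrates plain rejection sampling, the textbook algorithm on which the generalized method described above builds; the target density, the instrumental density, and the bound M below are simple placeholders and do not reproduce the two-skew Brownian transition density treated in the thesis.

```python
import math
import random

def rejection_sample(target_pdf, instrumental_pdf, instrumental_sampler, M, n):
    """Draw n samples from target_pdf, given an instrumental density with
    target_pdf(x) <= M * instrumental_pdf(x) for all x (plain rejection sampling)."""
    samples = []
    while len(samples) < n:
        x = instrumental_sampler()
        u = random.random()
        # accept x with probability target_pdf(x) / (M * instrumental_pdf(x))
        if u * M * instrumental_pdf(x) <= target_pdf(x):
            samples.append(x)
    return samples

# Toy example: sample a standard normal density using a Cauchy instrumental density.
target = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
cauchy_pdf = lambda x: 1.0 / (math.pi * (1.0 + x * x))
cauchy_sampler = lambda: math.tan(math.pi * (random.random() - 0.5))
M = math.sqrt(2.0 * math.pi / math.e)  # supremum of the normal/Cauchy ratio (about 1.52)

draws = rejection_sample(target, cauchy_pdf, cauchy_sampler, M, 1000)
```

The generalized version described in the thesis replaces the exact evaluation of the target density by successively refined approximations, so that the acceptance decision can still be made exactly.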
The second aim of this thesis is to develop an exact simulation algorithm for a Brownian diffusion whose drift admits several jumps. In the literature, so far only the case of a continuous drift (resp. of a drift with one finite jump) was treated. The theoretical method we give allows to deal with any finite number of discontinuities. Then we focus on the case of two jumps, using the transition densities of the two-skew Brownian motion obtained before. Various examples are presented and the efficiency of our approach is discussed.
Graph queries have lately gained increased interest due to application areas such as social networks, biological networks, or model queries. For the relational database case, the relational algebra and generalized discrimination networks have been studied to find appropriate decompositions into subqueries and orderings of these subqueries for query evaluation or incremental updates of query results. For graph database queries, however, there is as yet no formal underpinning that allows us to find such suitable operationalizations. Consequently, we suggest a simple operational concept for the decomposition of arbitrarily complex queries into simpler subqueries and the ordering of these subqueries in the form of generalized discrimination networks for graph queries, inspired by the relational case. The approach employs graph transformation rules for the nodes of the network, so we can build on the underlying theory. We further show that the proposed generalized discrimination networks have the same expressive power as nested graph conditions.
Isostasy is one of the oldest and most widely applied concepts in the geosciences, but the geoscientific community lacks a coherent, easy-to-use tool to simulate flexure of a realistic (i.e., laterally heterogeneous) lithosphere under an arbitrary set of surface loads. Such a model is needed for studies of mountain building, sedimentary basin formation, glaciation, sea-level change, and other tectonic, geodynamic, and surface processes. Here I present gFlex (for GNU flexure), an open-source model that can produce analytical and finite difference solutions for lithospheric flexure in one (profile) and two (map view) dimensions. To simulate the flexural isostatic response to an imposed load, it can be used by itself or within GRASS GIS for better integration with field data. gFlex is also a component of the Community Surface Dynamics Modeling System (CSDMS) and Landlab modeling frameworks for coupling with a wide range of Earth-surface-related models, and it can be coupled to additional models within Python scripts. As an example of this in-script coupling, I simulate the effects of spatially variable lithospheric thickness on a modeled Iceland ice cap. Finite difference solutions in gFlex can use any of five types of boundary conditions: 0-displacement, 0-slope (i.e., clamped); 0-slope, 0-shear; 0-moment, 0-shear (i.e., broken plate); mirror symmetry; and periodic. Typical calculations with gFlex require << 1 s to ~1 min on a personal laptop computer. These characteristics - multiple ways to run the model, multiple solution methods, multiple boundary conditions, and short compute time - make gFlex an effective tool for flexural isostatic modeling across the geosciences.
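This is not gFlex's actual interface, but a minimal sketch of the kind of computation such a tool performs: a finite-difference solution of the 1-D thin-plate flexure equation D w'''' + (rho_m - rho_fill) g w = q with clamped (0-displacement, 0-slope) ends; all parameter values are illustrative defaults.

```python
import numpy as np

def flex1d(q, dx, Te=25e3, E=65e9, nu=0.25, rho_m=3300.0, rho_fill=0.0, g=9.81):
    """Finite-difference flexure of a uniform elastic plate under a load q [Pa].
    Solves D * d4w/dx4 + (rho_m - rho_fill) * g * w = q with clamped boundaries."""
    n = len(q)
    D = E * Te**3 / (12.0 * (1.0 - nu**2))          # flexural rigidity
    drho_g = (rho_m - rho_fill) * g
    A = np.zeros((n, n))
    for i in range(2, n - 2):                       # interior 5-point stencil for d4/dx4
        A[i, i - 2:i + 3] += (D / dx**4) * np.array([1.0, -4.0, 6.0, -4.0, 1.0])
        A[i, i] += drho_g
    A[0, 0] = A[n - 1, n - 1] = 1.0                 # w = 0 at both ends
    A[1, 0], A[1, 1] = -1.0, 1.0                    # dw/dx = 0 at the left end
    A[n - 2, n - 2], A[n - 2, n - 1] = 1.0, -1.0    # dw/dx = 0 at the right end
    b = q.astype(float)
    b[[0, 1, n - 2, n - 1]] = 0.0
    return np.linalg.solve(A, b)

# Example: a 100 km wide, 1 km thick load of density 2700 kg/m^3 centred on the profile.
x = np.arange(0.0, 1000e3, 5e3)
load = np.where(np.abs(x - 500e3) < 50e3, 2700.0 * 9.81 * 1000.0, 0.0)
deflection = flex1d(load, dx=5e3)
```

gFlex itself additionally handles laterally variable elastic thickness, map-view (2-D) solutions, and the full set of boundary conditions listed above.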
We study the interplay between analysis on manifolds with singularities and complex analysis and develop new structures of operators based on the Mellin transform and tools for iterating the calculus for higher singularities. We refer to the idea of interpreting boundary value problems (BVPs) in terms of pseudo-differential operators with a principal symbolic hierarchy, taking into account that BVPs are a source of cone and edge operator algebras. The respective cone and edge pseudo-differential algebras in turn are the starting point of higher corner theories. In addition there are deep relationships between corner operators and complex analysis. This will be illustrated by the Mellin symbolic calculus.
We consider a statistical inverse learning problem, where we observe the image of a function f through a linear operator A at i.i.d. random design points X_i, superposed with additive noise. The distribution of the design points is unknown and can be very general. We analyze simultaneously the direct (estimation of Af) and the inverse (estimation of f) learning problems. In this general framework, we obtain strong and weak minimax optimal rates of convergence (as the number of observations n grows large) for a large class of spectral regularization methods over regularity classes defined through appropriate source conditions. This improves on or completes previous results obtained in related settings. The optimality of the obtained rates is shown not only in the exponent in n but also in the explicit dependence of the constant factor on the variance of the noise and the radius of the source condition set.
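As a hedged illustration of one member of the spectral regularization family analyzed here, the sketch below applies Tikhonov (ridge) regularization to a discretized surrogate of the problem: y_i = (Af)(X_i) + noise is observed at i.i.d. random design points, and f is estimated; the operator A, the grid, and the regularization parameter are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

m = 200                                       # grid size used to discretize f
grid = np.linspace(0.0, 1.0, m)
f_true = np.sin(2.0 * np.pi * grid)           # unknown function to be recovered

# A smoothing (ill-posed) forward operator -- a placeholder integral kernel.
A = np.exp(-np.abs(grid[:, None] - grid[None, :]) / 0.1) / m

n = 500                                       # number of observations
idx = rng.integers(0, m, size=n)              # i.i.d. random design points X_i
A_n = A[idx, :]                               # rows of A evaluated at the design points
y = A_n @ f_true + 0.01 * rng.standard_normal(n)

# Tikhonov regularization, one instance of spectral regularization:
#   f_hat = (A_n^T A_n / n + lam * I)^{-1} (A_n^T y / n)
lam = 1e-4
f_hat = np.linalg.solve(A_n.T @ A_n / n + lam * np.eye(m), A_n.T @ y / n)

Af_hat = A @ f_hat     # direct problem: estimate of Af;  f_hat: inverse problem
```

Other spectral methods (truncated SVD, Landweber iteration, and so on) differ only in how the eigenvalues of A_n^T A_n / n are filtered.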
This dissertation deals with the organization of humanitarian air transport in international disasters. Such flights take place whenever the disaster-affected regions' own capacity to provide relief is overwhelmed and assistance from abroad is requested. In each subsequent relief operation, aid organizations and other actors involved in disaster relief once again face the challenge of setting up a logistics chain in the shortest possible time, so that goods arrive at the right time, in the right quantity, and at the right place.
Humanitarian air transports are usually organized as charter flights and operate over long distances to destinations that often lie away from high-frequency flows of goods. The supply of such transport services on the market is not reliably available, and aid organizations may have to wait until capacity with suitable aircraft becomes available. The qualitative requirements that aid organizations place on relief-goods transport are also higher than in regular scheduled transport.
Within this dissertation, an alternative organizational model for the procurement, operation, and financing of humanitarian air transport is developed. It takes into account the guaranteed availability of particularly flexible aircraft, by means of which the quality and, in particular, the plannability of relief operations could be improved.
An ideal-typical model is developed here by coupling the theory of collective goods, which belongs to public finance, with contract theory as part of New Institutional Economics.
Empirical contributions to contract theory criticize that the procurement of transaction-specific capital goods, such as aircraft with special characteristics, leads to inefficient solutions between contracting parties due to risks and environmental uncertainties. This dissertation shows how risks and environmental uncertainties can be reduced ex ante, that is, before conclusion of the contract, by establishing a common information base. This is achieved through a temporal extension of an empirical model from the economics of regulation for determining the organizational form for transaction-specific capital goods.
The thesis also contributes to increasing efficiency in humanitarian logistics through a case-specific consideration of horizontal cooperation and the professionalization of relief operations in the field of humanitarian aviation.
Linking together the processes of rapid physical erosion and the resultant chemical dissolution of rock is a crucial step in building an overall deterministic understanding of weathering in mountain belts. Landslides, which are the most volumetrically important geomorphic process at these high rates of erosion, can generate extremely high rates of very localised weathering. To elucidate how this process works we have taken advantage of uniquely intense landsliding, resulting from Typhoon Morakot, in the T'aimali River and surrounds in southern Taiwan. Combining detailed analysis of landslide seepage chemistry with estimates of catchment-by-catchment landslide volumes, we demonstrate that in this setting the primary role of landslides is to introduce fresh, highly labile mineral phases into the surface weathering environment. There, rapid weathering is driven by the oxidation of pyrite and the resultant sulfuric-acid-driven dissolution of primarily carbonate rock. The total dissolved load correlates well with dissolved sulfate - the chief product of this style of weathering - in both landslides and streams draining the area (R^2 = 0.841 and 0.929 respectively; p < 0.001 in both cases), with solute chemistry in seepage from landslides and catchments affected by significant landsliding governed by the same weathering reactions. The predominance of coupled carbonate-sulfuric-acid-driven weathering is the key difference between these sites and previously studied landslides in New Zealand (Emberson et al., 2016), but in both settings increasing volumes of landslides drive greater overall solute concentrations in streams.
Bedrock landslides, by excavating deep below saprolite-rock interfaces, create conditions for weathering in which all mineral phases in a lithology are initially unweathered within landslide deposits. As a result, the most labile phases dominate the weathering immediately after mobilisation and during a transient period of depletion. This mode of dissolution can strongly alter the overall output of solutes from catchments and their contribution to global chemical cycles if landslide-derived material is retained in catchments for extended periods after mass wasting.
PaRDeS, the journal of the Vereinigung für Jüdische Studien e. V. (Association for Jewish Studies), aims to document the fruitful and multifaceted culture of Judaism as well as its points of contact with its environment in a variety of fields. The journal also serves as a forum for positioning the disciplines of Jewish Studies and Judaistik within scholarly discourse and for discussing their historical and social responsibility.
It is "scientific folklore" coming from physical heuristics that solutions to the heat equation on a Riemannian manifold can be represented by a path integral. However, the problem with such path integrals is that they are notoriously ill-defined. One way to make them rigorous (which is often applied in physics) is finite-dimensional approximation, or time-slicing approximation: Given a fine partition of the time interval into small subintervals, one restricts the integration domain to paths that are geodesic on each subinterval of the partition. These finite-dimensional integrals are well-defined, and the (infinite-dimensional) path integral then is defined as the limit of these (suitably normalized) integrals, as the mesh of the partition tends to zero.
In this thesis, we show that indeed, solutions to the heat equation on a general compact Riemannian manifold with boundary are given by such time-slicing path integrals. Here we consider the heat equation for general Laplace type operators, acting on sections of a vector bundle. We also obtain similar results for the heat kernel, although in this case, one has to restrict to metrics satisfying a certain smoothness condition at the boundary. One of the most important manipulations one would like to do with path integrals is taking their asymptotic expansions; in the case of the heat kernel, this is the short time asymptotic expansion. In order to use time-slicing approximation here, one needs the approximation to be uniform in the time parameter. We show that this is possible by giving strong error estimates.
Finally, we apply these results to obtain short-time asymptotic expansions of the heat kernel also in degenerate cases (i.e. at the cut locus). Furthermore, our results allow us to relate the asymptotic expansion of the heat kernel to a formal asymptotic expansion of the infinite-dimensional path integral, which yields relations between geometric quantities on the manifold and on the loop space. In particular, we show that the lowest order term in the asymptotic expansion of the heat kernel is essentially given by the Fredholm determinant of the Hessian of the energy functional. We also investigate how this relates to the zeta-regularized determinant of the Jacobi operator along minimizing geodesics.
Computer security deals with the detection and mitigation of threats to computer networks, data, and computing hardware. This thesis addresses the following two computer security problems: email spam campaign detection and malware detection.
Email spam campaigns can easily be generated using popular dissemination tools by specifying simple grammars that serve as message templates. A grammar is disseminated to the nodes of a botnet, and the nodes create messages by instantiating the grammar at random. Email spam campaigns can encompass huge data volumes and therefore pose a threat to the stability of the infrastructure of email service providers that have to store them. Malware, i.e. software that serves a malicious purpose, affects web servers, client computers via active content, and client computers through executable files. Without the help of malware detection systems it would be easy for malware creators to collect sensitive information or to infiltrate computers.
The detection of threats, such as email-spam messages, phishing messages, or malware, is an adversarial and therefore intrinsically difficult problem. Threats vary greatly and evolve over time. The detection of threats based on manually designed rules is therefore difficult and requires a constant engineering effort. Machine learning is a research area that revolves around the analysis of data and the discovery of patterns that describe aspects of the data. Discriminative learning methods extract prediction models from data that are optimized to predict a target attribute as accurately as possible. Machine-learning methods hold the promise of automatically identifying patterns that robustly and accurately detect threats. This thesis focuses on the design and analysis of discriminative learning methods for the two computer-security problems under investigation: email-campaign and malware detection.
The first part of this thesis addresses email-campaign detection. We focus on regular expressions as a syntactic framework, because regular expressions are intuitively comprehensible by security engineers and administrators, and they can be applied as a detection mechanism in an extremely efficient manner. In this setting, a prediction model is provided with exemplary messages from an email-spam campaign. The prediction model has to generate a regular expression that reveals the syntactic pattern underlying the entire campaign, and that a security engineer finds comprehensible and feels confident enough to use for blacklisting further messages at the email server. We model this problem as a two-stage learning problem with structured input and output spaces which can be solved using standard cutting-plane methods. To this end, we develop an appropriate loss function and derive a decoder for the resulting optimization problem.
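A deliberately naive sketch of the underlying idea: a handful of exemplary campaign messages are generalized into one regular expression by keeping the substrings shared by all examples literal and turning the variable parts into wildcards. This is only a toy illustration, not the structured-prediction formulation with cutting-plane training described above.

```python
import re
from difflib import SequenceMatcher

def generalize(messages):
    """Collapse example spam messages into one regular expression:
    substrings common to all messages stay literal, variable gaps become '.*'."""
    pattern = messages[0]
    for msg in messages[1:]:
        matcher = SequenceMatcher(None, pattern, msg, autojunk=False)
        parts = [pattern[a:a + size]
                 for a, _, size in matcher.get_matching_blocks() if size > 0]
        pattern = "\x00".join(parts)          # '\x00' marks "anything in between"
    literals = [re.escape(p) for p in pattern.split("\x00") if p]
    return ".*".join(literals)

examples = [
    "Dear Alice, claim your prize at http://evil.example/a1 now!",
    "Dear Bob, claim your prize at http://evil.example/x9 now!",
    "Dear Carol, claim your prize at http://evil.example/zz now!",
]
regex = generalize(examples)
assert all(re.search(regex, m) for m in examples)
print(regex)   # shared text stays literal, variable parts become '.*'
```

The learning problem studied in the thesis replaces this greedy alignment with a trained model and a loss function tailored to the task.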
The second part of this thesis deals with the problem of predicting whether a given JavaScript or PHP file is malicious or benign. Recent malware analysis techniques use static or dynamic features, or both. In fully dynamic analysis, the software or script is executed and observed for malicious behavior in a sandbox environment. By contrast, static analysis is based on features that can be extracted directly from the program file. In order to bypass static detection mechanisms, code obfuscation techniques are used to spread a malicious program file in many different syntactic variants. Deobfuscating the code before applying a static classifier makes the code amenable to mostly static code analysis and can overcome the problem of obfuscated malicious code, but on the other hand it increases the computational cost of malware detection by an order of magnitude. In this thesis we present a cascaded architecture in which a classifier first performs a static analysis of the original code and, based on the outcome of this first classification step, the code may be deobfuscated and classified again. We explore several types of features, including token n-grams, orthogonal sparse bigrams, subroutine hashings, and syntax-tree features, and study the robustness of detection methods and feature types against the evolution of malware over time. The developed tool scans very large file collections quickly and accurately.
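A schematic sketch of the cascade logic just described, with placeholder components: the feature extraction (plain word n-grams via a hashing vectorizer), the linear classifiers, the deobfuscation routine, and the decision thresholds are all illustrative assumptions, not the feature sets or tooling evaluated in the thesis.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import LogisticRegression

# Placeholder features: token n-grams of the raw script text.
vectorizer = HashingVectorizer(analyzer="word", ngram_range=(1, 3), n_features=2**18)

def deobfuscate(code: str) -> str:
    """Hypothetical deobfuscation step (constant folding, eval-unwrapping, ...).
    In a real system this is the expensive part of the analysis."""
    return code  # placeholder

class CascadedDetector:
    def __init__(self, threshold_low=0.2, threshold_high=0.8):
        self.stage1 = LogisticRegression(max_iter=1000)   # classifies the original code
        self.stage2 = LogisticRegression(max_iter=1000)   # classifies deobfuscated code
        self.lo, self.hi = threshold_low, threshold_high

    def fit(self, scripts, labels):
        self.stage1.fit(vectorizer.transform(scripts), labels)
        self.stage2.fit(vectorizer.transform([deobfuscate(s) for s in scripts]), labels)
        return self

    def predict(self, script):
        p = self.stage1.predict_proba(vectorizer.transform([script]))[0, 1]
        if p < self.lo:
            return "benign"
        if p > self.hi:
            return "malicious"
        # uncertain: pay the cost of deobfuscation and classify again
        p2 = self.stage2.predict_proba(vectorizer.transform([deobfuscate(script)]))[0, 1]
        return "malicious" if p2 > 0.5 else "benign"
```

The point of the cascade is that the expensive second stage is invoked only for files on which the cheap static stage is uncertain.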
Each model is evaluated on real-world data and compared to reference methods. Our approach of inferring regular expressions to filter emails belonging to an email spam campaign leads to models with a high true-positive rate at a very low false-positive rate that is an order of magnitude lower than that of a commercial content-based filter. The resulting system, REx-SVMshort, is being used by a commercial email service provider and complements content-based and IP-address-based filtering.
Our cascaded malware detection system is evaluated on a high-quality data set of almost 400,000 conspicuous PHP files and a collection of more than 1,00,000 JavaScript files. From our case study we can conclude that our system can quickly and accurately process large data collections at a low false-positive rate.
Background: Given the well-established association between perceived stress and quality of life (QoL) in dementia patients and their partners, our goal was to identify whether relationship quality and dyadic coping would operate as mediators between perceived stress and QoL.
Methods: 82 dyads of dementia patients and their spousal caregivers were included in a cross-sectional assessment from a prospective study. QoL was assessed with the Quality of Life in Alzheimer's Disease scale (QoL-AD) for dementia patients and the WHO Quality of Life-BREF for spousal caregivers. Perceived stress was measured with the Perceived Stress Scale (PSS-14). Both partners were assessed with the Dyadic Coping Inventory (DCI). Analyses of correlation as well as regression models including mediator analyses were performed.
Results: We found negative correlations between stress and QoL in both partners (QoL-AD: r = -0.62; p < 0.001; WHO-QOL Overall: r = -0.27; p = 0.02). Spousal caregivers had a significantly lower DCI total score than dementia patients (p < 0.001). Dyadic coping was a significant mediator of the relationship between stress and QoL in spousal caregivers (z = 0.28; p = 0.02), but not in dementia patients. Likewise, relationship quality significantly mediated the relationship between stress and QoL in caregivers only (z = -2.41; p = 0.02).
Conclusions: This study identified dyadic coping as a mediator of the relationship between stress and QoL in (caregiving) partners of dementia patients. In patients, however, we found a direct negative effect of stress on QoL. The findings suggest the importance of stress-reducing and dyadic interventions for dementia patients and their partners, respectively.
We present a temperature and fluence dependent Ultrafast X-Ray Diffraction study of a laser-heated antiferromagnetic dysprosium thin film. The loss of antiferromagnetic order is evidenced by a pronounced lattice contraction. We devise a method to determine the energy flow between the phonon and spin system from calibrated Bragg peak positions in thermal equilibrium. Reestablishing the magnetic order is much slower than the cooling of the lattice, especially around the Néel temperature. Despite the pronounced magnetostriction, the transfer of energy from the spin system to the phonons in Dy is slow after the spin-order is lost.
The optical properties of semiconductor nanocrystals (SC NCs) are largely controlled by their size and surface chemistry, i.e., the chemical composition and thickness of inorganic passivation shells and the chemical nature and number of surface ligands as well as the strength of their bonds to surface atoms. The latter is particularly important for CdTe NCs, which – together with alloyed CdxHg1−xTe – are the only SC NCs that can be prepared in water in high quality without the need for an additional inorganic passivation shell. Aiming at a better understanding of the role of stabilizing ligands for the control of the application-relevant fluorescence features of SC NCs, we assessed the influence of two of the most commonly used monodentate thiol ligands, thioglycolic acid (TGA) and mercaptopropionic acid (MPA), on the colloidal stability, photoluminescence (PL) quantum yield (QY), and PL decay behavior of a set of CdTe NC colloids. As an indirect measure for the strength of the coordinative bond of the ligands to SC NC surface atoms, the influence of the pH (pD) and the concentration on the PL properties of these colloids was examined in water and D2O and compared to the results from previous dilution studies with a set of thiol-capped Cd1−xHgxTe SC NCs in D2O. As a prerequisite for these studies, the number of surface ligands was determined photometrically at different steps of purification after SC NC synthesis with Ellman's test. Our results demonstrate ligand control of the pH-dependent PL of these SC NCs, with MPA-stabilized CdTe NCs being less prone to luminescence quenching than TGA-capped ones. For both types of CdTe colloids, ligand desorption is more pronounced in H2O compared to D2O, underlining also the role of hydrogen bonding and solvent molecules.
In the current paradigm of cosmology, the formation of large-scale structures is mainly driven by non-radiating dark matter, which makes up the dominant part of the matter budget of the Universe. Cosmological observations, however, rely on the detection of luminous galaxies, which are biased tracers of the underlying dark matter. In this thesis I present cosmological reconstructions of both the dark matter density field that forms the cosmic web and the cosmic velocities; for both aspects of my work I describe the theoretical formalism and the results of its application to cosmological simulations as well as to a galaxy redshift survey. The foundation of our method is a statistical approach in which a given galaxy catalogue is interpreted as a biased realization of the underlying dark matter density field. The inference is computationally performed on a mesh grid by sampling from a probability density function which describes the joint posterior distribution of the matter density and the three-dimensional velocity field. The statistical background of our method is described in the chapter "Implementation of argo", where an introduction to sampling methods is given, paying special attention to Markov chain Monte Carlo techniques.
In the chapter "Phase-Space Reconstructions with N-body Simulations", I introduce and implement a novel biasing scheme to relate the galaxy number density to the underlying dark matter, which I decompose into a deterministic part, described by a non-linear and scale-dependent analytic expression, and a stochastic part, for which I present a negative binomial (NB) likelihood function that models deviations from Poissonity. Both bias components had already been studied theoretically, but had so far never been tested in a reconstruction algorithm. I test these new contributions against N-body simulations to quantify improvements and show that, compared to state-of-the-art methods, the stochastic bias is indispensable at wave numbers of k ≥ 0.15 h Mpc^−1 in the power spectrum in order to obtain unbiased results from the reconstructions. In the second part of this chapter I describe and validate our approach for inferring the three-dimensional cosmic velocity field jointly with the dark matter density. I use linear perturbation theory for the large-scale bulk flows and a dispersion term to model virialized galaxy motions, showing that our method accurately recovers the real-space positions of the redshift-space distorted galaxies. I analyze the results with the isotropic and the two-dimensional power spectrum.
Finally, in the chapter "Phase-Space Reconstructions with Galaxy Redshift Surveys", I show how I combine all findings and apply the method to the CMASS (Constant (stellar) Mass) galaxy catalogue of the Baryon Oscillation Spectroscopic Survey (BOSS). I describe how our method accounts for the observational selection effects within our reconstruction algorithm. I also demonstrate that the renormalization of the prior distribution function is mandatory to account for higher-order contributions in the structure formation model, and a redshift-dependent bias factor is theoretically motivated and implemented into our method. The various refinements yield unbiased results for the dark matter down to scales of k ≤ 0.2 h Mpc^−1 in the power spectrum and isotropize the galaxy catalogue down to distances of r ∼ 20 h^−1 Mpc in the correlation function.
We further test the results of our cosmic velocity field reconstruction by comparing them to a synthetic mock galaxy catalogue, finding a strong correlation between the mock and the reconstructed velocities. The applications of both the density field without redshift-space distortions and the velocity reconstructions are very broad; they can be used for improved analyses of the baryon acoustic oscillations, for environmental studies of the cosmic web, and for studies of the kinematic Sunyaev-Zel'dovich and integrated Sachs-Wolfe effects.
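A minimal sketch of the stochastic-bias ingredient mentioned above: a negative binomial log-likelihood for galaxy counts per cell, with the expected count coming from a deterministic bias relation and an over-dispersion parameter modelling deviations from Poissonity. The bias relation and all parameter values below are placeholders, not the expressions calibrated in the thesis.

```python
import numpy as np
from scipy.special import gammaln

def nb_loglike(counts, lam, beta):
    """Negative binomial log-likelihood of galaxy counts per cell, with mean lam and
    over-dispersion beta (variance = lam + lam**2 / beta); Poisson limit as beta -> inf."""
    counts = np.asarray(counts, dtype=float)
    return np.sum(
        gammaln(counts + beta) - gammaln(beta) - gammaln(counts + 1.0)
        + beta * np.log(beta / (beta + lam))
        + counts * np.log(lam / (beta + lam))
    )

def expected_counts(delta, n_mean=2.0, bias=1.5):
    """Placeholder deterministic bias: expected galaxy counts from the matter overdensity."""
    return n_mean * np.clip(1.0 + delta, 1e-6, None) ** bias

rng = np.random.default_rng(1)
delta = rng.normal(0.0, 0.5, size=64**3)      # toy matter density contrast on a mesh
lam = expected_counts(delta)
counts = rng.poisson(lam)                     # toy "observed" galaxy counts
print(nb_loglike(counts, lam, beta=4.0))
```

Within a sampling scheme of the kind described above, this log-likelihood enters the posterior that is explored with Markov chain Monte Carlo moves over the density (and velocity) field.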
Background:
Skewed body size distributions and the high relative richness of small-bodied taxa are a fundamental
property of a wide range of animal clades. The evolutionary processes responsible for generating these distributions
are well described in vertebrate model systems but have yet to be explored in detail for other major terrestrial
clades. In this study, we explore the macro-evolutionary patterns of body size variation across families of Hexapoda
(insects and their close relatives), using recent advances in phylogenetic understanding, with an aim to investigate
the link between size and diversity within this ancient and highly diverse lineage.
Results:
The maximum, minimum and mean-log body lengths of hexapod families are all approximately log-normally
distributed, consistent with previous studies at lower taxonomic levels, and contrasting with skewed distributions
typical of vertebrate groups. After taking phylogeny and within-tip variation into account, we find no evidence for a
negative relationship between diversification rate and body size, suggesting decoupling of the forces controlling these
two traits. Likelihood-based modeling of the log-mean body size identifies distinct processes operating within
Holometabola and Diptera compared with other hexapod groups, consistent with accelerating rates of size evolution
within these clades, while as a whole, hexapod body size evolution is found to be dominated by neutral processes
including significant phylogenetic conservatism.
Conclusions:
Based on our findings, we suggest that the use of models derived from well-studied but atypical clades, such as vertebrates, may lead to misleading conclusions when applied to other major terrestrial lineages. Our results
indicate that within hexapods, and within the limits of current systematic and phylogenetic knowledge, insect
diversification is generally unfettered by size-biased macro-evolutionary processes, and that these processes over large
timescales tend to converge on apparently neutral evolutionary processes. We also identify limitations in the data currently available for the clade and in modeling approaches for trees of higher taxa; resolving these limitations may collectively enhance our understanding of this key component of terrestrial ecosystems.
Physical hydrogels are currently attracting increasing interest as cell substrates, since viscoelasticity, or stress relaxation, is an important parameter in mechanotransduction that has so far been neglected. In this work, multi-functional polyurethanes were designed that form physical hydrogels via a novel gelation mechanism. In water, the anionic polyurethanes spontaneously form aggregates that are kept in solution by electrostatic repulsion. Rapid gelation can then be triggered by charge screening, which allows aggregation to proceed and a network to form. This can be achieved by adding various acids or salts, so that both acidic (pH 4 - 5) and pH-neutral hydrogels can be obtained. Whereas conventional polyurethane-based hydrogels are usually prepared from toxic isocyanate-containing prepolymers, the physical gelation mechanism described here is suitable for in situ applications in sensitive environments. Both the stiffness and the stress relaxation of the hydrogels can be tuned independently over a wide range. In addition, the hydrogels show excellent stress recovery.
This empirical study examines the scientification of physiotherapy and its relevance for professional practice in Germany. Scientification is understood as the processes of scientific discipline formation and academization. Its practical relevance is expressed in efforts to transform physiotherapy from an occupation into a profession.
Starting from theory-of-science approaches to discipline formation, academization, and professionalization, as well as the relevant state of physiotherapy research, the study approaches the empirical analysis of this subject from the perspectives of historical and present-day processes of scientific formation.
The central research questions of this study are:
On what theoretical basis, and within what understanding of the theory-practice relationship, are the subjects implicit to physiotherapy constituted?
And: Is there a theoretical foundation, in the form of theories and models, from which research-methodological and theory-of-science approaches can be justified, and to what extent does this reveal the potential for the formation of a scientific discipline?
How does the science relate to professional practice, and vice versa?
The empirical approach to the subject followed two routes:
1. an analysis of professional journals to capture the historicity of scientification, and
2. expert interviews to capture the contextuality of scientification.
The present study is intended as a contribution to the scientific discourse in physiotherapy. Its aim is to make an empirically robust statement about the success of discipline formation and of the academization process in Germany and to relate this to the field of practice. The analysis of the discipline's history is empirically relevant here; that history in turn defines the path to its present position in society, which is also analyzed. The analyses reconstruct the emancipation of physiotherapy in Germany from an auxiliary health occupation to an independent profession, focusing on processes of discipline formation and academization.
The results are manifold and show that German physiotherapy is, among other things through academization, on the way to becoming a science as well as a profession. However, the parallel existence of theory building and practical action, with hardly any demonstrable interlocking of the two levels of action, leads to the conclusion that the processes examined do not yet necessarily result in scientifically emancipatory success.
The global carbon cycle is closely linked to Earth's climate. In the context of continuously unchecked anthropogenic CO₂ emissions, the importance of natural CO₂ uptake and carbon storage is increasing. An important biogenic mechanism of natural atmospheric CO₂ drawdown is photosynthetic carbon fixation in plants and the subsequent long-term deposition of plant detritus in sediments.
The main objective of this thesis is to identify the factors that control the mobilization and transport of plant organic matter (pOM) through rivers towards sedimentation basins. I investigated this in the eastern Nepalese Arun Valley. The trans-Himalayan Arun River is characterized by a strong elevation gradient (205 − 8848 m asl) accompanied by strong changes in ecology and climate, ranging from wet tropical conditions in the Himalayan foreland to high alpine tundra on the Tibetan Plateau. The Arun is therefore an excellent natural laboratory for investigating the effect of vegetation cover, climate, and topography on plant organic matter mobilization and export in tributaries along this gradient.
Based on hydrogen isotope measurements of plant waxes sampled along the Arun River and its tributaries, I first developed a model that allows an indirect quantification of the pOM contributed to the mainstem by the Arun's tributaries. To determine the role of climatic and topographic parameters of the sampled tributary catchments, I looked for significant statistical relations between the amount of tributary pOM export and tributary characteristics (e.g. catchment size, plant cover, annual precipitation or runoff, topographic measures). On the one hand, I demonstrated that the pOM carried by the Arun is not derived uniformly from its entire catchment area. On the other hand, I showed that dense vegetation is a necessary, but not sufficient, criterion for high tributary pOM export. Instead, I identified erosion, rainfall, and runoff as the key factors controlling pOM sourcing in the Arun Valley. This finding is supported by terrestrial cosmogenic nuclide concentrations measured on river sands along the Arun and its tributaries to quantify catchment-wide denudation rates. The highest denudation rates corresponded well with maximum pOM mobilization and export, also suggesting a link between erosion and pOM sourcing.
The second part of this thesis focuses on the applicability of stable isotope records, such as plant wax n-alkanes in sediment archives, as qualitative and quantitative proxies for the variability of past Indian Summer Monsoon (ISM) strength. First, I determined how ISM strength affects the hydrogen and oxygen stable isotopic composition (reported as δD and δ18O values vs. Vienna Standard Mean Ocean Water) of precipitation in the Arun Valley, and whether this amount effect (Dansgaard, 1964) is strong enough to be recorded in potential paleo-ISM isotope proxies. Second, I investigated whether potential isotope records across the Arun catchment reflect ISM-strength-dependent precipitation δD values only, or whether the ISM isotope signal is overprinted by winter precipitation or glacial melt. Furthermore, I tested whether δD values of plant waxes in fluvial deposits reflect the δD values of environmental waters in the respective catchments.
I showed that surface water δD values in the Arun Valley and precipitation δD values from south of the Himalaya changed similarly during two consecutive years (2011 and 2012) with distinct ISM rainfall amounts (~20% less in 2012). To evaluate the effect of other water sources (winter westerly precipitation, glacial melt) and of evapotranspiration in the Arun Valley, I analysed satellite remote sensing data of rainfall distribution (TRMM 3B42V7), snow cover (MODIS MOD10C1), glacial coverage (GLIMS database, Global Land Ice Measurements from Space), and evapotranspiration (MODIS MOD16A2). In addition to the ISM, which dominates the entire catchment, stable isotope analysis of surface waters indicated a considerable contribution of glacial melt derived from high-altitude tributaries and the Tibetan Plateau. Remotely sensed snow cover data revealed that the upper portion of the Arun also receives considerable winter precipitation, but the effect of snow melt on the Arun Valley hydrology could not be evaluated because it takes place in early summer, several months prior to our sampling campaigns. I nevertheless infer that plant wax records and other potential stable isotope proxy archives below the snowline are well suited for qualitative, and potentially quantitative, reconstructions of past changes in ISM strength.
Plasma carotenoids, tocopherols, and retinol in the age-stratified (35–74 years) general population
(2016)
Blood micronutrient status may change with age. We analyzed plasma carotenoids, α-/γ-tocopherol, and retinol and their associations with age, demographic characteristics, and dietary habits (assessed by a short food frequency questionnaire) in a cross-sectional study of 2118 women and men (age-stratified from 35 to 74 years) of the general population from six European countries. Higher age was associated with lower lycopene and α-/β-carotene and higher β-cryptoxanthin, lutein, zeaxanthin, α-/γ-tocopherol, and retinol levels. Significant correlations with age were observed for lycopene (r = −0.248), α-tocopherol (r = 0.208), α-carotene (r = −0.112), and β-cryptoxanthin (r = 0.125; all p < 0.001). Age was inversely associated with lycopene (−6.5% per five-year age increase) and this association remained in the multiple regression model with the significant predictors (covariables) being country, season, cholesterol, gender, smoking status, body mass index (BMI (kg/m2)), and dietary habits. The positive association of α-tocopherol with age remained when all covariates including cholesterol and use of vitamin supplements were included (1.7% vs. 2.4% per five-year age increase). The association of higher β-cryptoxanthin with higher age was no longer statistically significant after adjustment for fruit consumption, whereas the inverse association of α-carotene with age remained in the fully adjusted multivariable model (−4.8% vs. −3.8% per five-year age increase). We conclude from our study that age is an independent predictor of plasma lycopene, α-tocopherol, and α-carotene.
In the course of this work, block copolymers of different charge, based on PEO and with high molecular weights, were prepared by living free radical polymerization. The polymers are easy to produce on the gram scale. They show a strong influence on both the nucleation and the dissolution of calcium phosphate. At the same time, the presence of positive groups (cations, ampholytes, and betaines) does not appear to have a dramatic effect on nucleation.
Polymers carrying positive charges cause the same retention effect as those containing exclusively anionic groups. However, the use of the cationic, ampholytic, and betainic copolymers results in a different precipitate morphology than that obtained with the anionic ones. This trend continues in the stabilization of an HAP surface: purely anionic copolymers have a stronger stabilizing effect than those containing positive charges. Incubation of human tooth enamel with anionic copolymers showed that biofilm formation is reduced compared with an untreated tooth surface. All of this makes the polymers interesting additives for dental care products.
In addition, polymer brushes based on these purely anionic copolymers were prepared, likewise by living free radical polymerization. These brushes have a strong influence on the crystal phase and form AB-type CHAP, the material also found in bone and teeth. Initial cytotoxicity tests indicate the great potential of these polymer brushes for coatings in medical technology.
Portal = Klima im Wandel
(2016)
Portal alumni
(2016)
The University of Potsdam had a big celebration this year: it turned 25. Numerous events and publications for the anniversary year highlight how the university has developed since its founding on 15 July 1991. The editorial team of Portal Alumni also wants to look back on this quarter century, from the perspective of students and alumni. We asked ourselves: How has studying at the University of Potsdam changed over time? What matters to students today, and what were the important issues ten or twenty years ago? What memories do alumni have of their university years? In this issue we give the floor to alumni who studied at the university at different times and who look back on their generation for us. Among them is Stefan Uhlmann, who began his studies shortly before the autumn of 1989. His first semester was still marked by social security: the state paid the monthly student allowance of 200 marks, and the university provided the place in the hall of residence. Even a job was guaranteed under the laws of the planned economy. A few months later everything was different: the content of all degree programmes was put to the test, all framework conditions were redesigned, and society as a whole, and with it the labour market, was in upheaval. Twenty-five years of university history lie between the generation of Stefan Uhlmann and that of the young bachelor's graduate Friederike Bath, a period in which teaching and studying changed. The introduction of study regulations under the law of the Federal Republic was followed at the turn of the millennium by the Bologna reform, which changed more than just studying. It also brought about a shift in values, which the socialization researcher Wilfried Schubarth describes in this issue.
Portal Wissen = klein
(2016)
Let's be honest: science, too, aims to make it big, at least in the name of knowledge. And yet, if anything belongs in the annals of successful research, it is the notion of the small. After all, it has always been science's self-conception to get to the bottom of what does not reveal itself at first glance. Seneca already held that "if something is smaller than the great, this does not mean at all that it is insignificant."
The smallest units of life, such as bacteria or viruses, have enormous effects. And again and again, (seemingly) large things first have to be reduced or broken down in order to recognize their nature. One of the greatest secrets of our world – the atom, the smallest, if (no longer) indivisible, unit of the chemical elements – revealed itself only through a look at the minuscule. Yet "small" was by no means always only the counterpart of "large", at least linguistically, for the word goes back to the West Germanic klaini, which means "fine" and "delicate", and is moreover related to the English word "clean". Fine and clean – certainly a worthwhile credo for scientific work. A little pedantry does not hurt either.
At the same time, a researcher must by no means be small-minded, but should be ready to suspect the unexpected and to align his or her work accordingly. And if the goal cannot be reached in the short term, it takes the proverbial long breath not to be talked down and not to give in.
Strictly speaking, research is actually a never-ending succession of small steps. Every discovery worthy of a Nobel Prize, every large-scale research project has to begin with a small idea, a tiny spark, only to be planned down to the smallest detail afterwards. What follows – the lowlands of the plain – is painstaking detail work: hours of interviews in search of the secret of the cerebellum, days of field studies tracking down the tiniest organisms, weeks of experimental series meant to make the microscopically small visible, months of archival research that brings odds and ends to light, or years of reading the fine print. All in pursuit of the big hit …
That is why we have put together a few "little things" from research at the University of Potsdam, true to the motto: small but mighty! Nutritional scientists are working to spare some of the smaller inhabitants of the earth – mice – the fate of "lab rats" by developing alternatives to animal testing. Linguists are investigating, in several projects and with innovative methods, how small children learn languages. Only seemingly tiny, by contrast, are the billions of stars of the Magellanic Clouds that Potsdam astrophysicists have in view – from the Babelsberg hill. The geoscientists of the research training group "StRATEGy", in turn, were on site in Argentina to take a closer look at the (little) "child" – the weather phenomenon El Niño – and its causes. The Research Center Sanssouci, jointly initiated by the Prussian Palaces and Gardens Foundation and the University of Potsdam, is meant to start small but to bring out the Potsdam cultural landscape in a big way. Finally, we show that a whole series of projects and initiatives is already under way to rediscover a gem of the region in 2019: the wanderer through the March of Brandenburg, Theodor Fontane.
As we said: little things. We hope you enjoy reading!
The Editorial Team
Portal Wissen = Point
(2016)
A point is more than meets the eye. In geometry, a point is an object with zero dimensions – it is there but takes up no space. You may assume that something so small is easily overlooked. A closer look, however, reveals that points are everywhere and play a significant role in many areas. In physics, for example, a mass point is the highest possible idealization of a body: the theoretical notion that the entire mass of a body is concentrated in a single point, its "center of mass".
Points are at the beginning (starting points), at intersections (pivot points), and at the end (final points). A point symbolizes great precision. There is a reason we “get to the point”. In writing, a point abbreviates, structures, and finalizes what is said. Physicians puncture, and athletes collect points on playing fields, courses, and on tables.
It’s no wonder that researchers are “surrounded” by points and work with them every day: Points bring order to chaos, structure the unexplained, and name the nameless. A point is often the beginning, an entry to worlds, findings, or problems.
Points are for everyone, though. German mathematician Oskar Perron wrote, “A point is exactly what the intelligent yet innocent, uncorrupted reader imagines it to be.” We want to follow up on this quotation: The latest edition of Portal Wissen offers exciting starting points, analyzes points of view, and gets right to the point.
We follow a physicist to the sun – the center point of our solar system – to ponder the origin of solar eruptions. We talked to a marketing professor about turning contentious points into successful deals during negotiation. Business information experts present leverage points that prepare both humans and machines for factories in the age of Industry 4.0. Enthusiastic entrepreneurs show us how their research became the starting point of a successful business idea – and also make the world a bit better. Geoscientists explain why the weather phenomenon El Niño causes – wet and dry – flashpoints. Just to name a few of many points …
We hope our magazine scores points with you and wish you an inspiring read!
The Editorial Team
Portal Wissen = Punkt
(2016)
A point has a lot to it, even if you cannot necessarily tell by looking at it. In geometry, after all, it counts as an object without extension – something that is there but does not take up any space. One would assume that something so small is easily overlooked. But on closer inspection, points are not only to be found everywhere, they are also very present and play weighty roles. In physics, for example, a mass point is the highest possible idealization of a real body: it describes the – theoretical – notion that the entire mass of the body is concentrated in one point, its center of mass. Points can be found at the beginning (starting points), at transitions (pivot points), and at the end (final points). A point is the symbol of greatest precision; it is not for nothing that we proverbially "get to the point". As part of our writing, it abbreviates, structures, and brings what is said to a close. Even musicians dot their notes and physicians perform punctures. And athletes of all kinds collect points on pitches, tracks, and in tables.
No wonder that scientists in particular are "surrounded" by points and work with them every day: they bring order into chaos, structure the unexplained, and name the nameless. All too often a point marks the beginning from which paths open up – to worlds, insights, or problems.
But points are there for everyone! The German mathematician Oskar Perron wrote: "A point is exactly what the intelligent yet innocent, uncorrupted reader imagines it to be." That is how we want to keep it: this issue aims to offer exciting points of contact, analyze points of view, and explain things right on point.
We follow a physicist to the sun, the center point of our solar system, to explore how solar eruptions arise. With a legal scholar we discussed points of contention between law and religion, while religious studies scholars report in a travel diary on unique points of contact with the culture (of science) in Iran. Cognitive and literary scholars together open up special points of view – namely, how we read and understand comics – and business informatics researchers show starting points with which they prepare humans and machines for factories in the age of Industry 4.0. Education researchers, in turn, explain their concept of how the creative potential of artists can become a starting point for giving schoolchildren individual access to art. Psychologists reveal why the color red turns us into points of attraction, while geoscientists explain why a climate phenomenon called El Niño causes – wet and dry – hotspots around the world. And these are only some points among many …
With all this we naturally hope to score points with you too. We wish you an inspiring read! Full stop. The end. No – the beginning!
The Editorial Team
Portal Wissen = small
(2016)
Let’s be honest: even science wants to make it big, at least when it comes to discovering new knowledge. Yet if one thing belongs in the annals of successful research, it is definitely small things. Scientists have long understood that their job is to explore things that they don’t see right away. Seneca once wrote, “If something is smaller than the great, this does not mean at all that it is insignificant.”
The smallest units of life, such as bacteria or viruses, can often have powerful effects. And again and again, (seemingly) large things must first be disassembled or reduced to small pieces in order to recognize their nature. One of the greatest secrets of our world – the atom, the smallest, if no longer indivisible, unit of chemical elements – revealed itself only by looking at its diminutive size. By no means is ‘small’ (German: klein) merely a counterpoint to large, at least in linguistic terms; the word comes from West Germanic klaini, which means ‘fine’ or ‘delicate,’ and is also related to the English word ‘clean.’ Fine and clean – certainly something worth striving for in scientific work. And a bit of attention to detail doesn’t hurt either.
This doesn't mean that researchers can be small-minded; they should be ready to expect the unexpected and to adjust their work accordingly. And even if they cannot attain their goals in the short term, they need staying power to keep themselves from being talked down or from giving up.
Strictly speaking, research is like putting together a puzzle with tons of tiny pieces; you don’t want it to end. Every discovery worthy of a Nobel Prize, every major research project, has to start with a small idea, with a tiny spark, and then the planning of the minutest details can begin. What follows is work focused on minuscule details: hours of interviews searching for the secret of the cerebellum (Latin for ‘little brain’), days of field studies searching for Lilliputian forms of life, weeks of experimentation meant to render visible the microscopically tiny, months of archival research that brings odds and ends to light, or years of reading fine print. All while hunting for a big hit...
This is why we’ve assembled a few ‘little’ stories about research at the University of Potsdam, under the motto: small, but look out! Nutritional scientists are working on rescuing some of the earth’s smaller residents – mice – from the fate of ‘lab rats’ by developing alternatives to animal testing. Linguists are using innovative methods in several projects to investigate how small children learn languages. Astrophysicists in Potsdam are scanning the skies above Babelsberg for the billions of stars in the Magellanic Clouds, which only seem tiny from down here. The Research Center Sanssouci, initiated by the Prussian Palaces and Gardens Foundation and the University of Potsdam, is starting small but will bring about great things for Potsdam’s cultural landscape. Biologists are drilling down to the smallest building blocks of life, looking for genes in barley so that new strains with positive characteristics can be cultivated.
Like we said: little things. Have fun reading!
The Editorial Team
Gene expression describes the process of making functional gene products (e.g. proteins or special RNAs) from instructions encoded in the genetic information (e.g. DNA). This process is heavily regulated, allowing cells to produce the appropriate gene products necessary for cell survival and to adapt production as necessary for different cell environments. Gene expression is subject to regulation at several levels, including transcription, mRNA degradation, translation, and protein degradation. When intact, this system maintains cell homeostasis, keeping the cell alive and adaptable to different environments. Malfunction in the system can result in disease states and cell death. In this dissertation, we explore several aspects of gene expression control by analyzing data from biological experiments. Most of the work that follows uses a common mathematical modeling framework based on Markov chain models to test hypotheses, predict system dynamics, or elucidate network topology. Our work lies at the intersection of mathematics and biology and showcases the power of statistical data analysis and mathematical modeling for the validation and discovery of biological phenomena.
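As a generic illustration of the kind of Markov chain description used for gene expression (a textbook birth-death model with invented rates, not any of the dissertation's specific models), mRNA copy number under constant transcription and first-order degradation can be simulated with the Gillespie algorithm:

```python
# Generic illustration (not the dissertation's specific models): a birth-death
# Markov chain for mRNA copy number, with constant transcription rate k_tx and
# first-order degradation rate k_deg, simulated with the Gillespie algorithm.
import random

def gillespie_mrna(k_tx=2.0, k_deg=0.1, t_end=100.0, n0=0, seed=1):
    random.seed(seed)
    t, n, trajectory = 0.0, n0, [(0.0, n0)]
    while t < t_end:
        rate_total = k_tx + k_deg * n
        t += random.expovariate(rate_total)          # waiting time to next event
        if random.random() < k_tx / rate_total:      # transcription event
            n += 1
        else:                                        # degradation event
            n -= 1
        trajectory.append((t, n))
    return trajectory

# The stationary copy number fluctuates around k_tx / k_deg = 20 (Poisson steady state).
traj = gillespie_mrna()
print(traj[-1])
```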
Postcolonial Justice
(2016)
Postcolonial Piracy
(2016)
Media piracy is a contested term in the academic as much as the public debate. It is used by the corporate industries as a synonym for the theft of protected media content with disastrous economic consequences. It is celebrated by technophile elites as an expression of freedom that ensures creativity as much as free market competition. Marxist critics and activists promote piracy as a subversive practice that undermines the capitalist world system and its structural injustices. Artists and entrepreneurs across the globe curse it as a threat to their existence, while many use pirate infrastructures and networks fundamentally for the production and dissemination of their art. For large sections of the population across the global South, piracy is simply the only means of accessing the medial flows of a progressively globalising planet.
Postural control is important to cope with demands of everyday life. It has been shown that both attentional demand (i.e., cognitive processing) and fatigue affect postural control in young adults. However, their combined effect is still unresolved. Therefore, we investigated the effects of fatigue on single- (ST) and dual-task (DT) postural control. Twenty young subjects (age: 23.7 ± 2.7) performed an all-out incremental treadmill protocol. After each completed stage, one-legged-stance performance on a force platform under ST (i.e., one-legged-stance only) and DT conditions (i.e., one-legged-stance while subtracting serial 3s) was registered. On a second test day, subjects conducted the same balance tasks for the control condition (i.e., non-fatigued). Results showed that heart rate, lactate, and ventilation increased following fatigue (all p < 0.001; d = 4.2–21). Postural sway and sway velocity increased during DT compared to ST (all p < 0.001; d = 1.9–2.0) and fatigued compared to non-fatigued condition (all p < 0.001; d = 3.3–4.2). In addition, postural control deteriorated with each completed stage during the treadmill protocol (all p < 0.01; d = 1.9–3.3). The addition of an attention-demanding interference task did not further impede one-legged-stance performance. Although both additional attentional demand and physical fatigue affected postural control in healthy young adults, there was no evidence for an overadditive effect (i.e., fatigue-related performance decrements in postural control were similar under ST and DT conditions). Thus, attentional resources were sufficient to cope with the DT situations in the fatigue condition of this experiment.
Preface
(2016)
Introduction: Adequate cognitive function in patients is a prerequisite for successful implementation of patient education and lifestyle coping in comprehensive cardiac rehabilitation (CR) programs. Although the association between cardiovascular diseases and cognitive impairments (CIs) is well known, the prevalence particularly of mild CI in CR and the characteristics of affected patients have been insufficiently investigated so far.
Methods: In this prospective observational study, 496 patients (54.5 ± 6.2 years, 79.8% men) with coronary artery disease following an acute coronary event (ACE) were analyzed. Patients were enrolled within 14 days of discharge from the hospital in a 3-week inpatient CR program. Patients were tested for CI using the Montreal Cognitive Assessment (MoCA) upon admission to and discharge from CR. Additionally, sociodemographic, clinical, and physiological variables were documented. The data were analyzed descriptively and in a multivariate stepwise backward elimination regression model with respect to CI.
Results: At admission to CR, CI (MoCA score < 26) was identified in 182 patients (36.7%). Significant differences between the CI and no-CI groups were found: the CI group showed a higher prevalence of smoking (65.9 vs 56.7%, P = 0.046), heavy (physically demanding) workloads (26.4 vs 17.8%, P < 0.001), sick leave longer than 1 month prior to CR (28.6 vs 18.5%, P = 0.026), reduced exercise capacity (102.5 vs 118.8 W, P = 0.006), and a shorter 6-min walking distance (401.7 vs 421.3 m, P = 0.021) compared to the no-CI group. The age- and education-adjusted model showed positive associations with CI only for sick leave of more than 1 month prior to the ACE (odds ratio [OR] 1.673, 95% confidence interval 1.07–2.79; P = 0.03) and heavy workloads (OR 2.18, 95% confidence interval 1.42–3.36; P < 0.01).
Conclusion: The prevalence of CI in CR was considerable, affecting more than one-third of cardiac patients. Besides age and education level, CI was associated with heavy workloads and a longer sick leave before the ACE.
In Turkey, there is a shortage of studies on the prevalence of sexual aggression among young adults. The present study examined sexual aggression victimization and perpetration since the age of 15 in a convenience sample of N = 1,376 college students (886 women) from four public universities in Ankara, Turkey. Prevalence rates for different coercive strategies, victim-perpetrator constellations, and sexual acts were measured with a Turkish version of the Sexual Aggression and Victimization Scale (SAV-S). Overall, 77.6% of women and 65.5% of men reported at least one instance of sexual aggression victimization, and 28.9% of men and 14.2% of women reported at least one instance of sexual aggression perpetration. Prevalence rates of sexual aggression victimization and perpetration were highest for current or former partners, followed by acquaintances/friends and strangers. Alcohol was involved in a substantial proportion of the reported incidents. The findings are the first to provide systematic evidence on sexual aggression perpetration and victimization among college students in Turkey, including both women and men.
The design and implementation of service-oriented architectures poses a large number of research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Component orientation and web services are two approaches for the design and realization of complex web-based systems. Both approaches allow for dynamic application adaptation as well as the integration of enterprise applications.
Commonly used technologies, such as J2EE and .NET, form de facto standards for the realization of complex distributed systems. The evolution of component systems has led to web services and service-based architectures. This has been manifested in a multitude of industry standards and initiatives such as XML, WSDL, UDDI, and SOAP. All these achievements lead to a new and promising paradigm in IT systems engineering, which proposes to design complex software solutions as a collaboration of contractually defined software services.
Service-Oriented Systems Engineering represents a symbiosis of best practices in object-orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns.
The annual Ph.D. Retreat of the Research School provides each member with the opportunity to present the current state of his or her research and to give an outline of a prospective Ph.D. thesis. Due to the interdisciplinary structure of the Research School, this technical report covers a wide range of topics. These include but are not limited to: Human Computer Interaction and Computer Vision as Service; Service-oriented Geovisualization Systems; Algorithm Engineering for Service-oriented Systems; Modeling and Verification of Self-adaptive Service-oriented Systems; Tools and Methods for Software Engineering in Service-oriented Systems; Security Engineering of Service-based IT Systems; Service-oriented Information Systems; Evolutionary Transition of Enterprise Applications to Service Orientation; Operating System Abstractions for Service-oriented Computing; and Services Specification, Composition, and Enactment.
TripleA is a workshop series founded by linguists from the University of Tübingen and the University of Potsdam. Its aim is to provide a forum for semanticists doing fieldwork on understudied languages, and its focus is on languages from Africa, Asia, Australia and Oceania. The second TripleA workshop was held at the University of Potsdam, June 3-5, 2015.
Every year, the Hasso Plattner Institute (HPI) invites guests from industry and academia to a collaborative scientific workshop on the topic "Operating the Cloud". Our goal is to provide a forum for the exchange of knowledge and experience between industry and academia. HPI's Future SOC Lab is therefore a fitting environment to host this event, which is also supported by BITKOM.
On the occasion of this workshop we called for submissions of research papers and practitioners' reports. "Operating the Cloud" aims to be a platform for productive discussions of innovative ideas, visions, and upcoming technologies in the field of cloud operation and administration.
These proceedings publish the results of the third HPI cloud symposium "Operating the Cloud" 2015. We thank the authors for exciting presentations and insights into their current work and research. Moreover, we look forward to more interesting submissions for the upcoming symposium in 2016.
The coil-to-globule transition of poly(N-isopropylacrylamide) (PNIPAM) microgel particles suspended in water has been investigated in situ as a function of heating and cooling rate with four optical process analytical technologies (PAT), sensitive to structural changes of the polymer. Photon Density Wave (PDW) spectroscopy, Focused Beam Reflectance Measurements (FBRM), turbidity measurements, and Particle Vision Microscope (PVM) measurements are found to be powerful tools for the monitoring of the temperature-dependent transition of such thermo-responsive polymers. These in-line technologies allow for monitoring of either the reduced scattering coefficient and the absorption coefficient, the chord length distribution, the reflected intensities, or the relative backscatter index via in-process imaging, respectively. Varying heating and cooling rates result in rate-dependent lower critical solution temperatures (LCST), with different impact of cooling and heating. Particularly, the data obtained by PDW spectroscopy can be used to estimate the thermodynamic transition temperature of PNIPAM for infinitesimal heating or cooling rates. In addition, an inverse hysteresis and a reversible building of micrometer-sized agglomerates are observed for the PNIPAM transition process.
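One simple way to realize the zero-rate extrapolation mentioned above is a linear fit of the apparent transition temperature against the heating rate and reading off the intercept. The sketch below uses invented values, not the reported PDW measurements, and assumes a linear rate dependence purely for illustration:

```python
# Hypothetical illustration of the rate-extrapolation idea: fit the apparent
# transition temperature T_app against the heating rate and take the intercept
# as an estimate of the thermodynamic (zero-rate) transition temperature.
# The values below are invented, not measured PDW data.
import numpy as np

rates = np.array([0.1, 0.2, 0.5, 1.0])        # heating rates in K/min (assumed)
t_app = np.array([32.4, 32.7, 33.3, 34.2])    # apparent LCSTs in deg C (assumed)

slope, intercept = np.polyfit(rates, t_app, 1)
print(f"extrapolated transition temperature at zero rate: {intercept:.2f} deg C")
```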
This work reports on new high-resolution imaging and spectroscopic observations of solar type III radio bursts at low radio frequencies in the range from 30 to 80 MHz. Solar type III radio bursts are understood to result from the beam-plasma interaction of electron beams in the corona. The Sun provides a unique opportunity to study these plasma processes of an active star. Its activity manifests in eruptive events like flares, coronal mass ejections, and radio bursts, which are all accompanied by enhanced radio emission. Solar radio emission therefore carries important information about plasma processes associated with the Sun's activity. Moreover, the Sun's atmosphere is a unique plasma laboratory, with plasma processes under conditions not found in terrestrial laboratories. Because of the Sun's proximity to Earth, it can be studied in greater detail than any other star, and new knowledge about the Sun can be transferred to other stars. This "solar-stellar connection" is important for understanding processes on other stars.
The novel radio interferometer LOFAR provides imaging and spectroscopic capabilities to study these processes at low frequencies. Here it was used for solar observations.
LOFAR, the characteristics of its solar data, and the processing and analysis of these data with the Solar Imaging Pipeline and Solar Data Center are described. The Solar Imaging Pipeline is the central software that makes LOFAR usable for solar observations, so its development was a prerequisite for the analysis of solar LOFAR data and was realized in this work. Moreover, a new density model including heat conduction and Alfvén waves was developed, which provides the distance of radio bursts from the Sun based on dynamic radio spectra.
Its application to the dynamic spectrum of a type III burst observed on March 16, 2016 by LOFAR shows a nonuniform radial propagation velocity of the radio emission. The analysis of an imaging observation of type III bursts on June 23, 2012 resolves a burst as a bright, compact region localized in the corona and propagating radially along magnetic field lines with an average velocity of 0.23c. A nonuniform propagation velocity is revealed. A new beam model is presented that explains the nonuniform motion of the radio source as a propagation effect of an electron ensemble with a spread velocity distribution and rules out a monoenergetic electron distribution. The coronal electron number density is derived in the region from 1.5 to 2.5 R☉ and fitted with the newly developed density model, which determines the plasma density in the interplanetary space between Sun and Earth. The values correspond to a 1.25-fold and a 5-fold Newkirk model for harmonic and fundamental emission, respectively. In comparison with data from other radio instruments, the LOFAR data show high sensitivity and high resolution in space, time, and frequency.
The new results from LOFAR's high-resolution imaging spectroscopy are consistent with current theories of solar type III radio bursts and demonstrate LOFAR's capability to track fast-moving radio sources in the corona. LOFAR solar data are found to be a valuable source for solar radio physics and open a new window for studying plasma processes associated with highly energetic electrons in the solar corona.
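For orientation, the relation between observing frequency and source height used in such analyses can be sketched with the standard Newkirk (1961) coronal density model. Note that this is the generic textbook model, not the heat-conduction/Alfvén-wave density model developed in the thesis:

```python
# Sketch using the standard (n-fold) Newkirk (1961) coronal density model to
# relate an observed plasma-emission frequency to a radial distance from the Sun.
import numpy as np

def newkirk_density(r, nfold=1.0):
    """Electron density in cm^-3 at radial distance r (in solar radii)."""
    return nfold * 4.2e4 * 10.0 ** (4.32 / r)

def plasma_frequency_mhz(n_e):
    """Electron plasma frequency in MHz for n_e in cm^-3."""
    return 8.98e-3 * np.sqrt(n_e)

# Example: radial distance at which fundamental emission occurs at 40 MHz,
# found by scanning r for the matching plasma frequency (5-fold Newkirk model).
r_grid = np.linspace(1.05, 3.0, 2000)
f_grid = plasma_frequency_mhz(newkirk_density(r_grid, nfold=5.0))
r_40 = r_grid[np.argmin(np.abs(f_grid - 40.0))]
print(f"~{r_40:.2f} solar radii (5-fold Newkirk, fundamental emission)")
```

With these standard formulas, a 40 MHz fundamental source falls at roughly 2.2 solar radii for a 5-fold Newkirk model, consistent with the 1.5 to 2.5 R☉ range quoted above.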
Protective effect of 6-shogaol, ellagic acid, and myrrh on the intestinal epithelial barrier
(2016)
Many bioactive plant constituents and plant metabolites have anti-inflammatory properties. These promise great potential for use in phytotherapy and in the prevention of inflammatory bowel disease (IBD). Intestinal barrier dysfunction is a typical characteristic of IBD patients, who consequently suffer from acute diarrhea.
In this work, the plant components 6-shogaol, ellagic acid, and myrrh are examined in the intestinal colon epithelial cell models HT-29/B6 and Caco-2 for their potential to strengthen the intestinal barrier or to prevent barrier dysfunction. The analyses focus on paracellular barrier function and on the regulation of the claudins, the protein family of the tight junctions (TJs) that is decisive for this function.
Barrier function is determined by measuring the transepithelial resistance (TER) and by flux measurements in the Ussing chamber. For this purpose, HT-29/B6 and Caco-2 monolayers are treated with the plant components (6-shogaol, ellagic acid, myrrh), with the proinflammatory cytokine TNF-α, or with the combination of both substances for 24 or 48 h. For further characterization, the expression and localization of the claudins relevant for the paracellular barrier, the TJ ultrastructure, and various signaling pathways were analyzed.
In Caco-2 monolayers, ellagic acid and myrrh alone, but not 6-shogaol, led to an increase in TER caused by reduced permeability to sodium ions. Myrrh decreased the expression of the cation-channel-forming TJ protein claudin-2 via inhibition of the PI3K/Akt pathway, whereas ellagic acid reduced the expression of the TJ proteins claudin-4 and -7. All plant components protected Caco-2 cells against TNF-α-induced barrier dysfunction.
In HT-29/B6 monolayers, none of the plant components alone altered barrier function. HT-29/B6 cells responded to TNF-α with a marked decrease in TER and an increased fluorescein permeability. The TER decrease was characterized by a PI3K/Akt-mediated increase in claudin-2 expression and an NFκB-mediated redistribution of the sealing TJ protein claudin-1. 6-Shogaol partially inhibited the TER decrease and prevented the PI3K/Akt-induced claudin-2 expression and the NFκB-dependent claudin-1 redistribution. Likewise, myrrh, but not ellagic acid, inhibited the TNF-α-induced TER decrease. Myrrh prevented the increase in claudin-2 expression and the claudin-1 redistribution, but inhibited neither NFκB nor PI3K/Akt activation. This work also shows that STAT6 is involved in the TNF-α-induced increase in claudin-2 expression in HT-29/B6 cells: myrrh inhibited the TNF-α-induced phosphorylation of STAT6 and the increased claudin-2 expression.
The results suggest that the plant components 6-shogaol, ellagic acid, and myrrh strengthen the barrier via different mechanisms. For the treatment of intestinal diseases involving barrier dysfunction, combination preparations of different plants might therefore be more effective than single-plant preparations.
Every day, vast amounts of medical patient data are stored digitally in hospitals and medical practices. So far, these data have largely not been used for research purposes. The aim of this work is to analyze routinely generated, anonymized patient data from a practice for holistic internal medicine. Owing to a lack of cooperation from the vendor of the practice software, the patient data could not be extracted automatically. A selection of diagnoses and anthropometric parameters was therefore transferred manually into a database; information about treatment was not included. Data-mining methods make research on the basis of everyday patient data possible, and the application of machine learning can support preventive medicine and the monitoring of treatment courses.
The potential of analyzing these otherwise largely unused data is illustrated by investigations of comorbidity. These show, on the one hand, that the metabolic syndrome and its components form a cluster together with cancer and, on the other hand, that psychosomatic disorders occur more frequently together with autoimmune diseases of the thyroid. In addition, a metabolic disorder not yet recognized by conventional medicine, haemopyrrollactamuria (HPU), is examined. It can be detected by an increased excretion of pyrroles in the urine. Of the patients for whom an HPU test is available, 84% show an elevated titer. This observation contradicts the previous assumption that about 10% of the population are affected by HPU.
Preventive action makes it possible to maintain health. To this end, diseases must be detected as early as possible. In this study, decision-tree models can diagnose Hashimoto's thyroiditis in a patient with an accuracy of 87.5%. The deficits resulting from the missing information about drug treatment are illustrated by the model for predicting hypothyroidism (accuracy of 60.9%).
With the help of STATIS, which is based on an extension of principal component analysis that allows several tables to be compared simultaneously, the course of treatment of 20 patients was monitored over a period of five years. Using hypertension as an example, it is shown that the patients differ from one another with respect to their laboratory values and that patterns of diseases can be recognized.
This work demonstrates the benefit that can be gained from the increased analysis of everyday high-dimensional and heterogeneous data.
Individuals within populations often differ substantially in habitat use, the ecological consequences of which can be far reaching. Stable isotope analysis provides a convenient and often cost effective means of indirectly assessing the habitat use of individuals that can yield valuable insights into the spatiotemporal distribution of foraging specialisations within a population. Here we use the stable isotope ratios of southern sea lion (Otaria flavescens) pup vibrissae at the Falkland Islands, in the South Atlantic, as a proxy for adult female habitat use during gestation. A previous study found that adult females from one breeding colony (Big Shag Island) foraged in two discrete habitats, inshore (coastal) or offshore (outer Patagonian Shelf). However, as this species breeds at over 70 sites around the Falkland Islands, it is unclear if this pattern is representative of the Falkland Islands as a whole. In order to characterize habitat use, we therefore assayed carbon (δ13C) and nitrogen (δ15N) ratios from 65 southern sea lion pup vibrissae, sampled across 19 breeding colonies at the Falkland Islands. Model-based clustering of pup isotope ratios identified three distinct clusters, representing adult females that foraged inshore, offshore, and a cluster best described as intermediate. A significant difference was found in the use of inshore and offshore habitats between West and East Falkland and between the two colonies with the largest sample sizes, both of which are located in East Falkland. However, habitat use was unrelated to the proximity of breeding colonies to the Patagonian Shelf, a region associated with enhanced biological productivity. Our study thus points towards other factors, such as local oceanography and its influence on resource distribution, playing a prominent role in inshore and offshore habitat use.
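Model-based clustering of this kind is commonly implemented as a Gaussian mixture model with the number of components chosen by an information criterion. The sketch below uses synthetic placeholder values, not the published vibrissa data:

```python
# Hedged sketch of model-based clustering on (d13C, d15N) pairs using a Gaussian
# mixture model; the isotope signatures below are synthetic placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
inshore = rng.normal([-13.0, 20.0], 0.5, size=(20, 2))    # assumed inshore signature
offshore = rng.normal([-17.0, 17.0], 0.5, size=(25, 2))   # assumed offshore signature
samples = np.vstack([inshore, offshore])

# Choose the number of clusters by BIC, as is typical for model-based clustering.
models = [GaussianMixture(n_components=k, random_state=0).fit(samples) for k in (1, 2, 3, 4)]
best = min(models, key=lambda m: m.bic(samples))
print("selected number of clusters:", best.n_components)
```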
This thesis deals with quality management systems in nonprofit organizations. It focuses on the field of tension between the interests of different actors within nonprofit organizations, drawing on the micropolitical approach, which grants all actors within an organization interests of their own that they try to assert in power struggles through tactics and strategies.
The thesis examines the process by which concrete measures, the so-called quality objectives, emerge and are evaluated, and the influence of pedagogical staff on their formulation. This is done by means of a single case study. Using qualitative interviews, it was investigated to what extent pedagogical staff use the opportunities for influence offered by the quality management system for strategic organizational development and for asserting their own interests.
The results show that there are two types of staff, active and passive, who experience either a gain or a loss of power. Owing to the cooperative mode of communication and decision-making, and to the scarcely diverging interests of the various actors, the existing opportunities for influence, in the sense of intra-organizational power struggles and micropolitical tactics, have so far remained largely unused. These characteristics of the case explain why the micropolitical approach did not lead to the anticipated results in the analysis.
Information about the strength of donor–acceptor interactions in push–pull alkenes is valuable, as this so-called "push–pull effect" influences their chemical reactivity and dynamic behaviour. In this paper, we discuss the applicability of NMR spectral data and barriers to rotation around the C=C double bond to quantify the push–pull effect in biologically important 2-alkylidene-4-oxothiazolidines. While olefinic proton chemical shifts and differences in 13C NMR chemical shifts of the two carbons constituting the C=C double bond fail to give the correct trend in the electron-withdrawing ability of the substituents attached to the exocyclic carbon of the double bond, barriers to rotation prove to be a reliable quantity for providing information about the extent of donor–acceptor interactions in the push–pull systems studied. In particular, all relevant kinetic data, that is, the Arrhenius parameters (apparent activation energy Ea and frequency factor A) and the activation parameters (ΔS‡, ΔH‡ and ΔG‡), were determined from the experimentally studied configurational isomerization of (E)-9a. These results were compared to previously published data for two other compounds, (Z)-1b and (2E,5Z)-7, showing that experimentally determined ΔG‡ values are a good indicator of the strength of the push–pull character. Theoretical calculations of the rotational barriers of eight selected derivatives correlate excellently with the calculated C=C bond lengths and corroborate the applicability of ΔG‡ for estimating the strength of the push–pull effect in these and related systems.
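For reference, free energies of activation of this kind are related to the measured rate constants through the standard Eyring equation (with κ the transmission coefficient), and, for a unimolecular process, the activation enthalpy follows from the Arrhenius activation energy:

```latex
k(T) = \kappa\,\frac{k_\mathrm{B} T}{h}\,\exp\!\left(-\frac{\Delta G^{\ddagger}}{RT}\right),
\qquad
\Delta G^{\ddagger} = \Delta H^{\ddagger} - T\,\Delta S^{\ddagger},
\qquad
\Delta H^{\ddagger} = E_\mathrm{a} - RT .
```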
Rapidly uplifting coastlines are frequently associated with convergent tectonic boundaries, like subduction zones, which are repeatedly ruptured by giant megathrust earthquakes. The coastal relief along tectonically active realms is shaped by the combined effects of sea-level variations and heterogeneous patterns of permanent tectonic deformation accumulated over several cycles of megathrust earthquakes. However, the correlation between earthquake deformation patterns and the sustained long-term segmentation of forearcs, particularly in Chile, remains poorly understood. Furthermore, the methods used to estimate permanent deformation from geomorphic markers, like marine terraces, have remained qualitative and based on non-reproducible procedures. This contrasts with the increasing resolution of digital elevation models, such as Light Detection and Ranging (LiDAR) data and high-resolution bathymetric surveys.
Throughout this thesis I study permanent deformation in a holistic manner: from the methods used to assess deformation rates to the processes involved in its accumulation. My research focuses on two aspects in particular: developing methodologies to assess permanent deformation using marine terraces, and comparing permanent deformation with seismic-cycle deformation patterns at different spatial scales along the rupture zone of the M8.8 Maule earthquake (2010). Two methods are developed to determine deformation rates from wave-built and wave-cut terraces, respectively. I selected an archetypal example of a wave-built terrace at Santa Maria Island, studying its stratigraphy and recognizing sequences of reoccupation events tied to eleven radiocarbon (14C) ages. I developed a method to link patterns of reoccupation with sea-level proxies by iterating relative sea-level curves for a range of uplift rates, and I find the best fit between relative sea level and the stratigraphic patterns for an uplift rate of 1.5 ± 0.3 m/ka.
A Graphical User Interface named TerraceM® was developed in Matlab®. This novel software tool determines shoreline angles in wave-cut terraces under different geomorphic scenarios. To validate the methods, I selected test sites with high-resolution LiDAR topography along the Maule earthquake rupture zone and in California, USA. The software determines the 3D position of the shoreline angle, which is a proxy for estimating permanent deformation rates. The method is based on linear interpolations that define the paleo-platform and paleo-cliff on swath profiles; the shoreline angle is then located by intersecting these interpolations. The accuracy and precision of TerraceM® were tested by comparing its results with previous assessments and through an experiment with students in a computer lab setting at the University of Potsdam.
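The shoreline-angle construction can be illustrated with a minimal sketch: straight lines are fitted to paleo-platform and paleo-cliff segments of a swath profile and intersected. The profile coordinates below are invented placeholders, and the snippet is written in Python for illustration only; TerraceM® itself is a Matlab® tool.

```python
# Sketch of the shoreline-angle geometry: fit a line to each of two profile segments
# and intersect them. All coordinates are hypothetical.
import numpy as np

# distance (m) and elevation (m) samples along a hypothetical swath profile
platform_x = np.array([120.0, 160.0, 200.0, 240.0])
platform_z = np.array([11.8, 12.1, 12.5, 12.9])
cliff_x = np.array([250.0, 260.0, 270.0, 280.0])
cliff_z = np.array([14.0, 18.5, 23.1, 27.4])

mp, bp = np.polyfit(platform_x, platform_z, 1)   # paleo-platform line z = mp*x + bp
mc, bc = np.polyfit(cliff_x, cliff_z, 1)         # paleo-cliff line    z = mc*x + bc

x_sa = (bc - bp) / (mp - mc)                     # intersection = shoreline angle
z_sa = mp * x_sa + bp
print(f"shoreline angle at x = {x_sa:.1f} m, elevation = {z_sa:.1f} m")
```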
I combined the methods developed for wave-built and wave-cut terraces to assess regional patterns of permanent deformation along the rupture of the 2010 Maule earthquake. Wave-built terraces are dated using 12 infrared stimulated luminescence (IRSL) ages, and shoreline angles in wave-cut terraces are estimated from 170 aligned swath profiles. The comparison of coseismic slip, interseismic coupling, and permanent deformation reveals three areas of high permanent uplift, terrace warping, and sharp fault offsets. These three areas correlate with regions of high slip and low coupling, as well as with the spatial limits of at least eight historical megathrust ruptures (M 8–9.5). I propose that the zones of upwarping at Arauco and Topocalma reflect changes in the frictional properties of the megathrust, which result in discrete boundaries for the propagation of mega-earthquakes.
To explore the application of geomorphic markers and quantitative morphology in offshore areas, I performed a local study of permanent deformation patterns inferred from hitherto unrecognized drowned shorelines in the Arauco Bay, in the southern part of the 2010 Maule earthquake rupture zone. A multidisciplinary approach, including morphometry, sedimentology, paleontology, 3D morphoscopy, and a landscape evolution model, is used to recognize, map, and assess local rates and patterns of permanent deformation in submarine environments. The permanent deformation patterns are then reproduced using elastic models to assess deformation rates of an active submarine splay fault, the Santa Maria Fault System (SMFS). The best fit suggests a reverse structure with a slip rate of 3.7 m/ka over the last 30 ka. The record of land-level changes during the earthquake cycle at Santa Maria Island suggests that most of the deformation may be accrued through splay-fault reactivation during mega-earthquakes, such as the 2010 Maule event. Considering a recurrence time of 150 to 200 years, as determined from historical and geological observations, slip of 0.3 to 0.7 m per event would be required to account for the 3.7 m/ka millennial slip rate. However, if the SMFS slips only every ~1000 years, representing a few megathrust earthquakes, then a slip of ~3.5 m per event would be required to account for the long-term rate. Such an event would be equivalent to a magnitude ~6.7 earthquake, capable of generating a local tsunami.
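A back-of-the-envelope sketch of the per-event slip arithmetic (millennial slip rate multiplied by the recurrence interval) is given below; it reproduces the order of magnitude of the figures quoted above, whose rounded ranges presumably also absorb uncertainty in the 3.7 m/ka rate.

```python
# Slip per event = long-term slip rate * recurrence interval (rough consistency check).
slip_rate_m_per_ka = 3.7                    # long-term slip rate of the SMFS

for recurrence_yr in (150, 200, 1000):      # recurrence intervals discussed above
    slip_per_event = slip_rate_m_per_ka * recurrence_yr / 1000.0
    print(f"recurrence {recurrence_yr:4d} yr -> ~{slip_per_event:.2f} m of slip per event")
```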
The results of this thesis provide novel and fundamental information regarding the amount of permanent deformation accrued in the crust and the mechanisms responsible for this accumulation at millennial time scales along the rupture zone of the M8.8 Maule earthquake (2010). Furthermore, the results highlight the value of quantitative geomorphology and repeatable methods for determining permanent deformation, improving the accuracy of marine terrace assessments, and refining estimates of vertical deformation rates in tectonically active coastal areas. This is vital information for adequate coastal-hazard assessments and for anticipating realistic earthquake and tsunami scenarios.
In contrast to recent advances in projecting sea levels, estimates of the economic impact of sea-level rise remain vague. Nonetheless, they are of great importance for policy making with regard to adaptation and greenhouse-gas mitigation. Since the damage is mainly caused by extreme events, we propose a stochastic framework to estimate the monetary losses from coastal floods in a confined region. For this purpose, we follow a Peak-over-Threshold approach employing a Poisson point process and the Generalised Pareto Distribution. By considering the effect of sea-level rise as well as potential adaptation scenarios on the parameters involved, we study the development of the annual damage. An application to the city of Copenhagen shows that a doubling of losses can be expected from a mean sea-level increase of only 11 cm. In general, we find that for varying parameters the expected losses can be well approximated by one of three analytical expressions, depending on the extreme-value parameters. These findings reveal the complex interplay of the parameters involved and allow conclusions of fundamental relevance. For instance, we show that the damage typically increases faster than the sea-level rise itself. This, in turn, can be of great importance for assessing the impacts of sea-level rise on the global scale. Our results are accompanied by an assessment of uncertainty, which reflects the stochastic nature of extreme events. While the absolute uncertainty about the flood damage increases with rising mean sea levels, we find that it decreases relative to the expected damage.
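A minimal sketch of such a peak-over-threshold calculation is given below, assuming placeholder values for the threshold, Poisson rate, GPD parameters, and damage function (none of which are the Copenhagen values from the paper); by Wald's identity, the Poisson arrival process enters the mean annual loss only through its rate.

```python
# Monte Carlo sketch: exceedances of a threshold arrive as a Poisson process, their
# sizes follow a Generalised Pareto Distribution, and each flood height maps to a loss.
# Expected annual damage = lambda * E[loss per exceedance] (Wald's identity).
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)

lam, u = 1.2, 1.4          # hypothetical mean exceedances per year and threshold (m)
xi, sigma = 0.1, 0.25      # hypothetical GPD shape and scale (m)

def damage(height_m):
    # placeholder damage function: losses grow faster than linearly with flood height
    return 50.0 * (height_m - u) ** 1.5

def expected_annual_damage(sea_level_rise_m, n_samples=200_000):
    """Monte Carlo estimate of the mean annual loss; sea-level rise shifts every surge."""
    peaks = u + genpareto.rvs(xi, scale=sigma, size=n_samples, random_state=rng)
    return lam * damage(peaks + sea_level_rise_m).mean()

for slr in (0.0, 0.11, 0.25):
    print(f"mean sea-level rise {slr:.2f} m -> expected annual damage ~{expected_annual_damage(slr):.1f}")
```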
Can the statistical properties of single-electron transfer events be correctly predicted within a common equilibrium ensemble description? This question of ergodic behavior, fundamental in the nanoworld, is scrutinized within a very basic semi-classical curve-crossing problem. It is shown that in the limit of non-adiabatic electron transfer (weak tunneling), well described by the Marcus–Levich–Dogonadze (MLD) rate, the answer is yes. However, in the limit of so-called solvent-controlled adiabatic electron transfer, a profound breaking of ergodicity occurs. Namely, a common description based on the ensemble reduced density matrix with an initial equilibrium distribution of the reaction coordinate is not able to reproduce the statistics of single-trajectory events in this seemingly classical regime. For sufficiently large activation barriers, the ensemble survival probability in a state remains nearly exponential, with the inverse rate given by the sum of the adiabatic curve-crossing (Kramers) time and the inverse MLD rate. In contrast, near the adiabatic regime, the single-electron survival probability is clearly non-exponential, even though it possesses an exponential tail that agrees well with the ensemble description. Initially, it is well described by a Mittag-Leffler distribution with a fractional rate. Paradoxically, the mean transfer time in this regime, which is classical at the ensemble level, is well described by the inverse of the non-adiabatic quantum tunneling rate at the single-particle level. An analytical theory is developed which agrees perfectly with stochastic simulations and explains our findings.
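The ensemble-level rate quoted above, namely that the inverse effective rate is the sum of the adiabatic (Kramers) curve-crossing time and the inverse MLD rate, can be illustrated with a short numerical sketch; the two input time scales below are placeholders chosen only to expose the two limits.

```python
# Ensemble-level effective rate: 1 / (tau_Kramers + 1/k_MLD). Inputs are placeholders.
def effective_rate(tau_kramers_s, k_mld_per_s):
    """Interpolates between the non-adiabatic (MLD-limited) and adiabatic (Kramers-limited) regimes."""
    return 1.0 / (tau_kramers_s + 1.0 / k_mld_per_s)

# weak-tunnelling (non-adiabatic) limit: the MLD rate is the bottleneck
print(effective_rate(tau_kramers_s=1e-9, k_mld_per_s=1e3))    # ~1e3 1/s, i.e. k_MLD
# solvent-controlled adiabatic limit: barrier crossing is the bottleneck
print(effective_rate(tau_kramers_s=1e-9, k_mld_per_s=1e12))   # ~1e9 1/s, i.e. 1/tau_Kramers
```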