The current financial reporting environment, with its increasing use of accounting estimates, including fair value estimates, suggests that unethical accounting estimates may be a growing concern. This paper provides explanations and empirical evidence for why some types of accounting estimates in financial reporting may promote a form of ethical blindness. These types of ethical blindness can have an escalating effect that corrupts not only an individual or organization but also the accounting profession and the public interest it serves. Ethical blindness in the standards of professional accountants may be a factor in the extent of misreporting, and may have taken on new urgency as a result of the proposals to change the conceptual framework for financial reporting using international standards. The social consequences for users of financial statements can be huge. The acquittal of former Nortel executives on fraud charges related to accounting manipulations is viewed by many as legitimizing accounting gamesmanship. This decision illustrates that the courts may not be the best place to deal with ethical reporting issues. The courts may be relied on for only the most egregious unethical conduct and, even then, the accounting profession is ill equipped to assist the legal system in prosecuting accounting fraud unless the standards have been clarified. We argue that the problem of unethical reporting should be addressed by the accounting profession itself, preferably as a key part of the conceptual framework that supports accounting and auditing standards, and the codes of ethical conduct that underpin the professionalism of accountants.
This article introduces the juxtaposed notions of liberal and neo-liberal gameplay in order to show that, while forms of contemporary game culture are heavily influenced by neo-liberalism, they often appear under a liberal disguise. The argument is grounded in Claus Pias’ idea of games as always being products of their time in terms of economic, political and cultural history. The article shows that romantic play theories (e.g. Schiller, Huizinga and Caillois) circle around the notion of play as ‘free’, which emerged in parallel with the philosophy of liberalism and with socio-economic developments such as industrialization and the rise of the nation state. It shows further that contemporary discourse in computer game studies addresses computer game/play as if it were still the romantic form of play rooted in the paradigm of liberalism. The article holds that an account that acknowledges the neo-liberal underpinnings of computer games is better suited to addressing contemporary computer games, among which are phenomena such as free-to-play games, which repeat the structures of a neo-liberal society. In those games the players invest time and effort in developing their skills, although their future value is mainly speculative – just as is the case for citizens of neo-liberal societies.
Apart from its central role during the 3D structure determination of proteins, the backbone chemical shift assignment is the basis for a number of applications, such as chemical shift perturbation mapping and studies of protein dynamics. This assignment is not a trivial task even if a 3D protein structure is known, and, performed manually, requires almost as much effort as the assignment for structure prediction. We present here a new algorithm based solely on 4D [H-1, N-15]-HSQC-NOESY-[H-1, N-15]-HSQC spectra which is able to assign a large percentage of chemical shifts (73-82 %) unambiguously, demonstrated on proteins of up to 250 residues. For the remaining residues, a small number of possible assignments is filtered out. This is done by comparing distances in the 3D structure to restraints obtained from the peak volumes in the 4D spectrum. Using dead-end elimination, assignments are removed in which at least one of the restraints is violated. Including additional information from chemical shift predictions, a complete unambiguous assignment was obtained for Ubiquitin, and 95 % of the residues were correctly assigned in the 251-residue N-terminal domain of enzyme I. The program, including source code, is available at https://github.com/thomasexner/4Dassign.
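The dead-end elimination step described above can be sketched as follows. This is a minimal illustration, not the 4Dassign implementation; all function names, data structures and toy distances are hypothetical.

```python
def dead_end_eliminate(candidates, restraints, dist, tol=1.0):
    """Iteratively discard candidate assignments that necessarily violate
    a distance restraint (dead-end elimination).

    candidates: dict residue -> set of candidate spin systems
    restraints: list of (res_i, res_j, upper_bound) tuples, bound in Angstrom
    dist:       dict (spin_a, spin_b) -> distance in the known 3D structure
    """
    changed = True
    while changed:
        changed = False
        for i, j, bound in restraints:
            for x in list(candidates[i]):
                # x is a dead end if every remaining assignment for residue j
                # would violate the NOE-derived upper bound (plus tolerance).
                if candidates[j] and all(
                        dist[(x, y)] > bound + tol for y in candidates[j]):
                    candidates[i].discard(x)
                    changed = True
    return candidates

# Toy example: residue 1 could be spin system "A" or "B"; residue 2 is "C".
# A restraint says residues 1 and 2 must lie within 5 A of each other.
cands = {1: {"A", "B"}, 2: {"C"}}
rests = [(1, 2, 5.0)]
d = {("A", "C"): 4.0, ("B", "C"): 9.0}
result = dead_end_eliminate(cands, rests, d)
# "B" is eliminated: its only possible partner distance (9 A) violates the bound
```

The iteration repeats until no further assignment can be removed, since each elimination can enable new ones.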
Macrocycles with quaterthiophene subunits were obtained by cyclooligomerization by direct oxidative coupling of unsubstituted dithiophene moieties. The rings were closed with high selectivity by an α,β′-connection of the thiophenes as proven by NMR spectroscopy. The reaction of the precursor with terthiophene moieties yielded the symmetric α,α′-linked macrocycle in low yield together with various differently connected isomers. Blocking of the β-position of the half-rings yielded selectively the α,α′-linked macrocycle. Selected cyclothiophenes were investigated by scanning tunneling microscopy, which displayed the formation of highly ordered 2D crystalline monolayers.
Spatio-temporal control of cellular uptake achieved by photoswitchable cell-penetrating peptides
(2016)
The selective uptake of compounds into specific cells of interest is a major objective in cell biology and drug delivery. By incorporation of a novel, thermostable azobenzene moiety we generated peptides that can be switched optically between an inactive state and an active, cell-penetrating state with excellent spatio-temporal control.
Low Earth orbiting geomagnetic satellite missions, such as the Swarm satellite mission, are the only means to monitor and investigate ionospheric currents on a global scale and to make in situ measurements of F region currents. High-precision geomagnetic satellite missions are also able to detect ionospheric currents during quiet-time geomagnetic conditions that have amplitudes of only a few nanotesla in the magnetic field. An efficient method to isolate the ionospheric signals from satellite magnetic field measurements has been the use of residuals between the observations and predictions from empirical geomagnetic models for other geomagnetic sources, such as the core and lithospheric field or signals from the quiet-time magnetospheric currents. This study aims at highlighting the importance of high-resolution magnetic field models that are able to predict the lithospheric field and that consider the quiet-time magnetosphere for reliably isolating signatures from ionospheric currents during geomagnetically quiet times. The effects on the detection of ionospheric currents arising from neglecting the lithospheric and magnetospheric sources are discussed using the example of four Swarm orbits during very quiet times. The respective orbits show a broad range of typical scenarios, such as strong and weak ionospheric signal (during day- and nighttime, respectively) superimposed over strong and weak lithospheric signals. If predictions from the lithosphere or magnetosphere are not properly considered, the amplitude of the ionospheric currents, such as the midlatitude Sq currents or the equatorial electrojet (EEJ), is modulated by 10-15 % in the examples shown. An analysis of several orbits above the African sector, where the lithospheric field is significant, showed that the peak value of the signatures of the EEJ is in error by 5 % on average when lithospheric contributions are not considered, which is in the range of uncertainties of present empirical models of the EEJ.
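Schematically, the residual approach described above isolates the ionospheric signal by subtracting model predictions for all other sources from the observation (the notation here is illustrative, not taken from the study):

```latex
\Delta\mathbf{B}_{\text{iono}} \approx \mathbf{B}_{\text{obs}}
  - \mathbf{B}_{\text{core}} - \mathbf{B}_{\text{lith}} - \mathbf{B}_{\text{magn}}
```

Any source omitted on the right-hand side leaks into the residual, which is why the lithospheric and quiet-time magnetospheric terms matter even at few-nanotesla amplitudes.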
Generalizing a linear expression over a vector space, we call a term of an arbitrary type τ linear if each of its variables occurs only once. Instead of the usual superposition of terms and of the total many-sorted clone of all terms, in the case of linear terms we define the partial many-sorted superposition operation and the partial many-sorted clone that satisfies the superassociative law as a weak identity. The extensions of linear hypersubstitutions are weak endomorphisms of this partial clone. For a variety V of one-sorted total algebras of type τ, we define the partial many-sorted linear clone of V as the partial quotient algebra of the partial many-sorted clone of all linear terms by the set of all linear identities of V. We then prove that weak identities of this clone correspond to linear hyperidentities of V.
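A minimal illustration of the linearity condition on terms, using a generic binary operation symbol (the symbol f is illustrative):

```latex
f(x_1, x_2)\ \text{is linear (each variable occurs exactly once)},\qquad
f(x_1, x_1)\ \text{is not linear}\ (x_1\ \text{occurs twice}).
```

Because substituting a linear term into another need not yield a linear term unless the variable sets are disjoint, superposition is only partially defined on linear terms, which motivates the partial clone construction above.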
In recent years, entire industries and their participants have been affected by disruptive technologies, resulting in dramatic market changes and challenges to firms’ business logic and thus their business models (BMs). Firms from mature industries are increasingly realizing that BMs that worked successfully for years have become insufficient to stay on track in today’s “move fast and break things” economy. Firms must scrutinize the core logic that informs how they do business, which means exploring novel ways to engage customers and get them to pay. This can lead to a complete renewal of existing BMs or to the innovation of completely new ones.
BMs have emerged as a popular object of research within the last decade. Despite the popularity of the BM, the theoretical and empirical foundation underlying the concept is still weak. In particular, the innovation process for BMs has been developed and implemented in firms, but understanding of the mechanisms behind it is still lacking. Business model innovation (BMI) is a complex and challenging management task that requires more than just novel ideas. Systematic studies to generate a better understanding of BMI and support incumbents with appropriate concepts to improve BMI development are in short supply. Further, there is a lack of knowledge about appropriate research practices for studying BMI and generating valid data sets in order to meet expectations in both practice and academia.
This paper-based dissertation aims to contribute to research practice in the field of BM and BMI and foster better understanding of the BM concept and BMI processes in incumbent firms from mature industries. The overall dissertation presents three main results. The first result is a new perspective, or the systems thinking view, on the BM and BMI. With the systems thinking view, the fuzzy BM concept is clearly structured and a BMI framework is proposed. The second result is a new research strategy for studying BMI. After analyzing current research practice in the areas of BMs and BMI, it is obvious that there is a need for better research on BMs and BMI in terms of accuracy, transparency, and practical orientation. Thus, the action case study approach combined with abductive methodology is proposed and proven in the research setting of this thesis. The third result stems from three action case studies in incumbent firms from mature industries employed to study how BMI occurs in practice. The new insights and knowledge gained from the action case studies help to explain BMI in such industries and increase understanding of the core of these processes.
By studying these issues, the articles compiled in this thesis contribute conceptually and empirically to the recently consolidated but still growing literature on the BM and BMI. The conclusions and implications are intended to foster further research and improve managerial practices for achieving BMI in a dramatically changing business environment.
What shapes peace, and how can peace be successfully built in those countries affected by armed conflict? This paper examines peacebuilding in the aftermath of civil wars in order to identify the conditions for post-conflict peace. The field of civil war research is characterised by case studies, comparative analyses and quantitative research, which relate relatively little to each other. Furthermore, the complex dynamics of peacebuilding have hardly been investigated so far. Thus, the question remains of how best to enhance the prospects of a stable peace in post-conflict societies. Therefore, it is necessary to capture the dynamics of post-conflict peace. This paper aims at helping to narrow these research gaps by 1) presenting the benefits of set-theoretic methods for peace and conflict studies; 2) identifying remote conflict environment factors and proximate peacebuilding factors which have an influence on the peacebuilding process; and 3) proposing a set-theoretic multi-method research approach in order to identify the causal structures and mechanisms underlying the complex realm of post-conflict peacebuilding. By implementing this transparent and systematic comparative approach, it will become possible to discover the dynamics of post-conflict peace.
In this thesis we use integral-field spectroscopy to detect and understand Lyman α (Lyα) emission from high-redshift galaxies.
Intrinsically, the Lyα emission at λ = 1216 Å is the strongest recombination line from galaxies. It arises from the 2p → 1s transition in hydrogen. In star-forming galaxies the line is powered by ionisation of the interstellar gas by hot O- and B-type stars. Galaxies with star-formation rates of 1 - 10 Msol/year are expected to have Lyα luminosities of 42 dex - 43 dex (erg/s), corresponding to fluxes ~ -17 dex - -18 dex (erg/s/cm²) at redshifts z~3, where Lyα is easily accessible with ground-based telescopes. However, star-forming galaxies do not show these expected Lyα fluxes. Primarily, this is a consequence of the high absorption cross-section of neutral hydrogen for Lyα photons, σ ~ -14 dex (cm²). Therefore, in typical interstellar environments Lyα photons have to undergo a complex radiative transfer. The exact conditions under which Lyα photons can escape a galaxy are poorly understood.
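As a rough consistency check of the quoted orders of magnitude, one can convert a Lyα luminosity to an observed flux via the inverse-square law. The luminosity distance of roughly 26 Gpc at z ≈ 3 (standard flat ΛCDM, H0 ≈ 70 km/s/Mpc) is an assumption for illustration, not a value taken from the thesis:

```python
import math

# Order-of-magnitude check of the quoted Lya fluxes.
# Assumed (not from the thesis): luminosity distance d_L ~ 26 Gpc at z ~ 3.
GPC_IN_CM = 3.086e27
d_L = 26.0 * GPC_IN_CM                     # cm

L_lya = 10 ** 42.0                         # erg/s, lower end of quoted range
flux = L_lya / (4 * math.pi * d_L ** 2)    # erg/s/cm^2, F = L / (4 pi d_L^2)

print(f"log10(flux) = {math.log10(flux):.1f}")  # about -17 dex
```

The result lands near -17 dex, consistent with the flux scale quoted for the bright end of the luminosity range.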
Here we present results from three observational projects. In Chapter 2, we show integral field spectroscopic observations of 14 nearby star-forming galaxies in Balmer α radiation (Hα, λ = 6562.8 Å). These observations were obtained with the Potsdam Multi Aperture Spectrophotometer at the Calar-Alto 3.5m Telescope. Hα directly traces the intrinsic Lyα radiation field. We present Hα velocity fields and velocity dispersion maps spatially registered onto Hubble Space Telescope Lyα and Hα images. From our observations, we conjecture a causal connection between spatially resolved Hα kinematics and Lyα photometry for individual galaxies. Statistically, we find that dispersion-dominated galaxies are more likely to emit Lyα photons than galaxies where ordered gas-motions dominate. This result indicates that turbulence in actively star-forming systems favours an escape of Lyα radiation.
Not only massive stars can power Lyα radiation, but also non-thermal emission from an accreting super-massive black hole in the galaxy centre. If a galaxy harbours such an active galactic nucleus, the rate of hydrogen-ionising photons can be more than 1000 times higher than that of a typical star-forming galaxy. This radiation can potentially ionise large regions well outside the main stellar body of galaxies. Therefore, it is expected that the neutral hydrogen in these circum-galactic regions shines fluorescently in Lyα. Circum-galactic gas plays a crucial role in galaxy formation. It may act as a reservoir for fuelling star formation, and it is also subject to feedback processes that expel galactic material. If Lyα emission from this circum-galactic medium (CGM) were detected, these important processes could be studied in situ around high-z galaxies. In Chapter 3, we show observations of five radio-quiet quasars with PMAS to search for possible extended CGM emission in the Lyα line. However, in four of the five objects, we find no significant traces of this emission. In the fifth object, there is evidence for a weak and spatially quite compact Lyα excess at several kpc outside the nucleus. The faintness of these structures is consistent with the idea that radio-quiet quasars typically reside in dark matter haloes of modest masses. While we were not able to detect Lyα CGM emission, our upper limits provide constraints for the new generation of IFS instruments at 8-10m class telescopes.
The Multi Unit Spectroscopic Explorer (MUSE) at ESO’s Very Large Telescope is such a unique instrument. One of the main motivating drivers in its construction was its use as a survey instrument for Lyα emitting galaxies at high-z. Currently, we are conducting such a survey that will cover a total area of ~100 square arcminutes with 1 hour exposures for each 1 square arcminute MUSE pointing. As a first result from this survey, we present in Chapter 5 a catalogue of 831 emission-line selected galaxies from a 22.2 square arcminute region in the Chandra Deep Field South. In order to construct the catalogue, we developed and implemented a novel source detection algorithm -- LSDCat -- based on matched filtering for line emission in 3D spectroscopic datasets (Chapter 4). Our catalogue contains 237 Lyα emitting galaxies in the redshift range 3 ≲ z ≲ 6. Only four of those previously had spectroscopic redshifts in the literature. We conclude this thesis with an outlook on the construction of a Lyα luminosity function based on this unique sample (Chapter 6).
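The matched-filtering idea behind such a detection scheme can be sketched as follows. This is a simplified stand-in, not the LSDCat code: the template is approximated by a separable Gaussian (spectral line profile × spatial PSF), the noise is taken as homogeneous, and all names and parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def matched_filter_snr(cube, sigma_spectral=1.5, sigma_spatial=2.0):
    """Cross-correlate a datacube (wavelength, y, x) with a separable
    Gaussian template and return an approximate signal-to-noise cube.
    Simplifications: Gaussian PSF/line profile, homogeneous noise."""
    smoothed = gaussian_filter(cube, sigma=(sigma_spectral,
                                            sigma_spatial, sigma_spatial))
    # Robust empirical estimate of the noise level in the filtered cube.
    noise = 1.4826 * np.median(np.abs(smoothed - np.median(smoothed)))
    return smoothed / noise

# Inject a bright unresolved emission line into a pure-noise cube ...
rng = np.random.default_rng(0)
cube = rng.normal(0.0, 1.0, size=(40, 50, 50))
cube[20, 25, 25] += 50.0
snr = matched_filter_snr(cube)
# ... and recover it as a high-S/N peak near the injected position.
```

Filtering with a template matched to the expected source shape maximises the signal-to-noise of unresolved emission lines, which is what makes thresholding the resulting S/N cube an effective detection step.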
Based on theories of scientific discovery learning (SDL) and conceptual change, this study explores students' preconceptions in the domain of torques in physics and the development of these conceptions while learning with a computer-based SDL task. As a framework we used a three-space theory of SDL and focused on model space, which is supposed to contain the current conceptualization/model of the learning domain, and on its change through hypothesis testing and experimenting. Three questions were addressed: (1) What are students' preconceptions of torques before learning about this domain? To address this question, a multiple-choice test for assessing students' models of torques was developed and given to secondary school students (N = 47) who learned about torques using computer simulations. (2) How do students' models of torques develop during SDL? Working with simulations led to replacement of some misconceptions with physically correct conceptions. (3) Are there differential patterns of model development and, if so, how do they relate to students’ use of the simulations? By analyzing individual differences in model development, we found that intensive use of the simulations was associated with the acquisition of correct conceptions. Thus, the three-space theory provided a useful framework for understanding conceptual change in SDL.
This survey on the theme of Geometry Education (including new technologies) focuses chiefly on the time span since 2008. Based on our review of the research literature published during this time span (in refereed journal articles, conference proceedings and edited books), we have jointly identified seven major threads of contributions that span from the early years of learning (pre-school and primary school) through to post-compulsory education and to the issue of mathematics teacher education for geometry. These threads are as follows: developments and trends in the use of theories; advances in the understanding of visuo-spatial reasoning; the use and role of diagrams and gestures; advances in the understanding of the role of digital technologies; advances in the understanding of the teaching and learning of definitions; advances in the understanding of the teaching and learning of the proving process; and, moving beyond traditional Euclidean approaches. Within each theme, we identify relevant research and also offer commentary on future directions.
The three-space theory of problem solving predicts that the quality of a learner's model and the goal specificity of a task interact on knowledge acquisition. In Experiment 1 participants used a computer simulation of a lever system to learn about torques. They either had to test hypotheses (nonspecific goal), or to produce given values for variables (specific goal). In the good- but not in the poor-model condition they saw torque depicted as an area. Results revealed the predicted interaction. A nonspecific goal only resulted in better learning when a good model of torques was provided. In Experiment 2 participants learned to manipulate the inputs of a system to control its outputs. A nonspecific goal to explore the system helped performance when compared to a specific goal to reach certain values when participants were given a good model, but not when given a poor model that suggested the wrong hypothesis space. Our findings support the three-space theory. They emphasize the importance of understanding for problem solving and stress the need to study underlying processes.
Recent studies of short-term serial order memory have suggested that the maintenance of order information does not involve domain-specific processes. We carried out two dual task experiments aimed at resolving several ambiguities in those studies. In our experiments, encoding and response of one serial reconstruction task was embedded within encoding and response of a concurrent serial reconstruction task. Order demands in both tasks were independently varied so as to find revealing patterns of interference between the two tasks. In Experiment 1, participants were to maintain and reconstruct the order of a list of verbal materials while maintaining a list of spatial materials, or vice versa. Increasing the order demands in the outer reconstruction task resulted in small or non-reliable performance decrements in the embedded reconstruction task. Experiment 2 sought to compare these results against two same-domain baseline conditions (two verbal lists or two spatial lists). In all conditions, increasing order demands in the outer task resulted in small or non-reliable performance decrements in the embedded task. However, performance in the embedded tasks was generally lower in the same-domain baseline conditions than in the cross-domain conditions. We argue that the main effect of domain in Experiment 2 indicates the contribution of domain-specific processes to short-term serial order maintenance. In addition, we interpret the failure to find consistent cross-list interference irrespective of domain as indicating the involvement of grouping mechanisms in concurrently performed serial order tasks.
In a series of experiments, we tested a recently proposed hypothesis stating that the degree of alignment between the form of a mental representation resulting from learning with a particular visualization format and the specific requirements of a learning task determines learning performance (task-appropriateness). Groups of participants were required to learn the stroke configuration, the stroke order, or the stroke directions of a set of Chinese pseudocharacters. For each learning task, participants were divided into groups receiving dynamic, static-sequential, or static visualizations. An old/new character recognition task was given at test. The results showed that learning both stroke configuration and stroke order was best with static pictures (Experiments 1 and 2), while there was no reliable difference between the groups for learning stroke direction (Experiment 3). An additional experiment, however, revealed that learning with sequential pictures was superior when testing was carried out with sequential pictures, irrespective of the learning task (Experiment 4). The combined evidence from all experiments speaks against task requirements playing a role in determining the effectiveness of a visualization format. Furthermore, the evidence supports the view that a high degree of congruence between information presented during learning and information presented at test results in better learning (study-test congruence). Implications for instructional design are discussed.
Let A be a nonlinear differential operator on an open set X ⊂ Rⁿ and S a closed subset of X. Given a class F of functions in X, the set S is said to be removable for F relative to A if any weak solution of A(u) = 0 in X \ S of class F satisfies this equation weakly in all of X. For the most extensively studied classes F, we show conditions on S which guarantee that S is removable for F relative to A.
The current study investigates an interaction between numbers and physical size (i.e. size congruity) in visual search. In three experiments, participants had to detect a physically large (or small) target item among physically small (or large) distractors in a search task comprising single-digit numbers. The relative numerical size of the digits was varied, such that the target item was either among the numerically large or small numbers in the search display and the relation between numerical and physical size was either congruent or incongruent. Perceptual differences of the stimuli were controlled by a condition in which participants had to search for a differently coloured target item with the same physical size and by the usage of LCD-style numbers that were matched in visual similarity by shape transformations. The results of all three experiments consistently revealed that detecting a physically large target item is significantly faster when the numerical size of the target item is large as well (congruent), compared to when it is small (incongruent). This novel finding of a size congruity effect in visual search demonstrates an interaction between numerical and physical size in an experimental setting beyond typically used binary comparison tasks, and provides important new evidence for the notion of shared cognitive codes for numbers and sensorimotor magnitudes. Theoretical consequences for recent models on attention, magnitude representation and their interactions are discussed.
We show that self-consistent partial synchrony in globally coupled oscillatory ensembles is a general phenomenon. We analyze in detail the appearance and stability properties of this state in possibly the simplest setup of a biharmonic Kuramoto-Daido phase model, and also demonstrate the effect in limit-cycle relaxational Rayleigh oscillators. Such a regime extends the notion of a splay state from a uniform distribution of phases to an oscillating one. Suitable collective observables, such as the Kuramoto order parameter, allow detecting the presence of an inhomogeneous distribution. The characteristic and most peculiar property of self-consistent partial synchrony is the difference between the frequency of single units and that of the macroscopic field.
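The Kuramoto order parameter mentioned above, Z = (1/N) Σ_j exp(iθ_j), distinguishes a uniform splay distribution (|Z| ≈ 0) from a synchronized ensemble (|Z| = 1); a minimal sketch:

```python
import numpy as np

def kuramoto_order_parameter(phases):
    """R = |<exp(i*theta)>| over the ensemble: R = 1 for full synchrony,
    R ~ 0 for a uniform (splay-like) phase distribution."""
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

sync = np.zeros(100)                                    # all phases identical
splay = np.linspace(0, 2 * np.pi, 100, endpoint=False)  # uniformly spread

R_sync = kuramoto_order_parameter(sync)    # 1.0
R_splay = kuramoto_order_parameter(splay)  # 0 up to rounding error
```

In the self-consistent partially synchronous state described above, |Z| is nonzero but oscillates in time, reflecting the inhomogeneous, non-stationary phase distribution.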
Conclusion
(2016)
This chapter revisits the role of the new modes of governance in areas of limited statehood. First, it states that there is no linear relationship between degrees of statehood and the overall effectiveness of new modes of sustainability governance. Second, the chapter states that, in most of the cases, national governments are hesitant or even actively hamper the development of new modes of governance. Third, it shows that the absence of the shadow of hierarchy can indeed lead to ineffective new modes of governance. However, the shadow of hierarchy does not necessarily need to be cast by states. Finally, the author reviews the complexities involved in participatory practices, stressing the importance of institutional structures and knowledgeable brokers. The chapter concludes by outlining fields for future research.
Introduction
(2016)
The Paris Agreement for Climate Change or the Sustainable Development Goals (SDGs) rely on new modes of governance for implementation. Indeed, new modes of governance such as market-based instruments, public-private partnerships or multi-stakeholder initiatives have been praised for playing a pivotal role in effective and legitimate sustainability governance. Yet, do they also deliver in areas of limited statehood? States such as Malaysia or the Dominican Republic partly lack the ability to implement and enforce rules; their statehood is limited. This introduction provides the analytical framework of this volume and critically examines the performance of new modes of governance in areas of limited statehood, drawing on the book’s in-depth case studies on issues of climate change, biodiversity, and health.
This chapter investigates the trajectory of establishing the Forest Stewardship Council (FSC) in the early 1990s as the first private transnational certification organization with an antagonistic stakeholder body. Its main contribution is a micro-analysis of the founding assembly in 1993. By investigating the role of brokers within the negotiation as one institutional scope condition for ‘arguing’ having occurred, the chapter adopts a dramaturgical approach. It contends that the authority of brokers is not necessarily institutionally given, but needs to be gained: brokers have to prove situationally that their knowledge is relevant and that they are speaking impartially in the interest of progress rather than their own. The chapter stresses the importance of procedural knowledge which brokers provide in contrast to policy knowledge.
The determination of the total carbon storage of peatlands is of high relevance in the context of climate-change mitigation efforts. This determination relies on data about stratigraphy and peat properties, which are conventionally collected by coring. Ground-penetrating radar (GPR) and electrical resistivity imaging (ERI) can support these point data by providing subsoil information in two-dimensional cross-sections. In this study, GPR and ERI were conducted at two groundwater-fed fen sites located in the temperate zone in north-east Germany. The fens of this region are embedded in low conductive glacial sand and are characterised by thick layers of gyttja, which can be either mineral or organic. The two study sites are representative of this region with respect to stratigraphy (total thickness, peat and gyttja types) and ecological conditions (pH-value, trophic condition). The aim of this study is to assess the suitability of GPR and ERI to detect stratigraphy and peat properties under these characteristic site conditions. Results show that GPR clearly detects the interfaces between (i) Carex and brown-moss peat, (ii) brown-moss peat and organic gyttja, (iii) organic and mineral gyttja, and (iv) mineral gyttja and the parent material (glacial sand). These layers differ in bulk density and the related organic matter content. ERI, however, does not delineate these layers; rather it delineates regions of varying properties. At our base-rich site, pore fluid conductivity and cation exchange capacity are the main factors that determine peat electrical conductivity (the reverse of resistivity), whereas organic matter and water content are most influential at the more acidic site. Thus the correlation between peat properties and electrical conductivity is driven by site-specific conditions, which are mainly determined by the solute load in the groundwater at fens.
When the total organic deposits exceed a thickness of 5 m, the depth of investigation by GPR is limited due to increasing attenuation. This is not a limiting factor for ERI, where the transition from organic deposits to glacial sand is visible at both sites. Due to these specific sensitivities, a combined application of GPR and ERI meets the demand for up-to-date information on carbon storage of peatlands, which is, moreover, very site-specific because of the inherent variety of ecological conditions and stratigraphy between peatlands in general and between fens and bogs in particular.
Mycobacterium tuberculosis (M. tuberculosis) is the intracellular bacterium responsible for tuberculosis disease (TD). Inside the phagosomes of activated macrophages, M. tuberculosis is exposed to cytotoxic hydroperoxides such as hydrogen peroxide, fatty acid hydroperoxides and peroxynitrite. Thus, the characterization of the bacterial antioxidant systems could facilitate novel drug developments. In this work, we characterized the product of the gene Rv1608c from M. tuberculosis, which according to sequence homology had been annotated as a putative peroxiredoxin of the peroxiredoxin Q subfamily (PrxQ B from M. tuberculosis or MtPrxQ B). The protein has been reported to be essential for M. tuberculosis growth in cholesterol-rich medium. We demonstrated the M. tuberculosis thioredoxin B/C-dependent peroxidase activity of MtPrxQ B, which acted as a two-cysteine peroxiredoxin that could function, although less efficiently, using a one-cysteine mechanism. Through steady-state and competition kinetic analysis, we proved that the net forward rate constant of the MtPrxQ B reaction was three orders of magnitude larger for fatty acid hydroperoxides than for hydrogen peroxide (3 × 10⁶ vs 6 × 10³ M⁻¹ s⁻¹, respectively), while the rate constant of peroxynitrite reduction was (0.6–1.4) × 10⁶ M⁻¹ s⁻¹ at pH 7.4. The enzyme lacked activity towards cholesterol hydroperoxides solubilized in sodium deoxycholate. Both thioredoxin B and C rapidly reduced the oxidized form of MtPrxQ B, with rate constants of 0.5 × 10⁶ and 1 × 10⁶ M⁻¹ s⁻¹, respectively. Our data indicated that MtPrxQ B is monomeric in solution in both the reduced and oxidized states. In spite of this similar hydrodynamic behavior, the reduced and oxidized forms of the protein showed important structural differences that were reflected in the protein circular dichroism spectra.
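The selectivity implied by the rate constants quoted above can be illustrated with a back-of-the-envelope calculation (the rate constants are taken from the abstract; the 1 µM peroxide concentration is an assumed, purely illustrative figure, not a value from the study):

```python
import math

# Second-order rate constants for MtPrxQ B quoted in the abstract (M^-1 s^-1)
k_fa = 3e6    # fatty acid hydroperoxides
k_h2o2 = 6e3  # hydrogen peroxide

# Selectivity: how much faster fatty acid hydroperoxides are reduced
selectivity = k_fa / k_h2o2
print(f"selectivity = {selectivity:.0f}-fold")  # roughly 500-fold

# Pseudo-first-order half-lives at an assumed (hypothetical) 1 uM peroxide
conc = 1e-6  # M
for name, k in [("fatty acid hydroperoxide", k_fa), ("H2O2", k_h2o2)]:
    k_obs = k * conc              # pseudo-first-order rate, s^-1
    t_half = math.log(2) / k_obs  # half-life, s
    print(f"{name}: k_obs = {k_obs:.1e} 1/s, t1/2 = {t_half:.3g} s")
```

At equal substrate concentrations, the fatty acid hydroperoxide would thus be consumed roughly 500 times faster than hydrogen peroxide, consistent with the abstract's order-of-magnitude claim.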
Reconstructing and understanding the Human Physiome virtually is a complex mathematical problem, and a highly demanding computational challenge. Mathematical models spanning from the molecular level through to whole populations of individuals must be integrated, then personalized. This requires interoperability with multiple disparate and geographically separated data sources, and myriad computational software tools. Extracting and producing knowledge from such sources, even when the databases and software are readily available, is a challenging task. Despite the difficulties, researchers must frequently perform these tasks so that available knowledge can be continually integrated into the common framework required to realize the Human Physiome. Software and infrastructure that support the communities generating these models, together with the underlying standards used to format, describe and interlink the corresponding data and computer models, are pivotal to realizing the Human Physiome. They provide the foundations for integrating, exchanging and re-using data and models efficiently and correctly, while also supporting the dissemination of growing knowledge in these forms. In this paper, we explore the standards, software tooling, repositories and infrastructures that support this work, and detail what makes them vital to realizing the Human Physiome.
This dissertation examines the impact of the type of referring expression on the acquisition of word order variation in German-speaking preschoolers. A puzzle in the area of language acquisition concerns the production-comprehension asymmetry for non-canonical sentences like "Den Affen fängt die Kuh." (“The monkey, the cow catches.”): preschoolers usually have difficulties in accurately understanding non-canonical sentences until approximately age six (e.g., Dittmar et al., 2008), although they produce non-canonical sentences from around age three (e.g., Poeppel & Wexler, 1993; Weissenborn, 1990). This dissertation investigated the production and comprehension of non-canonical sentences to address this issue.
Three corpus analyses were conducted to investigate the impact of givenness, topic status and the type of referring expression on word order in the spontaneous speech of two- to four-year-olds and the child-directed speech produced by their mothers. The positioning of the direct object in ditransitive sentences was examined; in particular, sentences in which the direct object occurred before or after the indirect object in the sentence-medial positions and sentences in which it occurred in the sentence-initial position. The results reveal similar ordering patterns for children and adults. Word order variation was to a large extent predictable from the type of referring expression, especially with respect to the word order involving the sentence-medial positions. Information structure (e.g., topic status) had an additional impact only on word order variation that involved the sentence-initial position.
Two comprehension experiments were conducted to investigate whether the type of referring expression and topic status influence the comprehension of non-canonical transitive sentences in four- and five-year-olds. In the first experiment, the topic status of one of the sentential arguments was established via a preceding context sentence, and in the second experiment, the type of referring expression for the sentential arguments was additionally manipulated by using either a full lexical noun phrase (NP) or a personal pronoun. The results demonstrate that children’s comprehension of non-canonical sentences improved when the topic argument was realized as a personal pronoun, and this improvement was independent of the grammatical role of the arguments. However, children’s comprehension was not improved when the topic argument was realized as a lexical NP.
In sum, the results of both production and comprehension studies support the view that referring expressions may be seen as a sentence-level cue to word order and to the information status of the sentential arguments. The results highlight the important role of the type of referring expression on the acquisition of word order variation and indicate that the production-comprehension asymmetry is reduced when the type of referring expression is considered.
A convenient synthesis of gamma-spirolactams in only two steps was developed. Birch reduction of benzoic acids and immediate alkylation with chloroacetonitrile afforded cyclohexadienes in high yields. The products could be isolated by crystallization on a large scale in analytically pure form. Subsequent hydrogenation with platinum(IV) oxide as the catalyst reduced the nitrile functionality and the double bonds in the same step with excellent stereoselectivity. The relative configurations were determined unequivocally by X-ray analyses. Direct cyclization of the intermediately formed amino acids afforded the desired gamma-spirolactams in excellent overall yields. The procedure requires only a few steps and cheap reagents, and can be performed on a large scale, which makes it interesting for industrial processes.
Scripts that store knowledge of everyday events are fundamentally important for managing daily routines. Content event knowledge (i.e., knowledge about which events belong to a script) and temporal event knowledge (i.e., knowledge about the chronological order of events in a script) constitute qualitatively different forms of knowledge. However, there is limited information about each distinct process and the time course involved in accessing content and temporal event knowledge. Therefore, we analyzed event-related potentials (ERPs) in response to either correctly presented event sequences or event sequences that contained a content or temporal error. We found an N400, which was followed by a posteriorly distributed P600 in response to content errors in event sequences. By contrast, we did not find an N400 but an anteriorly distributed P600 in response to temporal errors in event sequences. Thus, the N400 seems to be elicited as a response to a general mismatch between an event and the established event model. We assume that the expectancy violation of content event knowledge, as indicated by the N400, induces the collapse of the established event model, a process indicated by the posterior P600. The expectancy violation of temporal event knowledge is assumed to induce an attempt to reorganize the event model in working memory, a process indicated by the frontal P600. (C) 2015 Elsevier Ltd. All rights reserved.
1. Plant-plant interactions may critically modify the impact of climate change on plant communities. However, the magnitude and even the direction of potential future interactions remain highly debated, especially for water-limited ecosystems. Predictions range from increasing facilitation to increasing competition with future aridification. 2. The different methodologies used for assessing plant-plant interactions under changing environmental conditions may affect the outcome, but they are not equally represented in the literature. Mechanistic experimental manipulations are rare compared with correlative approaches that infer future patterns from current observations along spatial climatic gradients. 3. Here, we utilize a unique climatic gradient in combination with a large-scale, long-term experiment to test whether predictions about plant-plant interactions yield similar results when using experimental manipulations, spatial gradients or temporal variation. We assessed shrub-annual interactions in three different sites along a natural rainfall gradient (spatial) during 9 years of varying rainfall (temporal) and 8 years of dry and wet manipulations of ambient rainfall (experimental) that closely mimicked regional climate scenarios. 4. The results were fundamentally different among all three approaches. Experimental water manipulations hardly altered shrub effects on annual plant communities for the assessed fitness parameters biomass and survival. Along the spatial gradient, shrub effects shifted from clearly negative to mildly facilitative towards drier sites, whereas temporal variation showed the opposite trend: more negative shrub effects in drier years. 5. Based on our experimental approach, we conclude that shrub-annual interactions will remain similar under climate change. In contrast, the commonly applied space-for-time approach based on spatial gradients would have suggested increasing facilitative effects with climate change.
We discuss potential mechanisms governing the differences among the three approaches. 6. Our study highlights the critical importance of long-term experimental manipulations for evaluating climate change impacts. Correlative approaches, for example along spatial or temporal gradients, may be misleading and overestimate the response of plant-plant interactions to climate change.
Defining species by their climatic niche is the simple and intuitive principle underlying Bioclimatic Envelope Model (BEM) predictions for climate change effects. However, these correlative models are often criticised for neglecting many ecological processes. Here, we apply the same niche principle to entire communities within a medium- to long-term climate manipulation study, where ecological processes are inherently included. In a nine-generation study in Israel, we manipulated rainfall (Drought −30%; Irrigation +30%; Control natural rainfall) at two sites which differ chiefly in rainfall quantity and variability. We analysed community responses to the manipulations by grouping species based on their climatic rainfall niche. These responses were compared to analyses based on single species, total densities, and commonly used taxonomic groupings. Climatic Niche Groups yielded clear and consistent results: within communities, those species distributed in drier regions performed relatively better in the drought treatment, and those from wetter climates performed better when irrigated. In contrast, analyses based on other principles revealed little insight into community dynamics. Notably, most relationships were weaker at the drier, more variable site, suggesting that enhanced adaptation to variability may buffer climate change impacts. We provide robust experimental evidence that using the climatic niches commonly applied in BEMs is a valid approach for detecting community changes in response to climate change. However, we also argue that additional empirical information needs to be gathered using experiments in situ to correctly assess community vulnerability. Climatic Niche Groups used in this way may therefore provide a powerful tool and directional testing framework to generalise and compare climate change impacts across habitats. (C) 2016 The Authors. Published by Elsevier GmbH.
Context. For the spectral analysis of high-resolution and high signal-to-noise (S/N) spectra of hot stars, state-of-the-art non-local thermodynamic equilibrium (NLTE) model atmospheres are mandatory. These are strongly dependent on the reliability of the atomic data that is used for their calculation. Aims. New Kr IV-VII oscillator strengths for a large number of lines enable us to construct more detailed model atoms for our NLTE model-atmosphere calculations. This enables us to search for additional Kr lines in observed spectra and to improve Kr abundance determinations. Methods. We calculated Kr IV-VII oscillator strengths to consider radiative and collisional bound-bound transitions in detail in our NLTE stellar-atmosphere models for the analysis of Kr lines that are exhibited in high-resolution and high S/N ultraviolet (UV) observations of the hot white dwarf RE 0503-289. Results. We reanalyzed the effective temperature and surface gravity and determined T_eff = 70 000 ± 2000 K and log (g / cm s⁻²) = 7.5 ± 0.1. We newly identified ten Kr V lines and one Kr VI line in the spectrum of RE 0503-289. We measured a Kr abundance of 3.3 ± 0.3 (logarithmic mass fraction). We discovered that the interstellar absorption toward RE 0503-289 has a multi-velocity structure within a radial-velocity interval of −40 km s⁻¹ < v_rad < +18 km s⁻¹. Conclusions. Reliable measurements and calculations of atomic data are a prerequisite for state-of-the-art NLTE stellar-atmosphere modeling. Observed Kr V-VII line profiles in the UV spectrum of the white dwarf RE 0503-289 were simultaneously well reproduced with our newly calculated oscillator strengths.
Optical biosensors based on porous silicon were fabricated by metal-assisted chemical etching. Thereby, double-layered porous silicon structures were obtained, consisting of porous pillars with large pores on top of a porous silicon layer with smaller pores. These structures showed a sensing performance similar to that of electrochemically produced porous silicon interferometric sensors.
This article explores a recent performance of excerpts from T.S. Eliot’s Four Quartets (1935/36–1942) entitled Engaging Eliot: Four Quartets in Word, Color, and Sound as an example of live poetry. In this context, Eliot’s poem can be analysed as an auditory artefact that interacts strongly with other oral performances (welcome addresses and artists’ conversations), as well as with the musical performance of Christopher Theofanidis’s quintet “At the Still Point” at the end of the opening of Engaging Eliot. The event served as an introduction to a 13-day art exhibition and engaged in a re-evaluation of Eliot’s poem after 9/11: while its first part emphasises the connection between Eliot’s poem and Christian doctrine, its second part – especially the combination of poetry reading and musical performance – highlights the philosophical and spiritual dimensions of Four Quartets.
TripleA is a workshop series founded by linguists from the University of Tübingen and the University of Potsdam. Its aim is to provide a forum for semanticists doing fieldwork on understudied languages, and its focus is on languages from Africa, Asia, Australia and Oceania. The second TripleA workshop was held at the University of Potsdam, June 3-5, 2015.
Relatedness strongly influences social behaviors in a wide variety of species. For most species, the highest typical degree of relatedness is between full siblings, who share 50% of their genes. However, kin recognition is poorly understood in species with unusually high relatedness between individuals: clonal organisms. Although there has been some investigation into clonal invertebrates and yeast, nothing is known about kin selection in clonal vertebrates. We show that a clonal fish, the Amazon molly (Poecilia formosa), can distinguish between different clonal lineages using multiple sensory modalities, associating with genetically identical sister clones. The fish also scale their aggressive behaviors according to their relatedness to other females: they are more aggressive towards non-related clones. Our results demonstrate that even in species with very small genetic differences between individuals, kin recognition can be adaptive. The discriminatory abilities of these fish and their regulation of costly behaviors provide a powerful example of natural selection in species with limited genetic diversity.
The population structure of the highly mobile marine mammal, the harbor porpoise (Phocoena phocoena), in the Atlantic shelf waters follows a pattern of significant isolation-by-distance. The population structure of harbor porpoises from the Baltic Sea, which is connected with the North Sea through a series of basins separated by shallow underwater ridges, however, is more complex. Here, we investigated the population differentiation of harbor porpoises in European Seas with a special focus on the Baltic Sea and adjacent waters, using a population genomics approach. We used 2872 single nucleotide polymorphisms (SNPs), derived from double digest restriction-site associated DNA sequencing (ddRAD-seq), as well as 13 microsatellite loci and mitochondrial haplotypes for the same set of individuals. Spatial principal components analysis (sPCA), and Bayesian clustering on a subset of SNPs suggest three main groupings at the level of all studied regions: the Black Sea, the North Atlantic, and the Baltic Sea. Furthermore, we observed a distinct separation of the North Sea harbor porpoises from the Baltic Sea populations, and identified splits between porpoise populations within the Baltic Sea. We observed a notable distinction between the Belt Sea and the Inner Baltic Sea sub-regions. Improved delineation of harbor porpoise population assignments for the Baltic based on genomic evidence is important for conservation management of this endangered cetacean in threatened habitats, particularly in the Baltic Sea proper. In addition, we show that SNPs outperform microsatellite markers and demonstrate the utility of RAD-tags from a relatively small, opportunistically sampled cetacean sample set for population diversity and divergence analysis.
The all-female Amazon molly (Poecilia formosa) originated from a single hybridization of two bisexual ancestors, Atlantic molly (Poecilia mexicana) and sailfin molly (Poecilia latipinna). As a gynogenetic species, the Amazon molly needs to copulate with a heterospecific male, but the genetic information of the sperm-donor does not contribute to the next generation, as the sperm only acts as the trigger for the diploid eggs’ embryogenesis. Here, we study the sequence evolution and gene expression of the duplicated genes coding for androgen receptors (ars) and other pathway-related genes, i.e., the estrogen receptors (ers) and cytochrome P450, family 19, subfamily A, aromatase genes (cyp19as), in the Amazon molly, in comparison to its bisexual ancestors. Mollies possess—like most other teleost fish—two copies of the ar, er, and cyp19a genes, i.e., arα/arβ, erα/erβ1, and cyp19a1 (also referred to as cyp19a1a)/cyp19a2 (also referred to as cyp19a1b), respectively. Non-synonymous single nucleotide polymorphisms (SNPs) among the ancestral bisexual species were generally predicted not to alter protein function. Some derived substitutions in P. mexicana and one in P. formosa are predicted to impact protein function. We also describe the gene expression pattern of the ars and pathway-related genes in various tissues (i.e., brain, gill, and ovary) and provide SNP markers for allele-specific expression research. As a general tendency, the levels of gene expression were lowest in gill and highest in ovarian tissues, while expression levels in the brain were intermediate in most cases. Expression levels in P. formosa were conserved where expression did not differ between the two bisexual ancestors. In those cases where gene expression levels significantly differed between the bisexual species, P. formosa expression was always comparable to the higher expression level among the two ancestors.
Interestingly, erβ1 was expressed neither in brain nor in gill in the analyzed three molly species, which implies a more important role of erα in the estradiol synthesis pathway in these tissues. Furthermore, our data suggest that interactions of steroid-signaling pathway genes differ across tissues, in particular the interactions of ars and cyp19as.
Hemidiaptomus diaptomid copepods are known to be excellent biological indicators for the highly biodiverse crustacean communities inhabiting Mediterranean temporary ponds (MTPs), an endangered inland water habitat whose conservation is considered a priority according to the "Habitat Directive" of the European Union. This study reports on the characterization of five polymorphic microsatellite loci in Hemidiaptomus gurneyi, to be used as markers for fine-scale studies on the population genetic structure and metapopulation dynamics of a typical and obligate MTP dweller. The five selected loci proved to be polymorphic in the species, with three to five polymorphic loci per studied population. Overall, the mean heterozygosity scored for all loci and populations was lower than that reported for the few other diaptomid species for which microsatellite loci have been described to date; this is possibly due to the intrinsically fragmented and isolated habitat inhabited by the species. Furthermore, the presence of indels within the flanking regions of the selected loci was scored. This study, while confirming the technical difficulties in finding proper microsatellite markers in copepods, provides for the first time a set of useful polymorphic microsatellite loci for a Hemidiaptomus species, thus enabling fine-scale phylogeographic and population genetics studies of this flagship crustacean taxon for MTPs.
Using data from the Berlin Speed Dating Study, we tested rival hypotheses concerning the effects of self-enhancement of attractiveness on dating outcomes. Three hundred eighty-two participants took part in one of 17 speed-dating sessions. After each speed-dating interaction, participants indicated how interesting they found the respective person as a long-term and short-term partner. Using social relations analyses, we computed perceiver effects (being more or less choosy) and target effects (being rated as more or less interesting) of long-term and short-term partner ratings. Self-enhancement was operationalized as the discrepancy between self-rated attractiveness and four components of actual attractiveness (observer-rated facial and vocal attractiveness, height, and body mass index). Results indicated that self-enhancers were less choosy with respect to their interest in short-term partners, which was especially true for men, but more choosy with respect to long-term partners. With regard to popularity as a mate, potential partners indicated that they found self-enhancers more interesting as short-term but not as long-term partners. As self-enhancement is a key component of narcissism, these results are consistent with findings that narcissists perceive many sexual affairs as an achievement while preferring selected ‘trophy’ long-term partners, and that narcissists have a charming appeal for short-term, but not lasting, social relationships.
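The discrepancy operationalization described above can be sketched in a few lines (a minimal illustrative computation, not the study's actual social-relations model: the ratings are hypothetical, and a single observer composite stands in for the four attractiveness components used in the study):

```python
from statistics import mean, stdev

def zscores(xs):
    """Standardize a list of ratings to mean 0, SD 1 (sample SD)."""
    m, s = mean(xs), stdev(xs)
    return [(x - m) / s for x in xs]

# Hypothetical ratings for five participants on 1-7 scales
self_rated = [6, 4, 7, 3, 5]  # self-rated attractiveness
observer = [4, 5, 6, 3, 5]    # observer-rated composite (assumed)

# Self-enhancement: standardized self-rating minus standardized observer
# rating; positive values indicate over-claiming one's attractiveness
enhancement = [a - b for a, b in zip(zscores(self_rated), zscores(observer))]
for i, e in enumerate(enhancement):
    print(f"participant {i}: self-enhancement = {e:+.2f}")
```

By construction the discrepancies sum to zero across the sample, so self-enhancement here is always relative to the group, which mirrors the logic of difference-score operationalizations of enhancement.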
Beyond the Crystal-Image (2016)
The cytoskeleton is an essential component of living cells. It is composed of different types of protein filaments that form complex, dynamically rearranging, and interconnected networks. The cytoskeleton serves a multitude of cellular functions which further depend on the cell context. In animal cells, the cytoskeleton prominently shapes the cell's mechanical properties and movement. In plant cells, in contrast, the presence of a rigid cell wall as well as their larger sizes highlight the role of the cytoskeleton in long-distance intracellular transport. As it provides the basis for cell growth and biomass production, cytoskeletal transport in plant cells is of direct environmental and economical relevance. However, while knowledge about the molecular details of the cytoskeletal transport is growing rapidly, the organizational principles that shape these processes on a whole-cell level remain elusive.
This thesis is devoted to the following question: How does the complex architecture of the plant cytoskeleton relate to its transport functionality? The answer requires a systems level perspective of plant cytoskeletal structure and transport. To this end, I combined state-of-the-art confocal microscopy, quantitative digital image analysis, and mathematically powerful, intuitively accessible graph-theoretical approaches.
This thesis summarizes five of my publications that shed light on the plant cytoskeleton as a transportation network: (1) I developed network-based frameworks for accurate, automated quantification of cytoskeletal structures, applicable in, e.g., genetic or chemical screens; (2) I showed that the actin cytoskeleton displays properties of efficient transport networks, hinting at its biological design principles; (3) Using multi-objective optimization, I demonstrated that different plant cell types sustain cytoskeletal networks with cell-type specific and near-optimal organization; (4) By investigating actual transport of organelles through the cell, I showed that properties of the actin cytoskeleton are predictive of organelle flow and provided quantitative evidence for a coordination of transport at a cellular level; (5) I devised a robust, optimization-based method to identify individual cytoskeletal filaments from a given network representation, allowing the investigation of single filament properties in the network context. The developed methods were made publicly available as open-source software tools.
Altogether, my findings and proposed frameworks provide quantitative, system-level insights into intracellular transport in living cells. Despite my focus on the plant cytoskeleton, the established combination of experimental and theoretical approaches is readily applicable to different organisms. While detailed molecular studies remain necessary, only a complementary, systemic perspective, as presented here, enables both an understanding of cytoskeletal function in its evolutionary context and its future technological control and utilization.
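One transport-oriented network measure of the kind alluded to above can be sketched compactly (a toy illustration under my own simplifications, not the thesis' published pipeline or software): treating filament segments as nodes of an unweighted graph, the global efficiency — the average inverse shortest-path distance over all node pairs — quantifies how well a network supports point-to-point transport.

```python
from collections import deque
from itertools import combinations

def bfs_distances(adj, source):
    """Unweighted shortest-path distances from source via breadth-first search."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def global_efficiency(adj):
    """Average inverse shortest-path distance over all node pairs."""
    nodes = list(adj)
    total, pairs = 0.0, 0
    for u, v in combinations(nodes, 2):
        d = bfs_distances(adj, u).get(v)
        if d:  # unreachable pairs contribute 0
            total += 1.0 / d
        pairs += 1
    return total / pairs if pairs else 0.0

# Toy "cytoskeleton": a small branched network of filament-segment nodes
network = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1, 4], 4: [3]}
print(round(global_efficiency(network), 3))  # 0.667
```

Comparing such a measure between observed networks and randomized or rewired controls is a standard way to ask whether an observed architecture is unusually transport-efficient.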
It is "scientific folklore" coming from physical heuristics that solutions to the heat equation on a Riemannian manifold can be represented by a path integral. However, the problem with such path integrals is that they are notoriously ill-defined. One way to make them rigorous (which is often applied in physics) is finite-dimensional approximation, or time-slicing approximation: Given a fine partition of the time interval into small subintervals, one restricts the integration domain to paths that are geodesic on each subinterval of the partition. These finite-dimensional integrals are well-defined, and the (infinite-dimensional) path integral then is defined as the limit of these (suitably normalized) integrals, as the mesh of the partition tends to zero.
In this thesis, we show that indeed, solutions to the heat equation on a general compact Riemannian manifold with boundary are given by such time-slicing path integrals. Here we consider the heat equation for general Laplace type operators, acting on sections of a vector bundle. We also obtain similar results for the heat kernel, although in this case, one has to restrict to metrics satisfying a certain smoothness condition at the boundary. One of the most important manipulations one would like to do with path integrals is taking their asymptotic expansions; in the case of the heat kernel, this is the short time asymptotic expansion. In order to use time-slicing approximation here, one needs the approximation to be uniform in the time parameter. We show that this is possible by giving strong error estimates.
Finally, we apply these results to obtain short time asymptotic expansions of the heat kernel also in degenerate cases (i.e. at the cut locus). Furthermore, our results allow us to relate the asymptotic expansion of the heat kernel to a formal asymptotic expansion of the infinite-dimensional path integral, which yields relations between geometric quantities on the manifold and on the loop space. In particular, we show that the lowest order term in the asymptotic expansion of the heat kernel is essentially given by the Fredholm determinant of the Hessian of the energy functional. We also investigate how this relates to the zeta-regularized determinant of the Jacobi operator along minimizing geodesics.
In this thesis, the two prototype catalysts Fe(CO)₅ and Cr(CO)₆ are investigated with time-resolved photoelectron spectroscopy at a high harmonic setup. In both of these metal carbonyls, a UV photon can induce the dissociation of one or more ligands of the complex. The mechanism of the dissociation has been debated over the last decades. The electronic dynamics of the first dissociation occur on the femtosecond timescale.
For the experiment, an existing high harmonic setup was moved to a new location, extended, and characterized. The modified setup can induce dynamics in gas phase samples with photon energies of 1.55 eV, 3.10 eV, and 4.65 eV. The valence electronic structure of the samples can be probed with photon energies between 20 eV and 40 eV. The temporal resolution is 111 fs to 262 fs, depending on the combination of the two photon energies.
The electronically excited intermediates of the two complexes, as well as of the reaction product Fe(CO)₄, could be observed with photoelectron spectroscopy in the gas phase for the first time. However, photoelectron spectroscopy gives access only to the final ionic states; corresponding calculations to simulate these spectra are still in development. The peak energies and their evolution in time with respect to the initiating pump pulse have been determined, and the peaks have been assigned based on literature data. The spectra of the two complexes show clear differences. The dynamics have been interpreted under the assumption that the motion of peaks in the spectra reflects the movement of the wave packet in the multidimensional energy landscape. The results largely confirm existing models for the reaction pathways. In both metal carbonyls, the pathway involves a direct excitation of the wave packet to a metal-to-ligand charge transfer state and a subsequent crossing to a dissociative ligand field state. The coupling of the electronic dynamics to the nuclear dynamics could explain the slower dissociation in Fe(CO)₅ as compared to Cr(CO)₆.
The main results of this thesis are formulated for a class of surfaces (varifolds) that generalizes closed and connected smooth submanifolds of Euclidean space while allowing singularities. The hypotheses are: an indecomposable varifold of dimension at least two in some Euclidean space whose first variation is locally bounded, whose total variation is absolutely continuous with respect to the weight measure, whose weight density is at least one outside a set of weight measure zero, and whose generalized mean curvature is locally summable to a natural power (the dimension of the varifold minus one) with respect to the weight measure. The thesis presents an improved estimate, in terms of the one-dimensional Hausdorff measure, of the set where the lower density is small. Moreover, if the support of the weight measure is compact, then the intrinsic diameter with respect to the support of the weight measure is estimated in terms of the generalized mean curvature. This estimate is analogous to Peter Topping's diameter control for closed connected manifolds smoothly immersed in Euclidean space. Previously, it was not known whether the hypotheses in this thesis imply that two points in the support of the weight measure have finite geodesic distance.
We consider the Navier-Stokes equations in the layer R^n x [0,T] over R^n with finite T > 0. Using the standard fundamental solutions of the Laplace operator and the heat operator, we reduce the Navier-Stokes equations to a nonlinear Fredholm equation of the form (I+K) u = f, where K is a compact continuous operator in anisotropic normed Hölder spaces weighted at the point at infinity with respect to the space variables. The weight function is included to provide a finite energy estimate for solutions to the Navier-Stokes equations for all t in [0,T]. Using particular properties of the de Rham complex, we conclude that the Fréchet derivative (I+K)' is continuously invertible at each point of the Banach space under consideration and that the map I+K is open and injective in the space. In this way the Navier-Stokes equations prove to induce an open one-to-one mapping in the scale of Hölder spaces.
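As a toy numerical illustration of why equations of the form (I+K)u = f with compact K are tractable: after discretization, I+K is a well-conditioned linear system. The smooth kernel, grid, and right-hand side below are invented for the demonstration and have nothing to do with the thesis's function-space construction.

```python
import numpy as np

# Toy Fredholm equation of the second kind: (I + K) u = f, with the compact
# integral operator (K u)(x) = ∫_0^1 k(x, y) u(y) dy.
# The kernel k and right-hand side f are illustrative placeholders.

n = 200
x = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1))   # trapezoidal quadrature weights
w[0] *= 0.5
w[-1] *= 0.5

k = np.exp(-(x[:, None] - x[None, :]) ** 2)   # smooth kernel -> compact operator
K = k * w[None, :]                            # Nystroem discretization of K
f = np.sin(2 * np.pi * x)

# The Gaussian kernel is positive definite, so I + K is invertible here.
u = np.linalg.solve(np.eye(n) + K, f)

residual = np.linalg.norm(u + K @ u - f)
```

The residual of the discrete equation is at machine-precision level, reflecting that the compact perturbation K leaves the identity's invertibility intact.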
Proteins are amphiphilic and adsorb at liquid interfaces. Therefore, they can be efficient stabilizers of foams and emulsions. β-lactoglobulin (BLG) is one of the most widely studied proteins due to its major industrial applications, in particular in food technology.
In the present work, the influence of bulk concentration, solution pH, and ionic strength on the dynamic and equilibrium pressures of BLG adsorption layers at the solution/tetradecane (W/TD) interface has been investigated. The dynamic interfacial pressure (Π) and interfacial dilational elastic modulus (E') of BLG solutions were measured with the Profile Analysis Tensiometer PAT-1 (SINTERFACE Technologies, Germany) for various concentrations at three pH values (3, 5, and 7) at a fixed ionic strength of 10 mM, and for a selected fixed concentration at three ionic strengths (1 mM, 10 mM, and 100 mM). A quantitative data analysis requires additional consideration of depletion due to BLG adsorption at the interface at low protein bulk concentrations; experiments are therefore more efficient when oil drops are studied in the aqueous protein solution rather than solution drops formed in oil. On the basis of the obtained experimental data, the concentration dependencies and the effect of solution pH on the protein's surface activity were qualitatively analysed. In the presence of 10 mM buffer, the adsorbed amount generally increases with increasing BLG bulk concentration for all three pH values. The adsorption kinetics at pH 5 result in the highest Π values at any time of adsorption, while the protein is less surface active at pH 3.
The experimental data are not in good agreement with the classical diffusion-controlled model, owing to the conformational changes that occur when the protein molecules come into contact with the hydrophobic oil phase and adapt to the interfacial environment; a new theoretical model is therefore proposed here. The adsorption kinetics data were analysed with this model, which is the classical diffusion model modified by assuming an additional change in the surface activity of BLG molecules upon adsorption at the interface. This effect can be expressed through the adsorption activity constant in the corresponding equation of state. The dilational visco-elasticity of the BLG adsorption layers is determined from dynamic interfacial tensions measured during sinusoidal drop area variations. The interfacial tension responses to these harmonic drop oscillations are interpreted with the same thermodynamic model that is used for the corresponding adsorption isotherm.
At a selected BLG concentration of 2×10⁻⁶ mol/l, the influence of ionic strength on the interfacial pressure was studied using buffer concentrations of 1, 10, and 100 mM. The interfacial pressure is only weakly affected at pH 5, whereas increasing the buffer concentration has a strong impact at pH 3 and 7. In conclusion, the structure formation of BLG adsorption layers in the early stage of adsorption at the W/TD interface is similar to that at the solution/air (W/A) surface. However, the equation of state at the W/TD interface yields an adsorption activity constant that is almost two orders of magnitude higher than that for the solution/air surface.
At the end of this work, a new experimental tool, the Drop and Bubble Micro Manipulator DBMM (SINTERFACE Technologies, Germany), is introduced to study the stability of protein-covered bubbles against coalescence. Among the available protocols, the lifetime between the moment of contact and the coalescence of two contacting bubbles is determined for different BLG concentrations. The adsorbed amount of BLG is determined as a function of time and concentration and correlates with the observed coalescence behaviour of the contacting bubbles.
The current study investigates to what extent masked morphological priming is modulated by language-particular properties, specifically by a language's writing system. We present results from two masked priming experiments investigating the processing of complex Japanese words written in less common (moraic) scripts. In Experiment 1, participants performed lexical decisions on target verbs; these were preceded by primes which were either (i) a past-tense form of the same verb, (ii) a stem-related form with the epenthetic vowel -i, (iii) a semantically related form, or (iv) a phonologically related form. Significant priming effects were obtained for prime types (i), (ii), and (iii), but not for (iv). This pattern of results differs from previous findings on languages with alphabetic scripts, which found reliable masked priming effects for morphologically related prime/target pairs of type (i), but not for non-affixal and semantically related primes of types (ii) and (iii). In Experiment 2, we measured priming effects for prime/target pairs which are neither morphologically, semantically, phonologically, nor (as presented in their moraic scripts) orthographically related, but which, in their commonly written form, share the same kanji, the logograms adopted from Chinese. The results showed a significant priming effect, with faster lexical-decision times for kanji-related prime/target pairs relative to unrelated ones. We conclude that affix-stripping is insufficient to account for masked morphological priming effects across languages, and that language-particular properties (in the case of Japanese, the writing system) affect the processing of (morphologically) complex words.
Since spring 2014, relations between the EU and Russia have been stuck in an ice age. From a Western point of view, the annexation of Crimea by Russia and its intervention in the conflict in Ukraine are chiefly responsible. The EU has frozen its relations with Russia and imposed sanctions against it; Russia has responded in kind. Can this vicious circle be broken without betraying the values of the EU? This book presents analyses and ideas from social scientists in Germany, Poland, and Russia. They assess the reasons for the crisis quite differently, but all try to find a way out of the current confrontation.
In this thesis, sentence processing was investigated using a psychophysiological measure known as pupillometry as well as event-related potentials (ERP). The scope of the thesis was broad: it investigated the processing of several different movement constructions with native speakers of English and second language learners of English, as well as word order and case marking in German-speaking adults and children. Pupillometry and ERP allowed us to test competing linguistic theories and to use novel methodologies to investigate the processing of word order. In doing so, we also aimed to establish pupillometry as an effective way to investigate the processing of word order, thus broadening the methodological spectrum.
Metal-containing ionic liquids (ILs) are of interest for a variety of technical applications, e.g., particle synthesis and materials with magnetic or thermochromic properties. In this paper we report the synthesis of, and two structures for, some new tetrabromidocuprates(II) with several “onium” cations, in comparison with the results of electron paramagnetic resonance (EPR) spectroscopic analyses. The sterically demanding cations were used to separate the paramagnetic Cu(II) ions for the EPR measurements. The EPR hyperfine structure in the spectra of these new compounds is not resolved, due to line broadening resulting from magnetic exchange between the still incompletely separated paramagnetic Cu(II) centres. For the majority of compounds, the principal values (g∥ and g⊥) of the g tensors could be determined, and information on the structural changes in the [CuBr₄]²⁻ anions can be obtained. The complexes have high potential, e.g., as ionic liquids, as precursors for the synthesis of copper bromide particles, or as catalytically active or paramagnetic ionic liquids.
Species can adjust their traits in response to selection which may strongly influence species coexistence. Nevertheless, current theory mainly assumes distinct and time-invariant trait values. We examined the combined effects of the range and the speed of trait adaptation on species coexistence using an innovative multispecies predator–prey model. It allows for temporal trait changes of all predator and prey species and thus simultaneous coadaptation within and among trophic levels. We show that very small or slow trait adaptation did not facilitate coexistence because the stabilizing niche differences were not sufficient to offset the fitness differences. In contrast, sufficiently large and fast trait adaptation jointly promoted stable or neutrally stable species coexistence. Continuous trait adjustments in response to selection enabled a temporally variable convergence and divergence of species traits; that is, species became temporally more similar (neutral theory) or dissimilar (niche theory) depending on the selection pressure, resulting over time in a balance between niche differences stabilizing coexistence and fitness differences promoting competitive exclusion. Furthermore, coadaptation allowed prey and predator species to cluster into different functional groups. This equalized the fitness of similar species while maintaining sufficient niche differences among functionally different species delaying or preventing competitive exclusion. In contrast to previous studies, the emergent feedback between biomass and trait dynamics enabled supersaturated coexistence for a broad range of potential trait adaptation and parameters. We conclude that accounting for trait adaptation may explain stable and supersaturated species coexistence for a broad range of environmental conditions in natural systems when the absence of such adaptive changes would preclude it. Small trait changes, coincident with those that may occur within many natural populations, greatly enlarged the number of coexisting species.
We prove statistical rates of convergence for kernel-based least squares regression from i.i.d. data using a conjugate gradient algorithm, where regularization against overfitting is obtained by early stopping. This method is related to Kernel Partial Least Squares, a regression method that combines supervised dimensionality reduction with least squares projection. Following the setting introduced in earlier related literature, we study so-called "fast convergence rates" depending on the regularity of the target regression function (measured by a source condition in terms of the kernel integral operator) and on the effective dimensionality of the data mapped into the kernel space. We obtain upper bounds, essentially matching known minimax lower bounds, for the L^2 (prediction) norm as well as for the stronger Hilbert norm, if the true regression function belongs to the reproducing kernel Hilbert space. If the latter assumption is not fulfilled, we obtain similar convergence rates for appropriate norms, provided additional unlabeled data are available.
The relationship between climate and forest productivity is an intensively studied subject in forest science. This thesis is embedded within the general framework of future forest growth under climate change and its implications for the ongoing forest conversion. My objective is to investigate future forest productivity at different spatial scales (from a single specific forest stand to aggregated information across Germany), with a focus on oak-pine forests in the federal state of Brandenburg. The overarching question is: how are oak-pine forests affected by climate change, as described by a variety of climate scenarios? I answer this question through a model-based analysis of tree growth processes and responses to different climate scenarios, with emphasis on drought events. In addition, a method is developed which accounts for climate change uncertainty in forest management planning.
As a first 'screening' of climate change impacts on forest productivity, I calculated the change in net primary production on the basis of a large set of climate scenarios for different tree species and the total area of Germany. Temperature increases of up to 3 K have positive effects on the net primary production of all selected tree species. In water-limited regions, however, this positive trend depends on the length of drought periods, which results in a larger uncertainty regarding future forest productivity. One of the regions with the highest uncertainty in net primary production development is the federal state of Brandenburg.
To enhance the understanding and the capability of model-based analysis of tree growth sensitivity to drought stress, two water uptake approaches in pure pine and mixed oak-pine stands are contrasted. The first approach consists of an empirical function for root water uptake. The second is more mechanistic and calculates the differences in soil water potential along a soil-plant-atmosphere continuum; here I assumed low, medium, and high levels of total root resistance. For validation purposes, three data sets on different tree-growth-relevant time scales are used. The results show that, except for the mechanistic water uptake approach with high total root resistance, all transpiration outputs exceeded observed values. On the other hand, high transpiration led to a better match with the observed soil water content. The strongest correlation between simulated and observed annual tree ring width occurred with the mechanistic water uptake approach and high total root resistance. The findings highlight the importance of severe drought as a main reason for small diameter increments, best captured by the mechanistic water uptake approach with high root resistance. However, if all aspects of the data sets are considered, no approach can be judged superior to the other. I conclude that the uncertainty of future productivity of water-limited forest ecosystems under changing environmental conditions is linked to simulated root water uptake.
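The contrast between the two uptake formulations can be sketched minimally. All functional forms, parameter names, and values below are illustrative placeholders, not the model's actual implementation.

```python
# Minimal sketch of the two root water uptake formulations contrasted above.

def uptake_empirical(soil_water_content, potential_uptake):
    """Empirical approach: uptake scales with relative soil moisture,
    here via a simple linear reduction between wilting point and field
    capacity (both values are invented for the sketch)."""
    wilting, field_capacity = 0.1, 0.35
    f = (soil_water_content - wilting) / (field_capacity - wilting)
    return potential_uptake * min(max(f, 0.0), 1.0)

def uptake_mechanistic(psi_soil, psi_root, total_root_resistance):
    """Mechanistic approach: flux driven by the water potential difference
    along the soil-plant continuum, divided by the total root resistance
    (an Ohm's-law analogy); units are arbitrary here."""
    return max((psi_soil - psi_root) / total_root_resistance, 0.0)

# A higher total root resistance throttles uptake (and hence transpiration)
# for the same potential gradient, as found in the thesis:
low_resistance_flux = uptake_mechanistic(-0.05, -1.5, total_root_resistance=5.0)
high_resistance_flux = uptake_mechanistic(-0.05, -1.5, total_root_resistance=50.0)
```

The mechanistic variant makes the role of total root resistance explicit, which is exactly the parameter varied across the low/medium/high levels in the text.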
Finally, my study addressed the impacts of climate change combined with management scenarios on an oak-pine forest, evaluating growth, biomass, and the amount of harvested timber. The pine and the oak trees are 104 and 9 years old, respectively. Three management scenarios with different thinning intensities and different climate scenarios are used to simulate the performance of management strategies which explicitly account for the risks associated with achieving three predefined objectives (maximum carbon storage, maximum harvested timber, intermediate). I found that in most cases there is no single management strategy which fits best to the different objectives. The analysis of variance in the growth-related model outputs showed an increase of climate uncertainty with increasing climate warming. Interestingly, the increase in climate-induced uncertainty is much higher from 2 to 3 K than from 0 to 2 K.
Characterization of the Clp protease complex and identification of putative substrates in N. tabacum
(2016)
Recently, due to increasing demands on functionality and flexibility, formerly isolated systems have become interconnected, yielding powerful adaptive System of Systems (SoS) solutions with an overall robust, flexible, and emergent behavior. An adaptive SoS comprises a variety of system types, ranging from small embedded to adaptive cyber-physical systems. On the one hand, each system is independent, follows a local strategy, and optimizes its behavior to reach its own goals. On the other hand, the systems must cooperate with each other to enrich the overall functionality and jointly perform on the SoS level, reaching global goals that cannot be satisfied by any one system alone. Because local and global behavior optimizations can be at odds, conflicts may arise between systems, and these have to be resolved by the adaptive SoS.
This thesis proposes a modeling language that facilitates the description of an adaptive SoS by treating its adaptation capabilities, in the form of feedback loops, as first-class entities. Moreover, the thesis adopts the Models@runtime approach to integrate the knowledge available in the systems, as runtime models, into the modeled adaptation logic. Furthermore, the modeling language focuses on the description of system interactions within the adaptive SoS, in order to reason about individual system functionality and how it emerges, via collaborations, into an overall joint SoS behavior. The modeling language approach thus enables the specification of local adaptive system behavior, the integration of knowledge in the form of runtime models, and the joint interactions via collaborations, placing the available adaptive behavior in an overall layered, adaptive SoS architecture.
Besides the modeling language, this thesis proposes analysis rules to investigate the modeled adaptive SoS, which enable the detection of architectural patterns as well as design flaws and pinpoint possible system threats. Moreover, a simulation framework is presented which allows the direct execution of the modeled SoS architecture. The analysis rules and the simulation framework can therefore be used to verify the interplay between systems as well as the modeled adaptation effects within the SoS. This thesis realizes the proposed concepts of the modeling language by mapping them to a state-of-the-art standard from the automotive domain, thus showing their applicability to actual systems. Finally, the modeling language approach is evaluated by remodeling current research scenarios from different domains, which demonstrates that the modeling language concepts are powerful enough to cope with a broad range of existing research problems.
In this thesis, a route to temperature-, pH-, solvent-, 1,2-diol-, and protein-responsive sensors made of biocompatible and low-fouling materials is established. These sensor devices are based on the sensitive modulation of the visual band gap of a photonic crystal (PhC), which is induced by the selective binding of analytes, triggering a volume phase transition.
The PhCs introduced in this work show a high sensitivity not only for small biomolecules, but also for large analytes, such as glycopolymers or proteins. This enables the PhC to act as a sensor that detects analytes without the need for complex equipment.
Due to their periodic dielectric structure, PhCs prevent the propagation of specific wavelengths. A change of the periodicity parameters is thus indicated by a change in the reflected wavelengths. In the case explored, the PhC sensors are implemented as periodically structured responsive hydrogels in the form of an inverse opal.
The stimuli-sensitive inverse opal hydrogels (IOHs) were prepared using a sacrificial opal template of monodisperse silica particles. First, monodisperse silica particles were assembled into a hexagonally packed structure via vertical deposition onto glass slides. The obtained silica crystals, also called colloidal crystals (CCs), exhibit structural color. Subsequently, the CC templates were embedded in a polymer matrix with low-fouling properties. The polymer matrices were composed of oligo(ethylene glycol) methacrylate derivatives (OEGMAs) that render the hydrogels thermoresponsive. Finally, the silica particles were etched away to produce highly porous hydrogel replicas of the CC. Importantly, the inner structure, and thus the ability for light diffraction, of the IOHs formed was maintained.
The IOH membranes were shown to have interconnected pores, with pore diameters as well as interconnections between the pores of several hundred nanometers. This enables not only the detection of small analytes but also the detection of large analytes that can diffuse into the nanostructured IOH membrane. Various recognition unit – analyte model systems, such as benzoboroxole – 1,2-diols, biotin – avidin, and mannose – concanavalin A, were studied by incorporating functional comonomers of benzoboroxole, biotin, and mannose into the copolymers. The incorporated recognition units specifically bind to certain low and high molar mass biomolecules, namely to certain saccharides, catechols, glycopolymers, or proteins.
Their specific binding strongly changes the overall hydrophilicity, thus modulating the swelling of the IOH matrices and, in consequence, drastically changing their internal periodicity. This swelling is amplified by the thermoresponsive properties of the polymer matrix. The shift of the interference band gap due to the specific molecular recognition is easily visible to the naked eye (shifts of up to 150 nm). Moreover, preliminary trials were undertaken to detect even larger entities. For this purpose, antibodies were immobilized on hydrogel platforms via polymer-analogous esterification; these platforms incorporate comonomers made of tri(ethylene glycol) methacrylate end-functionalized with a carboxylic acid. In these model systems, the bacterial analytes are too big to penetrate into the IOH membranes and can only interact with their surfaces. The selected model bacteria, such as Escherichia coli, show a specific affinity to the antibody-functionalized hydrogels. Surprisingly, in the case of the functionalized IOHs, this study produced only weak color shifts; the possibility of directly detecting living organisms in this way will need further investigation.
Dropping Out or Keeping Up?
(2016)
The aim of this study was to examine how automatic evaluations of exercising (AEE) varied according to adherence to an exercise program. Eighty-eight participants (mean age 24.98 ± 6.88 years; 51.1% female) completed a Brief Implicit Association Task assessing their AEE, i.e., positive and negative associations with exercising, at the beginning of a 3-month exercise program. Attendance data were collected for all participants and used in a cluster analysis of adherence patterns. Three adherence patterns (52 maintainers, 16 early dropouts, 20 late dropouts; 40.91% overall dropouts) were detected. Participants from these three clusters differed significantly with regard to their positive and negative associations with exercising before the first course meeting (η²p = 0.07). Discriminant function analyses revealed that positive associations with exercising were a particularly good discriminating factor. This is the first study to provide evidence of the differential impact of positive and negative associations on exercise behavior over the medium term. The findings contribute to the theoretical understanding of evaluative processes from a dual-process perspective and may provide a basis for targeted interventions.
The molecular ability to selectively and efficiently convert sunlight into other forms of energy, such as heat, bond changes, or charge separation, is truly remarkable. The decisive steps in these transformations often happen on a femtosecond timescale and involve transitions among different electronic states that violate the Born-Oppenheimer approximation (BOA). Non-BOA transitions pose challenges to both theory and experiment. From a theoretical point of view, excited-state dynamics and nonadiabatic transitions are both difficult problems (see Figure 1(a)). However, the theory of non-BOA dynamics has advanced significantly over the last two decades. Full dynamical simulations for molecules the size of nucleobases have been possible for a couple of years and allow predictions of experimental observables such as photoelectron energy or ion yield. The availability of these calculations for isolated molecules has spurred new experimental efforts to develop methods that go beyond all-optical techniques. For the determination of transient molecular structure, femtosecond X-ray diffraction and electron diffraction have been applied to optically excited molecules.
Causes, Time, and Truth
(2016)
We need causation, time, and truth in order to know how things in the broadest sense of the term hang together in the broadest sense of the term. The essays try to say something clarifying about these three classical questions of traditional metaphysics. No dogmatic answers are offered, but rather guiding perspectives and possibly justifiable ways of dealing with such fundamental questions.
Gravity dictates the structure of the whole Universe and, although it is triumphantly described by the theory of General Relativity, it is the force in nature that we understand least. One of the cardinal predictions of this theory is black holes. Massive, dark objects are found in the majority of galaxies. Our own Galactic Center contains such an object with a mass of about four million solar masses. Are these objects supermassive black holes (SMBHs), or do we need alternatives? The answer lies in the event horizon, the characteristic that defines a black hole. The key to probing the horizon is to model the movement of stars around an SMBH, and the interactions between them, and look for deviations from real observations. Nuclear star clusters harboring a massive, dark object with a mass of up to ~ ten million solar masses are good testbeds to probe the event horizon of the potential SMBH with stars. The channels for interactions between stars and the central SMBH are that (a) compact stars and stellar-mass black holes can gradually inspiral into the SMBH due to the emission of gravitational radiation, which is known as an “Extreme Mass Ratio Inspiral” (EMRI), and (b) stars can produce gases which will be accreted by the SMBH, through normal stellar evolution or through collisions and disruptions brought about by the strong central tidal field. Such processes can contribute significantly to the mass of the SMBH. These two processes involve different disciplines, which combined will provide us with detailed information about the fabric of space and time. In this habilitation I present nine articles of my recent work directly related to these topics.
In order to evade detection by network-traffic analysis, a growing proportion of malware uses the encrypted HTTPS protocol. We explore the problem of detecting malware on client computers based on HTTPS traffic analysis. In this setting, malware has to be detected based on the host IP address, ports, timestamp, and data volume information of TCP/IP packets that are sent and received by all the applications on the client. We develop a scalable protocol that allows us to collect network flows of known malicious and benign applications as training data and derive a malware-detection method based on neural networks and sequence classification. We study the method's ability to detect known and new, unknown malware in a large-scale empirical study.
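The general idea of classifying clients from metadata-only flow features can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' actual model: a client's HTTPS flows are summarized by features observable without decryption (duration, bytes out, bytes in, inter-arrival gap) and fed to a small feed-forward network. All names, features, and data here are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def featurize(flows):
    """Aggregate an (n_flows, 4) array of per-flow features into one vector."""
    f = np.asarray(flows, dtype=float)
    return np.concatenate([f.mean(axis=0), f.std(axis=0)])

class TinyFlowNet:
    """One-hidden-layer sigmoid classifier trained by batch gradient descent."""
    def __init__(self, dim, hidden=8, lr=0.1):
        self.W1 = rng.normal(0, 0.5, (dim, hidden)); self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0, 0.5, hidden); self.b2 = 0.0
        self.lr = lr

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)
        return 1.0 / (1.0 + np.exp(-(self.h @ self.W2 + self.b2)))

    def train(self, X, y, epochs=300):
        for _ in range(epochs):
            g = self.forward(X) - y              # d(log-loss)/d(logit)
            gh = np.outer(g, self.W2) * (1 - self.h ** 2)
            self.W2 -= self.lr * self.h.T @ g / len(y)
            self.b2 -= self.lr * g.mean()
            self.W1 -= self.lr * X.T @ gh / len(y)
            self.b1 -= self.lr * gh.mean(axis=0)

# Synthetic traffic: benign flows are long and voluminous; "malware" flows
# look like short, low-volume beacons with long gaps between them.
benign = [rng.normal([5, 5e4, 2e5, 2], [1, 5e3, 2e4, 0.5], (20, 4)) for _ in range(40)]
malware = [rng.normal([0.5, 2e3, 1e3, 10], [0.1, 5e2, 2e2, 1], (20, 4)) for _ in range(40)]

X = np.array([featurize(s) for s in benign + malware])
X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)   # standardize features
y = np.array([0.0] * 40 + [1.0] * 40)

net = TinyFlowNet(X.shape[1])
net.train(X, y)
accuracy = ((net.forward(X) > 0.5) == y).mean()
```

The point of the sketch is the pipeline shape (flows → fixed-length features → neural classifier); a real system would classify held-out clients, use richer sequence models, and face far noisier class boundaries.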
In complement to the well-established zwitterionic monomers 3-((2-(methacryloyloxy)ethyl)dimethylammonio)propane-1-sulfonate (“SPE”) and 3-((3-methacrylamidopropyl)dimethylammonio)propane-1-sulfonate (“SPP”), the closely related sulfobetaine monomers were synthesized and polymerized by reversible addition-fragmentation chain transfer (RAFT) polymerization, using a fluorophore-labeled RAFT agent. The polyzwitterions of systematically varied molar mass were characterized with respect to their solubility in water, deuterated water, and aqueous salt solutions. These poly(sulfobetaine)s show thermoresponsive behavior in water, exhibiting upper critical solution temperatures (UCST). Phase transition temperatures depend notably on the molar mass and polymer concentration, and are much higher in D2O than in H2O. Also, the phase transition temperatures are effectively modulated by the addition of salts. The individual effects can be partly correlated with the Hofmeister series for the anions studied. Still, they depend in a complex way on the concentration and the nature of the added electrolytes, on the one hand, and on the detailed structure of the zwitterionic side chain, on the other. For polymers with the same zwitterionic side chain, it is found that methacrylamide-based poly(sulfobetaine)s exhibit higher UCST-type transition temperatures than their methacrylate analogs. Extending the distance between the polymerizable unit and the zwitterionic group from 2 to 3 methylene units decreases the UCST-type transition temperatures. Poly(sulfobetaine)s derived from aliphatic esters show higher UCST-type transition temperatures than their analogs featuring cyclic ammonium cations. The UCST-type transition temperatures increase markedly when the spacer length separating the cationic and anionic moieties grows from 3 to 4 methylene units.
Thus, apparently small variations of their chemical structure strongly affect the phase behavior of the polyzwitterions in specific aqueous environments.
Water-soluble block copolymers were prepared from the zwitterionic monomers and the non-ionic monomer N-isopropylmethacrylamide (“NIPMAM”) by RAFT polymerization. Such block copolymers with two hydrophilic blocks exhibit twofold thermoresponsive behavior in water. The poly(sulfobetaine) block shows a UCST, whereas the poly(NIPMAM) block exhibits a lower critical solution temperature (LCST). This constellation induces a structure inversion of the solvophobic aggregate, called a “schizophrenic micelle”. Depending on the relative positions of the two phase transitions, the block copolymer passes through a molecularly dissolved or an insoluble intermediate regime, which can be modulated by the polymer concentration or by the addition of salt. Whereas, at low temperature, the poly(sulfobetaine) block forms polar aggregates that are kept in solution by the poly(NIPMAM) block, at high temperature, the poly(NIPMAM) block forms hydrophobic aggregates that are kept in solution by the poly(sulfobetaine) block. Thus, aggregates can be prepared in water which reversibly switch their “inside” to the “outside”, and vice versa.
Well-developed phonological awareness skills are a core prerequisite for early literacy development. Although effective phonological awareness training programs exist, children at risk often do not reach similar levels of phonological awareness after the intervention as children with normally developed skills. Based on theoretical considerations and first promising results, the present study explores the effects of an early musical training in combination with a conventional phonological training in children with weak phonological awareness skills. Using a quasi-experimental pretest-posttest control group design and measurements across a period of 2 years, we tested the effects of two interventions: a consecutive combination of a musical and a phonological training, and a phonological training alone. The design made it possible to disentangle the effects of the musical training alone as well as the effects of its combination with the phonological training. The outcome measures of these groups were compared with those of the control group using multivariate analyses, controlling for a number of background variables. The sample included N = 424 German-speaking children aged 4–5 years at the beginning of the study. We found a positive relationship between musical abilities and phonological awareness. Yet, whereas the well-established phonological training produced the expected effects, adding a musical training did not contribute significantly to phonological awareness development. Training effects were partly dependent on the initial level of phonological awareness. Possible reasons for the lack of training effects in the musical part of the combination condition, as well as practical implications for early literacy education, are discussed.
This cumulative dissertation contains four self-contained articles related to EU regional policy and its structural funds as the overall research topic. In particular, the thesis addresses the question of whether EU regional policy interventions can be scientifically justified and legitimated at all on theoretical and empirical grounds from an economics point of view. The first two articles of the thesis (“The EU structural funds as a means to hamper migration” and “Internal migration and EU regional policy transfer payments: a panel data analysis for 28 EU member countries”) enter into one particular aspect of the debate regarding the justification and legitimisation of EU regional policy. They analyse theoretically and empirically whether regional policy or the market force of the free flow of labour (migration) in the internal European market is the better instrument to improve and harmonise the living and working conditions of EU citizens. Based on neoclassical market failure theory, the first paper argues that the structural funds of the EU inhibit internal migration, which is one of the key measures in achieving convergence among the nations in the single European market. It becomes clear that a European regional policy aiming at economic growth and cohesion among the member states cannot be justified and legitimated if the structural funds hamper instead of promote migration. The second paper, however, shows that the empirical evidence on the migration and regional policy nexus is ambiguous, i.e. different empirical investigations show that EU structural funds both hamper and promote EU internal migration. Hence, the question of the scientific justification and legitimisation of EU regional policy cannot be readily and unambiguously answered on empirical grounds. This finding is unsatisfying but in line with previous theoretical and empirical literature.
That is why I take a step back and reconsider the theoretical beginnings of the thesis, which took for granted neoclassical market failure theory as the starting point for the positive explanation as well as the normative justification and legitimisation of EU regional policy. The third article of the thesis (“EU regional policy: theoretical foundations and policy conclusions revisited”) deals with the theoretical explanation and legitimisation of EU regional policy as well as the policy recommendations for EU regional policymakers deduced from neoclassical market failure theory. The article elucidates that neoclassical market failure is a normative concept, which justifies and legitimates EU regional policy based on a political, and thus subjective, goal or value judgement. It can therefore neither be used to give a scientifically positive explanation of the structural funds nor to obtain objective and practically applicable policy instruments. Given this critique of neoclassical market failure theory, the third paper consequently calls into question the widely prevalent explanation and justification of EU regional policy given in static neoclassical equilibrium economics. It argues that an evolutionary non-equilibrium economics perspective on EU regional policy is much more appropriate for providing a realistic understanding of one of the largest policies conducted by the EU. However, this means neither that evolutionary economic theory can be unreservedly seen as the panacea for positively explaining EU regional policy nor that it readily yields objective policy instruments for EU regional policymakers. This issue is discussed in the fourth article of the thesis (“Market failure vs. system failure as a rationale for economic policy? A critique from an evolutionary perspective”). This article reconsiders the explanation of economic policy from an evolutionary economics perspective.
It contrasts the neoclassical equilibrium notions of market and government failure with the dominant evolutionary neo-Schumpeterian and Austrian-Hayekian perceptions. Based on this comparison, the paper criticises the fact that neoclassical failure reasoning still prevails in non-equilibrium evolutionary economics when economic policy issues are examined. This is surprising, since proponents of evolutionary economics usually view their approach as incompatible with its neoclassical counterpart. The paper therefore argues that in order to prevent the otherwise fruitful and more realistic evolutionary approach from undermining its own criticism of neoclassical economics and to create a consistent as well as objective evolutionary policy framework, it is necessary to eliminate the equilibrium spirit. Taken together, the main finding of this thesis is that European regional policy and its structural funds can neither theoretically nor empirically be justified and legitimated from an economics point of view. Moreover, the thesis finds that the prevalent positive and instrumental explanation of EU regional policy given in the literature needs to be reconsidered, because these theories can neither scientifically explain the emergence and development of this policy nor are they appropriate to derive objective and scientific policy instruments for EU regional policymakers.
This study presents new insights into null subjects, topic drop and the interpretation of topic-dropped elements. Besides providing an empirical data survey, it offers explanations to well-known problems, e.g. syncretisms in the context of null-subject licensing or the marginality of dropping an element which carries oblique case. The book constitutes a valuable source for both empirically and theoretically interested (generative) linguists.
The cell interior is a highly packed environment in which biological macromolecules evolve and function. This crowded medium affects many biological processes, such as protein-protein binding, gene regulation, and protein folding. Thus, biochemical reactions that take place under such crowded conditions differ from those under dilute test-tube conditions, and considerable effort has been invested in understanding these differences.
In this work, we combine different computational tools to disentangle the effects of molecular crowding on biochemical processes. First, we propose a lattice model to study the implications of molecular crowding for enzymatic reactions. We provide a detailed picture of how crowding affects binding and unbinding events and how the separate effects of crowding on binding equilibrium act together. Then, we implement a lattice model to study the effects of molecular crowding on facilitated diffusion. We find that obstacles on the DNA impair facilitated diffusion; however, the extent of this effect depends on how dynamic the obstacles on the DNA are. For the scenario in which crowders are only present in the bulk solution, we find that under certain conditions the presence of crowding agents can enhance specific DNA binding. Finally, we make use of structure-based techniques to examine the impact of the presence of crowders on the folding of a protein. We find that polymeric crowders have stronger effects on protein stability than spherical crowders, and that the strength of this effect increases as the polymeric crowders become longer. The methods we propose here are general and can also be applied to more complicated systems.
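The basic ingredient of such lattice models can be illustrated with a deliberately simple toy (not the thesis' actual model): a tracer particle performs a random walk on a periodic 2D lattice, moves onto sites blocked by immobile crowders are rejected, and the mean squared displacement (MSD) shows how crowding slows effective diffusion. Lattice size, crowder fraction, and step counts are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 50                                  # lattice side length (periodic)
STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def msd_after(n_steps, crowd_fraction, n_walkers=200):
    """Mean squared displacement of tracers among immobile obstacles."""
    blocked = rng.random((L, L)) < crowd_fraction
    total = 0.0
    for _ in range(n_walkers):
        # start each tracer on a free site
        while True:
            x, y = rng.integers(0, L, 2)
            if not blocked[x, y]:
                break
        dx = dy = 0
        for _ in range(n_steps):
            sx, sy = STEPS[rng.integers(4)]
            # reject moves onto occupied sites (excluded volume)
            if not blocked[(x + dx + sx) % L, (y + dy + sy) % L]:
                dx += sx; dy += sy
        total += dx * dx + dy * dy
    return total / n_walkers

free = msd_after(200, 0.0)      # empty lattice: MSD ~ number of steps
crowded = msd_after(200, 0.3)   # 30% obstacles: diffusion is slowed
```

On the empty lattice the MSD grows as the number of steps, while obstacles reduce it; near the percolation threshold of the obstacle configuration the slowdown would become anomalous, which is the regime such models are typically used to probe.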
Changes in extratropical storm track activity and their implications for extreme weather events
(2016)
It is the intention of this study to contribute to further rethinking and innovation in the Microcredit business, which stands at a turning point: after around 40 years of practice, it is in danger of failing as a tool for economic development and becoming instead a doubtful finance product with a random scope. So far, a positive impact of Microfinance on the improvement of the lives of the poor could not be confirmed. Over-indebtedness of borrowers due to the predominance of consumption Microcredits has become a widespread problem. Furthermore, a rising number of abusive and commercially excessive practices have been reported.
In fact, the Microfinance sector appears to suffer from a major underlying deficit: there is no coherent and transparent understanding of its meaning and objectives, so Microfinance providers worldwide follow their own approaches to Microfinance, which tend to differ considerably from each other.
In this sense, the study aims at consolidating the multi-faceted and often confusingly different Microcredit profiles that exist nowadays. Subsequently, the Microfinance spectrum will be narrowed to one clear-cut objective: away from the mere monetary transactions to poor people to which it has gradually been reduced, and back towards a tool for economic development as originally envisaged by its pioneers.
Hence, the fundamental research question of this study is whether, and under which conditions, Microfinance may attain a positive economic impact leading to an improvement in the lives of the poor.
The study is structured in five parts: the three main parts (II.-IV.) are framed by an introduction (I.) and a conclusion (V.). In part II., the Microfinance sector is analysed critically, aiming to identify the persisting challenges as well as their root causes. In the third part, a change to the macroeconomic perspective is undertaken in order to learn about the potential and requirements of small-scale finance to enhance economic development, particularly within the economic context of less developed countries. Consolidating these insights, part IV. elaborates the elements of a new concept of Microfinance with the objective of achieving economic development for its borrowers.
Microfinance is a rather sensitive business whose great fundamental idea is easily corruptible and whose recipients are predestined victims of abuse due to their limited knowledge of finance. It therefore needs to be practiced responsibly, and according to clear-cut definitions of its meaning and objectives with which all institutions active in the sector should comply. This is especially relevant as the demand for Microfinance services is expected to rise further in the years to come. For example, the recent refugee migration movement towards Europe entails a vast potential for Microfinance to enable these people to make a new start in economic life. This goes to show that Microfinance may no longer be associated mainly with a less developed economic context, but that it will gain importance as a financial instrument in the developed economies, too.
Among the bloom-forming and potentially harmful cyanobacteria, the genus Microcystis represents a most diverse taxon, on the genomic as well as on morphological and secondary metabolite levels. Microcystis communities are composed of a variety of diversified strains. The focus of this study lies on potential interactions between Microcystis representatives and the roles of secondary metabolites in these interaction processes.
The role of secondary metabolites as signaling molecules in the investigated interactions is demonstrated using the example of the prevalent hepatotoxin microcystin. The extracellular and intracellular roles of microcystin are tested in microarray-based transcriptomic approaches. While an extracellular effect of microcystin on Microcystis transcription is confirmed and connected to a specific gene cluster of another secondary metabolite in this study, intracellular microcystin is related to several pathways of primary metabolism. A clear correlation between a microcystin knockout and the SigE-mediated regulation of carbon metabolism is found. Based on the acquired transcriptional data, a model is proposed that postulates a regulating effect of microcystin on transcriptional regulators such as the alternative sigma factor SigE, which in turn plays an essential role in sugar catabolism and redox-state regulation.
For the purpose of simulating community conditions as found in the field, Microcystis colonies are isolated from the eutrophic lakes near Potsdam, Germany and established as stably growing under laboratory conditions. In co-habitation simulations, the recently isolated field strain FS2 is shown to specifically induce nearly immediate aggregation reactions in the axenic lab strain Microcystis aeruginosa PCC 7806. In transcriptional studies via microarrays, the induced expression program in PCC 7806 after aggregation induction is shown to involve the reorganization of cell envelope structures, a highly altered nutrient uptake balance and the reorientation of the aggregating cells to a heterotrophic carbon utilization, e.g. via glycolysis. These transcriptional changes are discussed as mechanisms of niche adaptation and acclimation in order to prevent competition for resources.
Exploring generalisation following treatment of language deficits in aphasia can provide insights into the functional relation of the cognitive processing systems involved. In the present study, we first review treatment outcomes of interventions targeting sentence-processing deficits and, second, report a treatment study examining the occurrence of practice effects and generalisation in sentence comprehension and production. In order to explore the potential linkage between the processing systems involved in comprehending and producing sentences, we investigated whether improvements generalise within modalities (i.e., uni-modal generalisation in comprehension or in production) and/or across modalities (i.e., cross-modal generalisation from comprehension to production or vice versa). Two individuals with aphasia displaying co-occurring deficits in sentence comprehension and production were trained on complex, non-canonical sentences in both modalities. Two evidence-based treatment protocols were applied in a crossover intervention study with the sequence of treatment phases randomly allocated. Both participants benefited significantly from treatment, leading to uni-modal generalisation in both comprehension and production. However, cross-modal generalisation did not occur. The magnitude of uni-modal generalisation in sentence production was related to participants’ sentence comprehension performance prior to treatment. These findings support the assumption of modality-specific sub-systems for sentence comprehension and production, linked uni-directionally from comprehension to production.
Convoluted Brownian motion
(2016)
In this paper we analyse semimartingale properties of a class of Gaussian periodic processes, called convoluted Brownian motions, obtained by convolution between a deterministic function and a Brownian motion. A classical example in this class is the periodic Ornstein-Uhlenbeck process. We compute their characteristics and show that, in general, they are neither Markovian nor satisfy a time-Markov field property. Nevertheless, by enlargement of filtration and/or addition of a one-dimensional component, one can in some cases recover Markovianity. We treat exhaustively the case of the bidimensional trigonometric convoluted Brownian motion and the higher-dimensional monomial convoluted Brownian motion.
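For concreteness, one standard way to write such a process (the paper's precise kernel and interval conventions may differ from this sketch) is as a stochastic convolution of a deterministic kernel with Brownian motion:

```latex
% Illustrative definition of a convoluted Brownian motion: the convolution
% of a deterministic kernel \varphi with a Brownian motion B.
X_t = \int_0^t \varphi(t-s)\,\mathrm{d}B_s, \qquad t \ge 0.
% Example kernels: \varphi(u) = e^{-\lambda u} gives an
% Ornstein--Uhlenbeck-type process; the trigonometric kernels
% \varphi(u) = \cos(2\pi u) and \varphi(u) = \sin(2\pi u) give the two
% components of the bidimensional trigonometric convoluted Brownian
% motion, while monomial kernels \varphi(u) = u^k lead to the monomial
% convoluted Brownian motion.
```

Since the kernel is deterministic, each \(X_t\) is a Gaussian random variable, but memory carried by the kernel is what can break the Markov property discussed above.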