The Visigoths as the other
(2010)
An analysis of Roman-Visigothic relations in terms other than the usual presupposition of constant military and confessional/Christian antagonism. Structuralist methodology demonstrates how Roman needs at precise historical moments determined how the Visigoths were perceived and, therefore, portrayed in our sources.
Two types of electrical conductivity sensors were evaluated to prospect circular ditches surrounding former Bronze Age burial mounds, complementing aerial photography. The first sensor was based on the electrical resistivity (ER) method, while the second was based on frequency-domain electromagnetic induction (FDEM). Both sensors were designed with multiple receivers, which measure several depth sensitivities simultaneously. First, the sensors were tested on an experimental site where a rectangular structure with limited dimensions was dug in a sandy soil. The structure appeared as a higher-conductivity anomaly in the low-conductivity sand. Then, both methods were applied on two Bronze Age sites with different soil properties, which had been discovered by aerial photography. The first site, in a sandy soil, gave only very weak anomalies. Soil augering revealed that the ditch filling consisted of the same sandy material as the surrounding soil; this filling was therefore not able to cause a high-conductivity contrast. Due to its lower sensitivity to noise in the low-conductive range, the ER sensor produced a more pronounced anomaly than the FDEM sensor. The second site was located on top of a ridge with a shallow substrate of Tertiary coastal sediments. The ditch was very clearly visible on the sensor maps as a conductive low. At this location, soil augering revealed that the ditch had been dug through an alternating clay-sand layer and subsequently filled up with silty material from the topsoil. Overall, the shallow receiver separation produced anomalies that were stronger and corresponded better to the geometry of the ditches. The other receiver separations provided more information on the natural soil layering, and in the case of the ER array they could be used to obtain a cross-section of the actual electrical conductivity with 2-D inversion modelling.
The results of this study proved that conductivity sensors can detect Bronze Age ditches, with varying contrast depending on the soil geomorphology. Moreover, the sensor maps combined with soil observations by coring provided insight into the environmental conditions that influence the contrast of the anomalies seen on the aerial photographs and the sensor maps.
Magnetic susceptibility is an important indicator of anthropogenic disturbance in the natural soil. This property is often mapped with magnetic gradiometers in archaeological prospection studies. It is also detected with frequency-domain electromagnetic induction (FDEM) sensors, which have the advantage that they can simultaneously measure the electrical conductivity. The detection level of FDEM sensors for magnetic structures is very dependent on the coil configuration. Apart from theoretical modelling studies, a thorough investigation with field models had not been conducted until now. Therefore, the goal of this study was to test multiple coil configurations on a test field with naturally enhanced magnetic susceptibility in the topsoil and with different types of structures mimicking real archaeological features. Two FDEM sensors were used, with coil separations between 0.5 and 2 m and with three coil orientations. First, a vertical sounding was conducted over the undisturbed soil to test the validity of a theoretical layered model, which can be used to infer the depth sensitivity of the coil configurations. The modelled sounding values corresponded well with the measured data, which means that the theoretical models are applicable to layered soils. Second, magnetic structures were buried in the site and the resulting anomalies were measured at very high resolution. The results showed remarkable differences in amplitude and complexity between the responses of the coil configurations. The 2-m horizontal coplanar and 1.1-m perpendicular coil configurations produced the clearest anomalies and best resembled a gradiometer measurement.
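The depth sensitivity of FDEM coil configurations over a layered earth is commonly summarized by cumulative response functions. As a hedged illustration only (the classical low-induction-number functions after McNeill, not the layered models actually fitted in the study above):

```python
# Illustrative sketch: classical low-induction-number cumulative
# depth-response functions for FDEM coil orientations (after McNeill).
# z is depth normalized by the coil separation. These formulas are an
# assumption for orientation, not the models used in the study above.

def cumulative_response_vertical(z):
    """Fraction of signal from below normalized depth z,
    vertical dipoles (horizontal coplanar coils)."""
    return 1.0 / (4.0 * z**2 + 1.0) ** 0.5

def cumulative_response_horizontal(z):
    """Fraction of signal from below normalized depth z,
    horizontal dipoles (vertical coplanar coils)."""
    return (4.0 * z**2 + 1.0) ** 0.5 - 2.0 * z

# Example: at a depth of one coil separation, the vertical-dipole
# configuration still draws ~45% of its signal from below, the
# horizontal-dipole configuration only ~24%.
deep_v = cumulative_response_vertical(1.0)    # ≈ 0.447
deep_h = cumulative_response_horizontal(1.0)  # ≈ 0.236
```

This difference in depth weighting is one reason the abstract reports such distinct anomaly shapes for different coil orientations and separations.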
Biological invasions are a major threat to natural biodiversity; hence, understanding the mechanisms underlying invasibility (i.e., the susceptibility of a community to invasions by new species) is crucial. Invasibility of a resident community may be affected by a complex but hitherto hardly understood interplay of (1) productivity of the habitat, (2) diversity, (3) herbivory, and (4) the characteristics of both invasive and resident species. Using experimental phytoplankton microcosms, we investigated the effect of nutrient supply and species diversity on the invasibility of resident communities for two functionally different invaders in the presence or absence of an herbivore. With increasing nutrient supply, increased herbivore abundance indicated enhanced phytoplankton biomass production, and the invasion success of both invaders showed a unimodal pattern. At low nutrient supply (i.e., low influence of herbivory), the invasibility depended mainly on the competitive abilities of the invaders, whereas at high nutrient supply, the susceptibility to herbivory dominated. This resulted in different optimum nutrient levels for invasion success of the two species due to their individual functional traits. To test the effect of diversity on invasibility, a species richness gradient was generated by random selection from a resident species pool at an intermediate nutrient level. Invasibility was not affected by species richness; instead, it was driven by the functional traits of the resident and/or invasive species mediated by herbivore density. Overall, herbivory was the driving factor for invasibility of phytoplankton communities, which implies that other factors affecting the intensity of herbivory (e.g., productivity or edibility of primary producers) indirectly influence invasions.
Metabasites were sampled from rock series of the subducted margin of the Indian Plate, the so-called Higher Himalayan Crystalline, in the Upper Kaghan Valley, Pakistan. These vary from corona dolerites, cropping out around Saif-ul-Muluk in the south, to coesite-eclogite close to the suture zone against rocks of the Kohistan arc in the north. Bulk rock major- and trace-element chemistry reveals essentially a single protolith as the source for five different eclogite types, which differ in fabric, modal mineralogy as well as in mineral chemistry. The study of newly-collected samples reveals coesite (confirmed by in situ Raman spectroscopy) in both garnet and omphacite. All eclogites show growth of amphiboles during exhumation. Within some coesite-bearing eclogites the presence of glaucophane cores to barroisite is noted whereas in most samples porphyroblastic sodic-calcic amphiboles are rimmed by more aluminous calcic amphibole (pargasite, tschermakite, and edenite). Eclogite facies rutile is replaced by ilmenite which itself is commonly surrounded by titanite. In addition, some eclogite bodies show leucocratic segregations containing phengite, quartz, zoisite and/or kyanite. The important implication is that the complex exhumation path shows stages of initial cooling during decompression (formation of glaucophane) followed by reheating: a very similar situation to that reported for the coesite-bearing eclogite series of the Tso Morari massif, India, 450 km to the south-east.
Mountain gazelles (Gazella gazella) rank among the most critically endangered mammals on the Arabian Peninsula. Past conservation efforts have been plagued by confusion about the phylogenetic relationship among various 'phenotypically discernable' populations, and even the question of species boundaries was far from being certain. This lack of knowledge has had a direct impact on conservation measures, especially ex situ breeding programmes, hampering the assignment of captive stocks to potential conservation units. Here, we provide a phylogenetic framework, based on the analysis of mtDNA sequences (360 bp cytochrome b and 213 bp Control Region) of 126 individuals collected from the wild throughout the Arabian Peninsula and from captive stocks. Our analyses revealed two reciprocally monophyletic genetic lineages within the presumed species Gazella gazella: one 'northern clade' on the Golan Heights (Israel/Syrian border) and one genetically diverse larger clade from the rest of the Arabian Peninsula including the Arava Valley (Negev, Israel). Applying the Strict Phylogenetic Species Concept (sensu Mishler & Theriot, 2000) allows assigning species status to these two major clades.
In most mammals, females are philopatric while males disperse in order to avoid inbreeding. We investigated social structure in a solitary ungulate, the bushbuck Tragelaphus sylvaticus, in Queen Elizabeth National Park, Uganda, by combining behavioural and molecular data. We correlated spatial and social vicinity of individual females with a relatedness score obtained from mitochondrial DNA analysis. Presumed clan members shared the same haplotype, showed more socio-positive interactions and had a common home range. Males had a higher haplotype diversity than females. All this suggests the presence of a matrilineal structure in the study population. Moreover, we tested natal dispersal distances between male and female yearlings and used control region sequences to confirm that females remain in their natal breeding areas whereas males disperse. In microsatellite analysis, males showed a higher genetic variability than females. The impoverished genetic variability of females at both molecular marker sets is consistent with a philopatric and matrilineal structure, while the higher degree of genetic variability of males is congruent with the higher dispersal rate expected in this sex. Evidence for male long-distance dispersal is even provided by one male carrying a haplotype of a different subspecies, previously not described to occur in this area.
Using molecular genetic methods and an ancient DNA approach, we studied population and species succession of rotifers of the genus Brachionus in the Kenyan alkaline-saline crater lake Sonachi since the beginning of the 19th century as well as distribution of Brachionus haplotypes in recent and historic sediments of other lakes of the East African Rift System. The sediment core record of Lake Sonachi displays haplotypes of a distinct evolutionary lineage in all increments. Populations were dominated by a single mitochondrial haplotype for a period of 150 years, and two putatively intraspecific turnovers in dominance occurred. Both changes are concordant with major environmental perturbations documented by a profound visible change in sediment composition of the core. The first change was very abrupt and occurred after the deposition of volcanic ash at the beginning of the 19th century. The second change coincides with a major lake level lowstand during the 1940s. It was preceded by a period of successively declining lake level, in which two other haplotypes appeared in the lake. One of these putatively belongs to another species documented in historical and recent Kenyan lake sediments. The analysis of plankton population dynamics through historical time can reveal patterns of population persistence and turnover in relation to environmental changes.
Unraveling the processes responsible for Earth's climate transition from an "El Niño-like state" during the warm early Pliocene into a modern-like "La Niña-dominated state" currently challenges the scientific community. Recently, the Pliocene climate switch has been linked to oceanic thermocline shoaling at ~3 million years ago along with Earth's final transition into a bipolar icehouse world. Here we present Pliocene proxy data and climate model results, which suggest an earlier timing of the Pliocene climate switch and a different chain of forcing mechanisms. We show that the increase in North Atlantic meridional overturning circulation between 4.8 and 4.0 million years ago, initiated by the progressive closure of the Central American Seaway, triggered overall shoaling of the tropical thermocline. This preconditioned the turnaround from a warm eastern equatorial Pacific to the modern equatorial cold tongue state about 1 million years earlier than previously assumed. Since ~3.6-3.5 million years ago, the intensification of Northern Hemisphere glaciation resulted in a strengthening of the trade winds, thereby amplifying upwelling and biogenic productivity at low latitudes.
In this study, we have used fragments of three mitochondrial genes (Control Region, CR; transfer RNA for methionine, tRNA-Met; NADH dehydrogenase subunit 2, ND2 for a total of 1066 bp) to reconstruct the phylogeographic history of the endemic Philippine bulbul (Hypsipetes philippinus) at the scale of the central area of the Philippine archipelago. The study includes two of the five recognized subspecies (guimarasensis and mindorensis), 7 populations and 58 individuals. Multiple phylogenetic and network analyses support the existence of two reciprocally monophyletic maternal lineages corresponding to the two named subspecies. Molecular clock estimates indicate that the split between the two subspecies is consistent with the Pleistocene geological history of the archipelago. Patterns of relationships within guimarasensis are biogeographically less clear. Here, a combination of vicariance and dispersal needs to be invoked to reconcile the molecular data with the geographical origin of samples. In particular, the two islands Boracay and Negros host mitochondrial lineages that do not form monophyletic clusters. Our genetic data suggest multiple independent colonization events for these locations.
Abstract interpretation-based model checking provides an approach to verifying properties of infinite-state systems. In practice, most previous work on abstract model checking is either restricted to verifying universal properties, or develops special techniques for temporal logics such as modal transition systems or other dual transition systems. By contrast we apply completely standard techniques for constructing abstract interpretations to the abstraction of a CTL semantic function, without restricting the kind of properties that can be verified. Furthermore we show that this leads directly to implementation of abstract model checking algorithms for abstract domains based on constraints, making use of an SMT solver.
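The general recipe the abstract describes — abstract the state space, then evaluate a CTL operator as a fixpoint over the abstract transition relation — can be sketched on a toy finite abstraction. The domain, state names, and transition relation below are purely illustrative assumptions, not the paper's constraint-based, SMT-backed implementation:

```python
# Toy sketch of abstract CTL model checking. Abstract states partition
# a counter's value; the transition relation over-approximates a
# program that may increment the counter. EF(error) is computed as the
# least fixpoint X = error ∪ pre(X). All names here are illustrative.

def ef(states, transitions, target):
    """Least-fixpoint computation of EF(target) on a finite abstraction."""
    reach = set(target)
    changed = True
    while changed:
        changed = False
        for s in states:
            if s not in reach and any(t in reach for t in transitions.get(s, ())):
                reach.add(s)
                changed = True
    return reach

# Abstract states: a sign/interval-style partition of the counter
states = {"neg", "zero", "pos", "overflow"}
transitions = {          # over-approximated successor relation
    "neg": {"neg"},
    "zero": {"pos"},
    "pos": {"pos", "overflow"},
    "overflow": {"overflow"},
}

# EF(overflow): which abstract states may reach the error state?
may_fail = ef(states, transitions, {"overflow"})
# AG(¬overflow) holds exactly in the states outside this set
safe = states - may_fail
```

Because the abstraction over-approximates transitions, `safe` is a sound under-approximation of the truly safe concrete states; the paper's contribution is doing this for the full CTL semantic function, with constraint-based domains in place of this finite toy.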
The special relationship between Humboldt and Darwin, two of the most important figures in the world of natural science and of nineteenth-century biology, is analyzed in detail at the various levels of their contact: the personal meeting that actually took place, their correspondence, and the coincidence of their ideas. This reciprocal view shows us how the two scholars perceived each other, and whether they really tried to break with the paradigm of their eminent predecessors or merely extended already acquired knowledge step by step, until the creation of an ingenious idea brought about a break with previous knowledge. Darwin's repeated references to Humboldt's works are well known, in particular to the diaries of the German naturalist and his way of describing American nature in all its richness. Less well known, however, are other references in his autobiography, as well as his scientific use of Humboldt's work and the citations in his correspondence, which are presented in this contribution. In addition, Humboldt's use of Darwin's early writings in some of his own publications, above all in Kosmos, is discussed.
This work presents the development of entropy-elastic gelatin-based networks in the form of films or scaffolds. The materials have good prospects for biomedical applications, especially in the context of bone regeneration. Entropy-elastic gelatin-based hydrogel films with varying crosslinking densities were prepared with tailored mechanical properties. Gelatin was covalently crosslinked above its sol-gel transition, which suppressed the gelatin chain helicity. Hexamethylene diisocyanate (HDI) or ethyl ester lysine diisocyanate (LDI) were applied as chemical crosslinkers, and the reaction was conducted either in dimethyl sulfoxide (DMSO) or water. Amorphous films were prepared, as confirmed by Wide-Angle X-ray Scattering (WAXS), with tailorable degrees of swelling (Q: 300-800 vol.-%) and wet-state Young's modulus (E: 70-740 kPa). Model reactions showed that the crosslinking reaction resulted in a combination of direct crosslinks (3-13 mol.-%), grafting (5-40 mol.-%), and blending of oligoureas (16-67 mol.-%). The knowledge gained with this bulk material was transferred to the integrated process of foaming and crosslinking to obtain porous 3-D gelatin-based scaffolds. For this purpose, a gelatin solution was foamed in the presence of a surfactant, Saponin, and the resulting foam was fixed by chemical crosslinking with a diisocyanate. The amorphous crosslinked scaffolds were synthesized with varied gelatin and HDI concentrations, and analyzed in the dry state by micro computed tomography (µCT, porosity: 65±11–73±14 vol.-%) and scanning electron microscopy (SEM, pore size: 117±28–166±32 µm). Subsequently, the work focused on the characterization of the gelatin scaffolds in conditions relevant to biomedical applications. Scaffolds showed high water uptake (H: 630-1680 wt.-%) with minimal changes in outer dimension.
Since a decreased scaffold pore size upon wetting (115±47–130±49 µm) was revealed using confocal laser scanning microscopy (CLSM), the form stability could be explained. Shape recoverability was observed after removal of stress when compressing wet scaffolds, while dry scaffolds maintained the compressed shape. This was explained by a reduction of the glass transition temperature upon equilibration with water (dynamic mechanical analysis at varied temperature, DMTA). The composition-dependent compression moduli (Ec: 10-50 kPa) were comparable to the bulk micromechanical Young's moduli, which were measured by atomic force microscopy (AFM). The hydrolytic degradation profile could be adjusted, and a controlled decrease of mechanical properties was observed. Partially degraded scaffolds displayed an increase of pore size. This was likely due to pore wall disintegration during degradation, which caused the pores to merge. The scaffold cytotoxicity and immunologic responses were analyzed. The porous scaffolds enabled proliferation of human dermal fibroblasts within the implants (up to 90 µm depth). Furthermore, indirect eluate tests were carried out with L929 cells to quantify the material cytotoxic response. Here, the effects of the sterilization method (ethylene oxide sterilization), crosslinker, and surfactant were analyzed. Fully cytocompatible scaffolds were obtained by using LDI as crosslinker and PEO40-PPO20-PEO40 as surfactant. These investigations were accompanied by a study of the endotoxin contamination of the materials. Medical-grade materials (<0.5 EU/mL) were successfully obtained by using low-endotoxin gelatin and performing all synthetic steps in a laminar flow hood.
Superexponential droplet fractalization as a hierarchical formation of dissipative compactons
(2010)
We study the dynamics of a thin film over a substrate heated from below in a framework of a strongly nonlinear one-dimensional Cahn-Hilliard equation. The evolution leads to a fractalization into smaller and smaller scales. We demonstrate that a primitive element in the appearing hierarchical structure is a dissipative compacton. Both direct simulations and the analysis of a self-similar solution show that the compactons appear at superexponentially decreasing scales, which means vanishing dimension of the fractal.
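For orientation, the generic structure of a one-dimensional Cahn-Hilliard-type equation is a conserved gradient flow of a chemical potential; the specific strongly nonlinear terms used by the authors are not reproduced here, so this template is an assumption:

```latex
\partial_t h = \partial_x^{2}\,\mu ,
\qquad
\mu = f(h) - \partial_x^{2} h ,
```

where $h(x,t)$ is the film-thickness profile and $f(h)$ a nonlinear local term. The conserved form ($\int h\,dx$ constant in time) is consistent with the hierarchical splitting into ever-smaller localized structures described in the abstract.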
Penicillin amidase from Alcaligenes faecalis is an attractive biocatalyst for the hydrolysis of penicillin G in the production of 6-aminopenicillanic acid, which is used in the synthesis of semi-synthetic beta-lactam antibiotics. Recently a mutant of this enzyme with an extended C-terminus of the A-chain, comprising parts of the connecting linker peptide, was constructed. Its turnover number for the hydrolysis of penicillin G was 140 s(-1), about twice the value for the wild-type enzyme (80 s(-1)). At the same time the specificity constant was improved about three-fold. The wild-type and the mutant enzymes showed similar pH stability, suggesting that the linker peptide fragment covalently attached to the A-chain does not alter the electrostatic interactions in the protein core. Although the global stability of the A. faecalis wild-type enzyme and the T206GS213G variant does not differ, the presence of the linker fragment stabilizes the domain interface, as evidenced by the monophasic transition of the mutant enzyme from the folded to the unfolded state during urea-induced denaturation. The high stability and activity of the mutant enzyme provide a rationale for using it as a biocatalyst in industrial processes, where the enzyme must be more robust to fluctuations in the operational conditions.
In the humid tropics, continuing high deforestation rates are seen alongside an increasing expansion of secondary forests. In order to understand and model the consequences of these dynamic land-use changes for regional water cycles, the response of soil hydraulic properties to forest disturbance and recovery has to be quantified. At a site in Brazilian Amazonia, we annually monitored soil infiltrability and saturated hydraulic conductivity (K-s) at 12.5, 20, and 50 cm soil depth after manual forest conversion to pasture (year zero to four after pasture establishment), and during secondary succession after pasture abandonment (year zero to seven after pasture abandonment). We evaluated the hydrological consequences of the detected changes by comparing the soil hydraulic properties with site-specific rainfall intensities and hydrometric observations. Within one year after grazing started, infiltrability and K-s at 12.5 and 20 cm depth decreased by up to one order of magnitude, to levels typical of 20-year-old pasture. In the three subsequent monitoring years, infiltrability and K-s remained stable. Land use did not affect subsoil permeability. Whereas infiltrability values are large enough to allow all rainwater to infiltrate even after the conversion, the sudden decline of near-surface K-s is of hydrological relevance, as perched water tables and overland flow occur more often on pastures than in forests at our study site. After pasture abandonment and during secondary succession, seven years of recovery did not suffice to significantly increase infiltrability and K-s at 12.5 cm depth, although a slight recovery is obvious. At 20 cm soil depth, we detected a positive linear increase within the seven-year time frame, but annual means did not differ significantly.
Although more than a doubling of infiltrability and K-s is still required to achieve pre-disturbance levels, which will presumably take more than a decade, the observed slight increases of K-s might already decrease the probability of perched water table generation and overland flow development well before complete recovery.
Metal-ion-induced self-assembly of the rigid ligand 1,4-bis(2,2':6',2''-terpyridine-4'-yl)benzene (1) with Fe(II), Co(II), Ni(II) and Zn(II) acetate in aqueous solution results in extended, rigid-rod-like metallosupramolecular coordination polyelectrolytes (MEPE-1). Under the current experimental conditions the molar masses range from 1000 g mol(-1) up to 500 000 g mol(-1). The molar mass depends on concentration, stoichiometry, metal ion and time. In addition, we present viscosity measurements, small-angle neutron scattering and AFM data. We introduce a protocol to precisely control the stoichiometry during self-assembly using conductometry. The protocol can be used with different terpyridine ligands and the above-mentioned metal ions and is of paramount importance to obtain meaningful and reproducible results. As a control experiment we studied the mononuclear 4'-(phenyl)-2,2':6',2''-terpyridine (3) complex with Ni(II) and Zn(II) and the flexible ligand 1,3-bis[4'-oxa(2,2':6',2''-terpyridinyl)]propane (2) with Ni(II) acetate (Ni-MEPE-2). This ligand does not form extended macroassemblies but likely ring-like structures with 3 to 4 repeat units. Through spin-coating of Ni-MEPE-1 on a solid surface we can image the MEPEs in real space by AFM. SANS measurements of Fe-MEPE-1 verify the extended rigid-rod-type structure of the MEPEs in aqueous solution.
A series of studies have distinguished two types of but, namely, corrective and counterexpectational. The difference between these two types has been considered largely semantic/pragmatic. This article shows that the semantic difference also translates into a different syntax for each type of but. More precisely, corrective but always requires clause-level coordination, with apparent counterexamples being derived through ellipsis within the second conjunct. On the other hand, counterexpectational but is not restricted in this way, and offers the possibility of coordination of both clausal and subclausal constituents. From this difference, it is possible to derive a number of syntactic asymmetries between corrective and counterexpectational but.
Metal-ion-induced self-assembly in aqueous solution of the rigid ligand 1,4-bis(2,2':6',2''-terpyridine-4'-yl)benzene (1) with Fe(OAc)(2) and Ni(OAc)(2) is investigated with viscosimetry, SANS, and AFM. Ligand 1 forms extended, rigid-rod-like metallo-supramolecular coordination polyelectrolytes (MEPEs) with a molar mass of up to 200 000 g mol(-1) under the current experimental conditions. The molar mass depends on concentration, stoichiometry, and time. By spin-coating MEPEs on a solid surface, we can image the MEPEs in real space by AFM. Both AFM and SANS confirm the extended rigid-rod-type structure of the MEPEs. As a control experiment, we also studied the flexible ligand 1,3-bis[4'-oxa(2,2':6',2''-terpyridinyl)]propane (2). Ligand 2 does not form extended macro-assemblies but likely ringlike structures with three to four repeat units. Finally, we present a protocol to control the stoichiometry during self-assembly using conductometry, which is of paramount importance to obtain meaningful and reproducible results.
The Erysiphales species Phyllactinia hippophaës Thuem. ex S. Blumer was found for the first time on cultivated Sea Buckthorn (Hippophaë rhamnoides L.) near Großkayna (Saxony-Anhalt) in October 2009. This fungus was considered to be extinct in Germany. Intensive searching in Saxony-Anhalt and the Potsdam area (Brandenburg) yielded many additional records, most of them from former brown coal mining areas or in Sea Buckthorn plantations.
The formation of different micro- and nanostructures during the chemical synthesis of polypyrrole is briefly reviewed, based on the conceptions of hard- and soft-templating models. Contrary to other models that emphasize the role of micelles, it is found here that during the oxidative polymerization of pyrrole using sulfonic acid dopants, a crystalline hard template is formed in the first steps of the reaction, before the addition of the oxidant. This template is formed by a complex consisting of 2,5-bis(pyrrole-2-yl)pyrrolidine and the sulfonic acid anion. The acid-catalyzed formation of this specific tripyrrole is discussed.
A new approach for efficient second-harmonic generation using diode lasers is presented. The experimental setup is based on a tapered amplifier operated in a ring resonator that is coupled to a miniaturized enhancement ring resonator containing a periodically poled lithium niobate crystal. Frequency locking of the diode laser emission to the resonance frequency of the enhancement cavity is realized purely optically, resulting in stable, single-frequency operation. Blue light at 488 nm with an output power of 310 mW is generated with an optical-to-optical conversion efficiency of 18%.
Map form information on forest biomass is required for estimating bioenergy potentials and monitoring carbon stocks. In Finland, the growing stock of forests is monitored using multi-source forest inventory, where variables are estimated in the form of thematic maps and area statistics by combining information of field measurements, satellite images and other digital map data. In this study, we used the multi-source forest inventory methodology for estimating forest biomass characteristics. The biomass variables were estimated for national forest inventory field plots on the basis of measured tree variables. The plot-level biomass estimates were used as reference data for satellite image interpretation. The estimates produced by satellite image interpretation were tested by cross-validation. The results indicate that the method for producing biomass maps on the basis of biomass models and satellite image interpretation is operationally feasible. Furthermore, the accuracy of the estimates of biomass variables is similar or even higher than that of traditional growing stock volume estimates. The technique presented here can be applied, for example, in estimating biomass resources or in the inventory of greenhouse gases.
Human sulfite oxidase (hSO) was immobilised on SAM-coated silver electrodes under preservation of the native heme pocket structure of the cytochrome b5 (Cyt b5) domain and the functionality of the enzyme. The redox properties and catalytic activity of the entire enzyme were studied by surface enhanced resonance Raman (SERR) spectroscopy and cyclic voltammetry (CV) and compared to the isolated heme domain when possible. It is shown that heterogeneous electron transfer and catalytic activity of hSO sensitively depend on the local environment of the enzyme. Increasing the ionic strength of the buffer solution leads to an increase of the heterogeneous electron transfer rate from 17 s(-1) to 440 s(-1) for hSO as determined by SERR spectroscopy. CV measurements demonstrate an increase of the apparent turnover rate for the immobilised hSO from 0.85 s(-1) in 100 mM buffer to 5.26 s(-1) in 750 mM buffer. We suggest that both effects originate from the increased mobility of the surface-bound enzyme with increasing ionic strength. In agreement with surface potential calculations we propose that at high ionic strength the enzyme is immobilised via the dimerisation domain to the SAM surface. The flexible loop region connecting the Moco and the Cyt b5 domain allows alternating contact with the Moco interaction site and the SAM surface, thereby promoting the sequential intramolecular and heterogeneous electron transfer from Moco via Cyt b5 to the electrode. At lower ionic strength, the contact time of the Cyt b5 domain with the SAM surface is longer, corresponding to a slower overall electron transfer process.
Simulation of spatial sensor characteristics in the context of the EnMAP Hyperspectral mission
(2010)
The simulation of remote sensing images is a valuable tool for defining future Earth observation systems, optimizing instrument parameters, and developing and validating data-processing algorithms. A scene simulator for optical Earth observation data has been developed within the Environmental Mapping and Analysis Program (EnMAP) hyperspectral mission. It produces EnMAP-like data following a sequential processing approach consisting of five independent modules referred to as reflectance, atmospheric, spatial, spectral, and radiometric modules. From a modeling viewpoint, the spatial module is the most complex. The spatial simulation process considers the satellite-target geometry, which is adapted to the EnMAP orbit and operating characteristics, the instrument spatial response, and the sources of spatial nonuniformity (keystone, telescope distortion and smile, and detector coregistration). The spatial module of the EnMAP scene simulator is presented in this paper. The EnMAP spatial and geometric characteristics will be described, the simulation methodology will be presented in detail, and the capability of the EnMAP simulator will be shown by illustrative examples.
Comparing continuous and discrete birthday coincidences : "Same-Day" versus "Within 24 Hours"
(2010)
In its classical form the famous birthday problem (Feller 1968; Mosteller 1987) addresses coincidences within a discrete sample space, looking at births that fall on the same calendar day. However, coincidence phenomena often arise in situations in which it is more natural to consider a continuous-time parameter. We first describe an elementary variant of the classical problem in continuous time, and then derive and illustrate close approximate relations that exist between the discrete and the continuous formulations.
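A quick Monte Carlo sketch (not from the paper) illustrates the two formulations. The "within 24 hours" variant treats the year as a circle of 365 days and checks the gaps between sorted birth times, including the wrap-around gap between the last and first birth:

```python
import random

def p_same_day(n, trials=20000, days=365):
    """Discrete version: at least two of n people born on the same calendar day."""
    hits = 0
    for _ in range(trials):
        bdays = [random.randrange(days) for _ in range(n)]
        hits += len(set(bdays)) < n
    return hits / trials

def p_within_24h(n, trials=20000, year=365.0):
    """Continuous version: at least two birth times within 24 h on the circular year."""
    hits = 0
    for _ in range(trials):
        t = sorted(random.uniform(0, year) for _ in range(n))
        gaps = [t[i + 1] - t[i] for i in range(n - 1)] + [t[0] + year - t[-1]]
        hits += min(gaps) < 1.0
    return hits / trials

random.seed(1)
p_disc = p_same_day(23)      # the classical answer is about 0.507
p_cont = p_within_24h(23)
```

The continuous "within 24 hours" probability comes out noticeably larger than the same-day probability, because two births on adjacent calendar days can still be less than 24 hours apart.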
A confocal set-up is presented that improves micro-XRF and XAFS experiments with high-pressure diamond-anvil cells (DACs). In this experiment a probing volume is defined by the focus of the incoming synchrotron radiation beam and that of a polycapillary X-ray half-lens with a very long working distance, which is placed in front of the fluorescence detector. This set-up enhances the quality of the fluorescence and XAFS spectra, and thus the sensitivity for detecting elements at low concentrations. It efficiently suppresses the signal from outside the sample chamber, which stems from elastic and inelastic scattering of the incoming beam by the diamond anvils as well as from excitation of fluorescence from the body of the DAC.
Background: Cysteine is a component of organic compounds, including glutathione, that have been implicated in the adaptation of plants to stresses. O-acetylserine (thiol) lyase (OAS-TL) catalyses the final step of cysteine biosynthesis. OAS-TL enzyme isoforms are localised in the cytoplasm, the plastids and the mitochondria, but the contribution of individual OAS-TL isoforms to plant sulphur metabolism has not yet been fully clarified. Results: The seedling-lethal phenotype of the Arabidopsis onset of leaf death3-1 (old3-1) mutant is due to a point mutation in the OAS-A1 gene, encoding the cytosolic OAS-TL. The mutation causes a single amino acid substitution from Gly(162) to Glu(162), abolishing old3-1 OAS-TL activity in vitro. The old3-1 mutation segregates as a monogenic semi-dominant trait when backcrossed to its wild-type accession Landsberg erecta (Ler-0) and the Di-2 accession. Consistent with its semi-dominant behaviour, wild-type Ler-0 plants transformed with the mutated old3-1 gene displayed the early leaf death phenotype. However, the old3-1 mutation segregates in an 11:4:1 (wild type : semi-dominant : mutant) ratio when backcrossed to the Columbia-0 and Wassilewskija accessions. Thus, the early leaf death phenotype depends on two semi-dominant loci. The second locus that determines the old3-1 early leaf death phenotype is referred to as odd-ler (for old3 determinant in the Ler accession) and is located on chromosome 3. The early leaf death phenotype is temperature dependent and is associated with increased expression of defence-response and oxidative-stress marker genes. Independent of the presence of the odd-ler gene, OAS-A1 is involved in maintaining sulphur and thiol levels and is required for resistance against cadmium stress. Conclusions: The cytosolic OAS-TL is involved in maintaining organic sulphur levels. The old3-1 mutation causes genome-dependent and genome-independent phenotypes and uncovers a novel function for the mutated OAS-TL in cell death regulation.
Supporting species persistence may involve (re)connecting suitable habitats. However, for many declining species habitat suitability and drivers of establishment are poorly known. We addressed this experimentally for a declining flagship species of dry grasslands in Germany, Armeria maritima subsp. elongata. In three regions, we sowed seeds from each of eight source populations back to their origin and to eight apparently suitable, but currently unoccupied, habitats close to the source populations. Overall, seeds germinated and seedlings established equally well in occupied and potential sites indicating that suitable habitats are available, but lack seed input. Germination and establishment varied among sowing sites. Moreover, seeds from populations of lower current connectivity established less well in new sites, and establishment was more variable among seeds from smaller than from larger populations, possibly reflecting genetic consequences of habitat fragmentation. Further, establishment across different new environments differed between seeds from different populations. As this was neither related to a home-away contrast nor to geographic or environmental distance between sites it could not clearly be attributed to local adaptation. To promote long-term persistence within this dry-grassland meta-population context we suggest increasing the density of suitable habitats and supporting dispersal connecting multiple sites, e.g. by promoting sheep transhumance, to increase current populations and their connectivity, and to colonise suitable habitats with material from different sources. We suggest that sowing experiments with characteristic species, including multiple source populations and multiple recipient sites, should be used regularly to inform connecting efforts in plant conservation.
We evaluate the Hamiltonian particle method (HPM) and the Nambu discretization applied to the shallow-water equations on the sphere, using the test suggested by Galewsky et al. (2004). Both simulations show excellent conservation of energy and are stable in long-term simulation. We also repeat the test using the ICOSWP scheme, to compare it with the two conservative spatial discretization schemes. The HPM simulation captures the main features of the reference solution, but a wavenumber-5 pattern is dominant in the simulations applied on the ICON grid at relatively low spatial resolutions. Nevertheless, the agreement in statistics between the three schemes indicates their qualitatively similar behavior in long-term integration.
Representatives of the genus Stentor (Stentoridae, Heterotrichea) are striking ciliates in environmental water samples because of their size (up to 4 mm) and their trumpet-like shape. Important for species identification are the following main characteristics: (1) the presence or absence of endosymbiotic algae (zoochlorellae); (2) the colour of the pigmented cortical granules, and (3) the shape of the macronucleus. The complete small subunit rDNA (SSU rDNA) of 19 further representatives of the genus Stentor was sequenced to examine the phylogenetic relationships within this genus and to determine the taxonomic value of these main characteristics. The detailed phylogenetic analyses yielded a separation of all species possessing a single compact macronucleus from those species with an "elongated" macronucleus (moniliform or vermiform). The data also indicate that the uptake of algae as well as the loss of pigmentation happened independently in different lineages. Furthermore, a high level of intraspecific variation within several species was found. Thus, S. muelleri and S. (sp.) cf. katashimai appear to represent distinct species and S. multiformis is composed of a species complex.
The reaction of styrene with trifluoromethanesulfonyl nitrene generated from trifluoromethanesulfonamide in the system (t-BuOCl + NaI) results in the formation of trifluoro-N-[2-phenyl-2-(trifluoromethylsulfonyl)aminoethyl]methanesulfonamide, 1-phenyl-2-iodoethanol, and 2,5-diphenyl-1,4-bis(trifluoromethylsulfonyl)piperazine rather than the expected product of aziridination, 2-phenyl-1-(trifluoromethylsulfonyl)aziridine. The mechanism of the reaction is discussed.
What is the most appropriate sampling scheme to estimate event-based average throughfall? A satisfactory answer to this seemingly simple question has yet to be found, a failure which we attribute to previous efforts' dependence on empirical studies. Here we try to answer this question by simulating stochastic throughfall fields based on parameters for statistical models of large monitoring data sets. We subsequently sampled these fields with different sampling designs and variable sample supports. We evaluated the performance of a particular sampling scheme with respect to the uncertainty of possible estimated means of throughfall volumes. Even for a relative error limit of 20%, an impractically large number of small, funnel-type collectors would be required to estimate mean throughfall, particularly for small events. While stratification of the target area is not superior to simple random sampling, cluster random sampling involves the risk of being less efficient. A larger sample support, e.g., the use of trough-type collectors, considerably reduces the necessary sample sizes and eliminates the sensitivity of the mean to outliers. Since the gain in time associated with the manual handling of troughs versus funnels depends on the local precipitation regime, the employment of automatically recording clusters of long troughs emerges as the most promising sampling scheme. Even so, a relative error of less than 5% appears out of reach for throughfall under heterogeneous canopies. We therefore suspect a considerable uncertainty of input parameters for interception models derived from measured throughfall, in particular, for those requiring data of small throughfall events.
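The effect of sample support can be illustrated with a toy version of such a simulation. The sketch below is not the authors' code: the simulated field here is spatially uncorrelated, whereas real throughfall fields are correlated, so the gain from troughs shown here is an optimistic upper bound. It compares the relative standard error of the estimated areal mean for point-support "funnels" and line-support "troughs".

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical throughfall field on a 200 x 200 grid (mm per event),
# lognormal to mimic the skewed distributions typical of small events
field = rng.lognormal(mean=1.0, sigma=0.6, size=(200, 200))

def rel_error_of_mean(sample_fn, n_samples, reps=2000):
    """Monte Carlo relative standard error of the estimated areal mean."""
    true_mean = field.mean()
    means = np.array([sample_fn(n_samples).mean() for _ in range(reps)])
    return means.std() / true_mean

def funnels(n):
    # point support: n random grid cells
    i = rng.integers(0, 200, n)
    j = rng.integers(0, 200, n)
    return field[i, j]

def troughs(n, length=50):
    # larger support: n random row segments of `length` cells, each averaged
    i = rng.integers(0, 200, n)
    j = rng.integers(0, 150, n)
    return np.array([field[i[k], j[k]:j[k] + length].mean() for k in range(n)])

e_funnel = rel_error_of_mean(funnels, 20)
e_trough = rel_error_of_mean(troughs, 20)
```

With 20 collectors each, the trough estimate has a much smaller relative error than the funnel estimate; in this uncorrelated field the reduction is simply the averaging over 50 cells per trough, while under a heterogeneous canopy spatial correlation would erode part of that gain.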
Propagation and chain-length-averaged termination rate coefficients, k(p) and <k(t)>, for radical polymerizations of methacrylates carrying poly(ethylene glycol) (PEG) units are reported. k(p) derived from pulsed-laser-initiated polymerizations in bulk, in organic solvents, and in ionic liquids follows the methacrylate-type family behavior. In contrast, the diffusion-controlled <k(t)> values obtained from chemically initiated polymerizations with in-line FT-NIR monitoring of monomer conversion are strongly affected by the PEG units in the ester group. Compared to alkyl methacrylates, <k(t)> is unexpectedly high. Moreover, <k(t)> of poly(ethylene glycol) ethyl ether methacrylate shows a significant reduction already at 15% conversion, whereas <k(t)> of dodecyl methacrylate is constant up to at least 70% conversion.
The translation of genetic information according to the sequence of the mRNA template occurs with high accuracy and fidelity. Critical events in each single step of translation are selection of transfer RNA (tRNA), codon reading and tRNA-regeneration for a new cycle. We developed a model that accurately describes the dynamics of single elongation steps, thus providing a systematic insight into the sensitivity of the mRNA translation rate to dynamic environmental conditions. Alterations in the concentration of the aminoacylated tRNA can transiently stall the ribosomes during translation which results, as suggested by the model, in two outcomes: either stress-induced change in the tRNA availability triggers the premature termination of the translation and ribosomal dissociation, or extensive demand for one tRNA species results in a competition between frameshift to an aberrant open-reading frame and ribosomal drop-off. Using the bacterial Escherichia coli system, we experimentally draw parallels between these two possible mechanisms.
Saturn's rings host two known moons, Pan and Daphnis, which are massive enough to clear circumferential gaps in the ring around their orbits. Both moons create wake patterns at the gap edges by gravitational deflection of the ring material (Cuzzi, J.N., Scargle, J.D. [1985]. Astrophys. J. 292, 276-290; Showalter, M.R., Cuzzi, J.N., Marouf, E.A., Esposito, L.W. [1986]. Icarus 66, 297-323). New Cassini observations revealed that these wavy edges deviate from the sinusoidal waveform, which one would expect from a theory that assumes a circular orbit of the perturbing moon and neglects particle interactions. Resonant perturbations of the edges by moons outside the ring system, as well as an eccentric orbit of the embedded moon, may partly explain this behavior (Porco, C.C., and 34 colleagues [2005]. Science 307, 1226-1236; Tiscareno, M.S., Burns, J.A., Hedman, M.M., Spitale, J.N., Porco, C.C., Murray, C.D., and the Cassini Imaging team [2005]. Bull. Am. Astron. Soc. 37, 767; Weiss, J.W., Porco, C.C., Tiscareno, M.S., Burns, J.A., Dones, L. [2005]. Bull. Am. Astron. Soc. 37, 767; Weiss, J.W., Porco, C.C., Tiscareno, M.S. [2009]. Astron. J. 138, 272-286). Here we present an extended non-collisional streamline model which accounts for both effects. We describe the resulting variations of the density structure and the modification of the nonlinearity parameter q. Furthermore, an estimate is given for the applicability of the model. We use the streamwire model introduced by Stewart (Stewart, G.R. [1991]. Icarus 94, 436-450) to plot the perturbed ring density at the gap edges. We apply our model to the Keeler gap edges undulated by Daphnis and to a faint ringlet in the Encke gap close to the orbit of Pan. The modulations of the latter ringlet, induced by the perturbations of Pan (Burns, J.A., Hedman, M.M., Tiscareno, M.S., Nicholson, P.D., Streetman, B.J., Colwell, J.E., Showalter, M.R., Murray, C.D., Cuzzi, J.N., Porco, C.C., and the Cassini ISS team [2005]. Bull. Am. Astron. Soc. 37, 766), can be well described by our analytical model. Our analysis yields a Hill radius of Pan of 17.5 km, which is 9% smaller than the value presented by Porco (Porco, C.C., and 34 colleagues [2005]. Science 307, 1226-1236), but fits well to the radial semi-axis of Pan of 17.4 km. This supports the idea that Pan has filled its Hill sphere with accreted material (Porco, C.C., Thomas, P.C., Weiss, J.W., Richardson, D.C. [2007]. Science 318, 1602-1607). A numerical solution of a streamline is used to estimate the parameters of the Daphnis-Keeler gap system, since the close proximity of the gap edge to the moon induces strong perturbations, not allowing an application of the analytic streamline model. We obtain a Hill radius of 5.1 km for Daphnis, an inner edge variation of 8 km, and an eccentricity for Daphnis of 1.5 x 10(-5). The latter two quantities deviate by a factor of two from values gained by direct observations (Jacobson, R.A., Spitale, J., Porco, C.C., Beurle, K., Cooper, N.J., Evans, M.W., Murray, C.D. [2008]. Astron. J. 135, 261-263; Tiscareno, M.S., Burns, J.A., Hedman, M.M., Spitale, J.N., Porco, C.C., Murray, C.D., and the Cassini Imaging team [2005]. Bull. Am. Astron. Soc. 37, 767), which might be attributed to the neglect of particle interactions and vertical motion in our model.
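The Hill radius used in such an analysis follows from r_H = a (m / 3M)^(1/3), so a fitted r_H can be inverted to give the moon's mass. A small illustrative sketch; Saturn's mass and Pan's semi-major axis below are standard textbook values inserted by us, not taken from the paper.

```python
# Hill radius: r_H = a * (m / (3 M))**(1/3)
# The constants below are illustrative assumptions, not values from the paper.
M_SATURN = 5.683e26   # kg, mass of Saturn
A_PAN = 133_584e3     # m, semi-major axis of Pan (centre of the Encke gap)

def hill_radius(a, m, M=M_SATURN):
    """Hill radius of a moon of mass m orbiting at semi-major axis a."""
    return a * (m / (3.0 * M)) ** (1.0 / 3.0)

def mass_from_hill_radius(a, r_h, M=M_SATURN):
    """Invert the Hill-radius formula: m = 3 M (r_H / a)**3."""
    return 3.0 * M * (r_h / a) ** 3

# mass implied by the paper's fitted Hill radius of 17.5 km for Pan
m_pan = mass_from_hill_radius(A_PAN, 17.5e3)
```

Because r_H scales with the cube root of the mass, a 9% change in the fitted Hill radius corresponds to roughly a 30% change in the inferred mass, which is why edge-fitting provides a sensitive mass constraint.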
Business process management aims at capturing, understanding, and improving work in organizations. The central artifacts are process models, which serve different purposes. Detailed process models are used to analyze concrete working procedures, while high-level models show, for instance, handovers between departments. To provide different views on process models, business process model abstraction has emerged. While several approaches have been proposed, a number of abstraction use cases that are both relevant for industry and scientifically challenging are yet to be addressed. In this paper we systematically develop, classify, and consolidate different use cases for business process model abstraction. The reported work is based on a study with BPM users in the health insurance sector and validated with a BPM consultancy company and a large BPM vendor. The fifteen identified abstraction use cases reflect the industry demand. The related work on business process model abstraction is evaluated against the use cases, which leads to a research agenda.
The programmable network envisioned in the 1990s within standardization and research for the Intelligent Network is currently coming into reality using IP-based Next Generation Networks (NGN) and applying Service-Oriented Architecture (SOA) principles for service creation, execution, and hosting. SOA is the foundation for both next-generation telecommunications and middleware architectures, which are rapidly converging on top of commodity transport services. Services such as triple/quadruple play, multimedia messaging, and presence are enabled by the emerging service-oriented IP Multimedia Subsystem (IMS), and allow telecommunications service providers to maintain, if not improve, their position in the marketplace. SOA becomes the de facto standard in next-generation middleware systems as the system model of choice to interconnect service consumers and providers within and between enterprises. We leverage previous research activities in overlay networking technologies along with recent advances in network abstraction, service exposure, and service creation to develop a paradigm for a service environment providing converged Internet and Telecommunications services that we call Service Broker. Such a Service Broker provides mechanisms to combine and mediate between different service paradigms from the two domains Internet/WWW and telecommunications. Furthermore, it enables the composition of services across these domains and is capable of defining and applying temporal constraints during creation and execution time. By adding network-awareness into the service fabric, such a Service Broker may also act as a next-generation network-to-service element allowing the composition of cross-domain and cross-layer network and service resources.
The contribution of this research is threefold: first, we analyze and classify principles and technologies from Information Technologies (IT) and telecommunications to identify and discuss issues allowing cross-domain composition in a converging service layer. Second, we discuss service composition methods allowing the creation of converged services on an abstract level; in particular, we present a formalized method for model-checking of such compositions. Finally, we propose a Service Broker architecture converging Internet and Telecom services. This environment enables cross-domain feature interaction in services through formalized obligation policies acting as constraints during service discovery, creation, and execution time.
Crustal deformation can be the result of volcanic and tectonic activity such as fault dislocation and magma intrusion. The crustal deformation may precede and/or succeed earthquake occurrence and eruption. To mitigate the associated hazard, continuous monitoring of crustal deformation has accordingly become an important task for geo-observatories and fast response systems. Due to the highly non-linear behavior of crustal deformation fields in time and space, which is not always measurable using conventional geodetic methods (e.g., levelling), innovative techniques of monitoring and analysis are required. In this thesis I describe novel methods to improve the ability for precise and accurate mapping of the spatiotemporal surface deformation field using multiple acquisitions of satellite radar data. Furthermore, to better understand the source of such spatiotemporal deformation fields, I present novel static and time-dependent model inversion approaches. Almost any interferogram includes areas where the signal decorrelates and is distorted by atmospheric delay. In this thesis I detail new analysis methods to reduce the limitations of conventional InSAR by combining the benefits of advanced InSAR methods, such as permanent scatterer InSAR (PSI) and small baseline subsets (SBAS), with a wavelet-based data filtering scheme. This novel InSAR time series methodology is applied, for instance, to monitor the non-linear deformation processes at Hawaii Island. The radar phase change at Hawaii is found to be due to intrusions, eruptions, earthquakes and flank movement processes, superimposed by significant environmental artifacts (e.g., atmospheric). The deformation field I obtained using the new InSAR analysis method is in good agreement with continuous GPS data. This provides an accurate spatiotemporal deformation field at Hawaii, which allows time-dependent source modeling.
Conventional source modeling methods usually deal with a static deformation field, while retrieving the dynamics of the source requires more sophisticated time-dependent optimization approaches. I address this problem by combining Monte Carlo based optimization approaches with a Kalman filter, which provides model parameters of the deformation source that are consistent in time. I found that there are numerous deformation sources at Hawaii Island which interact spatiotemporally; for example, volcano inflation is associated with changes in rifting behavior and temporally linked to silent earthquakes. I applied these new methods to other tectonic and volcanic terrains, most of which reveal the importance of associated or coupled deformation sources. The findings are: 1) the relation between deep and shallow hydrothermal and magmatic sources underneath the Campi Flegrei volcano, 2) gravity-driven deformation at Damavand volcano, 3) fault interaction associated with the 2010 Haiti earthquake, 4) independent block-wise flank motion at the Hilina Fault system, Kilauea, and 5) interaction between a salt diapir and the 2005 Qeshm earthquake in southern Iran. This thesis, written in cumulative form and including 9 manuscripts published or under review in peer-reviewed journals, improves the techniques for InSAR time series analysis and source modeling and shows the mutual dependence between adjacent deformation sources. These findings allow a more realistic estimation of the hazard associated with complex volcanic and tectonic systems.
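The role of the Kalman filter in such a time-dependent inversion can be illustrated in isolation: noisy per-epoch estimates of a source parameter are smoothed into a time-consistent history. The following is a minimal sketch of a scalar random-walk filter on synthetic data, not the thesis implementation (all parameter values are illustrative).

```python
import numpy as np

def kalman_1d(observations, q=1e-3, r=1e-1, x0=0.0, p0=1.0):
    """Minimal random-walk Kalman filter: smooths a noisy series of
    per-epoch source-parameter estimates into a time-consistent history.
    q: process-noise variance, r: observation-noise variance."""
    x, p = x0, p0
    smoothed = []
    for z in observations:
        p = p + q               # predict (random-walk state model)
        k = p / (p + r)         # Kalman gain
        x = x + k * (z - x)     # update with the new epoch estimate
        p = (1 - k) * p
        smoothed.append(x)
    return np.array(smoothed)

# synthetic example: a slowly inflating source with noisy epoch estimates
rng = np.random.default_rng(3)
truth = np.linspace(0.0, 1.0, 50)           # e.g. volume change, arbitrary units
obs = truth + rng.normal(0.0, 0.3, 50)      # noisy Monte Carlo epoch solutions
est = kalman_1d(obs, q=1e-3, r=0.09, x0=obs[0])
```

Chaining the filter across epochs is what enforces temporal consistency of the source parameters, in contrast to inverting each epoch independently.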
This professorial dissertation collects several empirical studies on tax distribution and tax reform in Germany. Chapter 2 deals with two studies on effective income taxation, based on representative micro data sets from tax statistics. The first study analyses effective income taxation at the individual level, in particular with respect to top incomes. It is based on an integrated micro data file of household survey data and income tax statistics, which captures the entire income distribution up to the very top. Despite substantial tax base erosion and reductions of top tax rates, the German personal income tax has remained effectively progressive. The distribution of the tax burden is highly concentrated and the German economic elite is still taxed relatively heavily, even though the effective tax rate for this group has significantly declined. The second study of Chapter 2 highlights the effective income taxation of functional income sources, such as labor income, business and capital income, etc. Using income tax micro data and microsimulation models, we allocate the individual income tax liability to the respective income sources, according to different apportionment schemes accounting for losses. We find that the choice of the apportionment scheme markedly affects the tax shares of income sources and implicit tax rates, in particular those of capital income. Income types without significant losses, such as labor income or transfer incomes, show higher tax shares and implicit tax rates if we account for losses. The opposite is true for capital income, in particular for income from renting and leasing. Chapter 3 presents two studies on business taxation, based on representative micro data sets from tax statistics and the microsimulation model BizTax. The first part provides a study on fundamental reform options for the German local business tax.
We find that today's high concentration of local business tax revenues on corporations with high profits decreases if the tax base is broadened by integrating more taxpayers and by including more elements of business value added. The reform scenarios with a broader tax base distribute the local business tax revenue per capita more equally across regional categories. The second study of Chapter 3 discusses the macroeconomic performance of business taxation against the background of corporate income. A comparison of the tax base reported in tax statistics with the macroeconomic corporate income from national accounts points to considerable tax base erosion. The average implicit tax rate on corporate income has been around 20 percent since 2001, thus falling considerably short of statutory tax rates and effective tax rates discussed in the literature. For lack of detailed accounting data it is hard to give precise reasons for the presumptive tax base erosion. Chapter 4 deals with several assessment studies on the ecological tax reform implemented in Germany as of 1999. First, we describe the scientific, ideological, and political background of the ecological tax reform. Further, we present the main findings of a first systematic impact analysis. We employ two macroeconomic models, an econometric input-output model and a recursive-dynamic computable general equilibrium (CGE) model. Both models show that Germany's ecological tax reform helps to reduce energy consumption and CO2 emissions without having a substantial adverse effect on overall economic growth. It could have a slightly positive effect on employment. The reform's impact on the business sector and the effects of special provisions granted to agriculture and the goods and materials sectors are outlined in a further study. The special provisions avoid higher tax burdens on energy-intensive production. However, they widely reduce the marginal tax rates and thus the incentives for energy saving.
Though the 2003 reform of the special provisions increased the overall tax burden of the energy-intensive industry, the enlarged eligibility for tax rebates neutralizes the ecological incentives. Based on the Income and Consumption Survey of 2003, we have analyzed the distributional impact of the ecological tax reform. The increased energy taxes show a clearly regressive impact relative to disposable income. Families with children face a higher tax burden relative to household income. The reduction of pension contributions and the automatic adjustment of social security transfers widely mitigate this regressive impact. Households with low income or with many children nevertheless bear a slight increase in tax burden. Refunding the eco-tax revenue by an eco bonus would make the reform clearly progressive.
Ghrelin is a unique hunger-inducing stomach-borne hormone. It activates orexigenic circuits in the central nervous system (CNS) when acylated with a fatty acid residue by the Ghrelin O-acyltransferase (GOAT). Soon after the discovery of ghrelin, a theoretical model emerged which suggests that the gastric peptide ghrelin is the first "meal initiation molecule".
This work describes the realization of physically crosslinked networks based on gelatin by the introduction of functional groups enabling specific supramolecular interactions. Molecular models were developed in order to predict the material properties and to establish a knowledge-based approach to material design. The effect of additional supramolecular interactions with hydroxyapatite was then studied in composite materials. The calculated properties are compared to experimental results to validate the models. The models are then further used for the study of physically crosslinked networks. Gelatin was functionalized with desaminotyrosine (DAT) and desaminotyrosyl-tyrosine (DATT) side groups, derived from the natural amino acid tyrosine. These groups can potentially undergo π-π and hydrogen-bonding interactions, also under physiological conditions. Molecular dynamics (MD) simulations were performed on models with 0.8 wt.-% or 25 wt.-% water content, using the second-generation forcefield CFF91. The models were validated by comparison with specific experimental data such as density, peptide conformational angles and X-ray scattering spectra. The models were then used to predict the supramolecular organization of the polymer chains, analyze the formation of physical netpoints and calculate the mechanical properties. An important finding of the simulations was that the number of observed physical netpoints increased with the number of aromatic groups. The number of relatively stable physical netpoints, on average zero for natural gelatin, increased to 1 and 6 for DAT- and DATT-functionalized gelatins, respectively. A comparison with the Flory-Rehner model suggested a reduction of the equilibrium swelling by a factor of 6 for the DATT-functionalized materials in water. The functionalized gelatins could be synthesized by chemoselective coupling of the free carboxylic acid groups of DAT and DATT to the free amino groups of gelatin.
At 25 wt.-% water content, the simulated and experimentally determined elastic mechanical properties (e.g. Young's modulus) were both in the order of GPa and were not influenced by the degree of aromatic modification. The experimental equilibrium degree of swelling in water decreased with increasing number of inserted aromatic functions (from 2800 vol.-% for pure gelatin to 300 vol.-% for the DATT-modified gelatin); at the same time, Young's modulus, elongation at break, and maximum tensile strength increased. It could be shown that the functionalization with DAT and DATT, together with controlled drying conditions, influences the chain organization of gelatin-based materials. Functionalization with DAT and DATT led to a drastic reduction of helical renaturation, which could be controlled more finely via the applied drying conditions. The properties of the materials could thus be influenced by two independent methods. Composite materials of DAT- and DATT-functionalized gelatins with hydroxyapatite (HAp) show a drastic reduction of the swelling degree. In tensile tests and rheological measurements, the composites equilibrated in water had increased Young's moduli (from 200 kPa up to 2 MPa) and tensile strength (from 57 kPa up to 1.1 MPa) compared to the natural polymer matrix, without affecting the elongation at break. Furthermore, an increased thermal stability of the networks, from 40 °C to 85 °C, could be demonstrated. The differences in behaviour between the functionalized gelatins and pure gelatin as matrix suggest an additional stabilizing bond between the incorporated aromatic groups and the hydroxyapatite.
About the relation between implicit Theory of Mind & the comprehension of complement sentences
(2010)
Previous studies on the relation between language and social cognition have shown that children's mastery of embedded sentential complements plays a causal role in the development of a Theory of Mind (ToM). Children start to succeed on complementation tasks, in which they are required to report the content of an embedded clause, in the second half of the fourth year. Traditional ToM tasks test the child's ability to predict that a person who is holding a false belief (FB) about a situation will act "falsely". In these tasks, children do not represent FBs until the age of 4 years. According to the linguistic determinism hypothesis, only the unique syntax of complement sentences provides the format for representing FBs. However, experiments measuring children's looking behavior instead of their explicit predictions provided evidence that already 2-year-olds possess an implicit ToM. This dissertation examined the question of whether there is an interrelation also between implicit ToM and the comprehension of complement sentences in typically developing German preschoolers. Two studies were conducted. In a correlational study (Study 1), 3-year-old children's performance on a traditional (explicit) FB task, on an implicit FB task and on language tasks measuring children's comprehension of tensed sentential complements was collected and tested for interdependence. Eye-tracking methodology was used to assess implicit ToM by measuring participants' spontaneous anticipatory eye movements while they were watching FB movies. Two central findings emerged. First, predictive looking (implicit ToM) was not correlated with complement mastery, although both measures were associated with explicit FB task performance. This pattern of results suggests that explicit, but not implicit, ToM is language dependent. Second, as a group, 3-year-olds did not display implicit FB understanding. That is, previous findings of a precocious reasoning ability could not be replicated.
This indicates that the characteristics of predictive looking tasks play a role in eliciting implicit FB understanding, as the current task was completely nonverbal and as complex as traditional FB tasks. Study 2 took a methodological approach by investigating whether children display an earlier comprehension of sentential complements when using the same means of measurement as used in experimental tasks tapping implicit ToM, namely anticipatory looking. Two experiments were conducted. 3-year-olds were confronted either with a complement sentence expressing the protagonist’s FB (Exp. 1) or with a complex sentence expressing the protagonist’s belief without giving any information about the truth/falsity of the belief (Exp. 2). Afterwards, their expectations about the protagonist’s future behavior were measured. Overall, implicit measures revealed no substantially earlier understanding of sentential complementation. Whereas 3-year-olds did not display a comprehension of complex sentences if these embedded a false proposition, children from 3;9 years on were proficient in processing complement sentences if the truth value of the embedded proposition could not be evaluated. This pattern of results suggests that (1) the linguistic expression of a person’s FB does not elicit implicit FB understanding and that (2) the assessment of the purely syntactic understanding of complement sentences is affected by competing reality information. In conclusion, this dissertation found no evidence that implicit ToM is related to the comprehension of sentential complementation. The findings suggest that implicit ToM might be based on nonlinguistic processes. Results are discussed in the light of recently proposed dual-process models that assume two cognitive mechanisms accounting for different levels of ToM task performance.
This thesis is concerned with the development of numerical methods using finite difference techniques for the discretization of initial value problems (IVPs) and initial boundary value problems (IBVPs) of certain hyperbolic systems which are first order in time and second order in space. This type of system appears in some formulations of the Einstein equations, such as ADM, BSSN, NOR, and the generalized harmonic formulation. For the IVP, the stability method proposed in [14] is extended from second- and fourth-order centered schemes to 2n-order accuracy, including also the case when some first order derivatives are approximated with off-centered finite difference operators (FDOs) and dissipation is added to the right-hand sides of the equations. For the model problem of the wave equation, special attention is paid to the analysis of Courant limits and numerical speeds. Although off-centered FDOs have larger truncation errors than centered FDOs, it is shown that in certain situations, off-centering by just one point can be beneficial for the overall accuracy of the numerical scheme. The wave equation is also analyzed with respect to its initial boundary value problem. All three types of boundaries that can appear in this case - outflow, inflow and completely inflow - are investigated. Using the ghost-point method, 2n-accurate (n = 1, 4) numerical prescriptions are given for each type of boundary. The inflow boundary is also approached using the SAT-SBP method. At the end of the thesis, a 1-D variant of the BSSN formulation is derived and some of its IBVPs are considered. The boundary procedures, based on the ghost-point method, are intended to preserve the interior 2n-accuracy. Numerical tests show that this is the case if sufficient dissipation is added to the right-hand sides of the equations.
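The abstract's model problem can be illustrated with a minimal sketch: the wave equation written, as in the thesis, first order in time and second order in space (u_t = v, v_t = u_xx), discretized with the centered second-order operator in space. The grid size, CFL factor and the staggered-leapfrog time integrator below are illustrative choices, not the thesis code.

```python
import math

def wave_leapfrog(N=100, cfl=0.5, t_end=1.0):
    """Solve u_t = v, v_t = u_xx on [0,1) with periodic boundaries,
    using the centered second-order operator D+D- in space and a
    staggered leapfrog in time.  Initial data u = sin(2*pi*x), v = 0
    has the exact solution u(x,t) = sin(2*pi*x)*cos(2*pi*t)."""
    h = 1.0 / N
    dt = cfl * h                       # Courant-limited time step
    steps = round(t_end / dt)
    x = [i * h for i in range(N)]
    u = [math.sin(2 * math.pi * xi) for xi in x]

    def uxx(w):
        # centered second-order approximation of the second derivative
        return [(w[(i - 1) % N] - 2 * w[i] + w[(i + 1) % N]) / h**2
                for i in range(N)]

    # stagger v to t = dt/2 with a half Euler step (v(0) = 0)
    v = [0.5 * dt * d for d in uxx(u)]
    for _ in range(steps):
        u = [ui + dt * vi for ui, vi in zip(u, v)]   # u at t + dt
        v = [vi + dt * di for vi, di in zip(v, uxx(u))]  # v at t + 3dt/2
    return x, u
```

After one full period (t_end = 1.0) the numerical solution should return to the initial profile up to the scheme's phase error.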
Reviewed work: Leshonot yehude Sefarad ve-ha-mizrach vesifruyotehem / Languages and literatures of Sephardic and Oriental Jews. - Jerusalem : Misgav Yerushalayim, 2009. - 484 S. [hebr.] + 434 S. [lat.] ; Ill.
The origin and evolution of granites has been widely studied because granitoid rocks constitute a major portion of the Earth's crust. The formation of granitic magma is, besides temperature, mainly controlled by the water content of these rocks. The presence of water in magmas plays an important role due to the ability of aqueous fluids to change the chemical composition of the magma. The exsolution of aqueous fluids from melts is closely linked to a fractionation of elements between the two phases. Aqueous fluids then migrate to shallower parts of the Earth's crust because of their lower density compared to that of melts and adjacent rocks. This process separates fluids and melts; furthermore, during the ascent, aqueous fluids can react with the adjacent rocks and alter their chemical signature. This is particularly important during the formation of magmatic-hydrothermal ore deposits or in the late stages of the evolution of magmatic complexes. For a deeper insight into these processes, it is essential to improve our knowledge of element behavior in such systems. In particular, trace elements are used for these studies and petrogenetic interpretations because, unlike major elements, they are not essential for the stability of the phases involved and often reflect magmatic processes with less ambiguity. However, for the majority of important trace elements, the dependence of the geochemical behavior on temperature, pressure, and in particular on the composition of the system is only incompletely or not at all experimentally studied. Former studies often focus on the determination of fluid−melt partition coefficients (Df/m=cfluid/cmelt) of economically interesting elements, e.g., Mo, Sn, Cu, and there are some partitioning data available for elements that are also commonly used for petrological interpretations. 
At present, no systematic experimental data on trace element behavior in fluid−melt systems as a function of pressure, temperature, and chemical composition are available. Additionally, almost all existing data are based on the analysis of quenched phases. This results in substantial uncertainties, particularly for the quenched aqueous fluid, because trace element concentrations may change upon cooling. The objective of this PhD thesis was to study fluid−melt partition coefficients between aqueous solutions and granitic melts for different trace elements (Rb, Sr, Ba, La, Y, and Yb) as a function of temperature, pressure, salinity of the fluid, composition of the melt, and experimental and analytical approach. The latter included the refinement of an existing method to measure trace element concentrations in fluids equilibrated with silicate melts directly at elevated pressures and temperatures using a hydrothermal diamond-anvil cell and synchrotron radiation X-ray fluorescence microanalysis. The application of this in-situ method makes it possible to avoid the main source of error in data from quench experiments, i.e., the trace element concentration in the fluid. A comparison of the in-situ results to data of conventional quench experiments allows a critical evaluation of quench data from this study and from the literature. In detail, the starting materials consisted of a suite of trace element doped haplogranitic glasses with ASI varying between 0.8 and 1.4 and H2O or a chloridic solution with m NaCl/KCl=1 and different salinities (1.16 to 3.56 m (NaCl+KCl)). Experiments were performed at 750 to 950 °C and 0.2 or 0.5 GPa using conventional quench devices (externally and internally heated pressure vessels) with different quench rates, and at 750 °C and 0.2 to 1.4 GPa with in-situ analysis of the trace element concentration in the fluids. The fluid−melt partitioning data of all studied trace elements show (1) a preference for the melt (Df/m < 1) at all studied conditions, (2) one to two orders of magnitude higher Df/m using chloridic solutions compared to experiments with H2O, (3) a clear dependence on the melt composition for fluid−melt partitioning of Sr, Ba, La, Y, and Yb in experiments using chloridic solutions, (4) quench rate−related differences of fluid−melt partition coefficients of Rb and Sr, and (5) distinctly higher fluid−melt partitioning data obtained from in-situ experiments than from comparable quench runs, particularly in the case of H2O as starting solution. The data point to a preference of all studied trace elements for the melt even at fairly high salinities, which contrasts with other experimental studies but is supported by data from studies of natural co-genetically trapped fluid and melt inclusions. The in-situ measurements of trace element concentrations in the fluid verify that aqueous fluids change their composition upon cooling, which is particularly important for Cl-free systems. The distinct differences of the in-situ results to quench data of this study, as well as to data from the literature, signify the importance of careful fluid sampling and analysis. Therefore, the direct measurement of trace element contents in fluids equilibrated with silicate melts at elevated PT conditions represents an important development to obtain more reliable fluid−melt partition coefficients. For further improvement, both the aqueous fluid and the silicate melt need to be analyzed in-situ, because partitioning data that are based on the direct measurement of the trace element content in the fluid and analysis of a quenched melt are still not completely free of quench effects. At present, all available data on element complexation in aqueous fluids in equilibrium with silicate melts at high PT are indirectly derived from partitioning data, which in these experiments involves assumptions about the species present in the fluid. 
However, the activities of chemical components in these partitioning experiments are not well constrained, which is required for the definition of exchange equilibria between melt and fluid species. For example, the melt-dependent variation of the partition coefficient observed for Sr implies that this element is not complexed solely by Cl−, as suggested previously. The data indicate a more complicated complexation of Sr in the aqueous fluid. To verify this hypothesis, the in-situ setup was also used to determine strontium complexation in fluids equilibrated with silicate melts at the desired PT conditions by the application of X-ray absorption near edge structure (XANES) spectroscopy. First results show a strong effect of both fluid and melt composition on the resulting XANES spectra, which indicates different complexation environments for Sr.
Lake ecosystems across the globe have responded to climate warming of recent decades. However, correctly attributing observed changes to altered climatic conditions is complicated by multiple anthropogenic influences on lakes. This thesis contributes to a better understanding of climate impacts on freshwater phytoplankton, which forms the basis of the food chain and decisively influences water quality. The analyses were, for the most part, based on a long-term data set of physical, chemical and biological variables of a shallow, polymictic lake in north-eastern Germany (Müggelsee), which was subject to a simultaneous change in climate and trophic state during the past three decades. Data analysis included constructing a dynamic simulation model, implementing a genetic algorithm to parameterize models, and applying statistical techniques of classification tree and time-series analysis. Model results indicated that climatic factors and trophic state interactively determine the timing of the phytoplankton spring bloom (phenology) in shallow lakes. Under equally mild spring conditions, the phytoplankton spring bloom collapsed earlier under high than under low nutrient availability, due to a switch from a bottom-up driven to a top-down driven collapse. A novel approach to model phenology proved useful to assess the timings of population peaks in an artificially forced zooplankton-phytoplankton system. Mimicking climate warming by lengthening the growing period advanced algal blooms and consequently also peaks in zooplankton abundance. Investigating the reasons for the contrasting development of cyanobacteria during two recent summer heat wave events revealed that anomalously hot weather did not always, as often hypothesized, promote cyanobacteria in the nutrient-rich lake studied. The seasonal timing and duration of heat waves determined whether critical thresholds of thermal stratification, decisive for cyanobacterial bloom formation, were crossed. 
In addition, the temporal patterns of heat wave events influenced the summer abundance of some zooplankton species, which as predators may serve as a buffer by suppressing phytoplankton bloom formation. This thesis adds to the growing body of evidence that lake ecosystems have strongly responded to climatic changes of recent decades. It reaches beyond many previous studies of climate impacts on lakes by focusing on underlying mechanisms and explicitly considering multiple environmental changes. Key findings show that climate impacts are more severe in nutrient-rich than in nutrient-poor lakes. Hence, to develop lake management plans for the future, limnologists need to seek a comprehensive, mechanistic understanding of overlapping effects of the multi-faceted human footprint on aquatic ecosystems.
Temporal gravimeter observations, used in geodesy and geophysics to study variations of the Earth’s gravity field, are influenced by local water storage changes (WSC) and - from this perspective - WSC add noise to the gravimeter signal records. At the same time, the part of the gravity signal caused by WSC may provide substantial information for hydrologists. Water storage is the fundamental state variable of hydrological systems, but comprehensive data on total WSC are practically inaccessible and their quantification is associated with a high level of uncertainty at the field scale. This study investigates the relationship between temporal gravity measurements and WSC for the superconducting gravimeter (SG) of the Geodetic Observatory Wettzell, Germany, in order to reduce the interfering hydrological signal in temporal gravity measurements and to explore the value of temporal gravity measurements for hydrology. A 4D forward model with a spatially nested discretization domain was developed to simulate and calculate the local hydrological effect on the temporal gravity observations. An intensive measurement system was installed at the Geodetic Observatory Wettzell, and WSC were measured in all relevant storage components, namely groundwater, saprolite, soil, top soil and snow storage. The monitoring system also comprised a suction-controlled, weighable, monolith-filled lysimeter, allowing a first-ever direct comparison of a lysimeter and a gravimeter. Lysimeter data were used to estimate WSC at the field scale in combination with complementary observations and a hydrological 1D model. Total local WSC were derived, uncertainties were assessed, and the hydrological gravity response was calculated from the WSC. A simple conceptual hydrological model was calibrated and evaluated against records of the superconducting gravimeter, soil moisture and groundwater time series. 
The model was evaluated by a split-sample test and validated against independently estimated WSC from the lysimeter-based approach. A simulation of the hydrological gravity effect showed that WSC of one meter height along the topography caused a gravity response of 52 µGal, whereas, on flat terrain as generally assumed in geodesy, the same water mass variation causes a gravity change of only 42 µGal (Bouguer approximation). The radius of influence of local water storage variations can be limited to 1000 m, and 50 % to 80 % of the local hydrological gravity signal is generated within a radius of 50 m around the gravimeter. At the Geodetic Observatory Wettzell, WSC in the snow pack, top soil, unsaturated saprolite and fractured aquifer are all important terms of the local water budget. With the exception of snow, all storage components have gravity responses of the same order of magnitude and are therefore relevant for gravity observations. The comparison of the total hydrological gravity response to the gravity residuals obtained from the SG showed similarities in both short-term and seasonal dynamics. However, the results demonstrated the limitations of estimating total local WSC using hydrological point measurements. The results of the lysimeter-based approach showed that gravity residuals are caused to a larger extent by local WSC than previously estimated. A comparison of the results with other methods used in the past to correct temporal gravity observations for the local hydrological influence showed that the lysimeter measurements improved the independent estimation of WSC significantly and thus provided a better way of estimating the local hydrological gravity effect. In the context of hydrological noise reduction, at sites where temporal gravity observations are used for geophysical studies beyond local hydrology, the installation of a lysimeter in combination with complementary hydrological measurements is recommended. 
From the hydrological viewpoint, using gravimeter data as a calibration constraint improved the model results in comparison to hydrological point measurements. Thanks to their capacity to integrate over different storage components and a larger area, gravimeters provide generalized information on total WSC at the field scale. Due to their integrative nature, gravity data must be interpreted with great care in hydrological studies. However, gravimeters can serve as a novel measurement instrument for hydrology, and the application of gravimeters especially designed to address open research questions in hydrology is recommended.
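The flat-terrain reference value quoted in the abstract (42 µGal per meter of water) can be checked with the Bouguer plate approximation. The snippet below is an illustrative back-of-the-envelope calculation, not part of the thesis.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
rho_water = 1000.0   # density of water, kg m^-3
h = 1.0              # water storage change expressed as 1 m of water

# Bouguer plate: delta_g = 2 * pi * G * rho * h (infinite flat slab)
delta_g = 2 * math.pi * G * rho_water * h   # in m s^-2
delta_g_microgal = delta_g / 1e-8           # 1 microGal = 1e-8 m s^-2
```

This yields roughly 41.9 µGal, matching the "only 42 µGal" figure for flat terrain; the larger 52 µGal response arises when the water mass follows the local topography.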
Large open-source software projects involve developers with a wide variety of backgrounds and expertise. Such software projects furthermore include many internal APIs that developers must understand and use properly. Depending on their intended purpose, these APIs are used more or less frequently, and by developers with more or less expertise. In this paper, we study the impact of usage patterns and developer expertise on the rate of defects occurring in the use of internal APIs. For this preliminary study, we focus on memory management APIs in the Linux kernel, as the use of these has been shown to be highly error prone in previous work. We study defect rates and developer expertise, to consider, e.g., whether widely used APIs are more defect prone because they are used by less experienced developers, or whether defects in widely used APIs are more likely to be fixed.
Preface
(2010)
Aspect-oriented programming, component models, and design patterns are modern and actively evolving techniques for improving the modularization of complex software. In particular, these techniques hold great promise for the development of "systems infrastructure" software, e.g., application servers, middleware, virtual machines, compilers, operating systems, and other software that provides general services for higher-level applications. The developers of infrastructure software are faced with increasing demands from application programmers needing higher-level support for application development. Meeting these demands requires careful use of software modularization techniques, since infrastructural concerns are notoriously hard to modularize. Aspects, components, and patterns provide very different means to deal with infrastructure software, but despite their differences, they have much in common. For instance, component models try to free the developer from the need to deal directly with services like security or transactions. These are primary examples of crosscutting concerns, and modularizing such concerns is the main target of aspect-oriented languages. Similarly, design patterns like Visitor and Interceptor facilitate the clean modularization of otherwise tangled concerns. Building on the ACP4IS meetings at AOSD 2002-2009, this workshop aims to provide a highly interactive forum for researchers and developers to discuss the application of and relationships between aspects, components, and patterns within modern infrastructure software. The goal is to put aspects, components, and patterns into a common reference frame and to build connections between the software engineering and systems communities.
Because software development is increasingly expensive and time-consuming, software reuse gains importance. Aspect-oriented software development modularizes crosscutting concerns, which enables their systematic reuse. The literature provides a number of AOP patterns and best practices for developing reusable aspects, based on compelling examples for concerns like tracing, transactions and persistence. However, such best practices are lacking for systematically reusing invasive aspects. In this paper, we present the ‘callback mismatch problem’. This problem arises in the context of abstraction mismatch, in which the aspect is required to issue a callback to the base application. As a consequence, the composition of invasive aspects is cumbersome to implement, difficult to maintain and impossible to reuse. We motivate this problem with a real-world example, show that it persists in the current state of the art, and outline the need for advanced aspectual composition mechanisms to deal with it.
A deterministic cyclic scheduling of partitions at the operating system level is assumed for a multiprocessor system. In this paper, we propose a tool for generating such schedules. We use constraint-based programming and develop methods and concepts for a combined interactive and automatic partition scheduling system. This paper is also devoted to basic methods and techniques for modeling and solving this partition scheduling problem. Initial application of our partition scheduling tool has proved successful and demonstrated the suitability of the methods used.
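As a toy illustration of the kind of constraint problem the tool addresses (the tool itself uses constraint-based programming, not brute force), a deterministic cyclic schedule for a single processor can be found by exhaustive search. Partition names, slot counts and demands below are invented for illustration.

```python
from itertools import product

def cyclic_schedule(num_slots, demands):
    """Find one deterministic cyclic schedule: each partition p must
    receive exactly demands[p] slots per major cycle, and at most one
    partition runs per slot (single processor; None marks an idle slot).
    Returns a slot -> partition assignment, or None if infeasible."""
    parts = list(demands)
    for assign in product(parts + [None], repeat=num_slots):
        # constraint: every partition gets its demanded number of slots
        if all(assign.count(p) == demands[p] for p in parts):
            return list(assign)
    return None
```

A real constraint solver replaces the enumeration with propagation and search, but the model (slot variables, cardinality constraints) is the same.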
This thesis presents methods for automated synthesis of flexible chip multiprocessor systems from parallel programs targeted at FPGAs, to exploit both task-level parallelism and architecture customization. Automated synthesis is necessitated by the complexity of the design space. A detailed description of the design space is provided in order to determine which parameters should be modeled to facilitate automated synthesis by optimizing a cost function, the emphasis being placed on inclusive modeling of parameters from the application, architectural and physical subspaces, as well as their joint coverage in order to avoid pre-constraining the design space. Given a parallel program and an IP library, the automated synthesis problem is to simultaneously (i) select processors, (ii) map and schedule tasks onto them, and (iii) select one or several networks for inter-task communications such that design constraints and optimization objectives are met. The research objective in this thesis is to find a suitable model for automated synthesis, and to evaluate methods of using the model for architectural optimizations. Our contributions are a holistic approach for the design of such systems, corresponding models to facilitate automated synthesis, evaluation of optimization methods using state-of-the-art integer linear programming and answer set programming, as well as the development of synthesis heuristics to address runtime challenges.
An important characteristic of Service-Oriented Architectures is that clients do not depend on the service implementation's internal assignment of methods to objects. It is perhaps the most important technical characteristic that differentiates them from more common object-oriented solutions. This characteristic makes clients and services malleable, allowing them to be rearranged at run-time as circumstances change. That improvement in malleability is impaired by requiring clients to direct service requests to particular services. Ideally, the clients are totally oblivious to the service structure, as they are to aspect structure in aspect-oriented software. Removing knowledge of a method implementation's location, whether in object or service, requires re-defining the boundary line between programming language and middleware, making clearer specification of dependence on protocols, and bringing the transaction-like concept of failure scopes into language semantics as well. This paper explores consequences and advantages of a transition from object-request brokering to service-request brokering, including the potential to improve our ability to write more parallel software.
A constraint programming system combines two essential components: a constraint solver and a search engine. The constraint solver reasons about the satisfiability of conjunctions of constraints, and the search engine controls the search for solutions by iteratively exploring a disjunctive search tree defined by the constraint program. The Monadic Constraint Programming framework gives a monadic definition of constraint programming in which the solver is defined as a monad threaded through the monadic search tree. Search and search strategies can then be defined as first-class objects that can themselves be built or extended by composable search transformers. Search transformers give a powerful and unifying approach to viewing search in constraint programming, and the resulting constraint programming system is first class and extremely flexible.
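The core idea, a first-class search tree with search strategies as composable transformers over it, can be sketched outside the original Haskell setting. The following Python transliteration is illustrative only (node names, the toy problem and the depth-bounded transformer are invented, and the solver monad is omitted).

```python
from dataclasses import dataclass

# The disjunctive search tree as a first-class value
@dataclass
class Fail: pass            # dead end
@dataclass
class Ret:                  # leaf holding a solution
    sol: dict
@dataclass
class Try:                  # disjunction: explore left, then right
    left: object
    right: object

def dfs(tree):
    """Plain depth-first enumeration of all solutions."""
    if isinstance(tree, Fail): return []
    if isinstance(tree, Ret): return [tree.sol]
    return dfs(tree.left) + dfs(tree.right)

def depth_bounded(limit):
    """A search transformer: prune subtrees below a depth limit."""
    def search(tree, depth=0):
        if depth > limit or isinstance(tree, Fail): return []
        if isinstance(tree, Ret): return [tree.sol]
        return search(tree.left, depth + 1) + search(tree.right, depth + 1)
    return search

# Toy program: x, y in {0, 1} with the constraint x != y,
# expressed directly as a search tree
def solve(x=None, y=None):
    if x is None: return Try(solve(0, y), solve(1, y))
    if y is None: return Try(solve(x, 0), solve(x, 1))
    return Ret({"x": x, "y": y}) if x != y else Fail()

tree = solve()
```

Because the tree is a value, strategies such as `dfs` and `depth_bounded(limit)` can be swapped or composed without touching the constraint program itself.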
The genome can be considered the blueprint of an organism. Composed of DNA, it harbours all organism-specific instructions for the synthesis of all structural components and their associated functions. The role of carrier of actual molecular structures and functions was long believed to be assumed exclusively by proteins encoded in particular segments of the genome, the genes. In the process of converting the information stored in genes into functional proteins, RNA - a third major molecule class - was discovered early on to act as a messenger, copying the genomic information and relaying it to the protein-synthesizing machinery. Furthermore, RNA molecules were identified that assist in the assembly of amino acids into native proteins. For a long time, these rather passive roles were thought to be the sole purpose of RNA. However, in recent years, new discoveries have led to a radical revision of this view. First, RNA molecules with catalytic functions - thought to be the exclusive domain of proteins - were discovered. Then, scientists realized that much more of the genomic sequence is transcribed into RNA molecules than there are proteins in cells, raising the question of what the functions of all these molecules are. Furthermore, very short and altogether new types of RNA molecules, seemingly playing a critical role in orchestrating cellular processes, were discovered. Thus, RNA has become a central research topic in molecular biology, even to the extent that some researchers dub cells “RNA machines”. This thesis aims to contribute to our understanding of RNA-related phenomena by applying Bioinformatics means. First, we performed a genome-wide screen to identify sites at which the chemical composition of DNA (the genotype) critically influences phenotypic traits (the phenotype) of the model plant Arabidopsis thaliana. Whole-genome hybridisation arrays were used, and an informatics strategy was developed to identify polymorphic sites from hybridisation to genomic DNA. 
Following this approach, genotype-phenotype associations were discovered not only across the entire Arabidopsis genome, but also in regions not currently known to encode proteins, which thus represent candidate sites for novel functional RNA molecules. By statistically associating them with phenotypic traits, clues as to their particular functions were obtained. Furthermore, these candidate regions were subjected to a novel RNA-function classification prediction method developed as part of this thesis. While determining the chemical structure (the sequence) of candidate RNA molecules is relatively straightforward, elucidating their structure-function relationships is much more challenging. Towards this end, we devised and implemented a novel algorithmic approach to predict the structural and, thereby, functional class of RNA molecules. In this algorithm, the concept of treating RNA molecule structures as graphs was introduced. We demonstrate that this abstraction of the actual structure leads to meaningful results that may greatly assist in the characterization of novel RNA molecules. Furthermore, by using graph-theoretic properties as descriptors of structure, we identified particular structural features of RNA molecules that may determine their function, thus providing new insights into the structure-function relationships of RNA. The method (termed Grapple) has been made available to the scientific community as a web-based service. RNA has taken centre stage in molecular biology research, and novel discoveries can be expected to further solidify the central role of RNA in the origin and support of life on earth. As illustrated by this thesis, Bioinformatics methods will continue to play an essential role in these discoveries.
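The graph abstraction of RNA structure described above can be sketched as follows. This is an illustrative reimplementation of the general idea (dot-bracket input, backbone plus base-pair edges, degree sequence as a graph-theoretic descriptor), not the Grapple code itself.

```python
def rna_graph(dotbracket):
    """Build an adjacency-set graph from a dot-bracket secondary
    structure string: consecutive bases are joined by backbone edges,
    and matched brackets by base-pair edges."""
    n = len(dotbracket)
    adj = {i: set() for i in range(n)}
    for i in range(n - 1):              # backbone edges
        adj[i].add(i + 1); adj[i + 1].add(i)
    stack = []
    for i, c in enumerate(dotbracket):  # base-pair edges
        if c == '(':
            stack.append(i)
        elif c == ')':
            j = stack.pop()
            adj[i].add(j); adj[j].add(i)
    return adj

def degree_sequence(adj):
    """A simple graph-theoretic descriptor usable as a feature vector."""
    return sorted(len(nbrs) for nbrs in adj.values())
```

For a small hairpin such as `((..))`, the paired positions acquire degree 3 (two backbone neighbours plus one pairing partner), and such degree-based features can feed a structure-class predictor.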
Models employed in exercise psychology highlight the role of reflective processes for explaining behavior change. However, as discussed in social cognition literature, information-processing models also consider automatic processes (dual-process models). To examine the relevance of automatic processing in exercise psychology, we used a priming task to assess the automatic evaluations of exercise stimuli in physically active sport and exercise majors (n = 32), physically active nonsport majors (n = 31), and inactive students (n = 31). Results showed that physically active students responded faster to positive words after exercise primes, whereas inactive students responded more rapidly to negative words. Priming task reaction times were successfully used to predict reported amounts of exercise in an ordinal regression model. Findings were obtained only with experiential items reflecting negative and positive consequences of exercise. The results illustrate the potential importance of dual-process models in exercise psychology.
Reviewed work: Gartner, Isabella: Menorah : Jüdisches Familienblatt für Wissenschaft, Kunst und Literatur (1923–1932) ; Materialien zur Geschichte einer Wiener zionistischen Zeitschrift. - Würzburg : Königshausen & Neumann, 2009. - 356 S. ISBN 978-3-8260-3864-8
Reviewed work: Stephan Dörschel: Fritz Wisten : bis zum letzten Augenblick : ein jüdisches Theaterleben. - Berlin : Hentrich & Hentrich, 2009. - 112 S. (Jüdische Miniaturen ; 74) ISBN 978-3-938485-85-9
Reviewed work: Shraibman, Yechiel: Sieben Jahre und sieben Monate : meine Bukarester Jahre ; Roman. - Berlin : be.bra, 2009. - 272 S. ISBN 978-3-937233-56-7
Reviewed work: Grossman, David: Eine Frau flieht vor einer Nachricht. - München : Hanser, 2009. - 728 S. ISBN 978-3-446-23397-3
Reviewed work: Schwartz, Yigal: Maamin beli Kenessija : 4 Massot al Aharon Appelfeld. - Tel Aviv : Dvir, 2009. - 181 S.
This thesis is concerned with the extinction of populations composed of different types of individuals, with their behavior before extinction, and with the case of a very late extinction. We approach this question first from a strictly probabilistic viewpoint, and second from the standpoint of risk analysis related to the extinction of a particular model of population dynamics; in this context we propose several statistical tools. The population size is modeled by a branching process, which is either a continuous-time multitype Bienaymé-Galton-Watson process (BGWc) or its continuous-state counterpart, the multitype Feller diffusion process. We are interested in different kinds of conditioning on non-extinction and in the associated equilibrium states. These ways of conditioning have been widely studied in the monotype case. However, the literature on multitype processes is much less extensive, and there is no systematic work establishing connections between the results for BGWc processes and those for Feller diffusion processes. In the first part of this thesis, we investigate the behavior of the population before its extinction by conditioning the associated branching process X_t on non-extinction (X_t≠0), or more generally on non-extinction in a near future (X_{t+θ}≠0, 0≤θ<∞), and by letting t tend to infinity. We prove the result, new in the multitype framework and for θ>0, that this limit exists and is non-degenerate. This reflects a stationary behavior of the dynamics of the population conditioned on non-extinction, and provides a generalization of the so-called Yaglom limit, corresponding to the case θ=0. In a second step we study the behavior of the population in case of a very late extinction, obtained as the limit when θ tends to infinity of the process conditioned on X_{t+θ}≠0. 
The resulting conditioned process is a known object in the monotype case (sometimes referred to as the Q-process), and has also been studied when X_t is a multitype Feller diffusion process. We investigate the not yet considered case where X_t is a multitype BGWc process and prove the existence of the associated Q-process. In addition, we examine its properties, including the asymptotic ones, and propose several interpretations of the process. Finally, we are interested in interchanging the limits in t and θ, as well as in the not yet studied commutativity of these limits with respect to the high-density-type relationship between BGWc processes and Feller processes. We prove an original and exhaustive set of results on all possible exchanges of limits (long-time limit in t, increasing delay of extinction θ, diffusion limit). The second part of this work is devoted to the risk analysis related both to the extinction of a population and to its very late extinction. We consider a branching population model (arising notably in the epidemiological context) for which a parameter related to the first moments of the offspring distribution is unknown. We build several estimators adapted to different stages of evolution of the population (growth phase, decay phase, and decay phase when extinction is expected very late), and moreover prove their asymptotic properties (consistency, normality). In particular, we build a least squares estimator adapted to the Q-process, allowing a prediction of the population development in the case of a very late extinction. This would correspond to the best- or to the worst-case scenario, depending on whether the population is threatened or invasive. These tools enable us to study the extinction phase of the Bovine Spongiform Encephalopathy epidemic in Great Britain, for which we estimate the infection parameter corresponding to a possible source of horizontal infection persisting after the removal in 1988 of the major route of infection (meat and bone meal).
This allows us to predict the evolution of the spread of the disease, including the year of extinction, the number of future cases and the number of infected animals. In particular, we produce a very fine analysis of the evolution of the epidemic in the unlikely event of a very late extinction.
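The long-time behavior described above can be probed numerically in the simplest monotype setting. The sketch below is an illustration only (the thesis treats multitype BGWc and Feller processes analytically): it simulates a subcritical Galton-Watson process and samples its state at time t conditioned on non-extinction, an empirical approximation of the Yaglom limit for θ=0.

```python
import random

def gw_generation(n, offspring_probs, rng):
    # one generation of a Galton-Watson process: each of the n current
    # individuals leaves k offspring with probability offspring_probs[k]
    if n == 0:
        return 0
    return sum(rng.choices(range(len(offspring_probs)),
                           weights=offspring_probs, k=n))

def yaglom_samples(offspring_probs, t, trials, seed=0):
    # empirical distribution of X_t conditioned on non-extinction (X_t != 0)
    rng = random.Random(seed)
    survivors = []
    for _ in range(trials):
        n = 1
        for _ in range(t):
            n = gw_generation(n, offspring_probs, rng)
            if n == 0:
                break
        if n > 0:
            survivors.append(n)
    return survivors

# subcritical offspring law with mean 0.75; conditioned on survival,
# the population size settles to a non-degenerate quasi-stationary law
samples = yaglom_samples([0.40, 0.45, 0.15], t=12, trials=20000)
```

Unconditionally the process dies out almost surely; the conditioning is what produces a non-degenerate limiting distribution.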
Between 2002 and 2006 the Colombian government of Álvaro Uribe enjoyed broad international support in conducting a demobilization process of right-wing paramilitary groups, along with the implementation of transitional justice policies such as penal prosecutions and the creation of a National Commission for Reparation and Reconciliation (NCRR) to address justice, truth and reparation for victims of paramilitary violence. The demobilization process began when in 2002 the United Self-Defense Forces of Colombia (Autodefensas Unidas de Colombia, AUC) agreed to participate in a government-sponsored demobilization process. Paramilitary groups were responsible for the vast majority of human rights violations over a period of more than 30 years. The government designed a special legal framework that envisaged great leniency for paramilitaries who committed serious crimes, and reparations for victims of paramilitary violence. More than 30,000 paramilitaries demobilized under this process between January 2003 and August 2006. Law 975, also known as the “Justice and Peace Law”, and Decree 128 have served as the legal framework for the demobilization and prosecution of paramilitaries. This framework has offered demobilized paramilitaries who committed crimes against humanity the prospect of reduced sentences in exchange for full confessions of crimes, restitution of illegally obtained assets, and the release of child soldiers and kidnapped victims; it has also provided reparations for victims of paramilitary violence. The Colombian demobilization process presents an atypical case of transitional justice. Many observers have even questioned whether Colombia can be considered a case of transitional justice at all. Transitional justice measures are usually taken up after the change of an authoritarian regime or at a post-conflict stage. The particularity of the Colombian case, however, is that transitional justice policies were introduced while the conflict still raged.
In this sense, the Colombian case exemplifies one of the key tensions to be addressed: that between offering perpetrators incentives to disarm and demobilize in order to prevent future crimes, and providing an adequate response to the human rights violations perpetrated throughout the course of an internal conflict. In particular, disarmament, demobilization and reintegration processes require a fine balance between the immunity guarantees offered to ex-combatants and the pursuit of accountability for their crimes. International law provides the legal framework defining the rights to justice, truth and reparations for victims and the corresponding obligations of the State, but peace negotiations and contested political structures do not always allow for the fulfillment of those rights. Thus, the aim of this article is to analyze what kind of transition may be occurring in Colombia by focusing on the role that transitional justice mechanisms may play in political negotiations between the Colombian government and paramilitary groups. In particular, it seeks to address to what extent such processes contribute to or hinder the achievement of a balance between peacebuilding and accountability, and thus facilitate a real transitional process.
Persistence of stock returns is an extensively studied and discussed theme in the analysis of financial markets. Antipersistence is usually attributed to volatilities. However, not only volatilities but also stock returns can exhibit antipersistence. Antipersistent noise has a somewhat rougher appearance than Gaussian noise. Heuristically speaking, price movements are more likely to be followed by movements in the opposite direction than in the same direction. The corresponding integrated process exhibits a smaller range – prices seem to stay in the vicinity of the initial value. We apply a widely used test based upon the modified R/S method of Lo [1991] to daily returns of 21 German stocks from 1960 to 2008. Combining this test with the concept of moving windows by Carbone et al. [2004], we are able to determine periods of antipersistence for some of the series under examination. Our results suggest that antipersistence can be found for stocks and periods where extraordinary corporate actions such as mergers & acquisitions or financial distress are present. These effects should be properly accounted for when choosing and designing models for inference.
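The rescaled-range idea behind the test can be sketched in a few lines of Python. This is the classical R/S statistic; Lo's [1991] modification replaces the plain standard deviation in the denominator with an autocovariance-adjusted estimator, and the moving-window variant of Carbone et al. [2004] applies the statistic to successive subseries.

```python
import math

def rescaled_range(x):
    # classical R/S statistic of a (return) series x
    n = len(x)
    mean = sum(x) / n
    dev = [xi - mean for xi in x]
    cum, z = 0.0, []
    for d in dev:              # cumulated deviations from the mean
        cum += d
        z.append(cum)
    r = max(z) - min(z)        # range of the cumulated series
    s = math.sqrt(sum(d * d for d in dev) / n)  # plain standard deviation
    return r / s

# an antipersistent (sign-alternating) series has a much smaller range of
# its cumulated sums than a persistent series of equal variance
anti = [1.0, -1.0] * 32
pers = [1.0] * 32 + [-1.0] * 32
```

The cumulated sums of `anti` oscillate between 0 and 1, while those of `pers` climb to 32, so the R/S values differ by a factor of 32 despite identical variance.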
The European Values Education (EVE) project is a large-scale, cross-national, and longitudinal survey research program on basic human values. The main topic of its first stage was "work" in Europe. Student teachers of several universities in Europe worked together in multicultural exchange groups. Their results are presented in this issue.
In recent years computer games have been discussed by a variety of disciplines from various perspectives. A fundamental difference from other media, and a point of continuous consideration, is the specific relationship between the viewer and the image, the player and the game apparatus, which is characteristic of video games as a dispositive. Terms such as immersion, participation, interactivity, or ergodicity are an indication of the deep interest in this constellation. This paper explores the resonance between body and image in video games like REZ, SOUL CALIBUR and DANCE DANCE REVOLUTION from the perspective of a temporal ontology of the image, taking particular account of the structuring power of the interface and its subject-positioning aspects.
Computer games may be defined as artifacts that connect the input devices of a computer (such as keyboard, mouse or controller) with its output devices (in most cases a screen and speakers) in such a way that a challenge is displayed on the screen. On the screen we see pictorial elements that have to be manipulated to master the game, that is, to win a competition, to solve a riddle or to acquire a skill. Therefore, the characteristics of the representational function of computer games have to be contrasted phenomenologically with conventional games on the one hand and cinematic depictions on the other. This comparison shows that computer games separate the player from the playing field, and translate bodily felt, concrete actions into situationally abstract cinematic depictions. These features add up to a situationally abstract presentation of the experience of one's own actions. In this framework computer games reveal a potential as a new means of shared cognition that might unfold in the 21st century and change the being-in-the-world in a similar way as cinematic depiction did in the 20th century.
With the rise of nanotechnology in the last decade, nanofluidics has been established as a research field and has gained increasing interest in science and industry. Natural aqueous nanofluidic systems are very complex: liquid interfaces often predominate, or the fluid contains charged or differently shaped colloids. The effects promoted by these additives are far from completely understood, and interesting questions arise with regard to the confinement of such complex fluidic systems. A systematic study of nanofluidic processes requires the design of suitable experimental model nano-channels with the required characteristics. The present work employed thin liquid films (TLFs) as experimental models. They have proven to be useful experimental tools because of their simple geometry, reproducible preparation, and controllable liquid interfaces. The thickness of the channels can easily be adjusted via the concentration of electrolyte in the film-forming solution. In this way, channel dimensions from 5 to 100 nm are possible, a high flexibility for an experimental system. TLFs have liquid interfaces of different charge and properties, and they offer the possibility to confine differently shaped ions and molecules to very small spaces, or to subject them to controlled forces. This makes foam films a unique “device” for obtaining information about fluidic systems of nanometer dimensions. The main goal of this thesis was to study nanofluidic processes using TLFs as models, or tools, to extract information about natural systems, and to deepen the understanding of the underlying physicochemical conditions. The presented work showed that foam films can be used as experimental models to understand the behavior of liquids in nano-sized confinement.
In the first part of the thesis, we studied the thinning of thin liquid films stabilized with the non-ionic surfactant n-dodecyl-β-maltoside (β-C₁₂G₂), with primary interest in interfacial diffusion processes during thinning as a function of surfactant concentration. The surfactant concentration in the film-forming solutions was varied at constant electrolyte (NaCl) concentration. The velocity of thinning was analyzed by combining previously developed theoretical approaches. Qualitative information about the mobility of the surfactant molecules at the film surfaces was obtained. We found that above a certain limiting surfactant concentration the film surfaces were completely immobile and behaved as non-deformable, which decelerated the thinning process. This follows the predictions for Reynolds flow of liquid between two non-deformable disks. In the second part of the thesis, we designed a TLF nanofluidic system containing rod-like multivalent ions and compared this system to films containing monovalent ions. We presented first results which recognized, for the first time, the existence of an additional attractive force in foam films based on the electrostatic interaction between rod-like ions and oppositely charged surfaces. We may speculate that this is an ion-bridging component of the disjoining pressure. The results show that for films prepared in the presence of spermidine the transformation of the thicker common film (CF) to the thinnest Newton black film (NBF) is more probable than for films prepared with NaCl under similar conditions of electrostatic interaction. This effect is not a result of specific adsorption of any of the ions at the fluid surfaces, and it does not lead to any changes in the equilibrium properties of the CF and NBF. Our hypothesis was confirmed using the trivalent ion Y3+, which does not show ion bridging.
The experimental results are compared to theoretical predictions, and quantitative agreement on the system’s energy gain for the change from CF to NBF could be obtained. In the third part of the work, the behavior of nanoparticles in confinement was investigated with respect to their impact on the fluid flow velocity. The particles altered the flow velocity to an unexpectedly high degree, so that the resulting changes in the dynamic viscosity could not be explained by a realistic change of the fluid viscosity. Only aggregation, flocculation and plug formation can explain the experimental results. The particle systems in the presented thesis had a great impact on the film interfaces due to the stabilizer molecules present in the bulk solution. Finally, the location of the particles with respect to their lateral and vertical arrangement in the film was studied with advanced reflectivity and scattering methods. Neutron reflectometry studies were performed to investigate the location of nanoparticles in the TLF perpendicular to the interface. For the first time, we studied TLFs using grazing-incidence small-angle X-ray scattering (GISAXS), a technique sensitive to the lateral arrangement of particles in confined volumes. This work provides preliminary data on a lateral ordering of particles in the film.
Fire-prone Mediterranean-type vegetation systems like those in the Mediterranean Basin and South-Western Australia are global hot spots for plant species diversity. To ensure that management programs act to maintain these highly diverse plant communities, it is necessary to gain a profound understanding of the crucial mechanisms of coexistence. Several such mechanisms are discussed in the current literature. The objective of my thesis is to systematically explore, by modelling, the importance of potential mechanisms for maintaining multi-species, fire-prone vegetation. The model I developed is spatially explicit, stochastic, rule- and individual-based. It is parameterised with data on population dynamics collected over 18 years in the Mediterranean-type shrublands of Eneabba, Western Australia. From the 156 woody species of the area, seven plant traits have been identified as relevant for this study: regeneration mode, annual maximum seed production, seed size, maximum crown diameter, drought tolerance, dispersal mode and seed bank type. Trait sets are used for the definition of plant functional types (PFTs). The PFT dynamics are simulated annually by iterating life-history processes. In the first part of my thesis I investigate the importance of trade-offs for the maintenance of high diversity in multi-species systems with 288 virtual PFTs. Simulation results show that the trade-off concept can be helpful to identify non-viable combinations of plant traits. However, the Shannon diversity index of modelled communities can be high despite the presence of ‘supertypes’. I conclude that trade-offs between two traits are less important for explaining multi-species coexistence and high diversity than predicted by more conceptual models. Several studies show that seed immigration from the regional seed pool is essential for maintaining local species diversity. However, systematic studies of the seed rain composition reaching multi-species communities are missing.
The results of the simulation experiments, as presented in part two of this thesis, show clearly that without seed immigration the local species community found in Eneabba drifts towards a state with few coexisting PFTs. With increasing immigration rates, the number of simulated coexisting PFTs and the Shannon diversity quickly approach values as observed in the field. Including regional seed input in the model makes it possible to explain more aggregated measures of the local plant community structure such as species richness and diversity. Hence, the seed rain composition should be implemented in future studies. In the third part of my thesis I test the sensitivity of Eneabba PFTs to four different climate change scenarios, considering their impact on both local and regional processes. The results show that climate change clearly has the potential to alter the number of dispersed seeds for most of the Eneabba PFTs and therefore the source of the ‘immigrants’ at the community level. A classification tree analysis shows that, in general, the response to climate change was PFT-specific. In the Eneabba sand plains, the sensitivity of a PFT to climate change depends on its specific trait combination and on the scenario of environmental change, i.e. the development of rainfall amounts and fire frequency. This result emphasizes that PFT-specific responses and the regional process of seed immigration should not be ignored in studies dealing with the impact of climate change on future species distributions. The results of the three chapters are finally analysed in a general discussion. The model is discussed, and improvements and suggestions are made for future research. My work leads to the following conclusions: i) It is necessary to support modelling with empirical work to explain coexistence in species-rich plant communities. ii) The chosen modelling approach allows the complexity of coexistence to be considered and improves the understanding of coexistence mechanisms.
iii) Assumptions based on field research, in terms of environmental conditions and plant life histories, can put the importance of more hypothetical coexistence theories in species-rich systems into perspective. In consequence, trade-offs can play a smaller role than predicted by conceptual models. iv) Seed immigration is a key process for local coexistence. Its alteration by climate change should be considered when forecasting coexistence. Field studies should be carried out to obtain data on seed rain composition.
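The role of seed immigration (conclusion iv) can be caricatured with a minimal neutral-drift model. This sketch is purely illustrative and is not the trait-based, spatially explicit Eneabba model of the thesis: it only shows that a closed local community drifts toward dominance by few types, while even a small immigration rate from a regional pool keeps the local Shannon diversity high.

```python
import math
import random

def shannon(counts):
    # Shannon diversity index of a type -> abundance table
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def drift(n_types=20, pop=300, years=3000, immigration=0.0, seed=1):
    # each year every individual is replaced either by an immigrant drawn
    # uniformly from the regional pool (with probability `immigration`) or
    # by a copy of a randomly chosen local resident (neutral drift)
    rng = random.Random(seed)
    community = [rng.randrange(n_types) for _ in range(pop)]
    for _ in range(years):
        community = [rng.randrange(n_types) if rng.random() < immigration
                     else rng.choice(community) for _ in range(pop)]
    counts = {}
    for t in community:
        counts[t] = counts.get(t, 0) + 1
    return counts

# closed community vs. community receiving regional immigrants
h_closed = shannon(drift(immigration=0.0))
h_open = shannon(drift(immigration=0.05))
```

Without immigration the community fixes on one or very few types (Shannon index near zero); with immigration the index stays close to that of the regional pool.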
The widespread usage of products containing volatile organic compounds (VOC) has led to general human exposure to these chemicals in workplaces and homes, which is suspected to contribute to the growing incidence of environmental diseases. Since the causal molecular mechanisms for the development of these disorders are not completely understood, the overall objective of this thesis was to investigate VOC-mediated molecular effects on human lung cells in vitro at VOC concentrations comparable to exposure scenarios below current occupational limits. Although differential expression of single proteins in response to VOCs has been reported, effects on complex protein networks (the proteome) have not been investigated. However, this information is indispensable when trying to ascertain a mechanism of VOC action on the cellular level and to establish preventive strategies. For this study, the alveolar epithelial cell line A549 was used. This cell line, cultured in a two-phase (air/liquid) model, allows the most direct exposure and had been successfully applied for the analysis of inflammatory effects in response to VOCs. Mass spectrometric identification of 266 protein spots provided the first proteomic map of the A549 cell line at this level of detail, which may foster future work with this frequently used cellular model. The distribution of three typical air contaminants, monochlorobenzene (CB), styrene and 1,2-dichlorobenzene (1,2-DCB), between the gas and liquid phases of the exposure model was analyzed by gas chromatography. The obtained VOC partitioning was in agreement with available literature data. Subsequently, the adapted in vitro system was successfully employed to characterize the effects of the aromatic compound styrene on the proteome of A549 cells (Chapter 4). Initially, the cell toxicity was assessed in order to ensure that most of the concentrations used in the subsequent proteomic approach were not cytotoxic.
Significant changes in abundance and phosphorylation in the total soluble protein fraction of A549 cells were detected following styrene exposure. All proteins were identified using mass spectrometry, and their main cellular functions were assigned. Validation experiments at the protein and transcript levels confirmed the results of the 2-DE experiments. From the results, two main cellular pathways were identified as induced by styrene: the cellular oxidative stress response combined with moderate pro-apoptotic signaling. Measurement of cellular reactive oxygen species (ROS) as well as the styrene-mediated induction of oxidative stress marker proteins confirmed the hypothesis of oxidative stress as the main molecular response mechanism. Finally, adducts of cellular proteins with the reactive styrene metabolite styrene 7,8-oxide (SO) were identified. In particular, the SO-adducts observed at both reactive centers of thioredoxin reductase 1, a key element in the control of the cellular redox state, may be involved in styrene-induced ROS formation and apoptosis. A similar proteomic approach was carried out with the halobenzenes CB and 1,2-DCB (Chapter 5). In accordance with previous findings, cell toxicity assessment showed enhanced toxicity compared to that caused by styrene. Significant changes in abundance and phosphorylation of total soluble proteins of A549 cells were detected following exposure to subtoxic concentrations of CB and 1,2-DCB. All proteins were identified using mass spectrometry, and their main cellular functions were assigned. As in the styrene experiment, the results indicated two main pathways to be affected in the presence of chlorinated benzenes: cell death signaling and the oxidative stress response. The strong induction of pro-apoptotic signaling was confirmed for both treatments by detection of the cleavage of caspase 3.
Likewise, the induction of redox-sensitive protein species could be correlated with an increased cellular level of ROS observed following CB treatment. Finally, common mechanisms in the cellular response to aromatic VOCs were investigated (Chapter 6). A similar proportion (4.6-6.9%) of all quantified protein spots showed differential expression (p<0.05) following cell exposure to styrene, CB or 1,2-DCB. However, no more than three protein spots showed significant regulation in the same direction for all three volatile compounds: voltage-dependent anion-selective channel protein 2, peroxiredoxin 1 and elongation factor 2. Notably, all of these proteins are important molecular targets in stress- and cell death-related signaling pathways.
Pattern matching is a well-established concept in the functional programming community. It provides the means for concisely identifying and destructuring values of interest. This enables a clean separation of data structures and the respective functionality, as well as dispatching functionality based on more than a single value. Unfortunately, expressive pattern matching facilities are seldom incorporated in present object-oriented programming languages. We present a seamless integration of pattern matching facilities in an object-oriented and dynamically typed programming language: Newspeak. We describe language extensions to improve the practicability and integrate our additions with the existing programming environment for Newspeak. This report is based on the first author’s master’s thesis.
The first part of the report of the GTZ expert group gives an overview of the basics of integration and tax harmonisation within a common market. Chapter II concentrates on the problems of national and international tax law regarding double taxation, before the harmonisation process within the EU is described in detail. This process is not a best-practice example, but the experience gained over the last five decades is instructive and may provide important information for regions which have more or less recently started a similar endeavour. The harmonisation needs are discussed for value added taxation (VAT), excise taxation, and income taxation. The problems of tax administration, procedural law, taxpayers’ rights and obligations, as well as tax compliance are also taken into consideration. The second part of the study reviews the national tax systems of the EAC member countries. Before the individual taxes are described in more detail, the macroeconomic situation is illuminated by some basic figures and the current state of intra-community integration is analysed. Then the individual tax bases and tax rates are compared to shed some light on what is necessary for the development of a common market in the near future. Again the value added tax laws, excise taxes and income taxes are discussed in detail, with the focus regarding the latter on company taxation. For a more systematic analysis the national tax laws are juxtaposed in an overview. The chapter closes with a summary of the applied tax rates and a rough estimation of the tax burdens within the Partner States. The third part of this report contains the policy recommendations of the expert group, following the same structure as the preceding chapters and presenting the results for the VAT, the excises and the corporate income tax (CIT).
Additionally, the requirements for tax procedures and administration as well as problems of transparency and information exchange are discussed in detail, before the strategic recommendations are derived in close relation to the experience gained in the EU harmonisation process. The recommendations are based on the following normative arguments: (1) Tax harmonisation is a basic requirement for economic integration. (2) Equality of taxation is an imperative of tax justice and demands the avoidance of double taxation as well as the combating of tax evasion and corruption. (3) Harmful tax competition between the Partner States should be avoided. (4) Taxpayers’ rights in tax procedures should be strengthened. Hence, all kinds of income, goods and services should be taxed once and only once.
Between history and legend
(2010)
In the early modern period, Jewish historiography moved from the Hebrew domain into the Yiddish one. Jewish writers succeeded in matching historical literature to the particular needs of their audience. The most popular Yiddish chronicle of this kind was written in Amsterdam in the 18th century by Menachem Man Amelander, drawing on both Jewish and Christian genre traditions. This paper briefly surveys the genre characteristics of this chronicle and the way it served the purpose of guarding Jewish memory and tradition.
Taking as examples the Yiddish women’s magazines “Yidishe Froyenvelt” (1902-1903), “Di Froy” (Vilnius 1925-1933), “Froyen-Shtim” (Warsaw 1925) and “Di Froyen-Velt” (New York 1913), this article presents:
- how feminist postulates are connected with questions of Jewish identity in a religious and political context
- how the model image of a modern Jewish woman is presented
- the main spheres of feminist interest presented in the magazines (the struggle for equal rights within the Jewish community as well as in other social spheres, the search for and presentation of outstanding women in Jewish and world history, descriptions of women’s professional activities, psychological analysis of a woman’s nature, and the establishment of ties and a feeling of solidarity with the women’s movements of other nations)
- how the traditional roles of women (mother, wife, housewife) are presented
- the degree of women’s participation in the editing of these periodicals (a list of the female authors of articles and literary works appearing in the periodicals’ columns)
- whether and how a feminist discourse affects the language structure of the periodicals
Comparing magazines from the beginning of the 20th century with those of the late 1920s, the article answers the question in which direction Jewish feminism evolved and which topics rose or fell in importance.
CHAMP (CHAllenging Minisatellite Payload) is a German small-satellite mission to study the Earth's gravity field, magnetic field and upper atmosphere. Thanks to the satellite's good condition so far, the planned 5-year mission has been extended to 2009. The satellite continuously provides a large quantity of measurement data for the study of the Earth. The magnetic field measurements are made by two fluxgate magnetometers (FGM, vector magnetometers) and one Overhauser magnetometer (OVM, scalar magnetometer) flown on CHAMP. In order to ensure the quality of the data during the whole mission, the calibration of the magnetometers has to be performed routinely in orbit. The scalar magnetometer serves as the magnetic reference, and its readings are compared with the readings of the vector magnetometer. The readings of the vector magnetometer are corrected with the parameters derived from this comparison, which is called the scalar calibration. In the routine processing, these calibration parameters are updated every 15 days by means of scalar calibration. There are also magnetic effects originating from the satellite itself which disturb the measurements. Most of them were characterized during tests before launch. Among them are the remanent magnetization of the spacecraft and fields generated by currents. They are all considered to be constant over the mission life. The 8 years of operational experience allow us to investigate the long-term behavior of the magnetometers and the satellite systems. This investigation showed, for example, that the scale factors of the FGM exhibit clear long-term changes which can be described by logarithmic functions. The other parameters (offsets and angles between the three components) can be considered constant. When these continuous parameters are applied in the FGM data processing, the disagreement between the OVM and the FGM readings is limited to ±1 nT over the whole mission.
This demonstrates that the magnetometers on CHAMP exhibit very good stability. However, daily correction of the FGM Z-component offset improves the agreement between the magnetometers markedly. The Z-component offset plays a very important role for the data quality. It exhibits a linear relationship with the standard deviation of the disagreement between the OVM and FGM readings. After Z-offset correction, the errors are limited to ±0.5 nT (equivalent to a standard deviation of 0.2 nT). We improved the corrections of the spacecraft field which are not taken into account in the routine processing. Such disturbance fields, e.g. from the power supply system of the satellite, introduce systematic errors into the FGM data and are misinterpreted in the 9-parameter calibration, which produces spurious local-time-related variations of the calibration parameters. These corrections are made by applying a mathematical model to the measured currents. This non-linear model is derived with an inversion technique. When the disturbance fields of the satellite body are fully corrected, the standard deviation of the scalar error ΔB remains at about 0.1 nT. Additionally, in order to keep the OVM readings a reliable standard, the imperfect coefficients of the torquer current correction for the OVM were redetermined by solving a minimization problem. The temporal variation of the spacecraft remanent field was investigated. It was found that the average magnetic moment of the magneto-torquers reflects the moment of the satellite well. This allows for a continuous correction of the spacecraft field. The reasons for possible remaining systematic errors are discussed in this thesis. In particular, both temperature uncertainties and timing errors influence the FGM data. Based on the results of this thesis, the data processing of future magnetic field missions can be designed in an improved way.
In particular, the upcoming ESA mission Swarm can take advantage of our findings and provide all the auxiliary measurements needed for a proper recovery of the ambient magnetic field.
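The scalar calibration described above can be sketched numerically: the magnitude of the corrected vector (FGM) reading is compared with the scalar (OVM) reference, and a parameter set is judged by the residual. The sketch below is a deliberately simplified stand-in for the mission's 9-parameter procedure (diagonal scale factors and offsets only, no non-orthogonality angles; all values invented):

```python
import numpy as np

rng = np.random.default_rng(0)

def correct(raw, scales, offsets):
    # simplified correction: B = S @ (raw - offset) with diagonal S only
    return scales * (raw - offsets)

def scalar_residual(raw, b_scalar, scales, offsets):
    """Scalar-calibration residual: |corrected FGM vector| - OVM reading (nT)."""
    return np.linalg.norm(correct(raw, scales, offsets), axis=1) - b_scalar

# synthetic truth: an ambient field of 45000 nT in varying directions
n = 500
dirs = rng.normal(size=(n, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
b_true = 45000.0 * dirs
true_scales = np.array([1.001, 0.999, 1.002])   # invented scale factors
true_offsets = np.array([3.0, -2.0, 5.0])       # invented offsets, nT
raw = b_true / true_scales + true_offsets       # what the FGM would output

res = scalar_residual(raw, 45000.0, true_scales, true_offsets)
print(np.max(np.abs(res)))   # ~0: these parameters fully explain the readings
```

In the real processing the residual is minimized over all nine parameters every 15 days; here the point is only that a correct parameter set drives the scalar residual to zero.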
Background: Local adaptation to divergent environmental conditions can promote population genetic differentiation even in the absence of geographic barriers and hence lead to speciation. Perturbations by catastrophic events, however, can distort such parapatric ecological speciation processes. Here, we asked whether an exceptionally strong flood led to homogenization of gene pools among locally adapted populations of the Atlantic molly (Poecilia mexicana, Poeciliidae) in the Cueva del Azufre system in southern Mexico, where two strong environmental selection factors (darkness within caves and/or presence of toxic H2S in sulfidic springs) drive the diversification of P. mexicana. Nine nuclear microsatellites as well as heritable female life-history traits (both as a proxy for quantitative genetics and for trait divergence) were used as markers to compare genetic differentiation, genetic diversity, and especially population mixing (immigration and emigration) before and after the flood. Results: Habitat type (i.e., non-sulfidic surface, sulfidic surface, or sulfidic cave), but not geographic distance, was the major predictor of genetic differentiation. Before and after the flood, each habitat type harbored a genetically distinct population. Only a weak signal of individual dislocation among ecologically divergent habitat types was uncovered (with the exception of slightly increased dislocation from the Cueva del Azufre into the sulfidic creek, El Azufre). By contrast, several lines of evidence are indicative of increased flood-induced dislocation within the same habitat type, e.g., between different cave chambers of the Cueva del Azufre. Conclusions: The virtual absence of individual dislocation among ecologically different habitat types indicates strong natural selection against migrants.
Thus, our current study exemplifies that ecological speciation in this and other systems, in which extreme environmental factors drive speciation, may be little affected by temporary perturbations, as adaptations to physico-chemical stressors may directly affect the survival probability in divergent habitat types.
The primary auditory cortex (AI) of adult Pteronotus parnellii features a foveal representation of the second-harmonic constant-frequency (CF2) echolocation call component. In the corresponding Doppler-shifted constant frequency (DSCF) area, the 61 kHz range is over-represented for the extraction of frequency-shift information in CF2 echoes. To assess to which degree AI postnatal maturation depends on active echolocation and/or reflects ongoing cochlear maturation, cortical neurons were recorded in juveniles up to postnatal day P29, before the bats are capable of active foraging. At P1-2, neurons in posterior AI are tuned sensitively to low frequencies (22-45 dB SPL, 28-35 kHz). Within the prospective DSCF area, neurons had insensitive responses (>60 dB SPL) to frequencies <40 kHz and lacked sensitive tuning-curve tips. Up to P10, when bats do not yet actively echolocate, tonotopy is further developed and DSCF neurons respond to frequencies of 51-57 kHz with a maximum tuning sharpness (Q10dB) of 57. Between P11 and P20, the frequency representation in AI includes higher frequencies anterior and dorsal to the DSCF area. More multipeaked neurons (33%) are found than at older ages. In the oldest group, DSCF neurons are tuned to frequencies close to 61 kHz with Q10dB values ≤ 212, and threshold sensitivity, tuning sharpness and cortical latencies are adult-like. The data show that basic aspects of cortical tonotopy are established before the bats actively echolocate. Maturation of tonotopy, the increase in tuning sharpness, and the upward shift in the characteristic frequency of DSCF neurons appear to strongly reflect cochlear maturation.
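The tuning-sharpness measure quoted in the abstract, Q10dB, is simply the characteristic frequency divided by the tuning-curve bandwidth measured 10 dB above minimum threshold. A minimal illustration (the 0.3 kHz bandwidth is an invented example value, not a measurement from the study):

```python
def q10db(cf_khz, bw10_khz):
    """Tuning sharpness: characteristic frequency / bandwidth 10 dB above threshold."""
    return cf_khz / bw10_khz

# a DSCF neuron tuned to 61 kHz with a hypothetical 0.3 kHz bandwidth:
print(q10db(61.0, 0.3))   # ~203, within the reported adult-like range (<= 212)
```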
Think local sell global
(2010)
Reading song lyrics
(2010)
The Culture of Lyrics
(2010)
Erll, A., Mediation, Remediation and the Dynamics of Cultural Memory; Berlin, De Gruyter, 2009
(2010)
In this paper we introduce and study some new cooperation protocols for cooperating distributed (CD) grammar systems. These derivation modes depend on the number of different nonterminals present in the sentential form obtained when a component finishes a derivation phase. This measure describes the competence of the grammar on the string (the competence is high if the number of different nonterminals is small). It is also a measure of the efficiency of the grammar on the given string (a component is more efficient than another one if it is able to decrease the number of nonterminals in the string to a greater extent). We prove that if the underlying derivation mode is the t-mode derivation, then some variants of these systems determine the class of random-context ET0L languages. If these CD grammar systems use k-step limited derivations as the underlying derivation mode, then they are able to generate any recursively enumerable language.
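The competence measure described above, the number of distinct nonterminals in the current sentential form, can be sketched in a few lines (hypothetical helper, not code from the paper; uppercase symbols stand for nonterminals):

```python
def competence(sentential_form, nonterminals):
    """Number of distinct nonterminals present in the sentential form.
    Fewer distinct nonterminals = higher competence of the component."""
    return len({symbol for symbol in sentential_form if symbol in nonterminals})

# "aAbBaA" contains the nonterminals A and B, so the measure is 2;
# a component that rewrites it down to fewer distinct nonterminals
# counts as more efficient under the protocols studied in the paper.
print(competence("aAbBaA", {"A", "B", "C"}))
```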
The centromeric histone H3 variant (CenH3) serves to target the kinetochore to the centromeres and thus ensures correct chromosome segregation during mitosis and meiosis. The Dictyostelium H3-like variant H3v1 was identified as the CenH3 ortholog. Dictyostelium CenH3 has an extended N-terminal domain with no similarity to any other known proteins and a histone fold domain at its C-terminus. Within the histone fold, alpha-helix 2 (alpha 2) and an extended loop 1 (L1) have been shown to be required for targeting CenH3 to centromeres. Compared to other known and putative CenH3 histones, Dictyostelium CenH3 has a shorter L1, suggesting that the extension is not an obligatory feature. Through ChIP analysis and fluorescence microscopy of live and fixed cells, we provide here the first survey of centromere structure in amoebozoa. The six telocentric centromeres were found to consist mostly of DIRS-1 elements and to associate with H3K9me3. During interphase, the centromeres remain attached to the centrosome, forming a single CenH3-containing cluster. Loading of Dictyostelium CenH3 onto centromeres occurs at the G2/prophase transition, in contrast to the anaphase/telophase loading of CenH3 observed in metazoans. This suggests that loading during G2/prophase is the ancestral eukaryotic mechanism and that anaphase/telophase loading of CenH3 evolved more recently, after the amoebozoa diverged from the animal lineage.
The launch-site effect, a systematic variation of within-word landing position as a function of launch-site distance, is among the most important oculomotor phenomena in reading. Here we show that the launch-site effect is strongly modulated in word skipping, a finding which is inconsistent with the view that the launch-site effect is caused by a saccadic-range error. We observe that distributions of landing positions in skipping saccades show an increased leftward shift compared to non-skipping saccades at equal launch-site distances. Using an improved algorithm for the estimation of mislocated fixations, we demonstrate the reliability of our results.
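The launch-site effect is typically quantified as a roughly linear shift of the mean within-word landing position with launch-site distance. The sketch below fits such a slope to synthetic data; the slope of -0.5 characters per character and all other values are assumed for illustration, not results from this study:

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic launch distances (characters left of the word) and landing
# positions generated with an assumed launch-site slope of -0.5 char/char
launch = rng.uniform(1.0, 10.0, 1000)
landing = 3.0 - 0.5 * (launch - 5.0) + rng.normal(0.0, 1.0, 1000)

# recover the launch-site slope with an ordinary linear fit
slope, intercept = np.polyfit(launch, landing, 1)
print(round(slope, 1))   # ~ -0.5: landing shifts left as launch distance grows
```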
In 1915, Alfred Wegener published his hypothesis of continental drift, which revolutionised the world for geologists. Since then, many scientists have studied the evolution of continents and especially the geological structure of orogens: the most visible consequence of tectonic processes. Although the morphology and landscape evolution of mountain belts can be observed through surface processes, the driving forces and dynamics at the lithospheric scale are less well understood, despite the fact that rocks from deeper levels of orogenic belts are in places exposed at the surface. In this thesis, such formerly deeply buried (ultra-) high-pressure rocks, in particular eclogite-facies series, have been studied in order to reveal details about their formation and exhumation conditions and rates and thus provide insights into the geodynamics of the most spectacular orogenic belt in the world: the Himalaya. The specific area investigated was the Kaghan Valley in Pakistan (NW Himalaya). Following closure of the Tethyan Ocean by ca. 55-50 Ma, the northward subduction of the leading edge of India beneath the Eurasian Plate and the subsequent collision initiated a long-lived process of intracrustal thrusting that continues today. The continental crust of India – granitic basement, Paleozoic and Mesozoic cover series and Permo-Triassic dykes, sills and lavas – has been buried partly to mantle depths. Today, these rocks crop out as eclogites, amphibolites and gneisses within the Higher Himalayan Crystalline, between low-grade metamorphosed rocks (600-640°C/ca. 5 kbar) of the Lesser Himalaya and Tethyan sediments. Besides tectonically driven exhumation mechanisms, the channel-flow model has been postulated, which describes a denudation-focused ductile extrusion of low-viscosity material developed in the middle to lower crust beneath the Tibetan Plateau.
To gain insights into the lithospheric and crustal processes that initiated and drove the exhumation of these (ultra-) high-pressure rocks, mineralogical, petrological and isotope-geochemical investigations have been performed. They provide insights into 1) the depths and temperatures to which these rocks were buried, 2) the pressures and temperatures the rocks experienced during their exhumation, 3) the timing of these processes, and 4) the velocity with which these rocks were brought back to the surface. In detail, microscopic studies, the identification of key minerals, microprobe analyses, standard geothermobarometry and modelling using an effective bulk-rock composition have shown that published exhumation paths are incomplete. In particular, the eclogites of the northern Kaghan Valley were buried to depths of 140-100 km (36-30 kbar) at 790-640°C. Subsequently, cooling during decompression (exhumation) towards 40-35 km (17-10 kbar) and 630-580°C was followed by a phase of reheating to about 720-650°C at roughly the same depth, before final exhumation took place. In the southernmost part of the study area, amphibolite-facies assemblages with formation conditions similar to the deduced reheating phase indicate a juxtaposition of both areas after the eclogite-facies stage and thus a stacking of Indian Plate units. Radiometric dating of zircon, titanite and rutile by U-Pb, and of amphibole and micas by Ar-Ar, reveals peak pressure conditions at 47-48 Ma. With a maximum exhumation rate of 14 cm/a, these rocks reached the crust-mantle boundary at 40-35 km within 1 Ma. Subsequent exhumation (46-41 Ma, 40-35 km) decelerated to ca. 1 mm/a at the base of the continental crust but rose again to about 2 mm/a in the period of 41-31 Ma, equivalent to 35-20 km depth.
Apatite fission track (AFT) and (U-Th)/He ages from eclogites, amphibolites, micaschists and gneisses yielded moderate Oligocene to Miocene cooling rates of about 10°C/Ma in the high-altitude northern parts of the Kaghan Valley using the mineral-pair method. AFT ages range from 24.5±3.8 to 15.6±2.1 Ma, whereas apatite (U-Th)/He analyses yielded ages between 21.0±0.6 and 5.3±0.2 Ma. The southernmost part of the valley is dominated by younger, late Miocene to Pliocene apatite fission track ages of 7.6±2.1 and 4.0±0.5 Ma that support earlier tectonic and petrological findings of a juxtaposition and stacking of Indian Plate units. As this nappe is tectonically lowermost, a later distinct exhumation and uplift driven by thrusting along the Main Boundary Thrust is inferred. A multi-stage exhumation path is evident from the petrological, isotope-geochemical and low-temperature thermochronological investigations. Buoyancy-driven exhumation caused an initial rapid ascent, as fast as recent normal plate movements (ca. 10 cm/a). As the exhuming units reached the crust-mantle boundary, the process slowed down due to changes in buoyancy. Most likely, this exhumation pause initiated the reheating event that is petrologically evident (e.g. glaucophane rimmed by hornblende, ilmenite overgrowth of rutile). Late-stage processes involved widespread thrusting and folding with accompanying regional greenschist-facies metamorphism, whereby contemporaneous thrusting on the Batal Thrust (seen by some authors as equivalent to the MCT) and back-sliding of the Kohistan Arc along the inversely reactivated Main Mantle Thrust caused the final exposure of these rocks. Similar circumstances have been observed at Tso Morari, Ladakh, India, 200 km further east, where comparable rock assemblages occur. In conclusion, as exhumation was essentially complete well before the initiation of the monsoonal system, climate-dependent effects (erosion) appear negligible in comparison to far-field tectonic effects.
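The exhumation rates quoted in the abstract follow from simple depth-over-time arithmetic. A minimal sketch, with the depth intervals taken from the abstract (the 140 km and 40 km endpoints are the quoted bounds, so the computed averages only bracket the stated 14 cm/a maximum):

```python
def exhumation_rate_cm_per_a(depth_from_km, depth_to_km, duration_ma):
    """Average exhumation rate (cm/a) for a depth change over a time span.
    1 km = 1e5 cm, 1 Ma = 1e6 a."""
    return (depth_from_km - depth_to_km) * 1e5 / (duration_ma * 1e6)

# initial ascent: ~140 km to the crust-mantle boundary (~40 km) within ~1 Ma
print(exhumation_rate_cm_per_a(140, 40, 1.0))   # 10.0 cm/a, same order as the quoted 14 cm/a maximum
# slow stage at the base of the crust, 46-41 Ma at 40-35 km
print(exhumation_rate_cm_per_a(40, 35, 5.0))    # 0.1 cm/a, i.e. ~1 mm/a
```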
Background: Cysteine is a component of organic compounds, including glutathione, that have been implicated in the adaptation of plants to stresses. O-acetylserine (thiol) lyase (OAS-TL) catalyses the final step of cysteine biosynthesis. OAS-TL enzyme isoforms are localised in the cytoplasm, the plastids and the mitochondria, but the contribution of individual OAS-TL isoforms to plant sulphur metabolism has not yet been fully clarified.
Results: The seedling-lethal phenotype of the Arabidopsis onset of leaf death3-1 (old3-1) mutant is due to a point mutation in the OAS-A1 gene, encoding the cytosolic OAS-TL. The mutation causes a single amino acid substitution from Gly(162) to Glu(162), abolishing old3-1 OAS-TL activity in vitro. The old3-1 mutation segregates as a monogenic semi-dominant trait when backcrossed to its wild-type accession Landsberg erecta (Ler-0) and the Di-2 accession. Consistent with its semi-dominant behaviour, wild-type Ler-0 plants transformed with the mutated old3-1 gene displayed the early leaf death phenotype. However, the old3-1 mutation segregates in an 11:4:1 (wild type : semi-dominant : mutant) ratio when backcrossed to the Columbia-0 and Wassilewskija accessions. Thus, the early leaf death phenotype depends on two semi-dominant loci. The second locus that determines the old3-1 early leaf death phenotype is referred to as odd-ler (for old3 determinant in the Ler accession) and is located on chromosome 3. The early leaf death phenotype is temperature dependent and is associated with increased expression of defence-response and oxidative-stress marker genes. Independent of the presence of the odd-ler gene, OAS-A1 is involved in maintaining sulphur and thiol levels and is required for resistance against cadmium stress.
Conclusions: The cytosolic OAS-TL is involved in maintaining organic sulphur levels. The old3-1 mutation causes genome-dependent and independent phenotypes and uncovers a novel function for the mutated OAS-TL in cell death regulation.
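The reported 11:4:1 segregation ratio is consistent with a simple two-locus F2 enumeration. The assignment below (mutant phenotype requires both loci homozygous, the intermediate phenotype one homozygous plus one heterozygous locus) is one hypothetical model that reproduces the ratio, not the genetic model established in the study:

```python
from collections import Counter
from itertools import product

def phenotype(n_old3, n_odd):
    """Hypothetical genotype-to-phenotype map; arguments count mutant alleles."""
    if n_old3 == 2 and n_odd == 2:
        return "mutant"
    if (n_old3 == 2 and n_odd == 1) or (n_old3 == 1 and n_odd == 2):
        return "semi-dominant"
    return "wild type"

# F2 segregation: each locus carries 0/1/2 mutant alleles with weights 1:2:1
counts = Counter()
for (a, wa), (b, wb) in product([(0, 1), (1, 2), (2, 1)], repeat=2):
    counts[phenotype(a, b)] += wa * wb
print(dict(counts))   # {'wild type': 11, 'semi-dominant': 4, 'mutant': 1}
```

Out of the 16 equally weighted F2 genotype classes, exactly 1 is doubly homozygous and 4 combine one homozygous with one heterozygous locus, recovering 11:4:1.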
Live cell flattening
(2010)
Eukaryotic cell flattening is valuable for improving microscopic observations, ranging from bright field (BF) to total internal reflection fluorescence (TIRF) microscopy. Fundamental processes, such as mitosis and in vivo actin polymerization, have been investigated using these techniques. Here, we review the well known agar overlayer protocol and the oil overlay method. In addition, we present more elaborate microfluidics-based techniques that provide us with a greater level of control. We demonstrate these techniques on the social amoebae Dictyostelium discoideum, comparing the advantages and disadvantages of each method.
Background: For heterogeneous tissues, such as blood, measurements of gene expression are confounded by the relative proportions of the cell types involved. Conclusions have to rely on estimates of gene expression signals for homogeneous cell populations, e.g. by applying micro-dissection, fluorescence-activated cell sorting, or in-silico deconfounding. We studied the feasibility and validity of a non-negative matrix decomposition algorithm using experimental gene expression data for blood and sorted cells from the same donor samples. Our objective was to optimize the algorithm with regard to the detection of differentially expressed genes and to enable its use for classification in the difficult scenario of reversely regulated genes. This would be of importance for the identification of candidate biomarkers in heterogeneous tissues.
Results: Experimental data and simulation studies involving noise parameters estimated from these data revealed that for valid detection of differential gene expression, quantile normalization and use of non-log data are optimal. We demonstrate the feasibility of predicting proportions of constituting cell types from gene expression data of single samples, as a prerequisite for a deconfounding-based classification approach. Classification cross-validation errors with and without using deconfounding results are reported as well as sample-size dependencies. Implementation of the algorithm, simulation and analysis scripts are available.
Conclusions: The deconfounding algorithm without decorrelation using quantile normalization on non-log data is proposed for biomarkers that are difficult to detect, and for cases where confounding by varying proportions of cell types is the suspected reason. In this case, a deconfounding ranking approach can be used as a powerful alternative to, or complement of, other statistical learning approaches to define candidate biomarkers for molecular diagnosis and prediction in biomedicine, in realistically noisy conditions and with moderate sample sizes.
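The deconfounding approach rests on decomposing the observed expression matrix into non-negative cell-type signatures and mixing proportions. The sketch below uses plain Lee-Seung multiplicative updates on synthetic data as an illustrative stand-in for the paper's algorithm; all names, dimensions and values are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

def nmf(X, k, iters=500):
    """Multiplicative-update NMF: X (genes x samples) ~ W @ H, where the
    columns of W play the role of cell-type expression signatures and the
    columns of H the role of per-sample cell-type proportions."""
    g, s = X.shape
    W = rng.random((g, k)) + 1e-3
    H = rng.random((k, s)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-12)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# synthetic blood-like mixture of two "cell types" across 8 samples
W_true = rng.random((50, 2))
H_true = rng.dirichlet([1.0, 1.0], size=8).T   # proportion columns sum to 1
X = W_true @ H_true

W, H = nmf(X, 2)
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(rel_err)   # small: the mixture is recovered up to scaling/permutation
```

Note the usual NMF caveat: W and H are only identifiable up to permutation and rescaling of the components, which is why the paper's pipeline adds normalization and ranking steps on top of the raw decomposition.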
Multi-color fluorescence imaging experiments of wave-forming Dictyostelium cells have revealed that actin waves separate two domains of the cell cortex that differ in their actin structure and phosphoinositide composition. We propose a bistable model of actin dynamics to account for these experimental observations. The model is based on the simplifying assumption that the actin cytoskeleton is composed of two distinct network types, a dendritic and a bundled network. The two structurally different states observed in the experiments correspond to the stable fixed points in the bistable regime of this model, each dominated by one of the two network types. The experimentally observed actin waves can be considered as trigger waves that propagate transitions between the two stable fixed points.
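A minimal way to see how bistable kinetics produce such trigger waves is a generic 1-D Nagumo-type reaction-diffusion sketch: u = 0 and u = 1 stand for the two stable cortical states, and a front converts one into the other. All parameters are invented for illustration; this is not the authors' model:

```python
import numpy as np

a, D, dx, dt = 0.3, 1.0, 0.5, 0.05      # bistable threshold, diffusion, grid
x = np.arange(0.0, 100.0, dx)
u = 1.0 / (1.0 + np.exp(x - 20.0))      # smooth front near x = 20

def step(u):
    """One explicit Euler step of u_t = u(1-u)(u-a) + D u_xx, no-flux ends."""
    up = np.pad(u, 1, mode="edge")
    lap = (up[2:] - 2.0 * u + up[:-2]) / dx**2
    return u + dt * (u * (1.0 - u) * (u - a) + D * lap)

front0 = x[np.argmin(np.abs(u - 0.5))]
for _ in range(2000):
    u = step(u)
front1 = x[np.argmin(np.abs(u - 0.5))]
print(front0, front1)   # the front has advanced: the u = 1 state invades
```

Because a < 1/2, the u = 1 fixed point is the "dominant" state and the front travels into the u = 0 region at a constant speed: a trigger wave propagating the transition between the two stable fixed points, as in the cortical model above.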