Refine
Has Fulltext
- yes (156)
Year of publication
- 2010 (156)
Document Type
- Doctoral Thesis (47)
- Conference Proceeding (24)
- Monograph/Edited Volume (23)
- Postprint (22)
- Article (21)
- Preprint (7)
- Review (6)
- Other (2)
- Working Paper (2)
- Habilitation Thesis (1)
- Lecture (1)
Language
- English (156)
Keywords
- middleware (4)
- Array seismology (2)
- Aspect-oriented software development (2)
- Operating systems (2)
- Constraint Solving (2)
- Cosmogenic nuclides (2)
- Deduction (2)
- Earthquake (2)
- Earthquake catalog (2)
- Earthquake swarm 2008/09 (2)
Institute
- Extern (22)
- Hasso-Plattner-Institut für Digital Engineering gGmbH (17)
- Institut für Geowissenschaften (16)
- Institut für Jüdische Studien und Religionswissenschaft (16)
- Vereinigung für Jüdische Studien e. V. (15)
- Institut für Biochemie und Biologie (13)
- Institut für Künste und Medien (11)
- Mathematisch-Naturwissenschaftliche Fakultät (11)
- Institut für Mathematik (9)
- Wirtschaftswissenschaften (9)
- Institut für Physik und Astronomie (8)
- Institut für Informatik und Computational Science (7)
- Strukturbereich Kognitionswissenschaften (4)
- Department Psychologie (3)
- Institut für Umweltwissenschaften und Geographie (3)
- Humanwissenschaftliche Fakultät (2)
- Institut für Anglistik und Amerikanistik (2)
- Institut für Chemie (2)
- Institut für Ernährungswissenschaft (2)
- Sonderforschungsbereich 632 - Informationsstruktur (2)
- Department Linguistik (1)
- Department Sport- und Gesundheitswissenschaften (1)
- Institut für Religionswissenschaft (1)
- Institut für Romanistik (1)
- Interdisziplinäres Zentrum für Dynamik komplexer Systeme (1)
- MenschenRechtsZentrum (1)
- Sozialwissenschaften (1)
- WeltTrends e.V. Potsdam (1)
Proceedings of KogWis 2010 : 10th Biannual Meeting of the German Society for Cognitive Science
(2010)
As the latest biannual meeting of the German Society for Cognitive Science (Gesellschaft für Kognitionswissenschaft, GK), KogWis 2010 at Potsdam University reflects the current trends in a fascinating domain of research concerned with human and artificial cognition and the interaction of mind and brain. The plenary talks provide a venue for questions of numerical capacities and human arithmetic (Brian Butterworth), of the theoretical development of cognitive architectures and intelligent virtual agents (Pat Langley), of categorizations induced by linguistic constructions (Claudia Maienborn), and of a cross-level account of the “Self as a complex system” (Paul Thagard). KogWis 2010 integrates a wealth of experimental research, cognitive modelling, and conceptual analysis in 5 invited symposia, over 150 individual talks, 6 symposia, and more than 40 poster contributions. Some of the invited symposia reflect local and regional strengths of research in the Berlin-Brandenburg area: the two largest research fields of the university's Cognitive Sciences Area of Excellence in Potsdam are represented by an invited symposium on “Information Structure” by the Special Research Area 632 (“Sonderforschungsbereich”, SFB) of the same name at Potsdam University and Humboldt-University Berlin, and by a satellite conference of the research group “Mind and Brain Dynamics”. The Berlin School of Mind and Brain at Humboldt-University Berlin takes part with an invited symposium on “Decision Making” from the perspective of cognitive neuroscience and philosophy, and the DFG Cluster of Excellence “Languages of Emotion” of Free University presents interdisciplinary research results in an invited symposium on “Symbolising Emotions”.
Different properties of programs implemented in Constraint Handling Rules (CHR) have already been investigated. Proving these properties in CHR is considerably simpler than proving them in any type of imperative programming language, which triggered the proposal of a methodology for mapping imperative programs into equivalent CHR programs. The equivalence of both programs implies that if a property is satisfied for one, then it is satisfied for the other. The mapping methodology could be put to other beneficial uses. One such use is the automatic generation of global constraints, in an attempt to demonstrate the benefits of having a rule-based implementation for constraint solvers.
We establish elements of a new approach to ellipticity and parametrices within operator algebras on manifolds with higher singularities, based only on some general axiomatic requirements on parameter-dependent operators in suitable scales of spaces. The idea is to model an iterative process with new generations of parameter-dependent operator theories, together with new scales of spaces that satisfy requirements analogous to the original ones, now on a corresponding higher level. The "full" calculus involves two separate theories, one near the tip of the corner and another one at the conical exit to infinity. However, concerning the conical exit to infinity, we establish here a new concrete calculus of edge-degenerate operators which can be iterated to higher singularities.
In the present work, we study wave phenomena in strongly nonlinear lattices. Such lattices are characterized by the absence of classical linear waves. We demonstrate that compactons – strongly localized solitary waves with tails decaying faster than exponentially – exist and that they play a major role in the dynamics of the system under consideration. We investigate compactons in different physical setups. One part deals with lattices of dispersively coupled limit cycle oscillators, which find various applications in the natural sciences, for example in Josephson junction arrays or coupled Ginzburg-Landau equations. Another part deals with Hamiltonian lattices. Here, a prominent example in which compactons can be found is the granular chain. In the third part, we study systems related to the discrete nonlinear Schrödinger equation, describing, for example, coupled optical wave-guides or the dynamics of Bose-Einstein condensates in optical lattices. Our investigations are based on a numerical method for solving the traveling wave equation. This yields a quasi-exact solution (up to numerical errors), which is the compacton. Another ansatz employed throughout this work is the quasi-continuous approximation, where the lattice is described by a continuous medium. Here, compactons are found analytically, but they are defined on truly compact support. Remarkably, both approaches give similar qualitative and quantitative results. Additionally, we study the dynamical properties of compactons by means of numerical simulation of the lattice equations. In particular, we concentrate on their emergence from physically realizable initial conditions as well as on their stability under collisions. We show that the collisions are not exactly elastic: a small part of the energy remains at the location of the collision. In finite lattices, this remaining part will then trigger a multiple scattering process resulting in a chaotic state.
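For orientation, the discrete nonlinear Schrödinger (DNLS) equation referred to above reads, in one common sign convention,

i \dot{\psi}_n = -\varepsilon\,(\psi_{n+1} + \psi_{n-1}) - |\psi_n|^2 \psi_n,

where \psi_n is the complex amplitude at lattice site n and \varepsilon the inter-site coupling; in the strongly nonlinear lattices studied in the thesis the linear coupling term is absent or replaced by a nonlinear one, which is what removes the classical linear waves.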
In this paper we develop a spatial Cournot trade model with two unequally sized countries, using the geographical interpretation of the Hotelling line. We analyze the trade and welfare effects of international trade between these two countries. The welfare analysis indicates that in this framework the large country benefits from free trade while the small country may be hurt by opening up to trade. This finding is contrary to the results of Shachmurove and Spiegel (1995) as well as Tharakan and Thisse (2002), who use related models to analyze size effects in international trade and find that the small country usually gains from trade while the large country may lose.
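A generic setup of this kind can be sketched as follows (all symbols are introduced here for illustration only and are not taken from the paper): markets lie at locations x on the Hotelling line [0,1], each with linear inverse demand p(x) = a - Q(x), where Q(x) is the total quantity shipped to x; a firm located at x_i that ships q_i(x) to market x at unit transport cost t earns

\pi_i = \int_0^1 \bigl( a - Q(x) - t\,|x - x_i| \bigr)\, q_i(x)\, dx,

so a larger home market segment mechanically raises the share of profits earned without incurring transport costs, which is the channel through which country size enters the welfare comparison.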
Reviewed work: Leshonot yehude Sefarad ve-ha-mizrach vesifruyotehem / Languages and literatures of Sephardic and Oriental Jews. Jerusalem: Misgav Yerushalayim, 2009. 484 pp. [Hebrew] + 434 pp. [Latin script]; ill.
In a very simplified view, plant leaf growth can be reduced to two processes, cell division and cell expansion, accompanied by expansion of the surrounding cell walls. The vacuole, being the largest compartment of the plant cell, plays a major role in controlling the water balance of the plant. This is achieved by regulating the osmotic pressure, through the import and export of solutes across the vacuolar membrane (the tonoplast), and by controlling the water channels, the aquaporins. Together with the control of cell wall relaxation, vacuolar osmotic pressure regulation is thought to play an important role in cell expansion, directly by providing cell volume and indirectly by providing ion and pH homeostasis for the cytoplasm. In this thesis, the role of tonoplast protein-coding genes in cell expansion in the model plant Arabidopsis thaliana is studied, and genes which play a putative role in growth are identified. Since there is, to date, no clearly identified protein localization signal for the tonoplast, genome-wide prediction of proteins localized to this compartment is not possible. Thus, a series of recent proteomic studies of the tonoplast was used to compile a list of cross-membrane tonoplast protein-coding genes (117 genes), and other growth-related genes, notably from the growth regulating factor (GRF) and expansin families, were included (26 genes). For these genes, a platform for high-throughput reverse transcription quantitative real-time polymerase chain reaction (RT-qPCR) was developed by selecting specific primer pairs. To this end, a software tool (called QuantPrime, see http://www.quantprime.de) was developed that automatically designs such primers and tests their specificity in silico against whole transcriptomes and genomes, to avoid cross-hybridizations causing unspecific amplification. The RT-qPCR platform was used in an expression study in order to identify candidate growth-related genes. Here, a growth-associative spatio-temporal leaf sampling strategy was used, targeting growing regions at developmental stages of high expansion and comparing them to samples taken from non-expanding regions or stages of low expansion. Candidate growth-related genes were identified after applying a template-based scoring analysis to the expression data, ranking the genes according to their association with leaf expansion. To analyze the functional involvement of these genes in leaf growth on a macroscopic scale, knockout mutants of the candidate growth-related genes were screened for growth phenotypes. To this end, a system for non-invasive automated leaf growth phenotyping was established, based on a commercially available image capture and analysis system. A software package was developed for detailed developmental stage annotation of the images captured with the system, and an analysis pipeline was constructed for automated data pre-processing and statistical testing, including modeling and graph generation, for various growth-related phenotypes. Using this system, 24 knockout mutant lines were analyzed, and significant growth phenotypes were found for five different genes.
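As an aside, the core of such an in-silico primer specificity test can be sketched in a few lines. This is a minimal sketch only, assuming exact string matching; QuantPrime's actual checks score mismatches, primer properties, and the genomic background, and the sequences and IDs below are hypothetical.

```python
# Minimal sketch of an in-silico primer-pair specificity test
# (hypothetical simplification: exact string matching only).

def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def amplified_targets(fwd, rev, transcriptome):
    """IDs of transcripts containing the forward primer and, downstream
    of it, the reverse complement of the reverse primer."""
    hits = []
    for tid, seq in transcriptome.items():
        i = seq.find(fwd)
        if i >= 0 and seq.find(revcomp(rev), i + len(fwd)) >= 0:
            hits.append(tid)
    return hits

# Hypothetical toy transcriptome and primer pair:
transcriptome = {
    "AT1G01010.1": "ATGGCGTACGTTAGCTAGCATCGATCGTACGATCGGCTA",
    "AT1G01020.1": "ATGCCCTACGTTAGCATCGATCGTACGATTTTGGCTAAA",
}
hits = amplified_targets("ATGGCGTACG", "TAGCCGATCG", transcriptome)
print("specific" if len(hits) == 1 else "unspecific", hits)
```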
Companies develop process models to explicitly describe their business operations. At the same time, these business processes must adhere to various types of compliance requirements. Regulations (e.g., the Sarbanes-Oxley Act of 2002), internal policies, and best practices are just a few sources of compliance requirements. In some cases, non-adherence to compliance requirements makes the organization subject to legal punishment. In other cases, non-adherence leads to loss of competitive advantage and thus loss of market share. Unlike the classical domain-independent behavioral correctness of business processes, compliance requirements are domain-specific. Moreover, compliance requirements change over time. New requirements might appear due to changes in laws and the adoption of new policies. Compliance requirements are issued or enforced by different entities that have different objectives behind these requirements. Finally, compliance requirements might affect different aspects of business processes, e.g., control flow and data flow. As a result, it is infeasible to hard-code compliance checks in tools. Rather, a repeatable process of modeling compliance rules and automatically checking them against business processes is needed. This thesis provides a formal approach to support design-time compliance checking of processes. Using visual patterns, it is possible to model compliance requirements concerning control flow, data flow, and conditional flow rules. Each pattern is mapped into a temporal logic formula. The thesis addresses the problem of consistency checking among various compliance requirements, as they might stem from divergent sources. The thesis also contributes an approach to automatically check compliance requirements against process models using model checking. We show that extra domain knowledge, beyond what is expressed in the compliance rules, is needed to reach correct decisions. In case of violations, we are able to provide useful feedback to the user, in the form of those parts of the process model whose execution causes the violation. In some cases, our approach is capable of providing an automated remedy for the violation.
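For illustration, a typical pattern-to-formula mapping of this kind (the activity names are hypothetical, and the thesis's pattern language and exact logic dialect may differ in detail): the control-flow rule "whenever an order is approved, it must eventually be archived" becomes the temporal logic formula

\mathbf{G}\,(\mathit{approve} \rightarrow \mathbf{F}\,\mathit{archive}),

where G ("globally") and F ("finally") are the standard temporal operators; a model checker can then verify this formula against the state space of the process model.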
This professorial dissertation collects several empirical studies on tax distribution and tax reform in Germany. Chapter 2 deals with two studies on effective income taxation, based on representative micro data sets from tax statistics. The first study analyses effective income taxation at the individual level, in particular with respect to top incomes. It is based on an integrated micro data file of household survey data and income tax statistics, which captures the entire income distribution up to the very top. Despite substantial tax base erosion and reductions of top tax rates, the German personal income tax has remained effectively progressive. The distribution of the tax burden is highly concentrated, and the German economic elite is still taxed relatively heavily, even though the effective tax rate for this group has declined significantly. The second study of Chapter 2 highlights the effective income taxation of functional income sources, such as labor income, business and capital income, etc. Using income tax micro data and microsimulation models, we allocate the individual income tax liability to the respective income sources, according to different apportionment schemes accounting for losses. We find that the choice of the apportionment scheme markedly affects the tax shares of income sources and implicit tax rates, in particular those of capital income. Income types without significant losses, such as labor income or transfer incomes, show higher tax shares and implicit tax rates if we account for losses. The opposite is true for capital income, in particular for income from renting and leasing. Chapter 3 presents two studies on business taxation, based on representative micro data sets from tax statistics and the microsimulation model BizTax. The first part provides a study on fundamental reform options for the German local business tax. We find that today's high concentration of local business tax revenues on corporations with high profits decreases if the tax base is broadened by integrating more taxpayers and by including more elements of business value added. The reform scenarios with a broader tax base distribute the local business tax revenue per capita more equally across regional categories. The second study of Chapter 3 discusses the macroeconomic performance of business taxation against the background of corporate income. A comparison of the tax base reported in tax statistics with the macroeconomic corporate income from national accounts points to considerable tax base erosion. The average implicit tax rate on corporate income has been around 20 percent since 2001, thus falling considerably short of statutory tax rates and effective tax rates discussed in the literature. For lack of detailed accounting data, it is hard to give precise reasons for the presumed tax base erosion. Chapter 4 deals with several assessment studies on the ecological tax reform implemented in Germany as of 1999. First, we describe the scientific, ideological, and political background of the ecological tax reform. Further, we present the main findings of a first systematic impact analysis. We employ two macroeconomic models, an econometric input-output model and a recursive-dynamic computable general equilibrium (CGE) model. Both models show that Germany's ecological tax reform helps to reduce energy consumption and CO2 emissions without having a substantial adverse effect on overall economic growth. It could have a slightly positive effect on employment.
The reform's impact on the business sector and the effects of special provisions granted to agriculture and the goods and materials sectors are outlined in a further study. The special provisions avoid higher tax burdens on energy-intensive production. However, they strongly reduce the marginal tax rates and thus the incentives for energy saving. Although the 2003 reform of the special provisions increased the overall tax burden of the energy-intensive industry, the enlarged eligibility for tax rebates neutralizes the ecological incentives. Based on the Income and Consumption Survey of 2003, we have analyzed the distributional impact of the ecological tax reform. The increased energy taxes have a clearly regressive impact relative to disposable income. Families with children face a higher tax burden relative to household income. The reduction of pension contributions and the automatic adjustment of social security transfers largely mitigate this regressive impact. Households with low income or with many children nevertheless bear a slight increase in tax burden. Refunding the eco-tax revenue through an eco bonus would make the reform clearly progressive.
Abstract interpretation-based model checking provides an approach to verifying properties of infinite-state systems. In practice, most previous work on abstract model checking is either restricted to verifying universal properties or develops special techniques for temporal logics, such as modal transition systems or other dual transition systems. By contrast, we apply completely standard techniques for constructing abstract interpretations to the abstraction of a CTL semantic function, without restricting the kind of properties that can be verified. Furthermore, we show that this leads directly to an implementation of abstract model checking algorithms for abstract domains based on constraints, making use of an SMT solver.
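To make the setting concrete, below is a minimal sketch of the standard least-fixpoint computation behind one CTL operator, EF, over a finite (for example, abstract) transition system. The states, transitions, and goal set are hypothetical, and the paper's constraint-based abstract domains and SMT integration are not shown.

```python
# Minimal sketch: EF(goal) over a finite transition system via the
# least fixpoint of Z = goal | pre(Z).

def ef(states, trans, goal):
    """States from which some path eventually reaches 'goal'."""
    z = set(goal)
    while True:
        pre = {s for s in states if any(t in z for t in trans.get(s, ()))}
        new = z | pre
        if new == z:
            return z
        z = new

# Hypothetical toy system:
states = {"s0", "s1", "s2", "s3"}
trans = {"s0": {"s1"}, "s1": {"s2"}, "s2": {"s2"}, "s3": {"s0"}}
print(sorted(ef(states, trans, goal={"s2"})))  # ['s0', 's1', 's2', 's3']
```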
Data obtained from foreign data sources often come with only superficial structural information, such as relation names and attribute names. Other types of metadata that are important for effective integration and meaningful querying of such data sets are missing. In particular, relationships among attributes, such as foreign keys, are crucial metadata for understanding the structure of an unknown database. The discovery of such relationships is difficult, because in principle each pair of data values must be compared for each pair of attributes in the database. A precondition for a foreign key is an inclusion dependency (IND) between the key and the foreign key attributes. We present Spider, an algorithm that efficiently finds all INDs in a given relational database. It leverages the sorting facilities of the DBMS but performs the actual comparisons outside of the database to save computation. Spider analyzes very large databases up to an order of magnitude faster than previous approaches. We also evaluate in detail the effectiveness of several heuristics to reduce the number of necessary comparisons. Furthermore, we generalize Spider to find composite INDs covering multiple attributes, and partial INDs, which are true INDs for all but a certain number of values. This last type is particularly relevant when integrating dirty data, as is often the case in the life sciences domain - our driving motivation.
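The core of unary IND discovery can be sketched as follows. This is a minimal sketch assuming the data fits in memory; Spider instead merges sorted value streams so that very large databases can be handled, and the table contents below are hypothetical.

```python
# Minimal sketch: attribute A is included in B (an IND A ⊆ B) iff every
# distinct value of A also occurs in B.

from itertools import permutations

def unary_inds(columns):
    """columns: dict attribute-name -> iterable of values."""
    distinct = {a: set(vals) for a, vals in columns.items()}
    return [(a, b) for a, b in permutations(distinct, 2)
            if distinct[a] <= distinct[b]]

# Hypothetical toy instance:
columns = {
    "orders.customer_id": [1, 2, 2, 3],
    "customers.id":       [1, 2, 3, 4],
}
print(unary_inds(columns))  # [('orders.customer_id', 'customers.id')]
```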
Multi-color fluorescence imaging experiments of wave-forming Dictyostelium cells have revealed that actin waves separate two domains of the cell cortex that differ in their actin structure and phosphoinositide composition. We propose a bistable model of actin dynamics to account for these experimental observations. The model is based on the simplifying assumption that the actin cytoskeleton is composed of two distinct network types, a dendritic and a bundled network. The two structurally different states observed in experiments correspond to the stable fixed points in the bistable regime of this model, each fixed point being dominated by one of the two network types. The experimentally observed actin waves can be considered as trigger waves that propagate transitions between the two stable fixed points.
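A generic bistable reaction-diffusion model of this type (illustrative only; the paper's model couples the densities of the two network types) is

\partial_t u = D\,\partial_x^2 u + u\,(u - a)\,(1 - u), \qquad 0 < a < 1,

whose two stable fixed points u = 0 and u = 1 are connected by trigger waves: a sufficiently large local perturbation propagates through the medium and switches it from one stable state to the other, which is the behaviour attributed to the actin waves above.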
In the most abstract definition of its operational semantics, the declarative and concurrent programming language CHR is trivially non-terminating for a significant class of programs. Common refinements of this definition, in closing the gap to real-world implementations, compromise on declarativity and/or concurrency. Building on recent work and on the notion of persistent constraints, we introduce an operational semantics that avoids trivial non-termination without compromising these essential features.
The programmable network envisioned in the 1990s within standardization and research for the Intelligent Network is currently coming into reality using IP-based Next Generation Networks (NGN) and applying Service-Oriented Architecture (SOA) principles for service creation, execution, and hosting. SOA is the foundation for both next-generation telecommunications and middleware architectures, which are rapidly converging on top of commodity transport services. Services such as triple/quadruple play, multimedia messaging, and presence are enabled by the emerging service-oriented IP Multimedia Subsystem (IMS), and allow telecommunications service providers to maintain, if not improve, their position in the marketplace. SOA has become the de facto standard in next-generation middleware systems as the system model of choice to interconnect service consumers and providers within and between enterprises. We leverage previous research activities in overlay networking technologies along with recent advances in network abstraction, service exposure, and service creation to develop a paradigm for a service environment providing converged Internet and telecommunications services that we call the Service Broker. Such a Service Broker provides mechanisms to combine and mediate between different service paradigms from the two domains Internet/WWW and telecommunications. Furthermore, it enables the composition of services across these domains and is capable of defining and applying temporal constraints during creation and execution time. By adding network-awareness into the service fabric, such a Service Broker may also act as a next-generation network-to-service element allowing the composition of cross-domain and cross-layer network and service resources. The contribution of this research is threefold: first, we analyze and classify principles and technologies from Information Technologies (IT) and telecommunications to identify and discuss issues allowing cross-domain composition in a converging service layer. Second, we discuss service composition methods allowing the creation of converged services on an abstract level; in particular, we present a formalized method for model checking of such compositions. Finally, we propose a Service Broker architecture converging Internet and telecom services. This environment enables cross-domain feature interaction in services through formalized obligation policies acting as constraints during service discovery, creation, and execution time.
The Antarctic plays an important role in the global climate system. On the one hand, the Antarctic Ice Sheet is the largest freshwater reservoir on Earth. On the other hand, a major proportion of the global bottom-water formation takes place in Antarctic shelf regions, forcing the global thermohaline circulation. The main goal of this dissertation is to provide new insights into the dynamics and stability of the East Antarctic Ice Sheet (EAIS) during the Quaternary. Additionally, variations in the activity of bottom-water formation and their causes are investigated. The dissertation is a German contribution to the International Polar Year 2007/2008 and was funded by the Deutsche Forschungsgemeinschaft (DFG) within the scope of priority program 1158, ‘Antarctic research with comparative studies in Arctic ice regions’. During RV Polarstern expedition ANT-XXIII/9, glaciomarine sediments were recovered from the Prydz Bay-Kerguelen region. Prydz Bay is a key region for the study of EAIS dynamics, as 16% of the EAIS is drained through the Lambert Glacier into the bay. Thereby, the glacier transports sediment into Prydz Bay, which is then further distributed by calving icebergs or by current transport. The scientific approach of this dissertation is the reconstruction of past glaciomarine environments in order to infer the response of the Lambert Glacier-Amery Ice Shelf system to climate shifts during the Quaternary. To characterize the depositional setting, sedimentological methods are used and statistical analyses are applied. Mineralogical and (bio)geochemical methods provide a means to reconstruct sediment provenances and to provide evidence of changes in the primary production in the surface water column. Age-depth models were constructed based on palaeomagnetic and palaeointensity measurements, diatom stratigraphy, and radiocarbon dating. Sea-bed surface sediments in the investigation area show distinct variations in their clay mineral and heavy-mineral assemblages. Considerable differences in the mineralogical composition of surface sediments are found on the continental shelf. Clay minerals as well as heavy minerals provide useful parameters to differentiate between sediments which originated from erosion of crystalline rocks and sediments originating from Permo-Triassic deposits. Consequently, mineralogical parameters can be used to reconstruct the provenance of current-transported and ice-rafted material. The investigated sediment cores cover the time intervals of the last 1.4 Ma (continental slope) and the last 12.8 cal. ka BP (MacRobertson shelf). The sediment deposits were mainly influenced by glacial and oceanographic processes, and further by biological activity (continental shelf), meltwater input, and possibly gravitational transport. Sediments from the continental slope document two major deglacial events: the first deglaciation is associated with the mid-Pleistocene warming recognized around the Antarctic. In Prydz Bay, the Lambert Glacier-Amery Ice Shelf retreated far to the south, and high biogenic productivity commenced, or biogenic remains were better preserved due to increased sedimentation rates. Thereafter, stable glacial conditions persisted until 400-500 ka BP. Calving of icebergs was restricted to the western part of the Lambert Glacier. The deeper bathymetry in this area allows for a floating ice shelf even during times of lowered sea level. Between 400-500 ka BP and the last interglacial (marine isotope stage 5), the glacier was more dynamic.
During or shortly after the last interglacial, the Lambert Glacier-Amery Ice Shelf system retreated again due to a sea-level rise of 6-9 m. Both deglacial events correlate with a reduction in the thickness of ice masses in the Prince Charles Mountains, indicating that a disintegration of the Amery Ice Shelf possibly led to increased drainage of ice masses from the Prydz Bay hinterland. A new end-member modelling algorithm was successfully applied to sediments from the MacRobertson shelf to unmix the sand grain-size fractions sorted by current activity and ice transport, respectively. Ice retreat on the MacRobertson shelf commenced at 12.8 cal. ka BP and ended around 5.5 cal. ka BP. During the Holocene, strong fluctuations of the bottom-water activity were observed, probably related to variations of sea-ice formation in the Cape Darnley polynya. Increased bottom-water flow was reconstructed at transitions from warm to cool conditions, whereas bottom-water activity receded during the mid-Holocene climate optimum. It can be concluded that the Lambert Glacier-Amery Ice Shelf system was relatively stable with respect to climate variations during the Quaternary. In contrast, bottom-water formation due to polynya activity was very sensitive to changes in atmospheric forcing and should receive more attention in future research.
The origin and evolution of granites has been widely studied because granitoid rocks constitute a major portion of the Earth's crust. The formation of granitic magma is, besides temperature, mainly triggered by the water content of these rocks. The presence of water in magmas plays an important role due to the ability of aqueous fluids to change the chemical composition of the magma. The exsolution of aqueous fluids from melts is closely linked to a fractionation of elements between the two phases. Aqueous fluids then migrate to shallower parts of the Earth's crust because of their lower density compared to that of melts and adjacent rocks. This process separates fluids and melts; furthermore, during the ascent, aqueous fluids can react with the adjacent rocks and alter their chemical signature. This is particularly important during the formation of magmatic-hydrothermal ore deposits or in the late stages of the evolution of magmatic complexes. For a deeper insight into these processes, it is essential to improve our knowledge of element behavior in such systems. In particular, trace elements are used for these studies and for petrogenetic interpretations because, unlike major elements, they are not essential for the stability of the phases involved and often reflect magmatic processes with less ambiguity. However, for the majority of important trace elements, the dependence of the geochemical behavior on temperature, pressure, and in particular on the composition of the system is only incompletely studied experimentally, or not at all. Former studies often focus on the determination of fluid-melt partition coefficients (Df/m = cfluid/cmelt) of economically interesting elements, e.g., Mo, Sn, Cu, and some partitioning data are available for elements that are also commonly used for petrological interpretations. At present, no systematic experimental data on trace element behavior in fluid-melt systems as a function of pressure, temperature, and chemical composition are available. Additionally, almost all existing data are based on the analysis of quenched phases. This results in substantial uncertainties, particularly for the quenched aqueous fluid, because trace element concentrations may change upon cooling. The objective of this PhD thesis was to study fluid-melt partition coefficients between aqueous solutions and granitic melts for different trace elements (Rb, Sr, Ba, La, Y, and Yb) as a function of temperature, pressure, salinity of the fluid, composition of the melt, and the experimental and analytical approach. The latter included the refinement of an existing method to measure trace element concentrations in fluids equilibrated with silicate melts directly at elevated pressures and temperatures, using a hydrothermal diamond-anvil cell and synchrotron radiation X-ray fluorescence microanalysis. The application of this in-situ method makes it possible to avoid the main source of error in data from quench experiments, i.e., the trace element concentration in the fluid. A comparison of the in-situ results to data of conventional quench experiments allows a critical evaluation of quench data from this study and of literature data. In detail, the starting materials consisted of a suite of trace element doped haplogranitic glasses with ASI varying between 0.8 and 1.4 and H2O or a chloridic solution with m NaCl/KCl = 1 and different salinities (1.16 to 3.56 m (NaCl+KCl)).
Experiments were performed at 750 to 950 °C and 0.2 or 0.5 GPa using conventional quench devices (externally and internally heated pressure vessels) with different quench rates, and at 750 °C and 0.2 to 1.4 GPa with in-situ analysis of the trace element concentration in the fluids. The fluid-melt partitioning data of all studied trace elements show (1) a preference for the melt (Df/m < 1) at all studied conditions, (2) one to two orders of magnitude higher Df/m using chloridic solutions compared to experiments with H2O, (3) a clear dependence on the melt composition for fluid-melt partitioning of Sr, Ba, La, Y, and Yb in experiments using chloridic solutions, (4) quench rate-related differences of fluid-melt partition coefficients of Rb and Sr, and (5) distinctly higher fluid-melt partitioning data obtained from in-situ experiments than from comparable quench runs, particularly in the case of H2O as the starting solution. The data point to a preference of all studied trace elements for the melt even at fairly high salinities, which contrasts with other experimental studies but is supported by data from studies of natural co-genetically trapped fluid and melt inclusions. The in-situ measurements of trace element concentrations in the fluid verify that aqueous fluids change their composition upon cooling, which is particularly important for Cl-free systems. The distinct differences of the in-situ results to quench data of this study as well as to data from the literature signify the importance of careful fluid sampling and analysis. Therefore, the direct measurement of trace element contents in fluids equilibrated with silicate melts at elevated PT conditions represents an important development towards more reliable fluid-melt partition coefficients. For further improvement, both the aqueous fluid and the silicate melt need to be analyzed in situ, because partitioning data based on the direct measurement of the trace element content in the fluid combined with analysis of a quenched melt are still not completely free of quench effects. At present, all available data on element complexation in aqueous fluids in equilibrium with silicate melts at high PT are indirectly derived from partitioning data, which, in these experiments, involves assumptions about the species present in the fluid. However, the activities of chemical components in these partitioning experiments are not well constrained, which is required for the definition of exchange equilibria between melt and fluid species. For example, the melt-dependent variation of the partition coefficient observed for Sr implies that this element cannot be complexed solely by Cl-, as suggested previously. The data indicate a more complicated complexation of Sr in the aqueous fluid. To verify this hypothesis, the in-situ setup was also used to determine strontium complexation in fluids equilibrated with silicate melts at the desired PT conditions by applying X-ray absorption near edge structure (XANES) spectroscopy. First results show a strong effect of both fluid and melt composition on the resulting XANES spectra, which indicates different complexation environments for Sr.
Deductive databases need general formulas in rule bodies, not only conjunctions of literals. This has been well known since the work of Lloyd and Topor on extended logic programming. Of course, formulas must be restricted in such a way that they can be effectively evaluated in finite time and produce only a finite number of new tuples (in each iteration of the T_P operator; the fixpoint can still be infinite). It is also necessary to respect the binding restrictions of built-in predicates: many of these predicates can be executed only when certain arguments are ground. Whereas for standard logic programming rules the questions of safety, allowedness, and range-restriction are relatively easy and well understood, the situation for general formulas is a bit more complicated. We give a syntactic analysis of formulas that guarantees the necessary properties.
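For the standard rule case, the range-restriction check can be sketched as follows. This is a minimal sketch with a hypothetical rule encoding; the paper's analysis covers general formulas, not just conjunctions of literals.

```python
# Minimal sketch: a rule is range-restricted if every variable of the
# head occurs in some positive body literal.

def vars_of(literal):
    """Variables = uppercase-initial arguments, Prolog-style."""
    _, args = literal
    return {a for a in args if a[:1].isupper()}

def range_restricted(head, body):
    """body: list of (positive: bool, literal) pairs."""
    bound = set()
    for positive, lit in body:
        if positive:
            bound |= vars_of(lit)
    return vars_of(head) <= bound

# p(X, Y) :- q(X), not r(Y).   -- unsafe: Y occurs only negatively
head = ("p", ("X", "Y"))
body = [(True, ("q", ("X",))), (False, ("r", ("Y",)))]
print(range_restricted(head, body))  # False
```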
Background
Micrometer-resolution placement and immobilization of probe molecules is an important step in the preparation of biochips and a wide range of lab-on-chip systems. Most known methods for such a deposition of several different substances are costly and only suitable for a limited number of probes. In this article we present a flexible procedure for the simultaneous, spatially controlled immobilization of functional biomolecules by molecular ink lithography.
Results
For the bottom-up fabrication of surface-bound nanostructures, a universal method is presented that allows the immobilization of different types of biomolecules with micrometer resolution. A supporting surface is biotinylated, and streptavidin molecules are deposited with an AFM (atomic force microscope) tip at distinct positions. Subsequent incubation with a biotinylated molecule species leads to binding only at these positions. After washing, streptavidin is deposited a second time with the same AFM tip, and then a second biotinylated molecule species is coupled by incubation. This procedure can be repeated several times. Here we show how to immobilize different types of biomolecules in an arbitrary arrangement, whereas most common methods can deposit only one type of molecule. The presented method works on transparent as well as opaque substrates. The spatial resolution is better than 400 nm and is limited only by the AFM's positional accuracy after repeated z-cycles, since all steps are performed in situ without moving the supporting surface. The principle is demonstrated by hybridization to different immobilized DNA oligomers and was validated by fluorescence microscopy.
Conclusions
The immobilization of different types of biomolecules in high-density microarrays is a challenging task for biotechnology. The method presented here not only allows for the deposition of DNA at submicrometer resolution but also for proteins and other molecules of biological relevance that can be coupled to biotin.
This paper is a critical examination of the relationship between reality and simulation. After a brief theoretical introduction, it unfolds its argument on an empirical level, using a thick description of playing GRAND THEFT AUTO IV. This in-game experience serves as material for the subsequent analysis, in the course of which defining characteristics of computer game playing are formulated. Finally, on the basis of this analysis, the paper postulates the hypothesis that playing computer games like GTA IV promotes competency in deconstructing simulations and implements a cyclic logic of recreation.
Because software development is increasingly expensive and time-consuming, software reuse gains importance. Aspect-oriented software development modularizes crosscutting concerns, which enables their systematic reuse. The literature provides a number of aspect-oriented programming (AOP) patterns and best practices for developing reusable aspects, based on compelling examples for concerns like tracing, transactions, and persistence. However, such best practices are lacking for the systematic reuse of invasive aspects. In this paper, we present the ‘callback mismatch problem’. This problem arises in the context of abstraction mismatch, in which the aspect is required to issue a callback to the base application. As a consequence, the composition of invasive aspects is cumbersome to implement, difficult to maintain, and impossible to reuse. We motivate this problem with a real-world example, show that it persists in the current state of the art, and outline the need for advanced aspectual composition mechanisms to deal with it.
Based on technological advances made within the past decades, ground-penetrating radar (GPR) has become a well-established, non-destructive subsurface imaging technique. Catalyzed by recent demands for high-resolution, near-surface imaging (e.g., the detection of unexploded ordnance and subsurface utilities, or hydrological investigations), the quality of today's GPR-based, near-surface images has matured significantly. At the same time, the analysis of oil- and gas-related reflection seismic data sets has experienced significant advances. Considering the sensitivity of attribute analysis to data positioning in general, and of multi-trace attributes in particular, trace-positioning accuracy is of major importance for the success of attribute-based analysis flows. Therefore, to study the feasibility of GPR-based attribute analyses, I first developed and evaluated a real-time GPR surveying setup based on a modern tracking total station (TTS). The combination of current GPR systems' capability of fusing global positioning system (GPS) and geophysical data in real time, the ability of modern TTS systems to generate a GPS-like positional output, and wireless data transmission using radio modems results in a flexible and robust surveying setup. To elaborate the feasibility of this setup, I studied the major limitations of such an approach: system cross-talk and data delays known as latencies. Experimental studies have shown that when a minimal distance of ~5 m between the GPR and the TTS system is maintained, the signal-to-noise ratio of the acquired GPR data using radio communication equals that without radio communication. To address the limitations imposed by system latencies, inherent to all real-time data fusion approaches, I developed a novel correction (calibration) strategy to assess the gross system latency and to correct for it. This resulted in the centimeter-level trace accuracy required by high-frequency and/or three-dimensional (3D) GPR surveys. Having introduced this flexible high-precision surveying setup, I successfully demonstrated the application of attribute-based processing to GPR-specific problems, which may differ significantly from the geological ones typically addressed by the oil and gas industry using seismic data. In this thesis, I concentrated on archaeological and subsurface utility problems, as they represent typical near-surface geophysical targets. Enhancing 3D archaeological GPR data sets using a dip-steered filtering approach, followed by calculation of coherency and similarity, allowed me to conduct subsurface interpretations far beyond those obtained by classical time-slice analyses. I could show that the incorporation of additional data sets (magnetic and topographic) and attributes derived from these data sets can further improve the interpretation. In a case study, such an approach revealed the complementary nature of the individual data sets and, for example, allowed conclusions to be drawn about the source location of magnetic anomalies by concurrently analyzing GPR time/depth slices. In addition to archaeological targets, subsurface utility detection and characterization is a steadily growing field of application for GPR. I developed a novel attribute called depolarization. Incorporating geometrical and physical feature characteristics into the depolarization attribute allowed me to display the observed polarization phenomena efficiently.
Geometrical enhancement makes use of an improved symmetry extraction algorithm based on Laplacian high-boosting, followed by a phase-based symmetry calculation using a two-dimensional (2D) log-Gabor filterbank decomposition of the data volume. To extract the physical information from the dual-component data set, I employed a sliding-window principal component analysis. The combination of the geometrically derived feature angle and the physically derived polarization angle allowed me to enhance the polarization characteristics of subsurface features. Ground-truth information obtained by excavations confirmed this interpretation. In the future, the inclusion of cross-polarized antenna configurations into the processing scheme may further improve the quality of the depolarization attribute. In addition to polarization phenomena, the time-dependent frequency evolution of GPR signals might hold further information on the subsurface architecture and/or material properties. High-resolution, sparsity-promoting decomposition approaches have recently had a significant impact on the image and signal processing community. In this thesis, I introduced a modified tree-based matching pursuit approach. Based on different synthetic examples, I showed that the modified tree-based pursuit approach clearly outperforms other commonly used time-frequency decomposition approaches with respect to both time and frequency resolution. Apart from the investigation of tuning effects in GPR data, I also demonstrated the potential of high-resolution sparse decompositions for advanced data processing. Frequency modulation of the individual atoms themselves makes it possible to efficiently correct frequency attenuation effects and to improve resolution by shifting the average frequency level. GPR-based attribute analysis is still in its infancy. Considering the growing widespread realization of 3D GPR studies, there will certainly be an increasing demand for improved subsurface interpretations in the future. Similar to the assessment of quantitative reservoir properties through the combination of 3D seismic attribute volumes with sparse well-log information, parameter estimation in such a combined manner represents another step in emphasizing the potential of attribute-driven GPR data analyses.
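The greedy core of matching pursuit, on which tree-based variants build, can be sketched as follows. This is a minimal sketch assuming a fixed dictionary of unit-norm atoms; the thesis's modified tree-based pursuit organizes the dictionary search hierarchically and additionally frequency-modulates the atoms, neither of which is shown here.

```python
# Minimal sketch of plain matching pursuit over a fixed dictionary.
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """dictionary: (n_atoms, n_samples) array of unit-norm atoms."""
    residual = signal.astype(float).copy()
    decomposition = []
    for _ in range(n_iter):
        corr = dictionary @ residual          # correlate atoms with residual
        k = int(np.argmax(np.abs(corr)))      # best-matching atom
        decomposition.append((k, float(corr[k])))
        residual -= corr[k] * dictionary[k]   # subtract its contribution
    return decomposition, residual

# Hypothetical toy example: 3 orthonormal atoms, signal mixes two of them
atoms = np.eye(3)
sig = 2.0 * atoms[0] + 0.5 * atoms[2]
decomp, res = matching_pursuit(sig, atoms, n_iter=2)
print(decomp, np.linalg.norm(res))  # [(0, 2.0), (2, 0.5)] 0.0
```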
In this paper we consider a simple syntactic extension of Answer Set Programming (ASP) for dealing with (nested) existential quantifiers and double negation in rule bodies, closely related to the recent RASPL-1 proposal. The semantics for this extension simply resorts to Equilibrium Logic (or, equivalently, to the General Theory of Stable Models), which provides a logic-programming interpretation for any arbitrary theory in the syntax of Predicate Calculus. We present a translation of this syntactic class into standard logic programs with variables (either disjunctive or normal, depending on the input rule heads), as allowed by current ASP solvers. The translation relies on the introduction of auxiliary predicates, and the main result shows that it preserves strong equivalence modulo the original signature.
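An illustrative instance of the auxiliary-predicate idea (the predicate names are hypothetical, and the paper's actual translation also covers double negation and is proven to preserve strong equivalence modulo the original signature): a rule with an existentially quantified body, such as

p(X) \leftarrow \exists Y\, q(X, Y),

can be rewritten for standard solvers as the pair of rules

aux(X) \leftarrow q(X, Y), \qquad p(X) \leftarrow aux(X),

where the variable Y, now local to the first rule, plays the role of the existential quantifier.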
How much is too much?
(2010)
Although dietary nutrient intake is often adequate, nutritional supplement use is common among elite athletes. However, high-dose supplements or the use of multiple supplements may exceed the recommended daily allowance (RDA) of particular nutrients or even result in a daily intake above the tolerable upper limit (UL). The present case report presents nutritional intake data and supplement use of a highly trained male swimmer competing at the international level. Habitual energy and micronutrient intake were analysed by 3-day dietary reports. Supplement use and dosage were assessed, and the total amount of nutrient supply was calculated. Micronutrient intake was evaluated based on the RDA and UL as presented by the European Scientific Committee on Food (SCF), and maximum permitted levels in supplements (MPL) are given. The athlete's diet provided adequate micronutrient content well above the RDA, except for vitamin D. Simultaneous use of ten different supplements was reported, resulting in excess intake above the tolerable UL for folate, vitamin E and Zn. Additionally, the daily supplement dosage was considerably above the MPL for nine micronutrients consumed as artificial products. Risks and possible side effects of the athlete exceeding the UL are discussed. Athletes with high energy intake may be at risk of exceeding the UL of particular nutrients if multiple supplements are added. Therefore, dietary counselling of athletes should include assessment of habitual diet and nutritional supplement intake. Educating athletes to balance their diets instead of taking supplements might be prudent to prevent health risks that may occur with long-term excess nutrient intake.
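The UL check described above amounts to simple bookkeeping, sketched below with purely hypothetical intake and UL values; actual figures must come from the SCF tables used in the study.

```python
# Minimal sketch: total intake (diet + supplements) checked against UL.
# All numbers below are hypothetical placeholders, not the study's data.

UL = {"folate_ug": 1000, "vitamin_e_mg": 300, "zinc_mg": 25}   # hypothetical

diet = {"folate_ug": 450, "vitamin_e_mg": 14, "zinc_mg": 13}   # hypothetical
supplements = [
    {"folate_ug": 800, "vitamin_e_mg": 268},   # multivitamin (hypothetical)
    {"zinc_mg": 20, "vitamin_e_mg": 50},       # mineral mix (hypothetical)
]

total = dict(diet)
for s in supplements:
    for nutrient, amount in s.items():
        total[nutrient] = total.get(nutrient, 0) + amount

for nutrient, amount in total.items():
    flag = "EXCEEDS UL" if amount > UL[nutrient] else "ok"
    print(f"{nutrient}: {amount} ({flag})")
```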
Communication, simulation, interactive narrative, and ubiquitous computing are widely accepted as perspectives in human-computer interaction. This paper proposes play as another possible perspective. Everyday uses of the computer increasingly show signs of similarity to play. This is discussed not with regard to the so-called media society, the playful society, the growing cultural acceptance of the computer, the spread of computer games, or a new version of Windows, but in view of the playful character of interaction with the computer that has always been part of it. The exploratory learning process involved with new software and the creative tasks that are often undertaken when using the computer may support this argument. Together with its high level of interactivity, these observations point to a sense of security, autonomy, and freedom of the user that produces play and is, in turn, produced by play. This notion of play refers not to the playing of computer games, but to an implicit, abstract (or symbolic) process based on a certain attitude, the play spirit. This attitude is discussed with regard to everyday computer use and related to the other perspectives mentioned.
The genome can be considered the blueprint of an organism. Composed of DNA, it harbours all organism-specific instructions for the synthesis of all structural components and their associated functions. The role of carriers of actual molecular structure and function was long believed to be assumed exclusively by proteins, encoded in particular segments of the genome, the genes. In the process of converting the information stored in genes into functional proteins, RNA - a third major molecule class - was discovered early on to act as a messenger by copying the genomic information and relaying it to the protein-synthesizing machinery. Furthermore, RNA molecules were identified to assist in the assembly of amino acids into native proteins. For a long time, these rather passive roles were thought to be the sole purpose of RNA. However, in recent years, new discoveries have led to a radical revision of this view. First, RNA molecules with catalytic functions - thought to be the exclusive domain of proteins - were discovered. Then, scientists realized that much more of the genomic sequence is transcribed into RNA molecules than there are proteins in cells, raising the question of what the functions of all these molecules are. Furthermore, very short and altogether new types of RNA molecules seemingly playing a critical role in orchestrating cellular processes were discovered. Thus, RNA has become a central research topic in molecular biology, even to the extent that some researchers dub cells “RNA machines”. This thesis aims to contribute to our understanding of RNA-related phenomena by applying bioinformatics means. First, we performed a genome-wide screen to identify sites at which the chemical composition of DNA (the genotype) critically influences phenotypic traits (the phenotype) of the model plant Arabidopsis thaliana. Whole genome hybridisation arrays were used, and an informatics strategy was developed to identify polymorphic sites from hybridisation to genomic DNA. Following this approach, genotype-phenotype associations were discovered not only across the entire Arabidopsis genome but also in regions not currently known to encode proteins, thus representing candidate sites for novel functional RNA molecules. By statistically associating them with phenotypic traits, clues as to their particular functions were obtained. Furthermore, these candidate regions were subjected to a novel RNA-function classification prediction method developed as part of this thesis. While determining the chemical structure (the sequence) of candidate RNA molecules is relatively straightforward, the elucidation of the structure-function relationship is much more challenging. Towards this end, we devised and implemented a novel algorithmic approach to predict the structural and, thereby, functional class of RNA molecules. In this algorithm, the concept of treating RNA molecule structures as graphs was introduced. We demonstrate that this abstraction of the actual structure leads to meaningful results that may greatly assist in the characterization of novel RNA molecules. Furthermore, by using graph-theoretic properties as descriptors of structure, we identified particular structural features of RNA molecules that may determine their function, thus providing new insights into the structure-function relationships of RNA. The method (termed Grapple) has been made available to the scientific community as a web-based service.
RNA has taken centre stage in molecular biology research, and novel discoveries can be expected to further solidify the central role of RNA in the origin and support of life on Earth. As illustrated by this thesis, bioinformatics methods will continue to play an essential role in these discoveries.
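The graph abstraction at the heart of this approach can be sketched as follows. This is illustrative only: a secondary structure in dot-bracket notation becomes a graph of backbone and base-pair edges, and a few simple descriptors are computed; Grapple's actual feature set and classifier are more elaborate.

```python
# Minimal sketch: RNA secondary structure (dot-bracket) as a graph,
# plus simple graph-theoretic descriptors.

def structure_graph(dotbracket):
    n = len(dotbracket)
    edges = {(i, i + 1) for i in range(n - 1)}   # backbone edges
    stack = []
    for i, c in enumerate(dotbracket):
        if c == "(":
            stack.append(i)
        elif c == ")":
            edges.add((stack.pop(), i))          # base-pair edge
    return n, edges

def descriptors(n, edges):
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    pairs = sum(1 for i, j in edges if j != i + 1)
    return {"nodes": n, "edges": len(edges),
            "base_pairs": pairs, "max_degree": max(deg)}

# Hypothetical hairpin: 4-bp stem with a 4-nt loop
print(descriptors(*structure_graph("((((....))))")))
```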
This thesis is concerned with the development of numerical methods using finite difference techniques for the discretization of initial value problems (IVPs) and initial boundary value problems (IBVPs) of certain hyperbolic systems which are first order in time and second order in space. This type of system appears in some formulations of the Einstein equations, such as ADM, BSSN, NOR, and the generalized harmonic formulation. For the IVP, the stability method proposed in [14] is extended from second- and fourth-order centered schemes to 2n-order accuracy, including also the case when some first-order derivatives are approximated with off-centered finite difference operators (FDOs) and dissipation is added to the right-hand sides of the equations. For the model problem of the wave equation, special attention is paid to the analysis of Courant limits and numerical speeds. Although off-centered FDOs have larger truncation errors than centered FDOs, it is shown that in certain situations off-centering by just one point can be beneficial for the overall accuracy of the numerical scheme. The wave equation is also analyzed with respect to its initial boundary value problem. All three types of boundaries that can appear in this case - outflow, inflow, and completely inflow - are investigated. Using the ghost-point method, 2n-accurate (n = 1, 4) numerical prescriptions are given for each type of boundary. The inflow boundary is also approached using the SAT-SBP method. At the end of the thesis, a 1-D variant of the BSSN formulation is derived and some of its IBVPs are considered. The boundary procedures, based on the ghost-point method, are intended to preserve the interior 2n-accuracy. Numerical tests show that this is the case if sufficient dissipation is added to the right-hand sides of the equations.
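For orientation, the standard second-order centered FDOs of the kind the thesis extends to 2n-order accuracy are (notation chosen here for illustration)

D_0 u_j = \frac{u_{j+1} - u_{j-1}}{2h}, \qquad D_+ D_- u_j = \frac{u_{j+1} - 2u_j + u_{j-1}}{h^2},

approximating the first and second spatial derivatives on a grid of spacing h; off-centering shifts the stencil to one side, which increases the truncation error but, as argued above, can still improve the overall accuracy of the scheme in certain situations.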
The issue of determining when Judaic communities settled on Romanian land is one of the most interesting and most delicate questions that can be raised about this ethnic group. The presence of the first Jewish communities on this land in ancient times was a “taboo” subject during many historical periods up to 1989, and even after that year studies in this direction remained more than sketchy. The article does not only add information in this domain, but manages to concentrate - almost didactically - the information and the archaeological evidence known to the present time. Material evidence is presented, as well as linguistic, toponymic, and even religious evidence. The author also draws a parallel between some features of the Dacian state and of Dacia Felix, conquered by the Romans, and the presence of some Judaic communities, not very numerous, made up of Jewish people who arrived together with the Roman conquerors.
Temporal gravimeter observations, used in geodesy and geophysics to study variations of the Earth's gravity field, are influenced by local water storage changes (WSC) and - from this perspective - add noise to the gravimeter signal records. At the same time, the part of the gravity signal caused by WSC may provide substantial information for hydrologists. Water storage is the fundamental state variable of hydrological systems, but comprehensive data on total WSC are practically inaccessible, and their quantification is associated with a high level of uncertainty at the field scale. This study investigates the relationship between temporal gravity measurements and WSC in order to reduce the hydrological interfering signal in temporal gravity measurements and to explore the value of temporal gravity measurements for hydrology, for the superconducting gravimeter (SG) of the Geodetic Observatory Wettzell, Germany. A 4D forward model with a spatially nested discretization domain was developed to simulate and calculate the local hydrological effect on the temporal gravity observations. An intensive measurement system was installed at the Geodetic Observatory Wettzell, and WSC were measured in all relevant storage components, namely groundwater, saprolite, soil, top soil, and snow storage. The monitoring system also comprised a suction-controlled, weighable, monolith-filled lysimeter, allowing a first-ever comparison of a lysimeter and a gravimeter. Lysimeter data were used to estimate WSC at the field scale in combination with complementary observations and a hydrological 1D model. Total local WSC were derived, uncertainties were assessed, and the hydrological gravity response was calculated from the WSC. A simple conceptual hydrological model was calibrated and evaluated against records of a superconducting gravimeter, soil moisture, and groundwater time series. The model was evaluated by a split-sample test and validated against independently estimated WSC from the lysimeter-based approach. A simulation of the hydrological gravity effect showed that WSC of one meter height along the topography caused a gravity response of 52 µGal, whereas, generally in geodesy, on flat terrain, the same water mass variation causes a gravity change of only 42 µGal (Bouguer approximation). The radius of influence of local water storage variations can be limited to 1000 m, and 50% to 80% of the local hydrological gravity signal is generated within a radius of 50 m around the gravimeter. At the Geodetic Observatory Wettzell, WSC in the snow pack, top soil, unsaturated saprolite, and fractured aquifer are all important terms of the local water budget. With the exception of snow, all storage components have gravity responses of the same order of magnitude and are therefore relevant for gravity observations. The comparison of the total hydrological gravity response to the gravity residuals obtained from the SG showed similarities in both short-term and seasonal dynamics. However, the results demonstrated the limitations of estimating total local WSC using hydrological point measurements. The results of the lysimeter-based approach showed that gravity residuals are caused to a larger extent by local WSC than previously estimated.
A comparison of the results with other methods used in the past to correct temporal gravity observations for the local hydrological influence showed that the lysimeter measurements significantly improved the independent estimation of WSC and thus provided a better way of estimating the local hydrological gravity effect. In the context of hydrological noise reduction, the installation of a lysimeter in combination with complementary hydrological measurements is recommended at sites where temporal gravity observations are used for geophysical studies beyond local hydrology. From the hydrological viewpoint, using gravimeter data as a calibration constraint improved the model results compared to hydrological point measurements. Thanks to their capacity to integrate over different storage components and a larger area, gravimeters provide generalized information on total WSC at the field scale. Due to their integrative nature, gravity data must be interpreted with great care in hydrological studies. Nevertheless, gravimeters can serve as a novel measurement instrument for hydrology, and the application of gravimeters especially designed to study open research questions in hydrology is recommended.
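As a plausibility check (an addition for the reader, not part of the thesis summary), the flat-terrain value of 42 µGal per meter of water quoted above follows directly from the Bouguer plate approximation:

```latex
\Delta g = 2\pi G \rho_w h
         = 2\pi \cdot 6.674\times 10^{-11}\,\mathrm{m^3\,kg^{-1}\,s^{-2}}
           \cdot 1000\,\mathrm{kg\,m^{-3}} \cdot 1\,\mathrm{m}
         \approx 4.19\times 10^{-7}\,\mathrm{m\,s^{-2}}
         \approx 42\,\mu\mathrm{Gal}
```

The larger value of 52 µGal reported for Wettzell reflects the additional contribution of topography, which the flat-plate approximation ignores.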
Reviewed work: Grossman, David: Eine Frau flieht vor einer Nachricht. - München : Hanser, 2009. - 728 pp. ISBN 978-3-446-23397-3
In reading, word frequency is commonly regarded as the major bottom-up determinant of the speed of lexical access. Moreover, language processing depends on top-down information, such as the predictability of a word from its previous context. Yet the exact role of top-down predictions in visual word recognition is poorly understood: they may rapidly affect lexical processes, or alternatively, influence only late post-lexical stages. To add evidence about the nature of top-down processes and their relation to bottom-up information in the timeline of word recognition, we examined influences of frequency and predictability on event-related potentials (ERPs) in several sentence reading studies. The results were related to eye movements from natural reading as well as to models of word recognition. As a first and major finding, interactions of frequency and predictability on ERP amplitudes consistently revealed top-down influences on lexical levels of word processing (Chapters 2 and 4). Second, frequency and predictability mediated relations between N400 amplitudes and fixation durations, pointing to their sensitivity to a common stage of word recognition; further, larger N400 amplitudes entailed longer fixation durations on the next word, a result providing evidence for ongoing processing beyond a fixation (Chapter 3). Third, influences of presentation rate on ERP frequency and predictability effects demonstrated that the time available for word processing critically co-determines the course of bottom-up and top-down influences (Chapter 4). Fourth, at a near-normal reading speed, an early predictability effect suggested the rapid comparison of top-down hypotheses with the actual visual input (Chapter 5). The present results are compatible with interactive models of word recognition, which assume that early lexical processes depend on the concerted impact of bottom-up and top-down information. We offer a framework that reconciles these findings on a timeline of word recognition, taking into account influences of frequency, predictability, and presentation rate (Chapter 4).
Recent large earthquakes have highlighted the need to improve and develop robust and rapid procedures for properly calculating the magnitude of an earthquake within a short time after its occurrence. The most famous example is the 26 December 2004 Sumatra earthquake, for which the standard procedures adopted at the time by many agencies failed to provide accurate magnitude estimates of this exceptional event early enough to launch timely warnings and an appropriate response. Being related to the radiated seismic energy ES, the energy magnitude ME is a good estimator of the high-frequency content radiated by the source into the seismic waves. However, a procedure to rapidly determine ME (that is to say, within 15 minutes after the earthquake occurrence) was required. Here, a procedure is presented that rapidly provides the energy magnitude ME for shallow earthquakes by analyzing teleseismic P-waves in the distance range 20°–98°. To account for the energy loss experienced by the seismic waves from the source to the receivers, spectral amplitude decay functions obtained from numerical simulations of Green's functions based on the average global model AK135Q are used. The proposed method has been tested on a large global dataset (~1000 earthquakes), and the rapid ME estimates obtained have been compared to other magnitude scales from different agencies. Special emphasis is given to the comparison with the moment magnitude MW, since the latter is very popular and extensively used in common seismological practice. However, it is shown that MW alone provides only limited information about the seismic source properties, and that disaster management organizations would benefit from a combined use of MW and ME in the prompt evaluation of an earthquake's tsunami and shaking potential. In addition, since the proposed approach for ME is intended to work without knowledge of the fault-plane geometry (often available only hours after an earthquake occurrence), the suitability of the method is discussed by grouping the analyzed earthquakes according to their type of mechanism (strike-slip, normal faulting, thrust faulting, etc.). No clear trend is found in the rapid ME estimates across the different fault-plane solution groups. This is not the case for the ME routinely determined by the U.S. Geological Survey, which uses specific radiation-pattern corrections. Further studies are needed to verify the effect of such corrections on ME estimates. Finally, exploiting the redundancy of the information provided by the analyzed dataset, the components of variance in the single-station ME estimates are investigated. The largest component of variance is due to the intra-station (record-to-record) error, although the inter-station (station-to-station) error is not negligible and amounts to several magnitude units for some stations. Moreover, it is shown that the intra-station component of error is not random but depends on the travel path from a source area to a given station. Consequently, empirical corrections may be used to account for heterogeneities of the real Earth that are not considered in the theoretical calculations of the spectral amplitude decay functions used to correct the recorded data for propagation effects.
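For orientation (a standard relation from the seismological literature, assumed here rather than quoted from the thesis), ME is conventionally tied to the radiated seismic energy ES, in joules, by ME = 2/3 (log10 ES − 4.4). A minimal sketch:

```python
import math

def energy_magnitude(es_joules: float) -> float:
    """Energy magnitude from radiated seismic energy ES (in joules),
    using the standard relation ME = 2/3 * (log10(ES) - 4.4)."""
    return (2.0 / 3.0) * (math.log10(es_joules) - 4.4)

# Example: an earthquake radiating ES = 1e15 J has ME ~ 7.07.
print(round(energy_magnitude(1e15), 2))
```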
Background: Leishmania tarentolae, a unicellular eukaryotic protozoan, has been established as a novel host for recombinant protein production in recent years. Current protocols for protein expression in Leishmania are, however, time consuming and require extensive lab work to identify well-expressing cell lines. Here we established an alternative protein expression workflow that employs the recently engineered infrared fluorescent protein (IFP) as a suitable and easy-to-handle reporter protein for recombinant protein expression in Leishmania. As model proteins we tested three proteins from the plant Arabidopsis thaliana, including a NAC and a type-B ARR transcription factor. Results: IFP and IFP fusion proteins were expressed in Leishmania and rapidly detected in cells by deconvolution microscopy and in culture by infrared imaging of 96-well microtiter plates using small cell culture volumes (2 μL - 100 μL). Motility, shape and growth of Leishmania cells were not impaired by intracellular accumulation of IFP. In-cell detection of IFP and IFP fusion proteins was straightforward already at the beginning of the expression pipeline and thus allowed early pre-selection of well-expressing Leishmania clones. Furthermore, IFP fusion proteins retained infrared fluorescence after electrophoresis in denaturing SDS-polyacrylamide gels, allowing direct in-gel detection without the need to disassemble cast protein gels. Thus, parameters for scaling up protein production and streamlining purification routes can easily be optimized when employing IFP as reporter. Conclusions: Using IFP as a biosensor we devised a protocol for rapid and convenient protein expression in Leishmania tarentolae. Our expression pipeline is superior to previously established methods in that it significantly reduces the hands-on time and workload required for identifying well-expressing clones, refining protein production parameters and establishing purification protocols. The facile in-cell and in-gel detection tools built on IFP make Leishmania amenable to high-throughput expression of proteins from plant and animal sources.
Think local sell global
(2010)
Fire-prone Mediterranean-type vegetation systems like those in the Mediterranean Basin and South-Western Australia are global hot spots of plant species diversity. To ensure that management programs act to maintain these highly diverse plant communities, it is necessary to gain a profound understanding of the crucial mechanisms of coexistence. Several mechanisms are discussed in the current literature. The objective of my thesis is to systematically explore, by modelling, the importance of potential mechanisms for maintaining multi-species, fire-prone vegetation. The model I developed is spatially explicit, stochastic, rule- and individual-based. It is parameterised with data on population dynamics collected over 18 years in the Mediterranean-type shrublands of Eneabba, Western Australia. From 156 woody species of the area, seven plant traits were identified as relevant for this study: regeneration mode, annual maximum seed production, seed size, maximum crown diameter, drought tolerance, dispersal mode and seed bank type. Trait sets are used for the definition of plant functional types (PFTs). The PFT dynamics are simulated annually by iterating life-history processes. In the first part of my thesis I investigate the importance of trade-offs for the maintenance of high diversity in multi-species systems with 288 virtual PFTs. Simulation results show that the trade-off concept can be helpful to identify non-viable combinations of plant traits. However, the Shannon diversity index of modelled communities can be high despite the presence of 'supertypes'. I conclude that trade-offs between two traits are less important for explaining multi-species coexistence and high diversity than is predicted by more conceptual models. Several studies show that seed immigration from the regional seed pool is essential for maintaining local species diversity. However, systematic studies on the seed rain composition of multi-species communities are missing. The results of the simulation experiments presented in part two of this thesis clearly show that without seed immigration the local species community found in Eneabba drifts towards a state with few coexisting PFTs. With increasing immigration rates, the number of simulated coexisting PFTs and the Shannon diversity quickly approach values as observed in the field. Including the regional seed input in the model is suited to explaining more aggregated measures of the local plant community structure such as species richness and diversity. Hence, the seed rain composition should be implemented in future studies. In the third part of my thesis I test the sensitivity of Eneabba PFTs to four different climate change scenarios, considering their impact on both local and regional processes. The results show that climate change clearly has the potential to alter the number of dispersed seeds for most of the Eneabba PFTs and therefore the source of the 'immigrants' at the community level. A classification tree analysis shows that, in general, the response to climate change is PFT-specific. In the Eneabba sand plains, the sensitivity of a PFT to climate change depends on its specific trait combination and on the scenario of environmental change, i.e. the development of the amount of rainfall and the fire frequency. This result emphasizes that PFT-specific responses and the regional process of seed immigration should not be ignored in studies dealing with the impact of climate change on future species distributions.
The results of the three chapters are finally analysed in a general discussion. The model is discussed, and improvements and suggestions are made for future research. My work leads to the following conclusions: i) It is necessary to support modelling with empirical work in order to explain coexistence in species-rich plant communities. ii) The chosen modelling approach allows the complexity of coexistence to be taken into account and improves the understanding of coexistence mechanisms. iii) Assumptions grounded in field research, in terms of environmental conditions and plant life histories, can put the importance of more hypothetical coexistence theories in species-rich systems into perspective. In consequence, trade-offs can play a smaller role than predicted by conceptual models. iv) Seed immigration is a key process for local coexistence. Its alteration by climate change should be considered in prognoses of coexistence. Field studies should be carried out to obtain data on seed rain composition.
Enforcing security policies in distributed systems is difficult, in particular when a system contains untrusted components. We designed AspectKE*, a distributed AOP language based on a tuple space, to tackle this issue. In AspectKE*, aspects can enforce access control policies that depend on the future behavior of running processes. One of the key language features is the set of predicates and functions that extract results of static program analysis, which are useful for defining security aspects that must know about the future behavior of a program. AspectKE* also provides a novel variable binding mechanism for pointcuts, so that pointcuts can uniformly specify join points based on both static and dynamic information about the program. Our implementation strategy performs the fundamental static analysis at load time, so as to keep runtime overheads minimal. We implemented a compiler for AspectKE* and demonstrate the usefulness of AspectKE* through a security aspect for a distributed chat system.
Preparation and investigation of polymer-foam films and polymer-layer systems for ferroelectrets
(2010)
Piezoelectric materials are very useful for applications in sensors and actuators. In addition to traditional ferroelectric ceramics and ferroelectric polymers, ferroelectrets have recently become a new group of piezoelectrics. Ferroelectrets are functional polymer systems for electromechanical transduction, with elastically heterogeneous cellular structures and internal quasi-permanent dipole moments. The piezoelectricity of ferroelectrets stems from linear changes of the dipole moments in response to external mechanical or electrical stress. Over the past two decades, polypropylene (PP) foams have been investigated with the aim of ferroelectret applications, and some products are already on the market. PP-foam ferroelectrets may exhibit piezoelectric d33 coefficients of 600 pC/N and more. Their operating temperature can, however, not be much higher than 60 °C. Recently developed polyethylene-terephthalate (PET) and cyclo-olefin copolymer (COC) foam ferroelectrets show slightly better thermal stability of d33, but usually at the price of smaller d33 values. Therefore, the main aim of this work is the development of new thermally stable ferroelectrets with appreciable piezoelectricity. Physical foaming is a promising technique for generating polymer foams from solid films without any pollution or impurity. Supercritical carbon dioxide (CO2) or nitrogen (N2) is usually employed as foaming agent due to its good solubility in several polymers. Poly(ethylene naphthalate) (PEN) is a polyester with slightly better properties than PET. A "voiding + inflation + stretching" process has been specifically developed to prepare PEN foams. Solid PEN films are saturated with supercritical CO2 at high pressure and then thermally voided at high temperatures. Controlled inflation (gas-diffusion expansion, GDE) is applied in order to adjust the void dimensions. Additional biaxial stretching decreases the void heights, since it is known that lens-shaped voids lead to lower elastic moduli and therefore also to stronger piezoelectricity. Both contact and corona charging are suitable for the electric charging of PEN foams. The light emission from the dielectric-barrier discharges (DBDs) can be clearly observed. Corona charging in a gas of high dielectric strength such as sulfur hexafluoride (SF6) results in higher gas-breakdown strength in the voids and therefore increases the piezoelectricity. PEN foams can exhibit piezoelectric d33 coefficients as high as 500 pC/N. Dielectric-resonance spectra show elastic moduli c33 of 1 − 12 MPa, anti-resonance frequencies of 0.2 − 0.8 MHz, and electromechanical coupling factors of 0.016 − 0.069. As expected, it is found that PEN foams show better thermal stability than PP and PET. Samples charged at room temperature can be used up to 80 − 100 °C. Annealing after charging, or charging at elevated temperatures, may improve the thermal stability. Samples charged at suitable elevated temperatures show working temperatures as high as 110 − 120 °C. Acoustic measurements at frequencies of 2 Hz − 20 kHz show that PEN foams are well suited to applications in this frequency range. Fluorinated ethylene-propylene (FEP) copolymers are fluoropolymers with very good physical, chemical and electrical properties. The charge-storage ability of solid FEP films can be significantly improved by adding boron nitride (BN) filler particles. FEP foams are prepared by means of a one-step procedure consisting of CO2 saturation and subsequent in-situ high-temperature voiding.
Piezoelectric d33 coefficients of up to 40 pC/N are measured on such FEP foams. Mechanical fatigue tests show that the as-prepared PEN and FEP foams are mechanically stable over long periods of time. Although polymer-foam ferroelectrets have a high application potential, their piezoelectric properties strongly depend on the cellular morphology, i.e. on the size, shape, and distribution of the voids. On the other hand, controlled preparation of optimized cellular structures is still a technical challenge. Consequently, new ferroelectrets based on polymer-layer systems (sandwiches) have been prepared from FEP. By sandwiching an FEP mesh between two solid FEP films and fusing the polymer system with a laser beam, a well-designed uniform macroscopic cellular structure can be formed. Dielectric resonance spectroscopy reveals piezoelectric d33 coefficients as high as 350 pC/N, elastic moduli of about 0.3 MPa, anti-resonance frequencies of about 30 kHz, and electromechanical coupling factors of about 0.05. Samples charged at elevated temperatures show better thermal stability than those charged at room temperature, and the higher the charging temperature, the better the stability. After proper charging at 140 °C, the working temperatures can be as high as 110 − 120 °C. Acoustic measurements at frequencies of 200 Hz − 20 kHz indicate that the FEP layer systems are suitable for applications at least in this range.
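For reference, the electromechanical coupling factors quoted above can be related to the resonance frequency fr and anti-resonance frequency fa of the thickness mode by the standard textbook relation below (an addition for the reader; it is not necessarily the exact evaluation procedure used in this work):

```latex
k_t^2 = \frac{\pi}{2}\,\frac{f_r}{f_a}\,
        \tan\!\left(\frac{\pi}{2}\,\frac{f_a - f_r}{f_a}\right)
```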
Left peripheral focus
(2010)
In Czech, German, and many other languages, part of the semantic focus
of the utterance can be moved to the left periphery of the clause. The main generalization is that only the leftmost accented part of the semantic focus can be moved. We propose that movement to the left periphery is generally triggered by an unspecific edge feature of C (Chomsky 2008) and that its restrictions can be attributed to requirements of cyclic linearization, modifying the theory of cyclic linearization developed by Fox and Pesetsky (2005). The crucial assumption is that structural accent is a direct consequence of being linearized at merge; it is thus indirectly relevant for (locality restrictions on) movement. The absence of structural accent correlates with givenness. Given elements may later receive (topic or contrastive) accents, which accounts for fronting in multiple focus/contrastive topic constructions. Without any additional assumptions, the model can account for movement of pragmatically unmarked elements to the left periphery ('formal fronting', Frey 2005). Crucially, the analysis makes no reference at all to concepts of information structure in the syntax, in line with the claim of Chomsky (2008) that UG specifies no direct link between syntax and information structure.
Reviewed work: Stephan Dörschel: Fritz Wisten : bis zum letzten Augenblick : ein jüdisches Theaterleben. - Berlin : Hentrich & Hentrich, 2009. - 112 pp. (Jüdische Miniaturen ; 74) ISBN 978-3-938485-85-9
Reviewed work: Schwartz, Yigal: Maamin beli Kenessija : 4 Massot al Aharon Appelfeld. - Tel Aviv : Dvir, 2009. - 181 pp.
Between 2002 and 2006, the Colombian government of Álvaro Uribe enjoyed broad international support in handling a demobilization process of right-wing paramilitary groups, along with the implementation of transitional justice policies such as penal prosecutions and the creation of a National Commission for Reparation and Reconciliation (NCRR) to address justice, truth and reparation for victims of paramilitary violence. The demobilization process began when, in 2002, the United Self-Defense Forces of Colombia (Autodefensas Unidas de Colombia, AUC) agreed to participate in a government-sponsored demobilization process. Paramilitary groups were responsible for the vast majority of human rights violations over a period of more than 30 years. The government designed a special legal framework that envisaged great leniency for paramilitaries who had committed serious crimes, as well as reparations for victims of paramilitary violence. More than 30,000 paramilitaries demobilized under this process between January 2003 and August 2006. Law 975, also known as the "Justice and Peace Law", and Decree 128 have served as the legal framework for the demobilization and prosecution of paramilitaries. It has offered the prospect of reduced sentences to demobilized paramilitaries who committed crimes against humanity in exchange for full confessions of crimes, restitution of illegally obtained assets, the release of child soldiers and of kidnapped victims, and has also provided reparations for victims of paramilitary violence. The Colombian demobilization process presents an atypical case of transitional justice. Many observers have even questioned whether Colombia can be considered a case of transitional justice at all. Transitional justice measures are usually taken up after the change of an authoritarian regime or at a post-conflict stage. The particularity of the Colombian case, however, is that transitional justice policies were introduced while the conflict still raged. In this sense, the Colombian case expresses one of the key tensions to be addressed: that between offering incentives to perpetrators to disarm and demobilize in order to prevent future crimes, and providing an adequate response to the human rights violations perpetrated throughout the course of an internal conflict. In particular, disarmament, demobilization and reintegration processes require a fine balance between the immunity guarantees offered to ex-combatants and the pursuit of accountability for their crimes. International law provides the legal framework defining the rights to justice, truth and reparations for victims and the corresponding obligations of the State, but peace negotiations and conflicted political structures do not always allow for the fulfillment of those rights. Thus, the aim of this article is to analyze what kind of transition may be occurring in Colombia by focusing on the role that transitional justice mechanisms may play in political negotiations between the Colombian government and paramilitary groups. In particular, it seeks to address to what extent such processes contribute to or hinder the achievement of a balance between peacebuilding and accountability, and thus facilitate a real transitional process.
Within our research group Bayesian Risk Solutions we have coined the idea of Bayesian Risk Management (BRM). It calls for (1) a more transparent and diligent data analysis as well as (2) an open-minded incorporation of human expertise in risk management. In this dissertation we formalize a framework for BRM based on the two pillars of Hardcore-Bayesianism (HCB) and Softcore-Bayesianism (SCB), providing solutions for the two claims above. For data analysis we favor Bayesian statistics with its Markov chain Monte Carlo (MCMC) simulation algorithm. It provides a full illustration of data-induced uncertainty beyond classical point estimates. We calibrate twelve different stochastic processes to four years of CO2 price data. In addition, we calculate derived risk measures (ex-ante/ex-post value-at-risk, capital charges, option prices) and compare them to their classical counterparts. When statistics fails because of a lack of reliable data, we propose our integrated Bayesian Risk Analysis (iBRA) concept. It is a basic guideline for an expertise-driven quantification of critical risks. We additionally review elicitation techniques and tools supporting experts in expressing their uncertainty. Unfortunately, Bayesian thinking is often blamed for its arbitrariness. Therefore, we introduce the idea of a Bayesian due diligence, judging expert assessments according to their information content and their inter-subjectivity.
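To make the MCMC-based calibration concrete, the sketch below fits the drift and volatility of a simple Gaussian returns model by random-walk Metropolis sampling. It is purely illustrative: the model, priors, proposal scales and synthetic data are assumptions, not the dissertation's actual setup, which calibrates twelve stochastic processes to CO2 price data.

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic daily log-returns standing in for price data (assumption).
returns = rng.normal(loc=0.0002, scale=0.02, size=1000)

def log_posterior(mu: float, sigma: float) -> float:
    """Gaussian likelihood of the returns plus weakly informative priors."""
    if sigma <= 0.0:
        return -np.inf
    loglik = np.sum(-0.5 * ((returns - mu) / sigma) ** 2) \
             - len(returns) * np.log(sigma)
    logprior = -0.5 * (mu / 0.01) ** 2 - 0.5 * np.log(sigma / 0.02) ** 2
    return loglik + logprior

# Random-walk Metropolis: propose new parameters, accept with probability
# min(1, posterior ratio); the chain's samples approximate the posterior.
samples = []
mu, sigma = 0.0, 0.01
lp = log_posterior(mu, sigma)
for _ in range(20000):
    mu_prop = mu + rng.normal(scale=1e-4)
    sigma_prop = sigma + rng.normal(scale=5e-4)
    lp_prop = log_posterior(mu_prop, sigma_prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        mu, sigma, lp = mu_prop, sigma_prop, lp_prop
    samples.append((mu, sigma))

posterior = np.array(samples[5000:])  # discard burn-in
print("posterior means (mu, sigma):", posterior.mean(axis=0))
```

The point of working with the full posterior sample, rather than a single point estimate, is that derived risk measures such as value-at-risk or capital charges then carry the data-induced uncertainty with them.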
Coupling of the electrical, mechanical and optical response in polymer/liquid-crystal composites
(2010)
Micrometer-sized liquid-crystal (LC) droplets embedded in a polymer matrix may enable optical switching in the composite film through the alignment of the LC director along an external electric field. When a ferroelectric material is used as host polymer, the electric field generated by the piezoelectric effect can orient the director of the LC under an applied mechanical stress, making these materials interesting candidates for piezo-optical devices. In this work, polymer-dispersed liquid crystals (PDLCs) are prepared from poly(vinylidene fluoride-trifluoroethylene) (P(VDF-TrFE)) and a nematic liquid crystal (LC). The anchoring effect is studied by means of dielectric relaxation spectroscopy. Two dispersion regions are observed in the dielectric spectra of the pure P(VDF-TrFE) film. They are related to the glass transition and to a charge-carrier relaxation, respectively. In PDLC films containing 10 and 60 wt% LC, an additional, bias-field-dependent relaxation peak is found that can be attributed to the motion of LC molecules. Due to the anchoring effect of the LC molecules, this relaxation process is slowed down considerably, when compared with the related process in the pure LC. The electro-optical and piezo-optical behavior of PDLC films containing 10 and 60 wt% LCs is investigated. In addition to the refractive-index mismatch between the polymer matrix and the LC molecules, the interaction between the polymer dipoles and the LC molecules at the droplet interface influences the light-scattering behavior of the PDLC films. For the first time, it was shown that the electric field generated by the application of a mechanical stress may lead to changes in the transmittance of a PDLC film. Such a piezo-optical PDLC material may be useful e.g. in sensing and visualization applications. Compared to a non-polar matrix polymer, the polar matrix polymer exhibits a strong interaction with the LC molecules at the polymer/LC interface which affects the electro-optical effect of the PDLC films and prevents a larger increase in optical transmission.
We introduce a simple approach that extends the input language of Answer Set Programming (ASP) systems by multi-valued propositions. Our approach is implemented as a (prototypical) preprocessor translating logic programs with multi-valued propositions into logic programs with Boolean propositions only. Our translation is modular and benefits heavily from the expressive input language of ASP. The resulting approach, along with its implementation, allows for solving interesting constraint satisfaction problems in ASP, showing good performance.
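The abstract does not show the preprocessor's encoding, but one common way to compile a multi-valued proposition into Boolean ASP is via a cardinality (choice) rule stating that exactly one value holds. A hypothetical sketch of such a translation step:

```python
def translate_multivalued(prop: str, domain: list[str]) -> str:
    """Compile a multi-valued proposition into Boolean atoms prop_v,
    with an ASP choice rule enforcing that exactly one value is assigned."""
    atoms = "; ".join(f"{prop}_{v}" for v in domain)
    return f"1 {{ {atoms} }} 1."

# Example: a proposition 'light' ranging over three values.
print(translate_multivalued("light", ["red", "yellow", "green"]))
# -> 1 { light_red; light_yellow; light_green } 1.
```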
Pattern matching is a well-established concept in the functional programming community. It provides the means for concisely identifying and destructuring values of interest. This enables a clean separation of data structures and the respective functionality, as well as dispatching functionality based on more than a single value. Unfortunately, expressive pattern matching facilities are seldom incorporated in present object-oriented programming languages. We present a seamless integration of pattern matching facilities in an object-oriented and dynamically typed programming language: Newspeak. We describe language extensions to improve the practicability and integrate our additions with the existing programming environment for Newspeak. This report is based on the first author's master's thesis.
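Newspeak syntax is not reproduced here; as a language-neutral analogy (an illustration only, not the authors' design), the kind of facility described, concise destructuring plus dispatch on more than a single value, looks like this in Python's structural pattern matching (Python 3.10+):

```python
def classify(shape) -> str:
    """Destructure a tagged tuple and dispatch on several values at once."""
    match shape:
        case ("circle", radius):
            return f"circle with radius {radius}"
        case ("rect", w, h) if w == h:  # guard inspects two values together
            return f"square with side {w}"
        case ("rect", w, h):
            return f"rectangle {w} x {h}"
        case _:
            return "unknown shape"

print(classify(("rect", 3, 3)))  # -> square with side 3
```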
Nanofibrous mats are interesting scaffold materials for biomedical applications like tissue engineering due to their interconnectivity and their size dimensions, which mimic the native cell environment. Electrospinning provides a simple route to such fiber meshes. This thesis addresses the structural and functional control of electrospun fiber mats. In the first section, it is shown that fiber meshes with a bimodal size distribution can be obtained in a single-step electrospinning process. A standard single-syringe set-up was used to spin concentrated poly(ε-caprolactone) (PCL) and poly(lactic-co-glycolic acid) (PLGA) solutions in chloroform, and meshes with a bimodal fiber-size distribution could be obtained directly by reducing the spinning rate at elevated humidity. Scanning electron microscopy (SEM) and mercury porosimetry of the meshes suggested a pore size distribution suitable for effective cell infiltration. The bimodal fiber meshes, together with unimodal fiber meshes, were evaluated for cellular infiltration. While the micrometer fibers in the mixed meshes generate an open pore structure, the submicrometer fibers support cell adhesion and facilitate cell bridging of the large pores. This was revealed by initial cell penetration studies, which showed superior ingrowth of epithelial cells into the bimodal meshes compared to a mesh composed of unimodal 1.5 μm fibers. The bimodal fiber meshes, together with electrospun nano- and microfiber meshes, were further used for the inorganic/organic hybrid fabrication of PCL with calcium carbonate or calcium phosphate, two biorelevant minerals. Such composite structures are attractive for the potential improvement of properties such as stiffness or bioactivity. It was possible to encapsulate nano- and mixed-sized plasma-treated PCL meshes over areas > 1 mm² with calcium carbonate using three different mineralization methods, including the use of poly(acrylic acid). The additive appeared useful in stabilizing amorphous calcium carbonate to effectively fill the space between the electrospun fibers, resulting in composite structures. Micro-, nano- and mixed-sized fiber meshes were successfully coated within hours by fiber-directed crystallization of calcium phosphate using a ten-times concentrated simulated body fluid. It was shown that nanofibers accelerated the calcium phosphate crystallization compared to microfibers. In addition, crystallizations performed under static conditions led to hydroxyapatite formation, whereas under dynamic conditions brushite coexisted. In the second section, nanofiber functionalization strategies are investigated. First, a one-step process was introduced in which a peptide-polymer conjugate (PLLA-b-CGGRGDS) was co-spun with PLGA in such a way that the peptide is enriched at the surface. It was shown that by adding methanol to the chloroform/blend solution, a dramatic increase of the peptide concentration at the fiber surface could be achieved, as determined by X-ray photoelectron spectroscopy (XPS). Peptide accessibility was demonstrated via a contact angle comparison of pure PLGA and RGD-functionalized fiber meshes. In addition, the electrostatic attraction between an RGD-functionalized fiber and a silica bead at pH ~ 4 confirmed the accessibility of the peptide. The bioactivity of these RGD-functionalized fiber meshes was demonstrated using blends containing 18 wt% bioconjugate. These meshes promoted the adhesion of fibroblasts compared to pure PLGA meshes.
In a second functionalization approach, a modular strategy was investigated. In a single step, reactive fiber meshes were fabricated and then functionalized with bioactive molecules. While the electrospinning of the pure reactive polymer poly(pentafluorophenyl methacrylate) (PPFPMA) was feasible, the inherent brittleness of PPFPMA made it necessary to spin a PCL blend. Blends and pure PPFPMA showed two-step functionalization kinetics. An initial fast reaction of the pentafluorophenyl esters with aminoethanol as a model substance was followed by a slow conversion upon further hydrophilization. This was analysed by UV/Vis spectroscopy of the pentafluorophenol released upon nucleophilic substitution with the amines. The conversion was confirmed by the increased hydrophilicity of the resulting meshes. The PCL/PPFPMA fiber meshes were then used for functionalization with more complex molecules such as saccharides. Amino-functionalized D-mannose or D-galactose was reacted with the active pentafluorophenyl esters, as followed by UV/Vis spectroscopy and XPS. The functionality was shown to be bioactive using macrophage cell culture. The meshes functionalized with D-mannose specifically stimulated the cytokine production of macrophages when lipopolysaccharides were added. This was in contrast to D-galactose- or aminoethanol-functionalized and unfunctionalized PCL/PPFPMA fiber mats.
The difference-list technique is described in the literature as an effective method for extending lists to the right without using calls of append/3. There exist some proposals for the automatic transformation of list programs into difference-list programs. However, we are interested in the construction of difference-list programs by the programmer, avoiding the need for a transformation step. In [GG09] it was demonstrated how left-recursive procedures with a dangling call of append/3 can be transformed into right-recursion using the unfolding technique. To simplify the writing of difference-list programs, a new cons/2 procedure was introduced. In the present paper, we investigate how efficiency is influenced by using cons/2. We measure the efficiency of procedures using the accumulator technique, cons/2, DCGs, and difference lists, and compute the resulting speedup with respect to the simple procedure definition using append/3. Four Prolog systems were investigated, and we found different behaviour concerning the speedup achieved by difference lists. One result of our investigations is that a piece of advice often given in the literature for avoiding calls of append/3 could not be confirmed in this strong formulation.
Preface
(2010)
The workshops on (constraint) logic programming (WLP) are the annual meeting of the Society of Logic Programming (GLP e.V.) and bring together researchers interested in logic programming, constraint programming, and related areas like databases, artificial intelligence and operations research. In this decade, previous workshops took place in Dresden (2008), Würzburg (2007), Vienna (2006), Ulm (2005), Potsdam (2004), Dresden (2002), Kiel (2001), and Würzburg (2000). Contributions to the workshops deal with all theoretical, experimental, and application aspects of constraint programming (CP) and logic programming (LP), including the foundations of constraint/logic programming. Some of the special topics are constraint solving and optimization, extensions of functional logic programming, deductive databases, data mining, nonmonotonic reasoning, interaction of CP/LP with other formalisms like agents, XML, JAVA, program analysis, program transformation, program verification, meta programming, parallelism and concurrency, answer set programming, implementation and software techniques (e.g., types, modularity, design patterns), applications (e.g., in production, environment, education, internet), constraint/logic programming for semantic web systems and applications, reasoning on the semantic web, data modelling for the web, semistructured data, and web query languages.
The correctness of model transformations is a crucial element for the model-driven engineering of high-quality software. A prerequisite for verifying model transformations at the level of the model transformation specification is that an unambiguous formal semantics exists and that the employed implementation of the model transformation language adheres to this semantics. However, for existing relational model transformation approaches it is usually not clear under which constraints particular implementations really conform to the formal semantics. In this paper, we bridge this gap for the formal semantics of triple graph grammars (TGG) and an existing efficient implementation. Whereas the formal semantics assumes backtracking and ignores non-determinism, practical implementations do not support backtracking, require rule sets that ensure determinism, and include further optimizations. Therefore, we capture how the considered TGG implementation realizes the transformation by means of operational rules, define the required criteria, and show conformance to the formal semantics if these criteria are fulfilled. We further outline how static analysis can be employed to guarantee these criteria.