The seismicity of the Dead Sea fault zone (DSFZ) during the last two millennia is characterized by a number of damaging and partly devastating earthquakes. These events pose a considerable seismic hazard and seismic risk to Syria, Lebanon, Palestine, Jordan, and Israel. The occurrence rates of large earthquakes along the DSFZ show indications of temporal changes over the long term. The aim of this thesis is to determine whether, and how, the occurrence rates of large earthquakes (Mw ≥ 6) in different parts of the DSFZ are time-dependent. The results are applied to probabilistic seismic hazard assessments (PSHA) in the DSFZ and neighboring areas. To this end, four time-dependent statistical models (distributions), namely the Weibull, Gamma, Lognormal and Brownian Passage Time (BPT) distributions, are applied alongside the exponential distribution (Poisson process) as the classical time-independent model. To determine whether the earthquake occurrence rate follows a unimodal or a multimodal form, a nonparametric bootstrap test of multimodality was performed. A modified method of weighted Maximum Likelihood Estimation (MLE) is applied to estimate the parameters of the models. For the multimodal cases, an Expectation Maximization (EM) method is used in addition to the MLE method. The best model is selected by two methods: the Bayesian Information Criterion (BIC) and a modified Kolmogorov-Smirnov goodness-of-fit test. Finally, the confidence intervals of the estimated parameters of the candidate models are calculated using bootstrap confidence sets. In this thesis, earthquakes with Mw ≥ 6 along the DSFZ, within a zone about 20 km wide and inside 29.5° ≤ latitude ≤ 37°, are considered as the dataset. The completeness of this dataset is established back to 300 A.D. The DSFZ has been divided into three subzones: the southern, the central and the northern subzone.
The central and the northern subzones have been investigated, but not the southern subzone, because of the lack of sufficient data. The results for the central part of the DSFZ show that the earthquake occurrence rate does not significantly follow a multimodal form. There is also no considerable difference between the time-dependent and time-independent models. Since the time-independent model is easier to interpret, the earthquake occurrence rate in this subzone has been estimated under the exponential distribution assumption (Poisson process) and is considered time-independent at 9.72 × 10^-3 events/year. The northern part of the DSFZ is a special case, where the last earthquake occurred in 1872 (about 137 years ago). However, the mean recurrence time of Mw ≥ 6 events in this area is about 51 years. Moreover, about 96 percent of the observed earthquake inter-event times (the time between two successive earthquakes) in the dataset for this subzone are smaller than 137 years. It is therefore a zone with an overdue earthquake. The results for this subzone verify that the earthquake occurrence rate is strongly time-dependent, especially shortly after an earthquake occurrence. A bimodal Weibull-Weibull model has been selected as the best fit for this subzone. The earthquake occurrence rate corresponding to the selected model is a smooth function of time and reveals two clusters within the time after an earthquake occurrence. The first cluster begins right after an earthquake occurrence, lasts about 80 years, and is explicitly time-dependent. The occurrence rate in this cluster is considerably lower right after an earthquake occurrence, increases strongly during the following ten years to reach its maximum of about 0.024 events/year, and then decreases over the next 70 years to its minimum of about 0.0145 events/year.
The second cluster begins 80 years after an earthquake occurrence and lasts until the next earthquake occurs. The earthquake occurrence rate corresponding to this cluster increases extremely slowly, such that it can be considered an almost constant rate of about 0.015 events/year. The results are applied to calculate the time-dependent PSHA in the northern part of the DSFZ and neighboring areas.
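The model-selection workflow described above (maximum-likelihood fits of candidate renewal distributions, compared by BIC) can be sketched in a few lines. This is a minimal illustration, not the thesis method: the inter-event times are invented placeholders rather than the DSFZ catalogue, and only the exponential and Weibull candidates are shown.

```python
# Sketch: exponential (time-independent) vs. Weibull (time-dependent)
# renewal models for earthquake inter-event times, compared by BIC.
import math
import numpy as np
from scipy import stats

# Hypothetical inter-event times in years (NOT the DSFZ dataset).
times = np.array([12.0, 35.0, 51.0, 48.0, 70.0, 22.0, 95.0, 40.0])

# Exponential (Poisson process): closed-form MLE, rate = 1/mean.
rate = 1.0 / times.mean()
loglik_exp = np.sum(stats.expon.logpdf(times, scale=1.0 / rate))
bic_exp = -2.0 * loglik_exp + 1 * math.log(len(times))  # 1 free parameter

# Weibull: numerical MLE with location fixed at 0 (waiting times).
shape, _, scale = stats.weibull_min.fit(times, floc=0)
loglik_wbl = np.sum(stats.weibull_min.logpdf(times, shape, 0, scale))
bic_wbl = -2.0 * loglik_wbl + 2 * math.log(len(times))  # 2 free parameters

# Lower BIC wins; a Weibull shape near 1 degenerates to the
# exponential, i.e. no evidence for time dependence.
print(f"exponential rate: {rate:.4f} events/year, BIC: {bic_exp:.2f}")
print(f"Weibull shape:    {shape:.3f}, BIC: {bic_wbl:.2f}")
```

The same pattern extends to the Gamma, Lognormal and BPT candidates by swapping in the corresponding `scipy.stats` distributions.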
What is visualization?
(2011)
Over the last 20 years, information visualization has become a common tool in science and a growing presence in the arts and culture at large. However, the use of visualization in cultural research is still in its infancy. Based on the work in the analysis of video games, cinema, TV, animation, Manga and other media carried out in the Software Studies Initiative at the University of California, San Diego over the last two years, a number of visualization techniques and methods particularly useful for cultural and media research are presented.
This co-authored paper is based on research that originated in 2003, when our team started a series of extensive field studies into the character of gameplay experiences. Working originally within the Children as the Actors of Game Cultures research project, we aimed to better understand why young people in particular enjoy playing games, while also asking their parents how they perceive gaming as playing partners or as close observers. Gradually our in-depth interviews started to reveal a complex picture of more general relevance, where personal experiences, social contexts and cultural practices all came together to frame gameplay within something we called game cultures. Culture was the keyword, since we were not interested in studying games and play experiences in isolation, but rather as part of the rich meaning-making practices of lived reality.
This paper addresses a theoretical reconfiguration of experience, a repositioning of the techno-social within the domains of mobility, games, and play, and embodiment. The ideas aim to counter the notion that our experience with videogames (and digital media more generally) is largely “virtual” and disembodied – or at most exclusively audiovisual. Notions of the virtual and disembodied support an often-tacit belief that technologically mediated experiences count for nothing if not perceived and valued as human. It is here where play in particular can be put to work, be made to highlight and clarify, for it is in play that we find this value of humanity most wholly embodied. Further, it is in considering the design of the metagame that questions regarding the play experience can be most powerfully engaged. While most of any given game’s metagame emerges from play communities and their larger social worlds (putting it out of reach of game design proper), mobile platforms have the potential to enable a stitching together of these experiences: experiences held across time, space, communities, and bodies. This coming together thus represents a convergence not only of media, participants, contexts, and technologies, but of human experience itself. This coming together is hardly neat, nor fully realized. It is, if nothing else, multifaceted and worthy of further study. It is a convergence in which the dynamics of screen play are reengaged.
Define real, Moron!
(2011)
Academic language should not be a ghetto dialect at odds with ordinary language, but rather an extension that is compatible with lay-language. To define ‘game’ with the unrealistic ambition of satisfying both lay-people and experts should not be a major concern for a game ontology, since the field it addresses is subject to cultural evolution and diachronic change. Instead of the impossible mission of turning the common word into an analytic concept, a useful task for an ontology of games is to model game differences, to show how the things we call games can be different from each other in a number of different ways.
Space is understood best through movement, and complex spaces require not only movement but navigation. The theorization of navigable space requires a conceptual representation of space which is adaptable to the great malleability of video game spaces, a malleability which allows for designs which combine spaces with differing dimensionality and even involve non-Euclidean configurations with contingent connectivity. This essay attempts to describe the structural elements of video game space and to define them in such a way so as to make them applicable to all video game spaces, including potential ones still undiscovered, and to provide analytical tools for their comparison and examination. Along with the consideration of space, there will be a brief discussion of navigational logic, which arises from detectable regularities in a spatial structure that allow players to understand and form expectations regarding a game’s spaces.
The present thesis was born and evolved within the RAdial Velocity Experiment (RAVE), with the goals of measuring chemical abundances from the RAVE spectra and of exploiting them to investigate the chemical gradients along the plane of the Galaxy, providing constraints on possible Galactic formation scenarios. RAVE is a large spectroscopic survey which aims to observe ~10^6 stars spectroscopically by the end of 2012, measuring their radial velocities, atmospheric parameters and chemical abundances. The project makes use of the UK Schmidt telescope at the Australian Astronomical Observatory (AAO) in Siding Spring, Australia, equipped with the multiobject spectrograph 6dF. To date, RAVE has collected and measured more than 450,000 spectra. The precision of the chemical abundance estimations depends on the reliability of the atomic and atmospheric parameters adopted (in particular the oscillator strengths of the absorption lines and the effective temperature, gravity, and metallicity of the stars measured). Therefore we first identified 604 absorption lines in the RAVE wavelength range and refined their oscillator strengths with an inverse spectral analysis. Then, we improved the RAVE stellar parameters by modifying the RAVE pipeline and the spectral library the pipeline relies on. The modifications removed some systematic errors in stellar parameters discovered during this work. To obtain chemical abundances, we developed two different processing pipelines. Both of them measure chemical abundances by assuming stellar atmospheres in Local Thermodynamic Equilibrium (LTE). The first one determines elemental abundances from equivalent widths of absorption lines. Since this pipeline showed poor sensitivity to abundances relative to iron, it has been superseded. The second one exploits chi^2 minimization between observed and model spectra. Thanks to its precision, it has been adopted for the creation of the RAVE chemical catalogue.
This pipeline provides abundances with uncertainties of about 0.2 dex for spectra with signal-to-noise ratio S/N > 40 and about 0.3 dex for spectra with 20 < S/N < 40. For this work, the pipeline measured chemical abundances of up to 7 elements for 217,358 RAVE stars. With these data we investigated the chemical gradients along the Galactic radius of the Milky Way. We found that stars with low vertical velocities |W| (which stay close to the Galactic plane) show an iron abundance gradient in agreement with previous works (~ -0.07 dex kpc^-1), whereas stars with larger |W|, which are able to reach larger heights above the Galactic plane, show progressively flatter gradients. The gradients of the other elements follow the same trend. This suggests that an efficient radial mixing acts in the Galaxy or that the thick disk formed from homogeneous interstellar matter. In particular, we found hundreds of stars which can be kinematically classified as thick disk stars but exhibit a chemical composition typical of the thin disk. A few stars of this kind have already been detected by other authors, and their origin is still not clear. One possibility is that they are thin disk stars that were kinematically heated and then underwent an efficient radial mixing process which blurred (and so flattened) the gradient. Alternatively, they may be a "transition population" which represents an evolutionary bridge between the thin and thick disks. Our analysis shows that the two explanations are not mutually exclusive. Future follow-up high resolution spectroscopic observations will clarify their role in the Galactic disk evolution.
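The chi^2 minimization idea behind the adopted pipeline can be illustrated with a toy model: a single Gaussian absorption line whose depth grows with a trial abundance parameter, matched to a synthetic noisy observation by grid search. Every number here (line position, width, depth law, noise level) is invented for illustration; the real pipeline compares full model spectra to the observations.

```python
# Toy chi^2 fit of an "abundance" parameter to a synthetic spectrum.
import numpy as np

wave = np.linspace(8400.0, 8420.0, 200)  # wavelength grid (Angstrom)

def model_spectrum(abundance):
    # Line depth grows with abundance (invented toy curve of growth).
    depth = 1.0 - np.exp(-abundance)
    return 1.0 - depth * np.exp(-0.5 * ((wave - 8410.0) / 0.8) ** 2)

# Synthetic "observed" spectrum: model plus Gaussian pixel noise.
rng = np.random.default_rng(42)
true_abundance = 0.7
sigma = 0.02  # per-pixel noise, roughly S/N ~ 50
observed = model_spectrum(true_abundance) + rng.normal(0.0, sigma, wave.size)

# chi^2 minimization over a grid of trial abundances.
grid = np.linspace(0.05, 2.0, 400)
chi2 = np.array([np.sum((observed - model_spectrum(a)) ** 2) / sigma**2
                 for a in grid])
best = grid[np.argmin(chi2)]
print(f"recovered abundance: {best:.3f} (true: {true_abundance})")
```

In practice one would minimize over a library of precomputed LTE model spectra rather than a one-parameter toy model, but the selection criterion is the same.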
Ghrelin is a unique hunger-inducing stomach-borne hormone. It activates orexigenic circuits in the central nervous system (CNS) when acylated with a fatty acid residue by the Ghrelin O-acyltransferase (GOAT). Soon after the discovery of ghrelin a theoretical model emerged which suggests that the gastric peptide ghrelin is the first “meal initiation molecule
The Arctic is a particularly sensitive area with respect to climate change due to the high surface albedo of snow and ice and the extreme radiative conditions. Clouds and aerosols, as parts of the Arctic atmosphere, play an important role in the radiation budget, which is, as yet, poorly quantified and understood. The LIDAR (Light Detection And Ranging) measurements presented in this PhD thesis contribute continuous altitude-resolved aerosol profiles to the understanding of the occurrence and characteristics of aerosol layers above Ny-Ålesund, Spitsbergen. Attention was focused on the analysis of periods with high aerosol load. As the Arctic spring troposphere exhibits maximum aerosol optical depths (AODs) each year, March and April of both 2007 and 2009 were analyzed. Furthermore, stratospheric aerosol layers of volcanic origin were analyzed for several months following the eruptions of the Kasatochi and Sarychev volcanoes in summer 2008 and 2009, respectively. The Koldewey Aerosol Raman LIDAR (KARL) is an instrument for the active remote sensing of atmospheric parameters using pulsed laser radiation. It is operated at the AWIPEV research base and was fundamentally upgraded within the framework of this PhD project. It is now equipped with a new telescope mirror and new detection optics, which facilitate atmospheric profiling from 450 m above sea level up to the mid-stratosphere. KARL provides highly resolved profiles of the scattering characteristics of aerosol and cloud particles (backscattering, extinction and depolarization) as well as water vapor profiles within the lower troposphere. Combining KARL data with data from other instruments on site, namely radiosondes, sun photometer, Micro Pulse LIDAR, and tethersonde system, resulted in a comprehensive data set of scattering phenomena in the Arctic atmosphere.
The two spring periods, March and April 2007 and 2009, were first analyzed based on meteorological parameters such as local temperature and relative humidity profiles, large-scale pressure patterns and air mass origin regions. Here, it was not possible to find a clear correlation between enhanced AOD and air mass origin. However, in a comparison of two cloud-free periods in March 2007 and April 2009, large AOD values in 2009 coincided with air mass transport through the central Arctic. This suggests the occurrence of aerosol transformation processes during the aerosol transport to Ny-Ålesund. Measurements on 4 April 2009 revealed maximum AOD values of up to 0.12 and aerosol size distributions changing with altitude. This and other case studies suggest a differentiation between three aerosol event types and their origin: vertically limited aerosol layers in dry air, highly variable hygroscopic boundary layer aerosols, and enhanced aerosol load across wide portions of the troposphere. For the spring period 2007, the available KARL data were statistically analyzed using a characterization scheme based on the optical characteristics of the scattering particles. The scheme was validated using several case studies. Volcanic eruptions in the northern hemisphere in August 2008 and June 2009 provided the opportunity to analyze volcanic aerosol layers within the stratosphere. The rate of stratospheric AOD change was similar in both years, with maximum values above 0.1 about three to five weeks after the respective eruption. In both years, the stratospheric AOD persisted at higher values than usual until the measurements were stopped in late September for technical reasons. In 2008, up to three aerosol layers were detected; the layer structure in 2009 was characterized by up to six distinct and thin layers, which smeared out into one broad layer after about two months. The lowermost aerosol layer was continuously detected at the tropopause altitude.
Three case studies were performed; all revealed rather large indices of refraction of m = (1.53–1.55) - 0.02i, suggesting the presence of an absorbing carbonaceous component. The particle radius, derived with inversion calculations, was also similar in both years, with values ranging from 0.16 to 0.19 μm. However, in 2009, a second mode in the size distribution was detected at about 0.5 μm. The long-term measurements with the Koldewey Aerosol Raman LIDAR in Ny-Ålesund provide the opportunity to study Arctic aerosols in the troposphere and the stratosphere not only in case studies but on longer time scales. In this PhD thesis, both tropospheric aerosols in the Arctic spring and stratospheric aerosols following volcanic eruptions have been described qualitatively and quantitatively. Case studies and comparative studies with data of other instruments on site allowed for the analysis of microphysical aerosol characteristics and their temporal evolution.
In a very simplified view, plant leaf growth can be reduced to two processes, cell division and cell expansion, accompanied by expansion of the surrounding cell walls. The vacuole, the largest compartment of the plant cell, plays a major role in controlling the water balance of the plant. This is achieved by regulating the osmotic pressure, through import and export of solutes across the vacuolar membrane (the tonoplast), and by controlling the water channels, the aquaporins. Together with the control of cell wall relaxation, vacuolar osmotic pressure regulation is thought to play an important role in cell expansion, directly by providing cell volume and indirectly by providing ion and pH homeostasis for the cytoplasm. In this thesis the role of tonoplast protein coding genes in cell expansion in the model plant Arabidopsis thaliana is studied, and genes with a putative role in growth are identified. Since there is, to date, no clearly identified protein localization signal for the tonoplast, genome-wide prediction of proteins localized to this compartment is not possible. Thus, a series of recent proteomic studies of the tonoplast was used to compile a list of cross-membrane tonoplast protein coding genes (117 genes), and other growth-related genes, notably from the growth regulating factor (GRF) and expansin families, were included (26 genes). For these genes a platform for high-throughput reverse transcription quantitative real time polymerase chain reaction (RT-qPCR) was developed by selecting specific primer pairs. To this end, a software tool (called QuantPrime, see http://www.quantprime.de) was developed that automatically designs such primers and tests their specificity in silico against whole transcriptomes and genomes, to avoid cross-hybridizations causing unspecific amplification. The RT-qPCR platform was used in an expression study in order to identify candidate growth related genes.
Here, a growth-associative spatio-temporal leaf sampling strategy was used, targeting growing regions at high expansion developmental stages and comparing them to samples taken from non-expanding regions or stages of low expansion. Candidate growth related genes were identified after applying a template-based scoring analysis on the expression data, ranking the genes according to their association with leaf expansion. To analyze the functional involvement of these genes in leaf growth on a macroscopic scale, knockout mutants of the candidate growth related genes were screened for growth phenotypes. To this end, a system for non-invasive automated leaf growth phenotyping was established, based on a commercially available image capture and analysis system. A software package was developed for detailed developmental stage annotation of the images captured with the system, and an analysis pipeline was constructed for automated data pre-processing and statistical testing, including modeling and graph generation, for various growth-related phenotypes. Using this system, 24 knockout mutant lines were analyzed, and significant growth phenotypes were found for five different genes.
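The template-based scoring step can be sketched as follows: each gene's expression profile across leaf samples is compared against an idealized "growth template" (high in expanding tissue, low elsewhere), and genes are ranked by the resulting score. This is a hedged illustration only; the gene names, sample labels and expression values are invented, and the thesis's actual scoring function may differ from the simple Pearson correlation used here.

```python
# Sketch of template-based scoring: rank genes by correlation of
# their expression profile with an idealized leaf-growth template.
import numpy as np

samples = ["tip_young", "base_young", "tip_mature", "base_mature"]
# Template: 1 where leaf expansion is expected, 0 where it is not.
template = np.array([1.0, 1.0, 0.0, 0.0])

# Invented expression profiles (e.g. normalized RT-qPCR values).
expression = {
    "TIP1;1_like": np.array([8.2, 7.9, 3.1, 2.8]),   # tracks growth
    "EXPA_like":   np.array([6.5, 6.9, 6.4, 6.6]),   # flat profile
    "STRESS_like": np.array([2.0, 2.2, 7.5, 7.9]),   # anti-correlated
}

def score(profile):
    # Pearson correlation between a gene's profile and the template.
    return float(np.corrcoef(profile, template)[0, 1])

# Rank genes by association with leaf expansion, best first.
ranking = sorted(expression, key=lambda g: score(expression[g]), reverse=True)
for gene in ranking:
    print(f"{gene}: {score(expression[gene]):+.2f}")
```

Top-ranked genes under such a scheme become the candidates whose knockout mutants are then screened for growth phenotypes.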
New survey data for a panel of Polish firms are used to estimate employment and wage adjustments under various forms of ownership (insider vs. outsider) and asymmetric responses to exogenous shocks. In contrast to earlier studies, dynamic panel data estimators (GMM) allow for endogeneity of observed variables and partial adjustment to shocks. Results differ from other findings in the transition literature: wages have little effect on dynamic labor demand and the firm-size wage effect is confirmed. Firms that expand employment have to pay significantly larger wage increases, and rising sales add little to employment, suggesting labor hoarding. Declining sales, however, significantly reduce employment, and privatization (or anticipation thereof) has the expected benefits.
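The reason dynamic panel models need GMM-type estimators can be shown in a toy simulation: OLS on first differences of a lagged dependent variable is biased, while instrumenting the differenced lag with the second lag in levels (the Anderson-Hsiao estimator, a simple ancestor of the GMM estimators used in the paper) recovers the true coefficient. The data below are simulated, not the survey data.

```python
# Toy dynamic panel: y_it = rho * y_{i,t-1} + firm effect + noise.
import numpy as np

rng = np.random.default_rng(0)
n_firms, n_periods, rho = 2000, 10, 0.5

mu = rng.normal(0.0, 1.0, n_firms)        # unobserved firm effects
y = np.zeros((n_firms, n_periods))
for t in range(1, n_periods):
    y[:, t] = rho * y[:, t - 1] + mu + rng.normal(0.0, 1.0, n_firms)

# First differences remove the fixed effect mu_i.
dy = np.diff(y, axis=1)                   # Delta y_it
x = dy[:, 1:-1].ravel()                   # Delta y_{i,t-1}
yy = dy[:, 2:].ravel()                    # Delta y_it
z = y[:, 1:-2].ravel()                    # instrument: y_{i,t-2} in levels

rho_ols = (x @ yy) / (x @ x)              # biased: lag correlated with error
rho_iv = (z @ yy) / (z @ x)               # Anderson-Hsiao IV, consistent

print(f"true rho: {rho}, OLS on differences: {rho_ols:.3f}, IV: {rho_iv:.3f}")
```

Arellano-Bond GMM generalizes this by using all available deeper lags as instruments and weighting the resulting moment conditions optimally.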
In socialist economies firms provided various social benefits, such as child care, health care, food subsidies, housing etc. Using panel data from Bulgarian and Polish firms, this paper attempts to explain firm-specific provision of social benefits in the process of transition. We investigate empirically, with the help of qualitative response models, how ownership type and structure, firm size, profitability, change in management, foreign direct investment, wage and employment policies, union involvement and employee power have affected the provision of non-wage benefits.
Privatisation and ownership: the impact on firms in transition. Survey evidence from Bulgaria
(1999)
Previous papers in this Special Series have described in detail the theoretical background and development patterns, along with some empirical results, for the privatisation processes in Bulgaria and Poland. A range of issues have been raised which demand closer empirical investigation. For this purpose, the research group has developed questionnaire studies for Bulgaria and Poland. In Bulgaria, the National Statistical Institute (NSI) carried out the case studies between February and April 1998. The problems of the questionnaire set-up were identified in a pre-test study, but unlike the Polish case, they led to only minor differentiation. Since financial limitations prevented a larger sample size, a sample of 61 mid-sized and large Bulgarian enterprises was selected. Failure to respond was not a serious problem, unlike with the Polish questionnaire; this is because the NSI has maintained good links to the enterprise sector and management were prepared to give detailed answers, even on questions of their firms' financial status. However, as the Polish experience suggests, it has become obvious that the privatisation process is also associated with management's increasing reluctance to answer comparatively 'intimate' questions. Thus, future questionnaire studies must take a much higher rate of refusals into consideration. The pre-selection procedure in Bulgaria was determined by the project target, which sought to analyse the effects of the privatisation process on firms' behaviour during the transition process, and hence only firms which had already existed before the changes were included. For small and medium-sized enterprises (SMEs), most of which were founded after the changes, partly due to the legal processes of spontaneous privatisation, some empirical as well as analytical studies were carried out. Thus, the research group limited the scope of investigation to enterprises with more than 250 employees.
The underlying hypothesis is that employment problems are concentrated in larger firms, in particular amongst those still (partly) state owned. Because of the former ownership structures and the relatively slower capacity for management change, the assumption is that state-owned enterprises (SOEs) which have only recently been privatised might still have traditional links to government even after privatisation. On the one hand, SMEs are obviously more prone to, and linked with, market processes. As a result, they do not have the financial potential and incentives to follow job-hoarding strategies. On the other hand, there are almost no SMEs which are still state-owned. Hence, the prevailing opinion in the literature is that 'larger industrial firms were apt to be least efficient, most often producing inadequate and non-competitive products, with a high degree of under-utilisation of labour and most inflexible to change' (Jones & Nikolov 1997, p. 252). Thus, as mentioned above, though there may be some limitations with regard to firm representation, our sample characterises a number of enterprises that offer fertile ground for the analysis of firms' adjustment to the newly established market realities in a transition economy. Our study is unique in the sense that existing empirical studies on privatisation and enterprise restructuring generally cover the time period just before and after the initial stages of transition, e.g. 1988/89 to 1992. In those studies, samples of firms in the Czech Republic, Poland, Hungary and Bulgaria recognise that behavioural adaptations at the enterprise level had taken place just before the actual privatisation process materialised. Therefore, almost all of the firms under examination were still state-owned. The firms were usually divided according to their performance into 'good', 'average' and 'bad' enterprises.
The main findings of those early studies have shown that the macroeconomic adaptations (i.e., macro-level changes which induced micro-level adjustment by the firms), as well as emerging market structures, created enormous pressures which in turn influenced firms' economic behaviour, reallocation of resources and consequent restructuring. This evidence supports the hypothesis that the SOEs started restructuring and adjusting their behaviour and performance, in response to the harsh realities of more open markets, before privatisation actually started. In this paper, we seek to present some results on these developments in Bulgaria at the later stages of transition and privatisation (1992-1996). The aim of our questionnaire study is therefore to show the effects of the privatisation process and ownership on the behavioural adaptations of firms which had once been state-owned or continue to be owned by the state. The period under investigation is 1992 to 1996. For 1990 and 1991, the number of missing values is relatively high and, where relevant, we partly exclude these observations from our analysis. The paper contains seven sections. Section II outlines the macroeconomic environment in which our sample firms operate, provides some specifics of the Bulgarian privatisation process, and discusses data quality. Section III concentrates on the analysis of privatisation, the specific forms of ownership that resulted from it, and firm size. In Section IV, we describe the trends of the main economic variables within firms (such as employment, wages, labour productivity, etc.), and a number of proxies of firm viability, while Section V presents some regression results to corroborate the discussion of the previous section. Section VI gives an overview of survey results on the impact of enterprise-determined wage policy, trade union activity and membership, government control, and social benefits on enterprise restructuring. Section VII is a summary of our findings.
Privatisation in Central and Eastern Europe can be defined as the transfer of property rights from the State to private owners. The transfers are carried out so as to vest the new private owners with the full property rights of use and disposal over their property, these rights being guaranteed by the legal framework established by the rule of law. In Bulgaria, one can distinguish between three main stages in the process of privatisation. Each was shaped by the conflicting resolutions of frequently changing governments and meant to serve different political goals. The first stage (1990-1993) is characterised by the blockade of legal privatisation, as 'spontaneous privatisation' was accorded high priority. As in other former socialist countries, great emphasis was placed on the so-called commercialisation of state-owned enterprises. This did not involve the actual transfer of State property into private hands, but rather the independent transformation of state-owned enterprises into joint-stock companies, as well as the establishment of subsidiary companies. The goals of introducing more efficient structures and applying modern methods of production by transferring property to a more suitable management were not achieved. The second stage (1993-1995) is a cash privatisation, which laid the foundation for an employee/management buy-out, aided by the legal provisions granting concessions in the payment of instalments. The most important factor in the third stage of the process of privatisation in Bulgaria was the adoption of the mass privatisation model as an alternative method of procedure. In 1996, legal regulations for mass privatisation were introduced and a privatisation fund was established. In the meantime, the process has evolved into its fourth stage, during which a strategy of privatisation has been formulated under the supervision of a monetary council, and various agreements with the IMF and the World Bank are being adhered to.
Privatisation is the decisive factor in the structural reforms of East European countries. The problem of converting State property into more effective forms of property management has been exacerbated by the additional demand of carrying out the far-reaching structural changes as swiftly as possible. The expectation that a large part of State property would be privatised within a short time in Bulgaria has not been met, for a number of reasons. When the reforms began, the private sector was too weakly developed to become a catalyst for structural changes. Until 1995 there were no laws regulating the stock exchange or securities and bonds; the capital market was practically non-existent. Moreover, the various political parties could not agree upon the various models and objectives of privatisation. The population itself had no capital. The restitution of private ownership, which will not be discussed in further detail, was limited to the smallest businesses, traders and workshops. Furthermore, the Privatisation Agency and the State authorities employed to initiate the privatisation process lacked experience. Another problem hindering privatisation was that the laws passed lacked precision and were constantly subject to change.
The economy in Poland has changed tremendously in recent years. Agricultural enterprises can defend their market share only if they are able to adjust quickly and efficiently to new circumstances. The most effective strategy to cope with changing operating conditions is a strategy of permanent development of human resources. This strategy must embrace a constant improvement of professional entrepreneurial skills and of management structures within organizations. Only such a strategy will allow businesses to hold on to or to increase their market standing despite strong competition. It will also allow them to meet, for instance, the newly introduced standardisation procedures for goods produced and supplied. This challenge holds especially true for agricultural enterprises that operate in highly competitive markets; markets which are currently characterised by a permanent surplus of supply over demand and a great number of businesses, mainly of small or medium size. Demand in the agricultural market is exerted by millions of consumers, each with different consumption habits and idiosyncratic preferences. Agricultural producers as a group are extremely sensitive to any kind of change in their environment. This is especially true in the current transition period, when a worsening of economic conditions can be observed: an economic downturn caused by the price of inputs increasing at a faster rate than agricultural product prices, and an ineffective agricultural policy. One of the agricultural production factors which allows for quick adjustment to change, and which can thus be used to improve one’s market position, is the human factor. It is a well-known fact that a good level of professional skills, in combination with ongoing means of furthering and updating the professional qualifications of workers, can help to facilitate coping with market challenges.
The aim of this study is first to determine specific quality and quantity features of human resources in agricultural production, looking, inter alia, at changes in employment, specific employment structures and the number of recruitments and dismissals in a given period. A further aim is to undertake an efficiency analysis of limited partnerships which leased their agricultural real estate from the Agricultural Property Agency (APA) in the Voivodeship of Gorzów between 1995 and 1997. The first analysis was carried out using data collected from surveys amongst the owners of 36 privately owned farms and the managers of 14 limited partnerships. The data cover the period between 1994 and 1997. The incentive to conduct research on large farms in the Gorzów Voivodeship using the Data Envelopment Analysis (DEA) method lay in the outcome of various earlier studies on the financial standing of limited partnerships leasing real estate from the APA in the Gorzów Voivodeship in 1996 and 1997. Apart from documenting general adjustment processes, these inquiries proved that, in 1997, the economic condition of the farms analysed was worse than in 1996; the following ratios worsened: the financial support ratio, the liquidity ratio, the turnover ratio, the profitability ratio and the cost level ratio (see Świtłyk, 1998, 1999). These results determined the focus of our research, namely input efficiency in particular limited partnerships. The basis of our calculations was a research model consisting of efficiency measures focusing on firms’ inputs. The analysis was carried out on a sample of 90 firms in the years between 1995 and 1997 (30 firms per year). Further data were collected from national statistical office reports on incomes, costs and financial results (F-O1) and statistics about land usage, crop area and yields (R-O5). In the next section we briefly discuss privatisation in agriculture. Sections 3 and 4 present results from our survey.
Section 5 concludes.
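The input-oriented efficiency measures used in such DEA studies are computed by solving one linear program per decision-making unit. As an illustration only (this is my own minimal sketch of input-oriented CCR DEA with a toy two-firm example, not the study's actual model or data):

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0.

    X: (m inputs x n units), Y: (s outputs x n units).
    Solves: min theta  s.t.  X @ lam <= theta * x0,  Y @ lam >= y0,  lam >= 0.
    """
    m, n = X.shape
    s = Y.shape[0]
    # decision variables: [theta, lam_1, ..., lam_n]
    c = np.zeros(n + 1)
    c[0] = 1.0
    # input constraints: X @ lam - theta * x0 <= 0
    A_in = np.hstack([-X[:, [j0]], X])
    b_in = np.zeros(m)
    # output constraints: -Y @ lam <= -y0
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    b_out = -Y[:, j0]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (n + 1))
    return res.fun

# toy data: two firms, one input, one output; firm 0 defines the frontier
X = np.array([[2.0, 4.0]])
Y = np.array([[2.0, 2.0]])
print(dea_input_efficiency(X, Y, 0))  # ≈ 1.0 (efficient)
print(dea_input_efficiency(X, Y, 1))  # ≈ 0.5 (could produce the same output with half the input)
```

A score of 1 marks a unit on the efficient frontier; scores below 1 measure the proportional input reduction that a best-practice peer combination would allow.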
After promising beginnings of transformation in 1991, the Bulgarian economy fell into a deep crisis in the period from 1995 to 1997. Social policy, already overstrained by the demands of transition, was unable to cope effectively with the rapidly spreading state of emergency. The following essay analyses the development of the social indicators and instruments of social security in the years 1990 to 1998. In addition to unemployment and unemployment insurance, the issues of pensions and poverty will also be examined.
As in all countries in transition, both the tax and the transfer system have been under serious reform pressure. The socialist systems were not able to provide the degree of redistribution and social security that is indispensable in socially oriented market economies. Increasing income and wage differentiation is one of the most important prerequisites for a market-oriented, ability-to-pay tax system. But in the transformation period numerous quasi-legal or even illegal property transactions have taken place, leading to concentrations of wealth on the one hand, while, as a consequence of the bankruptcy of socialism, enormous poverty problems have arisen on the other. For the political acceptance of the transformation process it is of utmost importance that an efficient and fair tax system is implemented and that social security is organised by the state at a level which secures at least the physical minimum of subsistence or, if economically possible, even a socio-cultural minimum. Whether the state should go further in providing compulsory social insurance systems has been a hotly debated topic for decades, even in the welfare and social states of the Western type. Whereas the basic security systems have to be financed by general tax revenue, for a compulsory social insurance system, due to its insurance character, special earmarked social security contributions are considered necessary. Both public goods and services as well as at least basic security have to be financed by total tax revenue. For the acceptance and fairness of the whole system, the total redistributive effect of both sides of the budget, the tax system as well as the expenditure system, is decisive. In this paper we concentrate on the revenue side, i.e. on taxes as well as on social security contributions. Adam Smith had already formulated some very simple tax norms which have been carried over into modern tax theory.
The equivalence as well as the ability-to-pay principle are basic yardsticks for every tax system in a democratically oriented market system, not to forget tax fairness. In the historical development process, equity-oriented measures have often produced an enormous complexity of the single taxes as well as of the whole tax system. Therefore, reconsidering the Smithian principles of simplicity and of minimum compliance costs for the taxpayer would press even many Western European tax systems to undergo serious reform processes, which are often delayed because of intense interest-group influence. Hence, a modern tax system is a simple one which consists of only a few single taxes which are easy to administer. Such a system consists of two main taxes, the income tax and the value added tax. Consequently, in all countries of transition both taxes have been implemented, the implementation being fostered by the fact that both also constitute the typical components of the EU member states' tax systems. Such a harmonising tax reform is therefore the most important prerequisite for becoming a candidate for membership. Bulgaria also tried to follow this general pattern, reforming the income tax system starting in 1992 and replacing the old socialist turnover tax and excise duty system by the value added tax (VAT) in 1994. Especially with regard to the income tax system, the demand for simplicity has not yet been met. Complex rules to define the tax base as well as a steeply progressive tax schedule have led to behavioral adaptations, which are further strengthened by the effects of a high social contribution burden which is predominantly laid on the employers. In the following, a concise description of the tax and social contribution system is given; the paper closes with a summary, in which the impacts of the system are evaluated and some political recommendations for further reforms are presented.
Industrial policy and social strategy at the corporate level in Poland : questionnaire results
(1999)
This paper presents results from a survey of the industrial policy of the state and the social security system at the corporate level in Poland. Previous reports in this area indicated preferred directions of research for testing various hypotheses on the purposefulness of an integral approach to industrial policy and social security in the analysis of economic processes in transition (see Weikard 1997). This paper summarises the results and draws conclusions from a questionnaire study on subsidies, social benefits and economic policy in Polish firms during the process of transformation. Our results and conclusions show the scope and character of the processes in the area of industrial and social policy in the period 1994 to 1997. The paper is divided into five parts. The first part concerns the aims and methodology of the questionnaire; it also gives a brief description of the sample. The second part shows how enterprises dealt with the issues of employment and wages in this period. The third part characterises industrial policy at the corporate level, while the next presents results from the survey of the various social schemes pursued. The final part aims at an integral approach in the analysis of the various processes taking place in Polish enterprises. The survey was conducted in the period April to June 1998. Its aim was to observe certain phenomena occurring at the corporate level. The questionnaire was distributed among the managers, directors and presidents of large enterprises, which had been selected to satisfy the following three criteria. Firstly, the number of employees had to be considerable (over 300 workers). This criterion was applied on the consideration that certain social phenomena are more conspicuous in enterprises with a large workforce. Secondly, only operating enterprises were selected; enterprises which had closed down were disregarded.
Finally, for the purposes of the survey the units differed as regards their legal situation and form of ownership. Out of over 1,800 enterprises, 370 units were drawn, to which we sent the questionnaire. Unfortunately, as many as 51.9% of the respondents refused co-operation, which to a certain extent puts the representativeness of the sample in question. In the end, 178 questionnaires were completed and returned for analysis. However, not all of these questionnaires included full answers to all of the 75 questions; therefore, while discussing the results of the survey we have indicated the number of relevant answers we received.
The aim of the work was to present the results of analyses of the economic standing of the partnership companies which leased agricultural real estate from the Agricultural Property Agency of the State Treasury (APA) in 1996 and 1997. The analyses revealed the poor economic condition of the firms under investigation, especially their low level of stabilisation (the index of total debt was equal to 0.88 in 1996 and 0.96 in 1997) and their low level of solvency.
The study presents estimates and analyses of social expenditure in Poland. Changes which occurred during the transformation period are a reflection of consciously launched political transformations as well as of decisions taken as a result of current needs and political pressures. This has an impact on the volume and structure of the expenditures under consolidation. The debate devoted to budget issues, which gets more intense every autumn, testifies to increasing problems with correcting the guidelines for the distribution of expenditures. Even slight changes mean depriving a specific group of transfers, which under democratic conditions produces strong protests. A similar negative attitude to changes became evident with regard to taxation. Recommendations presented in 1998 by the Polish government [see Ministry of Finance, 1998a, 1998b] introduce substantial modifications to the current tax system (withdrawal of tax exemptions and introduction of a tax-free minimum income) and thus met with massive reluctance from major political factions. This study provides readers with information on the volume of public expenditures, the sources of public revenue, that is taxes, and a thorough study of the expenditures allocated to social goals. The analysis was carried out on the basis of our own estimates, which employ data acquired from the Ministry of Finance and the Ministry of Labour and Social Policy.
In centrally planned economies, state subsidies were the main instrument of supporting the economic sector. Most of them also had social functions (e.g. through subsidising the consumption of households). In the period of transition, with the withdrawal of the state from the economic decisions of enterprises, new social problems appeared. The paper analyses the process of granting state support to economic units - its scope and forms - in the 1990s.
This paper analyses the macroeconomic developments which have taken place in the Bulgarian economy in the period 1993-1997. The paper also looks at the institutional arrangements and the process of economic policy-making in the country. In this context the problems the Bulgarian economy has experienced in the transition process towards a market-oriented economy are also studied. The paper proceeds as follows: Section 2 looks at the institutional arrangements and the process of economic policy-making through 1995. Section 3 studies the deep economic crisis in 1996 and points out what went wrong in that period. Section 4 continues studying the economic crisis of the Bulgarian economy as well as the problems in the transition process during the first half of 1997. Section 5 looks at the economic developments during the second half of 1997 and points to the prospects for growth in 1998. Section 6 deals with the Bulgarian financial institutions and the existing institutional arrangements. Finally, Section 7 concludes the paper.
Industrial policy measures can be a reasonable supplement to economic and social policy actions during the period of transformation of centrally planned economies. This paper shows the interplay between industrial and social policy. Special attention is given to the timing and sequencing of the transformation process. This approach is closely modeled on the example of New Zealand.
The East African Plateau provides a spectacular example of geodynamic plateau uplift, active continental rifting, and associated climatic forcing. It is an integral part of the East African Rift System and has an average elevation of approximately 1,000 m. Its location coincides with a negative Bouguer gravity anomaly with a semi-circular shape, closely related to a mantle plume which has influenced Cenozoic crustal development since its impingement in Eocene-Oligocene time. The uplift of the East African Plateau, preceding volcanism and rifting, formed an important orographic barrier and tectonically controlled environment, which is profoundly influenced by climate-driven processes. Its location within the equatorial realm supports recently proposed hypotheses that topographic changes in this region must be considered the dominant forcing factor influencing atmospheric circulation patterns and rainfall distribution. The uplift of this region has therefore often been associated with fundamental climatic and environmental changes in East Africa and adjacent regions. While the far-reaching influence of the plateau uplift is widely accepted, the timing and the magnitude of the uplift are ambiguous and still subject to ongoing discussion. This dilemma stems from the lack of datable, geomorphically meaningful reference horizons that could record surface uplift. In order to quantify the amount of plateau uplift and to find evidence for the existence of significant relief along the East African Plateau prior to rifting, I analyzed and modeled one of the longest terrestrial lava flows: the 300-km-long Yatta phonolite flow in Kenya. This lava flow is 13.5 Ma old and originated in the region that now corresponds to the eastern rift shoulders. The phonolitic flow utilized an old riverbed that once drained the eastern flank of the plateau.
Due to differential erosion, this lava flow now forms a positive relief above the parallel-flowing Athi River, which mimics the course of the paleo-river. My approach is lava-flow modeling, based on an improved composition- and temperature-dependent method to parameterize the flow of an arbitrary lava in a rectangular channel. The essential growth pattern is described by a one-dimensional model, in which Newtonian rheological flow advance is governed by the development of viscosity and/or velocity in the internal parts of the lava-flow front. Comparing assessments of different magma compositions reveals that length-dominated, channelized lava flows are characterized by high effusion rates, rapid emplacement under approximately isothermal conditions, and laminar flow. By integrating the Yatta lava flow dimensions and the covered paleo-topography (slope angle) into the model, I was able to determine the pre-rift topography of the East African Plateau. The modeling results yield a pre-rift slope of at least 0.2°, suggesting that the lava flow must have originated at a minimum elevation of 1,400 m. Hence, high topography in the region of the present-day Kenya Rift must have existed by at least 13.5 Ma. This inferred mid-Miocene uplift coincides with the two-step expansion of grasslands, as well as important radiation and speciation events in tropical Africa. Accordingly, the combination of my results regarding the Yatta lava flow emplacement history, its location, and its morphologic character validates it as a suitable “paleo-tiltmeter”, and it must thus be considered an important topographic and volcanic feature in the topographic evolution of East Africa.
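As a back-of-envelope check of the reported figures (an illustration of the geometry, not part of the modeling itself): a minimum pre-rift slope of 0.2° sustained over the 300-km flow length implies roughly a kilometre of elevation drop along the flow, which is consistent with a minimum source elevation of 1,400 m once a terminus elevation of a few hundred metres is assumed.

```python
import math

flow_length_m = 300_000   # length of the Yatta phonolite flow
slope_deg = 0.2           # modeled minimum pre-rift slope

# elevation drop implied by a uniform slope over the full flow length
drop_m = flow_length_m * math.tan(math.radians(slope_deg))
print(round(drop_m))      # ≈ 1047 m
```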
The creation of complex polymer structures has been one of the major research topics over the last couple of decades. This work deals with the synthesis of (block co-)polymers, the creation of complex and stimuli-responsive aggregates by self-assembly, and the cross-linking of these structures. The higher-order self-assembly of the aggregates is also investigated. The formation of poly-2-oxazoline based micelles in aqueous solution and their simultaneous functionalization and cross-linking using thiol-yne chemistry is presented, for example. By introducing pH-responsive thiols into the core of the micelles, the influence of charged groups in the core of micelles on the entire structure can be studied. The charging of these groups leads to a swelling of the core and a decrease in the local concentration of the corona-forming block (poly(2-ethyl-2-oxazoline)). This decrease in concentration yields a shift in the cloud point temperature to higher temperatures for this Type I thermoresponsive polymer. When the swelling of the core is prevented, e.g. by the introduction of sufficient amounts of salt, this behavior disappears. Similar structures can be prepared using complex coacervate core micelles (C3Ms) built through the interaction of weakly acidic and basic polymer blocks. The advantage of these structures is that two different stabilizing blocks can be incorporated, which allows for more diverse and complex structures and behavior of the micelles. Using block copolymers with either a polyanionic or a polycationic block, C3Ms could be created whose corona contains two different soluble nonionic polymers, forming either a mixed corona or a Janus-type corona, depending on the polymers chosen. Using NHS and EDC, the micelles could easily be cross-linked by the formation of amide bonds in the core of the micelles. The higher-order self-assembly behavior of these core cross-linked complex coacervate core micelles (C5Ms) was studied.
Due to the cross-linking, the micelles are stabilized towards changes in pH and ionic strength, but the polymer chains are also no longer able to rearrange. For C5Ms with a mixed corona, network structures were likely formed upon the collapse of the thermoresponsive poly(N-isopropylacrylamide) (PNIPAAm), whereas for Janus-type C5Ms, well-defined spherical aggregates of micelles could be obtained, depending on the pH of the solution. Furthermore, it could be shown that Janus micelles can adsorb onto inorganic nanoparticles such as colloidal silica (through a selective interaction between PEO and the silica surface) or gold nanoparticles (by the binding of thiol end-groups). Asymmetric aggregates were also formed using the streptavidin-biotin binding motif. This is achieved by using three out of the four binding sites of streptavidin for the binding of one three-arm star polymer, end-functionalized with biotin groups. A homopolymer with one biotin end-group can be used to occupy the last position. This binding of two different polymers makes it possible to create asymmetric complexes. The phase separation is in principle independent of the kind of polymer, since the structure of the protein is the driving force, not the intrinsic phase separation between polymers. Besides Janus structures, specific cross-linking can also be achieved by using other mixing ratios.
Non-mycorrhizal fungal endophytes are able to colonize roots internally without causing visible disease symptoms, establishing neutral or mutualistic associations with plants. These fungi, known as non-clavicipitaceous endophytes, have a broad host range of monocot and eudicot plants and are highly diverse. Some of them promote plant growth and confer increased abiotic-stress tolerance and disease resistance. In view of such possible effects on host plants, this work aimed to isolate and characterize native fungal root endophytes from tomato (Lycopersicon esculentum Mill.) and to analyze their effects on plant development, plant resistance, and fruit yield and quality, together with the model endophyte Piriformospora indica. Fifty-one new fungal strains were isolated from disinfected tomato roots from four different crop sites in Colombia. These isolates were roughly characterized, and fourteen potential endophytes were further analyzed concerning their taxonomy, their root colonization capacity and their impact on plant growth. Sequencing of the ITS region of the ribosomal RNA gene cluster and in-depth morphological characterisation revealed that they correspond to different phylogenetic groups within the phylum Ascomycota. Nine different morphotypes were described, including six dark septate endophytes (DSE) that did not correspond to the Phialocephala group. Detailed confocal microscopy analysis showed various colonization patterns of the endophytes inside the roots, ranging from epidermal penetration to hyphal growth through the cortex. Tomato pot experiments under glasshouse conditions showed that they differentially affect plant growth depending on colonization time and inoculum concentration.
Three new isolates (two unknown fungal endophytes, DSE48 and DSE49, and one identified as Leptodontidium orchidicola) with neutral or positive effects were selected and tested in several experiments for their influence on vegetative growth, fruit yield and quality, and their ability to diminish the impact of the pathogen Verticillium dahliae on tomato plants. Although plant growth promotion by all three fungi was observed in young plants, vegetative growth parameters were not affected after 22 weeks of cultivation, except for a reproducible increase of root diameter by the endophyte DSE49. Additionally, L. orchidicola increased biomass and glucose content of tomato fruits, but only at an early date of harvest and at a certain level of root colonization. Concerning bioprotective effects, the endophytes DSE49 and L. orchidicola significantly decreased disease symptoms caused by the pathogen V. dahliae, but only at a low dose of the pathogen. To analyze whether the model root endophytic fungus Piriformospora indica could be suitable for application in production systems, its impact on tomato was evaluated. Similarly to the new fungal isolates, significant differences in vegetative growth parameters were only observable in young plants, but protection against V. dahliae could be seen in one experiment even at a high dose of the pathogen. Like the DSE L. orchidicola, P. indica increased the number and biomass of marketable tomatoes only at the beginning of fruit setting, but this did not lead to a significantly higher total yield. Whether the effects on growth are due to improved mineral nutrition of the plant was analyzed in barley, in comparison to the arbuscular mycorrhizal fungus Glomus mosseae. While the mycorrhizal fungus increased nitrogen and phosphate uptake of the plant, no such effect was observed for P. indica.
In summary, this work shows that many different fungal endophytes can also be isolated from the roots of crops and that these isolates can have positive effects on early plant development. This does, however, not lead to an increase in total yield or an improvement in fruit quality of tomatoes under greenhouse conditions.
The recent discovery of an intricate and nontrivial interaction topology among the elements of a wide range of natural systems has altered the way we understand complexity. For example, the axonal fibres transmitting electrical information between cortical regions form a network which is neither regular nor completely random. Their structure seems to follow functional principles to balance between segregation (functional specialisation) and integration. Cortical regions are clustered into modules specialised in processing different kinds of information, e.g. visual or auditory. However, in order to generate a global perception of the real world, the brain needs to integrate the distinct types of information. Where this integration happens, nobody knows. We have performed an extensive and detailed graph-theoretical analysis of the cortico-cortical organisation in the brain of cats, trying to relate the individual and collective topological properties of the cortical areas to their function. We conclude that the cortex possesses a very rich communication structure, composed of a mixture of parallel and serial processing paths capable of accommodating dynamical processes with a wide variety of time scales. The communication paths between the sensory systems are not random, but largely mediated by a small set of areas. Far from acting as mere transmitters of information, these central areas are densely connected to each other, strongly indicating their functional role as integrators of the multisensory information. In the quest to uncover the structure-function relationship of cortical networks, the peculiarities of this network have led us to continuously reconsider the established graph measures. For example, a normalised formalism to identify the “functional roles” of vertices in networks with community structure is proposed.
The tools developed for this purpose open the door to novel community detection techniques which may also characterise the overlap between modules. The concept of integration has been revisited and adapted to the necessities of the network under study. Additionally, analytical and numerical methods have been introduced to facilitate understanding of the complicated statistical interrelations between the distinct network measures. These methods are helpful for constructing new significance tests which may discriminate the relevant properties of real networks from side effects of the evolutionary growth processes.
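Vertex "functional roles" in modular networks are commonly quantified by combining a within-module degree with a participation coefficient, which measures how evenly a node's links are spread over the modules. A minimal sketch of the participation coefficient (my own illustrative implementation on a toy graph, not the thesis code or its normalised formalism):

```python
def participation_coefficient(adj, communities):
    """adj: node -> set of neighbours; communities: list of disjoint node sets.

    P(v) = 1 - sum_c (k_vc / k_v)^2, where k_vc is the number of v's links
    into community c. P = 0: all links in one module; P -> 1: links spread
    evenly over many modules.
    """
    comm_of = {v: ci for ci, comm in enumerate(communities) for v in comm}
    P = {}
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k == 0:
            P[v] = 0.0
            continue
        counts = {}
        for u in nbrs:
            c = comm_of[u]
            counts[c] = counts.get(c, 0) + 1
        P[v] = 1.0 - sum((kc / k) ** 2 for kc in counts.values())
    return P

# toy network: two small modules, with node 0 linking equally into both
adj = {0: {1, 3}, 1: {0, 2}, 2: {1}, 3: {0, 4}, 4: {3}}
P = participation_coefficient(adj, [{0, 1, 2}, {3, 4}])
print(P[0], P[2])  # 0.5 0.0
```

Node 0, with one link into each module, gets an intermediate score, while node 2, connected only inside its own module, scores zero; "connector hub" roles correspond to high degree combined with high P.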
We are interested in modeling some two-level population dynamics, resulting from the interplay of ecological interactions and phenotypic variation of individuals (or hosts) and the evolution of cells (or parasites) of two types living in these individuals. The ecological parameters of the individual dynamics depend on the number of cells of each type contained in the individual, and the cell dynamics depends on the trait of the invaded individual. Our models are rooted in the microscopic description of a random (discrete) population of individuals characterized by one or several adaptive traits and cells characterized by their type. The population is modeled as a stochastic point process whose generator captures the probabilistic dynamics over continuous time of birth, mutation and death for individuals and birth and death for cells. The interaction between individuals (resp. between cells) is described by a competition between individual traits (resp. between cell types). We look for tractable large-population approximations. By combining various scalings on population size, birth and death rates and mutation step, the single microscopic model is shown to lead to contrasting nonlinear macroscopic limits of different nature: deterministic approximations, in the form of ordinary, integro- or partial differential equations, or probabilistic ones, like stochastic partial differential equations or superprocesses. The study of the long-time behavior of these processes seems very hard, and we only develop some simple cases highlighting the difficulties involved.
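In its simplest form, the microscopic birth-death dynamics described above is a continuous-time Markov jump process that can be simulated exactly with the Gillespie algorithm. A minimal sketch for a single-type logistic birth-death population (an illustration with rates chosen by me, not the paper's two-level model):

```python
import random

def simulate(b, d, c, n0, t_max, rng):
    """Gillespie simulation of a logistic birth-death process.

    Birth rate b*n; death rate (d + c*n)*n, where c*n models
    competition. Returns the population size at time t_max
    (or 0 on extinction).
    """
    n, t = n0, 0.0
    while n > 0:
        birth = b * n
        death = (d + c * n) * n
        total = birth + death
        # time to next event is exponential with rate = total
        t += rng.expovariate(total)
        if t >= t_max:
            break
        # pick the event proportionally to its rate
        if rng.random() * total < birth:
            n += 1
        else:
            n -= 1
    return n

rng = random.Random(1)
# fluctuates around the deterministic equilibrium (b - d) / c = 100
print(simulate(b=2.0, d=1.0, c=0.01, n0=50, t_max=50.0, rng=rng))
# with no births the population goes extinct
print(simulate(b=0.0, d=1.0, c=0.0, n0=5, t_max=1000.0, rng=rng))  # 0
```

The deterministic large-population limits mentioned in the abstract arise from exactly such processes when the population size is rescaled to infinity.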
The Ginibre gas is a Poisson point process defined on a space of loops related to the Feynman-Kac representation of the ideal Bose gas. Here we study thermodynamic limits of different ensembles via the Martin-Dynkin boundary technique and show in which way infinitely long loops occur. This effect is the so-called Bose-Einstein condensation.
Estimation and testing of distributions in metric spaces are well known. R.A. Fisher, J. Neyman, W. Cochran and M. Bartlett achieved essential results on the statistical analysis of categorical data. In the last 40 years many other statisticians have found important results in this field. Data sets often contain categorical data, e.g. levels of factors or names. There does not exist any ordering or any distance between these categories. At each level, some metric or categorical values are measured. We introduce a new method of scaling based on statistical decisions. For this we define empirical probabilities for the original observations and find a class of distributions in a metric space where these empirical probabilities can be found as approximations for equivalently defined probabilities. With this method we identify probabilities connected with the categorical data and probabilities in metric spaces. In this way we obtain a mapping from the levels of factors or names into points of a metric space. This mapping yields the scale for the categorical data. From the statistical point of view we use multivariate statistical methods: we calculate maximum likelihood estimates and compare different approaches to scaling.
We give the explicit solution for the minimax linear estimate. For scale-dependent models an empirical minimax linear estimate is defined, and we prove that these estimates are Stein's estimates.
We study resonances for the generator of a diffusion with small noise in R^d: L = -ε∆ + ∇F · ∇, when the potential F grows slowly at infinity (typically as a square root of the norm). The case when F grows fast is well known, and under suitable conditions one can show that there exists a family of exponentially small eigenvalues, related to the wells of F. We show that, for an F with slow growth, the spectrum is R+, but we can find a family of resonances whose real parts behave as the eigenvalues of the "quick growth" case, and whose imaginary parts are small.
We consider an infinite system of hard balls in R^d undergoing Brownian motions and submitted to a pair potential with infinite range and quasi-polynomial decay. It is modeled by an infinite-dimensional Stochastic Differential Equation with an infinite-dimensional local time term. Existence and uniqueness of a strong solution is proven for such an equation with deterministic initial condition. We also show that the set of all equilibrium measures, solutions of a Detailed Balance Equation, coincides with the set of canonical Gibbs measures associated to the hard core potential.
We consider an infinite system of hard balls in R^d undergoing Brownian motions and subject to a smooth pair potential. It is modeled by an infinite-dimensional stochastic differential equation with an infinite-dimensional local time term. Existence and uniqueness of a strong solution are proven for such an equation with fixed deterministic initial condition. We also show that Gibbs measures are reversible measures.
Two- and k-sample tests of equality of survival distributions against alternatives including cross-effects of survival functions, proportional and monotone hazard ratios are given for right-censored data. The asymptotic power against approaching alternatives is investigated. The tests are applied to the well-known chemo- and radiotherapy data of the Gastrointestinal Tumor Study Group. The P-values for both proposed tests are much smaller than those of other known tests. Unlike the test of Stablein and Koutrouvelis, the new tests can be applied not only to singly but also to randomly censored data.
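The classical baseline that such tests extend can be sketched in a few lines. The following is a hypothetical illustration of the standard (unweighted) two-sample logrank test for right-censored data, not the cross-effect tests proposed in the work; the data and function name are made up:

```python
# Sketch of the classical two-sample logrank test for right-censored data.
# Each group is a list of (time, event) pairs; event=1 means death, 0 censored.

def logrank_statistic(group1, group2):
    """Unweighted logrank chi-squared statistic (1 degree of freedom)."""
    data = [(t, e, 0) for t, e in group1] + [(t, e, 1) for t, e in group2]
    event_times = sorted({t for t, e, _ in data if e == 1})
    o_minus_e, var = 0.0, 0.0
    for t in event_times:
        n1 = sum(1 for s, _, g in data if s >= t and g == 0)  # at risk, group 1
        n2 = sum(1 for s, _, g in data if s >= t and g == 1)  # at risk, group 2
        d1 = sum(1 for s, e, g in data if s == t and e == 1 and g == 0)
        d2 = sum(1 for s, e, g in data if s == t and e == 1 and g == 1)
        n, d = n1 + n2, d1 + d2
        if n < 2:
            continue
        o_minus_e += d1 - d * n1 / n                          # observed minus expected
        var += d * (n1 / n) * (n2 / n) * (n - d) / (n - 1)    # hypergeometric variance
    return o_minus_e ** 2 / var

# Two clearly separated groups give a statistic above the 5% threshold (3.84):
stat = logrank_statistic([(1, 1), (2, 1), (3, 1)], [(4, 1), (5, 1), (6, 1)])
```

The statistic is symmetric in the two groups, and censored observations contribute to the risk sets but not to the event counts.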
The aim of this thesis is the design, expression and purification of human cytochrome c mutants and their characterization with regard to electrochemical and structural properties, as well as with respect to the reaction with the superoxide radical and with the selected proteins human sulfite oxidase and fungal bilirubin oxidase. All three interaction partners are studied here for the first time with human cyt c and with mutant forms of cyt c. A further aim is the incorporation of the different cyt c forms into two bioelectronic systems: an electrochemical superoxide biosensor with enhanced sensitivity, and a protein multilayer assembly on electrodes, with and without bilirubin oxidase. The first part of the thesis is dedicated to the design, expression and characterization of the mutants, with a focus on the electrochemical characterization of the protein in solution and immobilized on electrodes. Further, the reaction of these mutants with superoxide was investigated and possible reaction mechanisms are discussed. In the second part of the work an amperometric superoxide biosensor based on selected human cytochrome c mutants was constructed and the performance of the sensor electrodes was studied. The human wild-type and four of the five mutant electrodes could be applied successfully to the detection of the superoxide radical. In the third part of the thesis the reaction of horse heart cyt c, the human wild-type and seven human cyt c mutants with the two proteins sulfite oxidase and bilirubin oxidase was studied electrochemically, and the influence of the mutations on the electron transfer reactions is discussed. Finally, protein multilayer electrodes with different cyt c forms, including the mutants G77K and N70K which exhibit different reaction rates towards BOD, were investigated, and BOD was embedded in the multilayer assembly together with the wild-type and engineered cyt c.
The relevant electron transfer steps and the kinetic behavior of the multilayer electrodes are investigated, since the functionality of electroactive multilayer assemblies with incorporated redox proteins is often limited by the electron transfer abilities of the proteins within the multilayer. The formation via the layer-by-layer technique and the kinetic behavior of the mono- and bi-protein multilayer systems are studied by SPR and cyclic voltammetry. In conclusion, this thesis shows that protein engineering is a helpful instrument for studying protein reactions as well as electron transfer mechanisms of complex bioelectronic systems (such as bi-protein multilayers). Furthermore, the possibility of designing tailored recognition elements for the construction of biosensors with improved performance is demonstrated.
Soft nanocomposites with enhanced electromechanical response for dielectric elastomer actuators
(2011)
Electromechanical transducers based on elastomer capacitors are presently considered for many soft actuation applications, due to their large reversible deformation in response to the electrostatic pressure induced by an electric field. The high operating voltage of such devices is currently a major drawback, hindering their use in applications such as biomedical devices and biomimetic robots; it could, however, be reduced by careful design of the material properties. The main targets for improvement are increasing the relative permittivity of the active material while maintaining high electric breakdown strength and low stiffness, which leads to enhanced electrostatic energy storage and hence a reduced operating voltage. Improvement of the functional properties is possible through the use of nanocomposites, which exploit the high surface-to-volume ratio of the nanoscale filler, resulting in large effects on macroscale properties. This thesis explores several strategies for nanomaterial design. The resulting nanocomposites are fully characterized with respect to their electrical and mechanical properties by dielectric spectroscopy, tensile mechanical analysis, and electric breakdown tests. First, nanocomposites consisting of high-permittivity rutile TiO2 nanoparticles dispersed in the thermoplastic block copolymer SEBS (poly(styrene-co-ethylene-co-butylene-co-styrene)) are shown to exhibit permittivity increases of up to 3.7 times, leading to a 5.6-fold improvement in electrostatic energy density, but with a trade-off in mechanical properties (an 8-fold increase in stiffness). The variation in both electrical and mechanical properties still allows for electromechanical improvement, such that a 27 % reduction of the electric field is found compared to the pure elastomer.
Second, it is shown that the use of conductive nanofiller particles (carbon black, CB) can lead to a strong increase of the relative permittivity through percolation, however with detrimental side effects. These arise from the localized enhancement of the electric field within the composite, which leads to sharp reductions in breakdown strength. Hence, the increase in permittivity does not make up for the reduction in breakdown strength in terms of stored electrical energy, which may prohibit practical use. Third, a completely new approach for increasing the relative permittivity and electrostatic energy density of a polymer, based on 'molecular composites', is presented, relying on chemically grafting soft π-conjugated macromolecules (polyaniline, PANI) to a flexible elastomer backbone. Polarization caused by charge displacement along the conjugated backbone is found to induce a large and controlled permittivity enhancement (470 % over the elastomer matrix), while the chemical bonding encapsulates the PANI chains, resulting in hardly any reduction in electric breakdown strength and hence a large increase in stored electrostatic energy. This leads to an improvement in the sensitivity of the measured electromechanical response (an 83 % reduction of the driving electric field) as well as in the maximum actuation strain (250 %). These results represent a large step forward in understanding the strategies that can be employed to obtain high-permittivity polymer materials of practical use for electro-elastomer actuation.
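The competing effects of permittivity and breakdown strength on storable energy follow directly from the energy density of a charged dielectric, u = ½·ε0·εr·E², evaluated at the breakdown field. A minimal sketch with illustrative numbers (not the measured values from this work):

```python
# Electrostatic energy density of a dielectric elastomer at the breakdown
# field: u = 0.5 * eps0 * eps_r * E_bd^2. Numbers below are made up for
# illustration, not data from the thesis.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def energy_density(eps_r, e_bd):
    """Maximum stored energy density (J/m^3) at breakdown field e_bd (V/m)."""
    return 0.5 * EPS0 * eps_r * e_bd ** 2

u_matrix = energy_density(eps_r=2.3, e_bd=100e6)       # plain elastomer
u_filled = energy_density(eps_r=2.3 * 3.7, e_bd=80e6)  # higher eps_r, lower E_bd
print(round(u_filled / u_matrix, 3))  # → 2.368
```

A 3.7-fold permittivity gain can thus outweigh a modest breakdown-field loss, whereas a percolative filler that halves the breakdown field erases it (3.7 × 0.5² < 1).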
A systems biological approach towards the molecular basis of heterosis in Arabidopsis thaliana
(2011)
Heterosis is defined as the superior performance of heterozygous genotypes compared to their corresponding, genetically different homozygous parents. This phenomenon has been known since the beginning of the last century and has been widely used in plant breeding, but the underlying genetic and molecular mechanisms are not well understood. In this work, a systems biological approach based on molecular network structures is proposed to contribute to the understanding of heterosis. Hybrids are likely to contain additional regulatory possibilities compared to their homozygous parents and may therefore be able to respond correctly to a larger number of environmental challenges, leading to higher adaptability and thus the heterosis phenomenon. In the network hypothesis for heterosis presented in this work, more regulatory interactions are expected in the molecular networks of the hybrids than in those of the homozygous parents. Partial correlations were used to assess this difference in the global interaction structure of regulatory networks between the hybrids and the homozygous genotypes. The network hypothesis was tested on metabolite profiles as well as gene expression data of the two parental Arabidopsis thaliana accessions C24 and Col-0 and their reciprocal crosses, plants known to show a heterosis effect in their biomass phenotype. The hypothesis was confirmed for mid-parent and best-parent heterosis for both hybrids in both the metabolite and the gene expression data. This result was shown to depend on the cutoffs used during the analyses: too strict filtering resulted in sets of metabolites and genes for which the network hypothesis for heterosis does not hold for either hybrid, regarding both mid-parent and best-parent heterosis.
In an over-representation analysis, the genes showing the largest heterosis effects according to our network hypothesis were compared to genes of heterotic quantitative trait locus (QTL) regions. Separately for each hybrid, and for mid-parent as well as best-parent heterosis, a significantly larger overlap between the gene lists resulting from the two different approaches to biomass heterosis was detected than expected by chance. This suggests that each heterotic QTL region contains many genes influencing biomass heterosis in the early development of Arabidopsis thaliana. Furthermore, this integrative analysis narrowed down the group of candidate genes for biomass heterosis in Arabidopsis thaliana identified by both approaches and increased the confidence in them.
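The network comparison above rests on partial correlations, which can be read off the inverse of the covariance (precision) matrix: the partial correlation between variables i and j given all others is -P_ij / sqrt(P_ii · P_jj). A minimal sketch with synthetic data (numpy assumed; these are not the metabolite or expression profiles of the thesis):

```python
# Partial correlations from the precision matrix, the quantity used to compare
# regulatory network structure between genotypes. Synthetic three-variable data.
import numpy as np

def partial_correlations(data):
    """data: (n_samples, n_variables) array. Returns partial correlation matrix."""
    prec = np.linalg.pinv(np.cov(data, rowvar=False))  # precision matrix
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = x + 0.1 * rng.normal(size=500)  # y driven by x
z = y + 0.1 * rng.normal(size=500)  # z driven by y only
pc = partial_correlations(np.column_stack([x, y, z]))
# x and z are strongly correlated marginally, but nearly conditionally
# independent given y, so pc[0, 2] is close to zero: only the direct
# regulatory link survives in the partial correlation network.
```

This is exactly why partial correlations, unlike plain correlations, distinguish direct interactions from those mediated by a third variable.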
It is well documented that transcriptionally coordinated genes tend to be functionally related, and that such relationships may be conserved across different species and even kingdoms (Ihmels et al., 2004). Such relationships were initially utilized to reveal functional gene modules in yeast and mammals (Ihmels et al., 2004), and to explore orthologous gene functions between different species and kingdoms (Stuart et al., 2003; Bergmann et al., 2004). Model organisms such as Arabidopsis are readily used in basic research due to resource availability and the relative speed of data acquisition. A major goal is to transfer the knowledge acquired in these model organisms to species of greater importance to our society. However, due to the large gene families in plants, identifying functional equivalents of well-characterized Arabidopsis genes in other plants is a non-trivial task that often returns erroneous or inconclusive results. In this thesis, concepts for utilizing co-expression networks to help infer (i) gene function, (ii) the organization of biological processes and (iii) knowledge transfer between species are introduced. A fact often overlooked by bioinformaticians is that a bioinformatic method is only as useful as it is accessible. Therefore, the majority of the work presented in this thesis was directed at developing freely available, user-friendly web tools accessible to any biologist.
This thesis presents quantum chemical models and force field calculations for the RuBisCO isotope effect, the spectral characteristics of the blue-light sensor BLUF, and the light-harvesting complex II (LHCII). The work focuses on the influence of the environment on the corresponding systems. For RuBisCO, it was found that the isotope effect is almost unaffected by the environment. In the case of the BLUF domain, an amino acid (Ser41) was found to be important for the UV/vis spectrum but has so far been unaccounted for in experiments. The residue was shown to be highly mobile and to have a systematic influence on the spectral shift of the BLUF domain chromophore (flavin). Finally, for LHCII it was found that small changes in the geometry of a chlorophyll b/violaxanthin chromophore pair can strongly influence the light-harvesting mechanism; here especially, a proper description of the environment proved critical. In conclusion, the environment was observed to be of often unexpected importance for the molecular properties, and it does not seem possible to give a reliable general estimate of the changes created by its presence.
The programmable network envisioned in the 1990s within standardization and research for the Intelligent Network is currently coming into reality using IP-based Next Generation Networks (NGN) and applying Service-Oriented Architecture (SOA) principles for service creation, execution, and hosting. SOA is the foundation for both next-generation telecommunications and middleware architectures, which are rapidly converging on top of commodity transport services. Services such as triple/quadruple play, multimedia messaging, and presence are enabled by the emerging service-oriented IP Multimedia Subsystem (IMS), and allow telecommunications service providers to maintain, if not improve, their position in the marketplace. SOA is becoming the de facto standard in next-generation middleware systems as the system model of choice to interconnect service consumers and providers within and between enterprises. We leverage previous research activities in overlay networking technologies along with recent advances in network abstraction, service exposure, and service creation to develop a paradigm for a service environment providing converged Internet and telecommunications services that we call the Service Broker. Such a Service Broker provides mechanisms to combine and mediate between different service paradigms from the two domains Internet/WWW and telecommunications. Furthermore, it enables the composition of services across these domains and is capable of defining and applying temporal constraints during creation and execution time. By adding network awareness into the service fabric, such a Service Broker may also act as a next-generation network-to-service element allowing the composition of cross-domain and cross-layer network and service resources.
The contribution of this research is threefold. First, we analyze and classify principles and technologies from information technology (IT) and telecommunications to identify and discuss issues allowing cross-domain composition in a converging service layer. Second, we discuss service composition methods allowing the creation of converged services on an abstract level; in particular, we present a formalized method for model checking of such compositions. Finally, we propose a Service Broker architecture converging Internet and telecom services. This environment enables cross-domain feature interaction in services through formalized obligation policies acting as constraints during service discovery, creation, and execution time.
The exponential growth in the number of web sites and Internet users has made the WWW the most important global information resource. From information publishing and electronic commerce to entertainment and social networking, the Web allows inexpensive and efficient access to the services provided by individuals and institutions. The basic units for distributing these services are the web sites scattered throughout the world. However, the extreme fragility of web services and content, the strong competition between similar services supplied by different sites, and the wide geographic distribution of web users create an urgent need for web managers to track and understand the usage interest of their web customers. This thesis, "X-tracking the Usage Interest on Web Sites", aims to fulfill this requirement. "X" carries two meanings: first, that usage interest differs between web sites; second, that usage interest is depicted from multiple aspects: internal and external, structural and conceptual, objective and subjective. "Tracking" indicates that our concentration is on locating and measuring the differences and changes among usage patterns. This thesis presents methodologies for discovering usage interest on three kinds of web sites: public information portal sites, e-learning sites that provide various streaming lectures, and social sites that host public discussions on IT issues. On each kind of site, we concentrate on different issues related to mining usage interest. Educational information portal sites were the first implementation scenario for discovering usage patterns and optimizing the organization of web services. In such cases, the usage patterns are modeled as frequent page sets, navigation paths, navigation structures or graphs. A necessary prerequisite, however, is to rebuild individual behaviors from the usage history, and we give a systematic study of how to do so.
In addition, this thesis presents a new strategy for building content clusters based on pair browsing retrieved from usage logs. The difference between such clusters and the original web structure reveals the distance between the destinations on the usage side and the expectations on the design side. Moreover, we study the problem of tracking changes of usage patterns over their life cycles. The changes are described from the internal side, integrating conceptual and structural features, and from the external side for the physical features; they are also described from the local side, measuring the difference between two time spans, and from the global side, showing the change tendency along the life cycle. A platform, Web-Cares, is developed to discover usage interest, to measure the difference between usage interest and site expectation, and to track the changes of usage patterns. E-learning sites provide teaching materials such as slides, recorded lecture videos and exercise sheets. We focus on discovering the learning interest in streaming lectures, such as RealMedia, MP4 and Flash clips. Compared to an information portal site, the usage of streaming lectures encapsulates variables such as viewing time and actions during the learning process. The learning interest is discovered by answering six questions, covering the relations between pieces of lectures and the preferences among different forms of lectures. We further detect changes of learning interest in the same course across different semesters; differences in content and structure between two course instances leverage changes in the learning interest. We give an algorithm for measuring differences in learning interest, integrated with a similarity comparison between courses. A search engine, TASK-Moniminer, is created to help teachers query the learning interest in their streaming lectures on the tele-TASK site.
A social site acts as an online community, attracting web users to discuss common topics and share interesting information. Compared to public information portal sites and e-learning sites, the rich interactions among users and web content bring a wider range of content quality but, on the other hand, provide more possibilities to express and model usage interest. We propose a framework for finding and recommending high-reputation articles in a social site. We observed that reputation can be classified into global and local categories, and that the quality of articles with high reputation is related to their content features. Based on these observations, our framework first finds the articles having global or local reputation, then clusters articles based on their content relations, and finally selects and recommends articles from each cluster based on their reputation ranks.
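The simplest of the usage patterns mentioned above, frequent page sets, can be sketched for the pair case as a support count over sessions. The session data below are made up for illustration; the mining in the thesis is considerably more elaborate (paths, structures, graphs):

```python
# Counting frequently co-visited page pairs across user sessions — the kind of
# "pair browsing" signal used as a starting point for content clusters.
from collections import Counter
from itertools import combinations

def frequent_pairs(sessions, min_support):
    """Return page pairs co-occurring in at least min_support sessions."""
    counts = Counter()
    for session in sessions:
        # deduplicate and sort so (a, b) and (b, a) count as the same pair
        for pair in combinations(sorted(set(session)), 2):
            counts[pair] += 1
    return {pair: c for pair, c in counts.items() if c >= min_support}

sessions = [
    ["home", "courses", "video1"],
    ["home", "courses", "video2"],
    ["home", "forum"],
    ["courses", "video1"],
]
pairs = frequent_pairs(sessions, min_support=2)
# ("courses", "video1") and ("courses", "home") each appear in two sessions
```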
Crustal deformation can result from volcanic and tectonic activity such as fault dislocation and magma intrusion, and it may precede and/or follow earthquake occurrence and eruption. To mitigate the associated hazard, continuous monitoring of crustal deformation has accordingly become an important task for geo-observatories and fast response systems. Because crustal deformation fields behave highly non-linearly in time and space, and are not always measurable using conventional geodetic methods (e.g., leveling), innovative techniques of monitoring and analysis are required. In this thesis I describe novel methods to improve precise and accurate mapping of the spatiotemporal surface deformation field using multiple acquisitions of satellite radar data. Furthermore, to better understand the sources of such spatiotemporal deformation fields, I present novel static and time-dependent model inversion approaches. Almost all interferograms include areas where the signal decorrelates or is distorted by atmospheric delay. In this thesis I detail new analysis methods that reduce the limitations of conventional InSAR by combining the benefits of advanced InSAR methods, such as permanent scatterer InSAR (PSI) and small baseline subsets (SBAS), with a wavelet-based data filtering scheme. This novel InSAR time series methodology is applied, for instance, to monitor the non-linear deformation processes at Hawaii Island. The radar phase change at Hawaii is found to be due to intrusion, eruption, earthquake and flank movement processes, superimposed by significant environmental artifacts (e.g., atmospheric delay). The deformation field obtained using the new InSAR analysis method is in good agreement with continuous GPS data. This provides an accurate spatiotemporal deformation field at Hawaii, which allows time-dependent source modeling.
Conventional source modeling methods usually deal with a static deformation field, while retrieving the dynamics of the source requires more sophisticated time-dependent optimization approaches. I address this problem by combining Monte Carlo based optimization approaches with a Kalman filter, which yields model parameters of the deformation source that are consistent in time. I found that numerous deformation sources at Hawaii Island interact spatiotemporally; for example, volcano inflation is associated with changes in rifting behavior and temporally linked to silent earthquakes. I applied these new methods to other tectonic and volcanic terrains, most of which reveal the importance of associated or coupled deformation sources. The findings are 1) the relation between deep and shallow hydrothermal and magmatic sources underneath the Campi Flegrei volcano, 2) gravity-driven deformation at Damavand volcano, 3) fault interaction associated with the 2010 Haiti earthquake, 4) independent block-wise flank motion at the Hilina Fault system, Kilauea, and 5) interaction between a salt diapir and the 2005 Qeshm earthquake in southern Iran. This thesis, written in cumulative form and comprising 9 manuscripts published or under review in peer-reviewed journals, improves the techniques for InSAR time series analysis and source modeling and shows the mutual dependence between adjacent deformation sources. These findings allow a more realistic estimation of the hazard associated with complex volcanic and tectonic systems.
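The idea of keeping source parameters consistent in time with a Kalman filter can be illustrated for a single scalar parameter. The sketch below uses a random-walk state model on synthetic data; it is only a conceptual analogue, not the actual Hawaii inversion:

```python
# Scalar Kalman filter: smoothing noisy per-epoch estimates of one source
# parameter (say, a volume-change rate) into a time-consistent series.
# Synthetic observations; q and r are assumed noise variances.
import random

def kalman_1d(observations, q=0.01, r=0.25, x0=0.0, p0=1.0):
    """q: process noise variance, r: observation noise variance."""
    x, p, out = x0, p0, []
    for z in observations:
        p = p + q               # predict: random-walk state model
        k = p / (p + r)         # Kalman gain
        x = x + k * (z - x)     # update with the new observation
        p = (1 - k) * p
        out.append(x)
    return out

random.seed(1)
truth = 1.0
obs = [truth + random.gauss(0, 0.5) for _ in range(200)]
est = kalman_1d(obs)
# The filtered series hugs the true value far more tightly than the raw
# observations, while still able to track slow changes through q.
```

The ratio q/r sets the trade-off: a larger q lets the filter follow faster temporal changes of the source at the cost of noisier estimates.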
Recent large earthquakes have highlighted the need to develop robust and rapid procedures to properly calculate the magnitude of an earthquake within a short time after its occurrence. The most famous example is the 26 December 2004 Sumatra earthquake, when the standard procedures adopted at that time by many agencies failed to provide accurate magnitude estimates of this exceptional event in time to launch early warnings and an appropriate response. Being related to the radiated seismic energy ES, the energy magnitude ME is a good estimator of the high-frequency content radiated by the source into the seismic waves. However, a procedure to determine ME rapidly (that is, within 15 minutes after the earthquake occurrence) was required. Here a procedure is presented that rapidly provides the energy magnitude ME for shallow earthquakes by analyzing teleseismic P-waves in the distance range 20°-98°. To account for the energy loss experienced by the seismic waves from the source to the receivers, spectral amplitude decay functions obtained from numerical simulations of Green's functions based on the average global model AK135Q are used. The proposed method has been tested on a large global dataset (~1000 earthquakes), and the rapid ME estimates obtained have been compared to magnitude scales from different agencies. Special emphasis is given to the comparison with the moment magnitude MW, since the latter is very popular and extensively used in common seismological practice. It is shown, however, that MW alone provides only limited information about the seismic source properties, and that disaster management organizations would benefit from a combined use of MW and ME in the prompt evaluation of an earthquake's tsunami and shaking potential.
In addition, since the proposed approach for ME is intended to work without knowledge of the fault plane geometry (often available only hours after an earthquake), the suitability of the method is assessed by grouping the analyzed earthquakes according to their type of mechanism (strike-slip, normal faulting, thrust faulting, etc.). No clear trend is found in the rapid ME estimates across the different fault plane solution groups. This is not the case for the ME routinely determined by the U.S. Geological Survey, which uses specific radiation pattern corrections; further studies are needed to verify the effect of such corrections on ME estimates. Finally, exploiting the redundancy of the information provided by the analyzed dataset, the components of variance in the single-station ME estimates are investigated. The largest component of variance is due to the intra-station (record-to-record) error, although the inter-station (station-to-station) error is not negligible, being of several magnitude units for some stations. Moreover, it is shown that the intra-station component of error is not random but depends on the travel path from a source area to a given station. Consequently, empirical corrections may be used to account for heterogeneities of the real Earth not considered in the theoretical calculations of the spectral amplitude decay functions used to correct the recorded data for propagation effects.
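For orientation, the energy magnitude is commonly tied to the radiated energy ES by the Choy-Boatwright convention, ME = 2/3 · (log10 ES − 4.4) with ES in joules; the exact calibration used in the work may differ, so the sketch below only illustrates the scale:

```python
# Energy magnitude in the standard (Choy & Boatwright) convention,
# ME = 2/3 * (log10(ES) - 4.4), radiated energy ES in joules.
# This is the common textbook form; the thesis' calibration may differ.
import math

def energy_magnitude(es_joules):
    return (2.0 / 3.0) * (math.log10(es_joules) - 4.4)

# A tenfold increase in radiated energy raises ME by 2/3 of a unit:
print(round(energy_magnitude(1e17) - energy_magnitude(1e16), 3))  # → 0.667
```

Because ES weights the high-frequency part of the spectrum, ME and the moment magnitude MW of the same event can differ noticeably, which is exactly why their combined use is advocated.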
The seismically active Alborz mountains of northern Iran are an integral part of the Arabia-Eurasia collision. Linked strike-slip and thrust/reverse-fault systems in this mountain belt are characterized by slow loading rates, and large earthquakes are highly disparate in space and time. As in other intracontinental deformation zones, such a pattern of tectonic activity is still insufficiently understood, because recurrence intervals between seismic events may be on the order of thousands of years and are thus beyond the resolution of short-term measurements based on GPS or instrumentally recorded seismicity. This study bridges the gap between deformation processes on different time scales. In particular, my investigation focuses on deformation on the Quaternary time scale, beyond present-day deformation rates, and uses present-day and paleotectonic characteristics to model fault behavior. The study includes data based on structural and geomorphic mapping, fault-kinematic analysis, DEM-based morphometry, and numerical fault-interaction modeling. In order to better understand the long- to short-term behavior of such complex fault systems, I used geomorphic surfaces as strain markers and dated fluvial and alluvial surfaces using terrestrial cosmogenic nuclides (TCN; 10Be, 26Al, 36Cl) and optically stimulated luminescence (OSL). My investigation focuses on the seismically active Mosha-Fasham fault (MFF) and the seismically virtually inactive North Tehran Thrust (NTT), adjacent to the Tehran metropolitan area. Fault-kinematic data reveal an early mechanical linkage of the NTT and MFF during a dextral transpressional stage, when the shortening direction was oriented northwest. This regime was superseded by Pliocene-to-Recent NE-oriented shortening, which caused thrusting and sinistral strike-slip faulting.
In the course of this kinematic changeover, the NTT and MFF were reactivated and incorporated into a nascent transpressional duplex, which has significantly affected landscape evolution in this part of the range. Two of three distinctive features that characterize topography and relief in the study area can be directly related to their location inside the duplex array and are thus linked to interaction between the eastern MFF and the NTT, and between the western MFF and the Taleghan fault, respectively. To account for topography inferred to be inherited from the previous dextral-transpression regime, a new concept of tectonic landscape characterization has been used. Accordingly, I define simple landscapes as environments that have developed under the influence of a sustained tectonic regime. In contrast, composite landscapes contain topographic elements inherited from previous tectonic conditions that are inconsistent with the regional present-day stress field and kinematic style. Using numerical fault-interaction modeling with different tectonic boundary conditions, I calculated synoptic snapshots of artificial topography to compare them with the real topographic metrics. In the Alborz mountains, E-W striking faults are favorably oriented to accommodate the entire range of NW- to NE-directed compression; these faults show the highest total displacement, which might indicate sustained faulting under changing boundary conditions. In contrast to the fault systems within and at the flanks of the Alborz mountains, Quaternary deformation in the adjacent Tehran plain is characterized by oblique motion and thrust and strike-slip fault systems. In this morphotectonic province, fault-propagation folding along major faults, limited strike-slip motion, and en-échelon arrays of second-order upper-plate thrusts are typical.
While the Tehran plain is characterized by young deformation phenomena, the majority of faulting took place in the early stages of the Quaternary and during late Pliocene time. TCN dating, performed for the first time on geomorphic surfaces in the Tehran plain, revealed that the oldest two phases of alluviation (units A and B) must be older than the late Pleistocene. While urban development in Tehran increasingly covers and obliterates the active fault traces, the present-day kinematic style, the vestiges of formerly undeformed Quaternary landforms, and paleo-earthquake indicators from the last millennia attest to the threat that these faults and their related structures pose to the megacity.
Phase space reconstruction is a method that allows one to reconstruct the phase space of a system using only a one-dimensional time series as input. It can be used for calculating Lyapunov exponents and detecting chaos, it helps in understanding complex dynamics and their behavior, and it can reproduce data that were not measured. There are many different methods that produce correct reconstructions, such as time delay, Hilbert transformation, derivation and integration. The most widely used is time delay, but each method has particular properties that are useful in different situations; hence, every reconstruction method has situations in which it is the best choice. Looking at all these different methods, the questions are: why can all these different-looking methods be used for the same purpose, and is there any connection between them? The answer is found in the frequency domain: under a Fourier transformation, all these methods take a similar shape: every presented reconstruction method can be described as a multiplication in the frequency domain with a frequency-dependent reconstruction function. This structure is also known as a filter. From this point of view, every reconstructed dimension can be seen as a filtered version of the measured time series: it contains the original data but applies a new focus, amplifying some parts and reducing others. Furthermore, I show that not every function can be used for reconstruction. In the thesis, three characteristics are identified which are mandatory for a reconstruction function. Under these restrictions one obtains a whole family of new reconstruction functions, making it possible to reduce noise within the reconstruction process itself, or to exploit the advantages of known reconstruction methods while suppressing their unwanted characteristics.
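The most common of these methods, time-delay reconstruction, maps a scalar series x into vectors (x[t], x[t+τ], ..., x[t+(m-1)τ]). A minimal sketch (a sine wave as a toy signal, not the systems studied in the thesis):

```python
# Time-delay phase space reconstruction (Takens-style embedding): each
# reconstructed state is an m-dimensional vector of lagged samples.
import math

def delay_embed(series, m, tau):
    """Return the list of m-dimensional delay vectors of a scalar series."""
    n = len(series) - (m - 1) * tau
    return [tuple(series[i + k * tau] for k in range(m)) for i in range(n)]

x = [math.sin(0.1 * t) for t in range(1000)]
vectors = delay_embed(x, m=2, tau=16)  # tau near a quarter period unfolds the orbit
# For a sine wave the 2-D reconstruction traces out a closed loop, recovering
# the circular phase space orbit of a harmonic oscillator from one coordinate.
```

In the frequency-domain view described above, the delayed coordinate is the original series multiplied by e^{iωτ}, i.e. an all-pass filter that only shifts phase.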
In cooperation with industry partners, the Hasso Plattner Institute (HPI) is establishing an "HPI Future SOC Lab", which provides a complete infrastructure of highly complex on-demand systems on the latest massively parallel (multi-/many-core) hardware, not yet available on the market, with enormous main-memory capacities and software designed for it. The HPI Future SOC Lab features prototypical 4- and 8-way Intel 64-bit server systems from Fujitsu and Hewlett-Packard with 32 and 64 cores, respectively, and 1-2 TB of main memory. High-performance storage systems from EMC² and virtualization solutions from VMware are also in use. SAP provides its latest Business by Design (ByD) software, and complex real-world company data are also available and can be accessed for research purposes. Interested scientists from university and non-university research institutions can use the HPI Future SOC Lab to investigate future highly complex IT systems, to develop new ideas, data structures, and algorithms, and to pursue them through to practical testing. This technical report presents first results of the research projects launched at the opening of the Future SOC Lab in June 2010. Selected projects presented their results on October 27, 2010, at the Future SOC Lab Day event.
This thesis presents methods, techniques and tools for developing three-dimensional representations of tactical intelligence assessments. Techniques from GIScience are combined with crime mapping methods. The methods applied in this study span spatio-temporal GIS analysis, 3D geovisualisation, and GIS programming. The work presents methods to enhance digital three-dimensional city models with application-specific thematic information. This information facilitates further geovisual analysis, for instance, estimates of urban risk exposure. Specific methods and workflows are developed to facilitate the integration of spatio-temporal crime scene analysis results into 3D tactical intelligence assessments. The analysis comprises hotspot identification with kernel-density-estimation techniques (KDE), LISA-based verification of KDE hotspots, geospatial hotspot area characterisation, and repeat victimisation analysis. To visualise the findings of such extensive geospatial analysis, three-dimensional geovirtual environments are created. Workflows are developed to integrate analysis results into these environments and to combine them with additional geospatial data. The resulting 3D visualisations allow for an efficient communication of complex findings of geospatial crime scene analysis.
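A kernel-density-estimation hotspot analysis of the kind mentioned above can be sketched in a few lines. The coordinates below are synthetic stand-ins for crime scene locations, and the 95th-percentile threshold is an illustrative choice, not the thesis' parameterization:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Hypothetical incident coordinates: two clusters standing in for
# crime scene locations (the thesis works with real police data).
pts = np.vstack([rng.normal([2.0, 2.0], 0.3, (80, 2)),
                 rng.normal([6.0, 5.0], 0.5, (40, 2))])

kde = gaussian_kde(pts.T)                        # kernel density estimate
gx, gy = np.mgrid[0:8:80j, 0:8:80j]              # evaluation grid
dens = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)

# Flag hotspot cells above the 95th density percentile (illustrative).
hotspot = dens > np.percentile(dens, 95)
```

The resulting boolean grid marks candidate hotspot areas, which in the thesis are subsequently verified with LISA statistics and draped onto the 3D city model.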
Russian Jews who left the former Soviet Union (FSU) and its successor states after 1989 are considered one of the best-qualified migrant groups worldwide. In their preferred countries of destination (Israel, the United States, and Germany), they are well known for cultural self-assertion, strong upward social mobility, and manifold forms of self-organisation and empowerment. Applying Suzanne Keller's sociological model of "strategic elites", it quickly becomes clear that a large share of the Russian Jewish immigrants in Germany and Israel belong to various elites by virtue of their qualifications and high positions in the FSU, first of all professional, cultural, and intellectual elites ("Intelligentsija"). The study aimed to find out to what extent developments of cultural self-assertion, of local and transnational networking, and of ethno-cultural empowerment are supported or even initiated by the immigrated (Russian Jewish) elites. The empirical basis for this study was 35 semi-structured expert interviews with Russian Jews in both countries (Israel and Germany), most of them scholars, artists, writers, journalists and publicists, teachers, engineers, social workers, students, and politicians. The qualitative analysis of the interview material from Israel and Germany revealed many commonalities but also significant differences. Almost all of the interview partners remained linked to Russian-speaking networks and communities, irrespective of their success (or failure) in integrating into the host societies. Many of them showed self-confidence with regard to the group's remarkable professional resources (70% of the adults hold academic degrees), and the cultural, professional, and political potential of the FSU immigrants was usually considered equal to that of the host population(s). Accordingly, the immigrants' interest in direct societal participation and social acceptance was high. Assimilation was not an option.
For the Russian Jewish "sense of community" in Israel and Germany, the Russian language, the arts, and Russian culture in general have remained of key importance. The immigrants perceive no insuperable contradiction in feeling "Russian" in cultural terms, "Jewish" in ethnic terms, and "Israeli" or "German" in national terms, a typical case of additive identity formation that is also characteristic of these immigrants' elites. Tendencies towards ethno-cultural self-organisation, which do not necessarily hinder impressive individual careers in the new surroundings, are more noticeable in Israel. There, part of the Russian Jewish elites has responded to social exclusion, discrimination, or blocking by the local population (and local elites) with intense efforts to build Russian Jewish associations, media, educational institutions, and even political parties. All in all, the results of this study strongly contradict popular stereotypes of the Russian Jewish immigrant as a pragmatic, passive "Homo Sovieticus". Among the interview partners in this study, civil-societal commitment was not the exception but the rule. Traditional activities of the early, legendary Russian "Intelligentsija" were marked by smooth transitions between arts, education, and societal or political commitment, and there seem to be certain continuities of this self-demand in some of the Russian Jewish groups in Israel. Nothing comparable, however, could be drawn from the interviews with the immigrants in Germany. Thus, the myth and self-demand of the Russian "Intelligentsija" is irrelevant for collective discourses among Russian Jews in Germany.
The Greenland Ice Sheet (GIS) contains enough water volume to raise global sea level by over 7 meters. It is a relic of past glacial climates that could be strongly affected by a warming world. Several studies have been performed to investigate the sensitivity of the ice sheet to changes in climate, but large uncertainties in its long-term response still exist. In this thesis, a new approach has been developed and applied to modeling the GIS response to climate change. The advantages compared to previous approaches are (i) that it can be applied over a wide range of climatic scenarios (both in the deep past and the future), (ii) that it includes the relevant feedback processes between the climate and the ice sheet and (iii) that it is highly computationally efficient, allowing simulations over very long timescales. The new regional energy-moisture balance model (REMBO) has been developed to model the climate and surface mass balance over Greenland and it represents an improvement compared to conventional approaches in modeling present-day conditions. Furthermore, the evolution of the GIS has been simulated over the last glacial cycle using an ensemble of model versions. The model performance has been validated against field observations of the present-day climate and surface mass balance, as well as paleo information from ice cores. The GIS contribution to sea level rise during the last interglacial is estimated to be between 0.5-4.1 m, consistent with previous estimates. The ensemble of model versions has been constrained to those that are consistent with the data, and a range of valid parameter values has been defined, allowing quantification of the uncertainty and sensitivity of the modeling approach. Using the constrained model ensemble, the sensitivity of the GIS to long-term climate change was investigated. 
It was found that the GIS exhibits hysteresis behavior (i.e., it is multi-stable under certain conditions), and that a temperature threshold exists above which the ice sheet transitions to an essentially ice-free state. The threshold in global temperature is estimated to be in the range of 1.3-2.3°C above preindustrial conditions, significantly lower than previously believed. The timescale of total melt scales non-linearly with the overshoot above the temperature threshold: a 2°C anomaly causes the ice sheet to melt in about 50,000 years, but a 6°C anomaly will melt it in less than 4,000 years. The meltback of the ice sheet was found to become irreversible once a certain fraction of the ice sheet has been lost, although this point of irreversibility also depends on the temperature anomaly.
Based on technological advances made within the past decades, ground-penetrating radar (GPR) has become a well-established, non-destructive subsurface imaging technique. Catalyzed by recent demands for high-resolution, near-surface imaging (e.g., the detection of unexploded ordnance and subsurface utilities, or hydrological investigations), the quality of today's GPR-based, near-surface images has matured significantly. At the same time, the analysis of oil- and gas-related reflection seismic data sets has experienced significant advances. Considering the sensitivity of attribute analysis to data positioning in general, and of multi-trace attributes in particular, trace positioning accuracy is of major importance for the success of attribute-based analysis flows. Therefore, to study the feasibility of GPR-based attribute analyses, I first developed and evaluated a real-time GPR surveying setup based on a modern tracking total station (TTS). Combining the capability of current GPR systems to fuse global positioning system (GPS) and geophysical data in real time, the ability of modern TTS systems to generate a GPS-like positional output, and wireless data transmission using radio modems results in a flexible and robust surveying setup. To assess the feasibility of this setup, I studied its major limitations: system cross-talk and data delays known as latencies. Experimental studies showed that when a minimal distance of ~5 m between the GPR and the TTS system is maintained, the signal-to-noise ratio of the GPR data acquired with radio communication equals that without radio communication. To address the limitations imposed by system latencies, inherent to all real-time data fusion approaches, I developed a novel correction (calibration) strategy to assess the gross system latency and to correct for it. This resulted in the centimeter trace accuracy required by high-frequency and/or three-dimensional (3D) GPR surveys.
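Once the gross system latency is known, the correction itself amounts to evaluating the position stream at each trace's trigger time minus that latency. A minimal sketch under assumed data layouts (all names are illustrative; the thesis' calibration strategy for estimating the latency is more involved):

```python
import numpy as np

def correct_positions(trace_times, pos_times, positions, latency):
    """Assign each GPR trace the TTS position valid at
    (trace time - system latency), by linear interpolation.
    Argument names and layout are illustrative, not from the thesis."""
    t = np.asarray(trace_times) - latency
    x = np.interp(t, pos_times, positions[:, 0])
    y = np.interp(t, pos_times, positions[:, 1])
    return np.column_stack([x, y])
```

For a platform moving at 1 m/s along x, a trace triggered at t = 5 s with a 0.5 s latency is assigned the position recorded at t = 4.5 s, i.e. x = 4.5 m.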
Having introduced this flexible high-precision surveying setup, I successfully demonstrated the application of attribute-based processing to GPR-specific problems, which may differ significantly from the geological ones typically addressed by the oil and gas industry using seismic data. In this thesis, I concentrated on archaeological and subsurface utility problems, as they represent typical near-surface geophysical targets. Enhancing 3D archaeological GPR data sets using a dip-steered filtering approach, followed by the calculation of coherency and similarity, allowed me to conduct subsurface interpretations far beyond those obtained by classical time-slice analyses. I could show that incorporating additional data sets (magnetic and topographic) and attributes derived from them can further improve the interpretation. In a case study, such an approach revealed the complementary nature of the individual data sets and, for example, allowed conclusions to be drawn about the source locations of magnetic anomalies by concurrently analyzing GPR time/depth slices. In addition to archaeological targets, subsurface utility detection and characterization is a steadily growing field of application for GPR. I developed a novel attribute called depolarization. Incorporating geometrical and physical feature characteristics into the depolarization attribute allowed me to display the observed polarization phenomena efficiently. Geometrical enhancement makes use of an improved symmetry extraction algorithm based on Laplacian high-boosting, followed by a phase-based symmetry calculation using a two-dimensional (2D) log-Gabor filterbank decomposition of the data volume. To extract the physical information from the dual-component data set, I employed a sliding-window principal component analysis.
The combination of the geometrically derived feature angle and the physically derived polarization angle allowed me to enhance the polarization characteristics of subsurface features. Ground-truth information obtained by excavations confirmed this interpretation. In the future, including cross-polarized antenna configurations in the processing scheme may further improve the quality of the depolarization attribute. In addition to polarization phenomena, the time-dependent frequency evolution of GPR signals might hold further information on the subsurface architecture and/or material properties. High-resolution, sparsity-promoting decomposition approaches have recently had a significant impact on the image and signal processing community. In this thesis, I introduced a modified tree-based matching pursuit approach. Based on different synthetic examples, I showed that this approach clearly outperforms other commonly used time-frequency decomposition approaches with respect to both time and frequency resolution. Apart from investigating tuning effects in GPR data, I also demonstrated the potential of high-resolution sparse decompositions for advanced data processing. Frequency modulation of the individual atoms themselves makes it possible to correct frequency attenuation effects efficiently and to improve resolution by shifting the average frequency level. GPR-based attribute analysis is still in its infancy. Considering the growing number of 3D GPR studies, there will certainly be an increasing demand for improved subsurface interpretations in the future. Similar to the assessment of quantitative reservoir properties through the combination of 3D seismic attribute volumes with sparse well-log information, combined parameter estimation represents a further step in realizing the potential of attribute-driven GPR data analyses.
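The physical part of the depolarization attribute rests on a sliding-window principal component analysis of the dual-component data. A hedged sketch of the core idea (window length, step, and angle convention are assumptions, not the thesis' parameters):

```python
import numpy as np

def polarization_angles(comp1, comp2, win=32, step=16):
    """Sliding-window PCA on a dual-component trace: for each window,
    return the orientation (degrees, 0-180) of the principal
    eigenvector of the 2x2 sample covariance matrix, i.e. the
    dominant polarization direction in that window."""
    angles = []
    for i in range(0, len(comp1) - win + 1, step):
        seg = np.vstack([comp1[i:i + win], comp2[i:i + win]])
        _, vecs = np.linalg.eigh(np.cov(seg))   # eigenvalues ascending
        px, py = vecs[:, -1]                    # dominant polarization axis
        angles.append(np.degrees(np.arctan2(py, px)) % 180.0)
    return np.array(angles)
```

A signal present only on the first component yields 0°, while equal energy on both components in phase yields 45°, so the angle traces how the polarization rotates along the record.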
Earthquake faults interact with each other in many different ways, and hence earthquakes cannot be treated as individual, independent events. Although earthquake interactions generally lead to a complex evolution of the crustal stress field, this does not necessarily mean that earthquake occurrence becomes random and completely unpredictable. In particular, the interplay between earthquakes can explain pronounced characteristics such as periods of accelerated and depressed seismicity (seismic quiescence) as well as spatiotemporal earthquake clustering (swarms and aftershock sequences). Ignoring the time-dependence of the process by looking at time-averaged values, as is largely done in standard procedures of seismic hazard assessment, can thus lead to erroneous estimates not only of the activity level of future earthquakes but also of their spatial distribution. There is therefore an urgent need for applicable time-dependent models. In my work, I aimed at a better understanding and characterization of earthquake interactions in order to improve seismic hazard estimations. For this purpose, I studied seismicity patterns on spatial scales ranging from hydraulic fracture experiments (meters to kilometers) to fault-system size (hundreds of kilometers), while the temporal scale of interest varied from immediate aftershock activity (minutes to months) to seismic cycles (tens to thousands of years). My studies revealed a number of new characteristics of fluid-induced and stress-triggered earthquake clustering as well as precursory phenomena in earthquake cycles. The analysis of earthquake and deformation data was accompanied by statistical and physics-based model simulations, which allow a better understanding of the role of structural heterogeneities, stress changes, afterslip, and fluid flow.
Finally, new strategies and methods have been developed and tested which help to improve seismic hazard estimations by taking the time-dependence of the earthquake process appropriately into account.
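To illustrate the contrast between time-independent and time-dependent occurrence models, one can fit an exponential (Poisson) and a Weibull renewal model to a set of interevent times and compare them by BIC. The data below are synthetic, and the comparison is only a sketch of such model selection, not a result from this work:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic quasi-periodic interevent times (years): a Weibull renewal
# process with shape > 1, i.e. genuinely time-dependent occurrence.
dt = stats.weibull_min.rvs(2.5, scale=100, size=300, random_state=rng)

def bic(loglik, n_params, n_obs):
    """Bayesian Information Criterion (lower is better)."""
    return n_params * np.log(n_obs) - 2 * loglik

# Time-independent model: exponential interevent times (Poisson process).
ll_exp = stats.expon.logpdf(dt, scale=dt.mean()).sum()

# Time-dependent model: Weibull renewal (shape c, scale), location fixed at 0.
c, loc, scale = stats.weibull_min.fit(dt, floc=0)
ll_wei = stats.weibull_min.logpdf(dt, c, loc, scale).sum()

weibull_favored = bic(ll_wei, 2, dt.size) < bic(ll_exp, 1, dt.size)
```

For quasi-periodic data such as these, the BIC favors the Weibull model despite its extra parameter, which is the kind of evidence used to reject a purely time-independent hazard description.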
This work presents the development of entropy-elastic gelatin-based networks in the form of films or scaffolds. The materials have good prospects for biomedical applications, especially in the context of bone regeneration. Entropy-elastic gelatin-based hydrogel films with varying crosslinking densities were prepared with tailored mechanical properties. Gelatin was covalently crosslinked above its sol-gel transition, which suppressed the helicity of the gelatin chains. Hexamethylene diisocyanate (HDI) or ethyl ester lysine diisocyanate (LDI) was applied as chemical crosslinker, and the reaction was conducted either in dimethyl sulfoxide (DMSO) or water. Amorphous films were prepared, as confirmed by wide-angle X-ray scattering (WAXS), with tailorable degrees of swelling (Q: 300-800 vol.%) and wet-state Young's moduli (E: 70-740 kPa). Model reactions showed that the crosslinking reaction resulted in a combination of direct crosslinks (3-13 mol.%), grafting (5-40 mol.%), and blending of oligoureas (16-67 mol.%). The knowledge gained with this bulk material was transferred to an integrated process of foaming and crosslinking to obtain porous 3-D gelatin-based scaffolds. For this purpose, a gelatin solution was foamed in the presence of a surfactant, saponin, and the resulting foam was fixed by chemical crosslinking with a diisocyanate. The amorphous crosslinked scaffolds were synthesized with varied gelatin and HDI concentrations and analyzed in the dry state by micro-computed tomography (µCT; porosity: 65±11–73±14 vol.%) and scanning electron microscopy (SEM; pore size: 117±28–166±32 µm). Subsequently, the work focused on characterizing the gelatin scaffolds under conditions relevant to biomedical applications. The scaffolds showed high water uptake (H: 630-1680 wt.%) with minimal changes in outer dimensions.
Confocal laser scanning microscopy (CLSM) revealed a decreased scaffold pore size (115±47–130±49 µm) upon wetting, which explains the form stability. Wet scaffolds recovered their shape after removal of a compressive stress, while dry scaffolds retained the compressed shape; this was explained by a reduction of the glass transition temperature upon equilibration with water (dynamic mechanical analysis at varied temperature, DMTA). The composition-dependent compression moduli (Ec: 10-50 kPa) were comparable to the bulk micromechanical Young's moduli measured by atomic force microscopy (AFM). The hydrolytic degradation profile could be adjusted, and a controlled decrease of mechanical properties was observed. Partially degraded scaffolds displayed an increased pore size, likely because pore walls disintegrated during degradation and caused pores to merge. The cytotoxicity and immunologic responses of the scaffolds were analyzed. The porous scaffolds enabled proliferation of human dermal fibroblasts within the implants (up to 90 µm depth). Furthermore, indirect eluate tests were carried out with L929 cells to quantify the cytotoxic response of the material. Here, the effects of the sterilization method (ethylene oxide), the crosslinker, and the surfactant were analyzed. Fully cytocompatible scaffolds were obtained by using LDI as crosslinker and PEO40-PPO20-PEO40 as surfactant. These investigations were accompanied by a study of endotoxin contamination of the material. Medical-grade materials (<0.5 EU/mL) were successfully obtained by using low-endotoxin gelatin and performing all synthetic steps in a laminar flow hood.
In the high mountains of Asia, glaciers cover an area of approximately 115,000 km² and constitute one of the largest continental ice accumulations outside Greenland and Antarctica. Their sensitivity to climate change makes them valuable palaeoclimate archives, but also vulnerable to current and predicted global warming. This is a pressing problem, as snow and glacial melt waters are important sources for agriculture and power supply in densely populated regions of south, east, and central Asia. Successful prediction of the glacial response to climate change in Asia, and mitigation of the socioeconomic impacts, requires profound knowledge of the climatic controls and dynamics of Asian glaciers. However, due to their remoteness and difficult accessibility, ground-based studies are rare as well as temporally and spatially limited, and we therefore lack basic information on the vast majority of these glaciers. In this thesis, I employ different methods to assess the dynamics of Asian glaciers on multiple time scales. First, I tested a method for precise satellite-based measurement of glacier-surface velocities and conducted a comprehensive regional survey of glacial flow and terminus dynamics of Asian glaciers between 2000 and 2008. This unprecedented dataset provides unique insights into the contrasting topographic and climatic controls of glacial flow velocities across the Asian highlands. The data document disparate recent glacial behavior between the Karakoram and the Himalaya, which I attribute to the competing influence of the mid-latitude westerlies during winter and the Indian monsoon during summer. Second, I tested whether such climate-related longitudinal differences in glacial behavior also prevail on longer time scales and potentially account for observed regionally asynchronous glacial advances.
I used cosmogenic nuclide surface exposure dating of erratic boulders on moraines to obtain a glacial chronology for the upper Tons Valley, situated in the headwaters of the Ganges River. This area lies in the transition zone from monsoonal to westerly moisture supply and is therefore ideal for examining the influence of these two atmospheric circulation regimes on glacial advances. The new glacial chronology documents multiple glacial oscillations during the last glacial termination and during the Holocene, suggesting largely synchronous glacial changes in the western Himalayan region that are related to gradual glacial-interglacial temperature oscillations with superimposed, higher-frequency monsoonal precipitation changes. In a third step, I combined results from short-term satellite-based climate records and surface-velocity-derived ice-flux estimates with topographic analyses to deduce the erosional impact of glaciations on long-term landscape evolution in the Himalayan-Tibetan realm. The results provide evidence for the long-term effects of pronounced east-west differences in glaciation and glacial erosion, depending on climatic and topographic factors. Contrary to common belief, the data suggest that the monsoonal climate in the central Himalaya weakens glacial erosion at high elevations, helping to maintain a steep southern orographic barrier that protects the Tibetan Plateau from lateral destruction. The results of this thesis highlight how climatic and topographic gradients across the high mountains of Asia affect glacier dynamics on time scales ranging from 10^0 to 10^6 years. Glacial response times to climate changes are tightly linked to properties such as debris cover and surface slope, which are controlled by the topographic setting and need to be taken into account when reconstructing mountainous palaeoclimate from glacial histories or assessing the future evolution of Asian glaciers.
Conversely, the regional topographic differences of glacial landscapes in Asia are partly controlled by climatic gradients and the long-term influence of glaciers on the topographic evolution of the orogenic system.
In current practice, business process modeling is done by trained method experts. Domain experts are interviewed to elicit their process knowledge but are not involved in the modeling itself. We created a haptic toolkit for process modeling that can be used in process elicitation sessions with domain experts, hypothesizing that this leads to more effective process elicitation. This paper breaks down "effective elicitation" into 14 operationalized hypotheses, which are assessed in a controlled experiment using questionnaires, process model feedback tests, and video analysis. The experiment compares our approach to structured interviews in a repeated-measures design. We conducted the experiment with 17 student clerks from a trade school, who represent potential users of the tool. Six of the fourteen hypotheses showed a significant difference due to the method applied. Subjects reported more fun and more insight into process modeling with tangible media. Video analysis showed significantly more reviews and corrections applied during process elicitation; moreover, people took more time to talk and think about their processes. We conclude that tangible media creates a different working mode for people in process elicitation, with fun, new insights, and instant feedback on preliminary results.
Complex network theory provides an elegant and powerful framework for statistically investigating the topology of local and long-range dynamical interrelationships, i.e., teleconnections, in the climate system. Employing a refined methodology relying on linear and nonlinear measures of time series analysis, the intricate correlation structure within a multivariate climatological data set is cast into network form. Within this graph-theoretical framework, vertices are identified with grid points of the data set, each representing a region on the Earth's surface, and edges correspond to strong statistical interrelationships between the dynamics on pairs of grid points. The resulting climate networks are neither perfectly regular nor completely random, but display the intriguing and nontrivial characteristics of complexity commonly found in real-world networks such as the internet, citation and acquaintance networks, food webs, and cortical networks in the mammalian brain. Among other interesting properties, climate networks exhibit the "small-world" effect and possess a broad degree distribution with dominating super-nodes as well as a pronounced community structure. We have performed an extensive and detailed graph-theoretical analysis of climate networks on the global topological scale, focusing on the flow and centrality measure betweenness, which is defined locally at each vertex but includes global topological information by relying on the distribution of shortest paths between all pairs of vertices in the network. The betweenness centrality field reveals a rich internal structure in complex climate networks constructed from reanalysis and atmosphere-ocean coupled general circulation model (AOGCM) surface air temperature data.
Our novel approach uncovers an elaborately woven meta-network of highly localized channels of strong dynamical information flow, which we relate to global surface ocean currents and dub the backbone of the climate network, in analogy to the homonymous data highways of the internet. This finding points to a major role of the oceanic surface circulation in coupling and stabilizing the global temperature field in the long-term mean (140 years for the model run and 60 years for reanalysis data). Carefully comparing the backbone structures detected in climate networks constructed using linear Pearson correlation and nonlinear mutual information, we argue that the high sensitivity of betweenness to small changes in network structure may allow detection of the footprints of strongly nonlinear physical interactions in the climate system. The results presented in this thesis are thoroughly founded and substantiated using a hierarchy of statistical significance tests on the level of time series and networks, i.e., tests based on time series surrogates as well as network surrogates. This is particularly relevant when working with real-world data. Specifically, we developed new types of network surrogates to include the additional constraints imposed by the spatial embedding of vertices in a climate network. Our methodology is of potential interest for a broad audience within the physics community and various applied fields, because it is universal in the sense of being valid for any spatially extended dynamical system. It can help to understand the localized flow of dynamical information in any such system by combining multivariate time series analysis, a complex network approach, and the information flow measure betweenness centrality. Possible fields of application include fluid dynamics (turbulence), plasma physics, and biological physics (population models, neural networks, cell models).
Furthermore, the climate network approach is equally applicable to experimental data and model simulations and hence introduces a novel perspective on model evaluation and data-driven model building. Our work is timely in the context of the current debate on climate change within the scientific community, since it makes it possible to assess the regional vulnerability and stability of the climate system from a new perspective, relying on global and not only regional knowledge. The methodology developed in this thesis hence has the potential to contribute substantially to understanding the local effects of extreme events and tipping points in the Earth system within a holistic global framework.
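The central network measure of this work, shortest-path betweenness, is readily computed with standard graph libraries. A minimal sketch on a small-world graph standing in for a climate network (the graph parameters and the backbone threshold are illustrative choices, not those of the thesis):

```python
import networkx as nx

# Watts-Strogatz small-world graph as a stand-in for a climate
# network built by thresholding statistical interrelationships
# between grid-point time series.
G = nx.watts_strogatz_graph(n=200, k=6, p=0.1, seed=42)

# Shortest-path betweenness: local per-vertex value carrying
# global topological information (normalized to [0, 1]).
bc = nx.betweenness_centrality(G)

# "Backbone" candidates: vertices whose betweenness clearly exceeds
# the network mean, i.e. highly localized channels of information flow.
mean_bc = sum(bc.values()) / len(bc)
backbone = [v for v, b in bc.items() if b > 2 * mean_bc]
```

In the thesis, the analogous high-betweenness channels computed on climate networks are the structures related to surface ocean currents.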
On 20 January 1991, the Latvian people defended the Latvian political elite against Soviet OMON troops in order to achieve independence. After this impressive display of civil society, however, public engagement subsided; the level of mobilization and satisfaction with the functioning of democracy has since remained rather weak. The 2008 referendum initiated by the trade unions, which sought the right of the people to dissolve parliament, can be read as a sign that something is on the move. This paper tries to give an impression of the state of civil society in terms of participation in the decision-making process. The focus is on NGOs: what is their legal base, and which problems do they face? To learn more about the situation, interviews were conducted with representatives of NGOs from different sectors, such as community development, social inclusion, gender advocacy, and environment and sustainable development. The research suggests that civil society has made some steps forward but is still struggling with a high level of corruption, a lack of interest from both the elite and ordinary people, and insecure finances.
Flood design necessitates discharge estimates for large recurrence intervals. However, in a flood frequency analysis, the uncertainty of discharge estimates increases with higher recurrence intervals, particularly due to the small number of available flood data. Furthermore, traditional distribution functions increase without limit and without consideration of an upper-bound discharge. Hence, additional information needs to be considered that is representative of high recurrence intervals. Envelope curves, which bound the maximum observed discharges of a region, are an adequate regionalisation method to provide additional spatial information for the upper tail of a distribution function. Probabilistic regional envelope curves (PRECs) are an extension of the traditional empirical envelope curve approach, in which a recurrence interval is estimated for a regional envelope curve (REC). The REC is constructed for a homogeneous pooling group of sites, and the estimation of its recurrence interval is based on the effective sample years of data, considering the intersite dependence among all sites of the pooling group. The core idea of this thesis was to improve discharge estimates for high recurrence intervals by integrating empirical and probabilistic regional envelope curves into the flood frequency analysis. To this end, the method of probabilistic regional envelope curves was investigated in detail. Several pooling groups were derived by modifying candidate sets of catchment descriptors and the settings of two different pooling methods, and these were used to construct PRECs. A sensitivity analysis shows the variability of discharges and recurrence intervals for a given site under the different assumptions. The unit flood of record, which governs the intercept of a PREC, was found to be the most influential factor. By separating the catchments into nested and unnested pairs, the calculation algorithm for the effective sample years of data was refined.
In this way, the estimation of the recurrence intervals was improved; the use of different parameter sets for nested and unnested pairs of catchments is therefore recommended. In the second part of this thesis, PRECs were introduced into a distribution function. Whereas in the traditional approach only discharge values are used, PRECs provide a discharge and its corresponding recurrence interval. Hence, a novel approach was developed which allows a combination of the PREC results with the traditional systematic flood series while taking the PREC recurrence interval into consideration. An adequate mixed bounded distribution function was presented, which in addition to the PREC results also uses an upper bound discharge derived from an empirical envelope curve. In this way, two types of additional information representative of the upper tail of a distribution function were included in the flood frequency analysis. The integration of both types of additional information leads to improved discharge estimation for recurrence intervals between 100 and 1000 years.
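The empirical envelope curve described above can be pictured as a bounding line in log-log space of specific record discharge versus catchment area. The slope value, function names, and toy data in the sketch below are illustrative assumptions, not parameters taken from the thesis:

```python
import math

def envelope_intercept(areas_km2, record_floods_m3s, slope=-0.4):
    """Intercept a of an envelope curve log10(q) = a + slope * log10(A),
    where q = Q/A is the specific flood of record, chosen so that the
    curve bounds every observation in the pooling group."""
    return max(
        math.log10(q / area) - slope * math.log10(area)
        for area, q in zip(areas_km2, record_floods_m3s)
    )

def envelope_discharge(area_km2, intercept, slope=-0.4):
    """Bounding peak discharge (m^3/s) predicted by the envelope curve."""
    specific = 10 ** (intercept + slope * math.log10(area_km2))
    return specific * area_km2

# Toy pooling group: (catchment area in km^2, flood of record in m^3/s)
areas = [10.0, 100.0, 1000.0]
records = [50.0, 200.0, 800.0]
a = envelope_intercept(areas, records)
```

By construction, the curve lies on or above every site's record and touches the site with the unit flood of record, which is exactly why that value governs the intercept.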
This is the 13th issue of the working paper series Interdisciplinary Studies on Information Structure (ISIS) of the Sonderforschungsbereich (SFB) 632. It is the first part of a series of Linguistic Fieldnote issues which present data collected by members of different projects of the SFB during fieldwork on various languages or dialects spoken worldwide. This part of the Fieldnote Series is dedicated to data from African languages. It contains contributions by Mira Grubic (A5) on Ngizim, and Susanne Genzel & Frank Kügler (D5) on Akan. The papers allow insights into various aspects of the elicitation of formal correlates of focus and related phenomena in different African languages investigated by the SFB in the second funding phase, especially in the period between 2007 and 2010.
Companies develop process models to explicitly describe their business operations. At the same time, business operations, i.e., business processes, must adhere to various types of compliance requirements. Regulations, e.g., the Sarbanes-Oxley Act of 2002, internal policies, and best practices are just a few sources of compliance requirements. In some cases, non-adherence to compliance requirements makes the organization subject to legal punishment. In other cases, non-adherence to compliance leads to loss of competitive advantage and thus loss of market share. Unlike the classical domain-independent behavioral correctness of business processes, compliance requirements are domain-specific. Moreover, compliance requirements change over time. New requirements might appear due to changes in laws and the adoption of new policies. Compliance requirements are imposed or enforced by different entities that have different objectives behind these requirements. Finally, compliance requirements might affect different aspects of business processes, e.g., control flow and data flow. As a result, it is infeasible to hard-code compliance checks in tools. Rather, a repeatable process of modeling compliance rules and checking them against business processes automatically is needed. This thesis provides a formal approach to support design-time compliance checking of processes. Using visual patterns, it is possible to model compliance requirements concerning control flow, data flow and conditional flow rules. Each pattern is mapped into a temporal logic formula. The thesis addresses the problem of consistency checking among various compliance requirements, as they might stem from divergent sources. The thesis also contributes a way to automatically check compliance requirements against process models using model checking. We show that extra domain knowledge, beyond what is expressed in compliance rules, is needed to reach correct decisions. In case of violations, we are able to provide useful feedback to the user. The feedback is in the form of the parts of the process model whose execution causes the violation. In some cases, our approach is capable of providing automated remediation of the violation.
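The kind of design-time check and violation feedback described above can be illustrated with a drastically simplified sketch: a "response" compliance pattern (whenever activity A executes, activity B must follow before the process ends) checked by exhaustive path exploration on an acyclic process graph. The thesis maps such patterns to temporal logic and uses a model checker; the function name and graph encoding here are illustrative assumptions only:

```python
from collections import defaultdict

def find_response_violation(edges, labels, start, end, trigger, response):
    """Return one execution path violating the rule 'every `trigger`
    is eventually followed by `response`', or None if the model complies.
    `edges` lists (from, to) pairs of an acyclic process graph;
    `labels` maps node ids to activity names."""
    successors = defaultdict(list)
    for u, v in edges:
        successors[u].append(v)

    def explore(node, pending, path):
        # `pending` is True while some trigger still awaits its response.
        pending = (pending or labels.get(node) == trigger) \
            and labels.get(node) != response
        if node == end:
            return path if pending else None
        for nxt in successors[node]:
            witness = explore(nxt, pending, path + [nxt])
            if witness:
                return witness
        return None

    return explore(start, False, [start])
```

On a model with a branch that skips B after A, the returned path is exactly the kind of feedback mentioned in the abstract: the part of the process model whose execution causes the violation.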
This is the 15th issue of the working paper series Interdisciplinary Studies on Information Structure (ISIS) of the Sonderforschungsbereich (SFB) 632. This online version contains the Questionnaire on Focus Semantics contributed by Agata Renans, Malte Zimmermann and Markus Greif, members of Project D2, which investigates information structural phenomena from a typological perspective. The present issue provides a tool for collecting and analyzing natural data with respect to relevant linguistic questions concerning focus types, focus-sensitive particles, and the effects of quantificational adverbs and presupposition on focus semantics. This volume is a supplement to the Reference manual of the Questionnaire on Information Structure, issued by Project D2 in ISIS 4 (2006).
The widespread use of products containing volatile organic compounds (VOCs) has led to general human exposure to these chemicals in workplaces or homes, which is suspected to contribute to the growing incidence of environmental diseases. Since the causal molecular mechanisms for the development of these disorders are not completely understood, the overall objective of this thesis was to investigate VOC-mediated molecular effects on human lung cells in vitro at VOC concentrations comparable to exposure scenarios below current occupational limits. Although differential expression of single proteins in response to VOCs has been reported, effects on complex protein networks (the proteome) have not been investigated. However, this information is indispensable when trying to ascertain a mechanism of VOC action on the cellular level and to establish preventive strategies. For this study, the alveolar epithelial cell line A549 has been used. This cell line, cultured in a two-phase (air/liquid) model, allows the most direct exposure and had been successfully applied for the analysis of inflammatory effects in response to VOCs. Mass spectrometric identification of 266 protein spots provided the first proteomic map of the A549 cell line at this level of detail, which may foster future work with this frequently used cellular model. The distribution of three typical air contaminants, monochlorobenzene (CB), styrene and 1,2-dichlorobenzene (1,2-DCB), between the gas and liquid phases of the exposure model has been analyzed by gas chromatography. The obtained VOC partitioning was in agreement with available literature data. Subsequently, the adapted in vitro system has been successfully employed to characterize the effects of the aromatic compound styrene on the proteome of A549 cells (Chapter 4). Initially, cell toxicity has been assessed in order to ensure that most of the concentrations used in the subsequent proteomic approach were not cytotoxic.
Significant changes in abundance and phosphorylation in the total soluble protein fraction of A549 cells have been detected following styrene exposure. All proteins have been identified using mass spectrometry and the main cellular functions have been assigned. Validation experiments on the protein and transcript levels confirmed the results of the 2-DE experiments. From the results, two main cellular pathways have been identified that were induced by styrene: the cellular oxidative stress response combined with moderate pro-apoptotic signaling. Measurement of cellular reactive oxygen species (ROS) as well as the styrene-mediated induction of oxidative stress marker proteins confirmed the hypothesis of oxidative stress as the main molecular response mechanism. Finally, adducts of cellular proteins with the reactive styrene metabolite styrene 7,8-oxide (SO) have been identified. In particular, the SO adducts observed at both reactive centers of thioredoxin reductase 1, which is a key element in the control of the cellular redox state, may be involved in styrene-induced ROS formation and apoptosis. A similar proteomic approach has been carried out with the halobenzenes CB and 1,2-DCB (Chapter 5). In accordance with previous findings, cell toxicity assessment showed enhanced toxicity compared to that caused by styrene. Significant changes in abundance and phosphorylation of total soluble proteins of A549 cells have been detected following exposure to subtoxic concentrations of CB and 1,2-DCB. All proteins have been identified using mass spectrometry and the main cellular functions have been assigned. As in the styrene experiment, the results indicated two main pathways to be affected in the presence of chlorinated benzenes: cell death signaling and oxidative stress response. The strong induction of pro-apoptotic signaling has been confirmed for both treatments by detection of the cleavage of caspase 3.
Likewise, the induction of redox-sensitive protein species could be correlated with an increased cellular level of ROS observed following CB treatment. Finally, common mechanisms in the cellular response to aromatic VOCs have been investigated (Chapter 6). A similar share (4.6-6.9%) of all quantified protein spots showed differential expression (p<0.05) following cell exposure to styrene, CB or 1,2-DCB. However, no more than three protein spots showed significant regulation in the same direction for all three volatile compounds: voltage-dependent anion-selective channel protein 2, peroxiredoxin 1 and elongation factor 2. Nevertheless, all of these proteins are important molecular targets in stress- and cell death-related signaling pathways.
Large-scale volcanic deformation recently detected by radar interferometry (InSAR) provides new information and thus new scientific challenges for understanding volcano-tectonic activity and magmatic systems. The destabilization of such a system at depth noticeably affects the surrounding environment through magma injection, ground displacement and volcanic eruptions. To determine the spatiotemporal evolution of the Lazufre volcanic area located in the central Andes, we combined short-term ground displacement acquired by InSAR with long-term geological observations. Ground displacement was first detected using InSAR in 1997. By 2008, this displacement affected 1800 km2 of the surface, an area comparable in size to the deformation observed at caldera systems. The original displacement was followed in 2000 by a second, small-scale, neighbouring deformation located on the Lastarria volcano. We performed a detailed analysis of the volcanic structures at Lazufre and found relationships with the volcano deformations observed with InSAR. We infer that these observations are both likely to be the surface expression of a long-lived magmatic system evolving at depth. It is not yet clear whether Lazufre may trigger larger unrest or volcanic eruptions; however, the second deformation detected at Lastarria and the clear increase of the large-scale deformation rate make this an area of particular interest for closer continuous monitoring.
This work describes new concepts for fast switching elements based on the principles of photonics. Waveguides operating in the visible and infrared ranges form the basis of these elements, and transparent polymers doped with dye molecules possessing second-order nonlinear optical properties are proposed as the materials for manufacturing the waveguides. The work shows how nonlinear optical processes in such structures can be driven by electro-optical and opto-optical control signals. We consider the complete fabrication cycle of several types of integrated photonic elements. A theoretical analysis of high-intensity beam propagation in media with second-order optical nonlinearity is performed. Quantitative estimates are made of the conditions necessary for second-order nonlinear optical phenomena to occur, taking into account the properties of the materials used. The work describes the various stages of manufacturing the basic structure of integrated photonics: a planar waveguide. Using the finite element method, the structure of the electromagnetic field inside the waveguide was analysed for different modes. A separate part of the work deals with the creation of composite organic materials with high optical nonlinearity. Using the methods of quantum chemistry, the dependence of the nonlinear properties of dye molecules on their structure was investigated in detail. In addition, various methods of inducing optical nonlinearity in dye-doped polymer films are discussed. For the first time, spatial modulation of the waveguide's nonlinear properties according to a Fibonacci sequence is proposed; this allows several different nonlinear optical processes to be involved simultaneously. The final part of the work describes various designs of integrated optical modulators and switches constructed from organic nonlinear optical waveguides. A practical design of an optical modulator based on a Mach-Zehnder interferometer, fabricated by photolithography on a polymer film, is presented.
This paper opens a series of discussion papers which report on the findings of a research project within the Phare-ACE Programme of the European Union. We, a group of Bulgarian, German, Greek, Polish and Scottish economists and agricultural economists, undertake this research to provide An Integrated Analysis of Industrial Policies and Social Security Systems in Countries in Transition. This paper outlines the basic motivation for such a study.
Foraging in space and time
(2010)
All animals are adapted to the environmental conditions of the habitat they choose to live in. It was the aim of this PhD project to show which behavioral strategies are expressed as mechanisms to cope with the constraints that contribute to the natural selection pressure acting on individuals. For this purpose, small mammals were exposed to different levels and types of predation risk while actively foraging. Individuals were exposed either to different predator types (airborne or ground), or to combinations of both, or to indirect predators (nest predators). Risk was assumed to be distributed homogeneously, so changing the habitat or temporal adaptations were not regarded as potential options. Results show that wild-caught voles have strategic answers to this homogeneously distributed risk, which is perceived by tactile, olfactory or acoustic cues. Thus, they do not have to know an absolute quality (e.g., in terms of the food provisioning and risk levels of all possible habitats), but they can adapt their behavior to the actual circumstances. Deriving uniform risk levels from cues and adjusting activity levels to the perceived risk is an option for dealing with predators of the same size or with unforeseeable attack rates. Experiments showed that as long as there are no safe places or times, it is best to reduce activity and behave as inconspicuously as possible, as long as the costs of missed opportunities do not exceed the benefits of a higher survival probability. Tests showed that these costs apparently grow faster for males than for females, especially in times of inactivity. This is supported by strong predatory pressure on the most active groups of rodents (young males, the sexually active, or dispersers), leading to extremely female-biased operative sex ratios in natural populations. Other groups of animals, those with parental duties such as nest guarding, for example, have to deal with the actual risk in their habitat as well.
Responses to indirect predation pressure were tested using bank vole mothers confronted with a nest predator (Sorex araneus) that posed no actual threat to themselves but to their young. They reduced travelling and concentrated their effort in the presence of shrews, independent of the nutritional provisioning of food by the resource levels varying with season. Additionally, they exhibited nest-guarding strategies by not foraging in the vicinity of the nest site in order to reduce conspicuous scent marks. The repetition of the experiment in summer and autumn showed that changing environmental constraints can have a severe impact on the results of outdoor studies. In our case, changing resource levels changed the type of interaction between the two species. The experiments show that it is important to analyze decision making and optimality models on an individual level, and, when that is not possible (perhaps because of the constraints of field work), groups of animals should be classified using the least common denominator that can be identified (such as sex, age, origin or kinship). This will control for the effects of sex, stage of life history, or the individual's reproductive and nutritional status on decision making, and will narrow the wide behavioral variability associated with the complex term of optimality.
Roughly every third Wikipedia article contains an infobox - a table that displays important facts about the subject in attribute-value form. The schema of an infobox, i.e., the attributes that can be expressed for a concept, is defined by an infobox template. Often, authors do not specify all template attributes, resulting in incomplete infoboxes. With iPopulator, we introduce a system that automatically populates infoboxes of Wikipedia articles by extracting attribute values from the article's text. In contrast to prior work, iPopulator detects and exploits the structure of attribute values for independently extracting value parts. We have tested iPopulator on the entire set of infobox templates and provide a detailed analysis of its effectiveness. For instance, we achieve an average extraction precision of 91% for 1,727 distinct infobox template attributes.
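The attribute-value extraction step described above can be pictured with a minimal sketch. iPopulator itself learns the structure of attribute values from existing infoboxes; the hand-written patterns, attribute names, and sample text below are illustrative assumptions only:

```python
import re

# Hypothetical value-structure patterns for two infobox attributes.
# iPopulator derives such structures automatically from existing
# infoboxes; here they are hard-coded for illustration.
ATTRIBUTE_PATTERNS = {
    "population": re.compile(r"population of ([\d,]+)"),
    "founded": re.compile(r"founded in (\d{4})"),
}

def populate_infobox(article_text):
    """Fill missing infobox attributes by extracting values from text."""
    infobox = {}
    for attribute, pattern in ATTRIBUTE_PATTERNS.items():
        match = pattern.search(article_text)
        if match:
            infobox[attribute] = match.group(1)
    return infobox
```

For an article sentence such as "The town, founded in 1857, has a population of 12,401.", the sketch yields the two attribute values; the real system additionally splits structured values into parts and extracts each part independently.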
Temporal gravimeter observations, used in geodesy and geophysics to study variations of the Earth's gravity field, are influenced by local water storage changes (WSC) and, from this perspective, add noise to the gravimeter signal records. At the same time, the part of the gravity signal caused by WSC may provide substantial information for hydrologists. Water storage is the fundamental state variable of hydrological systems, but comprehensive data on total WSC are practically inaccessible and their quantification is associated with a high level of uncertainty at the field scale. This study investigates the relationship between temporal gravity measurements and WSC for the superconducting gravimeter (SG) of the Geodetic Observatory Wettzell, Germany, in order to reduce the hydrological interfering signal in temporal gravity measurements and to explore the value of temporal gravity measurements for hydrology. A 4D forward model with a spatially nested discretization domain was developed to simulate and calculate the local hydrological effect on the temporal gravity observations. An intensive measurement system was installed at the Geodetic Observatory Wettzell, and WSC were measured in all relevant storage components, namely groundwater, saprolite, soil, top soil and snow storage. The monitoring system also comprised a suction-controlled, weighable, monolith-filled lysimeter, allowing the first-ever comparison of a lysimeter and a gravimeter. Lysimeter data were used to estimate WSC at the field scale in combination with complementary observations and a hydrological 1D model. Total local WSC were derived, uncertainties were assessed, and the hydrological gravity response was calculated from the WSC. A simple conceptual hydrological model was calibrated and evaluated against records of the superconducting gravimeter, soil moisture and groundwater time series.
The model was evaluated by a split-sample test and validated against independently estimated WSC from the lysimeter-based approach. A simulation of the hydrological gravity effect showed that WSC of one meter height along the topography cause a gravity response of 52 µGal, whereas, generally in geodesy, on flat terrain, the same water mass variation causes a gravity change of only 42 µGal (Bouguer approximation). The radius of influence of local water storage variations can be limited to 1000 m, and 50 % to 80 % of the local hydrological gravity signal is generated within a radius of 50 m around the gravimeter. At the Geodetic Observatory Wettzell, WSC in the snow pack, top soil, unsaturated saprolite and fractured aquifer are all important terms of the local water budget. With the exception of snow, all storage components have gravity responses of the same order of magnitude and are therefore relevant for gravity observations. The comparison of the total hydrological gravity response to the gravity residuals obtained from the SG showed similarities in both short-term and seasonal dynamics. However, the results demonstrated the limitations of estimating total local WSC using hydrological point measurements. The results of the lysimeter-based approach showed that gravity residuals are caused to a larger extent by local WSC than previously estimated. A comparison of the results with other methods used in the past to correct temporal gravity observations for the local hydrological influence showed that the lysimeter measurements improved the independent estimation of WSC significantly and thus provided a better way of estimating the local hydrological gravity effect. In the context of hydrological noise reduction, at sites where temporal gravity observations are used for geophysical studies beyond local hydrology, the installation of a lysimeter in combination with complementary hydrological measurements is recommended.
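The 42 µGal flat-terrain figure quoted above follows directly from the Bouguer plate approximation Δg = 2πGρh; a quick numerical check (the 52 µGal value additionally reflects the local topography, which the plate formula cannot reproduce):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
RHO_WATER = 1000.0   # density of water, kg/m^3

def bouguer_plate_ugal(water_height_m):
    """Gravity effect of an infinite flat water slab (Bouguer plate),
    converted from m/s^2 to microGal (1 uGal = 1e-8 m/s^2)."""
    return 2 * math.pi * G * RHO_WATER * water_height_m / 1e-8

# One metre of water on flat terrain:
print(round(bouguer_plate_ugal(1.0)))  # -> 42
```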
From the hydrological viewpoint, using gravimeter data as a calibration constraint improved the model results in comparison to hydrological point measurements. Thanks to their capacity to integrate over different storage components and a larger area, gravimeters provide generalized information on total WSC at the field scale. Due to their integrative nature, gravity data must be interpreted with great care in hydrological studies. However, gravimeters can serve as a novel measurement instrument for hydrology, and the application of gravimeters especially designed to study open research questions in hydrology is recommended.
In the present work, we study wave phenomena in strongly nonlinear lattices. Such lattices are characterized by the absence of classical linear waves. We demonstrate that compactons – strongly localized solitary waves with tails decaying faster than exponential – exist and that they play a major role in the dynamics of the system under consideration. We investigate compactons in different physical setups. One part deals with lattices of dispersively coupled limit cycle oscillators which find various applications in natural sciences such as Josephson junction arrays or coupled Ginzburg-Landau equations. Another part deals with Hamiltonian lattices. Here, a prominent example in which compactons can be found is the granular chain. In the third part, we study systems which are related to the discrete nonlinear Schrödinger equation describing, for example, coupled optical wave-guides or the dynamics of Bose-Einstein condensates in optical lattices. Our investigations are based on a numerical method to solve the traveling wave equation. This results in a quasi-exact solution (up to numerical errors) which is the compacton. Another ansatz which is employed throughout this work is the quasi-continuous approximation where the lattice is described by a continuous medium. Here, compactons are found analytically, but they are defined on a truly compact support. Remarkably, both ways give similar qualitative and quantitative results. Additionally, we study the dynamical properties of compactons by means of numerical simulation of the lattice equations. Especially, we concentrate on their emergence from physically realizable initial conditions as well as on their stability due to collisions. We show that the collisions are not exactly elastic but that a small part of the energy remains at the location of the collision. In finite lattices, this remaining part will then trigger a multiple scattering process resulting in a chaotic state.
This thesis is focused on the electronic, spin-dependent and dynamical properties of thin magnetic systems. Photoemission-related techniques are combined with synchrotron radiation to study the spin-dependent properties of these systems in the energy and time domains. In the first part of this thesis, the strength of electron correlation effects in the spin-dependent electronic structure of ferromagnetic bcc Fe(110) and hcp Co(0001) is investigated by means of spin- and angle-resolved photoemission spectroscopy. The experimental results are compared to theoretical calculations within the three-body scattering approximation and within dynamical mean-field theory, together with one-step model calculations of the photoemission process. This comparison demonstrates that the present state-of-the-art many-body calculations, although improving the description of correlation effects in Fe and Co, give too small mass renormalizations and scattering rates, thus demanding more refined many-body theories including nonlocal fluctuations. In the second part, it is shown in detail, monitored by photoelectron spectroscopy, how graphene can be grown by chemical vapour deposition on the transition-metal surfaces Ni(111) and Co(0001) and intercalated with a monoatomic layer of Au. For both systems, a linear E(k) dispersion of massless Dirac fermions is observed in the graphene pi-band in the vicinity of the Fermi energy. Spin-resolved photoemission from the graphene pi-band shows that the ferromagnetic polarization of graphene/Ni(111) and graphene/Co(0001) is negligible and that, after intercalation of Au, graphene on Ni(111) is spin-orbit split by the Rashba effect. In the last part, a time-resolved x-ray magnetic circular dichroism photoelectron emission microscopy study of a permalloy platelet comprising three cross-tie domain walls is presented.
It is shown how a fast picosecond magnetic response in the precessional motion of the magnetization can be induced by means of a laser-excited photoswitch. From a comparison to micromagnetic calculations, it is demonstrated that the relatively high precessional frequency observed in the experiments is directly linked to the nature of the vortex/antivortex dynamics and its response to the magnetic perturbation. This includes the time-dependent reversal of the vortex core polarization, a process which is beyond the limit of detection in the present experiments.
The fourth volume of the DIGAREC Series holds the proceedings of the conference “Logic and Structure of the Computer Game”, held at the House of Brandenburg-Prussian History in Potsdam on November 6 and 7, 2009. The conference was the first to explicitly address the medial logic and structure of the computer game. The contributions focus on the specific potential for mediation and on the unique form of mediation inherent in digital games. This includes existent, yet scattered, approaches to developing a unique curriculum of game studies. In line with the concept of ‘mediality’, the notions of aesthetics, interactivity, software architecture, interface design, iconicity, spatiality, and rules are of special interest. Presentations were given by invited German scholars and were commented on by international respondents in a dialogical structure.
Preparation and investigation of polymer-foam films and polymer-layer systems for ferroelectrets
(2010)
Piezoelectric materials are very useful for applications in sensors and actuators. In addition to traditional ferroelectric ceramics and ferroelectric polymers, ferroelectrets have recently become a new group of piezoelectrics. Ferroelectrets are functional polymer systems for electromechanical transduction, with elastically heterogeneous cellular structures and internal quasi-permanent dipole moments. The piezoelectricity of ferroelectrets stems from linear changes of the dipole moments in response to external mechanical or electrical stress. Over the past two decades, polypropylene (PP) foams have been investigated with the aim of ferroelectret applications, and some products are already on the market. PP-foam ferroelectrets may exhibit piezoelectric d33 coefficients of 600 pC/N and more. Their operating temperature can, however, not be much higher than 60 °C. Recently developed polyethylene-terephthalate (PET) and cyclo-olefin copolymer (COC) foam ferroelectrets show slightly better d33 thermal stabilities, but usually at the price of smaller d33 values. Therefore, the main aim of this work is the development of new thermally stable ferroelectrets with appreciable piezoelectricity. Physical foaming is a promising technique for generating polymer foams from solid films without any pollution or impurity. Supercritical carbon dioxide (CO2) or nitrogen (N2) is usually employed as the foaming agent due to its good solubility in several polymers. Polyethylene naphthalate (PEN) is a polyester with slightly better properties than PET. A “voiding + inflation + stretching” process has been specifically developed to prepare PEN foams. Solid PEN films are saturated with supercritical CO2 at high pressure and then thermally voided at high temperatures. Controlled inflation (Gas-Diffusion Expansion or GDE) is applied in order to adjust the void dimensions.
Additional biaxial stretching decreases the void heights, since it is known that lens-shaped voids lead to lower elastic moduli and therefore also to stronger piezoelectricity. Both contact and corona charging are suitable for the electric charging of PEN foams. The light emission from the dielectric-barrier discharges (DBDs) can be clearly observed. Corona charging in a gas of high dielectric strength such as sulfur hexafluoride (SF6) results in higher gas-breakdown strength in the voids and therefore increases the piezoelectricity. PEN foams can exhibit piezoelectric d33 coefficients as high as 500 pC/N. Dielectric-resonance spectra show elastic moduli c33 of 1 − 12 MPa, anti-resonance frequencies of 0.2 − 0.8 MHz, and electromechanical coupling factors of 0.016 − 0.069. As expected, it is found that PEN foams show better thermal stability than PP and PET foams. Samples charged at room temperature can be utilized up to 80 − 100 °C. Annealing after charging, or charging at elevated temperatures, may improve thermal stability. Samples charged at suitable elevated temperatures show working temperatures as high as 110 − 120 °C. Acoustic measurements at frequencies of 2 Hz − 20 kHz show that PEN foams are well suited to applications in this frequency range. Fluorinated ethylene-propylene (FEP) copolymers are fluoropolymers with very good physical, chemical and electrical properties. The charge-storage ability of solid FEP films can be significantly improved by adding boron nitride (BN) filler particles. FEP foams are prepared by means of a one-step procedure consisting of CO2 saturation and subsequent in-situ high-temperature voiding. Piezoelectric d33 coefficients up to 40 pC/N are measured on such FEP foams. Mechanical fatigue tests show that the as-prepared PEN and FEP foams are mechanically stable for long periods of time. Although polymer-foam ferroelectrets have a high application potential, their piezoelectric properties strongly depend on the cellular morphology, i.e.
on the size, shape, and distribution of the voids. On the other hand, the controlled preparation of optimized cellular structures is still a technical challenge. Consequently, new ferroelectrets based on polymer-layer systems (sandwiches) have been prepared from FEP. By sandwiching an FEP mesh between two solid FEP films and fusing the polymer system with a laser beam, a well-designed uniform macroscopic cellular structure can be formed. Dielectric resonance spectroscopy reveals piezoelectric d33 coefficients as high as 350 pC/N, elastic moduli of about 0.3 MPa, anti-resonance frequencies of about 30 kHz, and electromechanical coupling factors of about 0.05. Samples charged at elevated temperatures show better thermal stability than those charged at room temperature, and the higher the charging temperature, the better the stability. After proper charging at 140 °C, the working temperatures can be as high as 110 − 120 °C. Acoustic measurements at frequencies of 200 Hz − 20 kHz indicate that the FEP layer systems are suitable for applications at least in this range.
The European Values Education (EVE) project is a large-scale, cross-national, and longitudinal survey research program on basic human values. The main topic of its first stage was "work" in Europe. Student teachers of several universities in Europe worked together in multicultural exchange groups. Their results are presented in this issue.
In this paper we develop a spatial Cournot trade model with two unequally sized countries, using the geographical interpretation of the Hotelling line. We analyze the trade and welfare effects of international trade between these two countries. The welfare analysis indicates that, in this framework, the large country benefits from free trade, whereas the small country may be hurt by opening up to trade. This finding is contrary to the results of Shachmurove and Spiegel (1995) as well as Tharakan and Thisse (2002), who use related models to analyze size effects in international trade and find that the small country usually gains from trade while the large country may lose.
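For readers unfamiliar with the underlying equilibrium concept, the following is a minimal sketch of a textbook one-market Cournot duopoly with linear demand. It is illustrative only: the paper's spatial Hotelling setup (transport costs, country sizes, market areas) is not modelled here, and all parameter values are assumptions.

```python
# Illustrative only: a textbook one-market Cournot duopoly with linear
# inverse demand P = a - b*(q1 + q2) and constant marginal costs c1, c2.
# The paper's spatial model adds transport costs and unequal country
# sizes, which this toy version deliberately omits.

def cournot_equilibrium(a, b, c1, c2):
    """Cournot-Nash quantities obtained by intersecting the two
    best-response functions q_i = (a - c_i - b*q_j) / (2b)."""
    q1 = (a - 2.0 * c1 + c2) / (3.0 * b)
    q2 = (a - 2.0 * c2 + c1) / (3.0 * b)
    price = a - b * (q1 + q2)
    return q1, q2, price

# symmetric example: both firms end up with q = (a - c) / (3b)
q1, q2, p = cournot_equilibrium(a=10.0, b=1.0, c1=1.0, c2=1.0)
```

In the spatial version analysed in the paper, asymmetric market sizes tilt this equilibrium in favour of the large country, which is the source of the welfare result.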
‘Heterosis’ is a term used in genetics and breeding to refer to hybrid vigour, i.e. the superiority of hybrids over their parents in traits such as size, growth rate, biomass, fertility, yield, nutrient content, disease resistance or tolerance to biotic and abiotic stress. To obtain hybrids, two different inbred (pure) parental lines carrying desired traits are crossed. Maximum heterosis is observed in the first generation (F1) of such crosses. Heterosis has been utilised in plant and animal breeding programmes for at least 90 years: by the end of the 20th century, 65% of worldwide maize production was hybrid-based. It is generally believed that an understanding of the molecular basis of heterosis will allow the creation of new superior genotypes which could either be used directly as F1 hybrids or form the basis for future breeding and selection programmes. Two selected accessions of the model plant Arabidopsis thaliana (thale cress) were crossed to obtain hybrids. These typically exhibited a 60-80% increase in biomass compared to the average weight of both parents. This PhD project focused on investigating the role of selected regulatory genes, given their potentially key involvement in heterosis. In the first part of the project, the most appropriate developmental stage for this heterosis study was determined by metabolite measurements and growth observations in parents and hybrids. At the selected stage, around 60 candidate regulatory genes (i.e. genes differentially expressed in hybrids compared to parents) were identified. The majority of these were transcription factors, genes that coordinate the expression of other genes. Subsequent expression analyses of the candidate genes in biomass-heterotic hybrids of other Arabidopsis accessions revealed differential expression of a gene subset, highlighting their relevance for heterosis.
Moreover, a fraction of the candidate regulatory genes were found within DNA regions closely linked to the genes underlying biomass or growth heterosis. Additional analyses to validate the role of selected candidate regulatory genes proved insufficient to establish their function in heterosis, uncovering a need for novel approaches, as discussed in the thesis. Taken together, the work provides insight into the molecular mechanisms underlying heterosis. Although studies on heterosis date back more than one hundred years, this project, like many others, showed that further investigation is needed to fully explain the phenomenon.
The availability of large data sets has allowed researchers to uncover complex properties in complex systems, such as complex networks and human dynamics. A vast number of systems, from the Internet to the brain, power grids and ecosystems, can be represented as large complex networks. Dynamics on and of complex networks has attracted growing interest among researchers. In this thesis, first, I introduce a simple but effective dynamical optimization coupling scheme which can realize complete synchronization in networks with undelayed and delayed couplings and enhance the synchronizability of small-world and scale-free networks. Second, I show that the robustness of scale-free networks with community structure is enhanced by the existence of communities, and that some of the response patterns coincide with topological communities. These results provide insights into the relationship between network topology and functional organization in complex networks from another viewpoint. Third, human correspondence dynamics, an important type of node dynamics in complex networks, is studied in detail using both data and models. A new and general type of human correspondence pattern is found, and an interacting priority-queues model is introduced to explain it. The model can also embrace a range of realistic social interacting systems such as email and letter communication. These findings provide insight into various human activities at both the individual and network level. Fourth, I present clear new evidence that human comment behaviour in on-line social systems, a different type of interacting human dynamics, is non-Poissonian, and a model based on personal attraction is introduced to explain it. These results are helpful for discovering regular patterns of human behaviour in on-line society and the evolution of public opinion in virtual as well as real society.
Finally, conclusions and an outlook on human dynamics and complex networks are presented.
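The priority-queues idea mentioned above can be sketched with a minimal single-queue toy model in the spirit of Barabási's priority-queue model of human task execution. The thesis uses *interacting* queues for correspondence partners, which this sketch does not reproduce; all parameter values are illustrative assumptions.

```python
import random

# Single-queue sketch: a fixed-length task list; with probability p the
# highest-priority task is executed, otherwise a random one. Executed
# tasks are replaced by fresh tasks with random priorities. With p near 1,
# low-priority tasks can wait very long, producing heavy-tailed waiting
# times. Parameter values are illustrative, not fitted to the thesis data.

def priority_queue_waiting_times(n_steps=10000, list_len=5, p=0.99, seed=1):
    rng = random.Random(seed)
    # each queued task: (random priority, arrival step)
    queue = [(rng.random(), 0) for _ in range(list_len)]
    waits = []
    for step in range(1, n_steps + 1):
        if rng.random() < p:                      # usually: highest priority first
            idx = max(range(list_len), key=lambda i: queue[i][0])
        else:                                     # occasionally: a random task
            idx = rng.randrange(list_len)
        waits.append(step - queue[idx][1])        # waiting time of executed task
        queue[idx] = (rng.random(), step)         # refill with a fresh task
    return waits

waits = priority_queue_waiting_times()
```

Because selection is strongly priority-driven, most tasks are handled almost immediately while a few wait very long, qualitatively matching the bursty, non-Poissonian response times seen in empirical correspondence records.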
With the rise of nanotechnology in the last decade, nanofluidics has been established as a research field and has gained increasing interest in science and industry. Natural aqueous nanofluidic systems are very complex: there is often a predominance of liquid interfaces, or the fluid contains charged or differently shaped colloids. The effects promoted by these additives are far from completely understood, and interesting questions arise with regard to the confinement of such complex fluidic systems. A systematic study of nanofluidic processes requires suitable experimental model nano-channels with well-defined characteristics. The present work employed thin liquid films (TLFs) as experimental models. They have proven to be useful experimental tools because of their simple geometry, reproducible preparation, and controllable liquid interfaces. The thickness of the channels can easily be adjusted via the electrolyte concentration in the film-forming solution. In this way, channel dimensions from 5–100 nm are possible, a high flexibility for an experimental system. TLFs have liquid interfaces of different charge and properties, and they offer the possibility to confine differently shaped ions and molecules to very small spaces, or to subject them to controlled forces. This makes foam films a unique “device” for obtaining information about fluidic systems at nanometre dimensions. The main goal of this thesis was to study nanofluidic processes using TLFs as models, or tools, to extract information about natural systems and to deepen the understanding of the underlying physico-chemical conditions. The presented work showed that foam films can be used as experimental models to understand the behaviour of liquids in nano-sized confinement.
In the first part of the thesis, we studied the thinning of thin liquid films stabilized with the non-ionic surfactant n-dodecyl-β-maltoside (β-C₁₂G₂), with primary interest in interfacial diffusion processes during thinning as a function of surfactant concentration. The surfactant concentration in the film-forming solutions was varied at constant electrolyte (NaCl) concentration. The velocity of thinning was analyzed by combining previously developed theoretical approaches, yielding qualitative information about the mobility of the surfactant molecules at the film surfaces. We found that above a certain limiting surfactant concentration the film surfaces were completely immobile and behaved as non-deformable, which decelerated the thinning process. This follows the predictions for Reynolds flow of liquid between two non-deformable discs. In the second part of the thesis, we designed a TLF nanofluidic system containing rod-like multivalent ions and compared this system to films containing monovalent ions. We presented first results that recognized, for the first time, the existence of an additional attractive force in foam films based on the electrostatic interaction between rod-like ions and oppositely charged surfaces. We speculate that this is an ion-bridging component of the disjoining pressure. The results show that for films prepared in the presence of spermidine, the transformation of the thicker common film (CF) to the thinnest Newton black film (NBF) is more probable than for films prepared with NaCl under similar conditions of electrostatic interaction. This effect is not a result of specific adsorption of any of the ions at the fluid surfaces, and it does not lead to any changes in the equilibrium properties of the CF and NBF. Our hypothesis was corroborated using the trivalent ion Y3+, which does not show ion bridging.
The experimental results were compared to theoretical predictions, and quantitative agreement on the system’s energy gain for the change from CF to NBF was obtained. In the third part of the work, the behaviour of nanoparticles in confinement was investigated with respect to their impact on the fluid flow velocity. The particles altered the flow velocity by an unexpectedly large amount, so that the resulting changes in the dynamic viscosity could not be explained by a realistic change of the fluid viscosity. Only aggregation, flocculation and plug formation can explain the experimental results. The particle systems in the presented thesis had a great impact on the film interfaces due to the stabilizer molecules present in the bulk solution. Finally, the location of the particles with respect to their lateral and vertical arrangement in the film was studied with advanced reflectivity and scattering methods. Neutron reflectometry studies were performed to investigate the location of nanoparticles in the TLF perpendicular to the interface. For the first time, we studied TLFs using grazing-incidence small-angle X-ray scattering (GISAXS), a technique sensitive to the lateral arrangement of particles in confined volumes. This work provides preliminary data on a lateral ordering of particles in the film.
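The Reynolds-flow limit invoked in this abstract has a simple closed form for drainage between two rigid, plane-parallel discs: V = 2h³ΔP/(3μR²). A minimal sketch of this relation follows; the numerical values are illustrative assumptions, not data from the thesis.

```python
# Reynolds drainage between two rigid, plane-parallel discs predicts a
# thinning velocity V = 2 h^3 * dP / (3 * mu * R^2). This is the classical
# limit for films with fully immobile (non-deformable) surfaces; the
# example values below are assumptions for illustration only.

def reynolds_thinning_velocity(h, delta_p, mu, radius):
    """Thinning velocity -dh/dt in m/s for film thickness h (m), driving
    pressure delta_p (Pa), dynamic viscosity mu (Pa*s), film radius (m)."""
    return 2.0 * h**3 * delta_p / (3.0 * mu * radius**2)

# e.g. a 100 nm thick aqueous film, 100 Pa driving pressure, 50 um radius
v = reynolds_thinning_velocity(h=100e-9, delta_p=100.0, mu=1.0e-3, radius=50e-6)
```

The strong h³ dependence is why drainage slows dramatically as the film thins, and why immobile surfaces (which enforce this no-slip limit) decelerate the thinning observed in the experiments.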
Myrmecochory, i.e. the dispersal of seeds by ants towards and around their nests, plays an important role in temperate forests. Yet hardly any study has examined plant population spread over several years and the underlying joint contribution of a hierarchy of dispersal modes and plant demography. We used a seed-sowing approach with three replicates to examine colonization patterns of Melampyrum pratense, an annual myrmecochorous herb, in a mixed Scots pine forest in northeastern Germany. Using a spatially explicit individual-based (SEIB) model, population patterns over 4 years were explained by short-distance transport of seeds by small ant species with high nest densities, resulting in random spread. However, plant distributions in the field after another 4 years clearly deviated from model predictions. Mean annual spread rate increased from 0.9 m to 5.1 m per year, with a clear inhomogeneous component. Evidently, after a lag-phase of several years, non-random seed dispersal by large red wood ants (Formica rufa) determined the species’ spread, resulting in stratified dispersal due to interactions with different-sized ant species. Hypotheses on stratified dispersal, on dispersal lag, and on non-random dispersal were tested using an extended SEIB model, by comparing model outputs with field patterns (individual numbers, population areas, and maximum distances). Dispersal towards red wood ant nests, together with seed loss during transport and redistribution around nests, were essential features of the model extension. The observed lag-phase in the initiation of non-random, medium-distance transport was probably due to a change in ant behaviour towards a new food source of increasing importance, a meaningful example of a lag-phase in local plant species invasion. The results demonstrate that field studies should check model predictions wherever possible. Future research will show whether or not the M.
pratense–ant system is representative of migration patterns of similar animal dispersal systems after range edges have been crossed by long-distance dispersal events.
Proceedings of KogWis 2010 : 10th Biannual Meeting of the German Society for Cognitive Science
(2010)
As the latest biannual meeting of the German Society for Cognitive Science (Gesellschaft für Kognitionswissenschaft, GK), KogWis 2010 at Potsdam University reflects current trends in a fascinating domain of research concerned with human and artificial cognition and the interaction of mind and brain. The plenary talks address questions of numerical capacities and human arithmetic (Brian Butterworth), the theoretical development of cognitive architectures and intelligent virtual agents (Pat Langley), categorizations induced by linguistic constructions (Claudia Maienborn), and a cross-level account of the “self as a complex system” (Paul Thagard). KogWis 2010 integrates a wealth of experimental research, cognitive modelling, and conceptual analysis in 5 invited symposia, over 150 individual talks, 6 symposia, and more than 40 poster contributions. Some of the invited symposia reflect local and regional strengths of research in the Berlin-Brandenburg area: the two largest research fields of the university’s Cognitive Sciences Area of Excellence in Potsdam are represented by an invited symposium on “Information Structure” by the Special Research Area 632 (“Sonderforschungsbereich”, SFB) of the same name at Potsdam University and Humboldt-University Berlin, and by a satellite conference of the research group “Mind and Brain Dynamics”. The Berlin School of Mind and Brain at Humboldt-University Berlin takes part with an invited symposium on “Decision Making” from the perspective of cognitive neuroscience and philosophy, and the DFG Cluster of Excellence “Languages of Emotion” of Free University presents interdisciplinary research results in an invited symposium on “Symbolising Emotions”.
The activity of vacuolar H+-ATPase (V-ATPase) in the apical membrane of blowfly (Calliphora vicina) salivary glands is regulated by the neurohormone serotonin (5-HT). 5-HT induces, via protein kinase A, the phosphorylation of V-ATPase subunit C and the assembly of V-ATPase holoenzymes. The protein phosphatase responsible for the dephosphorylation of subunit C and V-ATPase inactivation is not yet known. We show here that inhibitors of protein phosphatases PP1 and PP2A (tautomycin, okadaic acid) and PP2B (cyclosporin A, FK-506) do not prevent V-ATPase deactivation and dephosphorylation of subunit C. A decrease in the intracellular Mg2+ level caused by loading secretory cells with EDTA-AM leads to the activation of proton pumping in the absence of 5-HT, prolongs the 5-HT-induced response in proton pumping, and inhibits the dephosphorylation of subunit C. Thus, the deactivation of V-ATPase is most probably mediated by a protein phosphatase that is insensitive to okadaic acid and that requires Mg2+, namely, a member of the PP2C protein family. By molecular biological techniques, we demonstrate the expression of at least two PP2C protein family members in blowfly salivary glands. © 2009 Wiley Periodicals, Inc.
Inverse agonist and neutral antagonist actions of synthetic compounds at an insect 5-HT1 receptor
(2010)
Background and purpose: 5-Hydroxytryptamine (5-HT) has been shown to control and modulate many physiological and behavioural functions in insects. In this study, we report the cloning and pharmacological properties of a 5-HT1 receptor of an insect model for neurobiology, physiology and pharmacology. Experimental approach: A cDNA encoding the Periplaneta americana 5-HT1 receptor was amplified from brain cDNA. The receptor was stably expressed in HEK 293 cells, and its functional and pharmacological properties were determined in cAMP assays. Receptor distribution was investigated by RT-PCR and by immunocytochemistry using an affinity-purified polyclonal antiserum. Key results: The P. americana 5-HT1 receptor (Pea5-HT1) shares pronounced sequence and functional similarity with mammalian 5-HT1 receptors. Activation with 5-HT reduced adenylyl cyclase activity in a dose-dependent manner. Pea5-HT1 was expressed as a constitutively active receptor, with methiothepin acting as a neutral antagonist and WAY 100635 as an inverse agonist. Receptor mRNA was present in various tissues including brain, salivary glands and midgut. Receptor-specific antibodies showed that the native protein was expressed in a glycosylated form in membrane samples of brain and salivary glands. Conclusions and implications: This study marks the first pharmacological identification of an inverse agonist and a neutral antagonist at an insect 5-HT1 receptor. The results presented here should facilitate further analyses of 5-HT1 receptors in mediating central and peripheral effects of 5-HT in insects.
The biogenic amine serotonin (5-HT) plays a key role in the regulation and modulation of many physiological and behavioural processes in both vertebrates and invertebrates. These functions are mediated through the binding of serotonin to its receptors, of which 13 subtypes have been characterized in vertebrates. We have isolated a cDNA from the honeybee Apis mellifera (Am5-ht7) sharing high similarity to members of the 5-HT7 receptor family. Expression of the Am5-HT7 receptor in HEK293 cells results in an increase in basal cAMP levels, suggesting that Am5-HT7 is expressed as a constitutively active receptor. Serotonin application to Am5-ht7-transfected cells elevates cyclic adenosine 3',5'-monophosphate (cAMP) levels in a dose-dependent manner (EC50 = 1.1-1.8 nM). The Am5-HT7 receptor is also activated by 5-carboxamidotryptamine, whereas methiothepin acts as an inverse agonist. Receptor expression has been investigated by RT-PCR, in situ hybridization, and western blotting experiments. Receptor mRNA is expressed in the perikarya of various brain neuropils, including intrinsic mushroom body neurons, and in peripheral organs. This study marks the first comprehensive characterization of a serotonin receptor in the honeybee and should facilitate further analysis of the role(s) of the receptor in mediating the various central and peripheral effects of 5-HT.
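The EC50 values reported above are typically extracted by fitting a Hill (logistic) dose-response curve to the cAMP assay data. The following sketch shows that standard curve; the EC50 used (1.5 nM, taken from the reported 1.1-1.8 nM range), the unit slope, and the asymptotes are illustrative assumptions, not fitted values from the study.

```python
# Standard Hill dose-response curve of the kind commonly used to extract
# EC50 values from cAMP assays. All parameter values here are
# illustrative assumptions for demonstration purposes.

def hill_response(conc, ec50, hill_n=1.0, top=1.0, bottom=0.0):
    """Fractional response at agonist concentration `conc`
    (same units as ec50); half-maximal at conc == ec50."""
    return bottom + (top - bottom) * conc**hill_n / (ec50**hill_n + conc**hill_n)

half = hill_response(conc=1.5e-9, ec50=1.5e-9)   # half-maximal by construction
```

By construction, the response is 50% of maximum at the EC50 and saturates at high agonist concentrations, which is the behaviour reported for the dose-dependent cAMP elevation.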
The phenolamines octopamine and tyramine control, regulate, and modulate many physiological and behavioral processes in invertebrates. Vertebrates possess only small amounts of both substances, and thus, octopamine and tyramine, together with other biogenic amines, are referred to as “trace amines.” Biogenic amines evoke cellular responses by activating G-protein-coupled receptors. We have isolated a complementary DNA (cDNA) that encodes a biogenic amine receptor from the American cockroach Periplaneta americana, viz., Peatyr1, which shares high sequence similarity to members of the invertebrate tyramine-receptor family. The PeaTYR1 receptor was stably expressed in human embryonic kidney (HEK) 293 cells, and its ligand response has been examined. Receptor activation with tyramine reduces adenylyl cyclase activity in a dose-dependent manner (EC50 = 350 nM). The inhibitory effect of tyramine is abolished by co-incubation with either yohimbine or chlorpromazine. Receptor expression has been investigated by reverse transcription polymerase chain reaction and immunocytochemistry. The mRNA is present in various tissues including brain, salivary glands, midgut, Malpighian tubules, and leg muscles. The effect of tyramine on salivary gland acinar cells has been investigated by intracellular recordings, which have revealed excitatory presynaptic actions of tyramine. This study marks the first comprehensive molecular, pharmacological, and functional characterization of a tyramine receptor in the cockroach.
Biogenic amines and their receptors regulate and modulate many physiological and behavioural processes in animals. In vertebrates, octopamine is only found in trace amounts and its function as a true neurotransmitter is unclear. In protostomes, however, octopamine can act as neurotransmitter, neuromodulator and neurohormone. In the honeybee, octopamine acts as a neuromodulator and is involved in learning and memory formation. The identification of potential octopamine receptors is decisive for an understanding of the cellular pathways involved in mediating the effects of octopamine. Here we report the cloning and functional characterization of the first octopamine receptor from the honeybee, Apis mellifera. The gene was isolated from a brain-specific cDNA library. It encodes a protein most closely related to octopamine receptors from Drosophila melanogaster and Lymnaea stagnalis. Signalling properties of the cloned receptor were studied in transiently transfected human embryonic kidney (HEK) 293 cells. Nanomolar to micromolar concentrations of octopamine induced oscillatory increases in the intracellular Ca2+ concentration. In contrast to octopamine, tyramine only elicited Ca2+ responses at micromolar concentrations. The gene is abundantly expressed in many somata of the honeybee brain, suggesting that this octopamine receptor is involved in the processing of sensory inputs, antennal motor outputs and higher-order brain functions.
In the honey bee, responsiveness to sucrose correlates with many behavioural parameters such as age of first foraging, foraging role and learning. Sucrose responsiveness can be measured using the proboscis extension response (PER) by applying sucrose solutions of increasing concentrations to the antenna of a bee. We tested whether the biogenic amines octopamine, tyramine and dopamine, and the dopamine receptor agonist 2-amino-6,7-dihydroxy-1,2,3,4-tetrahydronaphthalene (6,7-ADTN) can modulate sucrose responsiveness. The compounds were either injected into the thorax or fed in sucrose solution to compare different methods of application. Injection and feeding of tyramine or octopamine significantly increased sucrose responsiveness. Dopamine decreased sucrose responsiveness when injected into the thorax. Feeding of dopamine had no effect. Injection of 6,7-ADTN into the thorax and feeding of 6,7-ADTN reduced sucrose responsiveness significantly. These data demonstrate that sucrose responsiveness in honey bees can be modulated by biogenic amines, which has far-reaching consequences for other types of behaviour in this insect. (C) 2002 Elsevier Science B.V. All rights reserved.
The acinar salivary gland of the cockroach, Periplaneta americana, is innervated by dopaminergic and serotonergic nerve fibers. Stimulation of the glands by serotonin (5-hydroxytryptamine, 5-HT) results in the production of a protein-rich saliva, whereas stimulation by dopamine results in saliva that is protein-free. Thus, dopamine acts selectively on ion-transporting peripheral cells within the acini, and 5-HT acts on protein-producing central cells. We have investigated the pharmacology of the 5-HT-induced secretory activity of isolated salivary glands of P. americana by testing several 5-HT receptor agonists and antagonists. The effects of 5-HT can be mimicked by the non-selective 5-HT receptor agonist 5-methoxytryptamine. All tested agonists that display at least some receptor subtype specificity in mammals, i.e., 5-carboxamidotryptamine, (+/-)-8-OH-DPAT, (+/-)-DOI, and AS 19, were ineffective in stimulating salivary secretion. 5-HT-induced secretion can be blocked by the vertebrate 5-HT receptor antagonists methiothepin, cyproheptadine, and mianserin. Our pharmacological data indicate that the pharmacology of arthropod 5-HT receptors is remarkably different from that of their vertebrate counterparts. (C) 2007 Elsevier Ltd. All rights reserved.
The vacuolar H+-ATPase (V-ATPase) in the apical membrane of blowfly (Calliphora vicina) salivary gland cells energizes the secretion of a KCl-rich saliva in response to the neurohormone serotonin (5-HT). We have shown previously that exposure to 5-HT induces a cAMP-mediated reversible assembly of V0 and V1 subcomplexes to V-ATPase holoenzymes and increases V-ATPase-driven proton transport. Here, we analyze whether the effect of cAMP on V-ATPase is mediated by protein kinase A (PKA) or exchange protein directly activated by cAMP (Epac), the cAMP target proteins that are present within the salivary glands. Immunofluorescence microscopy shows that PKA activators, but not Epac activators, induce the translocation of V1 components from the cytoplasm to the apical membrane, indicative of an assembly of V-ATPase holoenzymes. Measurements of transepithelial voltage changes and microfluorometric pH measurements at the luminal surface of cells in isolated glands demonstrate further that PKA-activating cAMP analogs increase cation transport to the gland lumen and induce a V-ATPase-dependent luminal acidification, whereas activators of Epac do not. Inhibitors of PKA block the 5-HT-induced V1 translocation to the apical membrane and the increase in proton transport. We conclude that cAMP exerts its effects on V-ATPase via PKA.
Trying to do two things at once often decreases performance of one or both tasks compared to performing each task by itself. The present thesis deals with the questions of why and in which cases these dual-task costs emerge and, moreover, whether there are cases in which people are able to process two cognitive tasks at the same time without costs. Four experiments examine the influence of stimulus-response (S-R) compatibility, S-R modality pairings, interindividual differences, and practice on the ability to process two tasks in parallel. The results show that parallel processing is possible. Nevertheless, dual-task costs emerge when the personal processing strategy is serial, when the two tasks have not been practiced together, when S-R compatibility of both tasks is low (e.g. when a left target has to be responded to with a right key press and, in the other task, an auditorily presented “A” has to be responded to by saying “B”), and when the modality pairings of both tasks are non-standard (i.e., visual-spatial stimuli are responded to vocally whereas auditory-verbal stimuli are responded to manually). The results are explained with respect to executive-based (S-R compatibility) and content-based crosstalk (S-R modality pairings) between tasks. Finally, an alternative information-processing account of the central stage of response selection (i.e., the translation of the stimulus into the response) is presented.
A production study is presented that investigates the effects of word order and information structural context on the prosodic realization of declarative sentences in Hindi. Previous work on Hindi intonation has shown that: (i) non-final content words bear rising pitch accents (Moore 1965, Dyrud 2001, Nair 1999); (ii) focused constituents show greater pitch excursion and longer duration, and post-focal material undergoes pitch range reduction (Moore 1965, Harnsberger 1994, Harnsberger and Judge 1996); and (iii) focused constituents may be followed by a phrase break (Moore 1965). By means of a controlled experiment, we investigated the effect of focus in relation to word order variation using 1200 utterances produced by 20 speakers. Fundamental frequency (F0) and duration of constituents were measured in Subject-Object-Verb (SOV) and Object-Subject-Verb (OSV) sentences in different information structural conditions (wide focus, subject focus and object focus). The analyses indicate that (i) regardless of word order and focus, the constituents are in a strict downstep relationship; (ii) focus is mainly characterized by post-focal pitch range reduction rather than pitch raising of the element in focus; (iii) given expressions that occur pre-focally appear to undergo no reduction; (iv) pitch excursion and duration of the constituents are greater in OSV than in SOV sentences. A phonological analysis suggests that focus affects pitch scaling and that word order influences the prosodic phrasing of the constituents.
Biogenic amines are important messenger substances in the central nervous system and in peripheral organs of vertebrates and of invertebrates. The honeybee, Apis mellifera, is excellently suited to uncover the functions of biogenic amines in behaviour, because it has an extensive behavioural repertoire, with a number of biogenic amine receptors characterised in this insect. In the honeybee, the biogenic amines dopamine, octopamine, serotonin and tyramine modulate neuronal functions in various ways. Dopamine and serotonin are present in high concentrations in the bee brain, whereas octopamine and tyramine are less abundant. Octopamine is a key molecule for the control of honeybee behaviour. It generally has an arousing effect and leads to higher sensitivity for sensory inputs, better learning performance and increased foraging behaviour. Tyramine has been suggested to act antagonistically to octopamine, but only few experimental data are available for this amine. Dopamine and serotonin often have antagonistic or inhibitory effects as compared to octopamine. Biogenic amines bind to membrane receptors that primarily belong to the large gene-family of GTP-binding (G) protein coupled receptors. Receptor activation leads to transient changes in concentrations of intracellular second messengers such as cAMP, IP3 and/or Ca2+. Although several biogenic amine receptors from the honeybee have been cloned and characterised more recently, many genes still remain to be identified. The availability of the completely sequenced genome of Apis mellifera will contribute substantially to closing this gap. In this review, we will discuss the present knowledge on how biogenic amines and their receptor-mediated cellular responses modulate different behaviours of honeybees including learning processes and division of labour.
The influence of information structure on tonal scaling in German is examined experimentally. Eighteen speakers uttered a total of 2277 sentences of the same syntactic structure, but with a varying number of constituents, word order and focus-given structure. The quantified results for German support findings for other Germanic languages that the scaling of high tones, and thus the entire melodic pattern, is influenced by information structure. Narrow focus raised the high tones of pitch accents, while givenness lowered them in prenuclear position and canceled them out postnuclearly. The effects of focus and givenness are calculated against all-new sentences as a baseline, which we expected to be characterized by downstep, a significantly lower scaling of high tones as compared to declination. The results further show that information structure alone cannot account for all variations. We therefore assume that dissimilatory tonal effects play a crucial role in the tonal scaling of German. The effects consist of final f0 drop, a steep fall from a raised high tone to the bottom line of the speaker, H-raising before a low tone, and H-lowering before a raised high tone. No correlation between word order and tone scaling could be established. (C) 2008 Elsevier Ltd. All rights reserved.
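The downstep baseline referred to in this abstract is often modelled phenomenologically as each successive high tone being scaled a constant factor closer to the speaker's reference line. The following is a minimal sketch of that common model; the scaling factor, reference line and initial peak are illustrative assumptions, not estimates from this study.

```python
# A common phenomenological downstep model: H[i+1] = ref + d * (H[i] - ref),
# i.e. each accent peak is scaled a constant factor d (0 < d < 1) closer to
# the speaker's reference line. Values of d, ref_hz and h0_hz below are
# illustrative assumptions only.

def downstep_targets(h0_hz, n_accents, d=0.8, ref_hz=100.0):
    """Sequence of high-tone targets (Hz) under constant-ratio downstep."""
    targets = [h0_hz]
    for _ in range(n_accents - 1):
        targets.append(ref_hz + d * (targets[-1] - ref_hz))
    return targets

hs = downstep_targets(h0_hz=220.0, n_accents=4)  # strictly falling peak sequence
```

Against such a baseline, narrow focus shows up as a raised target (a peak above the predicted downstep value) and givenness as a lowered or cancelled one, which is how the focus and givenness effects in the study are quantified relative to all-new sentences.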
Questions: 1. Are there differences among species in their preference for coniferous vs. deciduous forest? 2. Are tree and shrub species better colonizers of recent forest stands than herbaceous species? 3. Do colonization patterns of plant species groups depend on tree species composition? Location: Three deciduous and one coniferous recent forest areas in Brandenburg, NE Germany. Methods: In 34 and 21 transects in coniferous and deciduous stands, respectively, we studied the occurrence and percentage cover of vascular plants in a total of 150 plots in ancient stands, 315 in recent stands and 55 at the ecotone. Habitat preference, diaspore weight, generative dispersal potential and clonal extension were used to explain mechanisms of local migration. Regression analysis was conducted to test whether migration distance was related to species’ life-history traits. Results: 25 species were significantly associated with ancient stands and ten species were significantly more frequent in recent stands. Tree and shrub species were good colonizers of recent coniferous and deciduous stands. In the coniferous stands, all herbaceous species showed a strong dispersal limitation during colonization, whereas in the deciduous stands generalist species may have survived in the grasslands which were present prior to afforestation. Conclusions: The fast colonization of recent stands by trees and shrubs can be explained by their effective dispersal via wind and animals. This, and the comparatively efficient migration of herbaceous forest specialists into recent coniferous stands, implies that the conversion of coniferous into deciduous stands adjacent to ancient deciduous forests is promising even without planting of trees.
The correctness of model transformations is a crucial element for the model-driven engineering of high-quality software. A prerequisite for verifying model transformations at the level of the model transformation specification is that an unambiguous formal semantics exists and that the employed implementation of the model transformation language adheres to this semantics. However, for existing relational model transformation approaches it is often unclear under which constraints a particular implementation actually conforms to the formal semantics. In this paper, we bridge this gap for the formal semantics of triple graph grammars (TGG) and an existing efficient implementation. Whereas the formal semantics assumes backtracking and ignores non-determinism, practical implementations do not support backtracking, require rule sets that ensure determinism, and include further optimizations. Therefore, we capture how the considered TGG implementation realizes the transformation by means of operational rules, define the required criteria, and show conformance to the formal semantics if these criteria are fulfilled. We further outline how static analysis can be employed to guarantee these criteria.
Between 2002 and 2006 the Colombian government of Álvaro Uribe enjoyed broad international support in conducting a demobilization process of right-wing paramilitary groups, along with the implementation of transitional justice policies such as penal prosecutions and the creation of a National Commission for Reparation and Reconciliation (NCRR) to address justice, truth and reparation for victims of paramilitary violence. The demobilization process began when, in 2002, the United Self-Defense Forces of Colombia (Autodefensas Unidas de Colombia, AUC) agreed to participate in a government-sponsored demobilization process. Paramilitary groups had been responsible for the vast majority of human rights violations over a period of more than 30 years. The government designed a special legal framework that envisaged great leniency for paramilitaries who committed serious crimes, and reparations for victims of paramilitary violence. More than 30,000 paramilitaries demobilized under this process between January 2003 and August 2006. Law 975, also known as the "Justice and Peace Law", and Decree 128 have served as the legal framework for the demobilization and prosecution of paramilitaries. This framework offered the prospect of reduced sentences to demobilized paramilitaries who committed crimes against humanity, in exchange for full confessions of crimes, restitution of illegally obtained assets, the release of child soldiers and the release of kidnapped victims; it also provided reparations for victims of paramilitary violence. The Colombian demobilization process presents an atypical case of transitional justice. Many observers have even questioned whether Colombia can be considered a case of transitional justice at all. Transitional justice measures are usually taken up after the change of an authoritarian regime or at a post-conflict stage. The particularity of the Colombian case, however, is that transitional justice policies were introduced while the conflict still raged.
In this sense, the Colombian case expresses one of the key tensions to be addressed: that between offering incentives to perpetrators to disarm and demobilize in order to prevent future crimes, and providing an adequate response to the human rights violations perpetrated throughout the course of an internal conflict. In particular, disarmament, demobilization and reintegration processes require a fine balance between the immunity guarantees offered to ex-combatants and the pursuit of accountability for their crimes. International law provides the legal framework defining the rights to justice, truth and reparations for victims and the corresponding obligations of the State, but peace negotiations and conflicted political structures do not always allow for the fulfillment of those rights. Thus, the aim of this article is to analyze what kind of transition may be occurring in Colombia by focusing on the role that transitional justice mechanisms may play in political negotiations between the Colombian government and paramilitary groups. In particular, it seeks to address the extent to which such processes contribute to or hinder the achievement of a balance between peacebuilding and accountability, and thus facilitate a real transitional process.
This thesis is concerned with the issue of extinction of populations composed of different types of individuals, and with their behavior before extinction and in the case of a very late extinction. We approach this question first from a strictly probabilistic viewpoint, and second from the standpoint of risk analysis related to the extinction of a particular model of population dynamics. In this context we propose several statistical tools. The population size is modeled by a branching process, which is either a continuous-time multitype Bienaymé-Galton-Watson process (BGWc) or its continuous-state counterpart, the multitype Feller diffusion process. We are interested in different kinds of conditioning on non-extinction, and in the associated equilibrium states. These ways of conditioning have been widely studied in the monotype case. However, the literature on multitype processes is much less extensive, and there is no systematic work establishing connections between the results for BGWc processes and those for Feller diffusion processes. In the first part of this thesis, we investigate the behavior of the population before its extinction by conditioning the associated branching process X_t on non-extinction (X_t≠0), or more generally on non-extinction in a near future 0≤θ<∞ (X_{t+θ}≠0), and by letting t tend to infinity. We prove the result, new in the multitype framework and for θ>0, that this limit exists and is non-degenerate. This reflects a stationary behavior for the dynamics of the population conditioned on non-extinction, and provides a generalization of the so-called Yaglom limit, corresponding to the case θ=0. In a second step we study the behavior of the population in the case of a very late extinction, obtained as the limit, when θ tends to infinity, of the process conditioned by X_{t+θ}≠0.
The resulting conditioned process is a known object in the monotype case (sometimes referred to as the Q-process), and has also been studied when X_t is a multitype Feller diffusion process. We investigate the not yet considered case where X_t is a multitype BGWc process and prove the existence of the associated Q-process. In addition, we examine its properties, including the asymptotic ones, and propose several interpretations of the process. Finally, we are interested in interchanging the limits in t and θ, as well as in the not yet studied commutativity of these limits with respect to the high-density-type relationship between BGWc processes and Feller processes. We establish an exhaustive account of all possible exchanges of these limits (long-time limit in t, increasing delay of extinction θ, diffusion limit). The second part of this work is devoted to the risk analysis related both to the extinction of a population and to its very late extinction. We consider a branching population model (arising notably in the epidemiological context) for which a parameter related to the first moments of the offspring distribution is unknown. We build several estimators adapted to different stages of evolution of the population (growth phase, decay phase, and decay phase when extinction is expected very late), and prove their asymptotic properties (consistency, normality). In particular, we build a least squares estimator adapted to the Q-process, allowing a prediction of the population development in the case of a very late extinction. This would correspond to the best-case or to the worst-case scenario, depending on whether the population is threatened or invasive. These tools enable us to study the extinction phase of the Bovine Spongiform Encephalopathy epidemic in Great Britain, for which we estimate the infection parameter corresponding to a possible source of horizontal infection persisting after the removal in 1988 of the major route of infection (meat and bone meal).
This allows us to predict the evolution of the spread of the disease, including the year of extinction, the number of future cases and the number of infected animals. In particular, we produce a very fine analysis of the evolution of the epidemic in the unlikely event of a very late extinction.
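The Yaglom limit invoked in this abstract can be illustrated numerically. The sketch below is not the thesis's construction: it uses a single-type, discrete-time Galton-Watson process instead of the multitype continuous-time BGWc process, and the offspring law, function names and parameter values are illustrative assumptions. It estimates the distribution of the population size at generation t conditioned on non-extinction (Z_t ≠ 0), which for a subcritical process approximates the quasi-stationary Yaglom limit.

```python
import random
from collections import Counter

def step(z, p_offspring):
    """One generation: each of the z individuals independently leaves
    k children with probability p_offspring[k]."""
    if z == 0:
        return 0
    ks = range(len(p_offspring))
    return sum(random.choices(ks, weights=p_offspring, k=z))

def yaglom_estimate(p_offspring, t=20, runs=5000, z0=1):
    """Empirical law of Z_t conditioned on survival (Z_t != 0),
    approximating the Yaglom limit of a subcritical process."""
    survivors = Counter()
    for _ in range(runs):
        z = z0
        for _ in range(t):
            z = step(z, p_offspring)
            if z == 0:
                break  # extinct: this run is discarded by the conditioning
        if z > 0:
            survivors[z] += 1
    total = sum(survivors.values())
    return {k: v / total for k, v in sorted(survivors.items())}

# Illustrative subcritical offspring law: P(0)=0.35, P(1)=0.4, P(2)=0.25,
# so the offspring mean is 0.9 < 1 and extinction is certain.
random.seed(1)
mean_offspring = 0 * 0.35 + 1 * 0.4 + 2 * 0.25
dist = yaglom_estimate([0.35, 0.4, 0.25])
```

Increasing t should make the conditioned distribution stabilize, which is the empirical signature of quasi-stationary behavior; the multitype setting of the thesis replaces the scalar population size by a vector of type counts.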
Processing negative imperatives in Bulgarian: evidence from normal, aphasic and child language
(2010)
The incremental nature of sentence processing raises questions about the way the information of incoming functional elements is accessed and subsequently employed in building the syntactic structure which sustains interpretation processes. The present work approaches these questions by investigating the negative particle ne used for sentential negation in Bulgarian and its impact on the overt realisation and the interpretation of imperative inflexion, bound aspectual morphemes and clitic pronouns in child, adult and aphasic language. In contrast to other Slavic languages, Bulgarian negative imperatives (NI) are grammatical only with imperfective verbs. We argue that NI are instantiations of overt aspectual coercion induced by the presence of negation as a temporally sensitive sentential operator. The scope relation between imperative mood, negation and aspect yields the configuration of the imperfective present, which in Bulgarian has to be overtly expressed and prompts the imperfective marking of the predicate. The regular and transparent application of the imperfectivising mechanism relates to the organisation of the TAM categories in Bulgarian, which not only promotes the representation of fine perspective shifts but also provides for their distinct morphological expression. Using an elicitation task with NI, we investigated the way 3- and 4-year-old children represent negation in deontic contexts, as reflected in their use of aspectually appropriate predicates. Our findings suggest that children are sensitive to the imperfectivity requirement in NI from early on. The imperfectivisation strategies reveal some differences from the target morphological realisation. The relatively low production of target imperfectivised prefixed verbs cannot be explained by morphological processing deficits, but rather indicates that up to the age of five children experience difficulties in applying a progressive viewpoint to accomplishments.
Two self-paced reading studies present evidence that neurologically unimpaired Bulgarian speakers profit from the syntactic and prosodic properties of negation during online sentence comprehension. The imperfectivity requirement that negation imposes on the predicate speeds up lexical access to imperfective verbs. Similarly, clitic pronouns are more accessible after negation due to the phono-syntactic properties of clitic clusters. As the experimental stimuli do not provide external discourse referents, personal pronouns are parsed as object agreement markers. Without subsequent resolution, personal pronouns appear to be less resource-demanding than reflexive clitics. This finding is indicative of the syntax-driven co-reference establishment processes triggered by the lexical specification of reflexive clitics. The results obtained from Bulgarian Broca's aphasics show that they exhibit processing patterns similar to those of the control group. Notwithstanding their slow processing speed, the agrammatic group showed no impairment of negation, as reflected by their sensitivity to the aspectual requirements of NI and to the prosodic constraints on clitic placement. The aphasics were able to parse the structural dependency between mood, negation and aspect as functional categories and to represent it morphologically. The prolonged reaction times (RT) elicited by prefixed verbs indicate increased processing costs due to the semantic integration of prefixes as perfectivity markers into an overall imperfective construal. This inference is supported by the slower RT to reflexive clitics, which undergo a structurally triggered resolution. Evaluated against cross-linguistic findings, the obtained result strongly suggests that aphasic performance with pronouns depends on the interpretation effort associated with co-reference establishment and varies with the availability of discourse referents.
The investigation of normal and agrammatic processing of Bulgarian NI supports the hypothesis that the comprehension deficits in Broca's aphasia result from a slowed-down implementation of syntactic operations. The protracted structure building consumes processing resources and causes temporal mismatches with other processes sustaining sentence comprehension. The investigation of the way Bulgarian children and aphasic speakers process NI reveals that both groups are highly sensitive to the imperfective constraint on the aspectual construal imposed by the presence of negation. The imperfective interpretation requires access to morphologically complex verb forms which contain aspectual morphemes with conflicting semantic information: perfective prefixes and imperfective suffixes. Across modalities, both populations exhibit difficulties in processing prefixed imperfectivised verbs, which, as predicates of negative imperative sentences, reflect the inner perspective the speaker and the addressee need to take towards a potentially bounded situation description.