Contents: Social stereotypes and responsibility attributions to victims of rape - Attributing responsibility to rape victims: a German study - Rape myth acceptance and responsibility judgments: a British study - Police officers' definitions of rape - A study on cognitive prototypes of rape - Conclusion - References
We have used techniques of nonlinear dynamics to compare a special model for the reversals of the Earth's magnetic field with the observational data. Although this model is rather simple, it shows no essential difference from the data in terms of well-known characteristics such as the correlation function and the probability distribution. Applying methods of symbolic dynamics, we have found that the considered model is nevertheless not able to describe the dynamical properties of the observed process. These significant differences are expressed by algorithmic complexity and Rényi information.
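The symbolic-dynamics comparison described here rests on coarse-graining a time series into a symbol string and estimating its algorithmic complexity. A minimal illustration, assuming a binary median partition and the LZ76 phrase count (generic choices for the sketch, not necessarily those of the paper):

```python
import numpy as np

def symbolize(series, threshold=None):
    """Binary symbolization: '1' where the series exceeds its median
    (or a given threshold), '0' elsewhere."""
    if threshold is None:
        threshold = np.median(series)
    return ''.join('1' if v > threshold else '0' for v in series)

def lempel_ziv_complexity(s):
    """LZ76 phrase count: number of new phrases in the exhaustive
    parsing of s. Small for regular strings, ~n/log2(n) for random ones."""
    n, i, c = len(s), 0, 0
    while i < n:
        k = 1
        # grow the phrase while it has already occurred earlier
        while i + k <= n and s[i:i + k] in s[:i + k - 1]:
            k += 1
        c += 1
        i += k
    return c
```

A periodic symbol string yields a complexity of only a few phrases, while a random string of the same length yields a much larger count; comparing the two counts for model and data is the essence of such a test.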
In the modern industrialized countries, several hundred thousand people die every year from sudden cardiac death. The individual risk of sudden cardiac death cannot be defined precisely by commonly available, non-invasive diagnostic tools such as Holter monitoring, high-amplification ECG and traditional linear analysis of heart rate variability (HRV). Therefore, we apply some rather unconventional methods of nonlinear dynamics to analyse the HRV. In particular, some complexity measures based on symbolic dynamics, as well as a new measure, the renormalized entropy, detect abnormalities in the HRV of several patients who had been classified in the low-risk group by traditional methods. A combination of these complexity measures with the parameters in the frequency domain seems to be a promising way to obtain a more precise definition of the individual risk. These findings have to be validated in a representative number of patients.
We have studied bifurcation phenomena for the incompressible Navier-Stokes equations in two space dimensions with periodic boundary conditions. Fourier representations of velocity and pressure have been used to transform the original partial differential equations into systems of ordinary differential equations (ODE), to which numerical methods for the qualitative analysis of systems of ODE have then been applied, supplemented by the simulative calculation of solutions for selected initial conditions. Invariant sets, notably steady states, have been traced for varying Reynolds number or strength of the imposed forcing, respectively. A complete bifurcation sequence leading to chaos is described in detail, including the calculation of the Lyapunov exponents that characterize the resulting chaotic branch in the bifurcation diagram.
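The Lyapunov exponents mentioned at the end are typically obtained with the Benettin QR method, which evolves an orthonormal frame under the linearized flow. A minimal sketch, using the Lorenz system as a stand-in for the truncated Fourier ODE system and a simple Euler step (both assumptions made for brevity, not the paper's setup):

```python
import numpy as np

def lorenz_rhs_jac(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Vector field and Jacobian of the Lorenz system (a stand-in for
    the truncated Fourier ODE system; any ODE with a Jacobian works)."""
    f = np.array([sigma * (x[1] - x[0]),
                  x[0] * (rho - x[2]) - x[1],
                  x[0] * x[1] - beta * x[2]])
    J = np.array([[-sigma, sigma, 0.0],
                  [rho - x[2], -1.0, -x[0]],
                  [x[1], x[0], -beta]])
    return f, J

def lyapunov_spectrum(x0, dt=0.005, steps=40000):
    """Benettin method: evolve an orthonormal frame Q with the
    linearized flow, QR-renormalize each step, and average log|R_ii|."""
    x = np.asarray(x0, dtype=float)
    Q = np.eye(3)
    sums = np.zeros(3)
    for _ in range(steps):
        f, J = lorenz_rhs_jac(x)
        x = x + dt * f              # explicit Euler (use RK4 in production)
        Q = Q + dt * (J @ Q)
        Q, R = np.linalg.qr(Q)
        sums += np.log(np.abs(np.diag(R)))
    return sums / (steps * dt)
```

For the standard parameters this yields one positive, one near-zero and one strongly negative exponent, the signature of a chaotic branch.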
Strange nonchaotic attractors typically appear in quasiperiodically driven nonlinear systems. Two methods for their characterization are proposed. The first is based on the bifurcation analysis of the systems resulting from periodic approximations of the quasiperiodic forcing. Secondly, we propose to characterize their strangeness by calculating a phase sensitivity exponent, which measures the sensitivity with respect to changes of the phase of the external force. It is shown that phase sensitivity appears if there is a non-zero probability for positive local Lyapunov exponents to occur.
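The phase sensitivity test can be illustrated on the classic GOPY map, a standard quasiperiodically forced example (chosen here for illustration; the paper's systems may differ). One iterates the derivative of the state with respect to the driving phase along the orbit and records its running maximum, which grows without bound on a strange nonchaotic attractor but stays bounded on a smooth torus:

```python
import math

def gopy_phase_sensitivity(sigma, n=20000):
    """GOPY map: x' = 2*sigma*tanh(x)*cos(2*pi*theta),
    theta' = theta + omega (mod 1), omega the golden mean.
    Iterate r' = (dx'/dx)*r + (dx'/dtheta) along the orbit and
    return max_n |r_n| (the phase-sensitivity diagnostic)."""
    omega = (math.sqrt(5.0) - 1.0) / 2.0
    x, theta, r, rmax = 0.5, 0.1, 0.0, 0.0
    for _ in range(n):
        c = math.cos(2.0 * math.pi * theta)
        dfdx = 2.0 * sigma * c / math.cosh(x) ** 2
        dfdtheta = -4.0 * math.pi * sigma * math.tanh(x) * math.sin(2.0 * math.pi * theta)
        r = dfdx * r + dfdtheta
        x = 2.0 * sigma * math.tanh(x) * c
        theta = (theta + omega) % 1.0
        rmax = max(rmax, abs(r))
    return rmax
```

For sigma above 1 the attractor is strange nonchaotic and the maximum grows as a power of the iteration number; below 1 the attractor is the smooth torus x = 0 and the maximum saturates.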
Two deterministic processes leading to roughening interfaces are considered. It is shown that the dynamics of linear perturbations of turbulent regimes in coupled map lattices is governed by a discrete version of the Kardar-Parisi-Zhang equation. The asymptotic scaling behavior of the perturbation field is investigated in the case of large lattices. Secondly, the dynamics of an order-disorder interface is modelled with a simple two-dimensional coupled map lattice possessing a turbulent and a laminar state. It is demonstrated that in some range of parameters the spreading of the turbulent state is accompanied by kinetic roughening of the interface.
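The connection between perturbation dynamics in coupled map lattices and interface roughening can be sketched as follows; the lattice of fully chaotic logistic maps and the parameter values are illustrative assumptions, not the systems studied here. The logarithm of the linear perturbation field plays the role of the interface height, and its width grows in time:

```python
import numpy as np

def interface_width_growth(L=256, steps=400, eps=0.1, seed=1):
    """1D lattice of logistic maps f(x) = 4x(1-x) with diffusive
    coupling, evolved together with the linearized dynamics of a
    perturbation w. The 'interface' h_i = ln|w_i| roughens; the
    width W(t) = std(h) is recorded at every step."""
    rng = np.random.default_rng(seed)
    x = rng.random(L)
    w = 1e-9 * (1.0 + rng.random(L))
    widths = []
    for _ in range(steps):
        fx = 4.0 * x * (1.0 - x)
        dfx = 4.0 - 8.0 * x
        # diffusive coupling applied to the state and the tangent field
        x = (1.0 - eps) * fx + 0.5 * eps * (np.roll(fx, 1) + np.roll(fx, -1))
        dw = dfx * w
        w = (1.0 - eps) * dw + 0.5 * eps * (np.roll(dw, 1) + np.roll(dw, -1))
        h = np.log(np.abs(w))
        widths.append(h.std())
        w = w / np.abs(w).max()   # renormalize to avoid overflow
    return widths
```

The growing width of the log-perturbation profile is the numerical fingerprint of the kinetic roughening that, in the continuum limit, is described by a KPZ-type equation.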
The Voyager 2 Photopolarimeter experiment has yielded the highest-resolution data of Saturn's rings, exhibiting a wide variety of features. The B-ring region between 105,000 km and 110,000 km distance from Saturn has been investigated. It has a high matter density and contains no significant features visible by eye. Analysis with statistical methods has led us to the detection of two significant events. These features are correlated with the inner 3:2 resonances of the F-ring shepherd satellites Pandora and Prometheus, and may be evidence of large ring particles caught in the corotation resonances.
The present paper is concerned with the problem of approximating the exact solution to the magnetohydrodynamic (MHD) equations. The behaviour of a viscous, incompressible and resistive fluid is examined over a long period of time. Contents: 1 The magnetohydrodynamic equations; 2 Notations and precise functional setting of the problem; 3 Existence, uniqueness and regularity results; 4 Statement and proof of the main theorem; 5 The approximate inertial manifold; 6 Summary
Projection methods based on wavelet functions combine optimal convergence rates with algorithmic efficiency. The proofs in this paper utilize the approximation properties of wavelets and results from the general theory of regularization methods. Moreover, adaptive strategies can be incorporated still leading to optimal convergence rates for the resulting algorithms. The so-called wavelet-vaguelette decompositions enable the realization of especially fast algorithms for certain operators.
We report on bifurcation studies for the incompressible magnetohydrodynamic equations in three space dimensions with periodic boundary conditions and a temporally constant external forcing. Fourier representations of velocity, pressure and magnetic field have been used to transform the original partial differential equations into systems of ordinary differential equations (ODE), to which special numerical methods for the qualitative analysis of systems of ODE have then been applied, supplemented by the simulative calculation of solutions for selected initial conditions. In part of the calculations, the concept of approximate inertial manifolds has been applied in order to reduce the number of modes to be retained. For varying (increasing from zero) strength of the imposed forcing, or varying Reynolds number, respectively, time-asymptotic states, notably stable stationary solutions, have been traced. A primary non-magnetic steady state loses stability, in a Hopf bifurcation, to a periodic state with a non-vanishing magnetic field, showing the appearance of a generic dynamo effect. From this point on, the magnetic field is present for all values of the forcing. The Hopf bifurcation is followed by further, symmetry-breaking, bifurcations, leading finally to chaos. We pay particular attention to kinetic and magnetic helicities. The dynamo effect is observed only if the forcing is chosen such that a mean kinetic helicity is generated; otherwise the magnetic field diffuses away, and the time-asymptotic states are non-magnetic, in accordance with traditional kinematic dynamo theory.
We report on bifurcation studies for the incompressible Navier-Stokes equations in two space dimensions with periodic boundary conditions and an external forcing of the Kolmogorov type. Fourier representations of velocity and pressure have been used to approximate the original partial differential equations by a finite-dimensional system of ordinary differential equations, which has then been studied by means of bifurcation-analysis techniques. A special route into chaos observed for increasing Reynolds number or strength of the imposed forcing is described. It includes several steady states, traveling waves, modulated traveling waves, periodic and torus solutions, as well as a period-doubling cascade for a torus solution. Lyapunov exponents and Kaplan-Yorke dimensions have been calculated to characterize the chaotic branch. While studying the dynamics of the system in Fourier space, we also have transformed solutions to real space and examined the relation between the different bifurcations in Fourier space and topological changes of the streamline portrait. In particular, the time-dependent solutions, such as, e.g., traveling waves, torus, and chaotic solutions, have been characterized by the associated fluid-particle motion (Lagrangian dynamics).
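The Kaplan-Yorke dimension quoted alongside the Lyapunov exponents is a direct formula on the ordered spectrum: D_KY = j + (λ1 + … + λj)/|λ(j+1)|, with j the largest index for which the partial sum of exponents remains non-negative. A minimal helper (the exponent values used for illustration below are generic Lorenz-like numbers, not results of this paper):

```python
import numpy as np

def kaplan_yorke_dimension(lyap):
    """Kaplan-Yorke (Lyapunov) dimension from a Lyapunov spectrum:
    D = j + (l_1 + ... + l_j) / |l_{j+1}|, where j is the number of
    leading exponents whose cumulative sum is still non-negative."""
    lam = np.sort(np.asarray(lyap, dtype=float))[::-1]
    csum = np.cumsum(lam)
    nonneg = np.where(csum >= 0.0)[0]
    if len(nonneg) == 0:
        return 0.0                    # fully contracting: a fixed point
    j = nonneg[-1] + 1
    if j == len(lam):
        return float(j)               # no contracting direction left
    return j + csum[j - 1] / abs(lam[j])
```

For a spectrum like (0.9, 0, -14.6) this gives a fractal dimension slightly above 2, consistent with a thin chaotic attractor.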
The bifurcation behaviour of the 3D magnetohydrodynamic equations has been studied for external forcings of varying degree of helicity. With increasing strength of the forcing a primary non-magnetic steady state loses stability to a magnetic periodic state if the helicity exceeds a threshold value and to different non-magnetic states otherwise.
Part of the introduction: The task of writing a reliable and convincing paper on this topic is a very uneasy one because it is threefold: one has to know at least a bit about the agricultural sector, about biology (or, more precisely, ecology), and about the sometimes beneficial but often distorting consequences of human activities. And all of that has to be judged from the perspective of an economist who is aware of the steadily increasing uncertainties which are closely connected with post-modern sciences. Especially with regard to global, but also regional, environmental issues, neither the conventional applied sciences nor traditional professional consultancy deliver promising results. Today scientists have to tackle problems which are created by political necessities, overwhelmingly caused by short-term human behavior, due in part to a serious lack of information on the long-term behavioral consequences. In these issues, typically, the stakes are high, scientific facts uncertain, individual as well as collective values disputed, and political decisions very urgent. "In general, the post-normal situation is one where the traditional opposition of 'hard' facts and 'soft' values is inverted. Here we find decisions that are 'hard' in every sense, for which the scientific inputs are irremediably 'soft'" (FUNTOWICZ/RAVETZ, 1991, p. 138).
We demonstrate the occurrence of regimes with singular continuous (fractal) Fourier spectra in autonomous dissipative dynamical systems. The particular example is an ODE system at the accumulation points of bifurcation sequences associated with the creation of complicated homoclinic orbits. Two different mechanisms responsible for the appearance of such spectra are proposed. In the first case, when the geometry of the attractor is symbolically represented by the Thue-Morse sequence, both the continuous-time process and its discrete Poincaré map have singular power spectra. The other mechanism is due to the logarithmic divergence of the first return times near the saddle point; here the Poincaré map possesses a discrete spectrum, while the continuous-time process displays a singular one. A method is presented for computing the multifractal characteristics of the singular continuous spectra with the help of the usual Fourier analysis technique.
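The distinction between spectral types can be probed numerically via the growth of the largest periodogram peak with sequence length N: it scales as N² for a discrete component, as N for an absolutely continuous one, and as N^γ with 1 < γ < 2 for a singular continuous one. A sketch for the Thue-Morse sequence mentioned above (a generic illustration, not the multifractal method of the paper):

```python
import numpy as np

def thue_morse(n):
    """First 2**n terms of the +/-1 Thue-Morse sequence: the sign is
    determined by the parity of the number of 1-bits of the index."""
    idx = np.arange(2 ** n)
    parity = np.zeros(2 ** n, dtype=int)
    while idx.any():
        parity ^= idx & 1
        idx = idx >> 1
    return 1 - 2 * parity

def peak_scaling_exponent(n1=8, n2=16):
    """Estimate gamma from the tallest periodogram peak at two lengths
    N1 = 2**n1 and N2 = 2**n2: gamma = log2(P2/P1) / (n2 - n1).
    For Thue-Morse the dominant peak sits near frequency 1/3 and gives
    gamma close to log2(3) ~ 1.58, strictly between 1 and 2."""
    p1 = np.max(np.abs(np.fft.fft(thue_morse(n1))) ** 2)
    p2 = np.max(np.abs(np.fft.fft(thue_morse(n2))) ** 2)
    return np.log2(p2 / p1) / (n2 - n1)
```

The non-integer exponent is exactly the fingerprint of a singular continuous spectrum: the peaks sharpen faster than for noise but slower than for a periodic signal.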
The value concept of traditional resource economics is welfare. Therefore, sustainability of welfare is often taken to characterise our obligations to future generations. This paper argues that this view is inappropriate because it leaves no room for future generations' autonomy. Future generations should be free to make their own decisions. Consequently, freedom of choice is the appropriate value concept on which resource economics should be based. The concept of sustainability thus receives a new interpretation: sustainability is a principle of intertemporal distributive justice which requires equitable opportunities across generations.
Of Rawls's two principles of justice, only the second has received attention from economists. The second principle is concerned with the social and economic conditions in a just society. The first principle, however, has largely been neglected. It claims that all people in society should have equal basic liberties. In this paper Rawls's first principle is characterised in a freedom-of-choice framework. The analysis reveals conceptual problems of the Rawlsian approach to justice.
In modern political philosophy social contract theory is the most prominent approach to individual rights and fair institutions. According to social contract theory the system of rights in a society ought to be justified by reconstructing its basic features as a contract between the mutually unconcerned members of society. This paper explores whether social contract theory can successfully be applied to justify rights of future generations. Three competing views are analysed: Rawls's theory of justice, Hobbes's radical liberalism and Gauthier's bargaining framework based on the Lockean proviso.
This paper opens a series of discussion papers which report on the findings of a research project within the Phare-ACE Programme of the European Union. We, a group of Bulgarian, German, Greek, Polish and Scottish economists and agricultural economists, undertake this research to provide An Integrated Analysis of Industrial Policies and Social Security Systems in Countries in Transition.1 This paper outlines the basic motivation for such a study.
The concepts of food deficit, hunger, undernourishment and food security are discussed. Axioms and indices for the assessment of nutrition of individuals and groups are suggested. Furthermore a measure for food aid donor performance is developed and applied to a sample of bilateral and multilateral donors providing food aid for African countries.
Contents: Introduction: -Some Introductory Examples -Consumer-relevant Utility Dimensions -Communication Flow between the Relevant Actors -Risk Communication Dimensions -Complete Model -Aims of the Study Method: -Participants -Procedure -Content Analysis Results: -Sample Category 1: Food safety -Sample Category 2: Product Quality -Sample Category 3: Freedom of Choice -Sample Category 4: Decision Power over Foodstuffs -Strategy 1: Scientific Information Approach -Strategy 2: Balanced Information Approach -Strategy 3: Product Information Approach -Strategy 4: Classical Advertising -Strategy 5: Trust me I'm no Baddie -Strategy 6: Induction of Fear
Contents: Basic considerations on the development of guiding concepts (Leitbilder) -Guiding concepts in the context of a city marketing concept -A model for the development of guiding concepts -The guiding concept as an element in the development of a city marketing concept -Functions of guiding concepts -Requirements for guiding concepts -Examples of the development of guiding concepts for the cities of Hennigsdorf and Potsdam
Industrial policy measures can be a reasonable supplement to economic and social policy actions during the period of transformation of centrally planned economies. This paper shows the interplay between industrial and social policy. Special attention is given to the timing and sequencing of the transformation process. This approach is closely modeled on the example of New Zealand.
The study presents estimates and analyses of social expenditure in Poland. Changes which occurred during the transformation period reflect consciously launched political transformations as well as decisions taken in response to current needs and political pressures. This has an impact on the volume and structure of expenditures, which are under consolidation. The debate devoted to budget issues, which intensifies every autumn, testifies to increasing problems with correcting the guidelines for the distribution of expenditures. Even slight changes mean depriving a specified group of transfers, which under democratic conditions produces strong protests. A similar negative attitude to change became evident with regard to taxation. Recommendations presented in 1998 by the Polish government [see Ministry of Finance, 1998a, 1998b] introduce substantial modifications to the current tax system (withdrawal of tax exemptions and introduction of a tax-free minimum income) and thus met with massive reluctance from the major political factions. This study provides readers with information on the volume of public expenditures, the source of public revenue, that is taxes, and a thorough study of expenditures allocated to social goals. The analysis was carried out on the basis of our own estimates, which employ data acquired from the Ministry of Finance and the Ministry of Labour and Social Policy.
This paper analyses the macroeconomic developments which have taken place in the Bulgarian economy in the period 1993-1997. The paper also looks at the institutional arrangements and the process of economic policy-making in the country. In this context the problems the Bulgarian economy has experienced in the transition process towards a market-oriented economy are also studied. The paper proceeds as follows: Section 2 looks at the institutional arrangements and the process of economic policy-making through 1995. Section 3 studies the deep economic crisis in 1996 and points out what went wrong in that period. Section 4 continues studying the economic crisis of the Bulgarian economy as well as the problems in the transition process during the first half of 1997. Section 5 looks at the economic developments during the second half of 1997 and points to the prospects for growth in 1998. Section 6 deals with the Bulgarian financial institutions and the existing institutional arrangements. Finally, Section 7 concludes the paper.
In centrally planned economies state subsidies were the main instrument of supporting the economic sector. Most of them also had social functions (e.g. through subsidising the consumption of households). In the period of transition, with the withdrawal of the state from the economic decisions of enterprises, new social problems appeared. The paper analyses the process of granting state support to economic units - its scope and forms - in the 1990s.
As in all countries in transition, both the tax and the transfer system have been under serious reform pressure. The socialist systems were not able to fulfil the necessary functions of providing a certain degree of redistribution and social security, which are indispensable for socially oriented market economies. Increasing income and wage differentiation is one of the most important prerequisites for a market-oriented, ability-to-pay tax system. But in the transformation period, numerous quasi-legal or even illegal property transactions have taken place, leading to wealth concentration on the one hand while, as a consequence of the bankruptcy of socialism, enormous poverty problems have arisen on the other. For the political acceptance of the transformation process it is of utmost importance that an efficient and fair tax system is implemented and that social security is organised by the state at a level which secures at least the physical minimum of subsistence or – if economically possible – even a socio-cultural minimum. Whether the state should go further in providing compulsory social insurance systems has been a hotly debated topic for decades, even in the welfare and social states of the Western type. Whereas the basic security systems have to be financed by general tax revenue, for a compulsory social insurance system – due to its insurance character – special earmarked social security contributions are held necessary. Both public goods and services as well as at least basic security have to be financed by total tax revenue. For the acceptance and fairness of the whole system, the total redistributive effect of both sides of the budget – the tax system as well as the expenditure system – is decisive. In this paper we will concentrate on the revenue side, i.e. on the taxes as well as on the social security contributions. Adam Smith had already formulated some very simple tax norms which have been carried over into modern tax theory.
The equivalence as well as the ability-to-pay principle are basic yardsticks for every tax system in a democratically oriented market system, not to forget tax fairness. In the historical development process, equity-oriented measures have often produced an enormous complexity of the single taxes as well as of the whole tax system. Therefore, reconsidering the Smithian principles of simplicity and of minimum compliance costs for the taxpayer would press even many Western European tax systems to undergo serious reform processes, which are often delayed because of intense interest-group influence. Hence, a modern tax system is a simple one which consists only of a few single taxes which are easy to administer. Such a system consists of two main taxes, the income tax and the value added tax. Consequently, both taxes have been implemented in all countries in transition, and their implementation was fostered by the fact that both also constitute the typical components of the tax systems of the EU member states. Therefore such a harmonising tax reform is the most important prerequisite for becoming a membership candidate. Bulgaria also tried to follow this general pattern, reforming the income tax system starting in 1992 and replacing the old socialist turnover tax and excise duty system by the value added tax (VAT) in 1994. Especially with regard to the income tax system, the demand for simplicity has not been met yet. Complex rules for defining the tax base as well as a steeply progressive tax schedule have led to behavioural adaptations, which are further strengthened by the effects of a high social contribution burden laid predominantly on the employers. In the following, some concise descriptions of the tax and social contribution system are given; the paper closes with a summary, in which the impacts of the system are evaluated and some political recommendations for further reforms are presented.
After promising beginnings towards transformation in 1991, the Bulgarian economy fell into deep crisis in the period from 1995 to 1997. Social policy, already overstrained by the demands of transition, was unable to cope effectively with the rapidly spreading state of emergency. The following essay analyses the development of the social indicators and instruments of social security in the years 1990 to 1998. In addition to unemployment and unemployment insurance, the issues of pensions and poverty are also examined.
The dynamics of tail-like current sheets under the influence of small-scale plasma turbulence
(1999)
A 2D magnetohydrodynamic model of current-sheet dynamics caused by anomalous electrical resistivity as a result of small-scale plasma turbulence is proposed. The anomalous resistivity is assumed to be proportional to the square of the gradient of the magnetic pressure, as may be valid, for instance, in the case of lower-hybrid-drift turbulence. The initial resistivity pulse is given. Then the temporal and spatial evolution of the magnetic and electric fields, plasma density, pressure, convection and resistivity are considered. The motion of the induced electric field is discussed as an indicator of the plasma disturbances. The results, obtained using much-improved numerical methods, show a magnetic field evolution with X-line formation and plasma acceleration. In addition, three types of magnetohydrodynamic waves occur in the current sheet: fast magnetoacoustic waves of compression and rarefaction, as well as slow magnetoacoustic waves.
This thesis deals with the characterization of seismicity on the basis of earthquake catalogues. New methods of data analysis are developed that are intended to reveal whether the seismic dynamics is governed by a stochastic or a deterministic process, and what follows from this for the predictability of strong earthquakes. It is shown that seismically active regions are frequently characterized by nonlinear determinism. This at least opens the possibility of short-term prediction. The occurrence of seismic quiescence is often interpreted as a precursor phenomenon of strong earthquakes. A new method is presented that allows a systematic spatio-temporal mapping of periods of seismic quiescence. The statistical significance is determined by means of the concept of surrogate data. As a result, clear correlations between periods of seismic quiescence and strong earthquakes are obtained. Nevertheless, the significance is not high enough to permit a prediction in the sense of a statement about the location, time and magnitude of an expected main shock.
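The surrogate-data ("Ersatzdaten") significance test referred to in this abstract is commonly realized with phase-randomized Fourier surrogates, which preserve the linear correlation structure of a record while destroying any nonlinear determinism. A minimal sketch of this standard construction (not necessarily the exact variant used in the thesis):

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """FT surrogate: keep the amplitude spectrum of x (and hence its
    linear autocorrelation) but randomize all Fourier phases, which
    destroys nonlinear deterministic structure."""
    n = len(x)
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(spec))
    phases[0] = 0.0              # DC bin must stay real
    if n % 2 == 0:
        phases[-1] = 0.0         # Nyquist bin must stay real
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=n)
```

A nonlinearity statistic is then computed for the original record and for an ensemble of such surrogates; a value outside the surrogate distribution indicates significant nonlinear (possibly deterministic) structure.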
The attractiveness of foreign direct investment in Russia and Ukraine : a statistical analysis
(1999)
In this paper the potential for foreign investment in Russia and Ukraine and the real inflows are examined comparatively. The analysis showed that initially both countries enjoyed significant comparative advantages in attracting foreign capital. Since the foundation of the independent states in 1992, their attractiveness began to diverge dramatically. This difference is clearly explained by the determination of the Russian government to reform the economy earlier than the Ukrainian government did. The transition to a market economy is closely connected with the development of a favorable investment climate in both countries. It includes the foundation of a stable system of property rights and a conducive legal environment.
This paper presents in its first section a methodological introduction to the statistics of consumer prices in Georgia. The second section gives a general idea of the development of consumer prices from January 1994 to September 1999. A detailed regional analysis is added in Section 3. The fourth section analyses the development of consumer prices for the eight main groups included in the total CPI. Section 5 compares the changes in the Georgian CPI with the movements of the foreign exchange rate of the Georgian Lari. The paper ends with a summary, including a short outlook for the coming years.
Privatisation and ownership : the impact on firms in transition : survey evidence from Bulgaria
(1999)
Previous papers in this Special Series have described in detail the theoretical background and development patterns, along with some empirical results, for the privatisation processes in Bulgaria and Poland. A range of issues has been raised which demand closer empirical investigation. For this purpose, the research group has developed questionnaire studies for Bulgaria and Poland. In Bulgaria, the National Statistical Institute (NSI) carried out the case studies between February and April 1998. The problems of the questionnaire set-up were identified in a pre-test study, but unlike the Polish case, they led to only minor differentiation. Since financial limitations prevented a larger sample size, a sample of 61 mid-sized and large Bulgarian enterprises was selected. Failure to respond was not a serious problem, unlike with the Polish questionnaire; this is because the NSI has maintained good links to the enterprise sector, and management were prepared to give detailed answers, even on questions of their firms' financial status. However, as the Polish experience suggests, it has become obvious that the privatisation process is also associated with management's increasing reluctance to answer comparatively 'intimate' questions. Thus, future questionnaire studies must take a much higher rate of refusals into consideration. The pre-selection procedure in Bulgaria was determined by the project target, which sought to analyse the effects of the privatisation process on firms' behaviour during the transition process, and hence only firms which had already existed before the changes were included. For small and medium-sized enterprises (SMEs), most of which were founded after the changes, partly due to the legal processes of spontaneous privatisation, some empirical as well as analytical studies were carried out. Thus, the research group limited the scope of investigation to enterprises with more than 250 employees.
The underlying hypothesis is that employment problems are concentrated in larger firms, in particular amongst those still (partly) state-owned. Because of the former ownership structures and the relatively slower capacity for management change, the assumption is that state-owned enterprises (SOEs) which have only recently been privatised might still have traditional links to government even after privatisation. On the one hand, the SMEs are obviously more prone to, and linked with, market processes. As a result, they do not have the financial potential and incentives to follow job-hoarding strategies. On the other hand, there are almost no SMEs which are still state-owned. Hence, the prevailing opinion in the literature is that 'larger industrial firms were apt to be least efficient, most often producing inadequate and non-competitive products, with a high degree of under-utilisation of labour and most inflexible to change' (Jones & Nikolov 1997, p. 252). Thus, as mentioned above, though there may be some limitations with regard to firm representation, our sample characterises a number of enterprises that offer fertile ground for the analysis of firms' adjustment to the newly established market realities in a transition economy. Our study is unique in the sense that existing empirical studies on privatisation and enterprise restructuring generally cover the time period just before and after the initial stages of transition, e.g. 1988/89 to 1992. In those studies, samples of firms in the Czech Republic, Poland, Hungary and Bulgaria recognise that behavioural adaptations at the enterprise level had taken place just before the actual privatisation process materialised. Therefore, almost all of the firms under examination were still state-owned. The firms were usually divided according to their performance into 'good', 'average' and 'bad' enterprises.
The main findings of those early studies have shown that the macroeconomic adaptations (i.e., macro-level changes which induced micro-level adjustment by the firms), as well as emerging market structures, created enormous pressures which in turn influenced firms' economic behaviour, reallocation of resources and consequent restructuring. This evidence supports the hypothesis that the SOEs started restructuring and adjusting their behaviour and performance, in response to the harsh realities of more open markets, before privatisation actually started. In this paper, we seek to present some results on these developments in Bulgaria at the later stages of transition and privatisation (1992-1996). The aim of our questionnaire study is therefore to show the effects of the privatisation process and ownership on the behavioural adaptations of firms which had once been state-owned or continue to be owned by the state. The period under investigation is 1992 to 1996. For 1990 and 1991, the number of missing values is relatively high and, where relevant, we partly exclude these observations from our analysis. The paper contains seven sections. Section II outlines the macroeconomic environment in which our sample firms operate, provides some specifics of the Bulgarian privatisation process, and discusses data quality. Section III concentrates on the analysis of privatisation, the specific forms of ownership that resulted from it, and firm size. In Section IV, we describe the trends of the main economic variables within firms (such as employment, wages, labour productivity, etc.), and a number of proxies of firm viability, while Section V presents some regression results to corroborate the discussion of the previous section. Section VI gives an overview of survey results on the impact of enterprise-determined wage policy, trade union activity and membership, government control, and social benefits on enterprise restructuring. Section VII is a summary of our findings.
Privatisation in Central and Eastern Europe can be defined as the transfer of property rights from the State to private owners. The transfers are carried out so as to vest the new private owners with the full property rights of use and disposal over their property, these rights being guaranteed by the legal framework established by the rule of law. In Bulgaria, one can distinguish between three main stages in the process of privatisation. Each was shaped by the conflicting resolutions of frequently changing governments and meant to serve different political goals. The first stage (1990-1993) is characterised by the blockade of legal privatisation, as ‘spontaneous privatisation’ was accorded high priority. As in other former socialist countries, great emphasis was placed on the so-called commercialisation of state-owned enterprises. This did not involve the actual transfer of State property into private hands, but rather the independent transformation of state-owned enterprises into joint-stock companies, as well as the establishment of subsidiary companies.1 The goals of introducing more efficient structures and applying modern methods of production by transferring property to a more suitable management were not achieved. The second stage (1993-1995) is a cash privatisation, which laid the foundation for an employee/management buy-out, aided by the legal provisions granting concessions in the payment of instalments. The most important factor in the third stage of the process of privatisation in Bulgaria was the adoption of the mass privatisation model as an alternative method of procedure. In 1996, legal regulations for mass privatisation were introduced and a privatisation fund was established. In the meantime, the process has evolved into its fourth stage, during which a strategy of privatisation has been formulated under the supervision of a monetary council, and various agreements with the IMF and the World Bank are being adhered to. 
Privatisation is the decisive factor in the structural reforms of East European countries. The problem of converting State property into more effective forms of property management has been exacerbated by the additional demand of carrying out the far-reaching structural changes as swiftly as possible. The expectation that a large part of State property would be privatised within a short time in Bulgaria has not been met for a number of reasons. When the reforms began, the private sector was too weakly developed to become a catalyst for structural changes. Until 1995 there were no laws regulating the stock exchange or securities and bonds - the capital market was practically non-existent. Moreover, the various political parties could not agree upon the various models and objectives of privatisation. The population itself had no capital. The restitution of private ownership, which will not be discussed in further detail, was limited to the smallest businesses, traders and workshops. Furthermore, the Privatisation Agency and the State authorities employed to initiate the privatisation process lacked experience. Another problem hindering privatisation was that the laws passed lacked precision and were constantly subject to change.
New survey data for a panel of Polish firms is used to estimate employment and wage adjustments under various forms of ownership (insider vs. outsider) and asymmetric response to exogenous shocks. In contrast to earlier studies, dynamic panel data estimators (GMM) allow for endogeneity of observed variables and partial adjustment to shocks. Results differ from other findings in the transition literature: wages have little effect on dynamic labor demand and the firm-size wage effect is confirmed. Firms that expand employment have to pay significantly larger wage increases and rising sales add little to employment, suggesting labor hoarding. Declining sales, however, significantly reduce employment and privatization (or anticipation thereof) has the expected benefits.
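The estimation logic behind such dynamic panel estimators can be illustrated with a toy simulation. The data below are synthetic, not the Polish survey panel, and the Anderson-Hsiao instrumental-variable estimator shown here is a simple precursor of the GMM estimators used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, rho = 500, 8, 0.5          # firms, periods, true persistence

# Simulated panel: y_it = rho*y_{i,t-1} + alpha_i + eps_it,
# with an unobserved firm fixed effect alpha_i
alpha = rng.normal(0.0, 1.0, N)
y = np.zeros((N, T))
y[:, 0] = alpha + rng.normal(0.0, 1.0, N)
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + alpha + rng.normal(0.0, 1.0, N)

# Pooled OLS of y_t on y_{t-1}: biased upwards by the fixed effect
yl, yc = y[:, :-1].ravel(), y[:, 1:].ravel()
rho_ols = np.polyfit(yl, yc, 1)[0]

# Anderson-Hsiao IV: first-differencing removes alpha_i, and the
# twice-lagged level y_{t-2} instruments the differenced regressor
dy = np.diff(y, axis=1)                    # dy[:, k] = y_{k+1} - y_k
z = y[:, 1:-2]                             # instruments y_{t-2}
rho_iv = (z * dy[:, 2:]).sum() / (z * dy[:, 1:-1]).sum()
```

First-differencing removes the firm effect, and the twice-lagged level is a valid instrument because it is uncorrelated with the differenced error; full GMM estimators of the Arellano-Bond type use more such lags as instruments in the same spirit.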
We investigate numerically the appearance of heteroclinic behavior in a three-dimensional, buoyancy-driven fluid layer with stress-free top and bottom boundaries, a square horizontal periodicity with a small aspect ratio, and rotation at low to moderate rates about a vertical axis. The Prandtl number is 6.8. If the rotation is not too slow, the skewed-varicose instability leads from stationary rolls to a stationary mixed-mode solution, which in turn loses stability to a heteroclinic cycle formed by unstable roll states and connections between them. The unstable eigenvectors of these roll states are also of the skewed-varicose or mixed-mode type, and in some parameter regions skewed-varicose-like shearing oscillations as well as square patterns are involved in the cycle. Weak noise, which is always present, leads to irregular horizontal translations of the convection pattern and makes the dynamics chaotic, which is verified by calculating Lyapunov exponents. In the nonrotating case, the primary rolls lose stability, depending on the aspect ratio, to traveling waves or a stationary square pattern. We also study the symmetries of the solutions at the intermittent fixed points in the heteroclinic cycle.
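The Lyapunov-exponent test for chaos can be sketched on a far simpler system than the convection equations. The following toy example (the logistic map, not the paper's fluid model) estimates the largest exponent as the orbit average of log|f'(x)|:

```python
import math

def lyapunov_logistic(r, x0=0.3, n_transient=1000, n_iter=100000):
    """Largest Lyapunov exponent of the map x -> r*x*(1-x),
    estimated as the orbit average of log|f'(x)| = log|r*(1-2x)|."""
    x = x0
    for _ in range(n_transient):          # discard transient behaviour
        x = r * x * (1 - x)
    s = 0.0
    for _ in range(n_iter):
        x = r * x * (1 - x)
        s += math.log(abs(r * (1 - 2 * x)))
    return s / n_iter

# r = 4: chaotic, exponent approaches the analytic value ln 2
# r = 3.2: stable 2-cycle, exponent is negative
```

For the PDE system studied in the paper, the same averaging idea applies to the linearized flow along trajectories; a positive largest exponent confirms that the noise-perturbed heteroclinic dynamics is chaotic.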
A numerical bifurcation analysis of the electrically driven plane sheet pinch is presented. The electrical conductivity varies across the sheet such as to allow instability of the quiescent basic state at some critical Hartmann number. The most unstable perturbation is the two-dimensional tearing mode. Restricting the whole problem to two spatial dimensions, this mode is followed up to a time-asymptotic steady state, which proves to be sensitive to three-dimensional perturbations even close to the point where the primary instability sets in. A comprehensive three-dimensional stability analysis of the two-dimensional steady tearing-mode state is performed by varying parameters of the sheet pinch. The instability with respect to three-dimensional perturbations is suppressed by a sufficiently strong magnetic field in the invariant direction of the equilibrium. For a special choice of the system parameters, the unstably perturbed state is followed up in its nonlinear evolution and is found to approach a three-dimensional steady state.
In socialist economies, firms provided various social benefits such as child care, health care, food subsidies and housing. Using panel data from Bulgarian and Polish firms, this paper attempts to explain firm-specific provision of social benefits in the process of transition. We investigate empirically, with the help of qualitative response models, how ownership type and structure, firm size, profitability, change in management, foreign direct investment, wage and employment policies, union involvement and employee power have affected the provision of non-wage benefits.
Industrial policy and social strategy at the corporate level in Poland : questionnaire results
(1999)
This paper presents results from a survey of industrial policy of the state and the social security system at the corporate level in Poland. Previous reports in this area indicated preferable directions of research to be taken in order to prove various hypotheses of the purposefulness of an integral approach to industrial policy and social security in the analysis of economic processes in transition (see Weikard 1997). This paper summarises the results and draws conclusions from a questionnaire study on subsidies, social benefits and economic policy in Polish firms during the process of transformation. Our results and conclusions show the scope and character of the processes in the area of industrial and social policy in the period 1994 to 1997. The paper is divided into five parts. The first part concerns the aims and methodology of the questionnaire; it also gives a brief description of the sample. The second part shows how enterprises dealt with the issues of employment and wages in this period. The third part characterises industrial policy at the corporate level, while the next presents results from the survey of various social schemes pursued. The final part aims at an integral approach in the analysis of various processes taking place in Polish enterprises. The survey was conducted in the period April to June 1998. Its aim was to observe certain phenomena occurring at the corporate level. The questionnaire was distributed among the managers, directors and presidents of large-size enterprises, which had been selected to satisfy the following three criteria. Firstly, the number of employees had to be considerable (over 300 workers). This criterion was applied following the consideration that certain social phenomena are more conspicuous in enterprises with large manpower. Secondly, only operating enterprises were selected, the enterprises which closed down were disregarded. 
Finally, for the purposes of the survey the units differed as regards their legal situation and form of ownership. Out of over 1,800 enterprises, 370 units were drawn, to which we sent the questionnaire. Unfortunately, as many as 51.9% of the respondents refused co-operation, which to a certain extent puts the representativeness of the sample in question. In the end, 178 questionnaires were completed and returned for analysis. However, not all of these questionnaires included full answers to all of the 75 questions; therefore, while discussing the results of the survey we have indicated the number of relevant answers received.
The aim of this work is to present the results of analyses of the economic standing of the partnership companies which leased agricultural real estate from the Agricultural Property Agency of the State Treasury (APA) in 1996 and 1997. The analyses revealed the poor economic condition of the firms under investigation, in particular their low level of stability (the total debt ratio was 0.88 in 1996 and 0.96 in 1997) and their low solvency.
The economy in Poland has changed tremendously in recent years. Agricultural enterprises can defend their market share only if they are able to adjust quickly and efficiently to new circumstances. The most effective strategy to cope with changing operating conditions is a strategy of permanent development of human resources. This strategy must embrace a constant improvement of professional entrepreneurial skills and of management structures within organizations. Only such a strategy will allow businesses to hold on to or to increase their market standing despite strong competition. It will also allow them to meet, for instance, the newly introduced standardisation procedures for goods produced and supplied. This challenge holds especially true for agricultural enterprises that operate in highly competitive markets; markets which are currently characterised by a permanent surplus of supply over demand and a great number of businesses, mainly of small or medium size. Demand in the agricultural market is exerted by millions of consumers, each with different consumption habits and idiosyncratic preferences. Agricultural producers as a group are extremely sensitive to any kind of change in their environment. This is especially true in the current transition period, when a worsening of economic conditions can be observed: an economic downturn caused by the price of inputs increasing at a faster rate than agricultural product prices, and an ineffective agricultural policy. One of the agricultural production factors which allows for quick adjustment to change, and which can thus be used to improve one's market position, is the human factor. It is a well-known fact that a good level of professional skills, in combination with ongoing means of furthering and updating the professional qualifications of workers, can help to facilitate coping with market challenges.
The aim of this study is first to determine specific quality and quantity features of human resources in agricultural production, looking, inter alia, at changes in employment, specific employment structures and the number of recruitments and dismissals in a given period. A second aim is to undertake an efficiency analysis of limited partnerships which leased their agricultural real estate from the Agricultural Property Agency (APA) in the Voivodeship of Gorzów between 1995 and 1997. The first analysis was carried out using data collected from surveys amongst the owners of 36 privately owned farms and the managers of 14 limited partnerships. The data cover the period between 1994 and 1997. The incentive to conduct research on large farms in the Gorzów Voivodeship using the Data Envelopment Analysis method (DEA) lay in the outcome of various earlier studies on the financial standing of limited partnerships leasing real estate from APA in the Gorzów Voivodeship in 1996 and 1997. Apart from general adjustment processes, these inquiries showed that, in 1997, the economic condition of the farms analysed was worse than in 1996; the following ratios worsened: the financial support ratio, the liquidity ratio, the turnover ratio, the profitability ratio and the cost level ratio (see Świtłyk, 1998, 1999). These results determined the focus of our research, namely input efficiency in particular limited partnerships. The base of our calculations was a research model which consisted of efficiency measures focusing on firms' inputs. The analysis was carried out on a sample of 90 firms in the years between 1995 and 1997 (30 firms every year). Further data were collected from national statistical office reports on incomes, costs and financial results (F-O1) and statistics about land usage, crop area and yields (R-O5). In the next section we briefly discuss privatisation in agriculture. Sections 3 and 4 present results from our survey.
Section 5 concludes.
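The input-oriented, constant-returns (CCR) DEA envelopment problem used in such efficiency studies can be sketched as a small linear program. The three "farms" below are illustrative numbers, not the Gorzów sample:

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0.
    X: (n_units, n_inputs), Y: (n_units, n_outputs).
    Solves: min theta  s.t.  sum_j lambda_j x_j <= theta * x_j0,
                             sum_j lambda_j y_j >= y_j0,  lambda >= 0."""
    n, m = X.shape
    _, s = Y.shape
    c = np.zeros(1 + n)                    # decision vector [theta, lambdas]
    c[0] = 1.0
    # input constraints: sum_j lambda_j x_ij - theta*x_i,j0 <= 0
    A_in = np.hstack([-X[j0].reshape(m, 1), X.T])
    # output constraints: -sum_j lambda_j y_rj <= -y_r,j0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[j0]]),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

# Illustrative single-input/single-output "farms":
X = np.array([[2.0], [4.0], [3.0]])   # input, e.g. labour
Y = np.array([[2.0], [2.0], [3.0]])   # output, e.g. revenue
effs = [dea_input_efficiency(X, Y, j) for j in range(3)]
```

Frontier units obtain a score of 1; unit 1 uses twice the input of unit 0 for the same output, so its score is 0.5, meaning it could in principle produce its output with half its inputs.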
A multidisciplinary study has been carried out to contribute to the understanding of the geologic evolution of the largest known occurrence of ultra-high-pressure (UHP) rocks on Earth, the Dabie Shan of eastern China. Geophysical data, collected along a ca. 20 km E-W trending seismic line in the eastern Dabie Shan, indicate that the crust comprises three layers. The upper crust has a homogeneously low reflectivity and exhibits roughly subhorizontal reflectors down to ca. 15 km. It is therefore interpreted to portray a crustal UHP slab thrust over non-UHP crust. An abrupt change in intensity and geometry of observed reflectors marks the boundary of a mid- to lower crustal zone which is present down to ca. 33 km. This crustal zone likely represents cratonal Yangtze crust that was unaffected by the Triassic UHP event and which has acted as the footwall during exhumation of the crustal wedge. Strong and continuous reflectors occurring at ca. 33-40 km depth most likely trace the Moho at the base of the crust. Any trace of a crustal root that may have formed in response to collision tectonics is therefore not preserved. Additionally, a shallow tomographic velocity model based on inversion of the first arrivals was constructed. This model clearly images the distinct lithologies on both sides of the Tan Lu fault. Sediments to the east exhibit velocities of about 3.4-5.0 km s^-1, whereas the gneisses have 5.2-6.0 km s^-1. The geometry of the velocity isolines may trace the structures present in the rocks. Thus the sediments dip shallowly towards the fault, whereas isoclinal folds are imaged to occur in the gneisses. Field data from the UHP unit of the Dabie Shan enable the definition of basement-cover sequences that represent sections of the former passive margin of the Yangtze craton. One of the cover sequences, the Changpu unit, still displays a stratigraphic contact with basement gneisses, while the other, the Ganghe unit, includes no relative basement exposure.
The latter unit is in tectonic contact with the basement of the former unit via a greenschist-facies blastomylonite. The Changpu unit is chiefly constituted by calc-arenitic metasediments intercalated with meta-basalts, whereas the Ganghe unit contains arenitic-volcanoclastic metasediments that are likewise associated with meta-basalts. The basement comprises a variety of felsic gneisses, ranging from preserved eclogitic- to greenschist-facies paragenesis, and locally contains mafic-ultramafic meta-plutons in addition to minor basaltic rocks. Metabasites of all lithologies are eclogite-facies or are retrogressed equivalents, which, with the exception of those from the Ganghe unit, bear coesite and thus testify to an UHP metamorphic overprint. The mineral chemistry of the analysed samples reveals large compositional variations among the main minerals, i.e. garnet and omphacite, indicating either distinct protoliths or different degrees of interaction with their host-rocks. Contents of ferric iron in low total-Fe omphacites were determined by wet chemical titration and found to be rather high, i.e. 30-40%. However, an even more conservative estimate of 50% is applied in the corresponding calculations, in order to be comparable with previous studies. Textural constraints and compositional zonation patterns are compatible with equilibrium conditions during peak metamorphism followed by a retrogressive overprint. P-T data are calculated with special focus on the application of the garnet-omphacite-phengite barometer, combined with Fe-Mg exchange thermometers. Maximum pressures range from 42-48 kbar (for the Changpu unit) to ~37 kbar (for the Ganghe unit and basement rocks). Temperatures during the eclogite metamorphism reached ca. 750 °C. Although the sample suite reveals variable peak pressures, temperatures are in reasonable agreement.
Pressure differences are interpreted to be due to strongly Ca-dominated garnet (up to 50 mol % grossular in the Changpu unit) and modification of peak compositions during retrogressive metamorphism. The integrated geological data presented in this thesis allow the following conclusions: i) basement and cover rocks are present in the Dabie Shan, and both experienced UHP conditions; ii) the Dabie Shan is the metamorphic equivalent of the former passive margin of the Yangtze craton; iii) felsic gneisses undergoing UHP metamorphism are affected by volume changes due to phase transitions (qtz <-> coe), which directly influence the tectono-metamorphic processes; and iv) initial differences in temperature may account for the general lack of lower crustal rocks in UHP facies.
In her lifetime, Dymphna Cusack continually launched social critiques on the basis of her feminism, humanism, pacifism and anti-fascist/pro-Soviet stance. Recalling her experiences teaching urban and country schoolchildren in A Window in the Dark, she was particularly scathing of the Australian education system. Cusack agitated for educational reforms in the belief that Australian schools had failed to cultivate the desired liberal humanist subject: 'Neither their minds, their souls, nor their bodies were developed to make the Whole Man or the Whole Woman - especially the latter. For girls were encouraged to regard their place as German girls once did: Kinder, Küche, Kirche - Children, Kitchen and Church.' I suggest that postwar liberal humanism, with its goals of equality among the sexes and self-realisation or 'becoming Whole', created a popular demand for the romantic realism found in Cusack's texts. This twentieth-century form of humanism, evident in new ideas of the subject found in psychoanalysis, Western economic theory and Modernism, informed each of the global lobbies for peace and freedom that followed the destruction of World War II. Liberal ideas of the individual in society became synonymous with the humanist representations of gender in much of postwar, realistic literature in English-speaking countries. The individual, a free agent whose aim was to 'improve the life of human beings', was usually given the masculine gender. He was shown to achieve self-realisation through a commitment to the development of "mankind", either materially or spiritually. Significantly, the majority of Cusack's texts diverge from this norm by portraying women as social agents of change and indeed, as the central protagonists. Although the humanist goal of self-realisation seems to be best adapted to social realism, the generic conventions of popular romance also have humanist precepts, as Catherine Belsey has argued.
The Happy End is contrived through the heroine's mental submission to her physical desire for the previously rejected or criticised lover. As Belsey has noted, desire might be considered a deconstructive force which momentarily prevents the harmonious, permanent unification of mind and body, because the body, at the moment of seduction, does not act in accord with the mind. In popular romance, however, desire usually leads to a relationship or proper union of the protagonists. In Cusack's words, the heroine and hero become "whole men and women" through the "realistic" love story. Thus romance, like realism, seeks to stabilise gender relations, even though female desire is temporarily disruptive in the narrative. In the end, women and men become fully realised characters according to the generic conventions of the love story, or the consummation of potentially subversive desire. It stayed anxieties associated with women seeking independence and self-realisation, rather than traditional romance, which signalled a threat to existing gender relations. I propose that an analysis of gender in Cusack's fiction is warranted, since these apparently unified, humanist representations of romantic realism belie the conflicting aims and actions of the gendered subjects in this historical period. For instance, when we examine women's lives immediately after the war, we can identify in both East and West efforts initiated by women and men to reconstruct private/public roles. In order to understand how women were caught between "realism and romance", I plan to deconstruct gender within the paradigm of this hybrid genre. By adopting a feminist methodology, new insights may be gained into the conflictual subjectivity of both genders in the periods of the interwar years, the Pacific and World Wars, the Cold War, the Australian Aboriginal Movement at the time of the Vietnam War, as well as the moment of second-wave Western feminism in the seventies.
My definition of romantic realism and the discourses that inform it are examined in chapters two and three. A deconstruction of femininity and the female subject is pursued in chapter four, where I argue that Cusack's romantic narratives interact in different ways with social realism: romance variously fails, succeeds, is parodic or idealised. Applying Judith Butler's philosophical ideas to literary criticism, I argue that this hybridisation of genre prevents the fictional subject from performing his or her gender. Like the "real" subject - actual women in society - the fictional protagonist acts in an unintelligible fashion due to the multifarious demands and constraints on her gender. Consequently, the gendering of the sexed subject produces a multiplicity of genders: Cusack's women and men are constituted by differing and conflicting demands of the dichotomously opposed genres. Thus gender and sex become indefinite through their complex, inconsistent expression in the romantic realistic text. In other words, the popular combination of romance and realism leads to an explosion of the gender binary presupposed by both genres. Furthermore, a consideration of sexuality and race in chapter five leads to a more differentiated analysis of the humanist representations of gender in postwar fiction. The need to deconstruct these representations in popular and canonical literature is recapitulated in the final chapter of this dissertation.
Polymers at membranes
(2000)
The surface of biological cells consists of a lipid membrane and a large amount of various proteins and polymers, which are embedded in the membrane or attached to it. We investigate how membranes are influenced by polymers, which are anchored to the membrane by one end. The entropic pressure exerted by the polymer induces a curvature, which bends the membrane away from the polymer. The resulting membrane shape profile is a cone in the vicinity of the anchor segment and a catenoid far away from it. The perturbative calculations are confirmed by Monte-Carlo simulations. An additional attractive interaction between polymer and membrane reduces the entropically induced curvature. In the limit of strong adsorption, the polymer is localized directly on the membrane surface and does not induce any pressure, i.e. the membrane curvature vanishes. If the polymer is not anchored directly on the membrane surface, but in a non-vanishing anchoring distance, the membrane bends towards the polymer for strong adsorption. In the last part of the thesis, we study membranes under the influence of non-anchored polymers in solution. In the limit of pure steric interactions between the membrane and free polymers, the membrane curves towards the polymers (in contrast to the case of anchored polymers). In the limit of strong adsorption the membrane bends away from the polymers.
This paper deals with the Mie scattering kernels for multi-spectral data. The kernels may be represented in the form of power series. Furthermore, the singular-value spectrum and the degree of ill-posedness are numerically approximated as functions of the refractive index of the particles. A special hybrid regularization technique allows us to determine, via inversion, the particle distributions of different types.
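The connection between a decaying singular-value spectrum and the need for regularization can be sketched generically. In the toy example below, a Gaussian blur stands in for the Mie kernel (which we do not reproduce), and plain Tikhonov regularization stands in for the paper's hybrid technique:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60
t = np.linspace(0.0, 1.0, n)
# Illustrative smoothing kernel (a Gaussian blur, standing in for the
# Mie kernel), row-normalized so that it preserves constants
K = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.03 ** 2))
K /= K.sum(axis=1, keepdims=True)

x_true = np.exp(-((t - 0.4) ** 2) / 0.01)       # "particle distribution"
y = K @ x_true + 1e-3 * rng.normal(size=n)      # noisy measurement

s = np.linalg.svd(K, compute_uv=False)
decay = s[0] / s[-1]        # condition number: quantifies ill-posedness

# Naive inversion amplifies noise; Tikhonov damps small singular values:
#   x_alpha = argmin ||Kx - y||^2 + alpha*||x||^2
#           = (K^T K + alpha*I)^{-1} K^T y
alpha = 1e-4
x_naive = np.linalg.solve(K, y)
x_tik = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ y)

err_naive = np.linalg.norm(x_naive - x_true) / np.linalg.norm(x_true)
err_tik = np.linalg.norm(x_tik - x_true) / np.linalg.norm(x_true)
```

The ratio of largest to smallest singular value measures the degree of ill-posedness: naive inversion amplifies the measurement noise by roughly this factor, while the penalty term alpha*||x||^2 suppresses the contributions of the small singular values and recovers a usable distribution.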
A numerical MHD model is developed to investigate acceleration and heating of both thermal and auroral plasma. This is done for magnetospheric flux tubes in which intensive field-aligned currents flow. The geometry of each of these tubes is given by the empirical Tsyganenko model of the magnetospheric field. The parameters of the background plasma outside the flux tube as well as the strength of the electric field of magnetospheric convection are given. Performing the numerical calculations, the distributions of the plasma densities, velocities, temperatures, parallel electric field and current, and of the coefficients of thermal conductivity are obtained in a self-consistent way. It is found that EIC turbulence develops effectively in the thermal plasma. The parallel electric field develops under the action of the anomalous resistivity. This electric field accelerates both the thermal and the auroral plasma. The thermal turbulent plasma is also subjected to intensive heating. The increase of the plasma of the Earth's ionosphere. Besides, studying the growth and dispersion properties of oblique ion cyclotron waves excited in a drifting magnetized plasma, it is shown that under non-stationary conditions such waves may reveal the properties of bursts of polarized transverse electromagnetic waves at frequencies near the proton gyrofrequency.
Based on recent solar models, the excitation of ion-acoustic turbulence in the weakly collisional, fully and partially ionized regions of the solar atmosphere is investigated. Within the framework of hydrodynamics, conditions are found under which the heating of the plasma by ion-acoustic-type waves is more effective than Joule heating. Taking into account wave and Joule heating effects, a nonlinear differential equation is derived which describes the evolution of nonlinear ion-acoustic waves in the collisional plasma.
In this paper, an analysis of the excitation conditions of mirror waves propagating parallel to an external magnetic field is carried out. Analytical expressions for the dispersion relations of the waves are found for different plasma conditions. These relations may be used in future work to develop the nonlinear theory of mirror waves. In comparison with former analytical works, this study takes into account the influence of the magnetic field and finite ion temperatures parallel to the magnetic field. An application to the Earth's magnetosheath is presented.
In this thesis we use the gravitational lensing effect as a tool to tackle two rather different cosmological topics: the nature of the dark matter in galaxy halos, and the rotation of the universe. Firstly, we study the microlensing effect in the gravitational lens systems Q0957+561 and Q2237+0305. In these systems the light from the quasar shines directly through the lensing galaxy. Due to the relative motion of the quasar, the lensing galaxy, and the observer, compact objects in the galaxy or galaxy halo cause brightness fluctuations of the light from the background quasar. We compare light curve data from a monitoring program of the double quasar Q0957+561 at the 3.5m telescope at Apache Point Observatory from 1995 to 1998 (Colley, Kundic & Turner 2000) with numerical simulations to test whether the halo of the lensing galaxy consists of massive compact objects (MACHOs). This test was first proposed by Gott (1981). We can exclude MACHO masses from 10^-6 M_sun up to 10^-2 M_sun for quasar sizes of less than 3x10^14 h_60^-0.5 cm if the MACHOs make up at least 50% of the dark halo. Secondly, we present new light curve data for the gravitationally lensed quadruple quasar Q2237+0305 taken at the 3.5m telescope at Apache Point Observatory from June 1995 to January 1998. Although the images were taken under variable, often poor seeing conditions and with coarse pixel sampling, photometry is possible for the two brighter quasar images A and B with help from HST observations. We find independent evidence for a brightness peak in image A of 0.4 to 0.5 mag with a duration of at least 100 days, which indicates that microlensing has taken place in the lensing galaxy. Finally, we use the weak gravitational lensing effect to put limits on a class of Goedel-type rotating cosmologies described by Korotky & Obukhov (1996).
In weak lensing studies the shapes of thousands of background galaxies are measured and averaged to reveal coherent gravitational distortions of the galaxy shapes by foreground matter distributions, or by the large-scale structure of space-time itself. We calculate the predicted shear as a function of redshift in Goedel-type rotating cosmologies and compare this to the upper limit on cosmic shear gamma_limit of approximately 0.04 from weak lensing studies. We find that Goedel-type models cannot have rotations omega larger than H_0 = 6.1x10^-11 h_60/year if this shear limit is valid for the whole sky.
Contents:
1. Capitalist societies as market-bargaining societies on the basis of resources of action: The ideal-typical bargain between capital and labour; an alternative to Marx' theory of exploitation - Discussion of the model
2. A general typology of paths of societies in history and a characterisation of state socialism - People's capitalisms as perspective of development - What remains from Marx' ideas?
3. Variations of welfare capitalism after the decline of state socialism
3.1 National differences of welfare capitalism
3.2 Overall inequality of income and overall class consciousness
3.3 Explaining income inequality and variation in class consciousness by class and gender
3.3.1 A test of different class models in the FRG
3.3.2 Developing an international model of gendered occupational and employment status as bundles of resources of action
4. Summary
The objective of this thesis is to provide new space compaction techniques for testing or concurrent checking of digital circuits. In particular, the work focuses on the design of space compactors that achieve high compaction ratio and minimal loss of testability of the circuits. In the first part, the compactors are designed for combinational circuits based on the knowledge of the circuit structure. Several algorithms for analyzing circuit structures are introduced and discussed for the first time. The complexity of each design procedure is linear with respect to the number of gates of the circuit. Thus, the procedures are applicable to large circuits. In the second part, the first structural approach for output compaction for sequential circuits is introduced. Essentially, it enhances the first part. For the approach introduced in the third part it is assumed that the structure of the circuit and the underlying fault model are unknown. The space compaction approach requires only the knowledge of the fault-free test responses for a precomputed test set. The proposed compactor design guarantees zero-aliasing with respect to the precomputed test set.
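A minimal, generic illustration of space compaction and aliasing: a plain XOR parity tree (not the structural compactors developed in the thesis), applied to a hypothetical circuit with four outputs and a three-vector precomputed test set:

```python
from functools import reduce

def parity_compact(response):
    """Compact one m-bit output response to a single parity bit (XOR tree)."""
    return reduce(lambda a, b: a ^ b, response)

def signatures(test_responses):
    """Signature = tuple of parity bits over the whole precomputed test set."""
    return tuple(parity_compact(r) for r in test_responses)

# Fault-free responses of a hypothetical 4-output circuit to 3 test vectors
good = [(0, 1, 1, 0), (1, 1, 0, 0), (1, 0, 1, 1)]

# A single-bit error in any response flips the parity -> always detected
faulty_detected = [(0, 1, 1, 1), (1, 1, 0, 0), (1, 0, 1, 1)]

# A double-bit error in one response leaves the parity unchanged:
# this is aliasing -- the compactor masks the fault
faulty_aliased = [(0, 1, 0, 1), (1, 1, 0, 0), (1, 0, 1, 1)]
```

Any odd number of erroneous bits in a response flips the parity bit and is detected; an even number leaves the signature unchanged, which is exactly the aliasing that the zero-aliasing compactor designs rule out with respect to the precomputed test set.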
Contents: Production and Applications of Chitin and Chitosan Krill as a promising raw material for the production of chitin in Europe - Containerized plant for producing chitin - Preparation and characterization of chitosan from Mucorales - Chitosan from Absidia orchidis - Scaling up of lactic acid fermentation of prawn wastes in packed-bed column reactor for chitin recovery - Preparation of chitin by acetic acid fermentation - Inter-source reproducibility of the chitin deacetylation process - Comparative analysis of chitosans from insects and crustacea - Effect of the rate of deacetylation on the physico-chemical properties of cuttlefish chitosan - Deacetylation of chitin by fungal enzymes - Production of partially degraded chitosan with desired molecular weight - Chitin-containing materials Mycoton for wounds treatment - Biological activity of selected forms of chitosan - Application of chitosan on the preservation quality of cut flowers - Preparation and characterization of chitosan films: application in cell cultures - Transport phenomena in chitin gels - Symplex membranes of chitosan and sulphoethylcellulose - Preparation and use of chitosan-Ca pectinate pellets - Bioseparation of protein from cheese whey by using chitosan coagulation and ultrafiltration membranes - Preparation of silk fibroin/chitosan fiber - Preparation of paper sheets containing microcrystalline chitosan - Applications of chitosan in textile printing - Permanent modification of fibrous materials with biopolymers - Ion exchanger from chitosan - Chitosan in waste water treatment - The immobilization of tyrosinase on chitin and chitosan and its possible use in wastewater treatment - Utilization of modified chitosan in aqueous system treatment Biomaterials Chemical and preclinical studies on 6-oxychitin - Diverse biological effects of fungal chitin-glucan complex - Effect of concentration of neutralizing agent on chitosan membrane properties - Preliminary investigation of the compatibility of a 
chitosan-based peritoneal dialysis solution - Influence of chitosan on the growth of several cellular lines - A new chitosan containing phosphonic group with chelating properties - Biocompatibility of chitin materials using cell culture method Oral Administration of Chitosan Recent results in the oral administration of chitosan - Reduction of absorption of dietary lipids and cholesterol by chitosan, its derivatives and special formulations - Chitosan in weight reduction: results from a large scale consumer study - Conformation of chitosan ascorbic acid salt - Trimethylated chitosans as safe absorption enhancers for transmucosal delivery of peptide drugs - Chitosan derivates as intestinal penetration enhancers of the peptide drug buserelin in vivo and in vitro - Chitosan microparticles for oral vaccination: optimization and characterization - Effect of chitosan in enhancing drug delivery across buccal mucosa - Influence of chitosans on permeability of human intestinal epithelial (Caco-2) cells: The effect of molecular weight, degree of deacetylation and exposure time - Oral polymeric N-acetyl-D-glucosamine as potential treatment for patients with osteoarthritis - Clinicoimmunological efficiency of the chitin-containing drug Mycoton in complex treatment of a chronic hepatitis - Interactions of chitin, chitosan, N-laurylchitosan, and N-dimethylaminopropyl chitosan with olive oil - The chitin-containing preparation Mycoton in a pediatric gastroenterology case - Antifungal activity and release behaviour of cross-linked chitosan films incorporated with chlorhexidine gluconate - Release of N-acetyl-D-glucosamine from chitosan in saliva - Physical and Physicochemical Properties Recent approach of metal binding by chitosan and derivatives - As(V) sorption on molybdate-impregnated chitosan gel beads (MICB) - Influence of medium pH on the biosorption of heavy metals by chitin-containing sorbent Mycoton - Comparative studies on molecular chain parameters of polyelectrolyte 
chains: the stiffness parameter B and temperature coefficient of intrinsic viscosity of chitosans and poly(diallyldimethylammonium chloride) - Crystalline behavior of chitosan - The relationship between the crystallinity and degree of deacetylation of chitin from crab shell - Reversible water-swellable chitin gel: modulation of swellability - Syneresis aspects of chitosan based gel systems - In situ chitosan gelation using the enzyme tyrosinase - Preparation and characterization of controlling pore size chitosan membranes - Fabrication of porous chitin matrices - Changes of polydispersity and limited molecular weight of ultrasonic treated chitosan - A statistical evaluation of IR spectroscopic methods to determine the degree of acetylation of ?-chitin and chitosan - Products of alkaline hydrolysis of dibutyrylchitin: chemical composition and DSC investigation - Chitosan emulsification properties Chemistry of Chitin and Chitosan Chemically modified chitinous materials: preparation and properties - Progress on the modification of chitosan - The graft copolymerization of chitosan with methyl acrylate using an organohalide-manganese carbonyl coinitiator system - Grafting of 4-vinylpyridine, maleic acid and maleic anhydride onto chitin and chitosan - Peptide synthesis on chitosan/chitin - Graft copolymerization of methyl methacrylate onto mercapto-chitin - Thermal depolymerization of chitosan salts - Radiolysis and sonolysis of chitosan - two convenient techniques for a controlled reduction of molecular weight - Thermal and UV degradation of chitosan - Heat-induced physicochemical changes in highly deacetylated chitosan - Chitosan fiber and its chemical N-modification at the fiber state for use as functional materials - Preparation of a fiber reactive chitosan derivative with enhanced microbial activity - Chromatographic separation of rare earths with complexane types of chemically modified chitosan - The effects of detergents on chitosan - Chitosan-alginate PEC films 
prepared from chitosan of different molecular weights - Enzymology of Chitin and Chitosan Biosynthesis and Degradation Enzymes of chitin metabolism for the design of antifungals - Enzymatic degradation of chitin by microorganisms - Kinetic behaviours of chitinase isozymes - An acidic chitinase from gizzards of broiler (Gallus gallus L.) - On the contribution of conserved acidic residues to catalytic activity of chitinase B from Serratia marcescens - Detection, isolation and preliminary characterisation of a new hyperthermophilic chitinase from the anaerobic archaebacterium Thermococcus chitonophagus - Biochemical and genetic engineering studies on chitinase A from Serratia marcescens - Induction of chitinase production by Serratia marcescens, using a synthetic N-acetylglucosamine derivative - Libraries of chito-oligosaccharides of mixed acetylation patterns and their interactions with chitinases - Approaches towards the design of new chitinase inhibitors - Allosamidin inhibits the fragmentation and autolysis of Penicillium chrysogenum - cDNA encoding chitinase in the midge, Chironomus tentans - Extraction and purification of chitosanase from Bacillus cereus - Substrate binding mechanism of chitosanase from Streptomyces sp. 
N174 - Chitosanase-catalyzed hydrolysis of 4-methylumbelliferyl β-chitotrioside - A rust fungus turns chitin into chitosan upon plant tissue colonization to evade recognition by the host - Antibiotic kanosamine is an inhibitor of chitin biosynthesis in fungi - PCR amplification of chitin deacetylase genes - Amplification of antifungal effect of GlcN-6-P synthase and chitin synthase inhibitors - β-N-Acetylhexosaminidases: two enzyme families, two mechanisms - Purification and characterisation of chitin deacetylase from Absidia orchidis - Effect of aluminium ion on hydrolysis reaction of carboxymethyl- and dihydroxypropyl-chitin with lysozyme - Structure and function relationship of human N-acetyl-D-glucosamine 2-epimerase (renin binding protein) - Identification of active site residue(s)
We numerically investigate nonlinear asymmetric square patterns in a horizontal convection layer with up-down reflection symmetry. As a novel feature we find the patterns to appear via the skewed varicose instability of rolls. The time-independent nonlinear state is generated by two unstable checkerboard (symmetric square) patterns and their nonlinear interaction. As the buoyancy forces increase, the interacting modes give rise to bifurcations leading to a periodic alternation between a nonequilateral hexagonal pattern and the square pattern or to different kinds of standing oscillations.
Contents: 1 Introduction 2 Experiment 3 Data 4 Symbolic dynamics 4.1 Symbolic dynamics as a tool for data analysis 4.2 2-symbol coding 4.3 3-symbol coding 5 Measures of complexity 5.1 Word statistics 5.2 Shannon entropy 6 Testing for stationarity 6.1 Stationarity 6.2 Time series of cycle durations 6.3 Chi-square test 7 Control parameters in the production of rhythms 8 Analysis of relative phases 9 Discussion 10 Outlook
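The core steps listed above — a 2-symbol coding of a time series, word statistics, and a Shannon entropy as complexity measure — can be sketched as follows (a toy illustration with hypothetical data, not the analysis code of the study):

```python
import math
from collections import Counter

def two_symbol_coding(series):
    """Binary partition: '1' for values at or above the median, else '0'."""
    threshold = sorted(series)[len(series) // 2]
    return "".join("1" if x >= threshold else "0" for x in series)

def word_statistics(symbols, length):
    """Frequencies of all overlapping words of a given length."""
    return Counter(symbols[i:i + length]
                   for i in range(len(symbols) - length + 1))

def shannon_entropy(counts):
    """Shannon entropy (bits) of the empirical word distribution."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical cycle durations (seconds); not data from the study
durations = [0.50, 0.52, 0.48, 0.61, 0.49, 0.60, 0.51, 0.47]
entropy = shannon_entropy(word_statistics(two_symbol_coding(durations), 2))
```

A 3-symbol coding works the same way with two thresholds; longer words probe higher-order correlations at the cost of needing more data per word.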
Research on monolayers of amphiphilic lipids on aqueous solution is of basic importance in surface science. Due to the applicability of a variety of surface-sensitive techniques, floating insoluble monolayers are very suitable model systems for studying order, structure formation and material transport in two dimensions, as well as the interactions of molecules at the interface with ions or molecules in the bulk (keyword: 'molecular recognition'). From the behavior of monolayers, conclusions can be drawn about the properties of lipid layers on solid substrates or in biological membranes. This work deals with specific and fundamental interactions in monolayers, both on the molecular and on the microscopic scale, and with their relation to the lattice structure, morphology and thermodynamic behavior of monolayers at the air-water interface. Monolayers of long-chain fatty acids serve as the main model system, since their molecular interactions can be adjusted gradually by varying the degree of dissociation via the subphase pH. Besides the subphase composition, temperature and monolayer composition are also varied systematically to manipulate the molecular interactions. The change in the monolayer properties as a function of an external parameter is analyzed by means of isotherm and surface potential measurements, Brewster-angle microscopy, X-ray diffraction at grazing incidence and polarization-modulated infrared reflection absorption spectroscopy. To this end, a quantitative measure of the molecular interactions and of the chain conformational order is derived from the X-ray data.
The most interesting results of this work are the elucidation of the origin of regular polygonal and dendritic domain shapes, the various effects of cholesterol on molecular packing and lattice order of long chain amphiphiles, as well as the detection of an abrupt change in the head group bonding interactions, the chain conformational order and the phase transition pressure between tilted phases in fatty acid monolayers near pH 9. For the interpretation of the latter point a model of the head group bonding structure in fatty acid monolayers as a function of the pH value is developed.
Merapi volcano is one of the most active and dangerous volcanoes on Earth. Located in the central part of the island of Java (Indonesia), it poses a high risk to the densely populated surrounding area even during moderate eruptions. Due to the close relationship between volcanic unrest and the occurrence of seismic events at Mt. Merapi, monitoring Merapi's seismicity plays an important role in recognizing major changes in the volcanic activity. An automatic seismic event detection and classification system capable of characterizing the actual seismic activity in near real-time is an important tool that allows the scientists in charge to make immediate decisions during a volcanic crisis. In order to accomplish the task of detecting and classifying volcano-seismic signals automatically in the continuous data streams, a pattern recognition approach has been used. It is based on the method of hidden Markov models (HMM), a technique that has proven to provide high recognition rates at high confidence levels in classification tasks of similar complexity (e.g. speech recognition). Any pattern recognition system relies on an appropriate representation of the input data in order to allow a reasonable class decision by means of a mathematical test function. Based on experience from seismological observatory practice, a parametrization scheme for the seismic waveform data is derived using robust seismological analysis techniques. The wavefield parameters are summarized into a real-valued feature vector per time step. The time series of this feature vector forms the basis of the HMM-based classification system. In order to make use of discrete hidden Markov model (DHMM) techniques, the feature vectors are further processed by applying a de-correlating and prewhitening transformation and additional vector quantization. The seismic wavefield is finally represented as a discrete symbol sequence with a finite alphabet.
This sequence is subject to a maximum likelihood test against the discrete hidden Markov models, learned from a representative set of training sequences for each seismic event type of interest. The time period from July 1 to July 5, 1998, of rapidly increasing seismic activity prior to the eruptive cycle between July 10 and July 19, 1998, at Merapi volcano is selected for evaluating the performance of this classification approach. Three distinct types of seismic events according to the established classification scheme of the Volcanological Survey of Indonesia (VSI) were observed during this time period: shallow volcano-tectonic events VTB (h < 2.5 km), very shallow dome-growth-related seismic events MP (h < 1 km), and seismic signals connected to rockfall activity originating from the active lava dome, termed Guguran. The special configuration of the digital seismic station network at Merapi volcano, a combination of small-aperture array deployments surrounding Merapi's summit region, allows the use of array methods to parametrize the continuously recorded seismic wavefield. The individual signal parameters are analyzed to determine their relevance for the discrimination of seismic event classes. For each of the three observed event types a set of DHMMs has been trained using a selected set of seismic events with varying signal-to-noise ratios and signal durations. Additionally, two sets of discrete hidden Markov models have been derived for the seismic noise, incorporating the fact that the wavefield properties of the ambient vibrations differ considerably between working hours and night time. A total recognition accuracy of 67% is obtained, with a mean false alarm (FA) rate of 41 FA/class/day. However, variations in the recognition capabilities for the individual seismic event classes are significant.
Shallow volcano-tectonic signals (VTB) show very distinct wavefield properties and (at least in the selected time period) a stable time pattern of wavefield attributes. The DHMM-based classification therefore performs best for VTB-type events, with almost 89% recognition accuracy and 2 FA/day. Seismic signals of the MP and Guguran classes are more difficult to detect and classify. Around 64% of MP events and 74% of Guguran signals are recognized correctly. The average false alarm rate for MP events is 87 FA/day, whereas for Guguran signals 33 FA/day are obtained. However, the majority of missed events and false alarms for both MP and Guguran events are due to confusion errors between these two event classes in the recognition process. The confusion of MP and Guguran events is interpreted as a consequence of the selected parametrization approach for the continuous seismic data streams. The observed patterns of the analyzed wavefield attributes for MP and Guguran events show a significant amount of similarity, thus providing insufficient discriminative information for the numerical classification. The similarity of wavefield parameters obtained for seismic events of MP and Guguran type reflects the commonly observed dominance of path effects on seismic wave propagation in volcanic environments. The recognition rates obtained for the five-day period of increasing seismicity show that the presented DHMM-based automatic classification system is a promising approach to the difficult task of classifying volcano-seismic signals. Compared to standard signal detection algorithms, the most significant advantage of the discussed technique is that the entire seismogram is detected and classified in a single step.
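The classification step described above — a maximum likelihood test of a discrete symbol sequence against trained DHMMs — can be sketched with a scaled forward algorithm (a toy illustration with made-up single-state models and class names from the text; not the system used in the thesis):

```python
import math

def log_forward(obs, pi, A, B):
    """Scaled forward algorithm: log-likelihood of a discrete symbol
    sequence under an HMM with initial probabilities pi, transition
    matrix A and emission matrix B (B[i][k] = P(symbol k | state i))."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    s = sum(alpha)
    loglik = math.log(s)
    alpha = [a / s for a in alpha]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
        s = sum(alpha)          # rescale every step to avoid underflow
        loglik += math.log(s)
        alpha = [a / s for a in alpha]
    return loglik

def classify(obs, models):
    """Maximum-likelihood class decision over a dict name -> (pi, A, B)."""
    return max(models, key=lambda name: log_forward(obs, *models[name]))

# Hypothetical single-state models over a 2-symbol alphabet
models = {
    "VTB":     ([1.0], [[1.0]], [[0.9, 0.1]]),
    "Guguran": ([1.0], [[1.0]], [[0.2, 0.8]]),
}
```

In the real system the models have several states and a larger codebook alphabet from vector quantization, and separate noise models compete with the event models.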
In this work we consider statistical learning problems. A learning machine aims to extract information from a set of training examples such that it is able to predict the associated label on unseen examples. We consider the case where the resulting classification or regression rule is a combination of simple rules - also called base hypotheses. The so-called boosting algorithms iteratively find a weighted linear combination of base hypotheses that predict well on unseen data. We address the following issues: o The statistical learning theory framework for analyzing boosting methods. We study learning theoretic guarantees on the prediction performance on unseen examples. Recently, large margin classification techniques emerged as a practical result of the theory of generalization, in particular Boosting and Support Vector Machines. A large margin implies a good generalization performance. Hence, we analyze how large the margins in boosting are and find an improved algorithm that is able to generate the maximum margin solution. o How can boosting methods be related to mathematical optimization techniques? To analyze the properties of the resulting classification or regression rule, it is of high importance to understand whether and under which conditions boosting converges. We show that boosting can be used to solve large scale constrained optimization problems, whose solutions are well characterizable. To show this, we relate boosting methods to methods known from mathematical optimization, and derive convergence guarantees for a quite general family of boosting algorithms. o How to make Boosting noise robust? One of the problems of current boosting techniques is that they are sensitive to noise in the training sample. In order to make boosting robust, we transfer the soft margin idea from support vector learning to boosting. We develop theoretically motivated regularized algorithms that exhibit a high noise robustness. 
o How to adapt boosting to regression problems? Boosting methods are originally designed for classification problems. To extend the boosting idea to regression problems, we use the previous convergence results and relations to semi-infinite programming to design boosting-like algorithms for regression problems. We show that these leveraging algorithms have desirable theoretical and practical properties. o Can boosting techniques be useful in practice? The presented theoretical results are guided by simulation results either to illustrate properties of the proposed algorithms or to show that they work well in practice. We report on successful applications in a non-intrusive power monitoring system, chaotic time series analysis and a drug discovery process. --- Note: The author received the Michelson Prize, awarded by the Faculty of Mathematics and Natural Sciences of the University of Potsdam for the best doctoral thesis of the year 2001/2002.
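The basic boosting scheme discussed above — an iteratively reweighted linear combination of base hypotheses whose normalized margin governs generalization — can be sketched with classical AdaBoost over threshold stumps (a minimal toy version; the maximum-margin and regularized soft-margin variants of the thesis are not reproduced here):

```python
import math

def stump(x, t, pol):
    """Decision stump h(x) = pol if x >= t else -pol."""
    return pol if x >= t else -pol

def train_adaboost(X, y, thresholds, rounds):
    """Classical AdaBoost over threshold stumps (labels y in {-1, +1})."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []  # list of (alpha, threshold, polarity)
    for _ in range(rounds):
        # pick the stump with the smallest weighted error
        err, t, pol = min(
            (sum(wi for xi, yi, wi in zip(X, y, w)
                 if stump(xi, tt, pp) != yi), tt, pp)
            for tt in thresholds for pp in (+1, -1))
        err = max(err, 1e-12)                      # guard against log(0)
        alpha = 0.5 * math.log((1.0 - err) / err)  # hypothesis weight
        ensemble.append((alpha, t, pol))
        # upweight misclassified examples, then renormalize
        w = [wi * math.exp(-alpha * yi * stump(xi, t, pol))
             for xi, yi, wi in zip(X, y, w)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * stump(x, t, pol) for a, t, pol in ensemble)
    return 1 if score >= 0 else -1

def margin(ensemble, x, y):
    """Normalized margin y*f(x)/sum(alpha); large positive = confident."""
    total = sum(a for a, _, _ in ensemble)
    return y * sum(a * stump(x, t, pol) for a, t, pol in ensemble) / total

# Toy 1-D dataset (hypothetical): negatives below 1.5, positives above
X = [0.0, 1.0, 2.0, 3.0]
y = [-1, -1, 1, 1]
ens = train_adaboost(X, y, thresholds=[0.5, 1.5, 2.5], rounds=3)
```

The exponential reweighting is what links boosting to the optimization view: each round is a coordinate-descent step on an exponential loss, and the resulting margins are the quantities the thesis' margin analysis studies.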
Nonlinear multistable systems under the influence of noise exhibit a plethora of interesting dynamical properties. A medium noise level causes hopping between the metastable states. This attractor-hopping process is characterized by laminar motion in the vicinity of the attractors and erratic motion taking place on chaotic saddles, which are embedded in the fractal basin boundary. This leads to noise-induced chaos. The investigation of the dissipative standard map revealed the phenomenon of noise-induced preference of attractors: some attractors acquire a larger probability of occurrence than in the noise-free system, and this preference reaches a maximum at a certain noise level. Other attractors occur less often and, for sufficiently high noise, are completely extinguished. The complexity of the hopping process is examined for a model of two coupled logistic maps employing symbolic dynamics. With the variation of a parameter the topological entropy, which is used together with the Shannon entropy as a measure of complexity, rises sharply at a certain value. This increase is explained by a novel saddle-merging bifurcation, which is mediated by a snapback repellor. Scaling laws for the average time spent on one of the formerly disconnected parts and for the fractal dimension of the connected saddle describe this bifurcation in more detail. If a chaotic saddle is embedded in the open neighborhood of the basin of attraction of a metastable state, the required escape energy is lowered. This enhancement of noise-induced escape is demonstrated for the Ikeda map, which models a laser system with time-delayed feedback. The result is obtained using the theory of quasipotentials. This effect, as well as the two scaling laws for the saddle-merging bifurcation, is of experimental relevance.
One of the rules of thumb of colloid and surface physics is that most surfaces are charged when in contact with a solvent, usually water. This is the case, for instance, in charge-stabilized colloidal suspensions, where the surfaces of the colloidal particles are charged (usually with a charge of hundreds to thousands of e, the elementary charge), in monolayers of ionic surfactants sitting at an air-water interface (where the water-loving head groups become charged by releasing counterions), or in bilayers containing charged phospholipids (as in cell membranes). In this work, we look at some model systems that, although simplified versions of reality, are expected to capture some of the physical properties of real charged systems (colloids and electrolytes). We initially study the simple double layer, composed of a charged wall in the presence of its counterions. The charges at the wall are smeared out and the dielectric constant is the same everywhere. The Poisson-Boltzmann (PB) approach gives asymptotically exact counterion density profiles around charged objects in the weak-coupling limit of systems with low-valent counterions, surfaces with low charge density and high temperature (or small Bjerrum length). Using Monte Carlo simulations, we obtain the profiles around the charged wall and compare them with both Poisson-Boltzmann theory (in the low-coupling limit) and the novel strong-coupling (SC) theory in the opposite limit of high couplings. In the latter limit, the simulations show that SC indeed leads to asymptotically correct density profiles. We also compare the Monte Carlo data with previously calculated corrections to the Poisson-Boltzmann theory, and discuss in detail the methods used to perform the computer simulations. After studying the simple double layer in detail, we introduce a dielectric jump at the charged wall and investigate its effect on the counterion density distribution.
As we will show, the Poisson-Boltzmann description of the double layer remains a good approximation at low coupling values, while the strong-coupling theory is shown to lead to the correct density profiles close to the wall (and at all couplings). For very large couplings, only systems where the difference between the dielectric constants of the wall and of the solvent is small are well described by SC. Another experimentally relevant modification to the simple double layer is to make the charges at the plane discrete. The counterions are still assumed to be point-like, but we constrain the distance of approach between ions in the plane and counterions to a minimum distance D. The ratio between D and the distance between neighboring ions in the plane is, as we will see, one of the important quantities determining the influence of the discrete nature of the charges at the wall on the density profiles. Another parameter that plays an important role, as in the previous case, is the coupling: as we will demonstrate, systems with higher coupling are more subject to discretization effects than systems with a low coupling parameter. After studying the isolated double layer, we look at the interaction between two double layers. The system is composed of two equally charged walls at distance d, with the counterions confined between them. The charge at the walls is smeared out and the dielectric constant is the same everywhere. Using Monte Carlo simulations we obtain the inter-plate pressure in the global parameter space, and the pressure is shown to be negative (attraction) under certain conditions. The simulations also show that the equilibrium plate separation (where the pressure changes from attractive to repulsive) exhibits a novel unbinding transition. We compare the Monte Carlo results with the strong-coupling theory, which is shown to describe well the bound states of systems with moderate and high couplings.
The regime where the two walls are very close to each other is also shown to be well described by the SC theory. Finally, using a field-theoretic approach, we derive the exact low-density ("virial") expansion of a binary mixture of positively and negatively charged hard spheres (two-component hard-core plasma, TCPHC). The free energy obtained is valid for systems where the diameters d_+ and d_- and the charge valences q_+ and q_- of positive and negative ions are unconstrained, i.e., the same expression can be used to treat dilute salt solutions (where typically d_+ ~ d_- and q_+ ~ q_-) as well as colloidal suspensions (where the difference in size and valence between macroions and counterions can be very large). We also discuss some applications of our results.
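A stripped-down version of such a Monte Carlo calculation — a single counterion Metropolis-sampled in the bare linear wall potential, which is the leading-order strong-coupling picture and yields the exponential profile exp(-z/mu) — might look as follows (a sketch under these simplifying assumptions, not the production code of the thesis):

```python
import math
import random

def metropolis_wall_profile(mu=1.0, zmax=10.0, steps=200_000, seed=1):
    """Metropolis sampling of a single counterion in the bare wall
    potential beta*U(z) = z/mu (leading-order strong-coupling picture);
    the histogram should approach the SC profile ~ exp(-z/mu)."""
    rng = random.Random(seed)
    z = 1.0
    nbins = 50
    dz = zmax / nbins
    bins = [0] * nbins
    for _ in range(steps):
        znew = z + rng.uniform(-0.5, 0.5)
        if znew >= 0.0:  # the wall at z = 0 is impenetrable
            # accept downhill moves always, uphill with Boltzmann probability
            if znew <= z or rng.random() < math.exp(-(znew - z) / mu):
                z = znew
        if z < zmax:
            bins[int(z / dz)] += 1
    return bins

bins = metropolis_wall_profile()
```

The full simulations of the thesis sample many interacting counterions with the electrostatic pair energies (and image charges for the dielectric jump); only then do the deviations from both the PB and SC limits appear.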
Line driven winds are accelerated by the momentum transfer from photons to a plasma, by absorption and scattering in numerous spectral lines. Line driving is most efficient for ultraviolet radiation, and at plasma temperatures from 10^4 K to 10^5 K. Astronomical objects which show line driven winds include stars of spectral type O, B, and A, Wolf-Rayet stars, and accretion disks over a wide range of scales, from disks in young stellar objects and cataclysmic variables to quasar disks. It is not yet possible to solve the full wind problem numerically, and treat the combined hydrodynamics, radiative transfer, and statistical equilibrium of these flows. The emphasis in the present writing is on wind hydrodynamics, with severe simplifications in the other two areas. I consider three topics in some detail, for reasons of personal involvement. 1. Wind instability, as caused by Doppler de-shadowing of gas parcels. The instability causes the wind gas to be compressed into dense shells enclosed by strong shocks. Fast clouds occur in the space between shells, and collide with the latter. This leads to X-ray flashes which may explain the observed X-ray emission from hot stars. 2. Wind runaway, as caused by a new type of radiative waves. The runaway may explain why observed line driven winds adopt fast, critical solutions instead of shallow (or breeze) solutions. Under certain conditions the wind settles on overloaded solutions, which show a broad deceleration region and kinks in their velocity law. 3. Magnetized winds, as launched from accretion disks around stars or in active galactic nuclei. Line driving is assisted by centrifugal forces along co-rotating poloidal magnetic field lines, and by Lorentz forces due to toroidal field gradients. A vortex sheet starting at the inner disk rim can lead to highly enhanced mass loss rates.
One of the classical ways to describe the dynamics of nonlinear systems is to analyze their Fourier spectra. For periodic and quasiperiodic processes the Fourier spectrum consists purely of discrete delta functions. On the contrary, the spectrum of a chaotic motion is marked by the presence of a continuous component. In this work, we describe the peculiar state, neither regular nor completely chaotic, with a so-called singular continuous power spectrum. Our investigations concern various cases from very different fields where one meets singular continuous (fractal) spectra. The examples include both physical processes which can be reduced to iterated discrete mappings or even symbolic sequences, and processes whose description is based on ordinary or partial differential equations.
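A standard textbook example of a symbolic sequence with a singular continuous spectrum is the Thue-Morse sequence; the following sketch (illustrative only, not taken from the work) computes its discrete power spectrum, which concentrates on a self-similar hierarchy of peaks rather than on a few delta functions or a flat continuum:

```python
import cmath

def thue_morse(n):
    """First n terms of the Thue-Morse sequence, mapped to +1/-1."""
    return [1 - 2 * (bin(k).count("1") % 2) for k in range(n)]

def power_spectrum(x):
    """Naive O(N^2) discrete Fourier power spectrum |X_k|^2."""
    n = len(x)
    return [abs(sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / n)
                    for m in range(n))) ** 2
            for k in range(n)]

seq = thue_morse(256)
spec = power_spectrum(seq)
```

For a periodic signal the power would sit in a few delta-like peaks, and for white noise it would be flat on average; the Thue-Morse spectrum lies in between, the hallmark of a singular continuous (fractal) spectral measure.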
The dissertation examines aspects of the interlingual lexical processes of word recognition and word retrieval in Hungarian-German bilinguals learning English as a foreign language, with particular respect to the role of cognates. The purpose of the study is to describe the process of lexical activation in a polyglot system and to model the mental lexicons and the ways entries in the lexicons are connected and activated (e.g. activation through direct word association or through concept mediation). Three dependent variables are studied in quantitative and qualitative analyses of empirical data taken from experiments: rate of accurate responses, response latencies and phonological interference. The results of the experiments are interpreted in the framework of a multiple language network model.
This paper presents a critical survey of the state of research on multiple wh-constructions in Slavic. Its aim is to demonstrate the unclarity of the data and the contradictory nature of theories built on such "unclear" data. Contents: Historical background (Wachowicz 1974) - Some older approaches - The high point: Rudin's (1988) influential paper - Problems: - The problem of the reliability of data - The problem of the relevance of data - "Hard" facts: - Strict superiority effects in Bulgarian - Obligatory wh-fronting in Slavic - More recent approaches: - "Qualitative" approaches - "Quantitative" approaches - Alternative approaches
This study examines how the size of trade unions relative to the labor force impacts on the desirability of different organizational forms of self-financing unemployment insurance (UI) for workers, firms, and with reference to an efficiency criterion. For this purpose, we numerically compare the outcome of a model with a uniform payroll tax to a model where workers pay taxes according to their systematic risk of unemployment. Our results highlight the importance of the bargaining structure for the assessment of a particular UI scheme. Most importantly, it depends on the size of the unions whether efficiency favors a uniform or a differentiated UI scheme.
We examine the effects of regionalising the budget of unemployment insurance (UI) on wages, employment, and on UI parameters, which, for their part, determine the agents’ preferences concerning such a reform. A numerical example shows that, under reasonable assumptions, the intuition that the reform would enhance efficiency and improve the economic situation of agents from the low- unemployment region to the disadvantage of agents from the high- unemployment region is not valid in general.
Our analysis is concerned with the impact of a regionalisation of unemployment insurance (UI) on workers' preferences, on firms' profits, and on efficiency. The existence and the extent of UI are endogenously derived by maximising an objective function of the state. Three different types of regionalisation are considered, which differ with respect to the area the UI objective function is related to and with respect to the policy variable used to maximise it. It turns out that workers are always in favour of central UI, while it depends on the type of regionalisation whether firms are better off with regional or with central UI. The same somewhat surprising result applies to efficiency.
Distributed optimality
(2001)
In this thesis I propose a synthesis (Distributed Optimality, DO) between Optimality Theory (OT, Prince & Smolensky, 1993) and a morphological framework in a genuinely derivational tradition, namely Distributed Morphology (DM) as developed by Halle & Marantz (1993). By carrying over the apparatus of OT to DM, phenomena which are captured in DM by language-specific rules or features of lexical entries are given a more principled account in terms of ranked universal constraints. The DM side, in turn, makes two contributions, namely strong locality and impoverishment. The first gives rise to a simple formal interpretation of DO, while the latter is shown to be indispensable in any theoretically satisfying account of agreement morphology. The empirical basis of the work is the complex agreement morphology of genetically different languages. The theoretical focus is mainly on two areas: first, so-called direction marking, which is shown to be best treated in terms of constraints on feature realization; second, the effects of precedence constraints, which are claimed to regulate the status of agreement affixes as prefixes or suffixes and their respective order. A universal typology for the order of agreement categories by means of OT constraints is proposed.
The scientist as Weltbürger
(2001)
Subject of this work is the investigation of generic synchronization phenomena in interacting complex systems. These phenomena are observed, among others, in coupled deterministic chaotic systems. At very weak interactions between the individual systems a transition to weakly coherent behavior of the systems can take place. In coupled continuous-time chaotic systems this transition manifests itself in the effect of phase synchronization, in coupled chaotic discrete-time systems in the effect of a non-vanishing macroscopic mean field. The transition to coherence in a chain of locally coupled oscillators described by phase equations is investigated with respect to the symmetries in the system. It is shown that the reversibility of the system caused by these symmetries results in non-trivial topological properties of the trajectories, so that the system, although constructed to be dissipative, reveals quasi-Hamiltonian features over a whole parameter range, i.e. the phase volume is conserved on average and Lyapunov exponents come in symmetric pairs. The transition to coherence in an ensemble of globally coupled chaotic maps is described by the loss of stability of the disordered state. The method is to break the self-consistency of the macroscopic field and to characterize the ensemble, in analogy to an amplifier circuit with feedback, by a complex linear transfer function. This theory is then generalized to several cases of theoretical interest.
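The macroscopic mean field of an ensemble of globally coupled chaotic maps can be reproduced with a minimal sketch; the globally coupled logistic-map ensemble below is a standard textbook setup with illustrative parameters, not the specific systems of the thesis:

```python
import numpy as np

def simulate_mean_field(n=10000, eps=0.05, a=2.0, steps=600, burn=100, seed=0):
    """Ensemble of n globally coupled logistic maps x -> 1 - a*x**2.
    Each unit feels the instantaneous ensemble mean through coupling eps.
    Returns the time series of the macroscopic mean field after a transient."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, n)
    fields = []
    for t in range(steps):
        mean = x.mean()
        # convex combination of local dynamics and mean-field drive
        x = (1 - eps) * (1 - a * x**2) + eps * (1 - a * mean**2)
        if t >= burn:
            fields.append(x.mean())
    return np.array(fields)
```

For eps = 0 the units are independent and the mean-field fluctuations shrink with ensemble size; at finite coupling they can remain macroscopic, which is the non-vanishing mean field referred to above.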
Subject of this work is the investigation of universal scaling laws which are observed in coupled chaotic systems. Progress is made by replacing the chaotic fluctuations in the perturbation dynamics by stochastic processes. First, a continuous-time stochastic model for weakly coupled chaotic systems is introduced to study the scaling of the Lyapunov exponents with the coupling strength (coupling sensitivity of chaos). By means of the Fokker-Planck equation, scaling relations are derived and confirmed by results of numerical simulations. Next, the new effect of avoided crossing of Lyapunov exponents of weakly coupled disordered chaotic systems is described, which is qualitatively similar to the energy level repulsion in quantum systems. Using the scaling relations obtained for the coupling sensitivity of chaos, an asymptotic expression for the distribution function of small spacings between Lyapunov exponents is derived and compared with results of numerical simulations. Finally, the synchronization transition in strongly coupled spatially extended chaotic systems is shown to resemble a continuous phase transition, with the coupling strength and the synchronization error as control and order parameter, respectively. Using results of numerical simulations and theoretical considerations in terms of a multiplicative-noise partial differential equation, the universality classes of the two observed types of transition are determined (Kardar-Parisi-Zhang equation with saturating term, directed percolation).
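The Lyapunov exponents whose scaling is studied above can be computed directly for a concrete pair of weakly coupled maps. The sketch below uses the standard Benettin/QR method on two symmetrically coupled logistic maps; the map, the coupling form, and all parameters are illustrative assumptions, not the stochastic models of the thesis:

```python
import numpy as np

def lyapunov_spectrum(eps, steps=20000, burn=1000, seed=1):
    """Both Lyapunov exponents of two symmetrically coupled logistic maps
    f(u) = 1 - 2u**2 (each with exponent ln 2 when uncoupled), computed by
    accumulating log|diag(R)| of successive QR factorizations of the
    tangent dynamics (Benettin method)."""
    rng = np.random.default_rng(seed)
    x, y = rng.uniform(-1.0, 1.0, 2)
    f = lambda u: 1.0 - 2.0 * u**2
    df = lambda u: -4.0 * u
    Q = np.eye(2)
    sums = np.zeros(2)
    for t in range(steps + burn):
        # Jacobian of (x, y) -> ((1-eps) f(x) + eps f(y), (1-eps) f(y) + eps f(x))
        J = np.array([[(1 - eps) * df(x), eps * df(y)],
                      [eps * df(x), (1 - eps) * df(y)]])
        x, y = (1 - eps) * f(x) + eps * f(y), (1 - eps) * f(y) + eps * f(x)
        Q, R = np.linalg.qr(J @ Q)
        if t >= burn:
            sums += np.log(np.abs(np.diag(R)))
    return sums / steps
```

At eps = 0 the two exponents are degenerate at ln 2; switching on a weak coupling splits them, which is the starting point of the coupling-sensitivity and avoided-crossing analyses. A useful sanity check is the exact identity λ1 + λ2 = ⟨ln|det J|⟩ = ln(1 − 2eps) + ⟨ln|f′(x)|⟩ + ⟨ln|f′(y)|⟩.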
Structural and spectroscopical study of crystals of 1,3,4-oxadiazole derivatives at high pressure
(2002)
In recent years the search for new materials of technological interest has given new impulses to the study of organic compounds. Organic substances possess a great number of advantages, such as the possibility of adjusting their properties for a given purpose by different chemical and physical techniques in the preparation process. Oxadiazole derivatives are interesting due to their use as materials for light-emitting diodes (LEDs) as well as scintillators. The physical properties of a solid depend on its structure. Different structures induce different intra- and intermolecular interactions. An advantageous method to modify the intra- as well as the intermolecular interactions of a given substance is the application of high pressure. Moreover, this method leaves the chemical nature of the compound unaffected. We have investigated the influence of high pressure and high temperature on the supramolecular structure of several oxadiazole derivatives in the crystalline state. From the results of this investigation an equation of state for these crystals was determined. Furthermore, the spectroscopic features of these materials under high pressure were characterized.
Deep convection is an essential part of the circulation in the North Atlantic Ocean. It influences the northward heat transport achieved by the thermohaline circulation. Understanding its stability and variability is therefore necessary for assessing climatic changes in the area of the North Atlantic. This thesis aims at improving the conceptual understanding of the stability and variability of deep convection. Observational data from the Labrador Sea show phases with and without deep convection. A simple two-box model is fitted to these data. The results suggest that the Labrador Sea has two coexisting stable states, one with regular deep convection and one without deep convection. This bistability arises from a positive salinity feedback that is due to the net freshwater input into the surface layer. The convecting state can easily become unstable if the mean forcing shifts to warmer or less saline conditions. The weather-induced variability of the external forcing is included into the box model by adding a stochastic forcing term. It turns out that deep convection is then switched "on" and "off" frequently. The mean residence time in either state is a measure of its stochastic stability. The stochastic stability depends smoothly on the forcing parameters, in contrast to the deterministic (non-stochastic) stability which may change abruptly. The mean and the variance of the stochastic forcing both have an impact on the frequency of deep convection. For instance, a decline in convection frequency due to a surface freshening may be compensated for by an increased heat flux variability. With a further simplified box model some stochastic stability features are studied analytically. A new effect is described, called wandering monostability: even if deep convection is not a stable state due to changed forcing parameters, the stochastic forcing can still trigger convection events frequently. 
The analytical expressions explicitly show how wandering monostability and other effects depend on the model parameters. This dependence is always exponential for the mean residence times, but for the probability of long nonconvecting phases it is exponential only if this probability is small. It is to be expected that wandering monostability is relevant in other parts of the climate system as well. All in all, the results demonstrate that the stability of deep convection in the Labrador Sea reacts very sensitively to the forcing. The presence of variability is crucial for understanding this sensitivity. Small changes in the forcing can already significantly lower the frequency of deep convection events, which presumably strongly affects the regional climate. ---- Note: The author received the Carl Ramsauer Prize 2003, awarded by the Physikalische Gesellschaft zu Berlin for the best dissertation at each of the four universities Freie Universität Berlin, Humboldt-Universität zu Berlin, Technische Universität Berlin, and Universität Potsdam.
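The noise-induced switching between a convecting and a non-convecting state can be illustrated with a generic bistable Langevin system. The toy double-well model below (potential shape, parameters, and time units are all illustrative assumptions, not the fitted box model of the thesis) shows how stochastic forcing triggers transitions that a deterministic run never makes:

```python
import numpy as np

def count_switches(sigma, x0=1.0, dt=0.01, steps=200_000, seed=2):
    """Euler-Maruyama integration of dx = (x - x**3) dt + sigma dW.
    The wells at x = +1 ('convection on') and x = -1 ('convection off')
    stand in for the two stable states; returns the number of transitions
    between the two basins (sign changes of x)."""
    rng = np.random.default_rng(seed)
    noise = sigma * np.sqrt(dt) * rng.standard_normal(steps)
    x = x0
    on = np.empty(steps, dtype=bool)
    for i in range(steps):
        x += (x - x**3) * dt + noise[i]
        on[i] = x > 0
    return int(np.count_nonzero(on[1:] != on[:-1]))
```

Without noise (sigma = 0) the system stays in its initial well forever; with noise the mean residence time in each well depends exponentially on the noise intensity, mirroring the exponential parameter dependence of the mean residence times stated above.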
Semi-arid areas are, due to their climatic setting, characterized by small water resources. An increasing water demand as a consequence of population growth and economic development, as well as decreasing water availability in the course of possible climate change, may further aggravate the water scarcity that often already exists in these areas under present-day conditions. Understanding the mechanisms and feedbacks of complex natural and human systems, together with the quantitative assessment of future changes in volume, timing and quality of water resources, is a prerequisite for the development of sustainable measures of water management to enhance the adaptive capacity of these regions. For this task, dynamic integrated models, containing a hydrological model as one component, are indispensable tools. The main objective of this study is to develop a hydrological model for the quantification of water availability in view of environmental change over a large geographic domain of semi-arid environments. The study area is the Federal State of Ceará (150,000 km²) in the semi-arid north-east of Brazil. Mean annual precipitation in this area is 850 mm, falling in a rainy season with a duration of about five months. As the area is mainly characterized by crystalline bedrock and shallow soils, surface water provides the largest part of the water supply. The area has recurrently been affected by droughts which caused serious economic losses and social impacts like migration from the rural regions. The hydrological model Wasa (Model of Water Availability in Semi-Arid Environments) developed in this study is a deterministic, spatially distributed model composed of conceptual, process-based approaches. Water availability (river discharge, storage volumes in reservoirs, soil moisture) is determined with daily resolution. Sub-basins, grid cells or administrative units (municipalities) can be chosen as spatial target units.
The administrative units enable the coupling of Wasa in the framework of an integrated model which contains modules that do not work on the basis of natural spatial units. The target units mentioned above are disaggregated in Wasa into smaller modelling units within a new multi-scale, hierarchical approach. The landscape units defined in this scheme capture in particular the effect of structured variability of terrain, soil and vegetation characteristics along toposequences on soil moisture and runoff generation. Lateral hydrological processes at the hillslope scale, such as the reinfiltration of surface runoff, which are of particular importance in semi-arid environments, can thus be represented in a simplified form even within the large-scale model. Depending on the resolution of available data, small-scale variability is not represented explicitly with geographic reference in Wasa, but by the distribution of sub-scale units and by statistical transition frequencies for lateral fluxes between these units. Further model components of Wasa which respect specific features of semi-arid hydrology are: (1) A two-layer model for evapotranspiration comprises energy transfer at the soil surface (including soil evaporation), which is of importance in view of the mainly sparse vegetation cover. Additionally, vegetation parameters are differentiated in space and time in dependence on the occurrence of the rainy season. (2) The infiltration module represents in particular infiltration-excess surface runoff as the dominant runoff component. (3) For the aggregate description of the water balance of reservoirs that cannot be represented explicitly in the model, a storage approach respecting different reservoir size classes and their interaction via the river network is applied. (4) A model for the quantification of water withdrawal by water use in different sectors is coupled to Wasa.
(5) A cascade model for the temporal disaggregation of precipitation time series, adapted to the specific characteristics of tropical convective rainfall, is applied to generate rainfall time series of higher temporal resolution. All model parameters of Wasa can be derived from physiographic information of the study area; thus, model calibration is largely unnecessary. Applications of Wasa to historical time series generally result in a good model performance when comparing the simulation results of river discharge and reservoir storage volumes with observed data for river basins of various sizes. The mean water balance as well as the high interannual and intra-annual variability is reasonably represented by the model. Limitations of the modelling concept are most marked for sub-basins with a runoff component from deep groundwater bodies, whose dynamics cannot be satisfactorily represented without calibration. Further results of model applications are: (1) Lateral processes of redistribution of runoff and soil moisture at the hillslope scale, in particular reinfiltration of surface runoff, lead to markedly smaller discharge volumes at the basin scale than the simple sum of the runoff of the individual sub-areas. Thus, these processes have to be captured also in large-scale models. The different relevance of these processes under different conditions is demonstrated by a larger percentage decrease of discharge volumes in dry as compared to wet years. (2) Precipitation characteristics have a major impact on the hydrological response of semi-arid environments. In particular, rainfall intensities that are underestimated in the rainfall input due to the rough temporal resolution of the model and due to interpolation effects, and the consequently underestimated runoff volumes, have to be compensated in the model. A scaling factor in the infiltration module or the use of disaggregated hourly rainfall data shows good results in this respect.
The simulation results of Wasa are characterized by large uncertainties. These are, on the one hand, due to uncertainties of the model structure in adequately representing the relevant hydrological processes. On the other hand, they are due to uncertainties of input data and parameters, particularly in view of the low data availability. Of major importance is: (1) The uncertainty of rainfall data with regard to their spatial and temporal pattern has, due to the strongly non-linear hydrological response, a large impact on the simulation results. (2) The uncertainty of soil parameters is in general of larger importance for model uncertainty than the uncertainty of vegetation or topographic parameters. (3) The effect of uncertainty of individual model components or parameters usually differs between years with rainfall volumes above or below the average, because individual hydrological processes are of different relevance in the two cases. Thus, the uncertainty of individual model components or parameters is of different importance for the uncertainty of scenario simulations with increasing or decreasing precipitation trends. (4) The most important factor of uncertainty for scenarios of water availability in the study area is the uncertainty in the results of the global climate models on which the regional climate scenarios are based. For the given data, both a marked increase and a marked decrease in precipitation are plausible. Results of model simulations for climate scenarios until the year 2050 show that a possible future change in precipitation volumes causes a percentage change in runoff volumes larger by a factor of two to three. In the case of a decreasing precipitation trend, the efficiency of new reservoirs for securing water availability tends to decrease in the study area because of the interaction of the large number of reservoirs in retaining the overall decreasing runoff volumes.
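Component (5), the temporal disaggregation of daily rainfall, can be sketched as a mass-conserving multiplicative random cascade. The branching number, the weight distribution, and the dry-spell probability below are illustrative assumptions, not the calibrated cascade of the thesis:

```python
import numpy as np

def disaggregate(daily_total, levels=3, p_dry=0.3, seed=3):
    """Split a daily rainfall total into 2**levels sub-intervals
    (3 levels -> 8 three-hourly values). At each cascade level every
    amount is divided between two halves with weights (w, 1-w); with
    probability p_dry all mass goes to one side, mimicking the
    intermittency of convective rain. Total mass is conserved exactly."""
    rng = np.random.default_rng(seed)
    amounts = np.array([daily_total], dtype=float)
    for _ in range(levels):
        w = rng.uniform(0.2, 0.8, size=amounts.size)
        dry = rng.random(amounts.size) < p_dry
        w[dry] = np.round(rng.random(np.count_nonzero(dry)))  # weight 0 or 1
        amounts = np.column_stack([amounts * w, amounts * (1 - w)]).ravel()
    return amounts
```

Because each split distributes the parent amount with weights summing to one, the disaggregated series always adds up to the daily total, while the dry branches reproduce the peaked, intermittent intensity structure that matters for infiltration-excess runoff.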
Value education of youth
(2002)
The value priorities of students and teachers were measured at eight different schools at the beginning and the end of the school year 2000/2001. This study once again confirmed the theoretical model of a universal structure of human values (Schwartz, 1992). At both measurement times, similar gender differences as well as positive correlations between religiosity and school commitment were found. The students from the non-religious schools rated Hedonism as their highest and Tradition as their lowest value priority. In the religious schools, Benevolence and Self-Direction were the highest values, whereas Power was found to be the lowest value priority. The change in the values Conformity, Hedonism, and Universalism was predicted by both the students' religiosity and their type of school. The change in the values Power, Tradition, Benevolence, and Achievement, however, was mainly predicted by their religiosity. In three out of four schools the student-teacher similarity correlated positively with the school commitment of the students. Across all schools, student-teacher similarity correlated positively with academic achievement.
New polymers and low-molecular compounds suitable for organic light-emitting devices and organic electronic applications have been synthesised in recent years in order to obtain electron-transport characteristics compatible with the requirements of real plastic devices. However, despite their technological importance and the relevant progress in device manufacture, fundamental physical properties of this class of materials are still insufficiently studied. In particular, the extensive presence of distributions of localised states inside the band gap has a deep impact on their electronic properties. The presence of shallow traps, as well as the influence of the sample preparation conditions on deep and shallow localised states, has until now not been systematically explored. Thermal techniques are powerful tools for studying localised levels in inorganic and organic materials. Thermally stimulated luminescence (TSL), thermally stimulated currents (TSC) and thermally stimulated depolarisation currents (TSDC) give access to shallow and deep trap levels and permit, in synergy with dielectric spectroscopy (DES), the study of polarisation and depolarisation effects. We studied, by means of numerical simulations, the first- and second-order kinetic equations, characterised by negligible and strong re-trapping respectively. We included in the equations Gaussian, exponential and quasi-continuous distributions of localised states. The shapes of the theoretical peaks have been investigated by systematic variation of the two main parameters of the equations, i.e. the trap depth E and the frequency factor s, and of the parameters regulating the distributions, in particular, for a Gaussian distribution, the distribution width σ and the integration limits. The theoretical findings have been applied to experimental glow curves of thin films of polymers and low-molecular compounds.
Polyphenylquinoxalines, trisphenylquinoxalines and oxadiazoles, studied because of their technological relevance, show complex thermograms with several levels of localised states and depolarisation peaks. In particular, well-ordered films of an amphiphilic substituted 2-(p-nitrophenyl)-5-(p-undecylamidophenyl)-1,3,4-oxadiazole (NADPO) are characterised by rich TSL thermograms. A wide region of shallow traps, localised at Em = 4 meV, has been successfully fitted by means of a first-order kinetic equation with a Gaussian distribution of localised states. Two further peaks of different origin have been characterised. The peaks at Tm = 221.5 K and Tm = 254.2 K have activation energies of Em = 0.63 eV and Em = 0.66 eV, frequency factors s = 2.4×10^12 s^-1 and s = 1.85×10^11 s^-1, and distribution widths σ = 0.045 eV and σ = 0.088 eV, respectively. With an increasing number of thermal cycles, a peak, probably connected with structural defects, appears at Tm = 197.7 K. The numerical analysis of this peak was performed by means of a first-order equation containing a Gaussian distribution of traps. The activation energy of the trap level is centred at Em = 0.55 eV. The distribution is perfectly symmetric with a rather small width σ = 0.028 eV. The frequency factor is s = 1.15×10^12 s^-1, of the same order of magnitude as that of its neighbouring peak at Tm = 221.5 K, both probably having the same origin. Furthermore, the work demonstrates that the shape of the glow curves is strongly influenced by the excitation temperature and by the thermal cycles. For that reason, Gaussian distributions of localised states can be confused with exponential distributions if the previous thermal history of the samples is not adequately considered.
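A first-order glow curve of the kind fitted here can be computed in a few lines from the Randall-Wilkins expression, averaged over a Gaussian distribution of trap depths. The trap parameters below are the ones quoted for the 197.7 K NADPO peak, while the heating rate and the temperature ramp are illustrative assumptions:

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

def glow_curve(E0=0.55, sigma_E=0.028, s=1.15e12, beta=0.1):
    """First-order (Randall-Wilkins) TSL intensity for a Gaussian
    distribution of trap depths E (in eV), heating rate beta in K/s.
    Each trap depth contributes rate * remaining-filled-fraction;
    contributions are summed with Gaussian weights."""
    T = np.arange(100.0, 300.0, 0.5)   # linear temperature ramp in K
    dT = T[1] - T[0]
    Es = np.linspace(E0 - 4 * sigma_E, E0 + 4 * sigma_E, 41)
    wts = np.exp(-0.5 * ((Es - E0) / sigma_E) ** 2)
    wts /= wts.sum()
    I = np.zeros_like(T)
    for E, w in zip(Es, wts):
        rate = s * np.exp(-E / (K_B * T))               # escape rate, 1/s
        filled = np.exp(-np.cumsum(rate) * dT / beta)    # remaining trapped charge
        I += w * rate * filled
    return T, I
```

With these parameters the computed peak falls near 200 K, consistent with the reported peak position; broadening the Gaussian width mainly widens the peak, which is why, as noted above, the thermal history must be controlled before distinguishing Gaussian from exponential distributions.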
The primary focus of the present study was to identify early risk factors for infant aggression in a sample of high-risk, low-income teenage mothers and their infants. Despite the amount of research on externalizing behavior, relatively little is known about its development in early childhood. Because chronically aggressive school-age children tend to be those who first display symptoms during the preschool years, an examination of the early manifestations of aggressive behavior and the development of measurements for infants is needed. The present study explored a model in which infant aggression develops largely through the interaction of infants' dispositional characteristics with their caregiving environment. The study addressed the following relations: (1) Maternal psychosocial functioning with reported and observed infant aggression and negative emotionality, (2) reported measurements of infant aggression and negative emotionality with observed measurements of infant aggression and negative emotionality, (3) infant negative emotionality and infant aggression, (4) infant emotion regulation with infant aggression and negative emotionality, (5) the interaction between emotion regulation and negative emotionality in relation to infant aggression, and (6) attachment classification with infant aggression and negative emotionality. Finally, the question of whether these six relations would differ by gender was also addressed. Maternal psychosocial functioning was assessed with self-report measurements. Infant aggression, negative emotionality and emotion regulation were measured during two standardized assessments, the Strange Situation and the Bayley Scales of Infant Development Assessment, and by maternal report with the Infant-Toddler Social and Emotional Assessment. Several interesting findings emerged.
One of the main findings concerned maternal attribution and its possible role as a risk factor for later externalizing behaviors. That is, mothers, especially depressed and stressed mothers, tended to report higher levels of infant aggression and negative emotionality than was noted by more objective observers. This tendency was particularly evident in mothers with girl infants. Another important finding concerned emotion regulation. Even at this early age, clear differences in emotion regulation could be seen. Interestingly, infants with high negative emotionality and low emotion regulation were observed to be the most aggressive. Significant relations also emerged between infant negative emotionality and aggression, and vice versa. Thus, for purposes of treatment and scientific study, the three constructs (emotion regulation, negative emotionality, and aggression) should be considered in combination. Investigating each alone may not prove fruitful in future examinations. Additionally, different emotion regulation behaviors were observed for girl and boy infants. Aggressive girls looked more at the environment, their toys and their mother, whereas aggressive boys looked less at the environment and their mother and explored their toys more, although they looked at the toys less. Although difficult to interpret at this point, it is nonetheless interesting that gender differences in emotion regulatory behaviors exist at this young age. In conclusion, although preliminary, findings from the present study provide intriguing directions for future research. More studies need to be conducted focusing on infant aggression, as well as longitudinal studies following the infants over time.
The length of the vegetation period (VP) plays a central role in the interannual variation of carbon fixation of terrestrial ecosystems. Observational data analysis has indicated that the length of the VP has increased in the northern latitudes in the last decades, mainly due to an advancement of bud burst (BB). This phenomenon has been widely discussed in the context of Global Warming because phenology is correlated with temperature. Analyzing the patterns of spring phenology over the last century in Southern Germany provided two main findings: - The strong advancement of spring phases, especially in the decade before 1999, is not a singular event in the course of the 20th century. Similar trends were also observed in earlier decades. Distinct periods of varying trend behavior could be distinguished for important spring phases. - Marked differences in trend behavior between the early and late spring phases were detected. Early spring phases changed, as regards the magnitude of their negative trends, from strong negative trends between 1931 and 1948 to moderate negative trends between 1948 and 1984 and back to strong negative trends between 1984 and 1999. Late spring phases showed a different behavior: negative trends between 1931 and 1948 are followed by marked positive trends between 1948 and 1984 and then strong negative trends between 1984 and 1999. This marked difference in trend development between early and late spring phases was also found all over Germany for the two periods 1951 to 1984 and 1984 to 1999. The dominating influence of temperature on spring phenology and its modifying effect on autumn phenology was confirmed in this thesis. However, - temperature functions determining spring phenology were not significantly correlated with a global annual CO2 signal, which was taken as a proxy for a Global Warming pattern.
- an index for large-scale regional circulation patterns (the NAO index) could explain only a small part of the observed phenological variability in spring. The observed different trend behavior of early and late spring phases is explained by the differing behavior of mean March and April temperatures. Mean March temperatures have increased on average over the 20th century, accompanied by an increasing variation in the last 50 years. April temperatures, however, decreased between the end of the 1940s and the mid-1980s, followed by a marked warming after the mid-1980s. It can be concluded that the advancement of spring phenology in recent decades is part of multi-decadal fluctuations over the 20th century that vary with the species and the relevant seasonal temperatures. Because of these fluctuations a correlation with an observed Global Warming signal could not be found. On average, all investigated spring phases advanced between 5 and 20 days between 1951 and 1999 for all Natural Regions in Germany. A marked difference between late and early spring phases is due to the above-mentioned differing behavior before and after the mid-1980s. Leaf coloring (LC) was delayed between 1951 and 1984 for all tree species. After 1984, however, LC was advanced. The length of the VP increased between 1951 and 1999 for all considered tree species by an average of ten days throughout Germany. It is predominantly the change in spring phases which contributes to a change in the potentially absorbed radiation. Additionally, it is the late spring species that are relatively more favored by an advanced BB because they can additionally exploit longer days and higher temperatures per day of advancement. To assess the relative change in potentially absorbed radiation among species, changes in both spring and autumn phenology have to be considered, as well as where in the year these changes are located.
For the detection of the marked difference between early and late spring phenology a new time-series construction method was developed. This method allowed the derivation of reliable time series spanning over 100 years and the construction of locally combined time series, increasing the data available for model development. Apart from the analyzed protocolling errors, microclimatic site influences, genetic variation and the observers were identified as sources of uncertainty in phenological observational data. It was concluded that 99% of all phenological observations at a given site vary within approximately 24 days around the parametric mean. This supports the proposed 30-day rule for detecting outliers. New phenology models that predict local BB from daily temperature time series were developed. These models are based on simple interactions between inhibitory and promotory agents that are assumed to control the developmental status of a plant. Apart from the fact that, in general, the new models fitted and predicted the observations better than classical models, the main modeling results were: - The bias of the classical models, i.e. the overestimation of early observations and the underestimation of late observations, could be reduced but not completely removed. - The different favored model structures for each species indicated that photoperiod plays a more dominant role for the late spring phases than for the early spring phases. - Chilling plays only a subordinate role for spring BB compared to the temperatures directly preceding BB.
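As a point of reference for models that predict bud burst from daily temperatures, the classical growing-degree-day baseline (one of the "classical models" such new models are compared against) can be sketched in a few lines. The base temperature and threshold are illustrative assumptions, and this is deliberately not the promotor-inhibitor model of the thesis:

```python
import numpy as np

def budburst_day(temps, t_base=5.0, threshold=120.0):
    """Classical growing-degree-day baseline: bud burst is predicted on the
    first day on which the accumulated daily temperature excess above
    t_base (deg C) reaches the threshold (deg C * days). Returns the
    1-based day of year, or None if the threshold is never reached."""
    gdd = np.cumsum(np.maximum(np.asarray(temps, dtype=float) - t_base, 0.0))
    days = np.nonzero(gdd >= threshold)[0]
    return int(days[0]) + 1 if days.size else None
```

Warmer springs reach the threshold earlier, which is the basic mechanism behind the advancement of BB discussed above; the thesis models refine this by letting inhibitory and promotory agents, rather than a bare temperature sum, drive the developmental state.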
Comparative study of gene expression during the differentiation of white and brown preadipocytes
(2002)
Introduction Mammals have two types of adipose tissue: the lipid-storing white adipose tissue and the brown adipose tissue, characterised by its capacity for non-shivering thermogenesis. White and brown adipocytes have the same origin in mesodermal stem cells. Yet nothing is known so far about the commitment of precursor cells to the white and brown adipose lineage. Several experimental approaches indicate that they originate from the differentiation of two distinct types of precursor cells, white and brown preadipocytes. Based on this hypothesis, the aim of this study was to analyse the gene expression of white and brown preadipocytes in a systematic approach. Experimental approach The white and brown preadipocytes to be compared were obtained from primary cell cultures of preadipocytes from the Djungarian dwarf hamster. Representational difference analysis was used to isolate genes potentially differentially expressed between the two cell types. The cDNA libraries thus obtained were spotted on microarrays for a large-scale gene expression analysis in cultured preadipocytes and adipocytes and in tissue samples. Results 4 genes with higher expression in white preadipocytes (3 members of the complement system and a fatty acid desaturase) and 8 with higher expression in brown preadipocytes were identified. Of the latter, 3 coded for structural proteins (fibronectin, metargidin and α-actinin 4), 3 for proteins involved in transcriptional regulation (necdin, vigilin and the small nuclear ribonucleoprotein polypeptide A), and 2 are of unknown function. Cluster analysis was applied to the gene expression data in order to characterise them and led to the identification of four major typical expression profiles: genes up-regulated during differentiation, genes down-regulated during differentiation, genes higher expressed in white preadipocytes and genes higher expressed in brown preadipocytes.
Conclusion This study shows that white and brown preadipocytes can be distinguished by different expression levels of several genes. These results draw attention to interesting candidate genes for the determination of white and brown preadipocytes (necdin, vigilin and others) and furthermore indicate the potential importance of several functional groups in the differentiation of white and brown preadipocytes, mainly the complement system and the extracellular matrix.
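The four expression profiles identified by the cluster analysis can be illustrated with a toy classifier. The following Python sketch is purely illustrative: the fold-change threshold, the rule order and all input values are assumptions, not the study's actual method, which used cluster analysis of microarray data.

```python
def classify_profile(pre_white, adi_white, pre_brown, adi_brown, fold=2.0):
    """Assign a gene to one of the four typical expression profiles
    based on mean expression in preadipocytes vs. adipocytes and
    white vs. brown lineage (illustrative rules only)."""
    mean_pre = (pre_white + pre_brown) / 2.0
    mean_adi = (adi_white + adi_brown) / 2.0
    if mean_adi >= fold * mean_pre:
        return "up-regulated during differentiation"
    if mean_pre >= fold * mean_adi:
        return "down-regulated during differentiation"
    if pre_white >= fold * pre_brown:
        return "higher in white preadipocytes"
    if pre_brown >= fold * pre_white:
        return "higher in brown preadipocytes"
    return "unclassified"
```

For example, a gene with equal, low preadipocyte expression and high adipocyte expression in both lineages would fall into the first class.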
Today, analytical chemistry no longer consists only of large measuring devices and methods that are time-consuming and expensive, that can be handled only by qualified staff, and whose results can likewise be evaluated only by such staff. This technique, referred to in the following as the 'classic analytical measuring technique', usually also requires specially equipped rooms and often a relatively large quantity of the test compounds, which must be specially prepared. Besides this classic analytical measuring technique, which is limited to certain substance groups and applications, a new measuring technique has gained acceptance, particularly within the last years, which can often be used by laypersons as well. This new measuring technique often requires very little equipment. The needed sample volumes are also small and no special sample preparation is required. In addition, the new measuring instruments are simple to handle: they are cheap both in production and in use, and they usually even permit continuous recording of measurements. Many of these new measuring instruments are based on research in the field of biosensors during the last 40 years. Since Clark and Lyons were able, in 1962, to measure glucose with a simple oxygen electrode complemented by an enzyme, the development of this new measuring technique could no longer be held back. Biosensors, special sensing devices that consist of a combination of a biological component (permitting specific recognition of the analyte without prior purification of the sample) and a physical transducer (converting the primary physicochemical effect into an electronically measurable signal), conquered the market. In the context of this thesis, different tyrosinase sensors were developed that fulfil various requirements, depending on the origin and features of the tyrosinase used.
One of the tyrosinase sensors, for example, was used for the quantification of phenolic compounds in river and sea water, and the results correlated very well with the corresponding DIN test for the determination of phenolic compounds. Another tyrosinase sensor developed here showed a very high sensitivity for catecholamines, substances that are of special importance in medical diagnostics. In addition, the investigations of two different tyrosinases, also carried out in the context of this thesis, have shown that a particular tyrosinase (tyrosinase from Streptomyces antibioticus) is a better choice than the tyrosinase from Agaricus bisporus, which has been used in biosensor research until now, if one wants to develop even more sensitive tyrosinase sensors in the future. Furthermore, first successes were achieved in the molecular biological field, namely the production of tyrosinase mutants with specific, deliberately designed features. These successes can be used to develop a new generation of tyrosinase sensors, in which the tyrosinase can be bound in a directed orientation both to the corresponding physical transducer and to another enzyme. This is expected to minimise the distance that the substance to be determined (or its product) must otherwise cover, and should finally result in a clearly visible increase in the sensitivity of the biosensor.
This MA thesis examines novels by Native American authors of the 20th century with regard to their representation of conflicts between the indigenous population of North America and the dominant Christian religion of mainstream society. Several major points can be followed throughout the century, which have been presented repeatedly and discussed from various perspectives. Historical conflicts of colonization and Christianization, as well as the perpetual question of Native American Christians -- 'How can you go to a church that killed so many Indians?' [Alexie, Reservation Blues] -- are debated in these novels and analyzed in this paper. Furthermore, I have tried to position and classify the works according to their representation of these problems within literary history. Following Charles Larson's chronological and thematic examination of American Indian Fiction, the categories rejection, (syncretic) adaptation, and postmodern-ironic revision are introduced to describe the various forms of representation. On the basis of five main examples, we can observe an evolution of contemporary Native American literature, which has liberated itself from the narrow definition of the 1960s and 1970s in favor of a broader and more varied approach. In so doing, and by means of intercultural and intertextual referencing, postmodern irony, and a new Indian self-confidence, it has also taken a new position towards the religion of the former colonizer.
Our everyday experience is accompanied by various acoustic noises and music. Usually noise is a nuisance in communication and destroys order in a system. Similar optical effects are known: heavy snow or rain reduces visibility. In contrast to these situations, noisy stimuli can also play a positive, constructive role; for example, a driver can be more concentrated in the presence of quiet music. Transmission processes in neural systems are of special interest from this point of view: excitation or information is transmitted only if a signal exceeds a threshold. Dr. Alexei Zaikin from the University of Potsdam studies noise-induced phenomena in nonlinear systems from a theoretical point of view. In particular, he is interested in processes in which noise influences the behaviour of a system twice: if the intensity of the noise exceeds a threshold, it induces a regular structure that is then synchronized with the behaviour of neighbouring elements. To obtain such a system with a threshold, a second noise source is needed. Dr. Zaikin has analyzed further examples of such doubly stochastic effects and developed a concept of these new phenomena. These theoretical findings are important, because such processes can play a crucial role in neurophysics, technical communication devices and the life sciences.
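The constructive role of noise in threshold systems can be illustrated with a minimal simulation: a subthreshold periodic signal alone never crosses the threshold, but adding noise lets crossings occur. The following Python sketch is a generic stochastic-resonance toy model, not Dr. Zaikin's doubly stochastic system; all parameter values are invented for illustration.

```python
import math
import random

def crossings(noise_amp, n=2000, threshold=1.0, signal_amp=0.8, seed=1):
    """Count how often a subthreshold periodic signal plus Gaussian
    noise exceeds the threshold. With noise_amp = 0 the signal
    (amplitude 0.8 < threshold 1.0) never crosses."""
    rng = random.Random(seed)  # seeded for reproducibility
    count = 0
    for i in range(n):
        s = signal_amp * math.sin(2.0 * math.pi * i / 100.0)
        x = s + noise_amp * rng.gauss(0.0, 1.0)
        if x > threshold:
            count += 1
    return count
```

Without noise the count is zero; moderate noise enables transmission of the signal through the threshold element, which is the essence of the constructive noise effects discussed above.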
In the honey bee, responsiveness to sucrose correlates with many behavioural parameters such as age of first foraging, foraging role and learning. Sucrose responsiveness can be measured using the proboscis extension response (PER) by applying sucrose solutions of increasing concentrations to the antenna of a bee. We tested whether the biogenic amines octopamine, tyramine and dopamine, and the dopamine receptor agonist 2-amino-6,7-dihydroxy-1,2,3,4-tetrahydronaphthalene (6,7-ADTN) can modulate sucrose responsiveness. The compounds were either injected into the thorax or fed in sucrose solution to compare different methods of application. Injection and feeding of tyramine or octopamine significantly increased sucrose responsiveness. Dopamine decreased sucrose responsiveness when injected into the thorax. Feeding of dopamine had no effect. Injection of 6,7-ADTN into the thorax and feeding of 6,7-ADTN reduced sucrose responsiveness significantly. These data demonstrate that sucrose responsiveness in honey bees can be modulated by biogenic amines, which has far reaching consequences for other types of behaviour in this insect. (C) 2002 Elsevier Science B.V. All rights reserved.
Highly collimated, high-velocity streams of hot plasma – the jets – are observed as a general phenomenon in a variety of astrophysical objects spanning a wide range of sizes and energy outputs. Known jet sources are protostellar objects (T Tauri stars, embedded IR sources), galactic high-energy sources ("microquasars"), and active galactic nuclei (extragalactic radio sources and quasars). Within the last two decades our knowledge of the processes involved in astrophysical jet formation has condensed into a kind of standard model. This is the scenario of a magnetohydrodynamically accelerated and collimated jet stream launched from the innermost part of an accretion disk close to the central object. Traditionally, the problem of jet formation is divided into two categories. One is the question of how to collimate and accelerate an uncollimated, low-velocity disk wind into a jet. The second is the question of how to initiate that outflow from a disk, i.e. how to turn accretion of matter into ejection as a disk wind. My own work is mainly related to the first question, the collimation and acceleration process. Due to the complexity of both the physical processes believed to be responsible for jet launching and the spatial configuration of the physical components of the jet source, the process of jet formation is not yet completely understood. On the theoretical side, there has been substantial advancement during the last decade from purely stationary models to time-dependent simulations, driven by the vast increase in computer power. Observers, on the other hand, do not yet have the instruments at hand to spatially resolve the very origin of the jet. It can be expected that the next years will also yield substantial improvement on both tracks of astrophysical research.
Three-dimensional magnetohydrodynamic simulations will improve our understanding of the jet-disk interrelation and the time-dependent character of jet formation, the generation of the magnetic field in the jet source, and the interaction of the jet with the ambient medium. Another step will be the combination of radiation transfer computations and magnetohydrodynamic simulations, providing a direct link to the observations. At the same time, a new generation of telescopes (VLT, NGST) in combination with new instrumental techniques (IR interferometry) will lead to a "quantum leap" in jet observation, as the resolution will then be sufficient to zoom into the innermost region of jet formation.
The external dispersal ("epizoochory") of vascular plant diaspores (seeds and fruits) by roe deer and wild boar, i.e. the most common wild large mammals with a large home range in central Europe, was investigated in a 6.5-km² forest area in NE Germany dominated by mesic deciduous forests. The study involved brushing out the diaspores from the coats and hooves of 25 shot roe deer and nine wild boar. The results were compared with the forest vegetation of the study area. Whilst wild boar transported large amounts of various diaspores in the coat, the significance of roe deer for epizoochory was low due to their sleek fur and different behaviour compared to wild boar. Altogether, 55 vascular plant species were transported externally. Since only a limited number of seeds came from woodland habitats, the open landscape was at least as important as a source of attached seeds as the forest vegetation. Thus, most plant species occurring in the studied forest area, especially characteristic woodland herbs, showed no adaptations to epizoochorous dispersal, although being very abundant in the herb layer. We conclude that hoofed game play a particular role concerning the dispersal of ruderal and grassland species in the agricultural landscape of central Europe. However, the actual spread of some herb species in forests of northern Germany, e.g. Agrostis capillaris, Brachypodium sylvaticum, Deschampsia flexuosa, Galium aparine and Urtica dioica, may be mainly facilitated by wild ungulates. Though dispersal by large mammals is an important mechanism for long-distance dispersal of plants in general, our results suggest that most of the characteristic herb species of mesic deciduous forests have only low epizoochorous dispersal potentials. The implications for nature conservation and silviculture are discussed.
The politico-economic situation in Germany : chances for changes in resource and energy economics
(2002)
Contents:
Regional Management, Land Use and Energy Production
- Biophysical View
- First Hypothesis
- International and Interregional Cooperation
- Second Hypothesis
- Partnership with Nature
Sustainability and the Agricultural Sector
- Traditional Farming
- Mono-cultural Bio-industry
- Liquid Manure Problems
- Clean Drinking Water
- Integrated Agro-industrial System
- Ecological Farming
- Ecotones and Bio-manipulation
Regional Economic and Agricultural Policy
- New Roles for the Agricultural Sector
Jets are highly collimated flows of matter. They are present in a large variety of astrophysical sources: young stars, stellar-mass black holes (microquasars), galaxies with an active nucleus (AGN) and presumably also intense flashes of gamma-rays. In particular, the jets of microquasars, powered by accretion disks, are probably small-scale versions of the outflows from AGN. Besides observations of astrophysical jet sources, theoretical considerations have also shown that magnetic fields play an important role in jet formation, acceleration and collimation. Collimated jets seem to be systematically associated with the presence of an accretion disk around a star or a collapsed object. If the central object is a black hole, the surrounding accretion disk is the only possible location for the generation of a magnetic field. We are interested in the formation process of the highly relativistic jets observed from microquasars and AGN. We theoretically investigate the jet collimation region, whose physical dimensions are extremely small even compared to the spatial resolution of radio telescopes. Thus, for most of the jet sources, global theoretical models are, at the moment, the only possibility to gain information about the physical processes in the innermost jet region. For the first time, we determine the global two-dimensional field structure of stationary, axisymmetric, relativistic, strongly magnetized (force-free) jets collimating into an asymptotically cylindrical jet (taken as a boundary condition) and anchored in a differentially rotating accretion disk. This approach allows for a direct connection between the accretion disk and the asymptotic collimated jet. Therefore, assuming that the foot points of the field lines rotate with Keplerian speed, we are able to achieve a direct scaling of the jet magnetosphere in terms of the size of the central object. We find close compatibility between the results of our model and radio observations of the innermost jet of the galaxy M87.
We also calculate the X-ray emission in the energy range 0.2-10.1 keV from a relativistic microquasar jet close to its source of 5 solar masses. To do so, we apply the jet flow parameters (densities, velocities and temperatures of each volume element along the collimating jet) derived in the literature from the relativistic magnetohydrodynamic equations. We obtain theoretical thermal X-ray spectra of the innermost jet as a composition of the spectral contributions of the single volume elements along the jet. Since relativistic effects such as Doppler shift and Doppler boosting due to the motion of the jets toward us might be important, we investigate how the spectra are affected by them, considering different inclinations of the line of sight to the jet axis. Emission lines of highly ionized iron are clearly visible in our spectra, and are probably also observed in the Galactic microquasars GRS 1915+105 and XTE J1748-288. The Doppler shift of the emission lines is always evident. Due to the chosen geometry of the magnetohydrodynamic jet, the inner X-ray emitting part is not yet collimated. Hence, depending on the viewing angle, Doppler boosting does not play a major role in the total spectra. This is the first time that X-ray spectra have been calculated from the numerical solution of a magnetohydrodynamic jet.
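The Doppler shift of the emission lines follows from the standard relativistic Doppler factor for a moving emitter. A minimal Python sketch (the speed, viewing angle and rest-frame line energy below are illustrative values, not the jet parameters used in the thesis):

```python
import math

def doppler_factor(beta, theta_deg):
    """Relativistic Doppler factor delta = 1 / (Gamma (1 - beta cos(theta)))
    for a jet element moving with speed beta (in units of c) at angle
    theta to the line of sight."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    cos_t = math.cos(math.radians(theta_deg))
    return 1.0 / (gamma * (1.0 - beta * cos_t))

def observed_energy(e_emit_kev, beta, theta_deg):
    """Observed line energy: blueshifted for an approaching element
    (delta > 1), redshifted for a receding one (delta < 1)."""
    return doppler_factor(beta, theta_deg) * e_emit_kev
```

For a mildly inclined, fast jet element an iron line near 6.7 keV (rest frame) would appear shifted by the factor delta, which is the effect discussed in the spectra above.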
Combined structural and magnetotelluric investigation across the West Fault Zone in northern Chile
(2002)
The characterisation of the internal architecture of large-scale fault zones is usually restricted to the outcrop-based investigation of fault-related structural damage at the Earth's surface. A method to obtain information on the downward continuation of a fault is to image the subsurface electrical conductivity structure. This work deals with such a combined investigation of a segment of the West Fault, which is itself part of the more than 2000 km long trench-linked Precordilleran Fault System in the northern Chilean Andes. Activity on the fault system lasted from Eocene to Quaternary times. In the working area (22°04'S, 68°53'W), the West Fault exhibits a clearly defined surface trace with a constant strike over many tens of kilometers. The outcrop conditions and morphology of the study area make it ideally suited for a combination of structural geology investigations and magnetotelluric (MT) / geomagnetic depth sounding (GDS) experiments. The aim was to achieve an understanding of the correlation of the two methods and to obtain a comprehensive view of the West Fault's internal architecture. Fault-related brittle damage elements (minor faults and slip surfaces with or without striation) record prevalent strike-slip deformation on subvertically oriented shear planes. Dextral and sinistral slip events occurred within the fault zone and indicate reactivation of the fault system. The youngest deformation increments mapped in the working area are extensional, and the findings suggest a different orientation of the extension axes on either side of the fault. Damage element density increases with approach to the fault trace and marks an approximately 1000 m wide damage zone around the fault. A region of profound alteration and comminution of rocks, about 400 m wide, is centered in the damage zone. Damage elements in this central part predominantly dip steeply towards the east (70-80°).
Within the same study area, the electrical conductivity image of the subsurface was measured along a 4 km long MT/GDS profile. This main profile trends perpendicular to the West Fault trace. The MT stations of the central 2 km were 100 m apart from each other. A second profile with 300 m site spacing and 9 recording sites crosses the fault a few kilometers away from the main study area. Data were recorded in the frequency range from 1000 Hz to 0.001 Hz with four real-time S.P.A.M. MkIII instruments. The GDS data reveal the fault zone on both profiles at frequencies above 1 Hz. Induction arrows indicate a zone of enhanced conductivity several hundred meters wide, which aligns along the West Fault strike and lies mainly on the eastern side of the surface trace. A dimensionality analysis of the MT data justifies a two-dimensional model approximation of the data for the frequency range from 1000 Hz to 0.1 Hz. For this frequency range, a regional geoelectric strike parallel to the West Fault trace could be recovered. The data subset allows for a resolution of the conductivity structure of the uppermost crust down to at least 5 km. Modelling of the MT data is based on an inversion algorithm developed by Mackie et al. (1997). The features of the resulting resistivity models are tested for their robustness using empirical sensitivity studies. This involves variation of the properties (geometry, conductivity) of the anomalies, the subsequent calculation of forward or constrained inversion models, and a check of the consistency of the obtained model results with the data. A fault zone conductor is resolved on both MT profiles. The zones of enhanced conductivity are located to the east of the West Fault surface trace. On the dense MT profile, the conductive zone is confined to a width of about 300 m and the anomaly exhibits a steep dip towards the east (about 70°).
Modelling implies that the conductivity increase reaches to a depth of at least 1100 m and indicates a depth extent of less than 2000 m. Further conductive features are imaged, but their geometry is less well constrained. The fault zone conductors of both MT profiles coincide in position with the alteration zone. For the dense profile, the dip of the conductive anomaly and the dip of the damage elements in the central part of the fault zone correlate. This suggests that the electrical conductivity enhancement is causally related to a mesh of minor faults and fractures, which is a likely pathway for fluids. The interconnected rock porosity that is necessary to explain the observed conductivity enhancement by means of fluids is estimated on the basis of the salinity of several ground water samples (Archie's law). The deeper the source of the water sample, the more saline it is, due to longer exposure to fluid-rock interaction, and the lower is the fluid's resistivity. A rock porosity in the range of 0.8% - 4% would be required at a depth of 200 m. This indicates that fluids penetrating the damaged fault zone from close to the surface are sufficient to explain the conductivity anomalies. This is also supported by the preserved geochemical signature of rock samples in the alteration zone. Late-stage alteration processes were active in a low temperature regime (<95°C) and the involvement of ascending brines from greater depth is not indicated. The limited depth extent of the fault zone conductors is a likely result of sealing and cementation of the fault fracture mesh due to dissolution and precipitation of minerals at greater depth and increased temperature. Comparison of the results for the apparently inactive West Fault with published studies on the electrical conductivity structure of the currently active San Andreas Fault suggests that the depth extent and conductivity of the fault zone conductor may be correlated with fault activity.
Ongoing deformation will keep the fault/fracture mesh permeable for fluids and impede cementation and sealing of fluid pathways.
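The porosity estimate above rests on Archie's law, which relates the bulk resistivity of a fluid-saturated rock to the fluid resistivity and the interconnected porosity. A minimal Python sketch (the cementation exponent m, the tortuosity factor a and the input resistivities are illustrative assumptions, not the values derived in the thesis):

```python
def archie_porosity(rho_bulk, rho_fluid, m=2.0, a=1.0):
    """Invert Archie's law, rho_bulk = a * rho_fluid * phi**(-m),
    for the interconnected porosity phi (as a fraction):
    phi = (a * rho_fluid / rho_bulk) ** (1 / m)."""
    return (a * rho_fluid / rho_bulk) ** (1.0 / m)
```

With a bulk resistivity of 100 Ohm-m and a fluid resistivity of 1 Ohm-m this yields a porosity of 10%; more saline (less resistive) fluids require correspondingly lower porosities to explain the same bulk conductivity, which is the logic behind the 0.8% - 4% estimate quoted above.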
Motivated by recent proposals on the experimental detectability of quantum gravity effects, the present thesis investigates assumptions and methods which might be used for the prediction of such effects within the framework of loop quantum gravity. To this end, a scalar field coupled to gravity is considered as a model system. Starting from certain assumptions about the dynamics of the coupled gravity-matter system, a quantum theory for the scalar field is proposed. Then, assuming that the gravitational field is in a semiclassical state, a "QFT on curved space-time limit" of this theory is defined. In contrast to ordinary quantum field theory on curved space-time however, in this limit the theory describes a quantum scalar field propagating on a (classical) random lattice. Then, methods to obtain the low energy limit of such a lattice theory, especially regarding the resulting modified dispersion relations, are discussed and applied to simple model systems. Finally, under certain simplifying assumptions, using the methods developed before as well as a specific class of semiclassical states, corrections to the dispersion relations for the scalar and the electromagnetic field are computed within the framework of loop quantum gravity. These calculations are of preliminary character, as many assumptions enter whose validity remains to be studied more thoroughly. However they exemplify the problems and possibilities of making predictions based on loop quantum gravity that are in principle testable by experiment.
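Corrections of the kind computed in this thesis are commonly expressed as modified dispersion relations. A generic, purely illustrative parameterization (the coefficient alpha and the Planck-mass suppression below are conventional assumptions from the quantum-gravity phenomenology literature, not the specific result of this thesis) is:

```latex
\omega^2(k) \;=\; k^2 + m^2 \;+\; \alpha\,\frac{k^3}{M_{\mathrm{Pl}}} \;+\; \mathcal{O}\!\left(\frac{k^4}{M_{\mathrm{Pl}}^2}\right)
```

Terms suppressed by powers of the Planck mass are what make such corrections tiny yet, for propagation over cosmological distances, in principle testable by experiment.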
In order to face the rapidly increasing need for computational resources of various scientific and engineering applications, one has to think of new ways to make more efficient use of the world's current computational resources. In this respect, the growing speed of wide area networks has made a new kind of distributed computing possible: metacomputing or (distributed) Grid computing. This is a rather new and uncharted field in computational science. The rapidly increasing speed of networks even outperforms the average increase in processor speed: processor speeds double on average every 18 months, whereas network bandwidths double every 9 months. Due to this development of local and wide area networks, Grid computing will certainly play a key role in the future of parallel computing. This type of distributed computing, however, differs from traditional parallel computing in many ways, since it has to deal with many problems not occurring in classical parallel computing, for example heterogeneity, authentication and slow networks, to mention only a few. Some of these problems, e.g. the allocation of distributed resources along with the provision of information about these resources to the application, have already been addressed by the Globus software. Unfortunately, as far as we know, hardly any application or middleware software takes advantage of this information, since most parallelizing algorithms for finite differencing codes are implicitly designed for execution on a single supercomputer or cluster. We show that although it is possible to apply classical parallelizing algorithms in a Grid environment, in most cases the observed efficiency of the executed code is very poor. In this work we close this gap.
In our thesis, we will:
- show that an execution of classical parallel codes in Grid environments is possible but very slow,
- analyze this situation of bad performance, nail down bottlenecks in communication, and remove unnecessary overhead and other reasons for low performance,
- develop new and advanced parallelization algorithms that are aware of a Grid environment, in order to generalize the traditional parallelization schemes,
- implement and test these new methods, and replace and compare them with the classical ones,
- introduce dynamic strategies that automatically adapt the running code to the nature of the underlying Grid environment.
The higher the performance one can achieve for a single application by manual tuning for a Grid environment, the lower the chance that those changes are widely applicable to other programs. In our analysis as well as in our implementation we tried to keep the balance between high performance and generality. None of our changes directly affects code on the application level, which makes our algorithms applicable to a whole class of real-world applications. The implementation of our work is done within the Cactus framework using the Globus toolkit, since we think that these are the most reliable and advanced programming frameworks for supporting computations in Grid environments. On the other hand, however, we tried to be as general as possible, i.e. all methods and algorithms discussed in this thesis are independent of Cactus or Globus.
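One basic ingredient of a Grid-aware parallelization scheme is load balancing across heterogeneous machines. The following Python sketch is not the thesis's actual algorithm; it only illustrates, with invented relative machine speeds, how grid points of a finite-differencing domain could be split proportionally to processor speed so that faster machines receive larger partitions:

```python
def partition(n_points, speeds):
    """Split n_points among processors proportionally to their relative
    speeds; leftover points go to the processors with the largest
    fractional share so the total is preserved."""
    total = sum(speeds)
    shares = [n_points * s / total for s in speeds]
    counts = [int(x) for x in shares]
    remainder = n_points - sum(counts)
    # hand leftover points to processors with the largest fractional part
    order = sorted(range(len(speeds)),
                   key=lambda i: shares[i] - counts[i], reverse=True)
    for i in order[:remainder]:
        counts[i] += 1
    return counts
```

A uniform split, by contrast, would make every machine wait for the slowest one at each synchronization point, which is one source of the poor Grid efficiency described above.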
In recent years, there has been a dramatic increase in available compute capacities. However, these "Grid resources" are rarely accessible in a continuous stream, but rather appear scattered across various machine types, platforms and operating systems, which are coupled by networks of fluctuating bandwidth. It becomes increasingly difficult for scientists to exploit available resources for their applications. We believe that intelligent, self-governing applications should be able to select resources in a dynamic and heterogeneous environment: migrating applications determine a new resource when old capacities are used up; spawning simulations launch algorithms on external machines to speed up the main execution; applications are restarted as soon as a failure is detected. All these actions can be taken without human interaction. A distributed compute environment possesses an intrinsic unreliability. Any application that interacts with such an environment must be able to cope with its failing components: deteriorating networks, crashing machines, failing software. We construct a reliable service infrastructure by endowing a service environment with a peer-to-peer topology. This "Grid Peer Services" infrastructure accommodates high-level services like migration and spawning, as well as fundamental services for application launching, file transfer and resource selection. It utilizes existing Grid technology wherever possible to accomplish its tasks. An Application Information Server acts as a generic information registry for all participants in a service environment. The service environment that we developed allows applications, for example, to send a relocation request to a migration server. The server selects a new computer based on the transmitted resource requirements. It transfers the application's checkpoint and binary to the new host and resumes the simulation.
Although the Grid's underlying resource substrate is not continuous, we achieve persistent computations on Grids by relocating the application. We show with our real-world examples that a traditional genome analysis program can be easily modified to perform self-determined migrations in this service environment.
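The resource selection step of such a migration server can be sketched as a simple requirement filter. This Python toy is hypothetical: the field names and the speed-based ranking are invented for illustration and do not reflect the actual Grid Peer Services interface.

```python
def select_host(hosts, min_mem_gb, min_cpus):
    """Mimic a migration server's resource selection: filter hosts by
    the application's transmitted requirements, then pick the fastest
    remaining candidate (or None if no host qualifies)."""
    candidates = [h for h in hosts
                  if h["mem_gb"] >= min_mem_gb and h["cpus"] >= min_cpus]
    if not candidates:
        return None
    return max(candidates, key=lambda h: h["speed"])
```

In the real infrastructure this step would query the Application Information Server for live host data before the checkpoint and binary are transferred.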
In this thesis, I investigated the factors influencing the growth and vertical distribution of planktonic algae in extremely acidic mining lakes (pH 2-3). In the focal study site, Lake 111 (pH 2.7; Lusatia, Germany), the chrysophyte, Ochromonas sp., dominates in the upper water strata and the chlorophyte, Chlamydomonas sp., in the deeper strata, forming a pronounced deep chlorophyll maximum (DCM). Inorganic carbon (IC) limitation influenced the phototrophic growth of Chlamydomonas sp. in the upper water strata. Conversely, in deeper strata, light limited its phototrophic growth. When compared with published data for algae from neutral lakes, Chlamydomonas sp. from Lake 111 exhibited a lower maximum growth rate, an enhanced compensation point and higher dark respiration rates, suggesting higher metabolic costs due to the extreme physico-chemical conditions. The photosynthetic performance of Chlamydomonas sp. decreased in high-light-adapted cells when IC was limiting. In addition, the minimal phosphorus (P) cell quota was suggestive of a higher P requirement under IC limitation. Subsequently, it was shown that Chlamydomonas sp. was a mixotroph, able to enhance its growth rate by taking up dissolved organic carbon (DOC) via osmotrophy. Therefore, it could survive in deeper water strata where DOC concentrations were higher and light was limiting. However, neither IC limitation, P availability nor in situ DOC concentrations (bottom-up control) could fully explain the vertical distribution of Chlamydomonas sp. in Lake 111. Conversely, when a novel approach was adopted, the grazing influence of the phagotrophic phototroph, Ochromonas sp., was found to exert top-down control on its prey (Chlamydomonas sp.), reducing prey abundance in the upper water strata. This, coupled with the fact that Chlamydomonas sp. uses DOC for growth, leads to a pronounced accumulation of Chlamydomonas sp. cells at depth; an apparent DCM.
Therefore, grazing appears to be the main factor influencing the vertical distribution of algae observed in Lake 111. The knowledge gained from this thesis provides information essential for predicting the effect of strategies to neutralize the acidic mining lakes on the food-web.
Encounters with neighbours
(2003)
In this work, different aspects and applications of recurrence plot analysis are presented. First, a comprehensive overview of recurrence plots and their quantification possibilities is given. New measures of complexity are defined using geometrical structures of recurrence plots. These measures are capable of detecting chaos-chaos transitions in processes. Furthermore, a bivariate extension to cross recurrence plots is studied. Cross recurrence plots exhibit characteristic structures which can be used for the study of differences between two processes or for the alignment of, and search for matching sequences in, two data series. The selected applications of the introduced techniques to various kinds of data demonstrate their utility. Recurrence plot analysis can be adapted to the specific problem and thus opens a wide field of potential applications. Regarding the quantification of recurrence plots, chaos-chaos transitions can be found in heart rate variability data before the onset of life-threatening cardiac arrhythmias. This may be of importance for the therapy of such cardiac arrhythmias. The quantification of recurrence plots allows the study of transitions in the brain during cognitive experiments on the basis of single trials. Traditionally, finding these transitions requires averaging over a collection of single trials. Using cross recurrence plots, the existence of an El Niño/Southern Oscillation-like oscillation is traced in northwestern Argentina 34,000 yrs ago. In further applications to geological data, cross recurrence plots are used for the time scale alignment of different borehole data and for dating a geological profile with a reference data set. Additional examples from molecular biology and speech recognition emphasize the suitability of cross recurrence plots.
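A recurrence plot is built from pairwise distances of states: a point (i, j) is marked whenever the states at times i and j are closer than a threshold eps. The following Python sketch works on scalar time series with a fixed threshold and omits the phase-space embedding a full analysis would normally include; it computes a recurrence matrix, its cross-recurrence variant and the simplest quantification measure, the recurrence rate:

```python
def recurrence_matrix(x, eps):
    """Binary recurrence matrix: R[i][j] = 1 if states i and j of the
    series x are closer than eps."""
    n = len(x)
    return [[1 if abs(x[i] - x[j]) < eps else 0 for j in range(n)]
            for i in range(n)]

def cross_recurrence_matrix(x, y, eps):
    """Bivariate extension: CR[i][j] = 1 if x[i] is closer than eps
    to y[j]; its structures reveal matching sequences of two series."""
    return [[1 if abs(xi - yj) < eps else 0 for yj in y] for xi in x]

def recurrence_rate(R):
    """Fraction of recurrent points, the simplest quantification measure."""
    n = len(R)
    return sum(map(sum, R)) / (n * n)
```

The diagonal of a recurrence matrix is always recurrent (each state is close to itself), while diagonal line structures off the main diagonal are what the complexity measures described above quantify.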
Late Miocene to Quaternary volcanic rocks from the frontal arc to the back-arc region of the Central Volcanic Zone in the Andes show a wide range of delta 11B values (+4 to -7 ‰) and boron concentrations (6 to 60 ppm). Positive delta 11B values of samples from the volcanic front indicate involvement of a 11B-enriched slab component, most likely derived from altered oceanic crust, despite the thick Andean continental lithosphere, and rule out a pure crust-mantle origin for these lavas. The delta 11B values and B concentrations in the lavas decrease systematically with increasing depth of the Wadati-Benioff Zone. This across-arc variation in delta 11B values and decreasing B/Nb ratios from the arc to the back-arc samples are attributed to the combined effects of B-isotope fractionation during progressive dehydration in the slab and a steady decrease in slab-fluid flux towards the back arc, coupled with a relatively constant degree of crustal contamination as indicated by similar Sr, Nd and Pb isotope ratios in all samples. Modelling of fluid-mineral B-isotope fractionation as a function of temperature fits the across-arc variation in delta 11B and we conclude that the B-isotope composition of arc volcanics is dominated by changing delta 11B composition of B-rich slab-fluids during progressive dehydration. Crustal contamination becomes more important towards the back-arc due to the decrease in slab-derived fluid flux. Because of this isotope fractionation effect, high delta 11B signatures in volcanic arcs need not necessarily reflect differences in the initial composition of the subducting slab. Three-component mixing calculations for slab-derived fluid, the mantle wedge and the continental crust based on B, Sr and Nd isotope data indicate that the slab-fluid component dominates the B composition of the fertile mantle and that the primary arc magmas were contaminated by an average addition of 15 to 30 % crustal material.
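The mixing calculations referred to above combine end-member concentrations and isotope compositions weighted by mass fraction. A two-component Python sketch of such a mass-balance mixture (all concentrations and delta values below are invented for illustration, not the paper's slab-fluid, mantle or crust end-member values):

```python
def mix_delta(f1, c1, d1, c2, d2):
    """Delta value of a two-component mixture: f1 is the mass fraction
    of component 1, c the element concentration (e.g. B in ppm) and
    d the delta value (per mil) of each end member. The element-rich
    end member dominates the mixed isotope signature."""
    num = f1 * c1 * d1 + (1.0 - f1) * c2 * d2
    den = f1 * c1 + (1.0 - f1) * c2
    return num / den
```

Extending this balance to three components (slab fluid, mantle wedge, crust) and to several tracers (B, Sr, Nd) gives the kind of mixing systematics used to estimate the 15 to 30% crustal addition quoted above; note how a B-rich fluid end member pulls the mixture toward its delta 11B even at small mass fractions.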