Stellar magnetic fields, a crucial component of star formation and evolution, evade direct observation, at least with current and near-future instruments. However, to determine whether magnetic fields are generated by a dynamo process or are relics of the formation process, and whether they behave like the Sun's field or very differently, it is essential to investigate their structure and temporal evolution. Fortunately, nature provides the possibility of indirectly observing surface topologies on distant stars by means of the Doppler shift and polarization of light, though not without challenges. Based on these effects, the so-called Zeeman-Doppler Imaging (ZDI) technique is a powerful method for retrieving magnetic fields of rapidly rotating stars from spectropolarimetric observations in terms of Stokes profiles. In recent years, a large number of stellar magnetic field distributions have been reconstructed by ZDI. However, implementations of this method often rely on many approximations because, as an inversion method, it entails enormous computational requirements. The aim of this thesis is to develop methods for a ZDI designed to invert time-resolved spectropolarimetric data of active late-type stars and to account for the complex, small-scale magnetic fields expected on these stars. In order to reliably reconstruct the detailed field orientation and strength, the inversion method is designed to use all four Stokes components. Furthermore, it is based on fully polarized radiative transfer calculations to account for the intricate interplay between temperature and magnetic field. Finally, the application of a newly developed ZDI code to Stokes I and V observations of II Pegasi (short: II Peg) was intended to deliver the first magnetic surface maps for this highly active star.
To cope with the high computational burden of a radiative-transfer-based ZDI, we developed a novel approximation method to speed up the inversion process. It is based on Principal Component Analysis (PCA) and Artificial Neural Networks; the latter approximate the functional mapping between atmospheric parameters and the corresponding local Stokes profiles. Inverse problems such as this one are potentially ill-posed and require regularization. We propose a new regularization scheme, which implements a local entropy function accounting for the peculiarities of reconstructing localized magnetic fields. To deal with the relatively large noise that is always present in polarimetric data, we developed a PCA-based multi-line denoising technique. In contrast to other multi-line techniques, which extract a sort of mean profile from a large number of spectral lines, this method allows individual spectral lines to be extracted, and thus permits an inversion on the basis of specific lines. All these methods are incorporated in our newly developed ZDI code iMap, which is based on a conjugate gradient method. An in-depth validation of our new synthesis method demonstrates the reliability and accuracy of this approach, as well as a gain in computation time of almost three orders of magnitude relative to conventional radiative transfer calculations. We investigated the influence of the different Stokes components (IV / IVQU) on the ability to reconstruct a known synthetic field configuration. In doing so we validated the capability of our inversion code and assessed the limitations of magnetic field inversions in general. In a first application to II Peg, a K2 IV subgiant, we derived temperature and magnetic field surface distributions from spectropolarimetric data obtained in 2004 and 2007. This yields, for the first time, the simultaneous temporal evolution of the surface temperature and magnetic field distribution on II Peg.
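The PCA-based denoising idea can be illustrated with a minimal sketch (this is not the iMap implementation; the array shapes and the number of retained components are illustrative assumptions): a set of observed line profiles is projected onto its leading principal components, and the trailing components, assumed to carry mostly noise, are discarded.

```python
import numpy as np

def pca_denoise(profiles, n_components):
    """Denoise a set of spectral line profiles by truncated PCA.

    profiles: 2-D array, one observed line profile per row.
    n_components: number of leading principal components to keep;
    the discarded components are assumed to carry mostly noise.
    """
    mean = profiles.mean(axis=0)
    centered = profiles - mean
    # SVD of the centered data matrix yields the principal components.
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    # Zero out all but the leading singular values, then reconstruct.
    s_trunc = np.zeros_like(s)
    s_trunc[:n_components] = s[:n_components]
    return mean + (u * s_trunc) @ vt
```

Because the signal common to many lines lives in a few components while the noise spreads over all of them, the truncated reconstruction lies closer to the noise-free profiles than the raw data.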
Xenophobia
(2008)
Workplace-related anxieties and workplace phobia : a concept of domain-specific mental disorders
(2008)
Background: Anxiety in the workplace is a special problem, as workplaces are especially prone to provoke anxiety: there are social hierarchies, rivalries between colleagues, sanctioning by superiors, danger of accidents, failure, and worries about job security. Workplace phobia is a phobic anxiety reaction, with symptoms of panic occurring when thinking of or approaching the workplace and with a clear tendency toward avoidance. Objectives: What characterizes workplace-related anxieties and workplace phobia as domain-specific mental disorders, in contrast to conventional anxiety disorders? Method: 230 patients from an inpatient psychosomatic rehabilitation center were interviewed with the (semi-)structured Mini-Work-Anxiety-Interview and the Mini International Neuropsychiatric Interview concerning workplace-related anxieties and conventional mental disorders. Additionally, the patients filled in the self-rating questionnaires Job-Anxiety-Scale (JAS) and Symptom Checklist (SCL-90-R), measuring job-related and general psychosomatic symptom load. Results: Workplace-related anxieties occurred together with conventional anxiety disorders in 35% of the patients, but also alone in others (23%). Workplace phobia was found in 17% of the interviewed patients, and some diagnosis of workplace-related anxiety was made in 58%. Workplace-phobic patients had significantly higher job-anxiety scores than patients without workplace phobia, and had been on sick leave significantly longer in the past 12 months (23.5 weeks versus 13.4 weeks). Different qualities of workplace-related anxiety lead to work-participation disorders with different frequencies. Conclusion: Workplace phobia cannot be described by assessing only the general level of psychosomatic symptom load and conventional mental disorders.
Workplace-related anxieties and workplace phobia have a clinical value of their own, defined mainly by specific workplace-related symptom load and work-participation disorders. They require special therapeutic attention and treatment rather than a “sick leave” certification by the general health physician. Workplace phobia should be given a proper diagnosis according to ICD-10 chapter V, F 40.8: “workplace phobia”.
One of the informal properties often used to describe a new virtual world is its degree of openness. Yet what is an “open” virtual world? Does the phrase mean generally the same thing to different people? What distinguishes an open world from a less open world? Why does openness matter anyway? The answers to these questions cast light on an important, but shadowy, and uneasy, topic for virtual worlds: the relationship between those who construct the virtual, and those who use these constructions.
* 1. Large female insects usually have high potential fecundity. Selection should therefore favour an increase in body size, provided that these females get opportunities to realize their potential advantage by maturing and laying more eggs. However, ectotherm physiology is strongly temperature-dependent, and activities can be carried out efficiently only within certain temperature ranges. Thus it remains unclear whether the fecundity advantage of a large size is fully realized in natural environments, where thermal conditions are limiting. * 2. Insect fecundity might be limited by temperature at two levels: first, eggs need to mature, and then the female needs time for strategic ovipositing of each egg. Since a female cannot foresee the number of oviposition opportunities she will encounter on a given day, the optimal rate of egg maturation will be governed by trade-offs associated with egg- and time-limited oviposition. As females of different sizes have different amounts of body reserves, size-dependent allocation trade-offs between the mother's condition and her egg production might be expected. * 3. In the temperate butterfly Pararge aegeria, the time and temperature dependence of oviposition and egg maturation, and the interrelatedness of these two processes, were investigated in a series of laboratory experiments, allowing a decoupling of the time budgets for the respective processes. * 4. The results show that realized fecundity of this species can be limited by both the temperature dependence of egg maturation and that of oviposition under certain thermal regimes. Furthermore, rates of oviposition and egg maturation appeared to have regulatory effects on each other. Early reproductive output was correlated with short life span, indicating a cost of reproduction. Finally, large females matured more eggs than small females when deprived of oviposition opportunities. Thus, the optimal allocation of resources to egg production seems to depend on female size. * 5.
This study highlights the complexity of processes underlying rates of egg maturation and oviposition in ectotherms under natural conditions. We further discuss the importance of temperature variation for egg- vs. time-limited fecundity and the consequences for the evolution of female body size in insects.
We prove a local-in-time existence and uniqueness theorem for classical solutions of the coupled Einstein-Euler system, and thereby establish the well-posedness of this system. We use the condition that the energy density may vanish or tend to zero at infinity, and that the pressure is a certain function of the energy density; these conditions are used to describe simplified stellar models. In order to achieve our goals we are forced, by the complexity of the problem, to treat these equations in a new type of weighted Sobolev spaces of fractional order. Besides constructing these spaces, we develop tools for PDEs and techniques for hyperbolic and elliptic equations in them. The well-posedness is obtained in these spaces.
The anisotropic effect of the olefinic C=C double bond has been calculated by employing the NICS (nucleus-independent chemical shift) concept and visualized as an anisotropic cone by a through-space NMR shielding grid. The sign and size of this spatial effect on the 1H chemical shifts of protons in norbornene, exo- and endo-2-methylnorbornenes, and three highly congested tetracyclic norbornene analogs have been compared with the experimental 1H NMR spectra as far as published. 1H NMR spectra have also been calculated at the HF/6-31G* level of theory to obtain a full, comparable set of proton chemical shifts. Differences between δ(1H)/ppm values and the calculated anisotropic effect of the C=C double bond are discussed in terms of the steric compression that occurs in the compounds studied.
Nowadays, reactions on surfaces attract great scientific interest because of their diverse applications. Well-known examples are the production of ammonia on metal surfaces for fertilizers and the reduction of poisonous gases from automobiles using catalytic converters. More recently, photoinduced reactions at surfaces, useful, \textit{e.g.}, for photocatalysis, have also been studied in detail. Often, very short laser pulses are used for this purpose. Some of these reactions occur on femtosecond (1 fs = $10^{-15}$ s) time scales, since the motion of atoms (which leads to bond breaking and new bond formation) takes place in this time range. This thesis investigates the femtosecond-laser-induced associative photodesorption of hydrogen, H$_2$, and deuterium, D$_2$, from a ruthenium metal surface. Many interesting features of this reaction have been explored by experimentalists: (i) a huge isotope effect in the desorption probability of H$_2$ and D$_2$, (ii) a desorption yield that increases non-linearly with the applied visible (vis) laser fluence, and (iii) unequal energy partitioning among different degrees of freedom. These peculiarities are due to the fact that an ultrashort vis pulse creates hot electrons in the metal, which then transfer energy to adsorbate vibrations, leading to desorption. In fact, adsorbate vibrations are strongly coupled to metal electrons through non-adiabatic couplings. This means that surfaces introduce additional channels for energy exchange, which makes the control of surface reactions more difficult than the control of gas-phase reactions; indeed, the quantum yield of surface photochemical reactions is often notoriously small. One of the goals of the present thesis is to suggest, on the basis of theoretical simulations, strategies to control and enhance the photodesorption yield of H$_2$ and D$_2$ from Ru(0001).
For this purpose, we suggest a \textit{hybrid scheme} to control the reaction, in which the adsorbate vibrations are first excited by an infrared (IR) pulse prior to the vis pulse. Both \textit{adiabatic} and \textit{non-adiabatic} representations of the photoinduced desorption problem are employed here. The \textit{adiabatic} representation is realized within the classical picture using Molecular Dynamics (MD) with electronic frictions. In the quantum mechanical description, \textit{non-adiabatic} representations are employed within open-system density matrix theory. The time evolution of the desorption process is studied using a two-mode reduced-dimensionality model with one vibrational and one translational coordinate of the adsorbate. The ground and excited electronic state potentials, and the dipole function for the IR excitation, are taken from first principles. IR-driven vibrational excitation of adsorbate modes with moderate efficiency is achieved by (modified) $\pi$-pulses and/or optimal control theory. The fluence dependence of the desorption reaction is computed by including the electronic temperature of the metal calculated from the two-temperature model. Our theoretical results show good agreement with experimental and previous theoretical findings. We then employed the IR+vis strategy in both models and found that vibrational excitation indeed promotes the desorption of hydrogen and deuterium. In summary, we conclude that photocontrol of this surface reaction can be achieved by our IR+vis scheme.
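The two-temperature model mentioned above couples the electron and lattice temperatures through two rate equations, C_e(T_e) dT_e/dt = -g(T_e - T_ph) + S(t) and C_ph dT_ph/dt = g(T_e - T_ph), where S(t) is the laser source absorbed by the electrons. The following sketch integrates them with an explicit Euler step; all parameter values are illustrative placeholders, not the constants used for Ru(0001) in this work.

```python
import numpy as np

# Illustrative placeholder parameters (not fitted to Ru(0001)).
GAMMA = 400.0    # electronic heat-capacity coefficient (J m^-3 K^-2); C_e = GAMMA * T_e
C_PH = 2.0e6     # phonon heat capacity (J m^-3 K^-1)
G = 1.0e17       # electron-phonon coupling constant (W m^-3 K^-1)

def laser_source(t, absorbed, t0=0.5e-12, width=0.05e-12):
    """Gaussian vis pulse; `absorbed` is the total energy density
    (J m^-3) deposited into the electron system."""
    return absorbed / (width * np.sqrt(2.0 * np.pi)) * np.exp(-0.5 * ((t - t0) / width) ** 2)

def two_temperature(t_end=2e-12, dt=1e-16, absorbed=5.0e8, T0=300.0):
    """Explicit-Euler integration of the two-temperature model.
    Returns the final electron and phonon temperatures (K)."""
    Te, Tph = T0, T0
    for i in range(int(t_end / dt)):
        t = i * dt
        Ce = GAMMA * Te                       # electronic heat capacity grows with T_e
        dTe = (-G * (Te - Tph) + laser_source(t, absorbed)) / Ce
        dTph = G * (Te - Tph) / C_PH
        Te += dt * dTe
        Tph += dt * dTph
    return Te, Tph
```

The qualitative behaviour is the one exploited in the experiments: the electrons are driven far above the lattice temperature during the pulse and subsequently relax toward it via the electron-phonon coupling.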
Multinuclear dynamic NMR spectroscopy of 5-trifluoromethylsulfonyl-1,3,5-dioxaazinane (4) revealed the existence of two chair conformers, close in energy, with the CF3 group oriented differently with respect to the ring. Of the two alternative routes for their interconversion, the ring-inversion path with intermediate formation of the corresponding 2,5-twist conformer is preferred, with a calculated energy barrier of 11.2 kcal/mol in excellent agreement with the experimental value (11.7 kcal/mol). The Perlin effect was studied experimentally and calculated theoretically for all CH2 groups and was found to depend on the nature of the adjacent heteroatoms O and N, respectively.
Using ESTs for phylogenomics
(2008)
Background
While full genome sequences are still available for only a handful of taxa, large collections of partial gene sequences are available for many more. The alignment of partial gene sequences results in a multiple sequence alignment containing large gaps arranged in a staggered pattern. The consequences of this pattern of missing data for the accuracy of phylogenetic analysis are not well understood. We conducted a simulation study to determine the accuracy of phylogenetic trees obtained from gappy alignments using three commonly used phylogenetic reconstruction methods (Neighbor Joining, Maximum Parsimony, and Maximum Likelihood) and studied ways to improve the accuracy of trees obtained from such datasets.
Results
We found that the pattern of gappiness in multiple sequence alignments derived from partial gene sequences substantially compromised phylogenetic accuracy even in the absence of alignment error. The decline in accuracy was beyond what would be expected from the amount of missing data alone, and was particularly dramatic for Neighbor Joining and Maximum Parsimony, where the majority of gappy alignments contained 25% to 40% incorrect quartets. To improve the accuracy of the trees obtained from a gappy multiple sequence alignment, we examined two approaches. In the first approach, alignment masking, potentially problematic columns and input sequences are excluded from the dataset. Even in the absence of alignment error, masking improved phylogenetic accuracy up to 100-fold; however, it retained, on average, only 83% of the input sequences. In the second approach, alignment subdivision, the missing data are statistically modelled in order to retain as many sequences as possible in the phylogenetic analysis. Subdivision resulted in more modest improvements to accuracy, but succeeded in including almost all of the input sequences.
Conclusion
These results demonstrate that partial gene sequences and gappy multiple sequence alignments can pose a major problem for phylogenetic analysis. The concern will be greatest for high-throughput phylogenomic analyses, in which Neighbor Joining is often the preferred method due to its computational efficiency. Both approaches can be used to increase the accuracy of phylogenetic inference from a gappy alignment. The choice between the two approaches will depend upon how robust the application is to the loss of sequences from the input set, with alignment masking generally giving a much greater improvement in accuracy but at the cost of discarding a larger number of the input sequences.
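The alignment-masking approach can be roughly illustrated as follows (a simplified sketch, not the procedure used in the study; the thresholds and the gap character are assumptions): drop columns dominated by gaps, then discard sequences left with too little non-gap content.

```python
def mask_alignment(seqs, max_gap_frac=0.5, min_coverage=0.5, gap="-"):
    """Crude alignment masking.

    seqs: dict mapping sequence name -> aligned sequence (equal lengths).
    Drops columns whose gap fraction exceeds max_gap_frac, then drops
    sequences whose remaining non-gap fraction falls below min_coverage.
    """
    rows = list(seqs.values())
    ncol = len(rows[0])
    keep_cols = [j for j in range(ncol)
                 if sum(r[j] == gap for r in rows) / len(rows) <= max_gap_frac]
    if not keep_cols:
        return {}
    masked = {name: "".join(seq[j] for j in keep_cols) for name, seq in seqs.items()}
    # Discard sequences that are now mostly gaps.
    return {name: s for name, s in masked.items()
            if sum(c != gap for c in s) / len(keep_cols) >= min_coverage}
```

This captures the trade-off the study reports: masking can sharply reduce the misleading gap pattern, but at the cost of losing some input sequences entirely.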
Tuning of the excited-state properties and photovoltaic performance in PPV-based polymer blends
(2008)
The through-space NMR shielding (TSNMRS) values of two tricyclobutabenzene (TCBB) derivatives 2, of the corresponding hexamethylene and hexaoxo TCBB derivatives 3, of [4n]annuleno[4n + 2]annulene 5, and of its tricyclobutadiene parent compound 4 have been calculated ab initio by the GIAO perturbation method, employing the nucleus-independent chemical shift (NICS) concept of Paul von Ragué Schleyer, and visualized as iso-chemical-shielding surfaces (ICSS). TSNMRS values can be successfully employed to quantify and visualize the aromaticity of the central benzene ring moieties, and in 5 also of the terminal ones.
Efficient triplet exciton emission has enabled improved operation of organic light-emitting diodes (LEDs). To enhance device performance, it is necessary to understand what governs the motion of triplet excitons through the organic semiconductor. Here, we have investigated triplet diffusion using a model compound that has weak energetic disorder. The Dexter-type triplet energy transfer is found to be thermally activated down to a transition temperature T_T, below which the transfer rate is only weakly temperature dependent. We show that above the transition temperature, Dexter energy transfer can be described within the framework of Marcus theory. We suggest that below T_T, the nature of the transfer changes from phonon-assisted hopping to quantum-mechanical tunneling. The lower electron-phonon coupling and higher electronic coupling in the polymer compared to the monomer result in an enhanced triplet diffusion rate.
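In the thermally activated regime, the Marcus picture invoked above gives a nonadiabatic transfer rate k = (2π/ħ)|V|² (4πλk_BT)^(-1/2) exp[-(ΔG+λ)²/(4λk_BT)], which falls with decreasing temperature. A small sketch (the parameter values used below are arbitrary illustrations, not those of the compound studied):

```python
import math

HBAR = 6.582e-16   # reduced Planck constant (eV s)
KB = 8.617e-5      # Boltzmann constant (eV / K)

def marcus_rate(V, lam, dG, T):
    """Nonadiabatic Marcus transfer rate in s^-1.

    V:   electronic coupling between the two sites (eV)
    lam: reorganization energy (eV)
    dG:  free-energy difference between the sites (eV)
    T:   temperature (K)
    """
    prefactor = (2.0 * math.pi / HBAR) * V ** 2 / math.sqrt(4.0 * math.pi * lam * KB * T)
    return prefactor * math.exp(-((dG + lam) ** 2) / (4.0 * lam * KB * T))
```

For dG = 0 the activation energy is λ/4, so the rate is thermally activated, consistent with the behaviour reported above the transition temperature; the tunneling regime below T_T is outside this expression.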
This paper explores the role of the intentional stance in games, arguing that any question of artificial intelligence has as much to do with co-opting the player's interpretation of actions as intelligent as with any actual fixed-state systems attached to agents. It demonstrates how, using a few simple and, in system terms, cheap tricks, existing AI can be both supported and enhanced. These include representational characteristics, importing behavioral expectations from real life, constraining these expectations with diegetic devices, and managing social interrelationships to create the illusion of a greater intelligence than is ever actually present. It is concluded that complex artificial intelligence is often less important to the experience of intelligent agents in play than the creation of a space in which the intentional stance can be evoked and supported.
Translation in plastids : elucidation of decoding mechanisms and functions of ribosomal components
(2008)
Generalized Two-Level Grammar (GTWOL) provides a new method for compilation of parallel replacement rules into transducers. The current paper identifies the role of generalized lenient composition (GLC) in this method. Thanks to the GLC operation, the compilation method becomes bipartite and easily extendible to capture various application modes. In the light of three notions of obligatoriness, a modification to the compilation method is proposed. We argue that the bipartite design makes implementation of parallel obligatoriness, directionality, length and rank based application modes extremely easy, which is the main result of the paper.
Traffic of molecular motors
(2008)
We propose a network structure-based model for heterosis, and investigate it relying on metabolite profiles from Arabidopsis. A simple feed-forward two-layer network model (the Steinbuch matrix) is used in our conceptual approach. It allows for directly relating structural network properties with biological function. Interpreting heterosis as increased adaptability, our model predicts that the biological networks involved show increasing connectivity of regulatory interactions. A detailed analysis of metabolite profile data reveals that the increasing-connectivity prediction is true for graphical Gaussian models in our data from early development. This mirrors properties of observed heterotic Arabidopsis phenotypes. Furthermore, the model predicts a limit for increasing hybrid vigor with increasing heterozygosity—a known phenomenon in the literature.
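A graphical Gaussian model of the kind used above links metabolites by their partial correlations, which can be read off from the inverse of the covariance (precision) matrix; connectivity is then the number of edges whose partial correlation exceeds some threshold. A minimal sketch (the threshold and the data layout are illustrative assumptions, not the estimation procedure of the study):

```python
import numpy as np

def partial_correlations(data):
    """Gaussian graphical model edge weights: partial correlations
    derived from the inverse sample covariance matrix.

    data: array of shape (n_samples, n_metabolites).
    """
    prec = np.linalg.inv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcor = -prec / np.outer(d, d)     # standard sign convention
    np.fill_diagonal(pcor, 1.0)
    return pcor

def connectivity(pcor, threshold=0.3):
    """Number of edges whose |partial correlation| exceeds the threshold."""
    mask = np.abs(pcor) > threshold
    np.fill_diagonal(mask, False)
    return int(mask.sum() // 2)
```

Unlike ordinary correlations, partial correlations vanish for metabolites that interact only through an intermediate, so the edge count tracks direct regulatory interactions, the quantity whose increase the heterosis model predicts.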
Towards a financial perspective on virtual communities : the case of the Berlin Stock Exchange
(2008)
pH sensing in living cells is one of the most prominent topics in biochemistry and physiology. In this study we performed one-photon and two-photon time-domain fluorescence lifetime imaging with a laser-scanning microscope, using the time-correlated single-photon counting technique to image intracellular pH levels. The suitability of different commercial fluorescence dyes for lifetime-based pH sensing is discussed on the basis of in vitro as well as in situ measurements. Although the tested dyes are suitable for intensity-based ratiometric measurements, for lifetime-based techniques in the time domain so far only BCECF seems to meet the requirements of reliable intracellular pH recordings in living cells.
It has long been enigmatic which processes control the accretion of the North American terranes to the Pacific plate and the landward migration of the San Andreas plate boundary. One theory suggests that the Pacific plate first cools and captures the upwelling mantle in the slab window, and then causes the accretion of the continental crustal blocks. The alternative theory attributes the accretion to the capture of Farallon plate fragments (microplates) stalled in the ceased Farallon-North America subduction zone. Quantitative judgement between these two end-member concepts requires 3D thermomechanical numerical modeling; however, a software tool suited to such modeling is at present not available in the geodynamic modeling community. The work presented here therefore comprises two interconnected tasks. The first is the development and testing of a research Finite Element code with facilities sufficiently advanced to perform three-dimensional, geological-time-scale simulations of lithospheric deformation. The second is the application of the developed tool to the Neogene deformation of the crust and mantle along the San Andreas Fault System in central and northern California. Geological-time-scale modeling of lithospheric deformation poses numerous conceptual and implementation challenges, among them the necessity to handle the brittle-ductile transition within a single computational domain, to represent rock rheology adequately over a broad range of temperatures and stresses, and to resolve extreme deformations of the free surface and internal boundaries. In the framework of this thesis the new Finite Element code SLIM3D has been successfully developed and tested.
This code includes a coupled thermomechanical treatment of deformation processes and allows for an elasto-visco-plastic rheology with diffusion, dislocation and Peierls creep mechanisms and Mohr-Coulomb plasticity. The code incorporates an Arbitrary Lagrangian-Eulerian formulation with a free surface and Winkler boundary conditions. The modeling technique developed is used to study the factors influencing Neogene lithospheric deformation in central and northern California. The model setup focuses on the interaction between the three major tectonic elements in the region: the North America plate, the Pacific plate and the Gorda plate, which join near the Mendocino Triple Junction. Among the modeled effects is the influence of asthenospheric upwelling in the opening slab window on the overlying North American plate. The models also incorporate the captured microplate remnants in the fossil Farallon subduction zone, a simplified subducting Gorda slab, and prominent crustal heterogeneity such as the Salinian block. The results show that heating of the mantle roots beneath the older fault zones, together with the transpression related to fault stepping, renders cooling in the slab window alone incapable of explaining the eastward migration of the plate boundary. From the viewpoint of thermomechanical modeling, the results confirm the geological concept that a series of microplate capture events has been the primary reason for the inland migration of the San Andreas plate boundary over the past 20 Ma. The remnants of the Farallon slab, stalled in the fossil subduction zone, create much stronger heterogeneity in the mantle than the cooling of the upwelling asthenosphere, providing a more efficient and direct way of transferring the North American terranes to the Pacific plate. The models also demonstrate that a high effective friction coefficient on major faults fails to predict the distinct zones of strain localization observed in the brittle crust.
The magnitude of the friction coefficient inferred from the modeling is about 0.075, far less than the typical values of 0.6-0.8 obtained from a variety of borehole stress measurements and laboratory data. The model results presented in this thesis therefore provide an additional independent constraint supporting the “weak-fault” hypothesis in the long-standing debate over the strength of major faults in the SAFS.
Thermal radiation processes
(2008)
We discuss the different physical processes that are important to understand the thermal X-ray emission and absorption spectra of the diffuse gas in clusters of galaxies and the warm-hot intergalactic medium. The ionisation balance, line and continuum emission and absorption properties are reviewed and several practical examples are given that illustrate the most important diagnostic features in the X-ray spectra.
This text compares the special characteristics of the game space in computer-generated environments with those in non-computerized playing situations. In doing so, it challenges the concept of the magic circle as a deliberately delineated playing sphere with specific rules to be upheld by the players. Computer games, too, provide a virtual playing environment containing the rules of the game as well as the various possibilities for action, but both the hardware and the software facilitate the player's actions rather than constraining them. This makes computer games fundamentally different: in contrast to traditional game spaces or limits, the computer-generated environment does not rely on the awareness of the player to uphold these rules. Thus, there is no magic circle.
Contents: Artem Polyvanny, Sergey Smirnow, and Mathias Weske, The Triconnected Abstraction of Process Models
1 Introduction
2 Business Process Model Abstraction
3 Preliminaries
4 Triconnected Decomposition
4.1 Basic Approach for Process Component Discovery
4.2 SPQR-Tree Decomposition
4.3 SPQR-Tree Fragments in the Context of Process Models
5 Triconnected Abstraction
5.1 Abstraction Rules
5.2 Abstraction Algorithm
6 Related Work and Conclusions
Over the last decades Britain's ethnic minorities have successfully established themselves in a multicultural society. In particular, Indian Hindu communities have generally improved their social and economic situation, and in this context the third generation of British Indians is now growing up. In contrast to the previous generation of the Indian diaspora, these children grow up in an established ethnic community which has learned to retain its religion, traditions and culture in a foreign environment, while at the same time being part of multicultural British society. Based on the academic discussion about the second generation of immigrant ethnic communities, whose youth often suffered from cultural differences, racism and discrimination and therefore rejected aspects of their culture of origin, this paper assumes that the loss of the culture of origin increases further in the third generation. This thesis follows the main theories about the connection between generation and integration. It is assumed that the preference for western culture influences the personal, ethnic and cultural identity of young people and leads to the rejection of traditional bonds. Before this thesis is introduced, various theoretical concepts are discussed which are indispensable for understanding the diasporic situation in which British Indian youngsters grow up. As part of the worldwide Asian Indian diaspora, Indian families in Britain maintain manifold links to Indian communities in various countries. The link to India plays a particularly decisive role; the subcontinent is referred to as an abstract homeland, especially by the first generation. While the grandparents strongly adhere to their Indian culture and Hindu religion, the second generation has already brought about cultural change. In this process various cultural values of the Indian ethnic community have been questioned and modified.
Further, the second generation pushed integration into British society by giving up dependence on the ethnic network. This paper is based on a hybrid and fluid definition of culture, which also applies to the underlying understanding of identity and ethnicity. Owing to migration, cultural contact and the multilocality of the diaspora, diasporic and post-diasporic identities and cultures are characterized by hybridity, heterogeneity, fragmentation and flexibility. Particularly in the younger generation, though dependent on a number of social and structural factors, cultural change and mixture take place, and in this process new ethnicities and identities evolve. In the second and third parts of this paper the thesis of a loss of the culture of origin is refuted on the basis of findings from empirical research, for which British-Indian youngsters in London were questioned. Half of the youngsters are affiliated with a sampradaya, a Hindu sect. This enables the author to compare youngsters who do not belong to a particular religious group with those who are included in a religious and/or ethnic community through a sampradaya. The analysis of the findings, which are based on qualitative and quantitative social research, shows that the young people have great interest in their culture of origin and aim to maintain this culture in the diaspora. They identify as Indian and are proud of their cultural differences; in this, they differ from the second generation. In contrast to the generation of their grandparents, the Indian identity of the third generation is not based on nostalgic memories: they confirm and emphasize their post-diasporic difference in a western multicultural society. The findings from the survey thereby go beyond Hansen's thesis of the rediscovery of the culture of origin in the third generation.
The comparison of both groups shows that, in the context of the differentiation of postmodern and postcolonial communities, ethnic groups also become increasingly differentiated. Indian heritage and culture therefore do not play the same role for every young British Indian.
Plant population modelling has been around since the 1970s, providing a valuable approach to understanding plant ecology from a mechanistic standpoint. It is surprising, then, that this area of research has not grown in prominence relative to other approaches employed in modelling plant systems. In this review, we provide an analysis of the development and role of modelling in the field of plant population biology through an exploration of where it has been, where it is now and, in our opinion, where it should be headed. We focus, in particular, on the role plant population modelling could play in ecological forecasting, an urgent need given current rates of regional and global environmental change. We suggest that a critical element limiting the current application of plant population modelling in environmental research is the trade-off between the resolution and detail required to accurately characterize ecological dynamics and the goal of generality, particularly at broad spatial scales. In addition to suggestions on how to overcome the current shortage of process-level data, we discuss two emerging strategies that may offer a way past the described limitation: (1) the application of a modern approach to spatial scaling from local processes to broader levels of interaction and (2) plant functional-type modelling. Finally, we outline what we believe is needed to develop these approaches towards a 'science of forecasting'.
The space-image
(2008)
In recent computer game research a paradigm shift is observable: games today are first and foremost conceived as a new medium characterized by their status as an interactive image. The shift in attention towards this aspect becomes apparent in a new approach that is, above all, aware of the spatiality of games and their spatial structures. This approach rejects traditional ones on the grounds that the medial specificity of games can no longer be reduced to textual or ludic properties, but has to be located in their medially constituted spatiality. To this end, seminal studies on the spatiality of computer games are reviewed and their advantages and disadvantages discussed. Building on this, and against the background of the philosophical method of phenomenology, we propose three steps for describing computer games as space-images: with this method it is possible to describe games with respect to the possible appearance of spatiality in a pictorial medium.
The South Chilean subduction zone between 41° and 43.5°S : seismicity, structure and state of stress
(2008)
While the northern and central parts of the South American subduction zone have been intensively studied, the southern part has attracted less attention, which may be due to its difficult accessibility and lower seismic activity. However, the southern part exhibits strong seismic and tsunamigenic potential, with the prominent example of the Mw = 9.5 Valdivia earthquake of May 22, 1960. In this study, data from an amphibious seismic array (Project TIPTEQ) are presented. The network reached from the trench to the active magmatic arc, incorporating the Island of Chiloé and the north-south trending Liquiñe-Ofqui fault zone (LOFZ). 364 local events were observed over an 11-month period from November 2004 until October 2005. The observed seismicity allows the current state of stress of the subducting plate and magmatic arc, as well as the local seismic velocity structure, to be constrained for the first time. The downgoing Benioff zone is readily identifiable as an eastward-dipping plane with an inclination of ~30°. The main seismic activity occurred predominantly in a belt parallel to the coast of Chiloé Island at depths of 12-30 km, presumably related to the plate interface. The down-dip termination of abundant intermediate-depth seismicity at approximately 70 km depth seems to be related to the young age (and high temperature) of the oceanic plate. A high-quality subset of events was inverted for a 2-D velocity model. The vp model resolves the sedimentary basins and the downgoing slab. Increased velocities below the longitudinal valley and the eastern part of Chiloé Island suggest the existence of a mantle bulge. Apart from the events in the Benioff zone, shallow crustal events were observed mainly in distinct clusters along the magmatic arc. These crustal clusters of seismicity are related to the LOFZ, as well as to the volcanoes Chaitén, Michinmahuida and Corcovado. Seismic activity with magnitudes of up to Mw 3.8 attests to the recent activity of the fault zone. 
Focal mechanisms for the events along the LOFZ were calculated using a moment tensor inversion of body-wave amplitude spectra, which mostly yields strike-slip mechanisms indicating a SW-NE orientation of sigma_1 for the LOFZ. A stress inversion of the focal mechanisms indicates a strike-slip regime along the arc and a thrust regime in the Benioff zone. The observed deformation, which is also revealed by teleseismic observations, supports the proposed northward movement of a forearc sliver acting as a detached continental micro-plate.
This paper suggests an approach to studying the rhetoric of persuasive computer games through comparative analysis. A comparison of the military propaganda game AMERICA’S ARMY to similar shooter games reveals an emphasis on discipline and constraints in all main aspects of the games, demonstrating a preoccupation with ethos more than pathos. Generalizing from this, a model for understanding game rhetoric through balances of freedom and constraints is proposed.
Adenylates are metabolites with essential functions in metabolism and signaling in all living organisms. As cofactors, they enable thermodynamically unfavorable reactions to be catalyzed enzymatically within cells. Outside the cell, adenylates are involved in signaling processes in animals, and emerging evidence suggests similar signaling mechanisms in the plant apoplast. Presumably, apoplastic apyrases are involved in this signaling by hydrolyzing the signal-mediating molecules ATP and ADP to AMP. This PhD thesis focused on the role of adenylates in the metabolism and development of potato (Solanum tuberosum), using reverse genetics and biochemical approaches. To study the short- and long-term effects of cellular ATP and the adenylate energy charge on potato tuber metabolism, an apyrase from Escherichia coli targeted to the amyloplast was expressed both inducibly and constitutively. Both approaches led to the identification of adaptations to reduced ATP/energy charge levels at the molecular and developmental level. These comprised a reduction of metabolites and pathway fluxes that require significant amounts of ATP, such as amino acid or starch synthesis, and an activation of processes that produce ATP, such as respiration, as well as an immense increase in the surface-to-volume ratio. To identify extracellular enzymes involved in adenylate conversion, green fluorescent protein and activity localization studies in potato tissue were carried out. It was found that extracellular ATP is imported into the cell by an apoplastic enzyme complement consisting of apyrase, unspecific phosphatase, adenosine nucleosidase and an adenine transport system. By changing the expression of a potato-specific apyrase via transgenic approaches, it was found that this enzyme has a strong impact on plant and, in particular, tuber development in potato. 
Whereas metabolite levels were hardly altered, transcript profiling of tubers with reduced apyrase activity revealed a significant upregulation of genes coding for extensins, which are associated with polar growth. The results are discussed in the context of adaptive responses of plants to changes in adenylate levels and the proposed role of apyrase in apoplastic purinergic signaling and ATP salvaging. In summary, this thesis provides insight into adenylate-regulated processes within and outside non-photosynthetic plant cells.
Linking elements in German are generally assumed to have developed either from suffixes indicating the genitive singular or from plural markers. In this paper it is argued that only the linking element -(e)s- evolved from an inflectional suffix, that of the genitive case, but not the syllabic linking elements -e-, -er- and -(e)n- homophonous with plural markers. For these linking elements the received explanation is doubtful for a number of reasons. The present paper proposes an alternative explanation for the development of such interfixes, according to which both linking elements and plural markers were grammaticalized from the same old Indo-European stem suffixes, which indicated the declension class of the noun. Their homophony is due to the fact that they both evolved from the same source. After the decline of the original endings, the indicators of moribund inflectional classes became afunctional 'junk' and were then reanalysed either as plural markers or as linking elements. This development of linking elements can thus be shown to be a case of exaptation or regrammaticalization.
KEPI is a protein kinase C-potentiated inhibitory protein for type 1 Ser/Thr protein phosphatases. We found no or reduced expression of KEPI in breast cancer cell lines, breast tumors and metastases in comparison to normal breast cell lines and tissues, respectively. KEPI protein expression and its ubiquitous localization were detected with a newly generated antibody. Ectopic KEPI expression in MCF7 breast cancer cells induced the differential expression of 95 genes, including the up-regulation of the tumor suppressors EGR1 (early growth response 1) and PTEN (phosphatase and tensin homolog), which is regulated by EGR1. We further show that the up-regulation of EGR1 in MCF7/KEPI cells is mediated by MEK-ERK signaling: inhibition of this pathway by the MEK inhibitor U0126 led to a strong decrease in EGR1 expression in MCF7/KEPI cells. These results reveal a novel role for KEPI in the regulation of the tumor suppressor gene EGR1 via activation of the MEK-ERK MAPK pathway.
Jesper Juul has convincingly argued that the conflict over the proper object of study has shifted from “rules or story” to “player or game.” But a key component of digital games is still missing from either of these oppositions: that of the computer itself. This paper offers a way of thinking about the phenomenology of the videogame from the perspective of the computer rather than the game or the player.
This paper highlights the different ways of perceiving video games and video game content, incorporating interactive and non-interactive methods. It examines the varying cognitive and emotive reactions of persons who are used to playing video games as well as of persons who are unfamiliar with the aesthetics and the most basic gameplay rules incorporated in video games. Additionally, the principle of "Flow" serves as a theoretical and philosophical foundation. A small case study featuring two games was conducted to illustrate the numerous possible ways of perceiving video games.
The present dissertation focuses on the question whether and under which conditions infants recognise clauses in fluent speech and the role a prosodic marker such as a pause may have in the segmentation process. In the speech signal, syntactic clauses often coincide with intonational phrases (IPhs) (Nespor & Vogel, 1986, p. 190), the boundaries of which are marked by changes in fundamental frequency (e.g., Price, Ostendorf, Shattuck-Hufnagel & Fong, 1991), lengthening of the final syllable (e.g., Cooper & Paccia-Cooper, 1980) and the occurrence of a pause (Nespor & Vogel, 1986, p. 188). Thus, IPhs seem to be reliably marked in the speech stream and infants may use these cues to recognise them. Furthermore, corpus studies on the occurrence and distribution of pauses have revealed that there is a strong correlation between the duration of a pause and the type of boundary it marks (e.g., Butcher, 1981, for German). Pauses between words are either non-existent or short, pauses between phrases are a bit longer, and pauses between clauses and at sentence boundaries further increase in duration. This suggests the existence of a natural pause hierarchy that complements the prosodic hierarchy described by Nespor and Vogel (1986). These hierarchies on the side of the speech signal correspond to the syntactic hierarchy of a language. In the present study, five experiments using the Headturn preference paradigm (Hirsh-Pasek, Kemler Nelson, Jusczyk, Cassidy, Druss & Kennedy, 1987) were conducted to investigate German-learning 6- and 8-month-olds’ use of pauses to recognise clauses in the signal and their sensitivity to the natural pause hierarchy. Previous studies on English-learning infants’ recognition of clauses (Hirsh-Pasek et al., 1987; Nazzi, Kemler Nelson, Jusczyk & Jusczyk, 2000) have found that infants as young as 6 months recognise clauses in fluent speech. 
Recently, Seidl and colleagues have begun to investigate the status the pause may have in this process (Seidl, 2007; Johnson & Seidl, 2008; Seidl & Cristià, 2008). However, none of these studies investigated infants' sensitivity to the natural pause hierarchy, and especially the sensitivity to the correlation between pause durations and the respective within-sentence clause boundaries and sentence boundaries. To address these questions, highly controlled stimuli were used. In all five experiments the stimuli were sentences consisting of two IPhs, each of which coincided with a syntactic clause. In the first three experiments, pauses were inserted either at clause and sentence boundaries or within the first clause and at the sentence boundaries. The duration of the pauses varied between the experiments. The results show that German-learning 6-month-olds recognise clauses in the speech stream, but only in a condition in which the duration of the pauses conforms to the mean duration of pauses found at the respective boundaries in German. Experiments 4 and 5 explicitly addressed the question of infants' sensitivity to the natural pause hierarchy by inserting pauses at the clause and sentence boundaries only. Their durations either conformed to the natural pause hierarchy or were reversed. The results of these experiments provide evidence that 8-, but not 6-month-olds are sensitive to the correlation between the duration of pauses and the type of boundary they demarcate. The present study provides first evidence that infants not only use pauses to recognise clause and sentence boundaries, but are also sensitive to the duration and distribution of pauses in their native language as reflected in the natural pause hierarchy.