The paper discusses the distribution and meaning of the additive particle -məs in Ishkashimi. -məs receives different semantic associations while staying in the same syntactic position. Thus, structurally combined with an object, it can semantically associate with the focused object or with the whole focused VP; similarly, combined with the subject, it can semantically associate with the focused subject and with the whole focused sentence.
Several chronometric biases in numerical cognition have informed our understanding of a mental number line (MNL). Complementing this approach, we investigated spatial performance in a magnitude comparison task. Participants located the larger or smaller number of a pair on a horizontal line representing the interval from 0 to 10. Experiments 1 and 2 used only number pairs one unit apart and found that digits were localized farther to the right with "select larger" instructions than with "select smaller" instructions. However, when numerical distance was varied (Experiment 3), digits were localized away from numerically near neighbors. This repulsion effect reveals context-specific distortions in number representation not previously noticed with chronometric measures.
Integrated and concurrent cultures in rice fields are a promising approach to sustainable farming as the demand for aquacultural and agricultural products continues to grow while land and water resources become increasingly scarce. Prawn farming mainly takes place in coastal regions in improved extensive to semi-intensive aquacultures, but a trend to shift the industry to inland regions has been noticed. This inland study in Northern Bangladesh used different input regimes, such as fertilizer and additional feed, to compare the performance of prawn and fish in flooded paddy fields with regard to water quality measurements. Maximal net yields and body weight gain with minimized negative impact on water quality were found when initial body weights of prawn were optimized. Regarding yield factors in reference to cost reduction through the avoidance of expensive fertilizer/feed and effort, prawn performed better than integrated fish cultures considering the higher market value of prawn, with net yields of up to 97 ± 55 kg ha⁻¹ for unfed and 151 ± 61 kg ha⁻¹ for fed treatments. Rice yields of up to 4.7 ± 0.1 t ha⁻¹ (unfed) and 4.4 ± 0.1 t ha⁻¹ (fed) were achieved. The findings suggest that for small-scale farmers, prawn cum rice cultures are an economically profitable and comparatively easily manageable alternative to rice cum fish cultures.
Two optically obscured Wolf-Rayet (WR) stars have recently been discovered by means of their infrared (IR) circumstellar shells, which show signatures of interaction with each other. Following the systematics of the WR star catalogues, these stars received the names WR 120bb and WR 120bc. In this paper, we present and analyse new near-IR J-, H-, and K-band spectra using the Potsdam Wolf-Rayet model atmosphere code. For that purpose, the atomic database of the code has been extended to include all significant lines in the near-IR bands.
The spectra of both stars are classified as WN9h. As their spectra are very similar, the parameters that we obtained by the spectral analyses hardly differ. Despite their late spectral subtype, we found relatively high stellar temperatures of 63 kK. The wind composition is dominated by helium, while hydrogen is depleted to 25 per cent by mass.
Because of their location in the Scutum-Centaurus Arm, WR 120bb and WR 120bc appear highly reddened, A_Ks ≈ 2 mag. We adopt a common distance of 5.8 kpc to both stars, which complies with the typical absolute K-band magnitude for the WN9h subtype of -6.5 mag, is consistent with their observed extinction based on comparison with other massive stars in the region, and allows for the possibility that their shells are interacting with each other. This leads to luminosities of log(L/L⊙) = 5.66 and 5.54 for WR 120bb and WR 120bc, with large uncertainties due to the adopted distance.
The values of the luminosities of WR 120bb and WR 120bc imply that the immediate precursors of both stars were red supergiants (RSG). This implies in turn that the circumstellar shells associated with WR 120bb and WR 120bc were formed by interaction between the WR wind and the dense material shed during the preceding RSG phase.
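The adopted distance, extinction, and subtype magnitude quoted above are tied together by the standard distance modulus; a minimal sketch of that arithmetic (the input values come from the abstract, the relation itself is textbook, and the resulting apparent magnitude is merely an illustrative consistency check, not a number from the paper):

```python
import math

# Apparent K-band magnitude implied by the distance modulus plus extinction:
# m = M + 5 log10(d / 10 pc) + A
M_K = -6.5      # typical absolute K-band magnitude of the WN9h subtype, mag
d_pc = 5800.0   # adopted common distance of 5.8 kpc, in parsec
A_K = 2.0       # Ks-band extinction, mag

m_K = M_K + 5.0 * math.log10(d_pc / 10.0) + A_K
# m_K ~ 9.3 mag: the apparent brightness such a star would show at this
# distance and reddening
```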
Management of data to produce scientific knowledge is a key challenge for biological research in the 21st century. Emerging high-throughput technologies allow life science researchers to produce big data at speeds and in amounts that were unthinkable just a few years ago. This places high demands on all aspects of the workflow: from data capture (including the constraints of the experimental design), analysis and preservation, to peer-reviewed publication of results. Failure to recognise the issues at each level can lead to serious conflicts and mistakes; research may then be compromised as a result of the publication of non-coherent protocols, or the misinterpretation of published data. In this report, we present the results from a workshop that was organised to create an ontological data-modelling framework for Laboratory Protocol Standards for the Molecular Methods Database (MolMeth). The workshop provided a set of short- and long-term goals for the MolMeth database, the most important being the decision to use the established EXACT description of biomedical ontologies as a starting point.
We tested the limits of working-memory capacity (WMC) of young adults, old adults, and children with a memory-updating task. The task consisted of mentally shifting spatial positions within a grid according to arrows, with their color signaling either go-only (control) or go/no-go conditions. The interference model (IM) of Oberauer and Kliegl (2006) was simultaneously fitted to the data of all groups. In addition to the 3 main model parameters (feature overlap, noise, and processing rate), we estimated the time for switching between go and no-go steps as a new model parameter. In this study, we examined the IM parameters across the life span. The IM parameter estimates show that (a) conditions did not differ in interference by feature overlap and interference by confusion; (b) switching costs time; (c) young adults and children were less susceptible than old adults to interference due to feature overlap; (d) noise was highest for children, followed by old and young adults; (e) old adults differed from children and young adults in a lower processing rate; and (f) children and old adults had a larger switch cost between go steps and no-go steps. Thus, the results of this study indicate that, across age, the IM parameters contribute distinctively to explaining the limits of WMC.
Deserts are a major source of loess and may undergo substantial wind erosion, as evidenced by yardang fields, deflation pans, and wind-scoured bedrock landscapes. However, there are few quantitative estimates of bedrock removal by wind abrasion and deflation. Here, we report wind-erosion rates in the western Qaidam Basin in central China based on measurements of cosmogenic ¹⁰Be in exhumed Miocene sedimentary bedrock. Sedimentary bedrock erosion rates range from 0.05 to 0.4 mm/yr, although the majority of measurements cluster at 0.125 ± 0.05 mm/yr. These results, combined with previous work, indicate that strong winds, hyper-aridity, exposure of friable Neogene strata, and ongoing rock deformation and uplift in the western Qaidam Basin have created an environment where wind, instead of water, is the dominant agent of erosion and sediment transport. The basin's upwind geographic location, combined with volumetric estimates, suggests that the Qaidam Basin is a major source (up to 50%) of dust to the Chinese Loess Plateau to the east. The cosmogenically derived wind-erosion rates are within the range of erosion rates determined from glacially and fluvially dominated landscapes worldwide, exemplifying the effectiveness of wind in eroding and transporting significant quantities of bedrock.
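Converting a measured cosmogenic Be-10 concentration into a steady-state erosion rate follows a standard relation; the sketch below uses purely illustrative values for production rate, attenuation length, density, and concentration (none of them are the paper's data), chosen so the result lands near the reported cluster:

```python
# Steady-state erosion rate from a surface 10Be concentration:
#   N = P / (lambda_10 + rho * eps / Lambda)
#   => eps = (Lambda / rho) * (P / N - lambda_10)
# All numerical values below are assumptions for illustration only.
P = 10.0          # local production rate, atoms g^-1 yr^-1 (site-dependent)
Lambda = 160.0    # spallation attenuation length, g cm^-2
rho = 2.4         # sedimentary rock density, g cm^-3
lam10 = 5.0e-7    # 10Be decay constant, yr^-1 (negligible at these rates)
N = 5.2e4         # measured concentration, atoms g^-1 (illustrative)

eps_cm_per_yr = (Lambda / rho) * (P / N - lam10)
eps_mm_per_yr = eps_cm_per_yr * 10.0   # ~0.13 mm/yr for these toy inputs
```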
Matrix product states and their continuous analogues are variational classes of states that capture quantum many-body systems or quantum fields with low entanglement; they are at the basis of the density-matrix renormalization group method and continuous variants thereof. In this work we show that, generically, N-point functions of arbitrary operators in discrete and continuous translation invariant matrix product states are completely characterized by the corresponding two- and three-point functions. Aside from having important consequences for the structure of correlations in quantum states with low entanglement, this result provides a new way of reconstructing unknown states from correlation measurements, e.g., for one-dimensional continuous systems of cold atoms. We argue that such a relation of correlation functions may help in devising perturbative approaches to interacting theories.
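In a translation-invariant MPS, correlation functions of this kind are evaluated through the transfer matrix; a toy numerical sketch of a two-point function (the tensor is random and unnormalized, and the bond/physical dimensions are arbitrary choices, not anything from the paper):

```python
import numpy as np

# Toy translation-invariant MPS: correlators of a single-site operator are
# contractions of the transfer matrix E.
rng = np.random.default_rng(0)
D, d = 4, 2                                   # bond / physical dimension (toy)
A = rng.standard_normal((d, D, D)) + 1j * rng.standard_normal((d, D, D))

E = sum(np.kron(A[s], A[s].conj()) for s in range(d))   # transfer matrix

def insertion(O):
    """Transfer matrix with a single-site operator O inserted."""
    return sum(O[s, t] * np.kron(A[s], A[t].conj())
               for s in range(d) for t in range(d))

# Dominant left/right eigenvectors fix the infinite-chain boundaries.
w, V = np.linalg.eig(E)
k = np.argmax(np.abs(w))
lam, r_vec = w[k], V[:, k]
wl, Vl = np.linalg.eig(E.T)
l_vec = Vl[:, np.argmax(np.abs(wl))]
norm = l_vec @ r_vec

Z = np.array([[1.0, 0.0], [0.0, -1.0]])       # Pauli-Z as the observable
EZ = insertion(Z)

one_point = (l_vec @ EZ @ r_vec) / (norm * lam)

def two_point(r):
    """<Z_0 Z_r>; approaches one_point**2 as r grows, at a rate set by
    the gap between the leading transfer-matrix eigenvalues."""
    M = np.linalg.matrix_power(E / lam, r - 1)
    return (l_vec @ EZ @ M @ EZ @ r_vec) / (norm * lam ** 2)
```

The subleading transfer-matrix eigenvalues set the correlation lengths, which is why so few low-order correlation functions pin down the state.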
Where girls take the role of boys in CS - attitudes of CS students in a female-dominated environment
(2013)
A survey has been carried out in the Computer Science (CS) department at the University of Baghdad to investigate the attitudes of CS students in a female-dominated environment, showing the differences between male and female students in different academic years. We also compare the attitudes of the freshman students of two different cultures (University of Baghdad, Iraq, and the University of Potsdam).
In this paper we report on our experiments in teaching computer science concepts with a mix of tangible and abstract object manipulations. The goal we set ourselves was to let pupils discover the challenges one has to meet to automatically manipulate formatted text. We worked with a group of 25 secondary school pupils (9-10th grade), and they were actually able to “invent” the concept of mark-up language. From this experiment we distilled a set of activities which will be replicated in other classes (6th grade) under the guidance of maths teachers.
Eye movement data have proven to be very useful for investigating human sentence processing. Eyetracking research has addressed a wide range of questions, such as recovery mechanisms following garden-pathing, the timing of processes driving comprehension, the role of anticipation and expectation in parsing, the role of semantic, pragmatic, and prosodic information, and so on. However, there are some limitations regarding the inferences that can be made on the basis of eye movements. One relates to the nontrivial interaction between parsing and the eye movement control system, which complicates the interpretation of eye movement data. Detailed computational models that integrate parsing with eye movement control theories have the potential to unpack the complexity of eye movement data and can therefore aid in the interpretation of eye movements. Another limitation is the difficulty of capturing spatiotemporal patterns in eye movements using the traditional word-based eyetracking measures. Recent research has demonstrated the relevance of these patterns and has shown how they can be analyzed. In this review, we focus on reading, and present examples demonstrating how eye movement data reveal what events unfold when the parser runs into difficulty, and how the parsing system interacts with eye movement control. WIREs Cogn Sci 2013, 4:125–134. doi: 10.1002/wcs.1209
In sedimentary basins, rock thermal conductivity can vary both laterally and vertically, thus altering the basin’s thermal structure locally and regionally. Knowledge of the thermal conductivity of geological formations and its spatial variations is essential, not only for quantifying basin evolution and hydrocarbon maturation processes, but also for understanding geothermal conditions in a geological setting. In conjunction with the temperature gradient, thermal conductivity represents the basic input parameter for the determination of the heat-flow density, which, in turn, is applied as a major input parameter in thermal modeling at different scales. Drill-core samples, which are necessary to determine thermal properties by laboratory measurements, are rarely available and often limited to previously explored reservoir formations. Thus, thermal conductivities of Mesozoic rocks in the North German Basin (NGB) are largely unknown. In contrast, geophysical borehole measurements are often available for the entire drilled sequence. Therefore, prediction equations to determine thermal conductivity based on well-log data are desirable. In this study, rock thermal conductivity was investigated on different scales by (1) providing thermal-conductivity measurements on Mesozoic rocks, (2) evaluating and improving commonly applied mixing models which were used to estimate matrix and pore-filled rock thermal conductivities, and (3) developing new well-log based equations to predict thermal conductivity in boreholes without core control. Laboratory measurements are performed on sedimentary rock of major geothermal reservoirs in the Northeast German Basin (NEGB) (Aalenian, Rhaethian-Liassic, Stuttgart Fm., and Middle Buntsandstein). Samples are obtained from eight deep geothermal wells that reach depths of up to 2,500 m. Bulk thermal conductivities of Mesozoic sandstones range between 2.1 and 3.9 W/(m∙K), while matrix thermal conductivity ranges between 3.4 and 7.4 W/(m∙K).
Local heat flow for the Stralsund location averages 76 mW/m², which is in good agreement with values reported previously for the NEGB. For the first time, in-situ bulk thermal conductivity is indirectly calculated for entire borehole profiles in the NEGB using the determined surface heat flow and measured temperature data. Average bulk thermal conductivity, derived for geological formations within the Mesozoic section, ranges between 1.5 and 3.1 W/(m∙K). The measurement of both dry- and water-saturated thermal conductivities allows further evaluation of different two-component mixing models which are often applied in geothermal calculations (e.g., arithmetic mean, geometric mean, harmonic mean, Hashin-Shtrikman mean, and effective-medium theory mean). It is found that the geometric-mean model shows the best correlation between calculated and measured bulk thermal conductivity. However, by applying new model-dependent correction equations, the quality of fit could be significantly improved and the error diffusion of each model reduced. The ‘corrected’ geometric mean provides the most satisfying results and constitutes a universally applicable model for sedimentary rocks. Furthermore, lithotype-specific and model-independent conversion equations are developed permitting a calculation of water-saturated thermal conductivity from dry-measured thermal conductivity and porosity within an error range of 5 to 10%. The limited availability of core samples and the expensive core-based laboratory measurements make it worthwhile to use petrophysical well logs to determine thermal conductivity for sedimentary rocks. The approach followed in this study is based on the detailed analyses of the relationships between thermal conductivity of rock-forming minerals, which are most abundant in sedimentary rocks, and the properties measured by standard logging tools.
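The classical two-component mixing models compared above can be stated compactly; a small sketch with illustrative values for a water-saturated sandstone (the matrix and water conductivities and the porosity are assumptions for demonstration, not the thesis' data):

```python
def mixing_models(lam_matrix, lam_fluid, phi):
    """Two-component bulk thermal-conductivity estimates for porosity phi.
    Returns (arithmetic, geometric, harmonic) means in W/(m K)."""
    arith = (1.0 - phi) * lam_matrix + phi * lam_fluid           # upper bound
    geom = lam_matrix ** (1.0 - phi) * lam_fluid ** phi          # best fit here
    harm = 1.0 / ((1.0 - phi) / lam_matrix + phi / lam_fluid)    # lower bound
    return arith, geom, harm

# Illustrative sandstone: matrix 5.0 W/(m K), water 0.6 W/(m K), porosity 0.15
a, g, h = mixing_models(5.0, 0.6, 0.15)
# The geometric mean always falls between the harmonic and arithmetic bounds.
```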
Using multivariate statistics separately for clastic, carbonate and evaporite rocks, prediction equations are developed from large artificial data sets that predict matrix thermal conductivity within an error of 4 to 11%. These equations are validated successfully on a comprehensive subsurface data set from the NGB. In comparison to earlier published formation-dependent approaches developed for certain areas, the newly developed equations show a significant error reduction of up to 50%. These results are used to infer rock thermal conductivity for entire borehole profiles. By inversion of corrected in-situ thermal-conductivity profiles, temperature profiles are calculated and compared to measured high-precision temperature logs. The resulting uncertainty in temperature prediction averages < 5%, which demonstrates the excellent temperature-prediction capability of the presented approach. In conclusion, data and methods are provided to achieve a much more detailed parameterization of thermal models.
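The forward step behind such a temperature calculation is Fourier's law integrated over a layered conductivity profile; a minimal sketch (the layer conductivities and surface temperature are illustrative assumptions, while the heat flow matches the 76 mW/m² quoted above):

```python
import numpy as np

def temperature_profile(z, lam, q, T0):
    """Conductive temperature profile from Fourier's law, dT/dz = q / lam(z).
    z: depth nodes (m); lam: thermal conductivity at nodes, W/(m K);
    q: heat flow, W/m^2; T0: surface temperature, deg C."""
    dz = np.diff(z)
    lam_mid = 0.5 * (lam[:-1] + lam[1:])          # interval conductivity
    return np.concatenate(([T0], T0 + np.cumsum(q * dz / lam_mid)))

z = np.array([0.0, 500.0, 1500.0, 2500.0])        # depth nodes, m
lam = np.array([1.5, 2.0, 2.6, 3.1])              # within the reported range
T = temperature_profile(z, lam, q=0.076, T0=8.0)  # q = 76 mW/m^2
# Temperature increases monotonically; low-conductivity layers carry the
# steepest gradients for a fixed heat flow.
```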
We know exactly what you want - the development of a completely individualised conjoint analysis
(2013)
Improving the predictive validity of conjoint analysis has been an important research objective for many years. Whereas the majority of attempts have been different approaches to preference modelling, data collection or product presentation, only a few scholars have tried to improve predictive validity by individualising conjoint designs. This comes as a surprise because many markets have seen an increased demand for customised products and highly heterogeneous customer preferences. Against this background, the authors develop a conjoint variant based on a completely individualised conjoint design. More concretely, the new approach not only individualises the attributes, but also the attribute levels. The results of a comprehensive empirical study yield a significantly higher validity than existing standardised-level conjoint approaches. Consequently, they help marketers to gain deeper insights into their customers' preferences.
The dynamics of external contributions to the geomagnetic field is investigated by applying time-frequency methods to magnetic observatory data. Fractal models and multiscale analysis enable obtaining maximum quantitative information related to the short-term dynamics of the geomagnetic field activity. The stochastic properties of the horizontal component of the transient external field are determined by searching for scaling laws in the power spectra. The spectrum fits a power law with a scaling exponent β, a typical characteristic of self-affine time-series. Local variations in the power-law exponent are investigated by applying wavelet analysis to the same time-series. These analyses highlight the self-affine properties of geomagnetic perturbations and their persistence. Moreover, they show that the main phases of sudden storm disturbances are uniquely characterized by a scaling exponent varying between 1 and 3, possibly related to the energy contained in the external field. These new findings suggest the existence of a long-range dependence, the scaling exponent being an efficient indicator of geomagnetic activity and singularity detection. These results show that by using magnetogram regularity to reflect the magnetosphere activity, a theoretical analysis of the external geomagnetic field based on local power-law exponents is possible.
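The scaling exponent β can be estimated with a log-log fit to the power spectrum; below is a toy estimator applied to synthetic Brownian noise (for which β ≈ 2 is expected), not to any observatory data, with the fit restricted to low frequencies where the power law holds for this process:

```python
import numpy as np

def spectral_exponent(x, dt=1.0, fmax=0.1):
    """Estimate beta in P(f) ~ f**(-beta) via a log-log periodogram fit
    (toy estimator; restricted to f < fmax to stay in the scaling range)."""
    n = len(x)
    f = np.fft.rfftfreq(n, dt)[1:]                 # drop the zero frequency
    P = np.abs(np.fft.rfft(x - x.mean())[1:]) ** 2
    sel = f < fmax
    slope, _ = np.polyfit(np.log(f[sel]), np.log(P[sel]), 1)
    return -slope

# Brownian motion (integrated white noise) is self-affine with beta ~ 2.
rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(2 ** 14))
beta = spectral_exponent(x)
```

A wavelet-based estimator, as used in the paper, additionally localizes variations of the exponent in time, which a single global fit cannot do.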
The expansion and intensification of soya bean agriculture in southeastern Amazonia can alter watershed hydrology and biogeochemistry by changing the land cover, water balance and nutrient inputs. Several new insights on the responses of watershed hydrology and biogeochemistry to deforestation in Mato Grosso have emerged from recent intensive field campaigns in this region. Because of reduced evapotranspiration, total water export increases threefold to fourfold in soya bean watersheds compared with forest. However, the deep and highly permeable soils on the broad plateaus on which much of the soya bean cultivation has expanded buffer small soya bean watersheds against increased stormflows. Concentrations of nitrate and phosphate do not differ between forest or soya bean watersheds because fixation of phosphorus fertilizer by iron and aluminium oxides and anion exchange of nitrate in deep soils restrict nutrient movement. Despite resistance to biogeochemical change, streams in soya bean watersheds have higher temperatures caused by impoundments and reduction of bordering riparian forest. In larger rivers, increased water flow, current velocities and sediment flux following deforestation can reshape stream morphology, suggesting that cumulative impacts of deforestation in small watersheds will occur at larger scales.
Background. Vocational interests play a central role in the vocational decision-making process and are decisive for later job satisfaction and vocational success. Based on Ackerman's (1996) notion of trait complexes, specific interest profiles of gifted high-school graduates can be expected. Aims. Vocational interests of gifted and highly achieving adolescents were compared to those of their less intelligent/achieving peers according to Holland's (1997) RIASEC model. Further, the impact of intelligence and achievement on interests was analysed while statistically controlling for potentially influencing variables. Changes in interests over time were investigated. Sample. N = 4,694 German students (age: M = 19.5, SD = 0.80; 54.6% females) participated in the study (TOSCA; Köller, Watermann, Trautwein, & Lüdtke, 2004). Method. Interests were assessed in participants' final year at school and again 2 years later (N = 2,318). Results. Gifted participants reported stronger investigative and realistic interests, but lower social interests, than less intelligent participants. Highly achieving participants reported higher investigative and (in wave 2) higher artistic interests. Considerable gender differences were found: gifted girls had a flat interest profile, while gifted boys had pronounced realistic and investigative and low social interests. Multilevel multiple regression analyses predicting interests by intelligence and school achievement revealed stable interest profiles. Beyond a strong gender effect, intelligence and school achievement each contributed substantially to the prediction of vocational interests. Conclusions. At the time around graduation from high school, gifted young adults show stable interest profiles, which strongly differ between gender and intelligence groups. These differences are relevant for programmes for the gifted and for vocational counselling.
Preclinical work indicates that calcitriol restores vascular function by normalizing the endothelial expression of cyclooxygenase-2 and thromboxane-prostanoid receptors in conditions of estrogen deficiency and thus prevents the thromboxane-prostanoid receptor activation-induced inhibition of nitric oxide synthase. Since endothelial dysfunction is a key factor in the pathogenesis of cardiovascular diseases, this finding may have an important translational impact. It provides a clear rationale to use endothelial function in clinical trials aiming to find the optimal dose of vitamin D for the prevention of cardiovascular events in postmenopausal women.
Two experiments investigated (1) how activation of manual affordances is triggered by visual and linguistic cues to manipulable objects and (2) whether graspable object parts play a special role in this process. Participants pressed a key to categorize manipulable target objects copresented with manipulable distractor objects on a computer screen. Three factors were varied in Experiment 1: (1) the target's and (2) the distractor's handles' orientation congruency with the lateral manual response and (3) the Visual Focus on one of the objects. In Experiment 2, a linguistic cue factor was added to these three factors-participants heard the name of one of the two objects prior to the target display onset. Analysis of participants' motor and oculomotor behaviour confirmed that perceptual and linguistic cues potentiated activation of grasp affordances. Both target- and distractor-related affordance effects were modulated by the presence of visual and linguistic cues. However, a differential visual attention mechanism subserved activation of compatibility effects associated with target and distractor objects. We also registered an independent implicit attention attraction effect from objects' handles, suggesting that graspable parts automatically attract attention during object viewing. This effect was further amplified by visual but not linguistic cues, thus providing initial evidence for a recent hypothesis about differential roles of visual and linguistic information in potentiating stable and variable affordances (Borghi in Language and action in cognitive neuroscience. Psychology Press, London, 2012).
We easily recover the causal properties of visual events, enabling us to understand and predict changes in the physical world. We see a tennis racket hitting a ball and sense that it caused the ball to fly over the net; we may also have an eerie but equally compelling experience of causality if the streetlights turn on just as we slam our car's door. Both perceptual [1] and cognitive [2] processes have been proposed to explain these spontaneous inferences, but without decisive evidence one way or the other, the question remains wide open [3-8]. Here, we address this long-standing debate using visual adaptation-a powerful tool to uncover neural populations that specialize in the analysis of specific visual features [9-12]. After prolonged viewing of causal collision events called "launches" [1], subsequently viewed events were judged more often as noncausal. These negative aftereffects of exposure to collisions are spatially localized in retinotopic coordinates, the reference frame shared by the retina and visual cortex. They are not explained by adaptation to other stimulus features and reveal visual routines in retinotopic cortex that detect and adapt to cause and effect in simple collision stimuli.
Requirements engineers have to elicit, document, and validate how stakeholders act and interact to achieve their common goals in collaborative scenarios. Only after gathering all information concerning who interacts with whom to do what and why, can a software system be designed and realized which supports the stakeholders to do their work. To capture and structure requirements of different (groups of) stakeholders, scenario-based approaches have been widely used and investigated. Still, the elicitation and validation of requirements covering collaborative scenarios remains complicated, since the required information is highly intertwined, fragmented, and distributed over several stakeholders. Hence, it can only be elicited and validated collaboratively. In times of globally distributed companies, scheduling and conducting workshops with groups of stakeholders is usually not feasible due to budget and time constraints. Talking to individual stakeholders, on the other hand, is feasible but leads to fragmented and incomplete stakeholder scenarios. Going back and forth between different individual stakeholders to resolve this fragmentation and explore uncovered alternatives is an error-prone, time-consuming, and expensive task for the requirements engineers. While formal modeling methods can be employed to automatically check and ensure consistency of stakeholder scenarios, such methods introduce additional overhead since their formal notations have to be explained in each interaction between stakeholders and requirements engineers. Tangible prototypes as they are used in other disciplines such as design, on the other hand, allow designers to feasibly validate and iterate concepts and requirements with stakeholders. This thesis proposes a model-based approach for prototyping formal behavioral specifications of stakeholders who are involved in collaborative scenarios. 
By simulating and animating such specifications in a remote domain-specific visualization, stakeholders can experience and validate the scenarios captured so far, i.e., how other stakeholders act and react. This interactive scenario simulation is referred to as a model-based virtual prototype. Moreover, through observing how stakeholders interact with a virtual prototype of their collaborative scenarios, formal behavioral specifications can be automatically derived which complete the otherwise fragmented scenarios. This, in turn, enables requirements engineers to elicit and validate collaborative scenarios in individual stakeholder sessions – decoupled, since stakeholders can participate remotely and are not forced to be available for a joint session at the same time. This thesis discusses and evaluates the feasibility, understandability, and modifiability of model-based virtual prototypes. Similarly to how physical prototypes are perceived, the presented approach brings behavioral models closer to being tangible for stakeholders and, moreover, combines the advantages of joint stakeholder sessions and decoupled sessions.
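At its core, simulating a formal behavioral specification amounts to stepping a labeled transition system; a deliberately minimal sketch (the states and events are invented for illustration and are not the notation used in the thesis):

```python
# A behavioral specification as a labeled transition system:
# (current state, event) -> next state. States/events are hypothetical.
spec = {
    ("Idle", "request_review"): "AwaitingReview",
    ("AwaitingReview", "approve"): "Approved",
    ("AwaitingReview", "reject"): "Idle",
}

def simulate(spec, start, events):
    """Replay a sequence of stakeholder events and record the visited states."""
    state, trace = start, [start]
    for ev in events:
        state = spec[(state, ev)]   # undefined (state, event) pairs raise KeyError
        trace.append(state)
    return trace

trace = simulate(spec, "Idle",
                 ["request_review", "reject", "request_review", "approve"])
```

In a virtual prototype, each step of such a simulation would drive the domain-specific visualization shown to the stakeholder, and observed stakeholder reactions could be recorded as new transitions.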
The time-dependent approach to electronic spectroscopy, as popularized by Heller and coworkers in the 1980s, is applied here in conjunction with linear-response, time-dependent density functional theory to study vibronic absorption, emission, and resonance Raman spectra of several diamondoids. Two-state models, the harmonic and the Condon approximations, are used for the calculations, making them easily applicable to larger molecules. The method is applied to nine pristine lower and higher diamondoids: adamantane, diamantane, triamantane, and three isomers each of tetramantane and pentamantane. We also consider a hybrid species “Dia=Dia” – a shorthand notation for a recently synthesized molecule comprising two diamantane units connected by a C=C double bond. We resolve and interpret trends in the optical and vibrational properties of these molecules as a function of their size, shape, and symmetry, as well as the effects of “blending” with sp2-hybridized C atoms. Time-dependent correlation functions facilitate the computations and shed light on the vibrational dynamics following electronic transitions.
Two images, taken by the Cassini spacecraft near Saturn's equinox in 2009 August, show the Earhart propeller casting a 350 km long shadow, offering the opportunity to watch how the ring height, excited by the propeller moonlet, relaxes to an equilibrium state. From the shape of the cast shadow and a model of the azimuthal propeller height relaxation, we determine the exponential cooling constant of this process to be λ = 0.07 ± 0.02 km⁻¹, and thereby determine the collision frequency of the ring particles in the vertically excited region of the propeller to be ω_c/Ω = 0.9 ± 0.2.
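As a hedged sketch (the height symbols and coordinate here are illustrative, not the paper's own notation), the azimuthal relaxation model behind the quoted cooling constant amounts to an exponential decay of the excited ring height toward its equilibrium value with azimuthal distance x downstream of the moonlet:

```latex
H(x) \approx H_{\mathrm{eq}} + \bigl(H_0 - H_{\mathrm{eq}}\bigr)\, e^{-\lambda x},
\qquad \lambda = 0.07 \pm 0.02~\mathrm{km}^{-1}.
```

Fitting such a decay constant to the shadow profile is what allows the collision frequency to be inferred, since more frequent particle collisions damp the vertical excitation over a shorter azimuthal distance.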
We report results from TeV gamma-ray observations of the microquasar Cygnus X-3. The observations were made with the Very Energetic Radiation Imaging Telescope Array System (VERITAS) over a time period from 2007 June 11 to 2011 November 28. VERITAS is most sensitive to gamma rays at energies between 85 GeV and 30 TeV. The effective exposure time amounts to a total of about 44 hr, with the observations covering six distinct radio/X-ray states of the object. No significant TeV gamma-ray emission was detected in any of the states, nor with all observations combined. The lack of a positive signal, especially in the states where GeV gamma rays were detected, places constraints on TeV gamma-ray production in Cygnus X-3. We discuss the implications of the results.
Previous research has shown that high phonotactic frequencies facilitate the production of regularly inflected verbs in English-learning children with specific language impairment (SLI) but not with typical development (TD). We asked whether this finding can be replicated for German, a language with a much more complex inflectional verb paradigm than English. Using an elicitation task, the production of inflected nonce verb forms (3rd person singular with -t suffix) with either high- or low-frequency subsyllables was tested in sixteen German-learning children with SLI (ages 4;1–5;1), sixteen TD-children matched for chronological age (CA) and fourteen TD-children matched for verbal age (VA) (ages 3;0–3;11). The findings revealed that children with SLI, but not CA- or VA-children, showed differential performance between the two types of verbs, producing more inflectional errors when the verb forms resulted in low-frequency subsyllables than when they resulted in high-frequency subsyllables, replicating the results from English-learning children.
This article examines two so-far-understudied verb doubling constructions in Mandarin Chinese, viz., verb doubling clefts and verb doubling lián…dōu. We show that these constructions have the same internal syntax as regular clefts and lián…dōu sentences, the doubling effect being epiphenomenal; therefore, we classify them as subtypes of the general cleft and lián…dōu constructions, respectively, rather than as independent constructions. We also show that, as in many other languages with comparable constructions, the two instances of the verb are part of a single movement chain, which has the peculiarity of allowing Spell-Out of more than one link.
The main intention of the PhD project was to create a varve chronology for the 'Suigetsu Varves 2006' (SG06) composite profile from Lake Suigetsu (Japan) by thin section microscopy. The chronology was not only to provide an age-scale for the various palaeo-environmental proxies analysed within the SG06 project, but also and foremost to contribute, in combination with the SG06 14C chronology, to the international atmospheric radiocarbon calibration curve (IntCal). The SG06 14C data are based on terrestrial leaf fossils and therefore record atmospheric 14C values directly, avoiding the corrections necessary for the reservoir ages of the marine datasets, which are currently used beyond the tree-ring limit in the IntCal09 dataset (Reimer et al., 2009). The SG06 project is a follow-up of the SG93 project (Kitagawa & van der Plicht, 2000), which also aimed to produce an atmospheric calibration dataset but suffered from incomplete core recovery and varve count uncertainties. For the SG06 project the complete Lake Suigetsu sediment sequence was recovered continuously, leaving the task to produce an improved varve count. Varve counting was carried out using a dual-method approach utilizing thin section microscopy and micro X-ray fluorescence (µXRF). The latter was carried out by Dr. Michael Marshall in cooperation with the PhD candidate. The varve count covers 19 m of composite core, which corresponds to the time frame from ≈10 to ≈40 kyr BP. The count result showed that seasonal layers did not form in every year. Hence, the varve counts from either method were incomplete. This rather common problem in varve counting is usually solved by manual varve interpolation, but manual interpolation often suffers from subjectivity. Furthermore, sedimentation rate estimates (which are the basis for interpolation) are generally derived from neighbouring, well-varved intervals.
This assumes that the sedimentation rates in neighbouring intervals are identical to those in the incompletely varved section, which is not necessarily true. To overcome these problems a novel interpolation method was devised. It is computer-based and automated (i.e. it avoids subjectivity and ensures reproducibility) and derives the sedimentation rate estimate directly from the incompletely varved interval by statistically analysing the distances between successive seasonal layers. Therefore, the interpolation approach is also suitable for sediments which do not contain well-varved intervals. Another benefit of the novel method is that it provides objective interpolation error estimates. Interpolation results from the two counting methods were combined and the resulting chronology compared to the 14C chronology from Lake Suigetsu, calibrated with the tree-ring derived section of IntCal09 (which is considered accurate). The varve and 14C chronologies showed a high degree of similarity, demonstrating that the novel interpolation method produces reliable results. In order to constrain the uncertainties of the varve chronology, especially the cumulative error estimates, U-Th dated speleothem data were used by linking the low-frequency 14C signal of Lake Suigetsu and the speleothems, increasing the accuracy and precision of the Suigetsu calibration dataset. The resulting chronology also represents the age-scale for the various palaeo-environmental proxies analysed in the SG06 project. One proxy analysed within the PhD project was the distribution of event layers, which often represent past floods or earthquakes. A detailed microfacies analysis revealed three different types of event layers, two of which are described here for the first time for the Suigetsu sediment. The types are: matrix-supported layers produced as a result of subaqueous slope failures, turbidites produced as a result of landslides, and turbidites produced as a result of flood events.
The former two are likely to have been triggered by earthquakes. The vast majority of event layers were related to floods (362 out of 369), which allowed the construction of a respective chronology for the last 40 kyr. Flood frequencies were highly variable, reaching their greatest values during the global sea-level low-stand of the Glacial and their lowest values during Heinrich Event 1. Typhoons affecting the region represent the most likely control on the flood frequency, especially during the Glacial. However, the data also suggest local, non-climatic controls. In summary, the work presented here expands and revises knowledge on the Lake Suigetsu sediment and enables the construction of a far more precise varve chronology. The 14C calibration dataset is the first such dataset derived from lacustrine sediments to be included in the (next) IntCal dataset. References: Kitagawa & van der Plicht, 2000, Radiocarbon 42(3), 370-381; Reimer et al., 2009, Radiocarbon 51(4), 1111-1150.
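The automated interpolation idea described in this abstract can be sketched compactly. The following is a minimal illustration under stated assumptions (the median layer spacing as the sedimentation-rate proxy and a simple spread-based error estimate), not the thesis implementation:

```python
import statistics

def interpolate_varves(layer_positions, gap_start, gap_end):
    """Estimate missing varve years in an incompletely varved interval.

    layer_positions: depths (mm) of successive seasonal layers counted
    within the incompletely varved interval itself, not in a
    neighbouring, well-varved section.
    gap_start, gap_end: depths (mm) bounding an unlayered gap.
    """
    # Sedimentation-rate proxy: typical spacing between successive layers
    spacings = [b - a for a, b in zip(layer_positions, layer_positions[1:])]
    typical_spacing = statistics.median(spacings)
    # Number of years the unlayered gap most likely represents
    missing_years = round((gap_end - gap_start) / typical_spacing)
    # Spread of spacings yields an objective (if crude) error estimate
    spread = statistics.stdev(spacings)
    error_years = round(spread * missing_years / typical_spacing)
    return missing_years, error_years
```

Because the spacing statistics come from the incompletely varved interval itself, such an estimate does not assume that neighbouring intervals share the same sedimentation rate, which is the key point of the method described above.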
The human face shows individual features as well as features characteristic of sex and age (e.g., the loss of childlike characteristics during maturation). The analysis of facial dimensions is essential for identifying individual features, also for forensic purposes.
The analysis of facial proportions was performed on photogrammetric data from front views of 125 children. The data were pooled from two different studies. The children's data were obtained from a longitudinal study and reduced by a random generator; the adults' data stem from a separate cross-sectional study.
We applied principal component analysis on photogrammetric facial proportions of 169 individuals: 125 children (63 boys and 62 girls) aged 2-7 years and 44 adults (18 men and 26 women) aged 18-65 years.
Facial proportions depend on age and sex. Three components described age: (1) proportions of facial height to head height, (2) proportions that involve endocanthal breadth, and (3) bigonial to bizygonial proportions. Proportions that associate with sex are connected with nasal distances and nasal to bizygonial distances.
Twenty-three percent of the variance, particularly variance connected with proportions of lower and middle face heights to head height, depends neither on sex nor on age and thus appears useful for screening purposes, e.g., for dysmorphic genetic syndromes.
Value creation in scene-based music production - the case of electronic club music in Germany
(2013)
The focus of this article is on the variability of value creation in the popular music industry. Recent trends in electronic music have been based on the valorization both of global tastes and of local specialities in performance and production. Depending on musical styles and market niches, local scenes have become important forces behind heterogeneous global-local markets. At the same time, technological change and the virtualization of music production and distribution contribute to increasingly differentiated configurations of value creation. It is therefore necessary to reconstruct theoretically and empirically the new interplay among local music production, digital media markets, and the virtual communities involved. On the basis of empirical explorations in a German hot spot of electronic club-music production (the city of Berlin), the article identifies local interaction practices and constellations of stakeholders. The findings show that value creation in these rapidly changing production scenes has moved away from the large-scale distribution of producer-induced media to audience-induced live performance and interactive soundtrack production. This change involves the rising importance of cultural embeddings such as taste building, reputation building among artists and producers, and local community building. Starting from an open theoretical problematization of value creation with regard to fluid scenes and shifting modes of production, the results of first empirical reconstructions are taken as inputs to an evolving discussion on the configurations of value creation in consumer-based strands of music production.
From 6 to 9 August 2012, intense rainfall hit the northern Philippines, causing massive floods in Metropolitan Manila and nearby regions. Local rain gauges recorded almost 1000 mm within this period. However, the recently installed Philippine network of weather radars suggests that Metropolitan Manila might have escaped a potentially bigger flood just by a whisker, since the centre of mass of accumulated rainfall was located over Manila Bay. A shift of this centre by no more than 20 km could have resulted in a flood disaster far worse than what occurred during Typhoon Ketsana in September 2009.
The challenge is providing teachers with the resources they need to strengthen their instruction and better prepare students for the jobs of the 21st century. Technology can help meet this challenge. Teachers’ TryScience is a noncommercial offering, developed by the New York Hall of Science, TeachEngineering, the National Board for Professional Teaching Standards, and IBM Citizenship, that provides teachers with such resources. The workshop provides deeper insight into this tool and a discussion of how to support the teaching of informatics in schools.
Scientific writing is an important skill for computer science and computer engineering professionals. In this paper we present a concept for a writing-across-the-curriculum program directed towards scientific writing. The program is built around a hierarchy of learning outcomes. The hierarchy is constructed by analyzing the learning outcomes in relation to the competencies that are needed to fulfill them.
Isolation of recombinant antibodies from antibody libraries is commonly performed by different molecular display formats, including phage display and ribosome display, or different cell-surface display formats. We describe a new method which allows the selection of Escherichia coli cells producing the required single chain antibody by cultivation in the presence of ampicillin conjugated to the antigen of interest. The method utilizes the neutralization of the conjugate by the produced single chain antibody, which is secreted to the periplasm. For this purpose, a new expression system based on the pET26b vector was designed and a library was constructed. The method was successfully established first for the selection of E. coli BL21 Star (DE3) cells expressing a model single chain antibody (anti-fluorescein) by a simple selection assay on LB-agar plates. Using this selection assay, we could identify a new single chain antibody binding biotin by growing E. coli BL21 Star (DE3) containing the library in the presence of a biotin-ampicillin conjugate. In contrast to methods such as molecular or cell-surface display, our selection system applies the soluble single chain antibody molecule and thereby avoids undesired effects, e.g. by the phage particle or the yeast fusion protein. By selecting directly in an expression strain, production and characterization of the selected single chain antibody is possible without any further cloning or transformation steps.
1. Porter strategic competitive analysis
2. A Porter analysis of the competitive advantage of banks in business lending and proprietary trading
3. Summary: competitive advantage of banks in business lending and proprietary trading
4. JPMorgan’s “London Whale” speculation
5. A common misapprehension about hedged positions in corporate debt
6. Conclusion
Assessing diversity is among the major tasks in ecology and conservation science. In ecological and conservation studies, epiphytic cryptogams are usually sampled only up to accessible heights in forests. Thus, their diversity, especially that of canopy specialists, is likely underestimated. If the proportion of those species differs among forest types, plot-based diversity assessments are biased and may result in misleading conservation recommendations. We sampled bryophytes and lichens in 30 forest plots of 20 m x 20 m in three German regions, considering all substrates and including epiphytic litter fall. First, the sampling of epiphytic species was restricted to the lower 2 m of trees and shrubs. Then, on one representative tree per plot, we additionally recorded epiphytic species in the crown, using tree climbing techniques. Per tree, on average 54% of lichen and 20% of bryophyte species were overlooked if the crown was not included. After sampling all substrates per plot, including the bark of all shrubs and trees, still 38% of the lichen and 4% of the bryophyte species were overlooked if the crown of the sampled tree was not included. The number of overlooked lichen species varied strongly among regions. Furthermore, the number of overlooked bryophyte and lichen species per plot was higher in European beech than in coniferous stands and increased with increasing diameter at breast height of the sampled tree. Thus, our results indicate a bias in plot-based comparative diversity assessments which might have led to misleading conservation recommendations.
Through the reactions of 1-aminomethyl-2-naphthol and substituted 1-aminobenzyl-2-naphthols with 3,4-dihydroisoquinoline or 6,7-dimethoxy-3,4-dihydroisoquinoline under microwave conditions, naphth[1,2-e][1,3]oxazino[2,3-a]isoquinoline derivatives were prepared in good yields. The latter reaction was extended by using 2-aminoarylmethyl-1-naphthols, leading to isomeric naphth[2,1-e][1,3]oxazino[2,3-a]isoquinolines. Besides a detailed NMR spectroscopic and theoretical study of both the stereochemistry and the dynamic behaviour of these new conformationally flexible heterocyclic ring systems, an unexpected dynamic process between two diastereomers was observed in solution; it was studied by variable-temperature 1H NMR spectroscopy and its mechanism confirmed by DFT computations.
Large Central European flood events of the past have demonstrated that flooding can affect several river basins at the same time, leading to catastrophic economic and humanitarian losses that can stretch emergency resources beyond planned levels of service. For Germany, the spatial coherence of flooding, the contributing processes, and the role of trans-basin floods for a national risk assessment are largely unknown, and analysis is limited by a lack of systematic data, information, and knowledge on past events. This study investigates the frequency and intensity of trans-basin flood events in Germany. It evaluates the data and information basis on which knowledge about trans-basin floods can be generated in order to improve any future flood risk assessment. In particular, the study assesses whether flood documentations and related reports can provide a valuable data source for understanding trans-basin floods. An adaptive algorithm was developed that systematically captures trans-basin floods using series of mean daily discharge at a large number of sites with equal time series length (1952-2002). It identifies the simultaneous occurrence of flood peaks based on the exceedance of an initial threshold of a 10-year flood at one location and consecutively pools all causally related, spatially and temporally lagged peak recordings at the other locations. A weighted cumulative index was developed that accounts for the spatial extent and the individual flood magnitudes within an event and allows quantifying the overall event severity. The parameters of the method were tested in a sensitivity analysis. An intensive study on sources and ways of information dissemination of flood-relevant publications in Germany was conducted. Based on the method of systematic reviews, a strategic search approach was developed to identify relevant documentations for each of the 40 strongest trans-basin flood events.
A novel framework for assessing the quality of event-specific flood reports from a user’s perspective was developed and validated by independent peers. The framework was designed to be generally applicable to any natural hazard type and assesses the quality of a document addressing accessibility as well as representational, contextual, and intrinsic dimensions of quality. The analysis of time series of mean daily discharge resulted in the identification of 80 trans-basin flood events within the period 1952-2002 in Germany. The set is dominated by events that were recorded in the hydrological winter (64%); 36% occurred during the summer months. The occurrence of floods is characterised by a distinct clustering in time. Dividing the study period into two sub-periods, we find an increase in the percentage of winter events from 58% in the first to 70.5% in the second sub-period. Accordingly, we find a significant increase in the number of extreme trans-basin floods in the second sub-period. A large body of 186 flood-relevant documentations was identified. For 87.5% of the 40 strongest trans-basin floods in Germany at least one report was found, and for the most severe floods a substantial amount of documentation could be obtained. 80% of the material can be considered grey literature (i.e. literature not controlled by commercial publishers). The results of the quality assessment show that the majority of event-specific flood reports are of good quality, i.e. they are well drafted, largely accurate and objective, and contain a substantial amount of information on the sources, pathways, and receptors/consequences of the floods. The inclusion of this information in the process of knowledge building for flood risk assessment is recommended. Both the results and the data produced in this study are openly accessible and can be used for further research. The results of this study contribute to an improved spatial risk assessment in Germany.
The identified set of trans-basin floods provides the basis for an assessment of the chance that flooding occurs simultaneously at a number of sites. The information obtained from flood event documentation can usefully supplement the analysis of the processes that govern flood risk.
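The adaptive pooling step described in this abstract lends itself to a compact sketch. The following is a minimal illustration under stated assumptions, not the study's algorithm: peak magnitudes are expressed as multiples of each site's 10-year flood, and a fixed lag window stands in for the causally derived spatio-temporal lags:

```python
def pool_trans_basin_events(peaks, max_lag_days=7):
    """Group flood peaks at different gauges into trans-basin events.

    peaks: list of (day, site, magnitude) tuples, where magnitude is the
    peak discharge expressed as a multiple of the site's 10-year flood.
    Peaks within `max_lag_days` of an initiating >= 10-year peak are
    pooled into one event (a simplified stand-in for the adaptive
    algorithm described in the study).
    """
    peaks = sorted(peaks)  # chronological order
    events, used = [], set()
    for i, (day, site, mag) in enumerate(peaks):
        if i in used or mag < 1.0:  # an event must start with a 10-yr flood
            continue
        event = [(day, site, mag)]
        used.add(i)
        for j in range(i + 1, len(peaks)):
            d2, s2, m2 = peaks[j]
            if j not in used and d2 - day <= max_lag_days:
                event.append((d2, s2, m2))
                used.add(j)
        # weighted cumulative severity: spatial extent and magnitudes
        severity = sum(m for _, _, m in event)
        events.append({"sites": {s for _, s, _ in event},
                       "severity": severity})
    return events
```

On toy input, a large peak at one site pools a smaller, lagged peak at a second site into one trans-basin event, while a much later peak elsewhere starts a new event; the severity index here simply sums the normalized magnitudes over all pooled sites.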
User-centered design processes are the first choice when new interactive systems or services are developed to address real customer needs and provide a good user experience. Common tools for collecting user research data, conducting brainstorming sessions, or sketching ideas are whiteboards and sticky notes. They are ubiquitously available, and no technical or domain knowledge is necessary to use them. However, traditional pen-and-paper tools fall short when it comes to saving the content and sharing it with others who cannot be in the same location. They also lack digital advantages such as searching or sorting content. Although research on digital whiteboard and sticky note applications has been conducted for over 20 years, these tools are not widely adopted in company contexts. While many research prototypes exist, they have not been used for an extended period of time in a real-world context. The goal of this thesis is to investigate what the enablers of and obstacles to the adoption of digital whiteboard systems are. As an instrument for different studies, we developed the Tele-Board software system for collaborative creative work. Based on interviews, observations, and findings from former research, we tried to transfer the analog way of working to the digital world. Being a software system, Tele-Board can be used with a variety of hardware and does not depend on special devices. This feature became one of the main factors for adoption on a larger scale. In this thesis, I will present three studies on the use of Tele-Board with different user groups and foci. I will use a combination of research methods (laboratory case studies and data from field research) with the overall goal of finding out when a digital whiteboard system is used and in which cases not. Not surprisingly, the system is used and accepted if a user sees a main benefit that neither analog tools nor other applications can offer.
However, I found that these perceived benefits are very different for each user and usage context. If a tool provides possibilities to use in different ways and with different equipment, the chances of its adoption by a larger group increase. Tele-Board has now been in use for over 1.5 years in a global IT company in at least five countries with a constantly growing user base. Its use, advantages, and disadvantages will be described based on 42 interviews and usage statistics from server logs. Through these insights and findings from laboratory case studies, I will present a detailed analysis of digital whiteboard use in different contexts with design implications for future systems.
We investigate the temporal and spectral correlations between flux and anisotropy fluctuations of TeV-band cosmic rays in light of recent data taken with IceCube. We find that for a conventional distribution of cosmic-ray sources, the dipole anisotropy is higher than observed, even if source discreteness is taken into account. Moreover, even for a shallow distribution of galactic cosmic-ray sources and a reacceleration model, fluctuations arising from source discreteness provide a probability only of the order of 10% that the cosmic-ray anisotropy limits of the recent IceCube analysis are met. This probability estimate is nearly independent of the exact choice of source rate, but generous for a large halo size. The location of the intensity maximum far from the Galactic Center is naturally reproduced.
Beta diversity is a conceptual link between diversity at local and regional scales. Various methodologies for quantifying this and related phenomena have been applied. Among them, measures of pairwise (dis)similarity of sites are particularly popular. Undersampling, i.e. not recording all taxa present at a site, is a common situation in ecological data. Bias in many metrics related to beta diversity must be expected, but only few studies have explicitly investigated the properties of various measures under undersampling conditions. On the basis of an empirical data set, representing near-complete local inventories of the Lepidoptera of an isolated Pacific island, as well as simulated communities with varying properties, we mimicked different levels of undersampling. We used 14 different approaches to quantify beta diversity, among them dataset-wide multiplicative partitioning (i.e. 'true' beta diversity) and pairwise site-by-site dissimilarities. We compared their values from incomplete samples to the true results from the full data. We used these comparisons to quantify undersampling bias, and we calculated correlations of the dissimilarity measures of undersampled data with the complete data of sites. Almost all tested metrics showed bias and low correlations under moderate to severe undersampling conditions (as well as deteriorating precision, i.e. large chance effects on results). Measures that used only species incidence were very sensitive to undersampling, while abundance-based metrics with high dependency on the distribution of the most common taxa were particularly robust. Simulated data showed sensitivity of results to the abundance distribution, confirming that data sets of high evenness and/or the application of metrics that are strongly affected by rare species are particularly sensitive to undersampling.
The class of beta measure to be used should depend on the research question being asked as different metrics can lead to quite different conclusions even without undersampling effects. For each class of metric, there is a trade-off between robustness to undersampling and sensitivity to rare species. In consequence, using incidence-based metrics carries a particular risk of false conclusions when undersampled data are involved. Developing bias corrections for such metrics would be desirable.
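The trade-off described above can be made concrete with a small sketch (illustrative only; function names and the subsampling scheme are assumptions, and a real analysis would use an ecology package): an incidence-based metric such as Jaccard reacts to every species lost to undersampling, whereas an abundance-based metric such as Bray-Curtis is dominated by common taxa and therefore changes less:

```python
import random

def jaccard(a, b):
    """Incidence-based dissimilarity: 1 - shared / total species."""
    sa, sb = {s for s in a if a[s] > 0}, {s for s in b if b[s] > 0}
    return 1 - len(sa & sb) / len(sa | sb)

def bray_curtis(a, b):
    """Abundance-based dissimilarity, dominated by common taxa."""
    species = set(a) | set(b)
    shared = sum(min(a.get(s, 0), b.get(s, 0)) for s in species)
    total = sum(a.values()) + sum(b.values())
    return 1 - 2 * shared / total

def undersample(site, n, rng):
    """Draw n individuals without replacement, mimicking incomplete sampling."""
    pool = [s for s, c in site.items() for _ in range(c)]
    counts = {}
    for s in rng.sample(pool, n):
        counts[s] = counts.get(s, 0) + 1
    return counts
```

Repeatedly undersampling two simulated sites with `undersample` and recomparing both metrics against their full-data values is the kind of experiment the study performs at scale across 14 metrics.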
Saturated hydraulic conductivity (K-s) is an important soil characteristic affecting soil water storage, runoff generation and erosion processes. In some areas where high-intensity rainfall coincides with low K-s values at shallow soil depths, frequent overland flow entails dense drainage networks. Consequently, linear structures such as flowlines alternate with inter-flowline areas. So far, investigations of the spatial variability of K-s mainly relied on isotropic covariance models which are unsuitable to reveal patterns resulting from linear structures. In the present study, we applied two sampling approaches so as to adequately characterize K-s spatial variability in a tropical forest catchment that features a high density of flowlines: A classical nested sampling survey and a purposive sampling strategy adapted to the presence of flowlines. The nested sampling approach revealed the dominance of small-scale variability, which is in line with previous findings. Our purposive sampling, however, detected a strong spatial gradient: surface K-s increased substantially as a function of distance to flowline; 10 m off flowlines, values were similar to the spatial mean of K-s. This deterministic trend can be included as a fixed effect in a linear mixed modeling framework to obtain realistic spatial fields of K-s. In a next step we used probability maps based on those fields and prevailing rainfall intensities to assess the hydrological relevance of the detected pattern. This approach suggests a particularly good agreement between the probability statements of K-s exceedance and observed overland flow occurrence during wet stages of the rainy season.
The Late Permian Zechstein Group in northeastern Germany is characterized by shelf and slope carbonates that rimmed a basin extending from eastern England through the Netherlands and Germany to Poland. Conventional reservoirs are found in grainstones rimming islands created by pre-existing paleohighs and platform-rimming shoals that compose steep margins in the north and ramp deposits in the southern part. The slope and basin deposits are characterized by debris flows and organic-rich mudstones. Lagoonal and basinal evaporites formed the seal for these carbonate and underlying sandstone reservoirs. The objective of this investigation is to evaluate potential unconventional reservoirs in organic-rich, fine-grained and/or tight mudrocks in slope and basin as well as platform carbonates occurring in this stratigraphic interval. Therefore, a comprehensive study was conducted that included sedimentology, sequence stratigraphy, petrography, and geochemistry. Sequence stratigraphic correlations from shelf to basin are crucial in establishing a framework that allows correlation of potential productive facies in fine-grained, organic-rich basinal siliceous and calcareous mudstones or interfingering tight carbonates and siltstones, ranging from the lagoon, to slope to basin, which might be candidates for forming an unconventional reservoir. Most organic-rich shales worldwide are associated with eustatic transgressions. The basal Zechstein cycles, Z1 and Z2, contain organic-rich siliceous and calcareous mudstones and carbonates that form major transgressive deposits in the basin. Maturities range from over-mature (gas) in the basin to oil-generation on the slope with variable TOC contents. This sequence stratigraphic and sedimentologic evaluation of the transgressive facies in the Z1 and Z2 assesses the potential for shale-gas/oil and hybrid unconventional plays. 
Potential unconventional reservoirs might be explored in laminated organic-rich mudstones within the oil window along the northern and southern slopes of the basin. Although the Zechstein Z1 and Z2 cycles might have limited shale-gas potential because of low thickness and deep burial depth to be economic at this point, unconventional reservoir opportunities that include hybrid and shale-oil potential are possible in the study area.
Detection of cancer precursors contributes to cancer prevention, for example in the case of colorectal cancer. To identify more patients at an early stage, ultrasensitive methods are required for noninvasive precursor detection in body fluids. Our aim was to develop a method for enrichment and detection of known as well as unknown driver mutations in the Adenomatous polyposis coli (APC) gene. By coupling wild-type blocking (WTB) PCR with high-resolution melting (HRM), referred to as WTB-HRM, a minimum detection limit of 0.01% mutant in excess wild-type was achieved, corresponding to as little as 1 pg of mutated DNA in the assay. The technique was applied to 80 tissue samples from patients with colorectal cancer (n = 17), adenomas (n = 50), serrated lesions (n = 8), and normal mucosa (n = 5). All kinds of known and unknown APC mutations (deletions, insertions, and base exchanges) situated inside the mutation cluster region were distinguishable from wild-type DNA. Furthermore, by WTB-HRM, nearly twice as many carcinomas and 1.5 times more precursor lesions were identified as mutated in APC, compared with direct sequencing. By analyzing 31 associated stool DNA specimens, all but one of the APC mutations could be recovered. Transferability of the WTB-HRM method to other genes was proven using the example of KRAS mutation analysis. In summary, WTB-HRM is a new approach for ultrasensitive detection of cancer-initiating mutations. In this sense, it appears especially applicable for noninvasive detection of colon cancer precursors in body fluids with excess wild-type DNA, such as stool. Cancer Prev Res; 6(9); 898-907. © 2013 AACR.
Irradiating a ferromagnet with a femtosecond laser pulse is known to induce an ultrafast demagnetization within a few hundred femtoseconds. Here we demonstrate that direct laser irradiation is in fact not essential for ultrafast demagnetization, and that electron cascades caused by hot electron currents accomplish it very efficiently. We optically excite a Au/Ni layered structure in which the 30 nm Au capping layer absorbs the incident laser pump pulse and subsequently use the X-ray magnetic circular dichroism technique to probe the femtosecond demagnetization of the adjacent 15 nm Ni layer. A demagnetization effect corresponding to the scenario in which the laser directly excites the Ni film is observed, but with a slight temporal delay. We explain this unexpected observation by means of the demagnetizing effect of a superdiffusive current of non-equilibrium, non-spin-polarized electrons generated in the Au layer.
Ultrafast soft X-ray emission spectroscopy of surface adsorbates using an X-ray free electron laser
(2013)
We report on an experimental system designed to probe chemical reactions on solid surfaces on a sub-picosecond timescale using soft X-ray emission spectroscopy at the Linac Coherent Light Source (LCLS) free electron laser (FEL) at the SLAC National Accelerator Laboratory. We analyzed the O 1s X-ray emission spectra recorded from atomic oxygen adsorbed on a Ru(0001) surface at a synchrotron beamline (SSRL, BL13-2) and an FEL beamline (LCLS, SXR). We have demonstrated conditions under which FEL-induced damage to the sample is negligible. In addition, we show that the setup is capable of tracking the temporal evolution of the electronic structure during a surface reaction of submonolayer quantities of CO molecules desorbing from the surface.
A diffractometer setup is presented, based on a laser-driven plasma X-ray source for reciprocal-space mapping with femtosecond temporal resolution. In order to map out the reciprocal space, an X-ray optic with a convergent beam is used with an X-ray area detector to detect symmetrically and asymmetrically diffracted X-ray photons simultaneously. The setup is particularly suited for measuring thin films or imperfect bulk samples with broad rocking curves. For quasi-perfect crystalline samples with insignificant in-plane Bragg peak broadening, the measured reciprocal-space maps can be corrected for the known resolution function of the diffractometer in order to achieve high-resolution rocking curves with improved data quality. In this case, the resolution of the diffractometer is not limited by the convergence of the incoming X-ray beam but is solely determined by its energy bandwidth.
Within the course of this thesis, I have investigated the complex interplay between electron and lattice dynamics in nanostructures of perovskite oxides. Femtosecond hard X-ray pulses were utilized to probe the evolution of atomic rearrangement directly, which is driven by ultrafast optical excitation of electrons. The physics of complex materials with a large number of degrees of freedom can be interpreted once the exact fingerprint of ultrafast lattice dynamics in time-resolved X-ray diffraction experiments for a simple model system is well known. The motion of atoms in a crystal can be probed directly and in real-time by femtosecond pulses of hard X-ray radiation in a pump-probe scheme. In order to provide such ultrashort X-ray pulses, I have built up a laser-driven plasma X-ray source. The setup was extended by a stable goniometer, a two-dimensional X-ray detector and a cryogen-free cryostat. The data acquisition routines of the diffractometer for these ultrafast X-ray diffraction experiments were further improved in terms of signal-to-noise ratio and angular resolution. The implementation of a high-speed reciprocal-space mapping technique allowed for a two-dimensional structural analysis with femtosecond temporal resolution. I have studied the ultrafast lattice dynamics, namely the excitation and propagation of coherent phonons, in photoexcited thin films and superlattice structures of the metallic perovskite SrRuO3. Due to the quasi-instantaneous coupling of the lattice to the optically excited electrons in this material a spatially and temporally well-defined thermal stress profile is generated in SrRuO3. This enables understanding the effect of the resulting coherent lattice dynamics in time-resolved X-ray diffraction data in great detail, e.g. the appearance of a transient Bragg peak splitting in both thin films and superlattice structures of SrRuO3. 
In addition, a comprehensive simulation toolbox to calculate the ultrafast lattice dynamics and the resulting X-ray diffraction response in photoexcited one-dimensional crystalline structures was developed in this thesis work. With the powerful experimental and theoretical framework at hand, I have studied the excitation and propagation of coherent phonons in more complex material systems. In particular, I have revealed strongly localized charge carriers after above-bandgap femtosecond photoexcitation of the prototypical multiferroic BiFeO3, which are the origin of a quasi-instantaneous and spatially inhomogeneous stress that drives coherent phonons in a thin film of the multiferroic. In a structurally imperfect thin film of the ferroelectric Pb(Zr0.2Ti0.8)O3, the ultrafast reciprocal-space mapping technique was applied to follow a purely strain-induced change of mosaicity on a picosecond time scale. These results point to a strong coupling of in- and out-of-plane atomic motion exclusively mediated by structural defects.
Types of Body Shape
(2013)
In Germany, active bat rabies surveillance was conducted between 1993 and 2012. A total of 4546 oropharyngeal swab samples from 18 bat species were screened for the presence of EBLV-1-, EBLV-2- and BBLV-specific RNA. Overall, 0.15% of oropharyngeal swab samples tested EBLV-1 positive, with the majority originating from Eptesicus serotinus. Interestingly, out of seven RT-PCR-positive oropharyngeal swabs subjected to virus isolation, viable virus was isolated from a single serotine bat (E. serotinus). Additionally, about 1226 blood samples were tested serologically, and varying virus neutralizing antibody titres were found in at least eight different bat species. The detection of viral RNA and seroconversion in repeatedly sampled serotine bats indicates long-term circulation of the virus in a particular bat colony. The limitations of random-based active bat rabies surveillance compared with passive bat rabies surveillance, and the possible application of targeted approaches in future research on bat lyssavirus dynamics and maintenance, are discussed.
Turning shy on winter's day: effects of season on personality and stress response in Microtus arvalis
(2013)
TURBO2 - a MATLAB simulation to study the effects of bioturbation on paleoceanographic time series
(2013)
Bioturbation (or benthic mixing) causes significant distortions in marine stable isotope signals and other palaeoceanographic records. Although the influence of bioturbation on these records is well known, it has rarely been dealt with systematically. The MATLAB program called TURBO2 can be used to simulate the effect of bioturbation on individual sediment particles. It can therefore be used to model the distortion of all physical, chemical, and biological signals in deep-sea sediments, such as Mg/Ca ratios and UK37-based sea-surface temperature (SST) variations. In particular, it can be used to study the distortions in paleoceanographic records that are based on individual sediment particles, such as SST records based on foraminifera assemblages. Furthermore, TURBO2 provides a tool to study the effect of benthic mixing on isotope signals such as C-14, delta O-18, and delta C-13, measured in a stratigraphic carrier such as foraminifera shells.
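TURBO2 itself is a MATLAB program; its core particle-based idea, that benthic mixing shuffles individual particles within a surface mixed layer as new sediment is deposited, can be sketched in a few lines of Python. This is an illustrative toy, not the published code: the mixed-layer size, the shuffle rule, and the step-shaped input signal are all assumptions.

```python
import random

def bioturbate(signal, mixed_layer):
    """Toy particle mixing: as each new 'layer' (one signal value) is
    deposited, the uppermost `mixed_layer` values of the sediment column
    are randomly shuffled, mimicking benthic mixing of single particles."""
    column = []
    for value in signal:
        column.append(value)
        top = column[-mixed_layer:]       # the actively mixed surface layer
        random.shuffle(top)
        column[-mixed_layer:] = top
    return column

random.seed(1)
# A sharp step (e.g. an abrupt isotope excursion) gets smeared by mixing,
# while the particle inventory itself is conserved.
step = [0.0] * 20 + [1.0] * 20
mixed = bioturbate(step, mixed_layer=8)
```

Because the mixing only reorders particles, any bulk property (mean, inventory) is conserved while sharp transitions are blurred, which is exactly why bioturbation distorts but does not destroy paleoceanographic signals.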
Ag-TiO2 and Au-TiO2 hybrid electrodes were designed by covalent attachment of TiO2 nanoparticles to Ag or Au electrodes via an organic linker. The optical and electronic properties of these systems were investigated using the cytochrome b5 (Cyt b5) domain of sulfite oxidase, exclusively attached to the TiO2 surface, as a Raman marker and model redox enzyme. Very strong SERR signals of Cyt b5 were obtained for Ag-supported systems due to plasmonic field enhancement of Ag. Time-resolved surface-enhanced resonance Raman spectroscopic measurements yielded remarkably fast electron transfer kinetics (k = 60 s⁻¹) of Cyt b5 to Ag. A much lower Raman intensity was observed for Au-supported systems, with undefined and slow redox behavior. We explain this phenomenon on the basis of the different potentials of zero charge of the two metals, which largely influence the electronic properties of the TiO2 island film.
TRAPID
(2013)
Transcriptome analysis through next-generation sequencing technologies allows the generation of detailed gene catalogs for non-model species, at the cost of new challenges with regards to computational requirements and bioinformatics expertise. Here, we present TRAPID, an online tool for the fast and efficient processing of assembled RNA-Seq transcriptome data, developed to mitigate these challenges. TRAPID offers high-throughput open reading frame detection, frameshift correction and includes a functional, comparative and phylogenetic toolbox, making use of 175 reference proteomes. Benchmarking and comparison against state-of-the-art transcript analysis tools reveals the efficiency and unique features of the TRAPID system. TRAPID is freely available at http://bioinformatics.psb.ugent.be/webtools/trapid/.
The nutrient exchange between plant and fungus is the key element of the arbuscular mycorrhizal (AM) symbiosis. The fungus improves the plant’s uptake of mineral nutrients, mainly phosphate, and water, while the plant provides the fungus with photosynthetically assimilated carbohydrates. Still, the knowledge about the mechanisms of the nutrient exchange between the symbiotic partners is very limited. Therefore, transport processes of both the plant and the fungal partner are investigated in this study. In order to enhance the understanding of the molecular basis underlying this tight interaction between the roots of Medicago truncatula and the AM fungus Rhizophagus irregularis, genes involved in transport processes of both symbiotic partners are analysed here. The AM-specific regulation and cell-specific expression of potential transporter genes of M. truncatula that were found to be specifically regulated in arbuscule-containing cells and in non-arbusculated cells of mycorrhizal roots were confirmed. A model for the carbon allocation in mycorrhizal roots is suggested, in which carbohydrates are mobilized in non-arbusculated cells and symplastically provided to the arbuscule-containing cells. New insights into the mechanisms of the carbohydrate allocation were gained by the analysis of the hexose/H+ symporter MtHxt1, which is regulated in distinct cells of mycorrhizal roots. Metabolite profiling of leaves and roots of a knock-out mutant, hxt1, showed that it indeed has an impact on the carbohydrate balance in the course of the symbiosis throughout the whole plant, and on the interaction with the fungal partner. The primary metabolite profile of M. truncatula was shown to be altered significantly in response to mycorrhizal colonization. Additionally, molecular mechanisms determining the progress of the interaction in the fungal partner of the AM symbiosis were investigated. The R. 
irregularis transcriptome in planta and in extraradical tissues gave new insight into genes that are differentially expressed in these two fungal tissues. Over 3200 fungal transcripts with a significantly altered expression level in laser capture microdissection-collected arbuscules compared to extraradical tissues were identified. Among them, six previously unknown, specifically regulated potential transporter genes were found. These are likely to play a role in the nutrient exchange between plant and fungus. While the substrates of three potential MFS transporters are as yet unknown, two potential sugar transporters might play a role in the carbohydrate flow towards the fungal partner. In summary, this study provides new insights into transport processes between plant and fungus in the course of the AM symbiosis, analysing M. truncatula at the transcript and metabolite level, and provides a dataset of the R. irregularis transcriptome in planta, offering a wealth of new information for future work.
Stochastic processes driven by stationary fractional Gaussian noise, that is, fractional Brownian motion and fractional Langevin-equation motion, are usually considered to be ergodic in the sense that, after an algebraic relaxation, time and ensemble averages of physical observables coincide. Recently it was demonstrated that fractional Brownian motion and fractional Langevin-equation motion under external confinement are transiently nonergodic (time and ensemble averages behave differently) from the moment when the particle starts to sense the confinement. Here we show that these processes also exhibit transient aging, that is, physical observables such as the time-averaged mean-squared displacement depend on the time lag between the initiation of the system at time t = 0 and the start of the measurement at the aging time t_a. In particular, it turns out that for fractional Langevin-equation motion the aging dependence on t_a is different between the cases of free and confined motion. We obtain explicit analytical expressions for the aged moments of the particle position as well as the time-averaged mean-squared displacement and present a numerical analysis of this transient aging phenomenon.
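For reference, the aged time-averaged mean-squared displacement discussed here is conventionally defined (for lag time Δ, measurement time T, and aging time t_a) as

```latex
\overline{\delta^{2}(\Delta; t_a)} \;=\; \frac{1}{T-\Delta}
\int_{t_a}^{t_a+T-\Delta} \left[ x(t+\Delta) - x(t) \right]^{2} \,\mathrm{d}t ,
```

which reduces to the standard non-aged definition for t_a = 0; aging means this observable explicitly retains a dependence on t_a.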
The aim of the present study was to examine how different types of tracking— between-school streaming, within-school streaming, and course-by-course tracking—shape students’ mathematics self-concept. This was done in an internationally comparative framework using data from the Programme for International Student Assessment (PISA). After controlling for individual and track mean achievement, results indicated that generally for students in course-by-course tracking, high-track students had higher mathematics self-concepts and low-track students had lower mathematics self-concepts. For students in between-school and within-school streaming, the reverse pattern was found. These findings suggest a solution to the ongoing debate about the effects of tracking on students’ academic self-concept and suggest that the reference groups to which students compare themselves differ according to the type of tracking.
Cratons with their thick lithospheric roots can influence the thermal structure, and thus the convective flow, in the surrounding mantle. As mantle temperatures are hard to measure directly, depth variations in the mantle transition zone (MTZ) discontinuities are often employed as a proxy. Here, we use a large new data set of P-receiver functions to map the 410 km and 660 km discontinuities beneath the western edge of the East European Craton and adjacent Phanerozoic Europe across the most fundamental lithospheric boundary in Europe, the Trans-European Suture Zone (TESZ). We observe significantly shorter travel times for conversions from both MTZ discontinuities within the craton, caused by the high velocities of the cratonic root. By contrast, the differential travel time across the MTZ is normal to only slightly raised. This implies that any insulating effect of the cratonic keel does not reach the MTZ. In contrast to earlier observations in Siberia, we do not find any trace of a discontinuity at 520 km depth, which indicates a rather dry MTZ beneath the western edge of the craton. Within most of covered Phanerozoic Europe, the MTZ differential travel time is remarkably uniform and in agreement with standard Earth models. No widespread thermal effects of the various episodes of Caledonian and Variscan subduction that took place during the amalgamation of the continent remain. Only more recent tectonic events, related to Alpine subduction and Quaternary volcanism in the Eifel area, can be traced. While the East European Craton shows no distinct imprint on the MTZ, we discover the signature of the TESZ in the MTZ in the form of a linear region of about 350 km width with a 1.5 s increase in differential travel time, which could either be caused by high water content or decreased temperature. Taking into account results of recent S-wave tomographies, raised water content in the MTZ cannot be the main cause for this observation. 
Accordingly, we explain the increase, equivalent to a 15 km thicker MTZ, by a temperature decrease of about 80 K. We discuss two alternative models for this temperature reduction, either a remnant of subduction or an indication of downwelling due to small-scale, edge-driven convection caused by the contrast in lithospheric thickness across the TESZ. Any subducted lithosphere found in the MTZ at this location is unlikely to be related to Variscan subduction along the TESZ, though, as Eurasia has moved significantly northward since the Variscan orogeny.
The generation of antibodies with designated specificity requires cost-intensive and time-consuming screening procedures. Here we present a new method by which hybridoma cells can be selected based on the specificity of the produced antibody through the use of antigen-toxin conjugates, thus eliminating the need for a screening procedure. Initial experiments were done with methotrexate as a low-molecular-weight toxin and fluorescein as a model antigen. Methotrexate and a methotrexate-fluorescein conjugate were characterized regarding their toxicity. Afterwards, the effect of the fluorescein-specific antibody B13-DE1 on the toxicity of the methotrexate-fluorescein conjugate was determined. Finally, first results showed that hybridoma cells producing fluorescein-specific antibodies are able to grow in the presence of fluorescein-toxin conjugates.
Algal tests have developed into routine tools for testing the toxicity of pollutants in aquatic environments. In addition to algal growth rates, an increasing number of fluorescence-based methods are used for rapid and sensitive toxicity measurements. The present study demonstrates the suitability of delayed fluorescence (DF) as a promising parameter for biotests. DF is based on the recombination fluorescence at the reaction centre of photosystem II, which is emitted only by photosynthetically active cells. We analyzed the effects of three chemicals (3-(3,4-dichlorophenyl)-1,1-dimethylurea (DCMU), 3,5-dichlorophenol (3,5-DCP) and copper) on the shape of the DF decay kinetics for potential use in phytoplankton toxicity tests. The short incubation tests were done with four phytoplankton species, with special emphasis on the cyanobacterium Microcystis aeruginosa. All species exhibited a high sensitivity to DCMU, but cyanobacteria were more affected by copper and less by 3,5-DCP than the tested green algae. Analyses of changes in the DF decay curve in response to the added chemicals indicated the feasibility of the DF decay approach as a rapid and sensitive testing tool.
1. The health of managed and wild honeybee colonies appears to have declined substantially in Europe and the United States over the last decade. Sustainability of honeybee colonies is important not only for honey production, but also for pollination of crops and wild plants alongside other insect pollinators. A combination of causal factors, including parasites, pathogens, land use changes and pesticide usage, is cited as responsible for the increased colony mortality.
2. However, despite detailed knowledge of the behaviour of honeybees and their colonies, there are no suitable tools to explore the resilience mechanisms of this complex system under stress. Empirically testing all combinations of stressors in a systematic fashion is not feasible. We therefore suggest a cross-level systems approach, based on mechanistic modelling, to investigate the impacts of (and interactions between) colony and land management.
3. We review existing honeybee models that are relevant to examining the effects of different stressors on colony growth and survival. Most of these models describe honeybee colony dynamics, foraging behaviour or honeybee-varroa mite-virus interactions.
4. We found that many, but not all, processes within honeybee colonies, epidemiology and foraging are well understood and described in the models, but there is no model that couples in-hive dynamics and pathology with foraging dynamics in realistic landscapes.
5. Synthesis and applications. We describe how a new integrated model could be built to simulate multifactorial impacts on the honeybee colony system, using building blocks from the reviewed models. The development of such a tool would not only highlight empirical research priorities but also provide an important forecasting tool for policy makers and beekeepers, and we list examples of relevant applications to bee disease and landscape management decisions.
Poststroke spasticity (PSS)-related disability is emerging as a significant health issue for stroke survivors. There is a need for predictors and early identification of PSS in order to minimize complications and maladaptation from spasticity. Reviewing the literature on stroke and upper motor neuron syndrome, spasticity, contracture, and increased muscle tone measured with the Modified Ashworth Scale and the Tone Assessment Scale provided data on the dynamic time course of PSS. Prevalence estimates of PSS were highly variable, ranging from 4% to 42.6%, with the prevalence of disabling spasticity ranging from 2% to 13%. Data on phases of the PSS continuum revealed evidence of PSS in 4% to 27% of those in the early time course (1-4 weeks poststroke), 19% to 26.7% of those in the postacute phase (1-3 months poststroke), and 17% to 42.6% of those in the chronic phase (>3 months poststroke). Data also identified key risk factors associated with the development of spasticity, including lower Barthel Index scores, severe degree of paresis, stroke-related pain, and sensory deficits. Although such indices could be regarded as predictors of PSS and thus enable early identification and treatment, the different measures of PSS used in those studies limit the strength of the findings. To optimize evaluation in the different phases of care, the best possible assessment of PSS would make use of a combination of indicators for clinical impairment, motor performance, activity level, quality of life, and patient-reported outcome measures. Applying these recommended measures, as well as increasing our knowledge of the physiologic predictors of PSS, will enable us to perform clinical and epidemiologic studies that will facilitate identification and early, multimodal treatment.
The task of expert finding is to rank the experts in the search space given a field of expertise as an input query. In this paper, we propose a topic modeling approach for this task. The proposed model uses latent Dirichlet allocation (LDA) to induce probabilistic topics. In the first step of our algorithm, the main topics of a document collection are extracted using LDA. The extracted topics represent the connection between expert candidates and user queries. In the second step, the topics are used as a bridge to find the probability of selecting each candidate for a given query. The candidates are then ranked based on these probabilities. The experimental results on the Text REtrieval Conference (TREC) Enterprise track for 2005 and 2006 show that the proposed topic-based approach outperforms the state-of-the-art profile- and document-based models, which use information retrieval methods to rank experts. Moreover, we demonstrate the superiority of the proposed topic-based approach over improved document-based expert finding systems, which consider additional information such as local context, candidate priors, and query expansion.
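The second step, ranking candidates by using topics as a bridge, amounts to marginalizing over the induced topics. A toy sketch of that marginalization is given below; the candidate names, topic labels, and probability tables are invented for illustration, and the LDA estimation of these distributions is not shown.

```python
def rank_candidates(p_topic_given_query, p_candidate_given_topic):
    """Score each candidate by marginalizing over topics:
    p(candidate | query) = sum_t p(candidate | t) * p(t | query)."""
    scores = {}
    for cand, per_topic in p_candidate_given_topic.items():
        scores[cand] = sum(p_topic_given_query[t] * p
                           for t, p in per_topic.items())
    # Rank candidates by descending probability.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical p(t | query) over three LDA-induced topics for one query.
p_t_q = {"t0": 0.7, "t1": 0.2, "t2": 0.1}
# Hypothetical p(candidate | t), e.g. estimated from each candidate's documents.
p_c_t = {
    "alice": {"t0": 0.6, "t1": 0.1, "t2": 0.1},
    "bob":   {"t0": 0.1, "t1": 0.7, "t2": 0.2},
}
ranking = rank_candidates(p_t_q, p_c_t)   # alice ranks first (0.45 vs 0.23)
```

The topics thus act as the bridge described above: a candidate scores highly when the topics prominent in their documents overlap with the topics inferred for the query.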
Silver nanoparticles (SNPs) are among the most commercialized nanoparticles because of their antibacterial effects. Besides being employed, e.g. as a coating material for sterile surfaces in household articles and appliances, the particles are also used in a broad range of medical applications. Their antibacterial properties make SNPs especially useful for wound disinfection or as a coating material for prostheses and surgical instruments. Because of their optical characteristics, the particles are of increasing interest in biodetection as well. Despite the widespread use of SNPs, there is little knowledge of their toxicity. Time-of-flight secondary ion mass spectrometry (ToF-SIMS) and laser post-ionization secondary neutral mass spectrometry (Laser-SNMS) were used to investigate the effects of SNPs on human macrophages derived from THP-1 cells in vitro. For this purpose, macrophages were exposed to SNPs. The SNP concentration ranges were chosen with regard to functional impairments of the macrophages. To optimize the analysis of the macrophages, a special silicon wafer sandwich preparation technique was employed; ToF-SIMS was used to characterize fragments originating from macrophage cell membranes. With this optimized sample preparation method, the SNP-exposed macrophages were analyzed with ToF-SIMS and with Laser-SNMS. With Laser-SNMS, the three-dimensional distribution of SNPs in cells could be readily detected with very high efficiency, sensitivity, and submicron lateral resolution. We found an accumulation of SNPs directly beneath the cell membrane in a nanoparticular state as well as agglomerations of SNPs inside the cells.
Silver nanoparticles (SNP) are among the most commercialized nanoparticles. Here, we show that peptide-coated SNP cause functional impairment of human macrophages. A dose-dependent inhibition of phagocytosis is observed after nanoparticle treatment, and pretreatment of cells with N-acetyl cysteine (NAC) can counteract the phagocytosis disturbances caused by SNP.
Using the surface-sensitive mode of time-of-flight secondary ion mass spectrometry, in combination with multivariate statistical methods, we studied the composition of cell membranes in human macrophages upon exposure to SNP with and without NAC preconditioning. This method revealed characteristic changes in the lipid pattern of the cellular membrane outer leaflet in those cells challenged by SNP. Statistical analyses resulted in 19 characteristic ions, which can be used to distinguish between NAC pretreated and untreated macrophages. The present study discusses the assignments of surface cell membrane phospholipids for the identified ions and the resulting changes in the phospholipid pattern of treated cells. We conclude that the adverse effects in human macrophages caused by SNP can be partially reversed through NAC administration. Some alterations, however, remained.
Calcium (Ca2+) is a ubiquitous intracellular second messenger involved in a plethora of cellular processes. Thus, quantification of the intracellular Ca2+ concentration ([Ca2+]_i) and of its dynamics is required for a comprehensive understanding of physiological processes and potential dysfunctions. A powerful approach for studying [Ca2+]_i is the use of fluorescent Ca2+ indicators. In addition to the fluorescence intensity as a common recording parameter, the fluorescence lifetime imaging microscopy (FLIM) technique provides access to the fluorescence decay time of the indicator dye. The nanosecond lifetime is mostly independent of variations in dye concentration, allowing more reliable quantification of ion concentrations in biological preparations. In this study, the feasibility of the fluorescent Ca2+ indicator Oregon Green BAPTA-1 (OGB-1) for two-photon fluorescence lifetime imaging microscopy (2P-FLIM) was evaluated. In aqueous solution, OGB-1 displayed a Ca2+-dependent biexponential fluorescence decay behaviour, indicating the presence of a Ca2+-free and a Ca2+-bound dye form. After sufficient dye loading into living cells, an in situ calibration procedure also resolved the Ca2+-free and Ca2+-bound dye forms from a global biexponential fluorescence decay analysis, although the dye's Ca2+ sensitivity is reduced in situ. Nevertheless, quantitative recordings of [Ca2+]_i and its stimulus-induced changes in salivary gland cells could be performed successfully. These results suggest that OGB-1 is suitable for 2P-FLIM measurements, which thereby gain access to cellular physiology.
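The biexponential fluorescence decay behaviour mentioned above follows the standard two-component model

```latex
F(t) \;=\; A_{1}\, e^{-t/\tau_{1}} \;+\; A_{2}\, e^{-t/\tau_{2}},
```

where the lifetimes τ1 and τ2 are attributed to the Ca2+-free and Ca2+-bound dye forms. Because the relative amplitudes A1 and A2 track the populations of the two forms, a calibrated global fit can report [Ca2+]_i largely independently of the total dye concentration, which is the key advantage of the lifetime approach over pure intensity recordings.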
Dynamics in materials typically involve different degrees of freedom, like charge, lattice, orbital and spin in a complex interplay. Time-resolved resonant inelastic X-ray scattering (RIXS) as a highly selective tool can provide unique insight and follow the details of dynamical processes while resolving symmetries, chemical and charge states, momenta, spin configurations, etc. In this paper, we review examples where the intrinsic scattering duration time is used to study femtosecond phenomena. Free-electron lasers access timescales starting in the sub-ps range through pump-probe methods and synchrotrons study the time scales longer than tens of ps. In these examples, time-resolved resonant inelastic X-ray scattering is applied to solids as well as molecular systems.
The geothermal potential of the Tarutung area is controlled by both the Sumatra Fault system and young arc volcanism. In this study we use the spatial distribution of seismic attenuation, calculated from local earthquake recordings, to image the 3-D attenuation structure of the area and to relate it to the temperature anomalies and the fluid distribution of the subsurface. A temporary seismic network of 42 stations was deployed around Tarutung and Sarulla (south of Tarutung) for a period of 10 months starting in May 2011. Within this period, the network recorded 2586 local events. A high-quality subset of 229 events recorded by at least 10 stations was used for the attenuation inversion (tomography). Path-average attenuation (t_p*) was calculated using a spectral inversion method. The spread function, the contour lines of the model resolution matrix and the recovery test results show that our 3-D attenuation model (Q_p) is well resolved around the Tarutung Basin and along the Sarulla graben. High attenuation (low Q_p) related to the geothermal system is found northeast of the Tarutung Basin, suggesting fluid pathways from below the Sumatra Fault. The upper part of the studied geothermal system in the Tarutung district seems to be controlled mainly by the fault structure rather than by magmatic activity. Southwest of the Tarutung Basin, the high-attenuation zone is associated with the Martimbang volcano. In the Sarulla region, a low-Q_p anomaly is found along the graben in the vicinity of the Hopong caldera.
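The path-average attenuation parameter t* above is obtained by fitting observed amplitude spectra with a source model multiplied by an attenuation term. A minimal sketch on noise-free synthetic data, assuming a Brune omega-squared source model (all numerical values are hypothetical, not taken from the study):

```python
import numpy as np
from scipy.optimize import curve_fit

# Omega-squared source spectrum with whole-path attenuation:
# A(f) = Omega0 / (1 + (f/fc)^2) * exp(-pi * f * t_star)
def spectrum(f, omega0, fc, t_star):
    return omega0 / (1.0 + (f / fc) ** 2) * np.exp(-np.pi * f * t_star)

f = np.linspace(1.0, 30.0, 120)          # Hz
obs = spectrum(f, 1.0, 6.0, 0.04)        # synthetic "observed" spectrum

params, _ = curve_fit(spectrum, f, obs, p0=(0.5, 4.0, 0.02))
omega0, fc, t_star = params
print(f"t* = {t_star * 1e3:.1f} ms")     # recovered path-average attenuation

# In the single-layer case t* = travel_time / Q_p, so Q_p follows directly
travel_time = 8.0                        # s, hypothetical P travel time
Qp = travel_time / t_star
```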
A problem encountered by many distributed hydrological modelling studies is high simulation error at interior gauges when the model is calibrated only globally at the outlet. We simulated river runoff in the Elbe River basin in central Europe (148,268 km^2) with the semi-distributed eco-hydrological model SWIM (Soil and Water Integrated Model). While global parameter optimisation led to Nash-Sutcliffe efficiencies of 0.9 at the main outlet gauge, comparisons with measured runoff series at interior points revealed large deviations. We therefore compared three different strategies for deriving sub-basin evapotranspiration: (1) modelled by SWIM without any spatial calibration, (2) derived from remotely sensed surface temperatures, and (3) calculated from long-term precipitation and discharge data. The results show some consistency between the modelled and the remote-sensing-based evapotranspiration rates, but no correlation between the remote-sensing and water-balance-based estimates. Subsequent analyses of single sub-basins identify, among other factors, input weather data and systematic error amplification in inter-gauge discharge calculations as sources of uncertainty. The results encourage careful utilisation of different data sources for enhancing distributed hydrological modelling.
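The Nash-Sutcliffe efficiency used as the calibration criterion above compares the model's squared errors with the variance of the observations, and the water-balance strategy (3) estimates long-term evapotranspiration as precipitation minus discharge. A minimal sketch with made-up runoff numbers:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model
    is no better than predicting the long-term mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([10.0, 12.0, 15.0, 11.0, 9.0, 14.0])   # observed runoff
sim = np.array([11.0, 12.0, 14.0, 10.0, 9.0, 15.0])   # simulated runoff
print(round(nse(obs, sim), 3))

# Water-balance strategy (3): long-term ET = P - Q
# (hypothetical sub-basin values in mm/yr)
et = 620.0 - 180.0
```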
Thermodynamic stability of the α-helical membrane-interacting protein Mistic in detergent micelles
(2013)
Basement-cored ranges formed by reverse faulting within intracontinental mountain belts are often composed of poly-deformed lithologies. Geological data capable of constraining the timing, magnitude, and distribution of the most recent deformational phase are usually missing in such ranges. In this paper, we present new low-temperature thermochronological and geological data from a transect through the basement-cored Terskey Range, located in the Kyrgyz Tien Shan. Using these data, we are able to investigate the range's late Cenozoic deformation for the first time. Displacements on reactivated faults are constrained and deformation of thermochronologically derived structural markers is assessed. These structural markers postdate the earlier deformational phases, providing the only record of Cenozoic deformation and of the reactivation of structures within the Terskey Range. Overall, these structural markers have a southern inclination, interpreted to reflect the decreasing inclination of the reverse fault bounding the Terskey Range. Our thermochronological data are also used to investigate spatial and temporal variations in the exhumation of the Terskey Range, identifying a three-stage Cenozoic exhumation history: (1) virtually no exhumation in the Paleogene, (2) increase to slightly higher exhumation rates at ~26-20 Ma, and (3) significant increase in exhumation starting at ~10 Ma.
In the Western Alps, the Piemont-Ligurian oceanic domain records blueschist to eclogite metamorphic conditions during the Alpine orogeny. This domain is classically divided into two "zones" (Combin and Zermatt-Saas) with contrasting metamorphic evolution, separated tectonically by the Combin fault. This study presents new metamorphic and temperature (RSCM thermometry) data obtained in Piemont-Ligurian metasediments and proposes a reevaluation of the P-T evolution of this domain. In the upper unit (or "Combin zone"), temperatures are in the range of 420-530 °C, increasing from upper to lower structural levels. Petrological evidence shows that these temperatures are related to the retrograde path and to deformation at greenschist metamorphic conditions. This highlights heating during exhumation of HP metamorphic rocks. In the lower unit (or "Zermatt-Saas zone"), temperatures are very homogeneous, in the range of 500-540 °C. This shows an almost continuous downward temperature increase in the Piemont-Ligurian domain. The observed thermal structure is interpreted as the result of the juxtaposition of the upper and lower units along shear zones at a temperature of ~500 °C during the Middle Eocene. This juxtaposition probably occurred at shallow crustal levels (~15-20 km) within a subduction channel. We finally propose that the Piemont-Ligurian Domain should not be viewed as two distinct "zones", but rather as a stack of several tectonic slices.
This study presents the theory, applicability, and merits of the new THERIAK_D add-on for the open source Theriak/Domino software package. The add-on works as an interface between Theriak and user-generated scripts, providing the opportunity to process phase equilibrium computation parameters in a programming environment (e.g., C or MATLAB®). THERIAK_D supports a wide range of features, such as calculating the solid rock density or testing the stability of mineral phases along any pressure-temperature (P-T) path and P-T grid. To demonstrate applicability, an example is given in which the solid rock density of a 2-D temperature-pressure field is calculated, portraying a simplified subduction zone. The add-on thus effectively combines thermodynamics and geodynamic modeling. The carefully documented examples can easily be adapted for a broad range of applications. THERIAK_D is free, and the program, user manual, and source codes may be downloaded from http://www.min.uni-kiel.de/~ed/theriakd/.
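The density-on-a-P-T-grid example from the abstract can be mimicked without Theriak itself. The sketch below substitutes a toy linear equation of state for the true phase-equilibrium computation; all coefficients are invented for illustration, and a real THERIAK_D workflow would call Theriak at each grid point instead.

```python
import numpy as np

# Toy equation of state standing in for a phase-equilibrium call:
# rho(P, T) with hypothetical compressibility and thermal-expansion values.
def rock_density(P_GPa, T_C, rho0=3300.0, beta=0.02, alpha=3e-5):
    return rho0 * (1.0 + beta * P_GPa) * (1.0 - alpha * (T_C - 25.0))

# 2-D P-T grid for a simplified subduction-zone section
P = np.linspace(0.5, 3.0, 50)       # GPa
T = np.linspace(300.0, 900.0, 60)   # deg C
PP, TT = np.meshgrid(P, T)
rho = rock_density(PP, TT)          # density field for geodynamic modelling
print(rho.shape)
```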
There Is No Return To Egypt
(2013)
Who are those Polish Jews who, in the wake of the Antizionist Campaign of 1968, left their home country and migrated to Israel? How do they, 40 years after these traumatic events, look back at their own history? What development have they undergone in the Jewish State, a society torn by wars and inner political tensions? How do they live in Israel at the beginning of the 21st century? The documentary There Is No Return To Egypt portrays seven members of the Polish-Jewish migration cohort of the late 1960s and early 1970s, together with their present-day environment. These people, filmed between their mid-fifties and late seventies, allow an intimate view into their Israeli-Polish daily life and into their world of memories. Interestingly, having survived the atrocities of the Shoah and having been forced out of Poland some twenty years later, the older interviewees draw their very own conclusions for their further lives in Israel. In contrast, the younger interviewees deal very differently with the loss of their home and the break in their careers caused by the Antizionist Campaign. The personalities presented in this documentary have various professions: there is a successful musician, a former employee of the Israeli broadcasting service, and there are skilled workers. Their religious identities vary widely: from Jewish orthodox and national-religious to atheist to Judeo-Christian. The protagonists of There Is No Return To Egypt also represent the political spectrum of Israel, from members of the chauvinist-militarist camp to members of the peace movement. At the same time, the shooting locations in the documentary are important stages of life for the seven 1968ers: the home decorated for Shabbat or for Israel's national holiday Yom ha-atzmaut, the workplace, an army museum, a Jewish settlement in the Palestinian West Bank, a Shoah memorial event on the university campus, a pop concert and a peace demonstration.
Galaxy clusters are the largest known gravitationally bound objects; their study is important both for an intrinsic understanding of these systems and for an investigation of the large-scale structure of the universe. The multi-component nature of galaxy clusters offers multiple observable signals across the electromagnetic spectrum. At X-ray wavelengths, galaxy clusters are identified simply as X-ray luminous, spatially extended, extragalactic sources. X-ray observations offer the most powerful technique for constructing cluster catalogues: X-ray cluster surveys provide excellent purity and completeness, and their X-ray observables are tightly correlated with mass, which is the most fundamental parameter of clusters. In my thesis I have conducted the 2XMMi/SDSS galaxy cluster survey, a serendipitous search for galaxy clusters based on the X-ray extended sources in the XMM-Newton Serendipitous Source Catalogue (2XMMi-DR3). The main aims of the survey are to identify new X-ray galaxy clusters, investigate their X-ray scaling relations, identify distant cluster candidates, and study the correlation of X-ray and optical properties. The survey is constrained to those extended sources that lie in the footprint of the Sloan Digital Sky Survey (SDSS), in order to identify the optical counterparts and to measure the redshifts that are mandatory for deriving the physical properties. The overlap area between the XMM-Newton fields and the SDSS-DR7 imaging, the latest SDSS data release at the start of the survey, is 210 deg^2. The survey comprises 1180 X-ray cluster candidates with at least 80 background-subtracted photon counts that passed the quality control process.
To measure the optical redshifts of the X-ray cluster candidates, I used three procedures: (i) cross-matching the candidates with the most recent and largest optically selected cluster catalogues in the literature, which yielded photometric redshifts for about a quarter of the X-ray cluster candidates; (ii) a finding algorithm I developed to search for overdensities of galaxies in photometric redshift space at the positions of the X-ray cluster candidates and to measure their redshifts from the SDSS-DR8 data, which provided photometric redshifts for 530 groups/clusters; and (iii) an algorithm I developed to identify cluster candidates associated with spectroscopically targeted Luminous Red Galaxies (LRGs) in the SDSS-DR9 and to measure the cluster spectroscopic redshift, which provided 324 groups and clusters with spectroscopic confirmation based on the spectroscopic redshift of at least one LRG. In total, the optically confirmed cluster sample comprises 574 groups and clusters with redshifts 0.03 ≤ z ≤ 0.77, making it the largest X-ray selected cluster catalogue to date based on observations from the current X-ray observatories (XMM-Newton, Chandra, Suzaku, and Swift/XRT). About 75 per cent of the cluster sample are newly X-ray discovered groups/clusters and 40 per cent are systems new to the literature. To determine the X-ray properties of the optically confirmed cluster sample, I reduced and analysed their X-ray data in an automated way following the standard pipelines for processing XMM-Newton data. In this analysis, I extracted the cluster spectra from EPIC (PN, MOS1, MOS2) images within an optimal aperture chosen to maximise the signal-to-noise ratio. The spectral fitting procedure provided X-ray temperatures kT (0.5 - 7.5 keV) for 345 systems with good-quality X-ray data.
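Procedure (ii), finding galaxy overdensities in photometric redshift space, can be sketched on synthetic data; the bin width, the uniform background model, and the significance estimate below are illustrative assumptions, not the actual parameters of the thesis algorithm.

```python
import numpy as np

# Hypothetical photometric redshifts of galaxies in the search aperture:
# a uniform field population plus an overdensity near z ~ 0.25
rng = np.random.default_rng(1)
background = rng.uniform(0.0, 0.8, 200)
cluster = rng.normal(0.25, 0.02, 60)
photz = np.concatenate([background, cluster])

# Bin the redshifts and pick the bin with the strongest Poisson excess.
# For simplicity the background level is taken as known here; in practice
# it would be estimated from the field itself.
edges = np.arange(0.0, 0.8, 0.04)
counts, _ = np.histogram(photz, bins=edges)
expected = len(background) / (len(edges) - 1)
significance = (counts - expected) / np.sqrt(expected)
peak = np.argmax(significance)
z_cluster = 0.5 * (edges[peak] + edges[peak + 1])
print(f"cluster photo-z estimate: {z_cluster:.2f}")
```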
For the whole optically confirmed cluster sample, I measured the physical properties L500 (0.5 x 10^42 - 1.2 x 10^45 erg s^-1) and M500 (1.1 x 10^13 - 4.9 x 10^14 M⊙) from an iterative procedure using published scaling relations. The X-ray detected groups and clusters are in the low and intermediate luminosity regimes, apart from a few luminous systems, thanks to the XMM-Newton sensitivity and the available XMM-Newton deep fields. The optically confirmed cluster sample with measurements of redshift and X-ray properties can be used for various astrophysical applications. As a first application, I investigated the LX - T relation, for the first time based on a large cluster sample of 345 systems with X-ray spectroscopic parameters drawn from a single survey. The current sample includes groups and clusters with wide ranges of redshifts, temperatures, and luminosities. The slope of the relation is consistent with the published slopes for nearby clusters with higher temperatures and luminosities. The derived relation is still much steeper than that predicted by self-similar evolution. I also investigated the evolution of the slope and the scatter of the LX - T relation with cluster redshift. After excluding the low-luminosity groups, I found no significant change of the slope or the intrinsic scatter of the relation with redshift when dividing the sample into three redshift bins. When including the low-luminosity groups in the low-redshift subsample, its LX - T relation becomes flatter than the relation of the intermediate- and high-redshift subsamples. As a second application of the optically confirmed cluster sample from our ongoing survey, I investigated the correlation between the cluster X-ray and optical parameters, which have been determined in a homogeneous way. Firstly, I investigated the correlations between the BCG properties (absolute magnitude and optical luminosity) and the cluster global properties (redshift and mass).
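A log-log power-law fit of the kind behind the LX - T slope analysis can be sketched on synthetic data; the normalisation, slope, scatter and sample size below are invented, not the survey's measured values.

```python
import numpy as np

# Synthetic cluster sample following L_X = A * (T / 5 keV)^alpha with
# lognormal scatter, fitted as a straight line in log-log space.
rng = np.random.default_rng(2)
alpha_true, logA_true = 3.0, 44.0
T = rng.uniform(1.0, 8.0, 300)                    # keV
logL = (logA_true + alpha_true * np.log10(T / 5.0)
        + rng.normal(0.0, 0.15, T.size))          # log10(L_X / erg s^-1)

x = np.log10(T / 5.0)
slope, intercept = np.polyfit(x, logL, 1)
scatter = np.std(logL - (intercept + slope * x))  # residual scatter in dex
print(f"slope = {slope:.2f}, scatter = {scatter:.2f} dex")
```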
Secondly, I computed the richness and the optical luminosity within R500 for a nearby subsample (z ≤ 0.42, with complete membership detection from the SDSS data) with measured X-ray temperatures from our survey. The relation between the estimated optical luminosity and richness is also presented. Finally, the correlations between the cluster optical properties (richness and luminosity) and the cluster global properties (X-ray luminosity, temperature, mass) are investigated.
We obtained four pointings of over 100 ks each of the well-studied Wolf-Rayet star WR 6 with the XMM-Newton satellite. With a first paper emphasizing the results of spectral analysis, this follow-up highlights the X-ray variability clearly detected in all four pointings. However, phased light curves fail to confirm obvious cyclic behavior on the well-established 3.766 day period widely found at longer wavelengths. The data are of such quality that we were able to conduct a search for event clustering in the arrival times of X-ray photons. However, we fail to detect any such clustering. One possibility is that X-rays are generated in a stationary shock structure. In this context we favor a corotating interaction region (CIR) and present a phenomenological model for X-rays from a CIR structure. We show that a CIR has the potential to account simultaneously for the X-ray variability and constraints provided by the spectral analysis. Ultimately, the viability of the CIR model will require both intermittent long-term X-ray monitoring of WR 6 and better physical models of CIR X-ray production at large radii in stellar winds.