Injection of nanoscale zero-valent iron (nZVI) has recently gained great interest as an emerging technology for in-situ remediation of chlorinated organic compounds in groundwater systems. Zero-valent iron (ZVI) is able to reduce organic compounds and render them into less harmful substances. The use of nanoscale particles instead of granular or microscale particles can increase dechlorination rates by orders of magnitude due to their high surface area. However, classical nZVI appears to be hampered in its environmental application by its limited mobility. One approach is colloid-supported transport of nZVI, in which the nZVI is transported by a mobile colloid. In this study, transport properties of activated carbon colloid supported nZVI (c-nZVI; d50 = 2.4 µm) are investigated in column tests using columns of 40 cm length filled with porous media. A suspension was pumped through the column under different physicochemical conditions (addition of a polyanionic stabilizer and changes in pH and ionic strength). The highest observed breakthrough was 62% of the injected concentration in glass beads with addition of stabilizer. Addition of mono- and bivalent salts, e.g. more than 0.5 mmol/L CaCl2, can decrease mobility, and changes in pH to values below six can inhibit mobility entirely. Measurements of colloid sizes and zeta potentials show changes in the mean particle size by a factor of ten and a shift of the zeta potential from -62 mV to -80 mV during the transport experiment. Nevertheless, the results suggest potential applicability of c-nZVI under field conditions. (C) 2014 Elsevier B.V. All rights reserved.
Background: Travel-related conditions have an impact on the quality of oral anticoagulation therapy (OAT) with vitamin K antagonists. No predictors of travel activity or of travel-associated haemorrhage or thromboembolic complications are known for patients on OAT.
Methods: A standardised questionnaire was sent to 2500 patients on long-term OAT in Austria, Switzerland and Germany. 997 questionnaires were received (response rate 39.9%). Ordinal or logistic regression models with travel activity before and after onset of OAT, or travel-associated haemorrhages and thromboembolic complications, as outcome measures were applied.
Results: 43.4% changed travel habits since onset of OAT, with 24.9% and 18.5% reporting decreased or increased travel activity, respectively. Long-distance worldwide travel before OAT or having suffered from thromboembolic complications was associated with reduced travel activity. Increased travel activity was associated with more intensive travel experience, increased duration of OAT, higher education, or performing patient self-management (PSM). Travel-associated haemorrhages or thromboembolic complications were reported by 6.5% and 0.9% of the patients, respectively. Former thromboembolic complications, former bleedings and PSM were significant predictors of travel-associated complications.
Conclusions: OAT also increases travel intensity. Specific medical advice to prevent complications should be given prior to travelling, especially to patients with former bleedings or thromboembolic complications and to those performing PSM. (C) 2014 Elsevier Ltd. All rights reserved.
Background: Chronic kidney disease (CKD) is a frequent comorbidity among elderly patients and those with cardiovascular disease. CKD carries prognostic relevance. We aimed to describe patient characteristics, risk factor management and control status of patients in cardiac rehabilitation (CR), differentiated by presence or absence of CKD.
Design and methods: Data from 92,071 inpatients with adequate information to calculate glomerular filtration rate (GFR) based on the Cockcroft-Gault formula were analyzed at the beginning and the end of a 3-week CR stay. CKD was defined as estimated GFR <60 ml/min/1.73 m².
Results: Compared with non-CKD patients, CKD patients were significantly older (72.0 versus 58.0 years) and more often had diabetes mellitus, arterial hypertension, and atherothrombotic manifestations (previous stroke, peripheral arterial disease), but fewer were current or previous smokers or had a CHD family history. Exercise capacity was much lower in CKD (59 vs. 92 Watts). Fewer patients with CKD were treated with percutaneous coronary intervention (PCI), but more had coronary artery bypass graft (CABG) surgery. Patients with CKD, compared with non-CKD, less frequently received statins, acetylsalicylic acid (ASA), clopidogrel, beta blockers, and angiotensin converting enzyme (ACE) inhibitors, and more frequently received angiotensin receptor blockers, insulin and oral anticoagulants. In CKD, mean low density lipoprotein cholesterol (LDL-C), total cholesterol, and high density lipoprotein cholesterol (HDL-C) were slightly higher at baseline, while triglycerides were substantially lower. This lipid pattern did not change at the discharge visit, but overall control rates for all described parameters (with the exception of HDL-C) were improved substantially. At discharge, systolic blood pressure (BP) was higher in CKD (124 versus 121 mmHg) and diastolic BP was lower (72 versus 74 mmHg). At discharge, 68.7% of CKD versus 71.9% of non-CKD patients had LDL-C <100 mg/dl. Physical fitness on exercise testing improved substantially in both groups. When the Modification of Diet in Renal Disease (MDRD) formula was used for CKD classification, there was no clinically relevant change in these results.
Conclusion: Within a short period of 3-4 weeks, CR led to substantial improvements in key risk factors such as lipid profile, blood pressure, and physical fitness for all patients, even if CKD was present.
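For reference, the Cockcroft-Gault estimate used to classify patients can be sketched as follows. Note that the formula returns creatinine clearance in ml/min, whereas the study's cut-off of 60 ml/min/1.73 m² additionally involves normalization to body surface area, which is omitted here; the function name and example values are illustrative.

```python
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female=False):
    """Estimate creatinine clearance (ml/min) with the Cockcroft-Gault formula."""
    crcl = ((140 - age_years) * weight_kg) / (72.0 * serum_creatinine_mg_dl)
    # the conventional correction factor for female patients
    return 0.85 * crcl if female else crcl

# e.g. a 72-year-old, 70 kg male with serum creatinine 1.0 mg/dl
assert round(cockcroft_gault(72, 70, 1.0), 1) == 66.1
```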
Bats are important components in tropical mammal assemblages. Unravelling the mechanisms allowing multiple syntopic bat species to coexist can provide insights into community ecology. However, dietary information on component species of these assemblages is often difficult to obtain. Here we measured stable carbon and nitrogen isotopes in hair samples clipped from the backs of 94 specimens to indirectly examine whether trophic niche differentiation and microhabitat segregation explain the coexistence of 16 bat species at Ankarana, northern Madagascar. The assemblage ranged over 4.4‰ in δ15N and was structured into two trophic levels, with phytophagous Pteropodidae as primary consumers (c. 3‰ enriched over plants) and different insectivorous bats as secondary consumers (c. 4‰ enriched over primary consumers). Bat species utilizing different microhabitats formed distinct isotopic clusters (metric analyses of δ13C-δ15N bi-plots), but taxa foraging in the same microhabitat did not show more pronounced trophic differentiation than those occupying different microhabitats. As revealed by multivariate analyses, no discernible feeding competition was found in the local assemblage amongst congeneric species as compared with non-congeners. In contrast to ecological niche theory, but in accordance with studies on New and Old World bat assemblages, competitive interactions appear to be relaxed at Ankarana and not a prevailing structuring force.
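The trophic-level reading of the δ15N data follows standard isotope-ecology logic; a minimal sketch with hypothetical values (and an assumed per-level enrichment of 3.4‰, close to the c. 3-4‰ reported above) could look like this:

```python
def trophic_level(d15n_consumer, d15n_base, enrichment=3.4):
    """Estimate trophic level relative to a baseline (primary producer = 1),
    assuming a fixed delta-15N enrichment per trophic level."""
    return 1.0 + (d15n_consumer - d15n_base) / enrichment

# hypothetical per-mil values mimicking the pattern in the abstract
plant = 2.0
frugivore = plant + 3.0        # phytophagous bat, ~3 per mil over plants
insectivore = frugivore + 4.0  # insectivorous bat, ~4 per mil over primary consumers

assert round(trophic_level(frugivore, plant), 2) == 1.88
assert trophic_level(insectivore, plant) > trophic_level(frugivore, plant)
```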
Mueller, J, Mueller, S, Stoll, J, Baur, H, and Mayer, F. Trunk extensor and flexor strength capacity in healthy young elite athletes aged 11-15 years. J Strength Cond Res 28(5): 1328-1334, 2014. Differences in trunk strength capacity due to gender and sports are well documented in adults. In contrast, data concerning young athletes are sparse. The purpose of this study was to assess the maximum trunk strength of adolescent athletes and to investigate differences between genders and age groups. A total of 520 young athletes were recruited. Finally, 377 (n = 233/144 M/F; 13 ± 1 years; 1.62 ± 0.11 m height; 51 ± 12 kg mass; training: 4.5 ± 2.6 years; training sessions/week: 4.3 ± 3.0; various sports) young athletes were included in the final data analysis. Furthermore, 5 age groups were differentiated (age groups: 11, 12, 13, 14, and 15 years; n = 90, 150, 42, 43, and 52, respectively). Maximum strength of trunk flexors (Flex) and extensors (Ext) was assessed in all subjects during isokinetic concentric measurements (60°·s⁻¹; 5 repetitions; range of motion: 55°). Maximum strength was characterized by absolute peak torque (Flex_abs, Ext_abs; N·m), peak torque normalized to body weight (Flex_norm, Ext_norm; N·m·kg⁻¹ BW), and the Flex_abs/Ext_abs ratio (RKquot). Descriptive data analysis (mean ± SD) was completed, followed by analysis of variance (α = 0.05; post hoc test [Tukey-Kramer]). Mean maximum strength for all athletes was 97 ± 34 N·m in Flex_abs and 140 ± 50 N·m in Ext_abs (Flex_norm = 1.9 ± 0.3 N·m·kg⁻¹ BW, Ext_norm = 2.8 ± 0.6 N·m·kg⁻¹ BW). Males showed statistically significantly higher absolute and normalized values compared with females (p < 0.001). Flex_abs and Ext_abs rose with increasing age almost 2-fold for males and females (Flex_abs, Ext_abs: p < 0.001). Flex_norm and Ext_norm increased with age for males (p < 0.001), but not for females (Flex_norm: p = 0.26; Ext_norm: p = 0.20). RKquot (mean ± SD: 0.71 ± 0.16) did not reveal any differences regarding age (p = 0.87) or gender (p = 0.43). In adolescent athletes, maximum trunk strength must be discussed in a gender- and age-specific context. The Flex_abs/Ext_abs ratio revealed extensor dominance, which seems to be independent of age and gender. The values assessed may serve as a basis to evaluate and discuss trunk strength in athletes.
Zinc oxide (ZnO) is regarded as a promising alternative material for transparent conductive electrodes in optoelectronic devices. However, ZnO suffers from poor chemical stability. ZnO also has a moderate work function (WF), which results in substantial charge injection barriers into common (organic) semiconductors that constitute the active layer in a device. Controlling and tuning the ZnO WF is therefore necessary but challenging. Here, a variety of phosphonic acid based self-assembled monolayers (SAMs) deposited on ZnO surfaces are investigated. It is demonstrated that they allow tuning of the WF over a wide range of more than 1.5 eV, thus enabling the use of ZnO as both the hole-injecting and electron-injecting contact. The modified ZnO surfaces are characterized using a number of complementary techniques, demonstrating that the preparation protocol yields dense, well-defined molecular monolayers.
This article examines and discusses aspects of the acquisition of Turkish literacy in the minority context in Germany. After describing the particular sociolinguistic and language contact situation of Turkish in Germany, the article focuses on two empirical aspects of the acquisition of Turkish literacy within this situation. First, the development of noun phrase complexity is analyzed in a pseudo-longitudinal approach investigating Turkish texts of German-Turkish bilingual pupils of different grades. Second, strategies of literacy are analyzed in the investigation of Turkish texts from bilingual high school pupils of the 12th grade.
Animal personalities are by definition stable over time, but to what extent they may change during development and in adulthood to adjust to environmental change is unclear. Animals of temperate environments have evolved physiological and behavioural adaptations to cope with the cyclic seasonal changes. This may also result in changes in personality: suites of behavioural and physiological traits that vary consistently among individuals. Winter, typically the adverse season challenging survival, may require individuals to have a shy/cautious personality, whereas during summer, energetically favourable to reproduction, individuals may benefit from a bold/risk-taking personality. To test the effects of seasonal changes in early life and in adulthood on behaviours (activity, exploration and anxiety), body mass and stress response, we manipulated the photoperiod and quality of food in two experiments to simulate the conditions of winter and summer. We used common voles (Microtus arvalis), as they have been shown to display personality based on behavioural consistency over time and contexts. Summer-born voles allocated to winter conditions at weaning had lower body mass, a higher corticosterone increase after stress and a less active, more cautious behavioural phenotype in adulthood compared to voles born in and allocated to summer conditions. In contrast, adult females only showed plasticity in stress-induced corticosterone levels, which were higher in the animals that were transferred to the winter conditions than in those staying in summer conditions. These results suggest a sensitive period for season-related behavioural plasticity in which juveniles shift over the bold-shy axis.
In Germany, active bat rabies surveillance was conducted between 1993 and 2012. A total of 4546 oropharyngeal swab samples from 18 bat species were screened for the presence of EBLV-1-, EBLV-2- and BBLV-specific RNA. Overall, 0.15% of oropharyngeal swab samples tested EBLV-1 positive, with the majority originating from Eptesicus serotinus. Interestingly, out of seven RT-PCR-positive oropharyngeal swabs subjected to virus isolation, viable virus was isolated from a single serotine bat (E. serotinus). Additionally, 1226 blood samples were tested serologically, and varying virus neutralizing antibody titres were found in at least eight different bat species. The detection of viral RNA and seroconversion in repeatedly sampled serotine bats indicates long-term circulation of the virus in a particular bat colony. The limitations of random-based active bat rabies surveillance compared with passive bat rabies surveillance, and the possible application of targeted approaches for future research on bat lyssavirus dynamics and maintenance, are discussed.
Two of a kind?
(2014)
School attacks are attracting increasing attention in aggression research. Recent systematic analyses provided new insights into offense and offender characteristics. Less is known about attacks in institutions of higher education (e.g., universities). It is therefore questionable whether the term “school attack” should be limited to institutions of general education or could be extended to institutions of higher education. Scientific literature is divided in distinguishing or unifying these two groups and reports similarities as well as differences. We researched 232 school attacks and 45 attacks in institutions of higher education throughout the world and conducted systematic comparisons between the two groups. The analyses yielded differences in offender (e.g., age, migration background) and offense characteristics (e.g., weapons, suicide rates), and some similarities (e.g., gender). Most differences can apparently be accounted for by offenders’ age and situational influences. We discuss the implications of our findings for future research and the development of preventative measures.
Current chemical risk assessment procedures may result in imprecise estimates of risk due to sometimes arbitrary simplifying assumptions. As a way to incorporate ecological complexity and improve risk estimates, mechanistic effect models have been recommended. However, effect modeling has not yet been extensively used for regulatory purposes, one of the main reasons being uncertainty about which model type to use to answer specific regulatory questions. We took an individual-based model (IBM), which was developed for risk assessment of soil invertebrates and includes avoidance of highly contaminated areas, and contrasted it with a simpler, more standardized model based on the generic metapopulation matrix model RAMAS. In the latter, the individuals within a sub-population are no longer treated as separate entities and the spatial resolution is lower. We explored the consequences of model aggregation in terms of assessing population-level effects for different spatial distributions of a toxic chemical. For homogeneous contamination of the soil, we found good agreement between the two models, whereas for heterogeneous contamination, at different concentrations and percentages of contaminated area, RAMAS results alternately resembled IBM results with avoidance, without avoidance, or at different food levels. This inconsistency is explained on the basis of behavioral responses that are included in the IBM but not in RAMAS. Overall, RAMAS was less sensitive than the IBM in detecting population-level effects of different spatial patterns of exposure. We conclude that choosing the right model type for risk assessment of chemicals depends on whether or not population-level effects of small-scale heterogeneity in exposure need to be detected. We recommend that if in doubt, both model types should be used and compared. Describing both models following the same standard format, the ODD protocol, makes them equally transparent and understandable. The simpler model helps to build up trust for the more complex model and can be used for more homogeneous exposure patterns. The more complex model helps to detect and understand the limitations of the simpler model and is needed to ensure ecological realism for more complex exposure scenarios. (C) 2013 Elsevier B.V. All rights reserved.
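The aggregation step from an IBM to a RAMAS-style matrix model can be illustrated with a toy two-patch projection; all rates below are hypothetical, and the toxicant is represented simply as a reduced growth rate in the contaminated patch (no individual-level avoidance behavior, which is exactly what the matrix model omits):

```python
import numpy as np

# hypothetical per-step growth rates for a clean and a contaminated patch
r_clean, r_cont = 1.2, 0.8
d = 0.1  # fraction of individuals dispersing to the other patch each step

# projection matrix: column j holds the contributions of patch j after
# growth and dispersal; no behavioral avoidance is represented
A = np.array([[(1 - d) * r_clean, d * r_cont],
              [d * r_clean,       (1 - d) * r_cont]])

n = np.array([50.0, 50.0])  # initial abundances (clean, contaminated)
for _ in range(20):
    n = A @ n

# the population concentrates in the clean patch purely through
# differential growth, not through avoidance behavior
assert n[0] > n[1]
```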
Two-photon polymerization of hydrogels – versatile solutions to fabricate well-defined 3D structures
(2014)
Hydrogels are cross-linked water-containing polymer networks that are formed by physical, ionic or covalent interactions. In recent years, they have attracted significant attention because of their unique physical properties, which make them promising materials for numerous applications in food and cosmetic processing, as well as in drug delivery and tissue engineering. Hydrogels are highly water-swellable materials, which can considerably increase in volume without losing cohesion, are biocompatible and possess excellent tissue-like physical properties, which can mimic in vivo conditions. When combined with highly precise manufacturing technologies, such as two-photon polymerization (2PP), well-defined three-dimensional structures can be obtained. These structures can become scaffolds for selective cell-entrapping, cell/drug delivery, sensing and prosthetic implants in regenerative medicine. 2PP has been distinguished from other rapid prototyping methods because it is a non-invasive and efficient approach for hydrogel cross-linking. This review discusses the 2PP-based fabrication of 3D hydrogel structures and their potential applications in biotechnology. A brief overview regarding the 2PP methodology and hydrogel properties relevant to biomedical applications is given together with a review of the most important recent achievements in the field.
The UDKM1DSIM toolbox is a collection of MATLAB (MathWorks Inc.) classes and routines to simulate the structural dynamics and the corresponding X-ray diffraction response in one-dimensional crystalline sample structures upon an arbitrary time-dependent external stimulus, e.g. an ultrashort laser pulse. The toolbox provides the capabilities to define arbitrary layered structures on the atomic level, including a rich database of corresponding element-specific physical properties. The excitation of ultrafast dynamics is represented by an N-temperature model which is commonly applied for ultrafast optical excitations. Structural dynamics due to thermal stress are calculated by a linear-chain model of masses and springs. The resulting X-ray diffraction response is computed by dynamical X-ray theory. The UDKM1DSIM toolbox is highly modular and allows for introducing user-defined results at any step in the simulation procedure.
Program summary
Program title: udkm1Dsim
Catalogue identifier: AERH_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERH_v1_0.html
Licensing provisions: BSD
No. of lines in distributed program, including test data, etc.: 130221
No. of bytes in distributed program, including test data, etc.: 2746036
Distribution format: tar.gz
Programming language: Matlab (MathWorks Inc.).
Computer: PC/Workstation.
Operating system: Running Matlab installation required (tested on MS Windows XP-7, Ubuntu Linux 11.04-13.04).
Has the code been vectorized or parallelized?: Parallelization for dynamical XRD computations. Number of processors used: 1-12 for Matlab Parallel Computing Toolbox; 1 - infinity for Matlab Distributed Computing Toolbox
External routines:
Optional: Matlab Parallel Computing Toolbox, Matlab Distributed Computing Toolbox. Required (included in the package): mtimesx Fast Matrix Multiply for Matlab by James Tursa, xml_io_tools by Jaroslaw Tuszynski, textprogressbar by Paul Proteus
Nature of problem:
Simulate the lattice dynamics of 1D crystalline sample structures due to an ultrafast excitation including thermal transport and compute the corresponding transient X-ray diffraction pattern.
Solution method:
The excitation is represented by an N-temperature model; structural dynamics due to thermal stress are calculated by a linear-chain model of masses and springs; the resulting X-ray diffraction response is computed by dynamical X-ray theory.
Restrictions:
The program is restricted to 1D sample structures and is further limited to longitudinal acoustic phonon modes and symmetrical X-ray diffraction geometries.
Unusual features: The program is highly modular and allows the inclusion of user-defined inputs at any time of the simulation procedure.
Running time: The running time is highly dependent on the number of unit cells in the sample structure and other simulation parameters such as the time span or the angular grid for X-ray diffraction computations. However, the example files are computed in approx. 1-5 min each on an 8-core processor with 16 GB RAM.
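The linear-chain model of masses and springs named in the solution method can be illustrated independently of the toolbox. The following is a minimal sketch (not the udkm1Dsim API, and in arbitrary units): an impulsively strained near-surface region, standing in for the photoexcited layer, launches an acoustic wave along the chain, integrated with velocity-Verlet.

```python
import numpy as np

N = 100          # number of unit cells (masses)
m, k = 1.0, 1.0  # mass and spring constant (arbitrary units)
dt = 0.02        # integration time step

# initial condition: the first 10 cells are displaced, mimicking an
# impulsive expansion of the photoexcited near-surface region
x = np.zeros(N)
x[:10] = np.linspace(0.1, 0.0, 10)
v = np.zeros(N)

def accel(x):
    """Nearest-neighbor spring forces with free ends."""
    a = np.zeros_like(x)
    a[1:-1] = k * (x[2:] - 2 * x[1:-1] + x[:-2]) / m
    a[0] = k * (x[1] - x[0]) / m      # free front surface
    a[-1] = k * (x[-2] - x[-1]) / m   # free back side
    return a

def energy(x, v):
    """Total kinetic plus spring potential energy of the chain."""
    return 0.5 * m * np.sum(v**2) + 0.5 * k * np.sum((x[1:] - x[:-1])**2)

E0 = energy(x, v)
a = accel(x)
for _ in range(5000):  # velocity-Verlet integration
    x += v * dt + 0.5 * a * dt**2
    a_new = accel(x)
    v += 0.5 * (a + a_new) * dt
    a = a_new

# the symplectic integrator keeps the total energy nearly constant
assert abs(energy(x, v) - E0) / E0 < 1e-2
```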
Using ultrafast X-ray diffraction, we study the coherent picosecond lattice dynamics of photoexcited thin films in the two limiting cases, where the photoinduced stress profile decays on a length scale larger and smaller than the film thickness. We solve a unifying analytical model of the strain propagation for acoustic impedance-matched opaque films on a semi-infinite transparent substrate, showing that the lattice dynamics essentially depend on two parameters: one for the spatial profile and one for the amplitude of the strain. We illustrate the results by comparison with high-quality ultrafast X-ray diffraction data of SrRuO3 films on SrTiO3 substrates. (C) 2014 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution 3.0 Unported License.
In data processing and storage, mainly ferromagnetic (FM) materials are currently used. As physical limits are approached, new concepts have to be found for faster, smaller switches, higher data densities and greater energy efficiency. Some of the discussed new concepts involve the material classes of correlated oxides and materials with antiferromagnetic coupling. Their applicability depends critically on their switching behavior, i.e., how fast and how energy-efficiently material properties can be manipulated. This thesis presents investigations of ultrafast non-equilibrium phase transitions in such new materials. In transition metal oxides (TMOs), the coupling of different degrees of freedom and the resulting low-energy excitation spectrum often produce spectacular changes of macroscopic properties (colossal magnetoresistance, superconductivity, metal-to-insulator transitions), often accompanied by nanoscale order of spins, charges and orbital occupation and by lattice distortions, which make these materials attractive. Magnetite served as a prototype for functional TMOs showing a metal-to-insulator transition (MIT) at T = 123 K. By probing the charge and orbital order as well as the structure after an optical excitation, we found that the electronic order and the structural distortion, characteristics of the insulating phase in thermal equilibrium, are destroyed within the experimental resolution of 300 fs. The MIT itself occurs on a 1.5 ps timescale. This shows that MITs in functional materials are several thousand times faster than switching processes in semiconductors. Recently, ferrimagnetic and antiferromagnetic (AFM) materials have become interesting. It was shown in ferrimagnetic GdFeCo that the transfer of angular momentum between two opposed FM subsystems with different time constants leads to a switching of the magnetization after laser pulse excitation. In addition, it was theoretically predicted that demagnetization dynamics in AFM materials should occur faster than in FM materials, as no net angular momentum has to be transferred out of the spin system. We investigated two different AFM materials in order to learn more about their ultrafast dynamics. In Ho, a metallic AFM below T ≈ 130 K, we found that AFM order can be destroyed not only faster but also ten times more energy-efficiently than order in comparable FM metals. In EuTe, an AFM semiconductor below T ≈ 10 K, we compared the loss of magnetization and the laser-induced structural distortion in one and the same experiment. Our experiment shows that they are effectively disentangled. An exception is an ultrafast release of lattice dynamics, which we assign to the release of magnetostriction. The results presented here were obtained with time-resolved resonant soft X-ray diffraction at the Femtoslicing source of the Helmholtz-Zentrum Berlin and at the free-electron laser in Stanford (LCLS). In addition, the development and setup of a new UHV diffractometer for these experiments is reported.
A new concept for shortening hard X-ray pulses emitted from a third-generation synchrotron source down to a few picoseconds is presented. The device, called the PicoSwitch, exploits the dynamics of coherent acoustic phonons in a photo-excited thin film. A characterization of the structure demonstrates switching times of ≤5 ps and a peak reflectivity of ~10⁻³. The device is tested in a real synchrotron-based pump-probe experiment and reveals features of coherent phonon propagation in a second thin film sample, thus demonstrating the potential to significantly improve the temporal resolution at existing synchrotron facilities.
Ultraschall Berlin
(2014)
Scientific inquiry requires that we formulate not only what we know, but also what we do not know and by how much. In climate data analysis, this involves an accurate specification of measured quantities and a consequent analysis that consciously propagates the measurement errors at each step. The dissertation presents a thorough analytical method to quantify errors of measurement inherent in paleoclimate data. An additional focus is the uncertainty in assessing the coupling between different factors that influence the global mean temperature (GMT).
Paleoclimate studies critically rely on `proxy variables' that record climatic signals in natural archives. However, such proxy records inherently involve uncertainties in determining the age of the signal. We present a generic Bayesian approach to analytically determine the proxy record along with its associated uncertainty, resulting in a time-ordered sequence of correlated probability distributions rather than a precise time series. We further develop a recurrence-based method to detect dynamical events from the proxy probability distributions. The methods are validated with synthetic examples and demonstrated with real-world proxy records. The proxy estimation step reveals the interrelations between proxy variability and uncertainty. The recurrence analysis of the East Asian Summer Monsoon during the last 9000 years confirms the well-known `dry' events at 8200 and 4400 BP, plus an additional significantly dry event at 6900 BP.
We also analyze the network of dependencies surrounding GMT. We find an intricate, directed network with multiple links between the different factors at multiple time delays. We further uncover a significant feedback from the GMT to the El Niño Southern Oscillation at quasi-biennial timescales. The analysis highlights the need of a more nuanced formulation of influences between different climatic factors, as well as the limitations in trying to estimate such dependencies.
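The recurrence-based event detection builds on the standard recurrence-plot construction. A minimal sketch, using a scalar series and a fixed threshold ε (both deliberate simplifications of the probabilistic treatment in the thesis), could look like this:

```python
import numpy as np

def recurrence_matrix(series, eps):
    """Binary recurrence matrix: R[i, j] = 1 if states i and j lie within eps."""
    dist = np.abs(series[:, None] - series[None, :])
    return (dist < eps).astype(int)

def recurrence_rate(R):
    """Fraction of recurrent point pairs, a basic recurrence quantifier."""
    return R.mean()

x = np.sin(np.linspace(0, 6 * np.pi, 200))  # periodic toy signal
R = recurrence_matrix(x, eps=0.1)

assert np.array_equal(R, R.T)   # recurrence matrices are symmetric
assert np.all(np.diag(R) == 1)  # every state recurs with itself
```

For a periodic signal such as the one above, the recurrence matrix shows the characteristic diagonal line structure; changes in that structure over time are what a recurrence-based event detector looks for.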
The aim of the present thesis is to answer the question to what degree the processes involved in sentence comprehension are sensitive to task demands. A central phenomenon in this regard is the so-called ambiguity advantage, which is the finding that ambiguous sentences can be easier to process than unambiguous sentences. This finding may appear counterintuitive, because more meanings should be associated with a higher computational effort. Currently, two theories exist that can explain this finding.
The Unrestricted Race Model (URM) by van Gompel et al. (2001) assumes that several sentence interpretations are computed in parallel, whenever possible, and that the first interpretation to be computed is assigned to the sentence. Because the duration of each structure-building process varies from trial to trial, the parallelism in structure-building predicts that ambiguous sentences should be processed faster. This is because when two structures are permissible, the chances that some interpretation will be computed quickly are higher than when only one specific structure is permissible. Importantly, the URM is not sensitive to task demands such as the type of comprehension questions being asked.
A radically different proposal is the strategic underspecification model by Swets et al. (2008). It assumes that readers do not attempt to resolve ambiguities unless it is absolutely necessary. In other words, they underspecify. According to the strategic underspecification hypothesis, all attested replications of the ambiguity advantage are due to the fact that in those experiments, readers were not required to fully understand the sentence.
In this thesis, these two models of the parser's actions at choice points in the sentence are presented and evaluated. First, it is argued that Swets et al.'s (2008) evidence against the URM and in favor of underspecification is inconclusive. Next, the precise predictions of the URM as well as of the underspecification model are refined. Subsequently, a self-paced reading experiment involving the attachment of pre-nominal relative clauses in Turkish is presented, which provides evidence against strategic underspecification. A further experiment is presented, which investigated relative clause attachment in German using the speed-accuracy tradeoff (SAT) paradigm. This experiment provides evidence against strategic underspecification and in favor of the URM. Furthermore, the results of the experiment are used to argue that human sentence comprehension is fallible, and that theories of parsing should be able to account for this fact. Finally, a third experiment is presented, which provides evidence for sensitivity to task demands in the treatment of ambiguities. Because this finding is incompatible with the URM, and because the strategic underspecification model has been ruled out, a new model of ambiguity resolution is proposed: the stochastic multiple-channel model of ambiguity resolution (SMCM). It is further shown that the quantitative predictions of the SMCM are in agreement with the experimental data.
In conclusion, it is argued that the human sentence comprehension system is parallel and fallible, and that it is sensitive to task-demands.
Background: Agrammatic speakers have problems with grammatical encoding and decoding. However, not all syntactic processes are equally problematic: present time reference, who questions, and reflexives can be processed by narrow syntax alone and are relatively spared compared to past time reference, which questions, and personal pronouns, respectively. The latter need additional access to discourse and information structures to link to their referent outside the clause (Avrutin, 2006). Linguistic processing that requires discourse-linking is difficult for agrammatic individuals: verb morphology with reference to the past is more difficult than with reference to the present (Bastiaanse et al., 2011). The same holds for which questions compared to who questions and for pronouns compared to reflexives (Avrutin, 2006). These results have been reported independently for different populations in different languages. The current study, for the first time, tested all conditions within the same population.
Aims: We had two aims with the current study. First, we wanted to investigate whether discourse-linking is the common denominator of the deficits in time reference, wh questions, and object pronouns. Second, we aimed to compare the comprehension of discourse-linked elements in people with agrammatic and fluent aphasia.
Methods and procedures: Three sentence-picture-matching tasks were administered to 10 agrammatic, 10 fluent aphasic, and 10 non-brain-damaged Russian speakers (NBDs): (1) the Test for Assessing Reference of Time (TART) for present imperfective (reference to present) and past perfective (reference to past), (2) the Wh Extraction Assessment Tool (WHEAT) for which and who subject questions, and (3) the Reflexive-Pronoun Test (RePro) for reflexive and pronominal reference.
Outcomes and results: NBDs scored at ceiling and significantly higher than the aphasic participants. We found an overall effect of discourse-linking in the TART and WHEAT for the agrammatic speakers, and in all three tests for the fluent speakers. Scores on the RePro were at ceiling.
Conclusions: The results are in line with the prediction that problems that individuals with agrammatic and fluent aphasia experience when comprehending sentences that contain verbs with past time reference, which question words and pronouns are caused by the fact that these elements involve discourse linking. The effect is not specific to agrammatism, although it may result from different underlying disorders in agrammatic and fluent aphasia.
A detailed knowledge of cell wall heterogeneity and complexity is crucial for understanding plant growth and development. One key challenge is to establish links between polysaccharide-rich cell walls and their phenotypic characteristics. This is of particular interest for plant materials such as cotton fibers, which are of both biological and industrial importance. To this end, we studied cotton fiber characteristics together with glycan arrays using regression-based approaches. Taking advantage of the comprehensive microarray polymer profiling technique (CoMPP), 32 cotton lines from different cotton species were studied. The glycan array was generated by sequential extraction of cell wall polysaccharides from mature cotton fibers and by screening the samples against eleven extensively characterized cell wall probes. In addition, phenotypic characteristics of cotton fibers such as length, strength, elongation and micronaire were measured. The relationship between the two datasets was established in an integrative manner using linear regression methods. In the conducted analysis, we demonstrated the usefulness of regression-based approaches in establishing a relationship between glycan measurements and phenotypic traits. The analysis also identified specific polysaccharides which may play a major role during fiber development for the final fiber characteristics. Three different regression methods identified a negative correlation between micronaire and the xyloglucan and homogalacturonan probes. Moreover, homogalacturonan and callose were shown to be significant predictors for fiber length. The role of these polysaccharides had already been pointed out in previous cell wall elongation studies. Additional relationships were predicted for fiber strength and elongation, which will need further experimental validation.
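The regression idea can be sketched as follows (all data and probe roles are invented for illustration; only the dimensions, 32 cotton lines and 11 probes, mirror the study, and plain least squares stands in for the study's regression methods):

```python
import numpy as np

# Hypothetical sketch: predict a fiber trait (say, length) from glycan-probe
# signals by ordinary least squares. The coefficients and noise level are
# invented; two probes are made "predictive" of the trait.
rng = np.random.default_rng(1)
n_lines, n_probes = 32, 11
X = rng.normal(size=(n_lines, n_probes))              # glycan array signals
true_w = np.zeros(n_probes)
true_w[0], true_w[3] = 0.8, -0.5                      # two predictive probes
y = X @ true_w + rng.normal(scale=0.1, size=n_lines)  # fiber trait (a.u.)

# Fit with an intercept; the fitted weights flag which probes predict the trait.
A = np.column_stack([np.ones(n_lines), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
```

Large fitted weights then point to the polysaccharide probes most associated with the measured trait.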
Unfälle der Sprache
(2014)
The term "catastrophe" is booming in our everyday and media language. What is labelled a "catastrophe" in the succession of wars, assassinations, earthquakes, volcanic eruptions and tsunamis calls for a pointed analysis. In literary studies, the term is used to denote the terrible calamity with which a tragedy ends. The "strophe" originally referred to the bodily turn with which the chorus in ancient tragedy accompanied its song, before something new begins ...
Although the Union treaties to this day contain no provision on member state liability for decisions of their courts, the Court of Justice of the European Union (CJEU) has developed and refined such liability in a series of rulings. This thesis analyzes that case law in depth, together with the multifaceted legal questions arising from it. The first chapter traces the historical development of state liability under Union law in general, starting from the well-known Francovich judgment of 1991. The second chapter then presents the decisions fundamental to liability for judicial wrongs in the Köbler and Traghetti cases. The third chapter examines the legal character of state liability under Union law, including the question of whether the Union law claim is subsidiary to existing national state liability claims. The fourth chapter addresses whether state liability under Union law for judicial wrongs should be recognized in principle, discussing and evaluating in detail the main arguments for and against such liability. The fifth chapter discusses in detail the problems connected with the Union law conditions for liability for decisions of courts of last instance. It also asks whether liability for erroneous decisions of lower-instance courts should be endorsed. The sixth chapter deals with how the member states give shape to Union law state liability for decisions of courts of last instance, including the question of whether the German liability privileges for judicial wrongs apply to the Union law state liability claim.
The final chapter examines whether the CJEU possessed any competence at all to create state liability for decisions of courts of last instance. The thesis closes by presenting its main findings and an outlook on further possible effects and developments of Union law state liability for judicial wrongs.
A view that homogenizes the language theories of the seventeenth and eighteenth centuries can capture the reality of the conceptions of language embraced in this period only in broad outline. The recognition of a Cartesian theory of language, like the undifferentiated explanation of the developments following the shift from rationalist visions to sense-oriented conceptions, is a result of such homogenization, a process that reflects reality only in part.
Linguistic thought was marked by a mixture of narrative and conceptual-rational forms of reflection that complemented one another. While the conceptual approach attempted to identify the fundamental properties of language and order them rationally, the narrative forms of linguistic reflection did not address language as a conceptual object; rather, they represented it as an object to be understood. Narrative and conceptual approaches to language entail discursive differences in theoretical-linguistic positions. The cast of linguistic-theoretical thought also contributes, through different traditions, to the multiplicity of theoretical views of language. By traditions we mean dominant positions in metalinguistic reflection, present in regional contexts, which may differ from other traditions. In any case, the delayed development or reception of linguistic theories can also lead to characteristic differences. The linguistic theories of the Enlightenment, for example, were received in Spain later than in other European countries. This led to the synchronic acceptance of theoretical elements belonging to different, successive theories. Looking beyond Europe, attention is drawn to the analogous approaches to linguistic reflection that developed in China at the beginning of the twentieth century.
Unity and diversity can be traced not only on the level of metalinguistic knowledge but also on the level of the object. A challenge for language description oriented to the Greco-Latin tradition was posed by the indigenous languages with which contact was beginning through the voyages of discovery and, later, the onset of colonialism. Alongside the exogenous communication of the metalinguistic transmission of relations within the European languages, there are also approaches towards perceiving the categorial specificity of the American languages. Although in some cases the correct categorizations for the languages described were not recognized, it was at least established that the categories known from Latin grammar did not apply.
In the research of recent decades, the representation of a paradigm of seventeenth- and eighteenth-century philosophy of language that universally subordinates the multiplicity of languages to valid structures of thought, and that prescribes for linguistic reflection the fixed categories of a general grammar strictly oriented to rationalist logic, has repeatedly been relativized. Being connected with the grounding of the unity and unalterability of humankind across space and time, the thesis that languages in their manifold nature can exist only in dependence on a universal structure of thought could be counted among the paradigmatic positions of the philosophy of language of that time. Through the knowledge of the historical origin of human evolution, of all its ways of life and forms of communication, another paradigmatic position gains prominence, one that attributes to language a formative influence on thought.
Through the ideological-philosophical differentiation and the national specificity of its theses on language in general and on the historical languages in particular, the secularized view of humanity and society elaborated at the height of the Enlightenment was associated with the corresponding development and change of theoretical-linguistic positions. With the proclamation of language and thought as results of a long, mutually corresponding development in the history of humanity, new value is assigned to positions on the nature and origin of language.
Unstetige Galerkin-Diskretisierung niedriger Ordnung in einem atmosphärischen Multiskalenmodell
(2014)
The dynamics of the Earth's atmosphere span a range from microphysical turbulence through convective processes and cloud formation to planetary wave patterns. For weather forecasting and for studying the climate over decades and centuries, these dynamics are modelled with numerical methods. As computing technology advances, new dynamical cores for climate models are needed that can also resolve the processes corresponding to ever finer resolution. The dynamical core of a model consists of the implementation (discretization) of the fundamental dynamical equations for the evolution of mass, energy and momentum, so that they can be solved numerically on computers. This thesis investigates the suitability of a low-order discontinuous Galerkin method for atmospheric applications. This suitability is not self-evident from theory for equations involving external forces such as gravity and the Coriolis force. Necessary adaptations are described that stabilize the method without resorting to so-called slope limiters. For the unmodified method it is shown that it cannot represent atmospheric equilibria stably. The stabilized model that was developed reproduces a range of standard test cases of atmospheric dynamics with the Euler and shallow-water equations across a wide range of spatial and temporal scales. Solving the thermal wind equation along the characteristic curves, which coincide with the isobars, yields atmospheric equilibrium states whose susceptibility to (barotropic and baroclinic) instabilities, essential for the development of cyclones, can be tuned through a prescribed base flow.
In contrast to earlier work, these states are defined directly in the z-system (height in metres) and do not have to be transferred from pressure coordinates. With these states, used both as reference states, of which only the deviations are treated numerically, and in particular as initial states subjected to a small perturbation, several simulation studies of barotropic and baroclinic instability are carried out. A highlight is the simulation-based study, made possible by the formulation of base flows with tunable baroclinicity, of the degree of baroclinic instability of different wavelengths as a function of static stability and vertical wind shear, corresponding to stability maps from theoretical considerations in the literature.
Inferring the internal interaction patterns of a complex dynamical system is a challenging problem. Traditional methods often rely on examining the correlations among the dynamical units. However, in systems such as transcription networks, one unit's variable is also correlated with the rate of change of another unit's variable. Inspired by this, we introduce the concept of derivative-variable correlation, and use it to design a new method of reconstructing complex systems (networks) from dynamical time series. Using a tunable observable as a parameter, the reconstruction of any system with known interaction functions is formulated via a simple matrix equation. We suggest a procedure aimed at optimizing the reconstruction from the time series of length comparable to the characteristic dynamical time scale. Our method also provides a reliable precision estimate. We illustrate the method's implementation via elementary dynamical models, and demonstrate its robustness to both model error and observation error.
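A minimal sketch of the reconstruction idea, restricted to linear dynamics (the paper's formulation is more general, with tunable observables and a precision estimate): derivatives estimated from the time series, together with the variables themselves, determine the interaction matrix through a simple least-squares matrix equation.

```python
import numpy as np

# Linear special case: for dynamics x' = A x, the derivative-variable
# relation dx/dt = x @ A.T lets us recover A from a single time series
# by least squares.
rng = np.random.default_rng(2)
n, T, dt = 4, 2000, 0.01
A_true = rng.normal(scale=0.5, size=(n, n)) - 2.0 * np.eye(n)  # stable system

x = np.empty((T, n))
x[0] = rng.normal(size=n)
for t in range(T - 1):                      # forward Euler integration
    x[t + 1] = x[t] + dt * (A_true @ x[t])

dxdt = (x[1:] - x[:-1]) / dt                # finite-difference derivatives
A_est = np.linalg.lstsq(x[:-1], dxdt, rcond=None)[0].T
```

With noisy or nonlinear data the same scheme applies, but the estimate degrades with observation error, which is where the paper's precision estimate comes in.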
In this work, rigid oligospiroketal (OSK) rods were successfully used as basic building blocks for complex 2D and 3D systems. To this end, a difunctionalized rigid rod was synthesized and employed, together with rods of the same kind and other branched functionalization units, in azide-alkyne click reactions. For two OSK rods linked via a click reaction, theoretical calculations yielded statements about the novel bimodality of the conformation. The term "hinged rod" was introduced for these molecules, since they can rotate about a hinge and thus exist in both an extended and a bent form. Building on these findings, it was shown not only that large polymers of up to four OSK rods can be synthesized in a targeted way, but also that cycles of rigid OSK rods can be produced by deliberately changing the reaction conditions of the click reaction. The newly developed substance class of hinged rods was investigated with regard to controlling the equilibrium between the bent and the extended form. For this purpose, the hinged rod was furnished with pyrenyl groups in the terminal position. Fluorescence measurements showed that the equilibrium can be influenced, for example, by the temperature or the choice of solvent. For broad applicability, a simplified synthesis strategy was found with which arbitrary functionalization can be achieved in only one synthesis step. Photoactive hinged rods were synthesized that could be driven to targeted intramolecular dimerization. In addition, amino acids were found to provide a linking element at the end of the hinged rods that permits stereoselective synthesis of multiple functionalizations. The synthesis of complex hinged rods was presented as a novel field and offers broad research potential for further applications, e.g.
in biology (as molecular switches for ion transport) and in materials chemistry (as charge or energy transporters).
Untersuchungen zur pro-inflammatorischen Wirkung von Serum-Amyloid A in glatten Gefäßmuskelzellen
(2014)
The Epoch of Reionization marks, after recombination, the second major change in the ionization state of the universe, from a neutral to an ionized state. It starts with the appearance of the first stars and galaxies; a fraction of the high-energy photons emitted from galaxies escapes into the intergalactic medium (IGM) and gradually ionizes the hydrogen, until the IGM is completely ionized at z~6 (Fan et al., 2006). While the progress of reionization is driven by galaxy evolution, it changes the ionization and thermal state of the IGM substantially and affects subsequent structure and galaxy formation through various feedback mechanisms.
Understanding this interaction between reionization and galaxy formation is further impeded by a lack of understanding of the high-redshift galactic properties such as the dust distribution and the escape fraction of ionizing photons. Lyman Alpha Emitters (LAEs) represent a sample of high-redshift galaxies that are sensitive to all these galactic properties and the effects of reionization.
In this thesis we aim to understand the progress of reionization by performing cosmological simulations, which allow us to investigate the limits of constraining reionization with high-redshift galaxies such as LAEs, and to examine how galactic properties and the ionization state of the IGM affect the visibility and observed quantities of LAEs and Lyman Break Galaxies (LBGs).
In the first part of this thesis we focus on performing radiative transfer calculations to simulate reionization. We have developed a mapping-sphere-scheme, which, starting from spherically averaged temperature and density fields, uses our 1D radiative transfer code and computes the effect of each source on the IGM temperature and ionization (HII, HeII, HeIII) profiles, which are subsequently mapped onto a grid. Furthermore we have updated the 3D Monte-Carlo radiative transfer pCRASH, enabling detailed reionization simulations which take individual source characteristics into account.
In the second part of this thesis we perform a reionization simulation by post-processing a smoothed-particle hydrodynamical (SPH) simulation (GADGET-2) with 3D radiative transfer (pCRASH), where the ionizing sources are modelled according to the characteristics of the stellar populations in the hydrodynamical simulation. Following the ionization fractions of hydrogen (HI) and helium (HeII, HeIII), and temperature in our simulation, we find that reionization starts at z~11 and ends at z~6, and high density regions near sources are ionized earlier than low density regions far from sources.
In the third part of this thesis we couple the cosmological SPH simulation and the radiative transfer simulations with a physically motivated, self-consistent model for LAEs, in order to understand the importance of the ionization state of the IGM, the escape fraction of ionizing photons from galaxies and dust in the interstellar medium (ISM) for the visibility of LAEs. Comparison of our model's results with the LAE Lyman Alpha (Lya) and UV luminosity functions at z~6.6 reveals a three-dimensional degeneracy between the ionization state of the IGM, the ionizing photon escape fraction and the ISM dust distribution, which implies that LAEs act not only as tracers of reionization but also of the ionizing photon escape fraction and of the ISM dust distribution. This degeneracy does not break down even when we compare simulated with observed clustering of LAEs at z~6.6. However, our results show that reionization has the largest impact on the amplitude of the LAE angular correlation functions, and its imprints are clearly distinguishable from those of properties on galactic scales. These results show that reionization cannot be constrained tightly by exclusively using LAE observations. Further observational constraints, e.g. tomographies of the redshifted hydrogen 21cm line, are required.
In addition we also use our LAE model to probe the question when a galaxy is visible as a LAE or a LBG. Within our model galaxies above a critical stellar mass can produce enough luminosity to be visible as a LBG and/or a LAE. By finding an increasing duty cycle of LBGs with Lya emission as the UV magnitude or stellar mass of the galaxy rises, our model reveals that the brightest (and most massive) LBGs most often show Lya emission.
Predicting the Lya equivalent width (Lya EW) distribution and the fraction of LBGs showing Lya emission at z~6.6, we reproduce the observational trend of the Lya EWs with UV magnitude. However, the Lya EWs of the UV brightest LBGs exceed observations and can only be reconciled by accounting for an increased Lya attenuation of massive galaxies, which implies that the observed Lya brightest LAEs do not necessarily coincide with the UV brightest galaxies. We have analysed the dependencies of LAE observables on the properties of the galactic and intergalactic medium and the LAE-LBG connection, and this enhances our understanding of the nature of LAEs.
Herein, we report the use of upconversion agents to modify graphitic carbon nitride (g-C3N4) by direct thermal condensation of a mixture of ErCl3·6H2O and the supramolecular precursor cyanuric acid-melamine. We show the enhancement of g-C3N4 photoactivity after Er3+ doping by monitoring the photodegradation of Rhodamine B dye under visible light. The contribution of the upconversion agent is demonstrated by measurements using only a red laser. The Er3+ doping alters both the electronic and the chemical properties of g-C3N4. The Er3+ doping reduces emission intensity and lifetime, indicating the formation of new, nonradiative deactivation pathways, probably involving charge-transfer processes.
Inorganic arsenicals are environmental toxins that have been connected with neuropathies and impaired cognitive functions. To investigate whether such substances accumulate in brain astrocytes and affect their viability and glutathione metabolism, we exposed cultured primary astrocytes to arsenite or arsenate. Both arsenicals compromised the cell viability of astrocytes in a time- and concentration-dependent manner. However, the early onset of cell toxicity in arsenite-treated astrocytes revealed the higher toxic potential of arsenite compared with arsenate. The concentrations of arsenite and arsenate that caused half-maximal release of the cytosolic enzyme lactate dehydrogenase within 24 h were around 0.3 mM and 10 mM, respectively. The cellular arsenic contents of astrocytes increased rapidly upon exposure to arsenite or arsenate and reached almost constant steady-state levels after 4 h of incubation. These levels were about three times higher in astrocytes that had been exposed to a given concentration of arsenite compared with the respective arsenate condition. Analysis of the intracellular arsenic species revealed that almost exclusively arsenite was present in viable astrocytes that had been exposed to either arsenate or arsenite. The emerging toxicity of arsenite 4 h after exposure was accompanied by a loss in cellular total glutathione and by an increase in the cellular glutathione disulfide content. These data suggest that the high arsenite content of astrocytes that have been exposed to inorganic arsenicals causes an increase in the ratio of glutathione disulfide to glutathione, which contributes to the toxic potential of these substances.
Aims: Contrast media-induced nephropathy (CIN) is associated with increased morbidity and mortality. The renal endothelin system has been associated with disease progression in various acute and chronic renal diseases. However, robust data from adequately powered prospective clinical studies analyzing the short- and long-term impact of the renal ET system in patients with CIN have so far been missing. We therefore performed a prospective study addressing this topic.
Main methods: We included 327 patients with diabetes or renal impairment undergoing coronary angiography. Blood and spot urine were collected before and 24 h after contrast media (CM) application. Patients were followed for 90 days for major clinical events like need for dialysis, unplanned rehospitalization or death.
Key findings: The concentration of ET-1 and the urinary ET-1/creatinine ratio decreased in spot urine after CM application (ET-1 concentration: 0.91 ± 1.23 pg/ml versus 0.63 ± 1.03 pg/ml, p < 0.001; ET-1/creatinine ratio: 0.14 ± 0.23 versus 0.09 ± 0.19, p < 0.001). The urinary ET-1 concentrations in patients with CIN decreased significantly more than in patients without CIN (-0.26 ± 1.42 pg/ml vs. -0.79 ± 1.69 pg/ml, p = 0.041), whereas the decrease of the urinary ET-1/creatinine ratio was not significantly different (non-CIN patients: -0.05 ± 0.30; CIN patients: -0.11 ± 0.21, p = 0.223). Urinary ET-1 concentrations as well as the urinary ET-1/creatinine ratio were not associated with clinical events (need for dialysis, rehospitalization or death) during the 90-day follow-up after contrast media exposure. However, the urinary ET-1 concentration and the urinary ET-1/creatinine ratio after CM application were higher in those patients who had a decrease in GFR of at least 25% after 90 days of follow-up.
Significance: In general the ET-1 system in the kidney seems to be down-regulated after contrast media application in patients with moderate CIN risk. Major long-term complications of CIN (need for dialysis, rehospitalization or death) are not associated with the renal ET system. (C) 2014 The Authors. Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license.
Due to increasing demands and competition for high quality groundwater resources in many parts of the world, there is an urgent need for efficient methods that shed light on the interplay between complex natural settings and anthropogenic impacts. Thus a new approach is introduced, that aims to identify and quantify the predominant processes or factors of influence that drive groundwater and lake water dynamics on a catchment scale. The approach involves a non-linear dimension reduction method called Isometric feature mapping (Isomap). This method is applied to time series of groundwater head and lake water level data from a complex geological setting in Northeastern Germany. Two factors explaining more than 95% of the observed spatial variations are identified: (1) the anthropogenic impact of a waterworks in the study area and (2) natural groundwater recharge with different degrees of dampening at the respective sites of observation. The approach enables a presumption-free assessment to be made of the existing geological conception in the catchment, leading to an extension of the conception. Previously unknown hydraulic connections between two aquifers are identified, and connections revealed between surface water bodies and groundwater. (C) 2014 Elsevier B.V. All rights reserved.
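The core of the approach can be sketched with invented data (the well layout, driver signals and noise level below are illustrative assumptions; scikit-learn's Isomap is used as one readily available implementation):

```python
import numpy as np
from sklearn.manifold import Isomap

# Illustrative sketch: rows are observation wells, columns are time steps,
# and two hidden drivers (a pumping signal and a recharge signal) generate
# the observed heads, mimicking the study's two-factor result. Isomap then
# embeds the wells into a low-dimensional factor space.
rng = np.random.default_rng(3)
t = np.linspace(0, 4 * np.pi, 200)
drivers = np.vstack([np.sin(t),            # anthropogenic (pumping) driver
                     np.cos(0.5 * t)])     # natural (recharge) driver

n_wells = 25
mixing = rng.uniform(0, 1, size=(n_wells, 2))    # per-well dampening/mixing
heads = mixing @ drivers + 0.05 * rng.normal(size=(n_wells, 200))

emb = Isomap(n_neighbors=6, n_components=2).fit_transform(heads)
```

Each well's coordinates in the embedding can then be interpreted against the known settings, e.g. distance to the waterworks or degree of recharge dampening.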
The magnetosphere-ionosphere-thermosphere (MIT) dynamic system significantly depends on the highly variable solar wind conditions, in particular, on changes of the strength and orientation of the interplanetary magnetic field (IMF). The solar wind and IMF interactions with the magnetosphere drive the MIT system via the magnetospheric field-aligned currents (FACs). The global modeling helps us to understand the physical background of this complex system. With the present study, we test the recently developed high-resolution empirical model of field-aligned currents MFACE (a high-resolution Model of Field-Aligned Currents through Empirical orthogonal functions analysis). These FAC distributions were used as input of the time-dependent, fully self-consistent global Upper Atmosphere Model (UAM) for different seasons and various solar wind and IMF conditions. The modeling results for neutral mass density and thermospheric wind are directly compared with the CHAMP satellite measurements. In addition, we perform comparisons with the global empirical models: the thermospheric wind model (HWM07) and the atmosphere density model (Naval Research Laboratory Mass Spectrometer and Incoherent Scatter Extended 2000). The theoretical model shows a good agreement with the satellite observations and an improved behavior compared with the empirical models at high latitudes. Using the MFACE model as input parameter of the UAM model, we obtain a realistic distribution of the upper atmosphere parameters for the Northern and Southern Hemispheres during stable IMF orientation as well as during dynamic situations. This variant of the UAM can therefore be used for modeling the MIT system and space weather predictions.
Background: Knowing and, if necessary, altering competitive athletes' real attitudes towards the use of banned performance-enhancing substances is an important goal of worldwide doping prevention efforts. However, athletes are not always willing to report their real opinions. Reaction time-based attitude tests help conceal the ultimate goal of measurement from the participant and impede strategic answering. This study investigated how well a reaction time-based attitude test discriminated between athletes who were doping and those who were not. We investigated whether athletes whose urine samples were positive for at least one banned substance (dopers) evaluated doping more favorably than clean athletes (non-dopers).
Methods: We approached a group of 61 male competitive bodybuilders and collected urine samples for biochemical testing. The pictorial doping Brief Implicit Association Test (BIAT) was used for attitude measurement. This test quantifies the difference in response latencies (in milliseconds) to stimuli representing related concepts (i.e. doping-dislike/like-[health food]).
Results: Prohibited substances were found in 43% of all tested urine samples. Dopers had more lenient attitudes to doping than non-dopers (Hedges's g = -0.76). D-scores greater than -0.57 (CI95 = -0.72 to -0.46) might be indicative of a rather lenient attitude to doping. The urine samples showed that administration of combinations of substances, complementary administration of substances to treat side effects, and use of stimulants to promote loss of body fat were common.
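The effect size reported above is Hedges's g, a standardized mean difference with a small-sample bias correction. A minimal sketch of its computation follows; the `dopers` and `non_dopers` D-score arrays are hypothetical illustrative values, not the study's data.

```python
import numpy as np

def hedges_g(a, b):
    """Hedges's g: Cohen's d with the small-sample correction factor."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    # Pooled standard deviation from the two sample variances (ddof=1).
    sp = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                 / (na + nb - 2))
    d = (a.mean() - b.mean()) / sp
    correction = 1 - 3 / (4 * (na + nb) - 9)  # small-sample bias correction
    return d * correction

# Hypothetical BIAT D-scores (more negative = more lenient attitude to doping).
dopers = [-0.70, -0.65, -0.80, -0.55, -0.72]
non_dopers = [-0.30, -0.25, -0.40, -0.20, -0.35]
print(hedges_g(dopers, non_dopers))  # negative: dopers score lower
```

A negative g here indicates the doper group's mean D-score lies below the non-doper group's, in units of the pooled standard deviation.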
Conclusion: This study demonstrates that athletes' attitudes to doping can be assessed indirectly with a reaction time-based test, and that their attitudes are related to their behavior. Although bodybuilders may be more willing to reveal their attitude to doping than other athletes, these results still provide evidence that the pictorial doping BIAT may be useful in athletes from other sports, perhaps as a complementary measure in evaluations of the effectiveness of doping prevention interventions.
Sedimentary proxies used to reconstruct marine productivity suffer from variable preservation and are sensitive to factors other than productivity. Therefore, proxy calibration is warranted. Here we map the spatial patterns of two paleoproductivity proxies, biogenic opal and barium fluxes, from a set of core-top sediments recovered in the Subarctic North Pacific. Comparisons of the proxy data with independent estimates of primary and export production, surface water macronutrient concentrations, and biological pCO(2) drawdown indicate that neither proxy shows a significant correlation with primary or export productivity for the entire region. Biogenic opal fluxes, when corrected for preservation using Th-230-normalized accumulation rates, show a good correlation with primary productivity along the volcanic arcs (tau = 0.71, p = 0.0024) and with export productivity throughout the western Subarctic North Pacific (tau = 0.71, p = 0.0107). Moderate and good correlations of biogenic barium flux with export production (tau = 0.57, p = 0.0022) and with surface water silicate concentrations (tau = 0.70, p = 0.0002) are observed for the central and eastern Subarctic North Pacific. For reasons unknown, however, no correlation is found in the western Subarctic North Pacific between biogenic barium flux and the reference data. Nonetheless, we show that barite saturation, uncertainty in the lithogenic barium corrections, and problems with the reference data sets are not responsible for the lack of a significant correlation between biogenic barium flux and the reference data. Further studies evaluating the factors controlling the variability of the biogenic constituents in the sediments are desirable in this region.
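The correlations reported in this abstract are Kendall rank correlations (tau). A minimal sketch of how such a tau and its p-value are computed, assuming SciPy's `kendalltau`; the opal flux and primary productivity values are invented for illustration and carry no real units.

```python
from scipy.stats import kendalltau

# Hypothetical core-top data: Th-230-normalized opal flux vs. primary
# productivity at a handful of sites (illustrative values only).
opal_flux = [1.2, 2.5, 0.8, 3.1, 1.9, 2.8, 0.5]
primary_prod = [110, 190, 95, 240, 210, 150, 80]

# Kendall's tau counts concordant vs. discordant site pairs, so it is
# robust to outliers and does not assume a linear relationship.
tau, p = kendalltau(opal_flux, primary_prod)
print(round(tau, 2), round(p, 4))
```

Rank-based measures like tau are a common choice for small core-top data sets, where a few extreme flux values would dominate a Pearson correlation.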
The International Association for the Evaluation of Educational Achievement (IEA) was formed in the 1950s (Postlethwaite, 1967). Since that time, the IEA has conducted many studies in the area of mathematics, such as the First International Mathematics Study (FIMS) in 1964, the Second International Mathematics Study (SIMS) in 1980-1982, and a series of studies beginning with the Third International Mathematics and Science Study (TIMSS), which has been conducted every four years since 1995. According to Stigler et al. (1999), in the FIMS and the SIMS, U.S. students achieved low scores in comparison with students in other countries (p. 1). The TIMSS 1995 "Videotape Classroom Study" was therefore a complement to the earlier studies, conducted to learn "more about the instructional and cultural processes that are associated with achievement" (Stigler et al., 1999, p. 1). The TIMSS Videotape Classroom Study is known today as the TIMSS Video Study. From the findings of the TIMSS 1995 Video Study, Stigler and Hiebert (1999) likened teaching to "mountain ranges poking above the surface of the water," whereby they implied that we might see the mountaintops, but we do not see the hidden parts underneath these mountain ranges (pp. 73-78). By watching the videotaped lessons from Germany, Japan, and the United States again and again, they discovered that "the systems of teaching within each country look similar from lesson to lesson. At least, there are certain recurring features [or patterns] that typify many of the lessons within a country and distinguish the lessons among countries" (pp. 77-78). They also discovered that "teaching is a cultural activity," so the systems of teaching "must be understood in relation to the cultural beliefs and assumptions that surround them" (pp. 85, 88). From this viewpoint, one of the purposes of this dissertation was to study some cultural aspects of mathematics teaching and relate the results to mathematics teaching and learning in Vietnam.
Another research purpose was to carry out a video study in Vietnam to find out the characteristics of Vietnamese mathematics teaching and compare these characteristics with those of other countries. In particular, this dissertation carried out the following research tasks:
- Studying the characteristics of teaching and learning in different cultures and relating the results to mathematics teaching and learning in Vietnam
- Introducing the TIMSS, the TIMSS Video Study and the advantages of using video studies to investigate mathematics teaching and learning
- Carrying out the video study in Vietnam to identify the image, scripts and patterns, and the lesson signature of eighth-grade mathematics teaching in Vietnam
- Comparing some aspects of mathematics teaching in Vietnam and other countries and identifying the similarities and differences across countries
- Studying the demands and challenges of innovating mathematics teaching methods in Vietnam – lessons from the video studies
Hopefully, this dissertation will be a useful reference for pre-service teachers at education universities to understand the nature of teaching and develop their teaching careers.
The nature of the links between speech production and perception has been the subject of longstanding debate. The present study investigated the articulatory parameter of tongue height and the acoustic F1–F0 difference for the phonological distinction of vowel height in American English front vowels. Multiple repetitions of /i, ɪ, e, ɛ, æ/ in [(h)Vd] sequences were recorded from seven adult speakers. Articulatory (ultrasound) and acoustic data were collected simultaneously to provide a direct comparison of variability in vowel production in both domains. Results showed idiosyncratic patterns of articulation for contrasting the three front vowel pairs /i-ɪ/, /e-ɛ/, and /ɛ-æ/ across subjects, with the degree of variability in vowel articulation comparable to that observed in the acoustics for all seven participants. However, contrary to what was expected, some speakers showed reversals of tongue height for /ɪ/-/e/ that were also reflected in the acoustics, with F1 higher for /ɪ/ than for /e/. The data suggest that the phonological distinction of height is conveyed via speaker-specific articulatory-acoustic patterns that do not strictly match feature descriptions. However, the acoustic signal is faithful to the articulatory configuration that generated it, carrying the crucial information for perceptual contrast.