Bad governance causes economic, social, developmental and environmental problems in many developing countries. Developing countries have adopted a number of reforms that have assisted in achieving good governance. The success of governance reform depends on the starting point of each country: what institutional arrangements exist at the outset, and who the people implementing reforms within the existing institutional framework are. This dissertation focuses on how formal institutions (laws and regulations) and informal institutions (culture, habit and conception) affect good governance. Three characteristics central to good governance (transparency, participation and accountability) are studied in the research.
The key findings were as follows: Governance in Hanoi and Berlin represents the two extremes of the scale; governance in Berlin is almost at the top, while governance in Hanoi is at the bottom. Good governance in Hanoi is still far from achieved. In Berlin, information about public policies, administrative services and public finance is available, reliable and understandable. People do not encounter any problems accessing public information. In Hanoi, however, public information is not easy to access. There are large differences between Hanoi and Berlin in the three forms of participation. While voting in Hanoi to elect local deputies is a formality and compulsory, elections in Berlin are free and fair. The candidates in local elections in Berlin come from different parties, whereas the candidacy of local deputies in Hanoi is thoroughly controlled by the Fatherland Front. Even though the turnout in local deputy elections is close to 90 percent in Hanoi, the legitimacy of both the elections and the process of representation is non-existent because the local deputy candidates are decided by the Communist Party.
The involvement of people in solving local problems is encouraged by the government in Berlin. Initiatives include citizens' budgets, citizen activities and citizen initiatives, and citizens are free to participate either individually or through an association.
Lacking transparency and participation, the quality of public service in Hanoi is poor. Citizens seldom get their services on time as required by the regulations. Citizens who want to receive public services can bribe officials directly, use the power of relationships, or pay a third person – the mediator ("Cò" - in Vietnamese).
In contrast, public service delivery in Berlin follows the customer-orientated principle. The quality of service is high in relation to time and cost. Paying speed money, bribery and using relationships to gain preferential public service do not exist in Berlin.
Using the examples of Berlin and Hanoi, it is easy to see how transparency, participation and accountability are interconnected and influence each other. Without free and fair elections, and without the participation of non-governmental organisations, civil organisations and the media in political decision-making and public action, it is hard to hold the Hanoi local government accountable.
The key institutional differences (regulative and cognitive) between Berlin and Hanoi reflect three main principles: rule of law vs. rule by law, political pluralism vs. a monopoly party, and social market economy vs. market economy with socialist orientation.
In Berlin the logic of appropriateness and codes of conduct are respect for laws, respect of individual freedom and ideas and awareness of community development. People in Berlin take for granted that public services are delivered to them fairly. Ideas such as using money or relationships to shorten public administrative procedures do not exist in the mind of either public officials or citizens.
In Hanoi, under a weak formal framework of good governance, new values and norms (prosperity, achievement) generated during the economic transition interact with habits from the centrally planned economy (lying, dependence, passivity) and with traditional values (hierarchy, harmony, family, collectivism), together shaping the behaviour of those involved.
In Hanoi, “doing the right thing”, such as complying with the law, has not become “the way it is”.
The unintended consequence of the deliberate reform actions of the Party is the prevalence of corruption. The socialist orientation seems not to have been achieved as the gap between the rich and the poor has widened.
Good governance is not achievable if citizens and officials are concerned only with their self-interest. State and society depend on each other. In theory, achieving good governance in Hanoi requires generating institutions (formal and informal) able to produce good citizens, officials and deputies. Good citizens are good by habit rather than by nature.
The rule of law principle is necessary for the professional performance of local administrations and People’s Councils. When the rule of law is applied consistently, the room for informal institutions to function will be reduced.
Promoting good governance in Hanoi is dependent on the need and desire to change the government and people themselves. Good governance in Berlin can be seen to be the result of the efforts of the local government and citizens after a long period of development and continuous adjustment.
Institutional transformation is always a long and complicated process because the change in formal regulations as well as in the way they are implemented may meet strong resistance from the established practice. This study has attempted to point out the weaknesses of the institutions of Hanoi and has identified factors affecting future development towards good governance. But it is not easy to determine how long it will take to change the institutional setting of Hanoi in order to achieve good governance.
Landslides are a hazard for humans and artificial structures. From an ecological point of view, they represent an important ecosystem disturbance, especially in tropical montane forests. Here, shallow translational landslides are a frequent natural phenomenon and one local determinant of high levels of biodiversity. In this paper, we apply weighted ensembles of advanced phenomenological models from statistics and machine learning to analyze the driving factors of natural landslides in a tropical montane forest in South Ecuador. We exclusively interpret terrain attributes, derived from a digital elevation model, as proxies for several driving factors of landslides and use them as predictors in our models, which are trained on a set of five historical landslide inventories. We check model generality by transferring the models in time, and use three common performance criteria (i.e. AUC, explained deviance and slope of the model calibration curve) both to compare several state-of-the-art modelling approaches and to create weighted model ensembles. Our results suggest that it is important to consider more than one single performance criterion.
Approaching our main question, we compare responses of weighted model ensembles that were trained on distinct functional units of landslides (i.e. initiation, transport and deposition zones). This way, we are able to show that it is quite possible to deduce driving factors of landslides, if the consistency between the training data and the processes is maintained. Opening the 'black box' of statistical models by interpreting univariate model response curves and relative importance of single predictors regarding their plausibility, we provide a means to verify this consistency.
With the exception of classification tree analysis, all techniques performed comparably well in our case study while being outperformed by weighted model ensembles. Univariate response curves of models trained on distinct functional units of landslides exposed different shapes following our expectations. Our results indicate the occurrence of landslides to be mainly controlled by factors related to the general position along a slope (i.e. ridge, open slope or valley) while landslide initiation seems to be favored by small scale convexities on otherwise plain open slopes.
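The ensemble-weighting idea described in this abstract can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: the model names and prediction scores are invented, and only one of the three performance criteria (AUC) is used as a weight.

```python
def auc(y_true, scores):
    """Area under the ROC curve via the Mann-Whitney rank-sum formulation."""
    pos = [s for s, y in zip(scores, y_true) if y == 1]
    neg = [s for s, y in zip(scores, y_true) if y == 0]
    # Each positive/negative pair scores 1 if ranked correctly, 0.5 on ties.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def weighted_ensemble(y_true, model_scores):
    """Average per-model landslide probabilities, weighting each model by its AUC."""
    weights = {name: auc(y_true, s) for name, s in model_scores.items()}
    total = sum(weights.values())
    return [sum(w * model_scores[name][i] for name, w in weights.items()) / total
            for i in range(len(y_true))]

# Invented example: 1 = landslide cell, 0 = stable cell; two hypothetical models.
y = [0, 0, 1, 1]
scores = {"glm": [0.1, 0.4, 0.35, 0.8],
          "gbm": [0.2, 0.3, 0.60, 0.9]}
ensemble = weighted_ensemble(y, scores)
```

A full implementation along the lines of the paper would combine several criteria (AUC, explained deviance, calibration slope) into the weights and evaluate them on temporally transferred hold-out inventories rather than on the training data.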
Purpose: Previous investigations have shown that poly(ether imide) (PEI) membranes can be functionalized with aminated macromolecules. In this study we explored whether the characterization of PEI functionalized with oligo(ethylene glycol) (OEG) or linear, side chain methylated oligoglycerols (OGMe), by angle-dependent X-ray induced photoelectron spectroscopy (XPS) can be used to prove the functionalization, give insight into the reaction mechanism and reveal the spatial distribution of the grafts.
Methods: PEI membranes were functionalized under alkaline conditions using an aqueous solution with 2 wt% of alpha-amino-methoxy oligo(ethylene glycol) (Mn = 1,320 g·mol⁻¹) or linear, side-chain-methylated monoamine oligoglycerols (Mn = 1,120, 1,800 or 2,270 g·mol⁻¹), respectively. The functionalized membranes were investigated using XPS measurements at different detector angles to enable comparison between the signals related to the bulk and surface volumes, and were compared with untreated and alkaline-treated PEI membranes.
Results: While at a perpendicular detector angle the bulk signals of the PEI were prominent, at larger surface volume-related detector angles, the signals for OGMe and OEG were determinable.
Conclusion: The surface functionalization of PEI with OEG and OGMe could be verified by angle-dependent XPS. The observations proved the functionalization at the PEI surface, as the polyethers were detected at angles providing signals of the surface volume. Furthermore, the chemical functionalities detected verified a covalent binding via the nucleophilic addition of the amine-functionalized OGMe and OEG to the PEI imide function.
Improving Hemocompatibility of poly(ether imide) by surface functionalization with polyethers
(2012)
Connecting the new world
(2012)
This article explores the link between the profound technological transformations of the nineteenth century and the life and work of the Prussian scholar Alexander von Humboldt (1769-1859). It analyses how Humboldt sought to appropriate the revolutionary new communication and transportation technologies of the time in order to integrate the American continent into global networks of commercial, intellectual and material exchange. Recent scholarship on Humboldt’s expedition to the New World (1799-1804) has claimed that his descriptions of tropical landscapes opened up South America to a range of ‘transformative interventions’ (Pratt) by European capitalists and investors. These studies, however, have not analysed the motivations underlying Humboldt’s support for such intrusions into nature. Furthermore, they have not explored the role that such projects played in shaping Humboldt’s understanding of the forces behind the progress of societies. To comprehend Humboldt’s approval of human interventions in America’s natural world, this study first explores the role that eighteenth-century theories of progress and the notion of geographical determinism played in shaping his conception of civilisational development. It then looks at concrete examples of transformative interventions in the American hemisphere that were actively proposed by Humboldt and intended to overcome natural obstacles to human interaction. These were the use of steamships, electric telegraphy, railroads and large-scale canals that together enabled global trade and communication to occur at an unprecedented pace. All these contemporary innovations are linked to the four motifs of nets, mobility, progress and acceleration, which were driving forces behind the ‘transformation of the world’ that took place in the course of the nineteenth century.
Objectives: To compare the impact of short-term training with resistance plus plyometric training (RT+P) or electromyostimulation plus plyometric training (EMS+P) on explosive force production in elite volleyball players. Design: Sixteen elite volleyball players of the first German division participated in a training study. Methods: The participants were randomly assigned to either the RT+P training group (n = 8) or the EMS+P training group (n = 8). Both groups participated in a 5-week lower extremity exercise program. Pre and post tests included squat jumps (SJ), countermovement jumps (CMJ), and drop jumps (DJ) on a force plate. The three-step reach height (RH) was assessed using a custom-made Vertec apparatus. Fifteen-metre straight and lateral sprints (S15s and S15l) were assessed using photoelectric cells with split times at 5 m and 10 m. Results: RT+P training resulted in significant improvements in SJ (+2.3%) and RH (+0.4%) performance. The EMS+P training group showed significant increases in performance of CMJ (+3.8%), DJ (+6.4%), RH (+1.6%), S15l (-3.8%) and after 5 m and 10 m of the S15s (-2.6%; -0.5%). The comparison of training-induced changes between the two intervention groups revealed significant differences for the SJ (p = 0.023) in favor of RT+P and for the S15s after 5 m (p = 0.006) in favor of EMS+P. Conclusions: The results indicate that RT+P training is effective in promoting jump performance, while EMS+P training increases jump, speed and agility performance of elite volleyball players. (c) 2012 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
The Sun is surrounded by a 10^6 K hot atmosphere, the corona. The corona and the solar wind are fully ionized, and therefore in the plasma state. Magnetic fields play an important role in a plasma, since they bind electrically charged particles to their field lines. EUV spectrometers, like the SUMER instrument on board the SOHO spacecraft, reveal a preferred heating of coronal ions and strong temperature anisotropies. Velocity distributions of electrons can be measured directly in the solar wind, e.g. with the 3DPlasma instrument on board the WIND satellite. They show a thermal core, an anisotropic suprathermal halo, and an anti-solar, magnetic-field-aligned beam or "strahl". For an understanding of the physical processes in the corona, an adequate description of the plasma is needed. Magnetohydrodynamics (MHD) treats the plasma simply as an electrically conductive fluid. Multi-fluid models consider e.g. protons and electrons as separate fluids. They enable a description of many macroscopic plasma processes. However, fluid models are based on the assumption of a plasma near thermodynamic equilibrium, whereas the solar corona is far from equilibrium. Furthermore, fluid models cannot describe processes like the interaction with electromagnetic waves on a microscopic scale. Kinetic models, which are based on particle velocity distributions, do not show these limitations, and are therefore well suited for an explanation of the observations listed above. In the simplest kinetic models, the mirror force in the interplanetary magnetic field focuses solar wind electrons into an extremely narrow beam, which is contradicted by observations. Therefore, a scattering mechanism must exist that counteracts the mirror force. In this thesis, a kinetic model for electrons in the solar corona and wind is presented that provides electron scattering by resonant interaction with whistler waves. The kinetic model reproduces the observed components of solar wind electron distributions, i.e.
core, halo, and a "strahl" with finite width. But the model is not only applicable on the quiet Sun. The propagation of energetic electrons from a solar flare is studied, and it is found that scattering in the direction of propagation and energy diffusion influence the arrival times of flare electrons at Earth approximately to the same degree. In the corona, the interaction of electrons with whistler waves does not only lead to scattering, but also to the formation of a suprathermal halo, as it is observed in interplanetary space. This effect is studied both for the solar wind as well as the closed volume of a coronal magnetic loop. The result is of fundamental importance for solar-stellar relations. The quiet solar corona always produces suprathermal electrons. This process is closely related to coronal heating, and can therefore be expected in any hot stellar corona. In the second part of this thesis it is detailed how to calculate growth or damping rates of plasma waves from electron velocity distributions. The emission and propagation of electron cyclotron waves in the quiet solar corona, and that of whistler waves during solar flares, is studied. The latter can be observed as so-called fiber bursts in dynamic radio spectra, and the results are in good agreement with observed bursts.
We investigate the crust, upper mantle and mantle transition zone of the Cape Verde hotspot by using seismic P and S receiver functions from several tens of local seismograph stations. We find a strong discontinuity at a depth of ~10 km underlain by a ~15-km-thick layer with a high (~1.9) Vp/Vs velocity ratio. We interpret this discontinuity and the underlying layer as the fossil Moho, inherited from the pre-hotspot era, and the plume-related magmatic underplate. Our uppermost-mantle models are very different from those previously obtained for this region: our S velocity is much lower and there are no indications of low densities. Contrary to previously published arguments for a standard transition zone thickness, our data indicate that this thickness under the Cape Verde islands is up to ~30 km less than in the ambient mantle. This reduction is a combined effect of a depression of the 410-km discontinuity and an uplift of the 660-km discontinuity. The uplift is in contrast to laboratory data and some seismic data suggesting a negligible dependence of the depth of the 660-km discontinuity on temperature in hotspots. The large negative pressure-temperature slope suggested by our data implies that the 660-km discontinuity may resist passage of the plume.
Our data reveal beneath the islands a reduction of S velocity of a few percent between 470-km and 510-km depths. The low-velocity layer in the upper transition zone under the Cape Verde archipelago is very similar to that previously found under the Azores and a few other hotspots. In the literature there are reports of a regional 520-km discontinuity, the impedance of which is too large to be explained by the known phase transitions. Our observations suggest that the 520-km discontinuity may represent the base of the low-velocity layer in the transition zone.
P receiver functions from 23 stations of the SASE experiment in southern Africa are inverted simultaneously with SKS waveforms for azimuthal anisotropy in the upper mantle. Our analysis resolves the long-standing issue of depth dependence and origins of anisotropy beneath southern Africa. In the uppermost mantle we observe anisotropy with a nearly E-W fast direction, parallel to the trend of the Limpopo belt. This anisotropy may be frozen since the Archean. At a depth of 160 km the fast direction of anisotropy changes to 40 degrees and becomes close to the recent plate motion direction. This transition is nearly coincident in depth with activation of dominant glide systems in olivine and with a pronounced change in other properties of the upper mantle. Another large change in the fast direction of anisotropy corresponds to the previously found low-S-velocity layer atop the 410-km discontinuity. Citation: Vinnik, L., S. Kiselev, M. Weber, S. Oreshin, and L. Makeyeva (2012), Frozen and active seismic anisotropy beneath southern Africa, Geophys. Res. Lett., 39, L08301, doi: 10.1029/2012GL051326.
Recent studies have claimed the existence of very massive stars (VMS) up to 300 M⊙ in the local Universe. As this finding may represent a paradigm shift for the canonical stellar upper-mass limit of 150 M⊙, it is timely to discuss the status of the data, as well as the far-reaching implications of such objects. We held a Joint Discussion at the General Assembly in Beijing to discuss (i) the determination of the current masses of the most massive stars, (ii) the formation of VMS, (iii) their mass loss, and (iv) their evolution and final fate. The prime aim was to reach broad consensus between observers and theorists on how to identify and quantify the dominant physical processes.
Background: Endothelin-1 (ET-1) is a multifunctional peptide that is involved in renal and cardiac physiology as well as in many pathologies of these systems. ET-1 acts through the activation of two receptors, ETA and ETB, whose expression may be modulated during the pathological process. The analysis of the distribution and expression level of these receptors in animal models is therefore crucial.
Methods: We developed a protocol for non-radioactive in situ hybridization for the mRNA of the two endothelin receptors on paraffin-embedded tissue using digoxigenin-labeled RNA probes.
Results: In heart and kidney, the staining was reliable and specific. In a mouse model for endothelin/nitric oxide imbalance, cardiac ETB expression was reduced. The distribution of the receptors was in accordance with the actual knowledge. Differences in cell specific expression are discussed.
Conclusions: We developed a protocol for the in situ hybridization of the endothelin receptors in mice. Given that the endothelin system is implicated in the development of many diseases, we believe that this protocol may be useful for a number of future preclinical studies.
Background: The aim was to assess the chronic effect of the DPP-4 inhibitor linagliptin alone, in combination with exenatide, and during exenatide withdrawal, in diet-induced obese (DIO) rats.
Methods: Female Wistar rats were exposed to a cafeteria diet to induce obesity. Animals were then dosed orally once daily with vehicle or linagliptin (3 mg/kg) for a 28-day period. In a subsequent study, rats received exenatide (either 3 or 30 μg/kg/day) or vehicle by osmotic mini-pump for 28 days. In addition, groups of animals were dosed orally with linagliptin either alone or in combination with a 3 μg/kg/day exenatide dose for the study duration. In a final study, rats were administered exenatide (30 μg/kg/day) or vehicle by osmotic mini-pump for eleven days. Subsequently, exenatide-treated animals were transferred to vehicle or continued exenatide infusion for a further ten days. Animals transferred from exenatide to vehicle were also dosed orally with either vehicle or linagliptin. In all studies, body weight and food and water intake were recorded daily, and relevant plasma parameters and carcass composition were determined.
Results: In contrast to exenatide, linagliptin did not significantly reduce body weight or carcass fat in DIO rats versus controls. Linagliptin augmented the effect of exenatide to reduce body fat when given in combination but did not affect the body weight response. In rats withdrawn from exenatide, weight regain was observed such that body weight was not significantly different to controls. Linagliptin reduced weight regain after withdrawal of exenatide such that a significant difference from controls was evident.
Conclusions: These data demonstrate that linagliptin does not significantly alter body weight in either untreated or exenatide-treated DIO rats, although it delays weight gain after exenatide withdrawal. This finding may suggest the utility of DPP-4 inhibitors in reducing body weight during periods of weight gain.
1. Atmospheric nitrogen (N) deposition is expected to change forest understorey plant community composition and diversity, but results of experimental addition studies and observational studies are not yet conclusive. A shortcoming of observational studies, which are generally based on resurveys or sampling along large deposition gradients, is the occurrence of temporal or spatial confounding factors.
2. We were able to assess the contribution of N deposition versus other ecological drivers on forest understorey plant communities by combining a temporal and spatial approach. Data from 1205 (semi-)permanent vegetation plots taken from 23 rigorously selected understorey resurvey studies along a large deposition gradient across deciduous temperate forest in Europe were compiled and related to various local and regional driving factors, including the rate of atmospheric N deposition, the change in large herbivore densities and the change in canopy cover and composition.
3. Although no directional change in species richness occurred, there was considerable floristic turnover in the understorey plant community and a shift in species composition towards more shade-tolerant and nutrient-demanding species. However, atmospheric N deposition was not important in explaining the observed eutrophication signal. This signal seemed mainly related to a shift towards a denser canopy cover and a changed canopy species composition with a higher share of species with more easily decomposed litter.
4. Synthesis. Our multi-site approach clearly demonstrates that one should be cautious when drawing conclusions about the impact of atmospheric N deposition from plant community shifts in single sites or regions, because other, concurrent ecological changes may be at work. Even though the effects of chronically increased N deposition on the forest plant communities are apparently obscured by the effects of canopy changes, the accumulated N might still have a significant impact. However, more research is needed to assess whether this N time bomb will indeed explode when canopies open up again.
An electrochemical detection system specifically designed for multi-parameter real-time monitoring of stem cell culturing/differentiation in a microfluidic system is presented. It is composed of a very compact 24-channel electronic board, compatible with arrays of microelectrodes and coupled to a microfluidic cell culture system. Versatile data-acquisition software enables performing amperometry, cyclic voltammetry and impedance spectroscopy in each of the 12 independent chambers over a 100 kHz bandwidth with current resolution down to 5 pA for a 100 ms measuring time. The design of the platform, its realization and its experimental characterization are reported, with emphasis on the impact of input capacitance (i.e., microelectrode size) and microfluidic pump operation on current noise. Programmable sequences of successive injections of analytes (ferricyanide and dopamine) and rinsing buffer solution, as well as seven days of continuous impedimetric tracking of the proliferation of a colony of PC12 cells, are successfully demonstrated.
The enzyme penicillin G acylase (EC 3.5.1.11) catalyzes amide-bond cleavage in benzylpenicillin (penicillin G) to yield 6-aminopenicillanic acid, an intermediate chemical used in the production of semisynthetic penicillins. A thermostable penicillin G acylase from Alcaligenes faecalis (AfPGA) has been crystallized using the hanging-drop vapour-diffusion method in two different space groups: C222₁, with unit-cell parameters a = 72.9, b = 86.0, c = 260.2 Å, and P4₁2₁2, with unit-cell parameters a = b = 85.6, c = 298.8 Å. Data were collected at 293 K and the structure was determined using the molecular-replacement method. Like other penicillin acylases, AfPGA belongs to the N-terminal nucleophile hydrolase superfamily, has undergone post-translational processing and has a serine as the N-terminal residue of the beta-chain. A disulfide bridge has been identified in the structure that was not found in the other two known penicillin G acylase structures. The presence of the disulfide bridge is perceived to be one factor that confers higher stability to this enzyme.
Ostracodes (Ostracoda, Crustacea) are aquatic micro-crustaceans with a significant representation in the fossil record. If the environmental influence on the species composition of their communities is robustly quantified, past changes in ostracode communities reflected in fossil assemblages can be used for paleo-environmental reconstruction. We analyzed ostracode assemblages in recently deposited surface sediments from 56 lakes in western and central Mongolia, and simultaneously recorded local water chemistry and solute concentration in order to elucidate the distribution of individual ostracode species in relation to these broad environmental gradients. Multivariate analysis indicated that the species variation in ostracode assemblages could be mainly attributed to variations in percent calcium (%Ca) relative to total cation content, mean annual precipitation, calcium concentration, alkalinity, percent bicarbonate relative to total anion content, and mean July temperature. This matches well with the results of a similar analysis on presence/absence data of living ostracodes in nearshore samples, even though some differences exist between the faunal composition of both datasets. The documented response of ostracode species to environmental variation tracks the typical solute evolutionary pathway for surface waters in this region, characterized by calcite precipitation and consequent depletion in dissolved calcium. Hence, the best quantitative inference model (WA-PLS model with jackknifed R² = 0.70, RMSEP = 0.40) for paleolimnological application was obtained for %Ca. Comparison between this model and a specific conductance (SC) inference model based on the same dataset, and based on ostracode datasets from different regions, indicated that the %Ca inference model suffers less than the SC inference model from a step-change in reconstructed values.
The statistical power of different inference models based on Mongolian ostracodes are variously affected by the common dominance of a single euryhaline species (Limnocythere inopinata), limited faunal turnover in the freshwater portion of the salinity gradient, and the bimodal frequency distribution of SC among regional lakes. The latter probably represents true scarcity of lakes with intermediate salinity rather than a biased representation in our dataset. In a broader context of ostracode ecology, and with respect to regional paleolimnological applications, we highlight the potential of fossil Mongolian ostracode assemblages to trace past hydrological shifts associated with changes in groundwater inflow.
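The transfer-function logic behind such inference models can be illustrated with the simpler two-way weighted-averaging (WA) approach that underlies WA-PLS. This is a sketch under stated assumptions: the lakes, taxa abundances and %Ca values below are invented, and a real application would add the PLS components, jackknife cross-validation (RMSEP) and dataset screening described above.

```python
def wa_optima(abundances, env):
    """Estimate each species' optimum as the abundance-weighted mean of the
    environmental variable (e.g. %Ca) across the training lakes.

    abundances: dict mapping species -> list of abundances, one per lake
    env:        list of observed environmental values, one per lake
    """
    return {sp: sum(a * e for a, e in zip(ab, env)) / sum(ab)
            for sp, ab in abundances.items()}

def wa_reconstruct(sample, optima):
    """Infer the environmental value for a (fossil) assemblage as the
    abundance-weighted mean of the species optima."""
    total = sum(sample.values())
    return sum(count * optima[sp] for sp, count in sample.items()) / total

# Invented training set: three lakes with known %Ca and two taxa.
train = {"Limnocythere inopinata": [5, 1, 0],
         "Candona sp.":            [0, 2, 4]}
pct_ca = [10.0, 20.0, 40.0]

optima = wa_optima(train, pct_ca)
# Fossil assemblage with equal abundances of both taxa:
inferred = wa_reconstruct({"Limnocythere inopinata": 3, "Candona sp.": 3}, optima)
```

WA-PLS extends this by extracting successive components that correct the compression of simple weighted averaging at the ends of the gradient, which is why model skill is reported with a cross-validated statistic such as the jackknifed R² quoted above.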