Several chronometric biases in numerical cognition have informed our understanding of a mental number line (MNL). Complementing this approach, we investigated spatial performance in a magnitude comparison task. Participants located the larger or smaller number of a pair on a horizontal line representing the interval from 0 to 10. Experiments 1 and 2 used only number pairs one unit apart and found that digits were localized farther to the right with "select larger" instructions than with "select smaller" instructions. However, when numerical distance was varied (Experiment 3), digits were localized away from numerically near neighbors. This repulsion effect reveals context-specific distortions in number representation not previously noticed with chronometric measures.
Integrated and concurrent cultures in rice fields are a promising approach to sustainable farming, as the demand for aquacultural and agricultural products continues to grow while land and water resources become increasingly scarce. Prawn farming mainly takes place in coastal regions in improved extensive to semi-intensive aquacultures, but a trend towards shifting the industry to inland regions has been noted. This inland study in Northern Bangladesh used different input regimes, such as fertilizer and additional feed, to compare the performance of prawn and fish in flooded paddy fields with regard to water quality. Maximal net yields and body weight gain with minimal negative impact on water quality were found when the initial body weights of prawn were optimized. Given the cost reductions from avoiding expensive fertilizer/feed and labour, and the higher market value of prawn, prawn performed better than integrated fish cultures, with net yields of up to 97 +/- 55 kg ha(-1) for unfed and 151 +/- 61 kg ha(-1) for fed treatments. Rice yields of up to 4.7 +/- 0.1 t ha(-1) for unfed and 4.4 +/- 0.1 t ha(-1) for fed treatments were achieved. The findings suggest that for small-scale farmers, prawn-cum-rice cultures are an economically profitable and comparatively easily manageable alternative to rice-cum-fish cultures.
Two optically obscured Wolf-Rayet (WR) stars have recently been discovered by means of their infrared (IR) circumstellar shells, which show signatures of interaction with each other. Following the systematics of the WR star catalogues, these stars are designated WR 120bb and WR 120bc. In this paper, we present and analyse new near-IR J-, H- and K-band spectra using the Potsdam Wolf-Rayet model atmosphere code. For that purpose, the atomic database of the code has been extended to include all significant lines in the near-IR bands.
The spectra of both stars are classified as WN9h. As their spectra are very similar, the parameters obtained by the spectral analyses hardly differ. Despite their late spectral subtype, we found relatively high stellar temperatures of 63 kK. The wind composition is dominated by helium, while hydrogen is depleted to 25 per cent by mass.
Because of their location in the Scutum-Centaurus Arm, WR 120bb and WR 120bc appear highly reddened, A(Ks) approximate to 2 mag. We adopt a common distance of 5.8 kpc to both stars, which complies with the typical absolute K-band magnitude for the WN9h subtype of -6.5 mag, is consistent with their observed extinction based on comparison with other massive stars in the region, and allows for the possibility that their shells are interacting with each other. This leads to luminosities of log(L/L-circle dot) = 5.66 and 5.54 for WR 120bb and WR 120bc, with large uncertainties due to the adopted distance.
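As a back-of-the-envelope check (not part of the original analysis), the quoted distance, extinction, and absolute K-band magnitude can be combined via the standard distance-modulus relation m = M + 5·log10(d/10 pc) + A. The function name and rounding below are mine:

```python
import math

def apparent_magnitude(abs_mag, distance_pc, extinction_mag):
    """Standard distance-modulus relation: m = M + 5*log10(d/10pc) + A."""
    return abs_mag + 5.0 * math.log10(distance_pc / 10.0) + extinction_mag

# Values quoted in the text: M_K = -6.5 mag, d = 5.8 kpc, A_Ks ~ 2 mag
m_K = apparent_magnitude(-6.5, 5800.0, 2.0)
print(round(m_K, 2))  # -> 9.32, the observed K magnitude implied by these parameters
```

This only verifies the internal consistency of the quoted numbers; the luminosities themselves follow from the full spectral analysis.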
The values of the luminosities of WR 120bb and WR 120bc imply that the immediate precursors of both stars were red supergiants (RSG). This implies in turn that the circumstellar shells associated with WR 120bb and WR 120bc were formed by interaction between the WR wind and the dense material shed during the preceding RSG phase.
Management of data to produce scientific knowledge is a key challenge for biological research in the 21st century. Emerging high-throughput technologies allow life science researchers to produce big data at speeds and in amounts that were unthinkable just a few years ago. This places high demands on all aspects of the workflow: from data capture (including the constraints of the experimental design), through analysis and preservation, to peer-reviewed publication of results. Failure to recognise the issues at each level can lead to serious conflicts and mistakes; research may then be compromised as a result of the publication of incoherent protocols, or the misinterpretation of published data. In this report, we present the results from a workshop that was organised to create an ontological data-modelling framework for Laboratory Protocol Standards for the Molecular Methods Database (MolMeth). The workshop provided a set of short- and long-term goals for the MolMeth database, the most important being the decision to use the established EXACT description of biomedical ontologies as a starting point.
We tested the limits of working-memory capacity (WMC) of young adults, old adults, and children with a memory-updating task. The task consisted of mentally shifting spatial positions within a grid according to arrows, whose color signaled either go-only (control) or go/no-go conditions. The interference model (IM) of Oberauer and Kliegl (2006) was fitted simultaneously to the data of all groups. In addition to the 3 main model parameters (feature overlap, noise, and processing rate), we estimated the time for switching between go and no-go steps as a new model parameter, thereby examining the IM parameters across the life span. The parameter estimates show that (a) conditions did not differ in interference by feature overlap or interference by confusion; (b) switching costs time; (c) young adults and children were less susceptible than old adults to interference due to feature overlap; (d) noise was highest for children, followed by old and young adults; (e) old adults differed from children and young adults in a lower processing rate; and (f) children and old adults had a larger switch cost between go and no-go steps. Thus, across age, the IM parameters contribute distinctively to explaining the limits of WMC.
Wohin nach der 10. Klasse?
(2013)
In the life course, occupational choice is a central developmental task. Through the institutionalisation of the life course in modern societies, this process is also accompanied institutionally. In cooperation with the Bundesagentur für Arbeit, schools organise career-orientation programmes intended, among other things, to support the development of career-choice readiness. Alongside parents, school and careers counselling thus become central intermediaries (gatekeepers) in the transition from school to vocational training. When analysing the career-choice process, it is important to consider the interaction between "environment and person": how do adolescents manage to master this developmental task by drawing on personal and social resources and within societal structures? This question is not fundamentally new, but it gains considerable importance under current societal and economic transition conditions. In recent years, schools have increasingly begun to organise and develop their career orientation systematically. The abundance of newly developed concepts and programmes for improving career orientation, however, is out of all proportion to the state of empirical research. This study is therefore guided by the central aim of extending the empirical evidence on the effects of school-based career-orientation programmes. At its centre is the question of how the school-based career-orientation process can be optimised for pupils of all educational tracks to improve the transition into further education and training systems. Of particular interest is whether and to what extent school programmes influence the development of pupils' career-choice readiness, and which programmes must be judged particularly supportive or less useful.
These questions were addressed on the basis of written surveys of secondary-school pupils in the state of Brandenburg conducted between 2008 and 2010. Cross-sectional and panel analyses yield statements about the perception and influence of the various school programmes, both for individual grade levels and in comparison across grade levels.
Deserts are a major source of loess and may undergo substantial wind-erosion as evidenced by yardang fields, deflation pans, and wind-scoured bedrock landscapes. However, there are few quantitative estimates of bedrock removal by wind abrasion and deflation. Here, we report wind-erosion rates in the western Qaidam Basin in central China based on measurements of cosmogenic Be-10 in exhumed Miocene sedimentary bedrock. Sedimentary bedrock erosion rates range from 0.05 to 0.4 mm/yr, although the majority of measurements cluster at 0.125 +/- 0.05 mm/yr. These results, combined with previous work, indicate that strong winds, hyper-aridity, exposure of friable Neogene strata, and ongoing rock deformation and uplift in the western Qaidam Basin have created an environment where wind, instead of water, is the dominant agent of erosion and sediment transport. Its geographic location (upwind) combined with volumetric estimates suggest that the Qaidam Basin is a major source (up to 50%) of dust to the Chinese Loess Plateau to the east. The cosmogenically derived wind erosion rates are within the range of erosion rates determined from glacial and fluvial dominated landscapes worldwide, exemplifying the effectiveness of wind to erode and transport significant quantities of bedrock.
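For illustration only: steady-state cosmogenic-nuclide theory relates a measured Be-10 concentration N (atoms/g) to an erosion rate ε via N = P/(λ + ρε/Λ), which the sketch below inverts. The numerical values (production rate, density, attenuation length, concentration) are hypothetical placeholders chosen to land near the 0.125 mm/yr cluster quoted above; they are not data from the study:

```python
import math

def erosion_rate_mm_per_yr(N, P, rho=2.4, Lam=160.0, half_life=1.387e6):
    """Steady-state cosmogenic erosion-rate estimate.

    N: measured Be-10 concentration (atoms/g)
    P: surface production rate (atoms/g/yr)
    rho: rock density (g/cm^3); Lam: attenuation length (g/cm^2)
    Solves N = P / (lambda + rho*eps/Lam) for eps, returned in mm/yr.
    """
    lam = math.log(2) / half_life          # Be-10 decay constant (1/yr)
    eps_cm = (Lam / rho) * (P / N - lam)   # erosion rate in cm/yr
    return eps_cm * 10.0                   # convert cm/yr -> mm/yr

# Hypothetical inputs chosen only to illustrate the magnitude in the text
print(round(erosion_rate_mm_per_yr(N=1.6e5, P=30.0), 3))  # -> 0.125
```

In practice, production rates are scaled for latitude, altitude, and shielding before this inversion is applied.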
Does a liberal canon of values really entail generative self-determination, that is, far-reaching parental freedom of action in eugenic measures, as proponents of a "liberal eugenics" assert? This thesis discusses the role of the state and the scope of parental action in the genetic shaping of offspring within a liberal understanding of values.
The focus of the discussion is on measures of genetic enhancement.
In addition, the relationship of "liberal eugenics" to "authoritarian eugenics" is re-examined.
The investigation begins with an analysis of central liberal values and norms, such as freedom, autonomy, and justice, and their functions within "liberal eugenics". Strictly speaking, one can speak of "liberal eugenics" only in a limited sense; rather, there are variants of a "liberal eugenics".
Furthermore, the thesis examines and compares the historical development of "liberal" and "authoritarian" eugenics, especially social Darwinism, particularly with regard to liberal values and norms and generative self-determination.
The core of the thesis is the comparison of "liberal eugenics" with "liberal education", since it is here that the fundamental responsibilities of parents, and also of the state, are analysed and their relationship discussed.
It emerges that no extensive generative self-determination can be derived from a liberal understanding of values; rather, narrow, state-controlled limits on eugenic measures can be justified for the good of the future person.
Moreover, the path to authoritarian eugenics was paved not by abandoning generative self-determination, but rather by transferring the idea of progress onto human beings themselves. Generative self-determination thus also loses its function as a firewall against authoritarian eugenics. It is not the loss of generative self-determination but rather the idea of perfecting human beings that must be viewed critically and ultimately rejected.
Without generative self-determination and the perfection of human beings, only a basic eugenics remains, one that safeguards a person's capacity for development but not his or her enhancement.
Beyond this, the future person's opportunity for development must also be considered: a minimal potential for social integration must be given. Only if society genuinely has no way of integrating a person and offering them an opportunity for development would eugenic measures be acceptable as a last resort.
Matrix product states and their continuous analogues are variational classes of states that capture quantum many-body systems or quantum fields with low entanglement; they are at the basis of the density-matrix renormalization group method and continuous variants thereof. In this work we show that, generically, N-point functions of arbitrary operators in discrete and continuous translation invariant matrix product states are completely characterized by the corresponding two- and three-point functions. Aside from having important consequences for the structure of correlations in quantum states with low entanglement, this result provides a new way of reconstructing unknown states from correlation measurements, e.g., for one-dimensional continuous systems of cold atoms. We argue that such a relation of correlation functions may help in devising perturbative approaches to interacting theories.
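As a minimal illustration of the objects involved (not the authors' result or proof), low-order correlation functions of a translation-invariant matrix product state can be evaluated with transfer matrices. The sketch below computes a two-point function on a periodic chain; the function names and the random test tensor are my own:

```python
import numpy as np

def transfer(A, O=None):
    """Doubled-space transfer matrix of the MPS tensor A[s, a, b];
    with an operator O it becomes the 'dressed' transfer matrix."""
    d, D, _ = A.shape
    if O is None:
        O = np.eye(d)
    E = np.zeros((D * D, D * D), dtype=complex)
    for s in range(d):
        for t in range(d):
            E += O[s, t] * np.kron(A[t], A[s].conj())
    return E

def two_point(A, O, N, r):
    """<O_1 O_{1+r}> on a periodic N-site translation-invariant MPS."""
    E = transfer(A)
    EO = transfer(A, O)
    num = np.trace(EO @ np.linalg.matrix_power(E, r - 1)
                   @ EO @ np.linalg.matrix_power(E, N - r - 1))
    den = np.trace(np.linalg.matrix_power(E, N))
    return (num / den).real

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3, 3))   # physical dim 2, bond dim 3, unnormalized
Z = np.diag([1.0, -1.0])             # Pauli-Z as the probe operator
c = two_point(A, Z, N=12, r=3)       # <Z_1 Z_4> on a 12-site ring
```

Higher N-point functions are built from the same transfer matrices with more operator insertions, which is what makes the reduction to two- and three-point data discussed above plausible.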
Where girls take the role of boys in CS - attitudes of CS students in a female-dominated environment
(2013)
Eye movement data have proven to be very useful for investigating human sentence processing. Eyetracking research has addressed a wide range of questions, such as recovery mechanisms following garden-pathing, the timing of processes driving comprehension, the role of anticipation and expectation in parsing, the role of semantic, pragmatic, and prosodic information, and so on. However, there are some limitations regarding the inferences that can be made on the basis of eye movements. One relates to the nontrivial interaction between parsing and the eye movement control system, which complicates the interpretation of eye movement data. Detailed computational models that integrate parsing with eye movement control theories have the potential to unpack the complexity of eye movement data and can therefore aid in the interpretation of eye movements. Another limitation is the difficulty of capturing spatiotemporal patterns in eye movements using the traditional word-based eyetracking measures. Recent research has demonstrated the relevance of these patterns and has shown how they can be analyzed. In this review, we focus on reading, and present examples demonstrating how eye movement data reveal what events unfold when the parser runs into difficulty, and how the parsing system interacts with eye movement control. WIREs Cogn Sci 2013, 4:125-134. doi: 10.1002/wcs.1209
The preference for fruits and vegetables is the main predictor of long-term healthy eating behavior. Many factors affect the development of food preferences; familiarity with different foods appears to be a particular aspect associated with the corresponding preference. To establish a preference for fruits and vegetables during early childhood, we need to know more about the factors that affect this preference development. So far, research has mostly concentrated on food intake and less on the corresponding preference. Additionally, it is often based on studies of the mere-exposure effect or on older children and their ability to label fruits and vegetables correctly. Findings about the level of food familiarity in young children and its relation to actual food preference are still missing. Our study focuses on different aspects of food familiarity as well as on their relationship to the child's preference, and presents results from 213 children aged 2 to 10 years. Using standardized photos, food preference was measured with a computer-based method that ran automatically, without influence from parents or interviewer. The children knew fewer of the presented vegetables (66%) than fruits or sweets (78% each). About the same proportion of vegetables (63%) had already been tasted by the children and were considered tasty. Only 48% of the presented vegetables were named correctly, an ability that increases with age. Concerning the relationship between familiarity with vegetables and their preference, the different familiarity aspects showed that vegetables of lower preference were less often recognized, tasted, considered tasty, or named correctly.
In sedimentary basins, rock thermal conductivity can vary both laterally and vertically, thus altering the basin’s thermal structure locally and regionally. Knowledge of the thermal conductivity of geological formations and its spatial variations is essential, not only for quantifying basin evolution and hydrocarbon maturation processes, but also for understanding geothermal conditions in a geological setting. In conjunction with the temperature gradient, thermal conductivity represents the basic input parameter for the determination of the heat-flow density; which, in turn, is applied as a major input parameter in thermal modeling at different scales. Drill-core samples, which are necessary to determine thermal properties by laboratory measurements, are rarely available and often limited to previously explored reservoir formations. Thus, thermal conductivities of Mesozoic rocks in the North German Basin (NGB) are largely unknown. In contrast, geophysical borehole measurements are often available for the entire drilled sequence. Therefore, prediction equations to determine thermal conductivity based on well-log data are desirable. In this study rock thermal conductivity was investigated on different scales by (1) providing thermal-conductivity measurements on Mesozoic rocks, (2) evaluating and improving commonly applied mixing models which were used to estimate matrix and pore-filled rock thermal conductivities, and (3) developing new well-log based equations to predict thermal conductivity in boreholes without core control. Laboratory measurements are performed on sedimentary rock of major geothermal reservoirs in the Northeast German Basin (NEGB) (Aalenian, Rhaethian-Liassic, Stuttgart Fm., and Middle Buntsandstein). Samples are obtained from eight deep geothermal wells that approach depths of up to 2,500 m. Bulk thermal conductivities of Mesozoic sandstones range between 2.1 and 3.9 W/(m∙K), while matrix thermal conductivity ranges between 3.4 and 7.4 W/(m∙K). 
Local heat flow for the Stralsund location averages 76 mW/m², which is in good agreement with values reported previously for the NEGB. For the first time, in-situ bulk thermal conductivity is indirectly calculated for entire borehole profiles in the NEGB using the determined surface heat flow and measured temperature data. Average bulk thermal conductivity, derived for geological formations within the Mesozoic section, ranges between 1.5 and 3.1 W/(m∙K). The measurement of both dry and water-saturated thermal conductivities allows further evaluation of different two-component mixing models which are often applied in geothermal calculations (e.g., arithmetic mean, geometric mean, harmonic mean, Hashin-Shtrikman mean, and effective-medium theory mean). The geometric-mean model shows the best correlation between calculated and measured bulk thermal conductivity. However, by applying new model-dependent correction equations, the quality of fit could be significantly improved and the error scatter of each model reduced. The 'corrected' geometric mean provides the most satisfying results and constitutes a universally applicable model for sedimentary rocks. Furthermore, lithotype-specific and model-independent conversion equations are developed, permitting the calculation of water-saturated thermal conductivity from dry-measured thermal conductivity and porosity within an error range of 5 to 10%. The limited availability of core samples and the expense of core-based laboratory measurements make it worthwhile to use petrophysical well logs to determine thermal conductivity for sedimentary rocks. The approach followed in this study is based on detailed analyses of the relationships between the thermal conductivity of the rock-forming minerals most abundant in sedimentary rocks and the properties measured by standard logging tools.
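The two-component geometric-mean model discussed above has a simple closed form, λ_bulk = λ_matrix^(1−φ)·λ_fluid^φ, which can also be inverted to recover matrix conductivity from a bulk measurement. A minimal sketch; the porosity and conductivity values are illustrative placeholders of mine, not measurements from the study:

```python
def geometric_mean_conductivity(lam_matrix, lam_fluid, porosity):
    """Two-component geometric-mean mixing model:
    lam_bulk = lam_matrix**(1 - phi) * lam_fluid**phi  [W/(m K)]"""
    return lam_matrix ** (1.0 - porosity) * lam_fluid ** porosity

def matrix_from_bulk(lam_bulk, lam_fluid, porosity):
    """Invert the geometric mean to recover matrix conductivity."""
    return (lam_bulk / lam_fluid ** porosity) ** (1.0 / (1.0 - porosity))

# Hypothetical sandstone: 15% porosity, water-filled pores (lam ~ 0.6 W/(m K))
bulk = geometric_mean_conductivity(5.0, 0.6, 0.15)
print(round(bulk, 2))  # -> 3.64, within the 2.1-3.9 W/(m K) range reported above
```

The model-dependent corrections mentioned in the text would adjust the output of such a mixing law empirically against laboratory measurements.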
Using multivariate statistics separately for clastic, carbonate, and evaporite rocks, the findings from these analyses allow the development of prediction equations from large artificial data sets that predict matrix thermal conductivity within an error of 4 to 11%. These equations are validated successfully on a comprehensive subsurface data set from the NGB. In comparison to earlier published approaches developed formation-dependently for certain areas, the newly developed equations show a significant error reduction of up to 50%. These results are used to infer rock thermal conductivity for entire borehole profiles. By inversion of corrected in-situ thermal-conductivity profiles, temperature profiles are calculated and compared to measured high-precision temperature logs. The resulting uncertainty in temperature prediction averages < 5%, which demonstrates the excellent temperature-prediction capability of the presented approach. In conclusion, data and methods are provided to achieve a much more detailed parameterization of thermal models.
We know exactly what you want: the development of a completely individualised conjoint analysis
(2013)
Improving the predictive validity of conjoint analysis has been an important research objective for many years. Whereas the majority of attempts have focused on different approaches to preference modelling, data collection, or product presentation, only a few scholars have tried to improve predictive validity by individualising conjoint designs. This comes as a surprise, because many markets show increased demand for customised products and highly heterogeneous customer preferences. Against this background, the authors develop a conjoint variant based on a completely individualised conjoint design. More concretely, the new approach individualises not only the attributes, but also the attribute levels. The results of a comprehensive empirical study yield a significantly higher validity than existing standardised-level conjoint approaches. Consequently, they help marketers to gain deeper insights into their customers' preferences.
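As background (a schematic textbook sketch, not the authors' individualised method): in a standard conjoint design, part-worth utilities for dummy-coded attribute levels can be estimated per respondent by least squares. The attribute levels and ratings below are invented purely for illustration:

```python
import numpy as np

# Each row is a dummy-coded product profile: [brand A, brand B, low price, high price].
# y holds one respondent's stated ratings; all values are invented for illustration.
X = np.array([
    [1, 0, 1, 0],   # brand A, low price
    [1, 0, 0, 1],   # brand A, high price
    [0, 1, 1, 0],   # brand B, low price
    [0, 1, 0, 1],   # brand B, high price
], dtype=float)
y = np.array([8.0, 5.0, 6.0, 2.0])

# Least-squares part-worth utilities for each attribute level
partworths, *_ = np.linalg.lstsq(X, y, rcond=None)

# Predicted utility of each profile is the sum of its level part-worths
predicted = X @ partworths
```

An individualised design, as described above, would additionally tailor which attributes and levels enter X for each respondent.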
The dynamics of external contributions to the geomagnetic field are investigated by applying time-frequency methods to magnetic observatory data. Fractal models and multiscale analysis make it possible to extract maximum quantitative information on the short-term dynamics of geomagnetic field activity. The stochastic properties of the horizontal component of the transient external field are determined by searching for scaling laws in the power spectra. The spectrum fits a power law with a scaling exponent β, a typical characteristic of self-affine time series. Local variations in the power-law exponent are investigated by applying wavelet analysis to the same time series. These analyses highlight the self-affine properties of geomagnetic perturbations and their persistence. Moreover, they show that the main phases of sudden storm disturbances are uniquely characterized by a scaling exponent varying between 1 and 3, possibly related to the energy contained in the external field. These findings suggest the existence of a long-range dependence, the scaling exponent being an efficient indicator of geomagnetic activity and singularity detection. By using magnetogram regularity to reflect magnetosphere activity, a theoretical analysis of the external geomagnetic field based on local power-law exponents becomes possible.
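A minimal sketch of how a scaling exponent β with PSD(f) ∝ f^(−β) can be estimated: a least-squares fit of log10(PSD) against log10(f). In practice the input would be a periodogram or wavelet spectrum of the magnetogram; here an exact power law serves only as a sanity check, and the function name is mine:

```python
import numpy as np

def spectral_exponent(freqs, psd):
    """Least-squares slope of log10(PSD) vs log10(f); returns beta, where
    PSD(f) ~ f**(-beta), the scaling exponent discussed in the text."""
    slope, _intercept = np.polyfit(np.log10(freqs), np.log10(psd), 1)
    return -slope

# Sanity check on an exact power law (beta = 2, within the 1-3 storm range)
f = np.linspace(1e-4, 1e-1, 1000)      # frequency band, arbitrary units
beta = spectral_exponent(f, f ** -2.0)
print(round(beta, 3))                   # -> 2.0
```

On real periodogram estimates the fit is usually restricted to the frequency band where the power law holds, since spectral flattening at either end biases the slope.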
The expansion and intensification of soya bean agriculture in southeastern Amazonia can alter watershed hydrology and biogeochemistry by changing the land cover, water balance and nutrient inputs. Several new insights on the responses of watershed hydrology and biogeochemistry to deforestation in Mato Grosso have emerged from recent intensive field campaigns in this region. Because of reduced evapotranspiration, total water export increases threefold to fourfold in soya bean watersheds compared with forest. However, the deep and highly permeable soils on the broad plateaus on which much of the soya bean cultivation has expanded buffer small soya bean watersheds against increased stormflows. Concentrations of nitrate and phosphate do not differ between forest or soya bean watersheds because fixation of phosphorus fertilizer by iron and aluminium oxides and anion exchange of nitrate in deep soils restrict nutrient movement. Despite resistance to biogeochemical change, streams in soya bean watersheds have higher temperatures caused by impoundments and reduction of bordering riparian forest. In larger rivers, increased water flow, current velocities and sediment flux following deforestation can reshape stream morphology, suggesting that cumulative impacts of deforestation in small watersheds will occur at larger scales.
Was Bürger bem(a)erken
(2013)
Embedded in the current open-government debate, e-government citizen services continue to gain importance. The Brandenburg citizen service Maerker is counted among the pioneers of internet-based citizen services, since it offers a simple means of communication between citizens and the administration about infrastructure problems in the municipality. On the basis of expert interviews and a survey among the participating municipalities, the authors evaluate the introduction and implementation of Brandenburg's Maerker. The results show, alongside broad acceptance and approval among the actors involved, untapped potential for improving processes within the administration. This article presents the results of the evaluation of Maerker and gives an outlook on further development potential.
Vorwort
(2013)
Vorstandsvergütung: eine rechtsökonomische Analyse zur Angemessenheit der Vorstandsvergütung
(2013)
Vorlesung 2013-10-15
(2013)
Von wegen Kinderspiel
(2013)
Die kumulative Dissertation zur Projektdidaktik trägt den Titel „Von der Konzeption zur Praxis: Zur Entwicklung der Projektdidaktik am Oberstufen-Kolleg Bielefeld und ihre Impulsgebung und Modellbildung für das deutsche Regelschulwesen“. Die Dissertation versteht sich als beispielgebende Umsetzung und Implementierung der Projektdidaktik für das Regelschulsystem. Auf der Basis von 22 bereits erschienenen Publikationen und einer Monographie werden mit fünf methodischen Zugriffen (bildungshistorisch, dichte Beschreibung, Aktionsforschung, empirische Untersuchung an Regelschulen und Implementierungsforschung, s. Kapitel 1) in sieben Kapiteln (2- 8) des systematischen ersten Teils die Entwicklung der Unterrichtsform Projektunterricht in der BRD, Projektbegriff und Weiterentwicklung des Konzepts, Methodik, Bewertung sowie Organisation des Projektunterrichts am Oberstufen-Kolleg, der Versuchsschule des Landes NRW, in Auseinandersetzung mit der allgemeinen Projektdidaktik dargestellt sowie Formen und Verfahren der erprobten Implementierung in das Regelschulsystem präsentiert.
Ein Schlusskapitel (9) fasst die Ergebnisse zusammen. Im umfangreichen Anhang finden sich verschiedene Publikationen zu Aspekten der Projektdidaktik, auf die der systematische Teil jeweils Bezug nimmt.
Die bildungshistorische Analyse (Kapitel 2) untersucht das Verhältnis von pädagogischer Theorie und schulischer Praxis, die weder in Literatur und noch in Praxis genügend verbunden sind. Nach der Rezeption der gut erforschten Konzeptgeschichte pädagogischer Theorie in Anlehnung an Dewey und Kilpatrick wird durch eine erste Analyse der „Praxisgeschichte“ des Projektunterrichts auf ein Forschungsdesiderat hingewiesen, dies auch um die Projektpraxis am Oberstufen-Kolleg in Beziehung zu der in den Regelschulen setzen zu können. Dabei wurden seit 1975 sechs Entwicklungslinien herausgearbeitet: Start, Krise und ihre Überwindung durch Öffnung und Vernetzung (1975-1990), didaktisch-methodische Differenzierung und Notwendigkeit von Professionalisierung (ab 1990) sowie Schulentwicklung und Institutionalisierung (seit Ende der 1990er Jahre).
Project-based teaching has existed at the Oberstufen-Kolleg since its founding in 1974 as a firmly established form of instruction (since 2002, two weeks twice a year), with the aim of testing and further developing project didactics for the mainstream school system. As important practice-oriented goals, a practicable concept of the project, its educational value, and competencies distinct from the traditional course were worked out (e.g., action- and application-oriented competencies), and the relationship to subject teaching was determined (Chapter 3). The latter was developed using the subject of history as an example and presented in exemplary forms of interlocking (Chapter 6).
For the methodological dimension, too, the aim was to develop general project didactics further by distinguishing it from other methods of opening up school and instruction (Chapter 4). Action orientation was identified as the central methodological principle, and seven phases with their respective action steps were defined. Planning and role change in particular require special attention in order to achieve self-directed activity on the part of the project participants. Various methodological "études" (e.g., group work, doing research, acting in public), action-oriented preliminary forms, and project-oriented work are intended to help prepare for the full form of project-based teaching.
The assessment of projects (Chapter 5) poses different demands than the traditional course because it comprises different levels of assessment (e.g., significance of the process, evaluation of the product, group assessment). For this purpose, assessment forms other than numerical grades have been developed at the Oberstufen-Kolleg: e.g., a "reflection report" as individual feedback from students and teachers and a "certificate" for special achievements in the project.
Central to the development of project-based teaching, however, is the question of organization (Chapter 7). This requires a project organization group that supervises the form of instruction didactically and advises the registered projects in a hearing. The Oberstufen-Kolleg has thereby organizationally implemented a developed "project culture". For an empirical study at six mainstream schools in East Westphalia, an ideal-typical list of characteristics of school "project culture" was then created as a research instrument, which can at the same time serve as a guideline for school development in the area of project learning in mainstream schools. For this implementation (Chapter 8), concepts and experiences from the Oberstufen-Kolleg were developed into in-school and external forms of teacher training, as well as an exemplary training unit. In this way, the experimental school was able to provide impulses for the mainstream school system in numerous teacher training courses.
Background. Vocational interests play a central role in the vocational decision-making process and are decisive for later job satisfaction and vocational success. Based on Ackerman's (1996) notion of trait complexes, specific interest profiles of gifted high-school graduates can be expected. Aims. Vocational interests of gifted and highly achieving adolescents were compared to those of their less intelligent/achieving peers according to Holland's (1997) RIASEC model. Further, the impact of intelligence and achievement on interests was analysed while statistically controlling for potentially influential variables, and changes in interests over time were investigated. Sample. N = 4,694 German students (age: M = 19.5, SD = 0.80; 54.6% females) participated in the study (TOSCA; Köller, Watermann, Trautwein, & Lüdtke, 2004). Method. Interests were assessed in participants' final year at school and again 2 years later (N = 2,318). Results. Gifted participants reported stronger investigative and realistic interests, but lower social interests, than less intelligent participants. Highly achieving participants reported higher investigative and (in wave 2) higher artistic interests. Considerable gender differences were found: gifted girls had a flat interest profile, while gifted boys had pronounced realistic and investigative and low social interests. Multilevel multiple regression analyses predicting interests by intelligence and school achievement revealed stable interest profiles. Beyond a strong gender effect, intelligence and school achievement each contributed substantially to the prediction of vocational interests. Conclusions. At the time around graduation from high school, gifted young adults show stable interest profiles, which differ strongly between gender and intelligence groups. These differences are relevant for gifted programmes and for vocational counselling.
Preclinical work indicates that calcitriol restores vascular function by normalizing the endothelial expression of cyclooxygenase-2 and thromboxane-prostanoid receptors in conditions of estrogen deficiency and thus prevents the thromboxane-prostanoid receptor activation-induced inhibition of nitric oxide synthase. Since endothelial dysfunction is a key factor in the pathogenesis of cardiovascular diseases, this finding may have an important translational impact. It provides a clear rationale to use endothelial function in clinical trials aiming to find the optimal dose of vitamin D for the prevention of cardiovascular events in postmenopausal women.
Two experiments investigated (1) how activation of manual affordances is triggered by visual and linguistic cues to manipulable objects and (2) whether graspable object parts play a special role in this process. Participants pressed a key to categorize manipulable target objects copresented with manipulable distractor objects on a computer screen. Three factors were varied in Experiment 1: (1) the target's and (2) the distractor's handles' orientation congruency with the lateral manual response and (3) the Visual Focus on one of the objects. In Experiment 2, a linguistic cue factor was added to these three factors-participants heard the name of one of the two objects prior to the target display onset. Analysis of participants' motor and oculomotor behaviour confirmed that perceptual and linguistic cues potentiated activation of grasp affordances. Both target- and distractor-related affordance effects were modulated by the presence of visual and linguistic cues. However, a differential visual attention mechanism subserved activation of compatibility effects associated with target and distractor objects. We also registered an independent implicit attention attraction effect from objects' handles, suggesting that graspable parts automatically attract attention during object viewing. This effect was further amplified by visual but not linguistic cues, thus providing initial evidence for a recent hypothesis about differential roles of visual and linguistic information in potentiating stable and variable affordances (Borghi in Language and action in cognitive neuroscience. Psychology Press, London, 2012).
We easily recover the causal properties of visual events, enabling us to understand and predict changes in the physical world. We see a tennis racket hitting a ball and sense that it caused the ball to fly over the net; we may also have an eerie but equally compelling experience of causality if the streetlights turn on just as we slam our car's door. Both perceptual [1] and cognitive [2] processes have been proposed to explain these spontaneous inferences, but without decisive evidence one way or the other, the question remains wide open [3-8]. Here, we address this long-standing debate using visual adaptation-a powerful tool to uncover neural populations that specialize in the analysis of specific visual features [9-12]. After prolonged viewing of causal collision events called "launches" [1], subsequently viewed events were judged more often as noncausal. These negative aftereffects of exposure to collisions are spatially localized in retinotopic coordinates, the reference frame shared by the retina and visual cortex. They are not explained by adaptation to other stimulus features and reveal visual routines in retinotopic cortex that detect and adapt to cause and effect in simple collision stimuli.
Requirements engineers have to elicit, document, and validate how stakeholders act and interact to achieve their common goals in collaborative scenarios. Only after gathering all information concerning who interacts with whom to do what and why, can a software system be designed and realized which supports the stakeholders to do their work. To capture and structure requirements of different (groups of) stakeholders, scenario-based approaches have been widely used and investigated. Still, the elicitation and validation of requirements covering collaborative scenarios remains complicated, since the required information is highly intertwined, fragmented, and distributed over several stakeholders. Hence, it can only be elicited and validated collaboratively. In times of globally distributed companies, scheduling and conducting workshops with groups of stakeholders is usually not feasible due to budget and time constraints. Talking to individual stakeholders, on the other hand, is feasible but leads to fragmented and incomplete stakeholder scenarios. Going back and forth between different individual stakeholders to resolve this fragmentation and explore uncovered alternatives is an error-prone, time-consuming, and expensive task for the requirements engineers. While formal modeling methods can be employed to automatically check and ensure consistency of stakeholder scenarios, such methods introduce additional overhead since their formal notations have to be explained in each interaction between stakeholders and requirements engineers. Tangible prototypes as they are used in other disciplines such as design, on the other hand, allow designers to feasibly validate and iterate concepts and requirements with stakeholders. This thesis proposes a model-based approach for prototyping formal behavioral specifications of stakeholders who are involved in collaborative scenarios. 
By simulating and animating such specifications in a remote domain-specific visualization, stakeholders can experience and validate the scenarios captured so far, i.e., how other stakeholders act and react. This interactive scenario simulation is referred to as a model-based virtual prototype. Moreover, through observing how stakeholders interact with a virtual prototype of their collaborative scenarios, formal behavioral specifications can be automatically derived which complete the otherwise fragmented scenarios. This, in turn, enables requirements engineers to elicit and validate collaborative scenarios in individual stakeholder sessions – decoupled, since stakeholders can participate remotely and are not forced to be available for a joint session at the same time. This thesis discusses and evaluates the feasibility, understandability, and modifiability of model-based virtual prototypes. Similarly to how physical prototypes are perceived, the presented approach brings behavioral models closer to being tangible for stakeholders and, moreover, combines the advantages of joint stakeholder sessions and decoupled sessions.
The time-dependent approach to electronic spectroscopy, as popularized by Heller and coworkers in the 1980s, is applied here in conjunction with linear-response, time-dependent density functional theory to study vibronic absorption, emission, and resonance Raman spectra of several diamondoids. Two-state models and the harmonic and Condon approximations are used for the calculations, making them easily applicable to larger molecules. The method is applied to nine pristine lower and higher diamondoids: adamantane, diamantane, triamantane, and three isomers each of tetramantane and pentamantane. We also consider a hybrid species "Dia=Dia", a shorthand notation for a recently synthesized molecule comprising two diamantane units connected by a C=C double bond. We resolve and interpret trends in the optical and vibrational properties of these molecules as a function of their size, shape, and symmetry, as well as the effects of "blending" with sp2-hybridized C atoms. Time-dependent correlation functions facilitate the computations and shed light on the vibrational dynamics following electronic transitions.
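In Heller's time-dependent picture, the vibronic absorption spectrum follows from the autocorrelation function of the initial vibrational wavepacket propagated on the excited-state potential surface. Schematically, in our own notation (the abstract itself gives no formulas):

```latex
% Absorption cross-section from the wavepacket autocorrelation function
% (Heller's time-dependent formulation, two-state / Condon approximation):
\sigma_{\mathrm{abs}}(\omega) \;\propto\; \omega
\int_{-\infty}^{\infty} \mathrm{d}t \; e^{\,i\omega t}\,
\langle \varphi(0) \,|\, \varphi(t) \rangle
```

The Fourier transform of the short-time decay and recurrences of ⟨φ(0)|φ(t)⟩ yields the spectral envelope and its vibronic fine structure, which is why the correlation functions also "shed light on the vibrational dynamics" mentioned above.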
Two images, taken by the Cassini spacecraft near Saturn's equinox in 2009 August, show the Earhart propeller casting a 350 km long shadow, offering the opportunity to watch how the ring height, excited by the propeller moonlet, relaxes to an equilibrium state. From the shape of the cast shadow and a model of the azimuthal relaxation of the propeller height, we determine the exponential cooling constant of this process to be λ = 0.07 ± 0.02 km^-1, and thereby determine the collision frequency of the ring particles in the vertically excited region of the propeller to be ω_c/Ω = 0.9 ± 0.2.
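The exponential relaxation behind the quoted cooling constant can be sketched as follows (symbols other than λ are our own notation, not the paper's):

```latex
% Azimuthal relaxation of the vertically excited ring height h,
% where x is the azimuthal distance downstream of the moonlet:
h(x) = h_{\mathrm{eq}} + \left( h_0 - h_{\mathrm{eq}} \right) e^{-\lambda x},
\qquad \lambda = 0.07 \pm 0.02\ \mathrm{km}^{-1}
```

Fitting this decay to the measured shadow profile yields λ, from which the particle collision frequency ω_c/Ω in the excited region is inferred.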
In the Romance languages, verbal moods, modal verbs, modal adverbs, and modal particles are available for expressing modality, and these have frequently been compared with German. This contribution deals with equivalents of hidden, so-called covert modality (cf. Abraham / Leiss 2012, Haßler 2012). First, the underlying understanding of modality is laid out in order then to situate its covert forms. The mutual equivalents are analyzed on the basis of examples, and finally a proposal is developed for how the means of modalization can be described as language-specific and as belonging to a continuum.
We report results from TeV gamma-ray observations of the microquasar Cygnus X-3. The observations were made with the Very Energetic Radiation Imaging Telescope Array System (VERITAS) over a time period from 2007 June 11 to 2011 November 28. VERITAS is most sensitive to gamma rays at energies between 85 GeV and 30 TeV. The effective exposure time amounts to a total of about 44 hr, with the observations covering six distinct radio/X-ray states of the object. No significant TeV gamma-ray emission was detected in any of the states, nor with all observations combined. The lack of a positive signal, especially in the states where GeV gamma rays were detected, places constraints on TeV gamma-ray production in Cygnus X-3. We discuss the implications of the results.
As a commensal bacterium, Escherichia (E.) coli is an important component of the mammalian microbiome, but it is also the most frequent infectious agent in humans. According to the site of infection, intestinal (InPEC) and extraintestinal pathogenic E. coli (ExPEC) are distinguished. The pathogenesis of E. coli infections is determined by virulence factors, which are encoded by specific virulence-associated genes (inVAGs and exVAGs). exVAGs are also frequently detected in E. coli isolates from the intestines of healthy hosts, which led to the assumption that exVAGs support the intestinal colonization of the host by E. coli. The main aim of this work was to expand knowledge about the influence of exVAGs on colonization, and thus on the adhesion of E. coli to epithelial cells of the intestinal tract. Carrying out such a comprehensive E. coli population study required the establishment of new screening methods. For genotypic characterization, microparticle-based multiplex PCR assays were established for the detection of 44 VAGs and of phylogeny; for phenotypic characterization, adhesion and cytotoxicity assays were established. The screening methods are based on the VideoScan technology, an automated image-based multifluorescence detection system. A total of 398 E. coli isolates were characterized, obtained from 13 wild mammal species and 5 wild bird species as well as from healthy humans and domestic pigs and from humans and pigs with urinary tract disease. The adhesion assays aimed to determine both the adhesion rates and the adhesion patterns of the 317 non-haemolytic isolates on 5 epithelial cell lines. The cytotoxicity of the 81 haemolytic isolates was tested on 4 epithelial cell lines as a function of incubation time. A range of VAGs was detected in the E. coli isolates. Potential InPEC, in particular Shiga toxin-producing and enteropathogenic E. coli, were isolated from humans, domestic pigs, and wild animals, above all from roe deer and European hares. exVAGs were detected with highly variable prevalence in isolates from all species. The largest number and the broadest spectrum of exVAGs were found in isolates from the urine of humans with urinary tract disease, followed by isolates from badgers and roe deer. More exVAGs were detected in isolates of phylogenetic group B2 than in isolates of phylogenetic groups A, B1, and D. The results of the adhesion assays showed that most isolates adhered in a cell line-, tissue-, or host-specific manner. One third of the isolates did not adhere to any cell line, and only two isolates adhered strongly to all cell lines. In general, more isolates adhered to human and to intestinal cell lines. Isolates from red squirrels and blackbirds, as well as from the urine of humans and domestic pigs with urinary tract disease, were particularly capable of strong adhesion. As adhesion patterns, the isolates formed diffuse adhesion, microcolonies, chains, and agglomerations. Statistical analyses revealed associations between exVAGs and high adhesion rates: for example, the presence of afa/dra was associated with a higher adhesion rate on Caco-2 and 5637 cells, and that of sfa/foc on IPEC-J2 cells. The results of the cytotoxicity assays showed a very strong, time-dependent destruction of the monolayers of all epithelial cell lines by the α-haemolysin-positive isolates. The high toxicity of haemolytic isolates from wild animals towards the human cell lines was striking. The screening methods developed within this work made it possible to characterize large quantities of bacteria and provided an overview of the distribution of VAGs in E. coli from different hosts.
Wild animals in particular were identified as reservoirs of pathogenic E. coli, both through the detection of VAGs in the corresponding isolates and through their adhesion capacity and pronounced cytotoxicity. Likewise, a cell line-specific adhesion of isolates carrying certain exVAGs became apparent, confirming the possible influence of exVAGs on intestinal colonization; in follow-up work, however, expression and functional analyses of the corresponding proteins are indispensable. Based on the microcolony formation of commensal E. coli, it is suggested that adhesion patterns, and hence colonization strategies, previously attributed to pathogenic E. coli should rather be regarded as general colonization strategies. E. coli α-haemolysin generally acts cytotoxically on epithelial cells; an adhesion-supporting mechanism of this toxin discussed in the literature is therefore questionable. Within this work it was shown that the developed screening methods enable comprehensive analyses of a large number of E. coli isolates.
Verfassungsgerichtsbarkeit in der Russischen Föderation und in der Bundesrepublik Deutschland
(2013)
This conference volume contains the papers and discussion contributions of the round table on constitutional jurisdiction held in Moscow at the Kutafin Moscow State Law University on 9 and 10 October 2012. It treats selected questions of legal history and legal policy as well as current legal problems of constitutional jurisdiction in the Russian Federation and the Federal Republic of Germany, from the perspective of both legal practice and scholarship: in particular, the development of constitutional jurisdiction in past and present; the status, legal nature, and tasks of the constitutional courts in the subjects of the Russian Federation and in the German Länder; and the relationship between constitutional courts and legislation. In addition, special questions of constitutional jurisdiction are discussed, e.g., the institution of the Plenipotentiary Representative of the President at the Constitutional Court in Russia, interim relief granted by the Federal Constitutional Court (BVerfG), and legal protection against excessively long proceedings before the BVerfG in Germany.
The complexity of today's business processes and the volume of data to be managed place high demands on the development and maintenance of business applications. Their size results, among other things, from the large number of model entities and the associated user interfaces for editing and analyzing the data. This report presents novel concepts, and their implementation, for simplifying the development of such large business applications. First, we propose to unify the database and the runtime environment of a dynamic object-oriented programming language. To this end, we organize the memory layout of objects in the manner of a columnar in-memory database and, building on this, integrate transactions and a declarative query language seamlessly into the same runtime environment. Transactional and analytical queries can thus be implemented in the same object-oriented high-level language and still be executed close to the data. Second, we describe programming language constructs that allow user interfaces and user interactions to be described generically, independently of concrete model entities. To make use of this abstract description, the domain models are enriched with previously implicit information. New models need only be extended with a little information in order to reuse existing user interfaces and interactions, and adaptations that are to apply to a single model can be defined incrementally, independently of the default behavior. Third, with a further programming language construct, we enable the coherent description of application workflows, such as order processes. Our programming concept encapsulates user interactions in synchronous function calls and thus makes processes representable as a coherent sequence of computations and interactions.
Fourth, we demonstrate a concept that lets end users formulate complex analytical queries more intuitively. It is based on the idea that end users see queries as the configuration of a chart: a user describes a query by describing what the chart should display. Charts described according to this concept contain sufficient information to generate a query from them. With respect to execution time, the generated queries are equivalent to queries formulated in conventional query languages. We implement this query model in a prototype that builds on the previously introduced concepts.
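The chart-as-query idea can be sketched as a small generator. This is a hypothetical simplification, not the report's implementation; the table and field names are invented:

```python
def query_from_chart(chart):
    """Generate SQL from a declarative chart description.

    The chart configuration says WHAT should be displayed (grouping
    axis, measure, aggregate); the query is derived from it.
    Hypothetical sketch -- schema and keys are invented for illustration.
    """
    dims = ", ".join(chart["x"])                       # grouping columns (x-axis)
    measure = f'{chart["aggregate"]}({chart["y"]})'    # aggregated measure (y-axis)
    return (f"SELECT {dims}, {measure} AS value "
            f"FROM {chart['table']} GROUP BY {dims}")

# "Show total revenue per region" expressed as a chart configuration:
sql = query_from_chart(
    {"table": "sales", "x": ["region"], "y": "revenue", "aggregate": "SUM"}
)
# sql == "SELECT region, SUM(revenue) AS value FROM sales GROUP BY region"
```

The point of the concept is that the user never writes the SELECT statement; the chart description alone carries enough information to derive it.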
Violence is omnipresent in the contemporary Slavic literatures: as an echo of the revolutions, wars, dictatorships, and system changes of the 20th century, as a reaction to ongoing and newly erupting conflicts, as fascination, sensation, and sales incentive. Violence appears as a narrative-aesthetic, traditional element of literary representation and as a meaningful, taboo-breaking motif. This volume gathers the results of an international conference at the University of Hamburg that addressed this topic in autumn 2012 under the triad "crime - fiction - marketing". The broad spectrum of literatures examined (from East and West to South Slavic literatures, from prose to poetry and drama), the view beyond literature (including film and music), and the diversity of topics, modes of representation, and analytical approaches yield a multifaceted picture that allows an approach to the question of the specifics of literary representations of violence.
Previous research has shown that high phonotactic frequencies facilitate the production of regularly inflected verbs in English-learning children with specific language impairment (SLI) but not with typical development (TD). We asked whether this finding can be replicated for German, a language with a much more complex inflectional verb paradigm than English. Using an elicitation task, the production of inflected nonce verb forms (3rd person singular with -t suffix) with either high- or low-frequency subsyllables was tested in sixteen German-learning children with SLI (ages 4;1–5;1), sixteen TD children matched for chronological age (CA) and fourteen TD children matched for verbal age (VA) (ages 3;0–3;11). The findings revealed that children with SLI, but not the CA- or VA-matched children, showed differential performance between the two types of verbs, producing more inflectional errors when the verb forms resulted in low-frequency subsyllables than when they resulted in high-frequency subsyllables, replicating the results from English-learning children.
This article examines two so-far-understudied verb doubling constructions in Mandarin Chinese, viz., verb doubling clefts and verb doubling lian…dou. We show that these constructions have the same internal syntax as regular clefts and lian…dou sentences, the doubling effect being epiphenomenal; therefore, we classify them as subtypes of the general cleft and lian…dou constructions, respectively, rather than as independent constructions. Additionally, we also show that, as in many other languages with comparable constructions, the two instances of the verb are part of a single movement chain, which has the peculiarity of allowing Spell-Out of more than one link.
The main intention of the PhD project was to create a varve chronology for the 'Suigetsu Varves 2006' (SG06) composite profile from Lake Suigetsu (Japan) by thin section microscopy. The chronology was not only to provide an age-scale for the various palaeo-environmental proxies analysed within the SG06 project, but also and foremost to contribute, in combination with the SG06 14C chronology, to the international atmospheric radiocarbon calibration curve (IntCal). The SG06 14C data are based on terrestrial leaf fossils and therefore record atmospheric 14C values directly, avoiding the corrections necessary for the reservoir ages of the marine datasets, which are currently used beyond the tree-ring limit in the IntCal09 dataset (Reimer et al., 2009). The SG06 project is a follow-up of the SG93 project (Kitagawa & van der Plicht, 2000), which also aimed to produce an atmospheric calibration dataset but suffered from incomplete core recovery and varve count uncertainties. For the SG06 project the complete Lake Suigetsu sediment sequence was recovered continuously, leaving the task of producing an improved varve count. Varve counting was carried out using a dual-method approach utilizing thin section microscopy and micro X-ray fluorescence (µXRF); the latter was carried out by Dr. Michael Marshall in cooperation with the PhD candidate. The varve count covers 19 m of composite core, corresponding to the time frame from ≈10 to ≈40 kyr BP. The count result showed that seasonal layers did not form in every year; hence, the varve counts from either method were incomplete. This rather common problem in varve counting is usually solved by manual varve interpolation, but manual interpolation often suffers from subjectivity. Furthermore, sedimentation rate estimates (which are the basis for interpolation) are generally derived from neighbouring, well varved intervals.
This assumes that the sedimentation rates in neighbouring intervals are identical to those in the incompletely varved section, which is not necessarily true. To overcome these problems a novel interpolation method was devised. It is computer based and automated (i.e. avoids subjectivity and ensures reproducibility) and derives the sedimentation rate estimate directly from the incompletely varved interval by statistically analysing distances between successive seasonal layers. Therefore, the interpolation approach is also suitable for sediments which do not contain well varved intervals. Another benefit of the novel method is that it provides objective interpolation error estimates. Interpolation results from the two counting methods were combined and the resulting chronology compared to the 14C chronology from Lake Suigetsu, calibrated with the tree-ring derived section of IntCal09 (which is considered accurate). The varve and 14C chronology showed a high degree of similarity, demonstrating that the novel interpolation method produces reliable results. In order to constrain the uncertainties of the varve chronology, especially the cumulative error estimates, U-Th dated speleothem data were used by linking the low frequency 14C signal of Lake Suigetsu and the speleothems, increasing the accuracy and precision of the Suigetsu calibration dataset. The resulting chronology also represents the age-scale for the various palaeo-environmental proxies analysed in the SG06 project. One proxy analysed within the PhD project was the distribution of event layers, which are often representatives of past floods or earthquakes. A detailed microfacies analysis revealed three different types of event layers, two of which are described here for the first time for the Suigetsu sediment. The types are: matrix supported layers produced as result of subaqueous slope failures, turbidites produced as result of landslides and turbidites produced as result of flood events. 
The former two are likely to have been triggered by earthquakes. The vast majority of event layers were related to floods (362 out of 369), which allowed the construction of a flood chronology for the last 40 kyr. Flood frequencies were highly variable, reaching their greatest values during the global sea-level low-stand of the Glacial and their lowest values during Heinrich Event 1. Typhoons affecting the region represent the most likely control on the flood frequency, especially during the Glacial; however, local, non-climatic controls are also suggested by the data. In summary, the work presented here expands and revises knowledge of the Lake Suigetsu sediment and enables the construction of a far more precise varve chronology. The 14C calibration dataset is the first derived from lacustrine sediments to be included in the (next) IntCal dataset. References: Kitagawa & van der Plicht, 2000, Radiocarbon, Vol. 42(3), 370-381; Reimer et al., 2009, Radiocarbon, Vol. 51(4), 1111-1150.
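The core of the interpolation idea, deriving the sedimentation-rate estimate from the spacing statistics of the counted layers themselves rather than from neighbouring intervals, can be sketched in a few lines. This is a deliberately simplified, hypothetical version; function names, the toy positions, and the uncertainty formula are invented for illustration:

```python
import statistics

def interpolate_varves(layer_positions, gap_start, gap_end):
    """Estimate the number of varve years hidden in an uncounted gap.

    The sedimentation-rate estimate comes directly from the incompletely
    varved interval: the median spacing between successive counted
    seasonal layers. A crude uncertainty is propagated from the spread
    of the spacings (hypothetical simplification of the thesis's
    statistical approach).
    """
    spacings = [b - a for a, b in zip(layer_positions, layer_positions[1:])]
    typical_spacing = statistics.median(spacings)      # e.g. mm per varve year
    gap_length = gap_end - gap_start
    estimated = round(gap_length / typical_spacing)
    spread = statistics.stdev(spacings) if len(spacings) > 1 else 0.0
    error = round(gap_length * spread / typical_spacing**2)
    return estimated, error

# Counted layer positions (mm downcore), followed by an uncounted gap 120-150 mm:
layers = [100, 105, 110, 116, 120]
n, err = interpolate_varves(layers, 120, 150)   # -> 6 interpolated years, +/- 1
```

Because the rate estimate and its error come from the interval itself, the approach is reproducible and also works for sediments without well-varved neighbouring sections, which is the advantage claimed above.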
The human face shows individual features as well as features that are characteristic of sex and age (the loss of childlike characteristics during maturation). The analysis of facial dimensions is essential for identifying individual features, also for forensic purposes.
The analysis of facial proportions was performed on photogrammetric data from front views of 125 children. The data were pooled from 2 different studies: the children's data were obtained from a longitudinal study and reduced by a random generator, while the adults' data came from a separate cross-sectional study.
We applied principal component analysis on photogrammetric facial proportions of 169 individuals: 125 children (63 boys and 62 girls) aged 2-7 years and 44 adults (18 men and 26 women) aged 18-65 years.
Facial proportions depend on age and sex. Three components described age: (1) proportions of facial height to head height, (2) proportions that involve endocanthal breadth, and (3) bigonial to bizygonial proportions. Proportions that associate with sex are connected with nasal distances and nasal to bizygonial distances.
Twenty-three percent of the variance, particularly variance connected with the proportions of lower and middle face heights to head height, depends neither on sex nor on age and thus appears useful for screening purposes, eg, for dysmorphic genetic syndromes.
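The principal component analysis applied above can be sketched in a few lines of SVD-based code. This is illustrative only; the random toy matrix stands in for the study's actual facial-proportion measurements:

```python
import numpy as np

def principal_components(proportions):
    """PCA via SVD on a (subjects x proportions) matrix.

    Returns the fraction of variance explained per component, the
    component loadings, and the per-subject component scores.
    """
    X = np.asarray(proportions, dtype=float)
    Xc = X - X.mean(axis=0)                  # centre each proportion
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = s**2 / np.sum(s**2)          # variance share per component
    scores = Xc @ Vt.T                       # subject scores on the components
    return explained, Vt, scores

# Toy stand-in: 169 subjects x 5 facial proportions (the study's real
# measurements are not reproduced here)
rng = np.random.default_rng(0)
toy = rng.normal(size=(169, 5))
explained, loadings, scores = principal_components(toy)
```

Interpreting components then means reading off which proportions load strongly on each component, as done above for the three age-related components.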
Value creation in scene-based music production - the case of electronic club music in Germany
(2013)
The focus of this article is on the variability of value creation in the popular music industry. Recent trends in electronic music have been based on the valorization both of global tastes and of local specialities in performance and production. Depending on musical styles and market niches, local scenes have become important forces behind heterogeneous global-local markets. At the same time, technological change and the virtualization of music production and distribution contribute to increasingly differentiated configurations of value creation. It is therefore necessary to reconstruct theoretically and empirically the new interplay among local music production, digital media markets, and the virtual communities that are involved. On the basis of empirical explorations in a German hot spot of electronic club-music production (the city of Berlin), the article identifies local interaction practices and constellations of stakeholders. The findings show that value creation in these rapidly changing production scenes has moved away from the large-scale distribution of producer-induced media to audience-induced live performance and interactive soundtrack production. This change involves the rising importance of cultural embeddings such as taste building, reputation building among artists and producers, and local community building. Starting from an open theoretical problematization of value creation with regard to fluid scenes and shifting modes of production, the results of first empirical reconstructions are taken as inputs to an evolving discussion on the configurations of value creation in consumer-based strands of music production.
The Singing Voice Handicap Index (SVHI) was developed in the USA for singers' self-assessment of voice disorders. A German translation was produced and subjected to reliability and validity testing. Fifty-four dysphonic singers (35 female, 19 male), patients of a phoniatric clinic, were surveyed. 130 vocally healthy opera and radio choir singers (74 female, 56 male) formed the control group. Reliability is supported by a highly significant test-retest reliability (r = 0.960; p <= 0.001, Pearson correlation) and a Cronbach's alpha of 0.975. A principal component analysis with varimax rotation and the results of the scree plot suggest interpreting the SVHI as a single-factor scale. Validity is shown by a highly significant association between the patients' self-rated severity of the voice disorder and the SVHI total score. Patients have a significantly higher SVHI total score than the control group of healthy singers. The SVHI is suitable as a diagnostic instrument for the German-speaking region.
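The internal-consistency statistic reported above (Cronbach's alpha) can be computed directly from an item-score matrix. A minimal sketch with toy data, not the SVHI item scores:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Perfectly consistent items (identical columns) yield alpha = 1
col = np.array([[1.0], [2.0], [3.0], [4.0]])
alpha = cronbach_alpha(np.hstack([col, col, col]))
```

Values near 1, such as the 0.975 reported for the SVHI, indicate that the items measure a single underlying construct very consistently.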
From 6 to 9 August 2012, intense rainfall hit the northern Philippines, causing massive floods in Metropolitan Manila and nearby regions. Local rain gauges recorded almost 1000 mm within this period. However, the recently installed Philippine network of weather radars suggests that Metropolitan Manila might have escaped a potentially bigger flood just by a whisker, since the centre of mass of accumulated rainfall was located over Manila Bay. A shift of this centre by no more than 20 km could have resulted in a flood disaster far worse than what occurred during Typhoon Ketsana in September 2009.
Isolation of recombinant antibodies from antibody libraries is commonly performed by different molecular display formats, including phage display and ribosome display, or by different cell-surface display formats. We describe a new method which allows the selection of Escherichia coli cells producing the required single-chain antibody by cultivation in the presence of ampicillin conjugated to the antigen of interest. The method utilizes the neutralization of the conjugate by the produced single-chain antibody, which is secreted to the periplasm. Therefore, a new expression system based on the pET26b vector was designed and a library was constructed. The method was first successfully established for the selection of E. coli BL21 Star (DE3) cells expressing a model single-chain antibody (anti-fluorescein) by a simple selection assay on LB-agar plates. Using this selection assay, we could identify a new single-chain antibody binding biotin by growing E. coli BL21 Star (DE3) containing the library in the presence of a biotin-ampicillin conjugate. In contrast to methods such as molecular or cell-surface display, our selection system applies the soluble single-chain antibody molecule and thereby avoids undesired effects, e.g. by the phage particle or the yeast fusion protein. By selecting directly in an expression strain, production and characterization of the selected single-chain antibody is possible without any further cloning or transformation steps.
Urheberrecht
(2013)
Assessing diversity is among the major tasks in ecology and conservation science. In ecological and conservation studies, epiphytic cryptogams are usually sampled only up to accessible heights in forests. Thus, their diversity, especially of canopy specialists, is likely underestimated. If the proportion of those species differs among forest types, plot-based diversity assessments are biased and may result in misleading conservation recommendations. We sampled bryophytes and lichens in 30 forest plots of 20 m x 20 m in three German regions, considering all substrates and including epiphytic litter fall. First, the sampling of epiphytic species was restricted to the lower 2 m of trees and shrubs. Then, on one representative tree per plot, we additionally recorded epiphytic species in the crown, using tree-climbing techniques. Per tree, on average 54% of lichen and 20% of bryophyte species were overlooked if the crown had not been included. After sampling all substrates per plot, including the bark of all shrubs and trees, still 38% of the lichen and 4% of the bryophyte species were overlooked if the tree crown of the sampled tree was not included. The number of overlooked lichen species varied strongly among regions. Furthermore, the number of overlooked bryophyte and lichen species per plot was higher in European beech than in coniferous stands and increased with increasing diameter at breast height of the sampled tree. Thus, our results indicate a bias in comparative studies that might have led to misleading conservation recommendations from plot-based diversity assessments.
Through the reactions of 1-aminomethyl-2-naphthol and substituted 1-aminobenzyl-2-naphthols with 3,4-dihydroisoquinoline or 6,7-dimethoxy-3,4-dihydroisoquinoline under microwave conditions, naphth[1,2-e][1,3]oxazino[2,3-a]isoquinoline derivatives were prepared in good yields. The latter reaction was extended by using 2-aminoarylmethyl-1-naphthols, leading to isomeric naphth[2,1-e][1,3]oxazino[2,3-a]isoquinolines. Besides a detailed NMR spectroscopic and theoretical study of both the stereochemistry and the dynamic behaviour of these new conformationally flexible heterocyclic ring systems, an unexpected dynamic process between two diastereomers was observed in solution and studied by variable-temperature H-1 NMR spectroscopy; the mechanism was supported by DFT computations.
Large Central European flood events of the past have demonstrated that flooding can affect several river basins at the same time, leading to catastrophic economic and humanitarian losses that can stretch emergency resources beyond planned levels of service. For Germany, the spatial coherence of flooding, the contributing processes and the role of trans-basin floods for a national risk assessment are largely unknown, and analysis is limited by a lack of systematic data, information and knowledge on past events. This study investigates the frequency and intensity of trans-basin flood events in Germany. It evaluates the data and information basis on which knowledge about trans-basin floods can be generated in order to improve any future flood risk assessment. In particular, the study assesses whether flood documentations and related reports can provide a valuable data source for understanding trans-basin floods. An adaptive algorithm was developed that systematically captures trans-basin floods using series of mean daily discharge at a large number of sites with equal time-series length (1952-2002). It identifies the simultaneous occurrence of flood peaks based on the exceedance of an initial threshold of a 10-year flood at one location and consecutively pools all causally related, spatially and temporally lagged peak recordings at the other locations. A weighted cumulative index was developed that accounts for the spatial extent and the individual flood magnitudes within an event and allows quantifying the overall event severity. The parameters of the method were tested in a sensitivity analysis. An intensive study on sources and ways of information dissemination of flood-relevant publications in Germany was conducted. Based on the method of systematic reviews, a strategic search approach was developed to identify relevant documentations for each of the 40 strongest trans-basin flood events.
A novel framework for assessing the quality of event-specific flood reports from a user's perspective was developed and validated by independent peers. The framework was designed to be generally applicable for any natural hazard type and assesses the quality of a document addressing accessibility as well as representational, contextual, and intrinsic dimensions of quality. The analysis of time series of mean daily discharge resulted in the identification of 80 trans-basin flood events within the period 1952-2002 in Germany. The set is dominated by events that were recorded in the hydrological winter (64%); 36% occurred during the summer months. The occurrence of floods is characterised by a distinct clustering in time. Dividing the study period into two sub-periods, we find an increase in the percentage of winter events from 58% in the first to 70.5% in the second sub-period. Accordingly, we find a significant increase in the number of extreme trans-basin floods in the second sub-period. A large body of 186 flood-relevant documentations was identified. For 87.5% of the 40 strongest trans-basin floods in Germany at least one report was found, and for the most severe floods a substantial amount of documentation could be obtained. 80% of the material can be considered grey literature (i.e. literature not controlled by commercial publishers). The results of the quality assessment show that the majority of flood-event-specific reports are of good quality, i.e. they are well enough drafted, largely accurate and objective, and contain a substantial amount of information on the sources, pathways and receptors/consequences of the floods. The inclusion of this information in the process of knowledge building for flood risk assessment is recommended. Both the results and the data produced in this study are openly accessible and can be used for further research. The results of this study contribute to an improved spatial risk assessment in Germany.
The identified set of trans-basin floods provides the basis for an assessment of the chance that flooding occurs simultaneously at a number of sites. The information obtained from flood event documentation can usefully supplement the analysis of the processes that govern flood risk.
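The pooling step of the adaptive algorithm can be sketched in a simplified form: an event starts when one gauge exceeds its 10-year flood threshold, and temporally lagged exceedances at other gauges within a time window are pooled into the same event. The data, the window length, and the per-site thresholds below are illustrative placeholders, not the study's parameters.

```python
# Simplified sketch of threshold-exceedance pooling on hypothetical data.
import numpy as np

def pool_events(discharge, thresholds, window=5):
    """discharge: (n_days x n_sites) mean daily discharge;
    thresholds: per-site flood threshold (e.g. the 10-year flood level).
    Returns a list of events, each a dict mapping day -> set of site indices."""
    n_days, _ = discharge.shape
    exceed = discharge >= thresholds           # boolean exceedance matrix
    events, current, last_day = [], None, None
    for day in range(n_days):
        sites = set(np.flatnonzero(exceed[day]))
        if sites:
            if current is not None and day - last_day <= window:
                current[day] = sites           # pool lagged peaks into event
            else:
                current = {day: sites}         # start a new event
                events.append(current)
            last_day = day
    return events
```

A severity index, as in the study, would then weight each event by its spatial extent and the magnitudes of the pooled peaks.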
We investigate the temporal and spectral correlations between flux and anisotropy fluctuations of TeV-band cosmic rays in light of recent data taken with IceCube. We find that for a conventional distribution of cosmic-ray sources, the dipole anisotropy is higher than observed, even if source discreteness is taken into account. Moreover, even for a shallow distribution of galactic cosmic-ray sources and a reacceleration model, fluctuations arising from source discreteness provide a probability only of the order of 10% that the cosmic-ray anisotropy limits of the recent IceCube analysis are met. This probability estimate is nearly independent of the exact choice of source rate, but generous for a large halo size. The location of the intensity maximum far from the Galactic Center is naturally reproduced.
Beta diversity is a conceptual link between diversity at local and regional scales. Various methodologies of quantifying this and related phenomena have been applied. Among them, measures of pairwise (dis)similarity of sites are particularly popular. Undersampling, i.e. not recording all taxa present at a site, is a common situation in ecological data. Bias in many metrics related to beta diversity must be expected, but only few studies have explicitly investigated the properties of various measures under undersampling conditions. On the basis of an empirical data set, representing near-complete local inventories of the Lepidoptera from an isolated Pacific island, as well as simulated communities with varying properties, we mimicked different levels of undersampling. We used 14 different approaches to quantify beta diversity, among them dataset-wide multiplicative partitioning (i.e. 'true' beta diversity) and pairwise site-by-site dissimilarities. We compared their values from incomplete samples to the true results from the full data. We used these comparisons to quantify undersampling bias, and we calculated correlations of the dissimilarity measures of undersampled data with the complete data of sites. Almost all tested metrics showed bias and low correlations under moderate to severe undersampling conditions (as well as deteriorating precision, i.e. large chance effects on results). Measures that used only species incidence were very sensitive to undersampling, while abundance-based metrics with high dependency on the distribution of the most common taxa were particularly robust. Simulated data showed sensitivity of results to the abundance distribution, confirming that data sets of high evenness and/or the application of metrics that are strongly affected by rare species are particularly sensitive to undersampling.
The class of beta-diversity measure to be used should depend on the research question being asked, as different metrics can lead to quite different conclusions even without undersampling effects. For each class of metric, there is a trade-off between robustness to undersampling and sensitivity to rare species. In consequence, using incidence-based metrics carries a particular risk of false conclusions when undersampled data are involved. Developing bias corrections for such metrics would be desirable.
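The sensitivity of incidence-based dissimilarities to undersampling can be illustrated with a minimal simulation: compute the Jaccard dissimilarity between two complete hypothetical communities, then again after an incomplete survey that records only a subset of individuals. The communities and sampling scheme below are toy assumptions, not the study's Lepidoptera data or its 14 metrics.

```python
# Toy sketch: undersampling bias in an incidence-based dissimilarity.
import random

def jaccard_dissimilarity(a, b):
    """Jaccard dissimilarity between two species incidence sets."""
    union = a | b
    return 1 - len(a & b) / len(union) if union else 0.0

def survey(abundances, n, rng):
    """Record the species seen in n randomly drawn individuals,
    mimicking an incomplete survey of the community."""
    species = list(abundances)
    weights = [abundances[s] for s in species]
    return set(rng.choices(species, weights=weights, k=n))

rng = random.Random(42)
site1 = {f"sp{i}": 1 for i in range(50)}       # 50 equally abundant species
site2 = {f"sp{i}": 1 for i in range(25, 75)}   # 25 species shared with site1
true_d = jaccard_dissimilarity(set(site1), set(site2))   # complete data: 2/3
under_d = jaccard_dissimilarity(survey(site1, 20, rng),
                                survey(site2, 20, rng))
```

With only 20 recorded individuals per site, `under_d` fluctuates strongly around the true value from run to run, which is exactly the loss of precision and the bias the abstract describes for incidence-based metrics.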