Indirect resource competition and interference are widely occurring mechanisms of interspecific interactions. We have studied the seasonal expression of these two interaction types within a two-species, boreal small mammal system. Seasons differ by resource availability, individual breeding state and intraspecific social system. Live-trapping methods were used to monitor space use and reproduction in 14 experimental populations of bank voles Myodes glareolus in large outdoor enclosures with and without a dominant competitor, the field vole Microtus agrestis. We further compared vole behaviour using staged dyadic encounters in neutral arenas in both seasons. Survival of the non-breeding overwintering bank voles was not affected by competition. In the spring, the numbers of male bank voles, but not of females, were reduced significantly in the competition populations. Bank vole home ranges expanded with vole density in the presence of competitors, indicating food limitation. A comparison of behaviour between seasons based on an analysis of similarity revealed an avoidance of costly aggression against opponents, independent of species. Interactions were more aggressive during the summer than during the winter, and heterospecific encounters were more aggressive than conspecific encounters. Based on these results, we suggest that interaction types and their respective mechanisms are not either–or categories and may change over the seasons. During the winter, energy constraints and thermoregulatory needs decrease direct aggression, but food constraints increase indirect resource competition. Direct interference appears in the summer, probably triggered by each individual’s reproductive and hormonal state and the defence of offspring against conspecific and heterospecific intruders. Both interaction forms overlap in the spring, possibly contributing to spring declines in the numbers of subordinate species.
The Bruce effect revisited
(2017)
Pregnancy termination after encountering a strange male, the Bruce effect, is regarded as a counterstrategy of female mammals towards anticipated infanticide. While confirmed in caged rodent pairs, no verification of the Bruce effect existed from experimental field populations of small rodents. We suggest that the effect may be adaptive for breeding rodent females only under specific conditions related to populations with cyclically fluctuating densities. We investigated the occurrence of a delay in birth date after experimental turnover of the breeding male under different population compositions in bank voles (Myodes glareolus) in large outdoor enclosures: one-male–multiple-females (n = 6 populations/18 females), multiple-males–multiple-females (n = 15/45), and single-male–single-female (MF treatment, n = 74/74). Most delays were observed in the MF treatment after turnover. In parallel, we showed in a laboratory experiment (n = 205 females) that overwintered and primiparous females, the most abundant cohort during population lows in the increase phase of cyclic rodent populations, were more likely to delay births after turnover of the male than year-born and multiparous females. Taken together, our results suggest that the Bruce effect may be an adaptive breeding strategy for rodent females in cyclic populations specifically at low densities in the increase phase, when isolated, overwintered animals associate in MF pairs. During population lows, infanticide risk and inbreeding risk may be higher than during population highs, while the fitness value of a litter in an increasing population is also higher. Therefore, the Bruce effect may be adaptive for females during annual population lows in the increase phase, even at the cost of delaying reproduction.
Wie kommt Farbe zur Sprache?
(2005)
„Was ist Migration?“
(2016)
Although the appeals to perceive the significance of migration for adult education more clearly are impossible to ignore, they have received remarkably little attention with regard to categorial work. Foundational theoretical work on the concept of "migration" is far from exhausted in adult education. Even though individual studies engage with the concept, the impression remains that attempts at categorial clarification remain isolated. The far-from-simple task of protecting the concept of migration from categorial shutdown remains a serious challenge for adult education research on migration, insofar as it is interested in seriously targeting the risks of a hitherto essentialist course.
In the wake of the spatial turn, which is unmistakable in adult education research on space, an orientation has become noticeable that pleads for renewed attention to the materiality of spatial arrangements. This tendency towards a re-establishment of materiality carries the risk of falling back behind the level already reached by the spatial turn. One way to avoid the looming 'spatial trap' could be to adopt a topological perspective. Such a perspective opens up the possibility of removing previous barriers to the reception of the spatial turn and thereby of introducing a perspective through which alternative viewpoints can be explored. In contrast to ontological conceptions of space, topology positively privileges abstraction from material attachment. The contribution pursues two goals: on the one hand, it seeks to differentiate the vocabulary of adult education research on space; on the other, it introduces a topological perspective in order to create a theoretical basis for further reflections.
In 2020, the project "iMooX – The MOOC Platform as a Service for all Austrian Universities" was launched. It is co-financed by the Austrian Ministry of Education, Science and Research. Halfway through the funding period, the project management wants to assess and share results and outcomes, but also to address (potential) additional "impacts" of the MOOC platform. Building upon work on OER impact assessment, this contribution describes in detail how the specific iMooX.at approach to impact measurement was developed. A literature review, a stakeholder analysis, and problem-based interviews formed the basis for developing a questionnaire addressing the defined key stakeholders, the "MOOC creators". The article also presents the survey results in English for the first time, but focuses more on the development, strengths, and weaknesses of the selected methods. The article is seen as a contribution to the further development of impact assessment for MOOC platforms.
This research paper provides an overview of the current state of MOOCs (massive open online courses) and universities in Austria, focusing on the national MOOC platform iMooX.at. The study begins by presenting the results of an analysis of the performance agreements of 22 Austrian public universities for the period 2022–2024, with a specific focus on the mention of MOOC activities and iMooX. The authors find that 12 of 22 (55 %) Austrian public universities use at least one of these terms, indicating a growing interest in MOOCs and online learning. Additionally, the authors analyze internal documentation data to share insights into how many universities in Austria have produced and/or used a MOOC on the iMooX platform since its launch in 2014. These findings provide a valuable measure of the current usage and monitoring of MOOCs and iMooX among Austrian higher education institutions. Overall, this research contributes to a better understanding of the current state of MOOCs and their integration within Austrian higher education.
In twentieth-century philosophy it becomes clear that France and Germany hold diverging views on the question of whether the human being enjoys a "special position" in the dynamics of biological and historical life. While in Germany the tradition of anthropological thought re-formed itself, France is characterized by a sharp scepticism towards the legacy of humanism. The contributions of this bilingual book examine this Franco-German constellation of questions and authors, and bring the reflection on the (limits of the) singularity of the human being up to date.
«Dilettanten des Lebens»
(2017)
In the last 10 years, the governments of most of the German Länder initiated administrative reforms. All of these ventures included the municipalization of substantial sets of tasks. As elsewhere, governments argue that service delivery by municipalities is more cost-efficient, effective and responsive. Empirical evidence to back these claims is inconsistent at best: a considerable number of case studies cast doubt on unconditionally positive appraisals. Decentralization effects seem to vary depending on the performance dimension and task considered. However, questions of generalizability arise, as these findings have not yet been backed by more 'objective' archival data. We provide empirical evidence on decentralization effects for two different policy fields based on two studies. Thereby, the article presents alternative avenues for research on decentralization effects and matches the theoretical expectations on decentralization effects against more robust results. The analysis confirms that overly positive assertions concerning decentralization effects are only partially warranted. As previous case studies suggested, effects have to be looked at in a much more differentiated way, taking starting conditions into account and distinguishing between the various relevant performance dimensions and policy fields.
Territorial reform is the most radical and contested reorganisation of local government. A sound evaluation of the outcomes of such reforms is hence an important step to ensure the legitimacy of any decision on the subject. However, in our view the discourse on the subject appears to be one-sided, focusing primarily on overall fiscal effects scrutinised by economists. The contribution of this paper is hence threefold: firstly, we provide an overview of territorial reforms in Europe, with a special focus on Eastern Germany as a promising case for cross-country comparisons. Secondly, we provide an overview of the analytical classifications of these reforms and of the context factors to be considered in their evaluation. And thirdly, we analyse the literature on qualitative performance effects of these reforms. The results show that territorial reforms have a significant positive impact on functional performance, while the effects on participation and integration are indeed ambivalent. In doing so, we provide substantial arguments for a broader, more inclusive discussion on the success of territorial reforms.
Flux-P
(2012)
Quantitative knowledge of intracellular fluxes in metabolic networks is invaluable for inferring metabolic system behavior and the design principles of biological systems. However, intracellular reaction rates often cannot be measured directly but have to be estimated, for instance via 13C-based metabolic flux analysis, a model-based interpretation of stable carbon isotope patterns in intermediates of metabolism. Existing software such as FiatFlux, OpenFLUX or 13CFLUX supports experts in this complex analysis, but requires several steps that have to be carried out manually, hence restricting the use of this software for data interpretation to a rather small number of experiments. In this paper, we present Flux-P, an approach to automating and standardizing 13C-based metabolic flux analysis using the Bio-jETI workflow framework. Using the FiatFlux software as an example, it demonstrates how services can be created that carry out the different analysis steps autonomously and how these can subsequently be assembled into software workflows that perform automated, high-throughput intracellular flux analysis of high quality and reproducibility. Besides significantly accelerating and standardizing the data analysis, the agile workflow-based realization supports flexible changes of the analysis workflows on the user level, making it easy to perform custom analyses.
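The workflow idea can be illustrated with a minimal sketch: each formerly manual analysis step is wrapped as a callable "service", and a plain driver loop chains them over many experiments. This is an illustration only; the function names and toy numbers are hypothetical stand-ins and do not reflect Bio-jETI's or FiatFlux's actual interfaces.

```python
# A minimal sketch (not Bio-jETI's actual API): each formerly manual
# FiatFlux-style step becomes a callable service, and a loop assembles
# them into an automated, high-throughput workflow. All names and data
# below are hypothetical placeholders.

def load_labeling_data(path):
    # Stand-in for parsing 13C mass-isotopomer measurements from a file.
    return {"pyruvate": 0.42, "oxaloacetate": 0.31}

def estimate_flux_ratios(labeling):
    # Stand-in for estimating flux ratios from labeling patterns.
    total = labeling["pyruvate"] + labeling["oxaloacetate"]
    return {"ppp_vs_glycolysis": labeling["pyruvate"] / total}

def compute_net_fluxes(ratios, uptake_rate):
    # Stand-in for constraining net fluxes with physiological rates.
    r = ratios["ppp_vs_glycolysis"]
    return {"glycolysis": uptake_rate * (1.0 - r), "ppp": uptake_rate * r}

def run_workflow(experiments):
    """Process many experiments without manual intervention."""
    results = {}
    for exp_id, path, uptake_rate in experiments:
        labeling = load_labeling_data(path)
        ratios = estimate_flux_ratios(labeling)
        results[exp_id] = compute_net_fluxes(ratios, uptake_rate)
    return results

print(run_workflow([("wt_glucose", "wt.mzML", 8.5),
                    ("mutant_glucose", "mut.mzML", 6.2)]))
```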
Based on the intercorrelations of the HAWIK subtests (except digit span) reported by Winkelmann and Schmalohr (1972), obtained from a sample of N = 1020 so-called learning-disabled special-school candidates, a reanalysis was carried out with two different factor-analytic techniques and additional criteria for determining the number of common factors. It was shown that the solution found by Winkelmann and Schmalohr (1972) and recommended by Schmalohr (1975) for practical diagnostics is not invariant across different factor-analytic techniques. Moreover, the third factor does not satisfy the so-called Fürntratt criterion. Thus only the extraction of two factors appears appropriate. These can be interpreted as a verbal and a performance factor. They correlate with each other to the same degree as the verbal and performance parts of the HAWIK, thereby confirming the assumption of a general factor. Against the background of this study, the procedure for estimating factor scores recommended by Schmalohr for practical use, which refers to an orthogonal three-dimensional structure, must be regarded as inadequate.
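For illustration, a principal-axis factor extraction with a Fürntratt-style check can be sketched as follows. This is a hedged reconstruction, not the authors' code, and it assumes the common reading of the Fürntratt criterion: a loading counts as substantial only if it explains at least half of the variable's common variance, i.e. a²/h² ≥ 0.50.

```python
# A minimal sketch, assuming R is a correlation matrix of the subtests:
# iterative principal-axis factoring plus a Fürntratt-style check that a
# factor carries enough loadings with a_ij^2 / h_i^2 >= 0.50.
import numpy as np

def principal_axis(R, n_factors, n_iter=50):
    comm = 1.0 - 1.0 / np.diag(np.linalg.inv(R))  # initial SMC communalities
    for _ in range(n_iter):
        Rr = R.copy()
        np.fill_diagonal(Rr, comm)                # reduced correlation matrix
        vals, vecs = np.linalg.eigh(Rr)
        idx = np.argsort(vals)[::-1][:n_factors]
        loadings = vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
        comm = (loadings ** 2).sum(axis=1)        # updated communalities
    return loadings

def fuerntratt_ok(loadings, min_vars=2):
    """True per factor if >= min_vars variables satisfy a^2/h^2 >= 0.5
    (the exact required count is an assumption of this sketch)."""
    h2 = (loadings ** 2).sum(axis=1)
    ratio = loadings ** 2 / h2[:, None]
    return (ratio >= 0.5).sum(axis=0) >= min_vars
```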
Public pensions in the U.S.
(2005)
Contents:
The Public Old Age Insurance of the U.S.: historical overview; technical details; individual equity and social adequacy
The Economic Problem of Old Age: risks and economic security; old age, retirement, and individual precaution; insurance markets, market failures, and social insurance; options for public pension systems
The Problems of Social Security: the financial balance of OASDI; causes of the long-run problems; rates of return; conclusion: the case for Social Security reform
Proposed Remedies: full, partial, or no privatization?; the President's Commission to Strengthen Social Security; Kotlikoff's Personal Security System; the Diamond-Orszag Three-Part plan
We propose two strategies to characterize organisms with respect to their metabolic capabilities. The first, investigative, strategy describes metabolic networks in terms of their capability to utilize different carbon sources, resulting in the concept of carbon utilization spectra. In the second, predictive, approach minimal nutrient combinations are predicted from the structure of the metabolic networks, resulting in a characteristic nutrient profile. Both strategies allow for a quantification of functional properties of metabolic networks, making it possible to identify groups of organisms with similar functions. We investigate whether the functional description reflects the typical environments of the corresponding organisms by dividing all species into disjoint groups based on whether they are aerotolerant and/or photosynthetic. Despite differences in the underlying concepts, both measures display some common features. Closely related organisms often display a similar functional behavior, and in both cases the functional measures appear to correlate with the considered classes of environments. Carbon utilization spectra and nutrient profiles are complementary approaches toward a functional classification of organism-wide metabolic networks. Both approaches contain different information and thus yield different clusterings, both of which differ from the classical taxonomy of organisms. Our results indicate that a sophisticated combination of our approaches will allow for a quantitative description reflecting the lifestyles of organisms.
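The clustering idea behind carbon utilization spectra can be sketched as follows. This is an illustrative stand-in, not the authors' computation: it assumes each organism is reduced to a binary spectrum over carbon sources and grouped by spectrum similarity; the organisms, sources, and the Jaccard/average-linkage choices are assumptions of the sketch.

```python
# Illustrative sketch: represent each organism by a binary "carbon
# utilization spectrum" (1 = the network can use that carbon source)
# and group organisms with similar functional profiles.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

organisms = ["org_a", "org_b", "org_c", "org_d"]   # hypothetical
#                    glucose acetate citrate glycerol
spectra = np.array([[1, 1, 0, 1],
                    [1, 1, 0, 0],
                    [0, 1, 1, 0],
                    [0, 1, 1, 1]]).astype(bool)

# Jaccard distance compares which carbon sources are shared.
dist = pdist(spectra, metric="jaccard")
groups = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")
for name, g in zip(organisms, groups):
    print(name, "-> functional group", g)
```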
The DNA in living cells can be effectively damaged by high-energy radiation, which can lead to cell death. Through the ionization of water molecules, highly reactive secondary species such as low-energy electrons (LEEs) with the most probable energy around 10 eV are generated, which are able to induce DNA strand breaks via dissociative electron attachment. Absolute DNA strand break cross sections of specific DNA sequences can be efficiently determined using DNA origami nanostructures as platforms that expose the target sequences to LEEs. In this paper, we systematically study the effect of the oligonucleotide length on the strand break cross section at various irradiation energies. The present work focuses on poly-adenine sequences (d(A₄), d(A₈), d(A₁₂), d(A₁₆), and d(A₂₀)) irradiated with 5.0, 7.0, 8.4, and 10 eV electrons. Independent of the DNA length, the strand break cross section shows a maximum around 7.0 eV electron energy for all investigated oligonucleotides, confirming that strand breakage occurs through the initial formation of negative ion resonances. When going from d(A₄) to d(A₁₆), the strand break cross section increases with oligonucleotide length, but only at 7.0 and 8.4 eV, i.e., close to the maximum of the negative ion resonance, is the increase in the strand break cross section with length similar to the increase of an estimated geometrical cross section. For d(A₂₀), a markedly lower DNA strand break cross section is observed for all electron energies, which is tentatively ascribed to a conformational change of the dA₂₀ sequence. The results indicate that, although there is a general length dependence of strand break cross sections, individual nucleotides do not contribute independently to the absolute strand break cross section of the whole DNA strand. The absolute quantification of sequence-specific strand breaks will help develop a more accurate molecular-level understanding of radiation-induced DNA damage, which can then be used for optimized risk estimates in cancer radiation therapy.
Ionizing radiation is used in cancer radiation therapy to effectively damage the DNA of tumours, leading to cell death and reduction of the tumour tissue. The main damage is due to the generation of highly reactive secondary species such as low-energy electrons (LEEs), with the most probable energy around 10 eV, through the ionization of water molecules in the cells. To optimize the irradiation modality in cancer radiation therapy, a simulation of the dose distribution in the patient is required, which must be based on the fundamental physical processes of high-energy radiation in tissue. In the present work, DNA radiation damage is accurately quantified in the form of absolute cross sections for LEE-induced DNA strand breaks (SBs) between 5 and 20 eV using the DNA origami technique. This method is based on the analysis of well-defined DNA target sequences attached to DNA origami triangles with atomic force microscopy (AFM) at the single-molecule level. The present work focuses on poly-adenine sequences (5'-d(A₄), 5'-d(A₈), 5'-d(A₁₂), 5'-d(A₁₆), and 5'-d(A₂₀)) irradiated with 5.0, 7.0, 8.4, and 10 eV electrons. Independent of the DNA length, the strand break cross section shows a maximum around 7.0 eV electron energy for all investigated oligonucleotides, confirming that strand breakage occurs through the initial formation of negative ion resonances. Additionally, DNA double strand breaks from a DNA hairpin 5'-d(CAC)₄T(Bt-dT)T₂(GTG)₄ are examined for the first time and compared with those of the DNA single strands 5'-d(CAC)₄ and 5'-d(GTG)₄. Irradiation is performed in the most relevant energy range of 5 to 20 eV, with an anionic resonance maximum around 10 eV independent of the DNA sequence. There is a clear difference between σSSB and σDSB of DNA single and double strands: the strand break cross sections for ssDNA are higher than those for dsDNA by a factor of 3 at all electron energies. A further part of this work deals with the characterization and analysis of new types of radiosensitizers used in chemoradiotherapy, which selectively increase the DNA damage upon irradiation. Fluorinated DNA sequences with 2'-fluoro-2'-deoxycytidine (dFC) show an increased sensitivity at 7 and 10 eV compared to the unmodified DNA sequences, with an enhancement factor between 2.1 and 2.5. In addition, light-induced oxidative damage to 5'-d(GTG)₄- and 5'-d((CAC)₄T(Bt-dT)T₂(GTG)₄)-modified DNA origami triangles by singlet oxygen ¹O₂ is demonstrated; the singlet oxygen is generated from three photoexcited DNA groove binders, [ANT994], [ANT1083] and [Cr(ddpd)₂][BF₄]₃, illuminated in different experiments with UV-Vis light at 430, 435 and 530 nm wavelength. The singlet-oxygen-induced DNA damage could be detected in both aqueous and dry environments for [ANT1083] and [Cr(ddpd)₂][BF₄]₃.
India is worried whether its emphatically non-military policy in Afghanistan will bear fruit after the withdrawal of the ISAF troops. As one of the largest providers of development aid, India has pumped more than two billion US dollars into the country since the expulsion of the Taliban in 2001 and has so far successfully resisted demands for military assistance. Bypassing the influential neighbouring country Pakistan, India wants to profit from Afghanistan's mineral resources, its strategic location and its economic and trade potential. However, the fear of a return of the Taliban runs deep, and India's own vulnerability is considerable, as the 2008 and 2009 bomb attacks on Indian embassies in Afghanistan showed. In the long term, India will only be able to secure its interests in this region through a multilateral approach.
Renaissance des Bürgers
(2011)
Water resources from Central Asia’s mountain regions have a high relevance for the water supply of the water scarce lowlands. A good understanding of the water cycle in these mountain regions is therefore needed to develop water management strategies. Hydrological modeling helps to improve our knowledge of the regional water cycle, and it can be used to gain a better understanding of past changes or estimate future hydrologic changes in view of projected changes in climate. However, due to the scarcity of hydrometeorological data, hydrological modeling for mountain regions in Central Asia involves large uncertainties.
Addressing this problem, the first aim of this thesis was to develop hydrological modeling approaches that can increase the credibility of hydrological models in data sparse mountain regions. This was achieved by using additional data from remote sensing and atmospheric modeling. It was investigated whether spatial patterns from downscaled reanalysis data can be used for the interpolation of station-based precipitation data. This approach was compared to other precipitation estimates using a hydrologic evaluation based on hydrological modeling and a comparison of simulated and observed discharge, which demonstrated a generally good performance of this method. The study further investigated the value of satellite-derived snow cover data for model calibration. Trade-offs of good model performance in terms of discharge and snow cover were explicitly evaluated using a multiobjective optimization algorithm, and the results were contrasted with single-objective calibration and Monte Carlo simulations. The study clearly shows that the additional use of snow cover data improved the internal consistency of the hydrological model. In this context, it was further investigated for the first time how many snow cover scenes were required for hydrological model calibration.
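The trade-off evaluation described above can be illustrated with a toy sketch: each candidate parameter set is scored by two objectives, a Nash-Sutcliffe efficiency (NSE) on discharge and a pixel agreement with satellite snow-cover scenes, and Pareto-optimal candidates are kept rather than collapsing both objectives into one number. This is not the thesis code; the metrics and the simple Pareto filter are assumptions used only to convey the multiobjective idea.

```python
# Toy sketch of multiobjective calibration scoring: two objectives per
# candidate parameter set, then a Pareto filter (larger is better).
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency of simulated vs. observed discharge."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def snow_accuracy(sim_maps, obs_maps):
    """Fraction of snow-cover pixels that agree with the satellite scenes."""
    return np.mean(sim_maps == obs_maps)

def pareto_front(scores):
    """scores: (n, 2) array of (nse, snow_accuracy); returns indices of
    candidates not dominated by any other candidate."""
    keep = []
    for i, s in enumerate(scores):
        dominated = np.any(np.all(scores >= s, axis=1) &
                           np.any(scores > s, axis=1))
        if not dominated:
            keep.append(i)
    return keep
```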
The second aim of this thesis was the application of the hydrological model in order to investigate the causes of observed streamflow increases in two headwater catchments of the Tarim River over the recent decades. This simulation-based approach to trend attribution was complemented by a data-based approach. The hydrological model was calibrated to discharge and glacier mass balance data and considered changes in glacier geometry over time. The results show that in the catchment with lower glacierization, increasing precipitation and temperature both contributed to the streamflow increases, while in the catchment with higher glacierization, increasing temperatures were identified as the dominant driver.
Das PNF-Konzept
(2011)
We developed an orbitally tuned age model for the composite Chew Bahir sediment core, obtained from the Chew Bahir basin (CHB), southern Ethiopia. To account for the effects of sedimentation rate changes on the spectral expression of the orbital cycles we developed a new method: the Multi-band Wavelet Age modeling technique (MUBAWA). Using a continuous wavelet transformation, we were able to track frequency shifts that resulted from changing sedimentation rates and thus to calculate a tuned age model encompassing the last 620 kyrs. The results show good agreement with the directly dated age model that is available from the dating of volcanic ashes. We then used the XRF data from CHB to develop a new and robust humid-arid index of East African climate during the last 620 kyrs. To disentangle the relationships among the selected elements we performed a principal component analysis (PCA). In a subsequent step we applied a continuous wavelet transformation to PC1, using the directly dated age model. The resulting wavelet power spectrum, unlike an ordinary power spectrum, displays the occurrence of cycles/frequencies in time. The results highlight that the precession cycles are most strongly expressed under the 400 kyr eccentricity maximum, whereas they are only weakly expressed during the eccentricity minimum. This suggests that insolation is a key driver of the climatic variability observed at CHB throughout the last 620 kyrs. In addition, the prevalence of half-precession and obliquity signals was documented. The latter is attributed to the inter-tropical insolation gradient and is not interpreted as an imprint of high-latitude forcing on climatic changes in the tropics. In addition, a windowed analysis of variability was used to detect changes in variance over time; it showed that strong climate variability occurred especially along the transition from a dominantly insolation-controlled humid climate background state towards a predominantly dry and less insolation-controlled climate. The last chapter deals with non-linear aspects of the climate changes represented by the sediments of the CHB. We use recurrence quantification analysis to detect non-linear changes in the potassium concentration of the Chew Bahir sediment cores during the last 620 kyrs. The concentration of potassium in the sediments of the lake is subject to geochemical processes related to the evaporation rate of the lake water at the time of deposition. Based on the recurrence analysis, two types of variability could be distinguished. Type 1 represents slow variations within the precession period bandwidth of 20 kyrs and a tendency towards extreme climatic events, whereas type 2 represents fast, highly variable climatic transitions between wet and dry climate states. While type 1 variability is linked to eccentricity maxima, type 2 variability occurs during the 400 kyr eccentricity minimum. The climate history presented here shows that during high eccentricity a strongly insolation-driven climate system prevailed, whereas during low eccentricity the climate was more strongly affected by short-term variability changes. The short-term environmental changes reflected in the increased variability might have influenced the evolution, technological advances and expansion of early modern humans who lived in this region.
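The PCA-then-wavelet chain can be sketched compactly with PyWavelets. The data layout below (a placeholder XRF table on an evenly sampled age axis), the choice of a complex Morlet wavelet, and the listed target periods are assumptions of this sketch, not the published workflow.

```python
# A minimal sketch of the analysis chain: PCA on the XRF element table,
# then a continuous wavelet transform of PC1 to see which orbital periods
# are expressed at which times. Data below are placeholders.
import numpy as np
import pywt
from sklearn.decomposition import PCA

age_kyr = np.linspace(0, 620, 3100)       # hypothetical, evenly sampled ages
xrf = np.random.rand(3100, 10)            # placeholder XRF element table
pc1 = PCA(n_components=1).fit_transform(xrf)[:, 0]

dt = age_kyr[1] - age_kyr[0]              # sampling step in kyr
periods = np.array([19.0, 23.0, 41.0, 100.0, 400.0])  # orbital periods, kyr
wavelet = "cmor1.5-1.0"                   # complex Morlet wavelet
fc = pywt.central_frequency(wavelet)      # wavelet centre frequency
scales = fc * periods / dt                # one scale per target period

coeffs, freqs = pywt.cwt(pc1, scales, wavelet, sampling_period=dt)
power = np.abs(coeffs) ** 2               # wavelet power as a function of time
```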
In the Olorgesailie Basin, the temporal changes in the occurrence of stone tools, which bracket the transition from Acheulean to Middle Stone Age (MSA) technologies between 499 and 320 kyrs, could potentially correlate with the marked transition from a rather stable climate with low variability to a climate with increased variability in the CHB. We conclude that populations of early anatomically modern humans are more likely to have experienced climatic stress during episodes of low eccentricity, associated with dry and highly variable climate conditions, which may have led to technological innovation, such as the transition from the Acheulean to the Middle Stone Age.
Learning from failure
(2022)
Regression testing is a widespread practice in today's software industry to ensure software product quality. Developers derive a set of test cases, and execute them frequently to ensure that their change did not adversely affect existing functionality. As the software product and its test suite grow, the time to feedback during regression test sessions increases, and impedes programmer productivity: developers wait longer for tests to complete, and delays in fault detection render fault removal increasingly difficult.
Test case prioritization addresses the problem of long feedback loops by reordering test cases such that test cases with high failure probability run first and test case failures become actionable early in the testing process. We ask: given test execution schedules reconstructed from publicly available data, to what extent can their fault detection efficiency be improved, and which technique yields the most efficient test schedules with respect to APFD?
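For reference, APFD (Average Percentage of Faults Detected) is the standard prioritization metric: for a schedule of n tests revealing m faults, with TF_i the 1-based position of the first test that reveals fault i, APFD = 1 − (TF_1 + … + TF_m)/(n·m) + 1/(2n). A short self-contained computation (the test names are illustrative):

```python
# APFD for a given test schedule; higher is better (max approaches 1.0
# when all faults are revealed at the very start of the schedule).
def apfd(schedule, failing_tests_per_fault):
    """schedule: ordered list of test names.
    failing_tests_per_fault: list of sets; each set holds the tests
    that reveal one particular fault."""
    n, m = len(schedule), len(failing_tests_per_fault)
    position = {test: i + 1 for i, test in enumerate(schedule)}
    tf = [min(position[t] for t in fault if t in position)
          for fault in failing_tests_per_fault]
    return 1.0 - sum(tf) / (n * m) + 1.0 / (2 * n)

# Running the failing test first scores much higher:
print(apfd(["t3", "t1", "t2"], [{"t3"}]))  # ~0.83
print(apfd(["t1", "t2", "t3"], [{"t3"}]))  # ~0.17
```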
To this end, we recover 6,200 regression test sessions from the build log files of Travis CI, a popular continuous integration service, and gather 62,000 accompanying changelists. We evaluate the efficiency of current test schedules, and examine the prioritization results of state-of-the-art lightweight, history-based heuristics. We propose and evaluate a novel set of prioritization algorithms, which connect software changes and test failures in a matrix-like data structure.
Our studies indicate that the optimization potential is substantial, because the existing test plans score only 30% APFD. The predictive power of past test failures proves to be outstanding: simple heuristics, such as repeating tests with failures in recent sessions, result in efficiency scores of 95% APFD. The best-performing matrix-based heuristic achieves a similar score of 92.5% APFD. In contrast to prior approaches, we argue that matrix-based techniques are useful beyond the scope of effective prioritization, and enable a number of use cases involving software maintenance.
We validate our findings from continuous integration processes by extending a continuous testing tool within development environments with test prioritization capabilities, and pose further research questions. We think that our findings are suited to propel the adoption of (continuous) testing practices, and that programmers' toolboxes should contain test prioritization as an essential productivity tool.
Contents:
1 Overview of the content and tenor of the text "Sozialgeographie"
1.1 Structure and contents of the textbook
1.2 Methodological and didactic characteristics of the text
2 A frame for the discussion from the sociology of science
3 University teaching as a science-critical discourse
3.1 "Close reading": insights into ways of dealing with bodies of knowledge from earlier times
3.2 Empirical research (methodological-technical approaches to reality)
3.3 What should it be called: social, human or anthropogeography?
3.4 Space-related, space-centred or spatial thinking?
3.5 The spatial in explanatory contexts
3.6 Werlen's postmodernity: the world we live in?
3.7 Is Benno Werlen a constructivist?
4 Lighthouses, milestones and trenches in the scientific landscape
4.1 Human geography as a teleological process
4.2 The "Sozialgeographie" in methods-oriented university teaching
4.3 Outlook: from succession to cooperation
Reviewed work: Kaldor, Mary: Neue und alte Kriege: organisierte Gewalt im Zeitalter der Globalisierung / Mary Kaldor. Translated from the English by Michael Adrian. Frankfurt am Main: Suhrkamp, 2000. 278 pp.: graphs, tables. (Edition Zweite Moderne). Uniform title: New and old wars <German>. ISBN 3-518-41131-4
Contents:
- A milestone in the documentation of the discipline's history
- A suggestion and a note on possible effects of the collection
- On the selection and representativeness of the texts
- Where are the three Ws, where the two Bs?
- The image of geography as an island ...
- ... and as a scientific discipline with weakly interconnected, segregated cultures of thought ...
- ... within a metatheoretical framing that is in part strongly normatively tinted
- For a revival of a well-founded intradisciplinary culture of conflict
Warschauer Topographie
(2019)
In the search for ways to expand continuing education for computer science teachers, the use of virtual learning spaces suggests itself. This paper reports on a project in which an exemplary virtual learning space for collaborative learning in in-service teacher training in computer science was designed, tested and evaluated on a theory-guided basis. The results obtained on usage behaviour can be helpful for further e-learning projects in teacher training. The focus of this paper is on the design of the learning space, taking into account the special situation of computer science teachers, not on the didactic preparation of the learning unit in question.
The thesis first analyses and classifies the concept of services of general interest (Daseinsvorsorge) and their provision by the state. The focus of the examination is on energy supply as a classic task of state services of general interest.
Furthermore, the thesis examines the transition, initiated by the liberalization of the energy supply in 1998, from so-called natural monopolies to a competitive system. It is shown that the introduction of competition has brought about neither the hoped-for cost reductions nor the demise of municipal energy suppliers feared by critics. Instead of free price formation through competition, there has been a de facto shift of the formerly state-administered approval of energy prices to the courts, which are, however, not designed for this task. Municipal utilities, by contrast, have held their own so well in the competitive energy supply that for some time a trend towards the remunicipalization of energy supply at the municipal level has been observable.
This evident desire for greater municipal influence on local energy supply runs counter to the current legal framework for the award of energy concessions in the form of § 46 EnWG and its interpretation by the case law of the civil courts. The thesis shows that, from the beginning of liberalization, municipal influence on the local award of concessions has been curtailed step by step and steadily, so that at present a state of erosion has been reached which must be regarded as an impermissible encroachment on the protected core of the guarantee of municipal self-government within the meaning of Art. 28 II GG.
The mobile-immobile model (MIM) has been established in geoscience in the context of contaminant transport in groundwater. Here the tracer particles effectively immobilise, e.g., due to diffusion into dead-end pores or sorption. The main idea of the MIM is to split the total particle density into a mobile and an immobile density. Individual tracers switch between the mobile and immobile state following a two-state telegraph process, i.e., the residence times in each state are distributed exponentially. In geoscience the focus lies on the breakthrough curve (BTC), which is the concentration at a fixed location over time. We apply the MIM to biological experiments with a special focus on anomalous scaling regimes of the mean squared displacement (MSD) and non-Gaussian displacement distributions. As an exemplary system, we have analysed the motion of tau proteins that diffuse freely inside the axons of neurons. Their free diffusion corresponds to the mobile state of the MIM. Tau proteins stochastically bind to microtubules, which effectively immobilises them until they unbind and continue diffusing. Long immobilisation durations compared to the mobile durations give rise to distinct non-Gaussian, Laplace-shaped distributions, accompanied by a plateau in the MSD for initially mobile tracer particles at relevant intermediate timescales. An equilibrium fraction of initially mobile tracers gives rise to non-Gaussian displacements at intermediate timescales, while the MSD remains linear at all times. In another setting, biomolecules diffuse in a biosensor and transiently bind to specific receptors, where advection becomes relevant in the mobile state. The plateau in the MSD observed for the advection-free setting and long immobilisation durations persists also in the case with advection. We find a new, clear regime of anomalous diffusion with non-Gaussian distributions and a cubic scaling of the MSD. This regime emerges for initially mobile and for initially immobile tracers. For an equilibrium fraction of initially mobile tracers we observe an intermittent ballistic scaling of the MSD. The long-time effective diffusion coefficient is enhanced by advection, which we explain physically via the variance of the mobile durations. Finally, we generalize the MIM to incorporate arbitrary immobilisation time distributions and focus on a Mittag-Leffler immobilisation time distribution with power-law tail ∼ t^(−1−μ) with 0 < μ < 1 and diverging mean immobilisation durations. A fit of our model to the BTC of experimental data from tracer particles in aquifers matches the BTC including the power-law tail. We use the fit parameters for plotting the displacement distributions and the MSD. We find Gaussian normal diffusion at short times and a long-time power-law decay of the mobile mass accompanied by anomalous diffusion at long times. The long-time diffusion is subdiffusive in the advection-free setting, while it is either subdiffusive for 0 < μ < 1/2 or superdiffusive for 1/2 < μ < 1 when advection is present. In the long-time limit we show the equivalence of our model to a bi-fractional diffusion equation.
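The advection-free MIM with exponential residence times is easy to simulate. The sketch below, with purely illustrative parameter values, alternates each particle between Brownian (mobile) and frozen (immobile) phases; for an initially mobile ensemble with long immobilisations the ensemble MSD develops the intermediate-time plateau described above.

```python
# A minimal simulation sketch of the advection-free MIM: particles
# alternate between mobile (Brownian) and immobile phases with
# exponentially distributed residence times. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, steps, dt = 5000, 2000, 0.01
D, tau_m, tau_im = 1.0, 1.0, 50.0   # diffusivity, mean mobile/immobile times

x = np.zeros(n)
mobile = np.ones(n, dtype=bool)      # initially mobile ensemble
clock = rng.exponential(tau_m, n)    # time remaining in the current state
msd = np.empty(steps)

for s in range(steps):
    # Brownian step only for currently mobile particles.
    x[mobile] += np.sqrt(2 * D * dt) * rng.standard_normal(mobile.sum())
    clock -= dt
    switch = clock <= 0
    mobile[switch] = ~mobile[switch]  # telegraph-process state change
    clock[switch] = rng.exponential(
        np.where(mobile[switch], tau_m, tau_im))
    msd[s] = np.mean(x ** 2)          # ensemble MSD; plateau appears when
                                      # tau_im >> tau_m at intermediate times
```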
1. The reception of the French Revolution in the German states
2. German intellectuals and the outbreak of the French Revolution
3. On the revolutionary understanding of democracy during the "Grande Terreur"
4. German criticism of the French Revolution
5. Friedrich Schiller's reception of the events in France
6. The different attitudes of German intellectuals towards the Revolution
7. Christoph Martin Wieland on fundamental differences to France
8. Georg Forster and the Mainz Republic
9. A characterization of German intellectuals in the course of the "Grande Terreur"
10. Goethe and the French Revolution
11. Summary
A newly developed azobenzene-containing material based on a supramolecular concept is investigated with respect to its structure formation during holographic exposure at 488 nm. The focus is on one-dimensional, sinusoidal surface reliefs with periods below 500 nm. It is shown how the degree of crosslinking of the photosensitive layer influences structure formation in this size range. To maximize the structure depth, process parameters of the exposure as well as material parameters are systematically varied. Under standard conditions and moderate exposure intensities of about 200 mW/cm², structure depths of up to 80 nm form within a few minutes at a period of 400 nm. By tuning material parameters such as surface tension and viscosity, the maximum structure depth is doubled to 160 nm. The formation of two-dimensional gratings is also investigated by means of multiple exposures. The original structures are copied in a moulding process and transferred into layers of UV-curable polymers. The moulding leads to a slight deterioration of the surface quality and a decrease of the structure depth; this loss is reduced by lowering the process temperature. Using the copied surface gratings, second-order organic distributed feedback (DFB) lasers are fabricated in order to investigate the influence of grating parameters on the emission properties of these lasers. To this end, the optical gain properties of selected organic emitter materials are first characterized using the variable stripe length method. Polystyrene (PS) doped with the laser dye pyrromethene 567 (PM567) shows a comparatively low gain threshold of 50 µJ/cm² at about 575 nm despite its low, concentration-limited absorption. The active guest-host system of the conjugated polymers MEH-PPV and F8BT* exhibits a high absorption and a low gain threshold of 2.5 µJ/cm² at 630 nm. This behaviour is also reflected in the emission properties of the DFB lasers fabricated from these materials. The thicknesses of the active layers lie in the range of hundreds of nanometres and are adjusted such that only the fundamental transverse modes can propagate in the waveguide. The grating periods are chosen such that a light mode lies within the gain region of the emitter material. With FWHM values down to 0.3 nm, the emission lines of the lasers are spectrally very narrow, indicating a very good grating quality. The investigations yield minimum laser thresholds and maximum slope efficiencies of 4.0 µJ/cm² and 8.4% for MEH-PPV in F8BT* (at about 640 nm) and 80 µJ/cm² and 0.9% for PM567 in PS (at about 575 nm). Increasing the structure depth from 40 nm to 80 nm in F8BT* lasers doped with MEH-PPV leads to a marked increase in the outcoupled energy and the slope efficiency, and to a slight decrease of the laser threshold. This is a result of the increased coupling between the laser mode and the grating. The emission of DFB lasers with two-dimensional surface gratings shows reduced divergence but no influence on the laser threshold. Finally, the photostability of DFB lasers is measured under various conditions. Embedding a conjugated polymer in an active matrix and operating in a nitrogen atmosphere increase the lifetime to over one million pulses.
By combining surface gratings in PDMS films with electroactive substrates, an electrically controllable deformation of the diffraction grating is achieved and transferred to a DFB laser. The voltage-induced deformation is first characterized in diffraction experiments and an optimal operating point is determined. With the two elastomers SEBS12 and VHB4910, maximum period changes of 1.3% and 3.4%, respectively, are achieved in the gratings at a control voltage of 2 kV. The difference results from the different elastic moduli of the materials. Transferred to DFB lasers, a variation of the grating period perpendicular to the grating lines results in a continuous shift of the emission wavelength. With a voltage signal of 3.25 kV, the narrow-band emission of an elastic DFB laser is continuously shifted by almost 50 nm, from 604 nm to 557 nm. From the deformation behaviour of both the bare diffraction gratings and the lasers, conclusions are drawn about the elasticity of the materials used, allowing improvements of the devices.
Was Bürger bem(a)erken
(2013)
Embedded in the current open government debate, electronic citizen services continue to gain importance. The Brandenburg citizen service Maerker is counted among the pioneers of internet-based citizen services, as it offers a simple way for citizens and the administration to communicate about infrastructure problems in the municipality. On the basis of expert interviews and a survey among the participating municipalities, the authors evaluate the introduction and implementation of Maerker in Brandenburg. The results show, besides broad acceptance and approval among the actors involved, untapped potential for improving processes within the administration. This article presents the results of the evaluation of Maerker and gives an outlook on further development potential.
Current performance management research has already examined a multitude of factors that influence the purposeful use of performance indicators. Administrative culture has played only a subordinate role in this research. The present study uses data from a survey of all county-level cities (kreisfreie Städte) in Germany to examine the relationship between different types of culture and the use of performance indicators. Grid/Group-Analysis is used as the analytical scheme for administrative culture. The results are in part surprising: individualistic cultures appear to have a negative influence, hierarchical cultures a positive one. Nevertheless, the lack of a suitable operationalization scheme is criticized.
Das Streben nach Qualität
(2015)
Quality has become an intensively discussed topic in the health care sector in recent years. After hygiene and treatment scandals, the hospital sector in particular is under pressure. And although a whole series of mechanisms and regulations has been introduced over the past 15 years, the field has only partially been researched. This article provides an overview of the complexity of the concept of quality. It then presents the landscape of instruments for quality control and quality assurance in the German hospital sector. Findings from international research provide deeper insight into how these instruments work and highlight remaining research gaps.
The public encounter
(2019)
This thesis puts the citizen-state interaction at its center. Building on a comprehensive model incorporating various perspectives on this interaction, I derive selected research gaps. A focal role is played by citizens' administrative literacy, the competences and knowledge necessary to interact successfully with public organizations. The first article elaborates on the different dimensions of administrative literacy and develops a survey instrument to assess them. The second study shows that public employees change their behavior according to the competences that citizens display during public encounters. They treat preferentially those citizens who are well prepared and able to persuade them of their application's potential; such citizens signal a higher potential to meet bureaucratic success criteria, which leads to cream-skimming behavior by the employees. The third article examines the dynamics of employees' communication strategies when recovering from a service failure. The study finds that different explanation strategies yield different effects on the client's frustration. While accepting responsibility and explaining the reasons for a failure alleviates frustration and anger, refusing responsibility has no effect on the client's frustration or even reinforces it. The results emphasize the different dynamics that characterize the nature of citizen-state interactions and how they establish their short- and long-term outcomes.
For some years, obesity has been regarded as one of the most frequent chronic diseases of childhood and adolescence. However, which factors lead to a successful treatment of obesity in childhood and adolescence is still not sufficiently understood. An important, yet so far largely neglected, factor that may guide the course of therapy is the subjective illness concept of the affected children. The most significant theoretical model describing the influence of individual illness representations on a person's regulation process in dealing with illness is the Common Sense Model of Illness Representation (CSM) by Howard Leventhal. The aim of the present work was to assess the subjective illness concepts of obese children and to analyse their influence on the regulation process. In a first study, a questionnaire for assessing subjective illness concepts was developed using data from 168 obese children aged 8 to 12 years. The results indicate that the questionnaire can be regarded as reliable and valid. With the help of this questionnaire it could be shown that obese children hold constructs about their illness which are stored in distinct dimensions. The initial illness concepts of obese children found here form a homogeneous picture in line with expectations. In a second study, the subjective illness concepts of obese children, their coping strategies, and health- and illness-related criterion variables were examined. The children were surveyed before the beginning of inpatient rehabilitation (T1), at the end of rehabilitation (T2), and six months after the end of rehabilitation (T3). Data for all three measurement points are available for 107 children. A relationship between illness concepts, coping strategies and specific criterion variables in obese children could be demonstrated. The analysis of the causal pathways showed that children's illness concepts can influence the criterion variables not only indirectly via coping strategies but, above all, directly. The influence of the initial illness concepts of obese children was confirmed in both cross-sectional and longitudinal designs. In addition, manifold influences of changes in the subjective illness concepts during therapy were found. Changes in the illness concepts affect both the individual coping strategies at the end of rehabilitation in the medium term and the obesity-specific criterion variables weight, nutrition, physical activity and quality of life in the longer term. The findings strengthen the relevance and the potential of the targeted modification of adaptive and maladaptive illness concepts within the inpatient treatment of childhood obesity. Moreover, it could be confirmed that subjective illness concepts and their change during therapy can make a relevant contribution to predicting children's therapy success over a longer period.
The traditional way in computer science education is to define competences either normatively by a group of experts or by deriving them from an educational standard in an external field. This article presents a novel and alternative approach that uses the method of qualitative content analysis (QI). The aim was to derive key competences of computer science from already established and proven didactic approaches of computer science education. To this end, a list of possible candidate competences was first generated from a number of textbooks on computer science education. This list was used as a QI category system with which six different didactic approaches were analysed. A final refinement step consisted of checking which of the identified competences are applicable in all four core areas of computer science (theoretical, technical, practical and applied computer science). This method was developed and implemented exemplarily for computer science education at school, but it is equally suitable for identifying key competences in other areas, such as computer science education at university level, and is therefore briefly presented here.
Since the 1960s, the German-speaking world has been discussing the notions of key qualification and (key) competence, discussions which have also reached computer science education since around 2000. The discussions in the various disciplines and their significance for computer science education are the subject of the first part of this dissertation. Framework models for structuring and classifying competences are designed that can be used by all disciplines. The second part shows a methodological way to derive key competences without having to proceed normatively. For this purpose, the method of qualitative content analysis (QI) is applied to approaches from computer science education. The resulting competences are refined in further steps and placed into the previously designed framework models. The result is a set of key competences of computer science which draw a specific picture of the discipline and can be used to analyse existing curricula. In addition, the procedure shows a way in which key competences can generally be derived in a non-normative manner.
At present we face the following situation at universities: students come to university with different knowledge and competences to enrol in computer-science-related degree programmes. This has to be counteracted in university courses in order to reach a common educational goal. For some students this often means an additional learning burden in an already very time-consuming degree programme, which not infrequently leads to dropping out. Another problem is the lack of transparency regarding the subject matter of computer science studies: some prospective students come to university with a picture of computer science that deviates from reality, while others may decide against studying computer science because they are not aware that the programme could be interesting for them. This article presents a proposed solution based on a competence framework model, with whose help an improvement of the situation at universities can be achieved.
For the interactive construction of CSG models, understanding the layout of a model is essential for its efficient manipulation. To understand the position and orientation of the aggregated components of a CSG model, we need to perceive its visible and occluded parts as a whole. Hence, transparency and enhanced outlines are key techniques to assist comprehension. We present a novel real-time rendering technique for visualizing the design and spatial assembly of CSG models. As enabling technology, we combine an image-space CSG rendering algorithm with blueprint rendering. Blueprint rendering applies depth peeling to extract layers of ordered depth from polygonal models and then composes them in sorted order, facilitating a clear insight into the models. We develop a solution for implementing depth peeling for CSG models considering their depth complexity. Capturing the surface colors of each layer and later combining the results allows for generating order-independent transparency as one major rendering technique for CSG models. We further define visually important edges for CSG models and integrate an image-space edge-enhancement technique for detecting them in each layer. In this way, we extract visually important edges that are directly and indirectly visible to outline a model's layout. Combining edges with transparency rendering finally generates edge-enhanced depictions of image-based CSG models and allows us to realize their complex spatial assembly.
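The compositing step that turns depth-peeled layers into order-independent transparency can be sketched on the CPU with numpy; the GPU depth-peeling passes themselves are not reproduced here. Given per-pixel RGBA layers already extracted in front-to-back depth order, front-to-back "under" blending accumulates color and opacity:

```python
# A small sketch of the compositing step only: blend depth-peeled RGBA
# layers, sorted front to back, with the "under" operator.
import numpy as np

def composite_front_to_back(layers):
    """layers: list of (H, W, 4) float arrays, RGBA in [0, 1],
    sorted front to back."""
    h, w, _ = layers[0].shape
    color = np.zeros((h, w, 3))
    alpha = np.zeros((h, w, 1))
    for layer in layers:
        src_rgb, src_a = layer[..., :3], layer[..., 3:4]
        # "Under" operator: a deeper layer contributes only where the
        # pixels accumulated so far are still (partially) transparent.
        color += (1.0 - alpha) * src_a * src_rgb
        alpha += (1.0 - alpha) * src_a
    return color, alpha
```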
Flassbeck's article proposes to use demand management to enhance growth in Germany in order to increase employment. The author considers this kind of policy to produce positive, but merely short-term, effects. In the long run, he argues, government measures such as the deregulation of the labour market are the necessary strategies for long-term growth.
This contribution deals with the contact-induced or borrowed coding of evidentiality in Paraguayan Spanish. It focuses in particular on the use of the Guaraní particle ndaje in Paraguayan newspaper Spanish. In this context, an attempt is made to classify the linguistic phenomenon, and a qualitative corpus analysis is carried out.
Exil oder Heimat?
(2013)
At the end of the 1960s, the People's Republic of Poland found itself in an economic and domestic political crisis. The regime in Warsaw took the Six-Day War of 1967 between Israel and the Arab states as an occasion to make an example of the few tens of thousands of Jews who had remained in the country after the Shoah and to brand them as political scapegoats. In the wake of the officially launched "anti-Zionist campaign", more than 3,000 Polish Jews chose Israel as their new homeland. There they encountered a society entangled in numerous conflicts: the war against the neighbouring Arab states, the occupation of the Palestinian territories, and the domestic tensions between European and Oriental, religious and secular Jews. Besides placing the migration in its historical context, the author also analyses it from the perspective of the psychology of migration. The experiences described are illustrated in the accompanying documentary film "There Is No Return To Egypt", in which witnesses of this so-called 1968 migration speak from their present-day surroundings in Israel.
DeepGeoMap
(2021)
In recent years, deep learning has improved the way remote sensing data are processed, and the classification of hyperspectral data is no exception. 2D or 3D convolutional neural networks have outperformed classical algorithms on hyperspectral image classification in many cases. However, geological hyperspectral image classification poses particular challenges, often involving spatially more complex objects than other disciplines of hyperspectral imaging, where objects tend to be spatially more homogeneous (e.g., in industrial applications or aerial urban and farmland cover types). In geological hyperspectral image classification, classical algorithms that focus on the spectral domain therefore still often show higher accuracy, more sensible results, or greater flexibility owing to their independence from spatial information. In the framework of this thesis, inspired by classical machine learning algorithms that focus on the spectral domain, such as the binary feature fitting (BFF) and EnGeoMap algorithms, the author proposes, develops, tests, and discusses a novel, spectrally focused, spatial-information-independent, deep multi-layer convolutional neural network, named 'DeepGeoMap', for hyperspectral geological data classification. More specifically, the architecture of DeepGeoMap uses a sequential series of 1D convolutional layers and fully connected dense layers, with rectified linear unit and softmax activations, 1D max and 1D global average pooling layers, dropout to prevent overfitting, and a categorical cross-entropy loss function with Adam gradient descent optimization. DeepGeoMap was realized using Python 3.7 and the machine and deep learning framework TensorFlow with graphics processing unit (GPU) acceleration. This 1D, spectrally focused architecture allows DeepGeoMap models to be trained with hyperspectral laboratory image data of geochemically validated samples (e.g., ground truth samples for aerial or mine face images); the laboratory-trained model can then classify other or larger scenes, analogous to classical algorithms that use a spectral library of validated samples for image classification. The classification capabilities of DeepGeoMap were tested using two geological hyperspectral image data sets, both geochemically validated, one based on iron ore and the other on copper ore samples. The copper ore laboratory data set was used to train a DeepGeoMap model for the classification and analysis of a larger mine face scene in the Republic of Cyprus, where the samples originated. Additionally, a benchmark satellite-based data set, the Indian Pines data set, was used for training and testing. The classification accuracy of DeepGeoMap was compared to classical algorithms and other convolutional neural networks. DeepGeoMap achieved higher accuracies and outperformed these classical algorithms and other neural networks in the geological hyperspectral image classification test cases. The spectral focus of DeepGeoMap proved to be its most considerable advantage over spectral-spatial classifiers such as 2D or 3D neural networks, enabling DeepGeoMap models to be trained independently of spatial entities, shapes, and resolutions.
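A minimal sketch of such a spectrally focused 1D architecture in TensorFlow/Keras, assuming only the ingredients named in the abstract (stacked Conv1D layers, ReLU/softmax, 1D max and global average pooling, dropout, categorical cross-entropy with Adam); the layer sizes are illustrative, not the published configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_spectral_cnn(n_bands, n_classes):
    """1D CNN operating on a single spectrum per sample."""
    model = models.Sequential([
        layers.Conv1D(32, 7, activation="relu",
                      input_shape=(n_bands, 1)),   # spectral axis only
        layers.MaxPooling1D(2),                    # 1D max pooling
        layers.Conv1D(64, 5, activation="relu"),
        layers.Conv1D(128, 3, activation="relu"),
        layers.GlobalAveragePooling1D(),           # 1D global average pooling
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),                       # guards against overfitting
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_spectral_cnn(n_bands=200, n_classes=16)
model.summary()
```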
The origin and symmetry of the observed global magnetic fields in galaxies are not fully understood. We intend to clarify the question of the magnetic field origin and investigate the global action of the magneto-rotational instability (MRI) in galactic disks with the help of 3D global magneto-hydrodynamical (MHD) simulations. The calculations were done with the time-stepping ZEUS 3D code using massive parallelization. The alpha-Omega dynamo is known to be one of the most efficient mechanisms to reproduce the observed global galactic fields. The presence of strong turbulence is a prerequisite for the alpha-Omega dynamo generation of the regular magnetic fields, but the observed magnitude and spatial distribution of turbulence in galaxies present unsolved problems to theoreticians. The MRI is known to be a fast and powerful mechanism to generate MHD turbulence and to amplify magnetic fields. We find that the critical wavelength increases as the magnetic field grows during the simulation, transporting energy from the critical to larger scales. The final structure, if not disrupted by supernova explosions, consists of 'thin layers' with a thickness of about 100 pc. An important outcome of all simulations is the magnitude of the horizontal components of the Reynolds and Maxwell stresses. The result is that the MRI-driven turbulence is magnetically dominated: its magnetic energy exceeds the kinetic energy by a factor of 4. The Reynolds stress is small, less than 1% of the Maxwell stress; the angular momentum transport is thus completely dominated by the magnetic field fluctuations. The volume-averaged pitch angle is always negative, with a magnitude of about -30 degrees. The non-saturated MRI regime lasts long enough to fill the time between galactic encounters, independently of the strength and geometry of the initial field. Therefore, we may claim that the observed pitch angles can be due to MRI action in the gaseous galactic disks. The MRI is also shown to be a very fast instability, with an e-folding time on the order of one rotation period. Steep rotation curves imply stronger growth of the magnetic energy due to the MRI. The global e-folding time ranges from 44 Myr to 100 Myr depending on the rotation profile. Therefore, the MRI can explain the existence of rather large magnetic fields in very young galaxies. We also reproduced the observed rms velocities of interstellar turbulence as observed in NGC 1058. We have shown with the simulations that an averaged velocity dispersion of about 5 km/s is typical for MRI-driven turbulence in galaxies, which agrees with observations. The dispersion increases outside the disk plane, whereas supernova-driven turbulence is found to be concentrated within the disk; in our simulations the velocity dispersion increases severalfold with height. Additional support for the dynamo alpha-effect in galaxies is the ability of the MRI to produce a mix of quadrupole and dipole symmetries from purely vertical seed fields, which also solves the seed-field problem of galactic dynamo theory. The interaction of the magneto-rotational instability with random supernova explosions remains an open question. It would be desirable to run the simulations with supernova explosions included: they would disrupt the calm ring structure produced by the global MRI, perhaps even to the point where the MRI could no longer be held responsible for the turbulence.
We present an analysis of student language input in a corpus of tutoring dialogue in the domain of symbolic differentiation. Our focus on procedural tutoring makes the dialogue comparable to collaborative problem-solving (CPS). Existing CPS models describe the process of negotiating plans and goals, which also fits procedural tutoring. However, we provide a classification of student utterances and corpus annotation which shows that approximately 28% of non-trivial student language in this corpus is not accounted for by existing models, and addresses other functions, such as evaluating past actions or correcting mistakes. Our analysis can be used as a foundation for improving models of tutoring dialogue.
The correctness of model transformations is a crucial element of model-driven engineering of high-quality software. In particular, behavior preservation is the most important correctness property, as it avoids the introduction of semantic errors during the model-driven engineering process. Behavior preservation verification techniques either show that specific properties are preserved or, more generally and more ambitiously, show some kind of behavioral equivalence or refinement between the source and target models of the transformation. Both kinds of behavior preservation verification goals have been addressed with automatic tool support at the instance level, i.e. for a given source and target model specified by the model transformation. Until now, however, no automatic verification approach has been available at the transformation level, i.e. for all source and target models specified by the model transformation.
In this report, we extend our results presented in [27] and outline a new, more sophisticated approach to the automatic verification of behavior preservation, captured by bisimulation and simulation, for model transformations specified by triple graph grammars with semantic definitions given by graph transformation rules. In particular, we show that the behavior preservation problem can be reduced to invariant checking for graph transformations and that the resulting checking problem can be addressed by our own invariant checker, even for a complex example in which a sequence chart is transformed into communicating automata. We further discuss today's limitations of invariant checking for graph transformations and motivate further lines of future work in this direction.
While offering significant expressive power, graph transformation systems often come with rather limited capabilities for automated analysis, particularly if systems with many possible initial graphs and large or infinite state spaces are concerned. One approach that tries to overcome these limitations is inductive invariant checking. However, the verification of inductive invariants often requires extensive knowledge about the system in question and faces the approach-inherent challenges of locality and lack of context.
To address this, the report discusses k-inductive invariant checking for graph transformation systems as a generalization of inductive invariant checking. The additional context acquired by taking multiple (k) steps into account is the key difference from inductive invariant checking and is often enough to establish the desired invariants without requiring the iterative development of additional properties.
To analyze possibly infinite systems in a finite fashion, we introduce a symbolic encoding for transformation traces using a restricted form of nested application conditions. As its central contribution, this report then presents a formal approach and algorithm to verify graph constraints as k-inductive invariants. We prove the approach's correctness and demonstrate its applicability by means of several examples evaluated with a prototypical implementation of our algorithm.
Graph transformation systems are a powerful formal model for capturing model transformations and systems with infinite state spaces, among others. However, this expressive power comes at the cost of rather limited automated analysis capabilities. The general case of unboundedly many initial graphs or infinite state spaces is only supported by approaches with rather limited scalability or expressiveness. In this report we improve an existing approach for the automated verification of inductive invariants of graph transformation systems. By employing partial negative application conditions to represent and check many alternative conditions in a more compact manner, we can check examples with rules and constraints of substantially higher complexity. We also substantially extend the expressive power by supporting more complex negative application conditions, and we provide higher accuracy by employing advanced implication checks. The improvements are evaluated and compared with another applicable tool on three case studies.
With the rising complexity of today's software and hardware systems and the hypothesized increase in autonomous, intelligent, and self-* systems, developing correct systems remains an important challenge. Testing, although an important part of the development and maintenance process, usually cannot establish the definite correctness of a software or hardware system, especially when systems have arbitrarily large or infinite state spaces or an infinite number of initial states. This is where formal verification comes in: given a representation of the system in question in a formal framework, verification approaches and tools can be used to establish the system's adherence to its similarly formalized specification, and to complement testing.
One such formal framework is the field of graphs and graph transformation systems. Both are powerful formalisms with well-established foundations and ongoing research that can be used to describe complex hardware or software systems with varying degrees of abstraction. Since their inception in the 1970s, graph transformation systems have continuously evolved; related research spans extensions of expressive power, graph algorithms, and their implementation, application scenarios, or verification approaches, to name just a few topics.
This thesis focuses on a verification approach for graph transformation systems called k-inductive invariant checking, which is an extension of previous work on 1-inductive invariant checking. Instead of exhaustively computing a system's state space, which is a common approach in model checking, 1-inductive invariant checking symbolically analyzes graph transformation rules - i.e. system behavior - in order to draw conclusions with respect to the validity of graph constraints in the system's state space. The approach is based on an inductive argument: if a system's initial state satisfies a graph constraint and if all rules preserve that constraint's validity, we can conclude the constraint's validity in the system's entire state space - without having to compute it.
However, inductive invariant checking also comes with a specific drawback: the locality of graph transformation rules leads to a lack of context information during the symbolic analysis of potential rule applications. This thesis argues that this lack of context can be partly addressed by using k-induction instead of 1-induction. A k-inductive invariant is a graph constraint whose validity in a path of k-1 rule applications implies its validity after any subsequent rule application, as opposed to a 1-inductive invariant, where only one rule application is taken into account. Considering a path of transformations thus accumulates more context about the applications of the graph rules.
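In formal terms, the property sketched above can be written as follows (a sketch with notation chosen here, not taken from the thesis):

```latex
% C is a k-inductive invariant if, for every transformation sequence
%   G_0 \Rightarrow G_1 \Rightarrow \dots \Rightarrow G_k,
% validity of C in the first k graphs implies validity in the last:
\big( G_0 \models C \;\wedge\; \dots \;\wedge\; G_{k-1} \models C \big)
  \;\Longrightarrow\; G_k \models C
% For k = 1 this specializes to the 1-inductive case: if the initial
% graph satisfies C and every rule application preserves C, then C
% holds in the entire state space.
```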
As such, this thesis extends existing research and implementation on 1-inductive invariant checking for graph transformation systems to k-induction. In addition, it proposes a technique to perform the base case of the inductive argument in a symbolic fashion, which allows verification of systems with an infinite set of initial states. Both k-inductive invariant checking and its base case are described in formal terms. Based on that, this thesis formulates theorems and constructions to apply this general verification approach for typed graph transformation systems and nested graph constraints - and to formally prove the approach's correctness.
Since unrestricted graph constraints may lead to non-termination or impracticably high execution times given a hypothetical implementation, this thesis also presents a restricted verification approach, which limits the form of graph transformation systems and graph constraints. It is formalized, proven correct, and its procedures terminate by construction. This restricted approach has been implemented in an automated tool and has been evaluated with respect to its applicability to test cases, its performance, and its degree of completeness.
We study the Dirichlet problem in a bounded plane domain for the heat equation with a small parameter multiplying the derivative in t. The behaviour of the solution at characteristic points of the boundary is of special interest. The behaviour is well understood if a characteristic line is tangent to the boundary with contact degree at least 2. We allow the boundary not only to have contact of degree less than 2 with a characteristic line but also to have a cuspidal singularity at a characteristic point. We construct an asymptotic solution of the problem near the characteristic point to describe how the boundary layer degenerates.
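A natural model formulation of the problem described here, with notation chosen for illustration, is the following:

```latex
% Dirichlet problem in a bounded plane domain \Omega of the (x,t)-plane,
% with small parameter \varepsilon \in (0,1] on the time derivative:
\varepsilon\,\partial_t u - \partial_x^2 u = f \quad \text{in } \Omega,
\qquad u = g \quad \text{on } \partial\Omega
% Characteristic points are the boundary points where characteristic
% lines t = \mathrm{const} touch \partial\Omega; the boundary layer
% near such points is the object of the asymptotic analysis.
```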
We develop a new approach to the analysis of pseudodifferential operators with a small parameter ε in (0,1] on a compact smooth manifold X. The standard approach assumes that the operators act in Sobolev spaces whose norms depend on ε. Instead, we consider the cylinder [0,1] x X over X and study pseudodifferential operators on the cylinder which act, by their very nature, on functions depending on ε as well. The action in ε reduces to multiplication by functions of this variable and does not involve any differentiation. As but one result we mention the asymptotics of solutions to singular perturbation problems for small values of ε.
In this thesis we consider diverse aspects of the existence and correctness of asymptotic solutions to elliptic differential and pseudodifferential equations. We begin our studies with the case of a general elliptic boundary value problem in partial derivatives. A small parameter enters the coefficients of the main equation as well as the boundary conditions. Such equations have already been investigated to some satisfaction, but certain theoretical deficiencies remain. Our aim is to present a general theory of elliptic problems with a small parameter. For this purpose we examine in detail the case of a bounded domain with smooth boundary. First of all, we construct formal solutions as power series in the small parameter. Then we examine their asymptotic properties. It suffices to establish sharp two-sided a priori estimates for the operators of the boundary value problems which are uniform in the small parameter. Such estimates fail to hold in the function spaces used in classical elliptic theory. To circumvent this limitation we exploit norms depending on the small parameter for functions defined on a bounded domain. Similar norms are widely used in the literature, but their properties have not been investigated extensively. Our theoretical investigation shows that the usual elliptic techniques can be correctly carried over to these norms. The obtained results also allow one to extend the norms to compact manifolds with boundary. We complete this investigation by formulating algebraic conditions on the operators and showing their equivalence to the existence of a priori estimates. In a second step, we extend the concept of ellipticity with a small parameter to more general classes of operators. First, we want to compare the difference in asymptotic patterns between the obtained series and expansions for similar differential problems. We therefore investigate the heat equation in a bounded domain with a small parameter multiplying the time derivative. In this case the characteristics touch the boundary at a finite number of points. It is known a priori that the solutions are not regular in a neighbourhood of such points. We suppose moreover that the boundary at such points may be non-smooth and have cuspidal singularities. We find a formal asymptotic expansion and show that, when a set of parameters passes through a threshold value, the expansions fail to be asymptotic. The last part of the work is devoted to a general concept of ellipticity with a small parameter. Several theoretical extensions to pseudodifferential operators have already been suggested in previous studies. As a new contribution we employ the analysis on manifolds with edge singularities, which allows us to consider wider classes of perturbed elliptic operators. We show that the introduced classes possess a priori estimates of elliptic type. As a further application we demonstrate how the developed tools can be used to reduce singularly perturbed problems to regular ones.
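As an illustration of the two central ingredients named above (a sketch; norms and constants are indicated only schematically):

```latex
% Formal power-series ansatz in the small parameter:
u(x,\varepsilon) \sim \sum_{k \ge 0} \varepsilon^{k}\, u_k(x)
% Two-sided a priori estimate in parameter-dependent norms,
% with constants c, C independent of \varepsilon:
c\,\| u \|_{s,\varepsilon} \;\le\; \| A(\varepsilon)\, u \|_{s',\varepsilon}
  \;\le\; C\,\| u \|_{s,\varepsilon}
```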
We describe an approach to modeling biological networks by action languages via answer set programming. To this end, we propose an action language for modeling biological networks, building on previous work by Baral et al. We introduce its syntax and semantics along with a translation into answer set programming, an efficient Boolean constraint programming paradigm. Finally, we describe one of its applications, namely the sulfur starvation response pathway of the model plant Arabidopsis thaliana, and sketch the functionality of our system and its usage.
Being born large for gestational age is associated with increased global placental DNA methylation
(2020)
Being born small (SGA) or large for gestational age (LGA) is associated with adverse birth outcomes and metabolic diseases in later life of the offspring. It is known that aberrations in growth during gestation are related to altered placental function. Placental function is regulated by epigenetic mechanisms such as DNA methylation. Several studies in recent years have demonstrated associations between altered patterns of DNA methylation and adverse birth outcomes. However, larger studies that reliably investigate global DNA methylation are lacking. The aim of this study was to characterize global placental DNA methylation in relation to size for gestational age. Global DNA methylation was assessed in 1023 placental samples by LC-MS/MS. LGA offspring displayed significantly higher global placental DNA methylation than appropriate-for-gestational-age (AGA) offspring (p<0.001). ANCOVA analyses adjusted for known factors impacting DNA methylation demonstrated an independent association between global placental DNA methylation and LGA births (p<0.001). Tertile stratification according to global placental DNA methylation levels revealed a significantly higher frequency of LGA births in the third tertile. Furthermore, a multiple logistic regression analysis corrected for known factors influencing birth weight highlighted an independent positive association between global placental DNA methylation and the frequency of LGA births (p=0.001).
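A sketch of this kind of analysis in Python with statsmodels; all column names and the synthetic data are hypothetical stand-ins, not the study's data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 1023                                 # sample size as in the study

# Synthetic stand-in data; column names are hypothetical.
df = pd.DataFrame({
    "global_meth": rng.normal(4.0, 0.5, n),      # global methylation level
    "maternal_age": rng.normal(30, 5, n),
    "maternal_bmi": rng.normal(24, 4, n),
})
# Fake outcome loosely tied to methylation, mimicking the association.
p = 1 / (1 + np.exp(-(df.global_meth - 4.0)))
df["lga"] = rng.binomial(1, 0.1 + 0.1 * p)

# Logistic regression of LGA status on methylation plus covariates.
fit = smf.logit("lga ~ global_meth + maternal_age + maternal_bmi",
                data=df).fit(disp=0)
print(fit.params["global_meth"])

# Tertile stratification of global methylation, as in the study design.
df["tertile"] = pd.qcut(df["global_meth"], 3, labels=["T1", "T2", "T3"])
print(df.groupby("tertile")["lga"].mean())
```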
Using a code that employs a self-consistent method for computing the effects of photoionization on circumstellar gas dynamics, we model the formation of wind-driven nebulae around massive Wolf-Rayet (W-R) stars. Our algorithm incorporates a simplified model of the photoionization source, computes the fractional ionization of hydrogen due to the photoionizing flux and recombination, and determines self-consistently the energy balance due to ionization, photo-heating, and radiative cooling. We take into account changes in stellar properties and mass loss over the star's evolution. Our multi-dimensional simulations clearly reveal the presence of strong ionization front instabilities. Using various X-ray emission models, and abundances consistent with those derived for W-R nebulae, we compute the X-ray flux and spectra from our wind bubble models. We show the evolution of the X-ray spectral features over the evolution of the star, taking the absorption of the X-rays by the ionized bubble into account. Our simulated X-ray spectra compare reasonably well with observed spectra of Wolf-Rayet bubbles. They suggest that X-ray nebulae around massive stars may not be easily detectable, consistent with observations.
In this study, we analyze interactions in lake and lake catchment systems of a continuous permafrost area. We assessed colored dissolved organic matter (CDOM) absorption at 440 nm (aCDOM(440)) and the absorption slope (S300-500) in lakes using field sampling and optical remote sensing data for an area of 350 km² in Central Yamal, Siberia. Applying a CDOM algorithm (ratio of green- and red-band reflectance) to two high-spatial-resolution multispectral GeoEye-1 and WorldView-2 satellite images, we were able to extrapolate the aCDOM(440) data from 18 lakes sampled in the field to 356 lakes in the study area (model R² = 0.79). Values of aCDOM(440) in the 356 lakes varied from 0.48 to 8.35 m⁻¹ with a median of 1.43 m⁻¹. This aCDOM(440) dataset was used to relate lake CDOM to 17 lake and lake catchment parameters derived from optical and radar remote sensing data and from digital elevation model analysis, in order to establish the parameters controlling CDOM in lakes on the Yamal Peninsula. Regression tree and boosted regression tree analyses showed that the activity of cryogenic processes (thermocirques) on the lake shores and the lake water level were the two most important controls, explaining 48.4% and 28.4% of lake CDOM, respectively (R² = 0.61). Activation of thermocirques led to a large input of terrestrial organic matter and sediments from catchments and thawed permafrost into lakes (n = 15, mean aCDOM(440) = 5.3 m⁻¹). Large lakes on the floodplain with a connection to the Mordy-Yakha River received more CDOM (n = 7, mean aCDOM(440) = 3.8 m⁻¹) than lakes located on higher terraces.
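A minimal sketch of the band-ratio extrapolation step in Python (synthetic numbers; the study's actual calibration uses the 18 field lakes and the GeoEye-1/WorldView-2 reflectances):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for the 18 field-sampled lakes: per-lake mean
# reflectance in the green and red bands and measured aCDOM(440).
green = rng.uniform(0.02, 0.10, 18)
red = rng.uniform(0.01, 0.08, 18)
a_cdom = 2.0 * (green / red) + rng.normal(0, 0.3, 18)   # fake truth

# Calibrate the band-ratio model on the field lakes ...
ratio = (green / red).reshape(-1, 1)
model = LinearRegression().fit(ratio, a_cdom)
print("R^2 on calibration lakes:", model.score(ratio, a_cdom))

# ... then extrapolate to all lakes extracted from the satellite images.
ratio_all = rng.uniform(0.5, 4.0, 356).reshape(-1, 1)
a_cdom_all = model.predict(ratio_all)
```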
This study analyzes the influence of local and regional climatic factors on the stable isotopic composition of rainfall in the Vietnamese Mekong Delta (VMD), part of the Asian monsoon region. It is based on 1.5 years of weekly rainfall samples. In the first step, the isotopic composition of the samples is analyzed by local meteoric water lines (LMWLs) and single-factor linear correlations. Additionally, the contribution of several regional and local factors is quantified by multiple linear regression (MLR) of all possible factor combinations and by relative importance analysis. This approach is novel for the interpretation of isotopic records and enables an objective quantification of the variance in isotopic records explained by individual factors. In this study, the local factors are extracted from local climate records, while the regional factors are derived from atmospheric backward trajectories of water particles. The regional factors, i.e., precipitation, temperature, relative humidity and the length of backward trajectories, are combined with equivalent local climatic parameters to explain the response variables δ18O, δ2H, and d-excess of precipitation at the station of measurement.
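A sketch of the all-combinations MLR step in Python/scikit-learn, with hypothetical factor names and synthetic data (the relative importance analysis itself is not reproduced here):

```python
import itertools
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Weekly samples; column names are hypothetical. 'd18O' is the response,
# the rest are local and trajectory-derived regional factors.
factors = ["local_precip", "local_temp", "local_rh",
           "traj_precip", "traj_temp", "traj_rh", "traj_length"]
df = pd.DataFrame(np.random.default_rng(1).normal(size=(78, 8)),
                  columns=factors + ["d18O"])    # synthetic stand-in

# Fit an MLR for every non-empty factor combination and record R^2.
results = {}
for k in range(1, len(factors) + 1):
    for combo in itertools.combinations(factors, k):
        X, y = df[list(combo)], df["d18O"]
        results[combo] = LinearRegression().fit(X, y).score(X, y)

best = max(results, key=results.get)
print(best, round(results[best], 2))
```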
The results indicate that (i) MLR can explain the isotopic variation in precipitation better (R² = 0.8) than single-factor linear regression (R² = 0.3); (ii) the isotopic variation in precipitation is controlled predominantly by regional moisture regimes (~70%) rather than local climatic conditions (~30%); (iii) the most important climatic parameter during the rainy season is the precipitation amount along the trajectories of air mass movement; (iv) the influence of local precipitation amount and temperature is not significant during the rainy season, unlike the regional precipitation amount effect; (v) secondary fractionation processes (e.g., sub-cloud evaporation) can be identified through the d-excess and take place mainly in the dry season, either locally for δ18O and δ2H, or along the air mass trajectories for d-excess. The analysis shows that regional and local factors vary in importance over the seasons and that the source regions and transport pathways, and particularly the climatic conditions along the pathways, have a large influence on the isotopic composition of rainfall. Although the general results have been reported qualitatively in previous studies (proving the validity of the approach), the proposed method provides quantitative estimates of the controlling factors, both for the whole data set and for distinct seasons. It is therefore argued that the approach constitutes an advancement in the statistical analysis of isotopic records in rainfall that can supplement or precede more complex studies utilizing atmospheric models. Due to its relative simplicity, the method can easily be transferred to other regions or extended with other factors.
The results illustrate that the interpretation of the isotopic composition of precipitation as a recorder of local climatic conditions, as performed for example for paleorecords of water isotopes, may not be adequate in the southern part of the Indochinese Peninsula, and likely not in other regions affected by monsoon processes either. However, the presented approach could open a pathway towards a better and seasonally differentiated reconstruction of paleoclimates based on isotopic records.
Understanding hydrological processes is of fundamental importance for the Vietnamese national food security and the livelihood of the population in the Vietnamese Mekong Delta (VMD). As a consequence of sparse data in this region, however, hydrologic processes, such as the controlling processes of precipitation, the interaction between surface and groundwater, and groundwater dynamics, have not been thoroughly studied. The lack of this knowledge may negatively impact the long-term strategic planning for sustainable groundwater resources management and may result in insufficient groundwater recharge and freshwater scarcity. It is essential to develop useful methods for a better understanding of hydrological processes in such data-sparse regions. The goal of this dissertation is to advance methodologies that can improve the understanding of fundamental hydrological processes in the VMD, based on the analyses of stable water isotopes and monitoring data. The thesis mainly focuses on the controlling processes of precipitation, the mechanism of surface–groundwater interaction, and the groundwater dynamics. These processes have not been fully addressed in the VMD so far. The thesis is based on statistical analyses of the isotopic data of Global Network of Isotopes in Precipitation (GNIP), of meteorological and hydrological data from Vietnamese agencies, and of the stable water isotopes and monitoring data collected as part of this work.
First, the controlling processes of precipitation were quantified by the combination of trajectory analysis, multi-factor linear regression, and relative importance analysis (hereafter, a model-based statistical approach). The validity of this approach is confirmed by similar, but mainly qualitative, results obtained in other studies. The total variation in precipitation isotopes (δ18O and δ2H) can be better explained by multiple linear regression (up to 80%) than by single-factor linear regression (30%). The relative importance analysis indicates that atmospheric moisture regimes, rather than local climatic conditions, control precipitation isotopes. The most crucial factor is the upstream rainfall along the trajectories of air mass movement. However, the influences of regional and local climatic factors vary in importance over the seasons. The developed model-based statistical approach is a robust tool for the interpretation of precipitation isotopes and could also be applied to understand the controlling processes of precipitation in other regions.
Second, the concept of the two-component lumped-parameter model (LPM), in conjunction with stable water isotopes, was applied to examine the surface-groundwater interaction in the VMD. A calibration framework was also set up to evaluate the behaviour, parameter identifiability, and uncertainties of two-component LPMs. The modelling results provided insights into the subsurface flow conditions, the recharge contributions, and the spatial variation of groundwater transit time. The subsurface flow conditions at the study site are best represented by the linear-piston flow distribution. The contributions of the recharge sources change with distance to the river. The mean transit time (mTT) of riverbank infiltration increases with the length of the horizontal flow path and with the decreasing gradient between river and groundwater. River water infiltrates horizontally, mainly via the highly permeable aquifer, resulting in short mTTs (<40 weeks) for locations close to the river (<200 m). Vertical infiltration from precipitation takes place primarily via a low-permeability overlying aquitard, resulting in considerably longer mTTs (>80 weeks). Notably, the transit time of precipitation infiltration is independent of the distance to the river. All these results are hydrologically plausible and could be quantified by the presented method for the first time. This study indicates that the highly complex mechanism of surface-groundwater interaction at riverbank infiltration systems can be conceptualized by exploiting two-component LPMs. It illustrates that the model concept can be used as a tool to investigate the hydrological functioning of mixing processes and the flow paths of multiple water components in riverbank infiltration systems.
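As an illustration of the lumped-parameter idea, a sketch in Python: the groundwater signal is modeled as a weighted mixture of two input signals, each convolved with a transit time distribution. The exponential-piston-flow distribution is used here as a common textbook stand-in (the study itself found a linear-piston-flow distribution to fit best), and all numbers are illustrative:

```python
import numpy as np

t = np.arange(0, 200)    # weekly time axis

def exponential_piston_ttd(t, mtt, eta):
    """Exponential-piston-flow transit time distribution with mean
    transit time mtt (weeks) and piston-flow parameter eta >= 1."""
    g = np.zeros(len(t))
    mask = t >= mtt * (1 - 1 / eta)
    g[mask] = (eta / mtt) * np.exp(-eta * t[mask] / mtt + eta - 1)
    return g

def convolve_input(c_in, g, dt=1.0):
    """Causal convolution of an input isotope signal with a TTD."""
    return np.convolve(c_in, g)[: len(c_in)] * dt

# Two recharge components: riverbank infiltration and precipitation.
c_river = -4.0 + 0.5 * np.sin(2 * np.pi * t / 52)    # synthetic river d18O
c_precip = -7.0 + 2.0 * np.sin(2 * np.pi * t / 52)   # synthetic rain d18O
f_river = 0.7                                        # river mixing fraction

c_gw = (f_river * convolve_input(c_river,
                                 exponential_piston_ttd(t, mtt=30, eta=1.5))
        + (1 - f_river) * convolve_input(c_precip,
                                         exponential_piston_ttd(t, mtt=90,
                                                                eta=1.5)))
```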
Lastly, a suite of time series analysis approaches was applied to examine the groundwater dynamics in the VMD. The assessment focused on the time-variant trends of groundwater levels (GWLs), the groundwater memory effect (representing the time that an aquifer holds water), and the hydraulic response between surface water and the multi-layer alluvial aquifers. The analysis indicates that the aquifers act as low-pass filters that reduce the high-frequency signals in the GWL variations and limit recharge to the deep groundwater. Groundwater abstraction exceeded groundwater recharge between 1997 and 2017, leading to a decline of groundwater levels (0.01-0.55 m/year) in all considered aquifers in the VMD. The memory effect varies with geographical location, being shorter in shallow aquifers and flood-prone areas and longer in deep aquifers and coastal regions. Groundwater depth, season, and location primarily control the variation of the response time between the river and the alluvial aquifers. These findings are important contributions to the hydrogeological literature on a little-known groundwater system in an alluvial setting. It is suggested that time series analysis can be used as an efficient tool to understand groundwater systems where resources are insufficient to develop a physically based groundwater model.
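A minimal sketch of one such time series diagnostic, estimating the river-to-aquifer response lag from the cross-correlation of the two series (synthetic data; the memory effect could be probed analogously via the autocorrelation function):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic daily series: groundwater level as a lagged, noisy copy of
# river stage; the true lag is 20 days by construction.
n, true_lag = 1000, 20
river = np.cumsum(rng.normal(size=n))
gwl = np.roll(river, true_lag) + rng.normal(scale=0.5, size=n)

def response_lag(x, y, max_lag=60):
    """Lag (in samples) maximizing the cross-correlation of anomalies,
    a simple proxy for the river-to-aquifer response time."""
    x, y = x - x.mean(), y - y.mean()
    m = len(x)
    lags = np.arange(max_lag + 1)
    cc = [np.corrcoef(x[: m - L], y[L:])[0, 1] for L in lags]
    return int(lags[int(np.argmax(cc))])

print(response_lag(river, gwl))   # ~20
```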
This doctoral thesis demonstrates that important aspects of hydrological processes can be understood by statistical analysis of stable water isotope and monitoring data. The approaches developed in this thesis can be easily transferred to regions in similar tropical environments, particularly those in alluvial settings. The results of the thesis can be used as a baseline for future isotope-based studies and contribute to the hydrogeological literature of little-known groundwater systems in the VMD.
Building biological models by inferring functional dependencies from experimental data is an important issue in Molecular Biology. To relieve the biologist from this traditionally manual process, various approaches have been proposed to increase the degree of automation. However, available approaches often yield a single model only, rely on specific assumptions, and/or use dedicated, heuristic algorithms that are intolerant to changing circumstances or requirements in the view of the rapid progress made in Biotechnology. Our aim is to provide a declarative solution to the problem by appeal to Answer Set Programming (ASP) overcoming these difficulties. We build upon an existing approach to Automatic Network Reconstruction proposed by part of the authors. This approach has firm mathematical foundations and is well suited for ASP due to its combinatorial flavor providing a characterization of all models explaining a set of experiments. The usage of ASP has several benefits over the existing heuristic algorithms. First, it is declarative and thus transparent for biological experts. Second, it is elaboration tolerant and thus allows for an easy exploration and incorporation of biological constraints. Third, it allows for exploring the entire space of possible models. Finally, our approach offers an excellent performance, matching existing, special-purpose systems.
"Es, sin duda, un ecuatoriano ilustre, no obstante su origen alemán y su naturaleza universal." Thus the verdict of an Ecuadorian academic on Alexander von Humboldt, a statement that can only be explained if the reception of the German polymath is examined critically in the context of the Ecuadorian process of nation building. The article takes up this approach and examines the direct and indirect influence of the naturalist on the discursive process in which the national space was treated scientifically, visually, and in literature. This is illustrated by the geographical work of Manuel Villavicencio (1804-1871) and by the work of the painter Juan Augustín Guerrero (1818-1886). Finally, the analysis also considers how the image of Humboldt could be used to support some central aspects of the "official" national identity. This can be shown in the image of Humboldt constructed in the period between the commemorative celebrations of 1959 and 1969.
The desiccation-tolerant plant Haberlea rhodopensis can withstand months of darkness without any visible senescence. Here, we investigated the molecular mechanisms of this adaptation to prolonged (30 d) darkness and subsequent return to light. H. rhodopensis plants remained green and viable throughout the dark treatment. Transcriptomic analysis revealed that darkness regulated several transcription factor (TF) genes. Stress- and autophagy-related TFs such as ERF8, HSFA2b, RD26, TGA1, and WRKY33 were up-regulated, while chloroplast- and flowering-related TFs such as ATH1, COL2, COL4, RL1, and PTAC7 were repressed. PHYTOCHROME INTERACTING FACTOR4, a negative regulator of photomorphogenesis and promoter of senescence, was also down-regulated. In response to darkness, most of the photosynthesis- and photorespiration-related genes were strongly down-regulated, while genes related to autophagy were up-regulated. This occurred concomitantly with the induction of SUCROSE NON-FERMENTING1-RELATED PROTEIN KINASES (SnRK1) signaling pathway genes, which regulate responses to stress-induced starvation and autophagy. Most of the genes associated with chlorophyll catabolism, which are induced by darkness in dark-senescing species, were either unregulated (PHEOPHORBIDE A OXYGENASE, PAO; RED CHLOROPHYLL CATABOLITE REDUCTASE, RCCR) or repressed (STAY GREEN-LIKE, PHEOPHYTINASE, and NON-YELLOW COLORING1). Metabolite profiling revealed increases in the levels of many amino acids in darkness, suggesting increased protein degradation. In darkness, levels of the chloroplastic lipids digalactosyldiacylglycerol, monogalactosyldiacylglycerol, phosphatidylglycerol, and sulfoquinovosyldiacylglycerol decreased, while those of storage triacylglycerols increased, suggesting degradation of chloroplast membrane lipids and their conversion to triacylglycerols for use as energy and carbon sources. Collectively, these data show a coordinated response to darkness, including repression of photosynthetic, photorespiratory, flowering, and chlorophyll catabolic genes, induction of autophagy and SnRK1 pathways, and metabolic reconfigurations that enable survival under prolonged darkness.
Background: Phosphorylation of proteins plays a crucial role in the regulation and activation of metabolic and signaling pathways and constitutes an important target for pharmaceutical intervention. Central to the phosphorylation process is the recognition of specific target sites by protein kinases, followed by the covalent attachment of phosphate groups to the amino acids serine, threonine, or tyrosine. The experimental identification as well as the computational prediction of phosphorylation sites (P-sites) has proved to be a challenging problem. Computational methods have focused primarily on extracting predictive features from the local, one-dimensional sequence information surrounding phosphorylation sites. Results: We characterized the spatial context of phosphorylation sites and assessed its usability for improved phosphorylation site predictions. We identified 750 non-redundant, experimentally verified sites with three-dimensional (3D) structural information available in the Protein Data Bank (PDB) and grouped them according to their respective kinase family. We studied the spatial distribution of amino acids around phosphoserines, phosphothreonines, and phosphotyrosines to extract signature 3D profiles. Characteristic spatial distributions of amino acid residue types around phosphorylation sites were indeed discernible, especially when kinase-family-specific target sites were analyzed. To test the added value of using spatial information for the computational prediction of phosphorylation sites, Support Vector Machines were applied using both sequence and structural information. When compared to sequence-only prediction methods, a small but consistent performance improvement was obtained when the prediction was informed by 3D-context information. Conclusion: While local one-dimensional amino acid sequence information was observed to harbor most of the discriminatory power, spatial context information was identified as relevant for the recognition of kinases and their cognate target sites and can be used for an improved prediction of phosphorylation sites. A web-based service (Phos3D) implementing the developed structure-based P-site prediction method has been made available at http://phos3d.mpimp-golm.mpg.de.
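A sketch of the comparison described in the Results, using scikit-learn SVMs on synthetic stand-in features (window composition for the sequence part and 3D-shell residue counts for the spatial part are assumed encodings, not the paper's exact ones):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)

# Synthetic stand-ins: 750 candidate sites, each described by local
# sequence features (residue composition of a +/-7 window) plus
# spatial-context features (residue counts in 3D shells around the site).
X_seq = rng.normal(size=(750, 40))
X_3d = rng.normal(size=(750, 20))
y = rng.integers(0, 2, 750)               # 1 = verified P-site

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

for name, X in [("sequence only", X_seq),
                ("sequence + 3D context", np.hstack([X_seq, X_3d]))]:
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(name, round(score, 3))
```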
The study of biological interaction networks is a central theme in systems biology. Here, we investigate common as well as differentiating principles of molecular interaction networks associated with different levels of molecular organization. They include metabolic pathway maps, protein-protein interaction networks, and kinase interaction networks. First, we present an integrated analysis of metabolic pathway maps and protein-protein interaction networks (PINs). It has long been established that successive enzymatic steps are often catalyzed by physically interacting proteins forming permanent or transient multi-enzyme complexes. Inspecting high-throughput PIN data, it has been shown recently that, indeed, enzymes involved in successive reactions are generally more likely to interact than other protein pairs. In this study, we expanded this line of research to include comparisons of the respective underlying network topologies and to investigate whether the spatial organization of enzyme interactions correlates with metabolic efficiency. Analyzing yeast data, we detected long-range correlations between shortest paths between proteins in both network types, suggesting a mutual correspondence of both network architectures. We discovered that the organizing principles of physical interactions between metabolic enzymes differ from those of the general PIN of all proteins. While physical interactions between proteins are generally disassortative, enzyme interactions were observed to be assortative. Thus, enzymes frequently interact with other enzymes of similar rather than different degree. Enzymes carrying high flux loads are more likely to physically interact than enzymes with lower metabolic throughput. In particular, enzymes associated with catabolic pathways as well as enzymes involved in the biosynthesis of complex molecules were found to exhibit high degrees of physical clustering. Single proteins were identified that connect major components of the cellular metabolism and hence might be essential for the structural integrity of several biosynthetic systems. Besides metabolic aspects of PINs, we investigated the characteristic topological properties of protein interactions involved in signaling and regulatory functions mediated by kinase interactions. Characteristic topological differences between PINs associated with metabolism and those describing phosphorylation networks were revealed and shown to reflect the different modes of biological operation of both network types. The construction of phosphorylation networks is based on the identification of specific kinase-target relations, including the determination of the actual phosphorylation sites (P-sites). The computational prediction of P-sites as well as the identification of the involved kinases still suffers from insufficient accuracy and specificity of the underlying prediction algorithms, and their experimental identification in a genome-scale manner is not (yet) feasible. Computational prediction methods have focused primarily on extracting predictive features from the local, one-dimensional sequence information surrounding P-sites. However, the recognition of such motifs by the respective kinases is a spatial event. Therefore, we characterized the spatial distributions of amino acid residue types around P-sites and extracted signature 3D profiles. We then tested the added value of spatial information on the prediction performance. When compared to sequence-only predictors, a consistent performance gain was obtained.
The availability of reliable training data of experimentally determined P-sites is critical for the development of computational prediction methods. As part of this thesis, we provide an assessment of false-positive rates of phosphoproteomic data.
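To illustrate the assortativity analysis mentioned above, a minimal sketch with networkx on a toy graph (the real analysis runs on curated yeast interaction data):

```python
import networkx as nx

# Toy protein-interaction graph; node/edge data are illustrative only.
pin = nx.erdos_renyi_graph(500, 0.02, seed=4)

# Degree assortativity: positive values mean nodes tend to attach to
# nodes of similar degree (as reported for enzyme-enzyme interactions),
# negative values indicate disassortative mixing (as for the global PIN).
print(nx.degree_assortativity_coefficient(pin))

# A shortest-path comparison between two network representations would
# use, e.g., nx.shortest_path_length(pin, source, target).
```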
Living cells rely on transport and interaction of biomolecules to perform their diverse functions. A powerful toolbox to study these highly dynamic processes in the native environment is provided by fluorescence fluctuation spectroscopy (FFS) techniques. In more detail, FFS takes advantage of the inherent dynamics present in biological systems, such as diffusion, to infer molecular parameters from fluctuations of the signal emitted by an ensemble of fluorescently tagged molecules. In particular, two parameters are accessible: the concentration of molecules and their transit times through the observation volume. In addition, molecular interactions can be measured by analyzing the average signal emitted per molecule - the molecular brightness - and the cross-correlation of signals detected from differently tagged species.
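A minimal sketch of the core FFS computation, the normalized fluctuation autocorrelation function, in Python/NumPy. The trace here is synthetic shot noise and decorrelates immediately; for a real measurement, the amplitude of G at short lags scales inversely with the mean number of molecules in the focus, and the decay time reflects the transit time:

```python
import numpy as np

def fcs_autocorrelation(intensity, max_lag):
    """Normalized fluorescence autocorrelation
    G(tau) = <dF(t) dF(t+tau)> / <F>^2 for lags 1..max_lag."""
    f = np.asarray(intensity, dtype=float)
    df = f - f.mean()
    n = len(f)
    return np.array([np.mean(df[: n - lag] * df[lag:])
                     for lag in range(1, max_lag + 1)]) / f.mean() ** 2

rng = np.random.default_rng(5)
trace = rng.poisson(100, size=100_000).astype(float)   # synthetic counts
g = fcs_autocorrelation(trace, max_lag=50)
print(g[:5])
```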
In the present work, several FFS techniques were implemented and applied in different biological contexts. In particular, scanning fluorescence correlation spectroscopy (sFCS) was performed to measure protein dynamics and interactions at the plasma membrane (PM) of cells, and number and brightness (N&B) analysis was used to spatially map molecular aggregation. To account for technical limitations and sample-related artifacts, e.g. detector noise, photobleaching, or background signal, several correction schemes were explored. In addition, sFCS was combined with spectral detection and higher-moment analysis of the photon count distribution to resolve multiple species at the PM.
Using scanning fluorescence cross-correlation spectroscopy and cross-correlation N&B, the interactions of amyloid precursor-like protein 1 (APLP1), a synaptic membrane protein, were investigated. It is shown for the first time directly in living cells that APLP1 undergoes specific interactions at cell-cell contacts. It is further demonstrated that zinc ions induce the formation of large APLP1 clusters that enrich at contact sites and bind to clusters on the opposing cell. Altogether, these results provide direct evidence that APLP1 is a zinc-ion-dependent neuronal adhesion protein.
In the context of APLP1, discrepancies of oligomeric state estimates were observed, which were attributed to non-fluorescent states of the chosen red fluorescent protein (FP) tag mCardinal (mCard). Therefore, multiple FPs and their performance in FFS based measurements of protein interactions were systematically evaluated. The study revealed superior properties of monomeric enhanced green fluorescent protein (mEGFP) and mCherry2. Furthermore, a simple correction scheme allowed unbiased in situ measurements of protein oligomerization by quantifying non-fluorescent state fractions of FP tags. The procedure was experimentally confirmed for biologically relevant protein complexes consisting of up to 12 monomers.
In the last part of this work, fluorescence correlation spectroscopy (FCS) and single particle tracking (SPT) were used to characterize diffusive transport dynamics in a bacterial biofilm model. Biofilms are surface-adherent bacterial communities whose structural organization is provided by extracellular polymeric substances (EPS) that form a viscous polymer hydrogel. The presented study revealed a probe-size- and polymer-concentration-dependent (anomalous) hindrance of diffusion in a reconstituted EPS matrix system, caused by polymer chain entanglement at physiological concentrations. This result indicates a meshwork-like organization of the biofilm matrix that allows free diffusion of small particles but strongly hinders diffusion of larger particles such as bacteriophages. Finally, it is shown that depolymerization of the matrix by phage-derived enzymes rapidly restores free diffusion. In the context of phage infections, such enzymes may provide a key to evading entrapment in the biofilm matrix and promote efficient infection of bacteria. In combination with phage application, matrix-depolymerizing enzymes may open up novel antimicrobial strategies against multiresistant bacterial strains as a promising, more specific alternative to conventional antibiotics.
This book deals with the inner life of the capitalist firm. There we find numerous conflicts, the most important of which concerns the individual employment relationship, understood as a principal-agent problem between the manager, the principal, who issues orders, and the employee, the agent, who is to follow them. Whereas economic theory traditionally analyses this relationship from the (normative) perspective of the firm, in order to support the manager in finding ways to influence the behavior of the employees such that the latter, ideally, act on behalf of their superior, this book takes a neutral stance. It focuses on explaining individual behavioral patterns and the resulting interactions between the actors in the firm by taking sociological, institutional, and, above all, psychological research into consideration. In doing so, insights are gained which challenge many assertions economists take for granted.
Biological materials, in addition to having remarkable physical properties, can also change shape and volume. These shape and volume changes allow organisms to form new tissue during growth and morphogenesis, as well as to repair and remodel old tissues. In addition, shape or volume changes in an existing tissue can lead to useful motion or force generation (actuation) that may still function even in the dead organism, as in the well-known example of the hygroscopic opening and closing behaviour of the pine cone. Both growth and actuation of tissues are mediated, in addition to biochemical factors, by the physical constraints of the surrounding environment and the architecture of the underlying tissue. This habilitation thesis describes biophysical studies carried out over the past years on growth- and swelling-mediated shape changes in biological systems. These studies use a combination of theoretical and experimental tools to elucidate the physical mechanisms governing geometry-controlled tissue growth and geometry-constrained tissue swelling. It is hoped that, in addition to helping understand fundamental processes of growth and morphogenesis, ideas stemming from such studies can also be used to design new materials for medicine and robotics.
Präventiver Kinderschutz
(2014)
A Conjunction of Mysteries
(2016)
The nutrient exchange between plant and fungus is the key element of the arbuscular mycorrhizal (AM) symbiosis. The fungus improves the plant's uptake of mineral nutrients, mainly phosphate, and water, while the plant provides the fungus with photosynthetically assimilated carbohydrates. Still, knowledge about the mechanisms of the nutrient exchange between the symbiotic partners is very limited. Therefore, transport processes of both the plant and the fungal partner are investigated in this study. In order to enhance the understanding of the molecular basis underlying this tight interaction between the roots of Medicago truncatula and the AM fungus Rhizophagus irregularis, genes involved in transport processes of both symbiotic partners are analysed. The AM-specific regulation and cell-specific expression of potential transporter genes of M. truncatula, found to be specifically regulated in arbuscule-containing cells and in non-arbusculated cells of mycorrhizal roots, was confirmed. A model for carbon allocation in mycorrhizal roots is suggested, in which carbohydrates are mobilized in non-arbusculated cells and provided symplastically to the arbuscule-containing cells. New insights into the mechanisms of carbohydrate allocation were gained by the analysis of the hexose/H+ symporter MtHxt1, which is regulated in distinct cells of mycorrhizal roots. Metabolite profiling of leaves and roots of a knock-out mutant, hxt1, showed that it indeed has an impact on the carbohydrate balance throughout the whole plant in the course of the symbiosis, and on the interaction with the fungal partner. The primary metabolite profile of M. truncatula was shown to be altered significantly in response to mycorrhizal colonization. Additionally, molecular mechanisms determining the progress of the interaction in the fungal partner of the AM symbiosis were investigated. The R. irregularis transcriptome in planta and in extraradical tissues gave new insights into genes that are differentially expressed in these two fungal tissues. Over 3200 fungal transcripts with a significantly altered expression level in laser-capture-microdissection-collected arbuscules compared to extraradical tissues were identified. Among them, six previously unknown, specifically regulated potential transporter genes were found. These are likely to play a role in the nutrient exchange between plant and fungus. While the substrates of three potential MFS transporters are as yet unknown, two potential sugar transporters might play a role in the carbohydrate flow towards the fungal partner. In summary, this study provides new insights into transport processes between plant and fungus in the course of the AM symbiosis, analysing M. truncatula on the transcript and metabolite level, and supplies a dataset of the R. irregularis transcriptome in planta, offering a wealth of new information for future work.
The Green formula is proved for boundary value problems (BVPs) in which the "basic" operator is an arbitrary partial differential operator with variable matrix coefficients and the "boundary" operators are quasi-normal with vector coefficients. If the system possesses a fundamental solution, a representation formula for the solution is derived, and boundedness properties of the participating layer potentials, from function spaces on the boundary (Besov, Zygmund spaces) into appropriate weighted function spaces on the inner and outer domains, are established. Some related problems are discussed in conclusion: traces of functions from weighted spaces, traces of potential-type functions, Plemelj formulae, Calderón projections, and restricted smoothness of the underlying surface and coefficients. The results have essential applications in investigations of BVPs by the potential method, in a priori estimates, and in asymptotics of solutions.
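As a point of orientation, the classical second Green identity for the Laplacian, of which the formula treated here is a far-reaching generalization:

```latex
\int_\Omega \big( v\,\Delta u - u\,\Delta v \big)\,dx
  = \int_{\partial\Omega}
    \Big( v\,\frac{\partial u}{\partial \nu}
        - u\,\frac{\partial v}{\partial \nu} \Big)\,ds
% \nu denotes the outward unit normal on \partial\Omega.
```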
Reviewed work: The literature of the sages : second part; Midrash and Targum; Liturgy, poetry, mysticism, contracts, inscriptions, ancient science and the languages of rabbinic literature / Shmuel Safrai ... [ed.]. - Assen : Royal Van Gorcum/Fortress Press, 2006. - XVII, 722 pp. - (= Compendia rerum Iudaicarum ad Novum Testamentum, Section 2: 3/2)
Background
Mortality is a major driver of zooplankton population biology, but it is poorly constrained in models that describe zooplankton population dynamics, food web interactions and nutrient dynamics. Mortality due to non-predation factors is often ignored even though anecdotal evidence of non-predation mass mortality of zooplankton has been reported repeatedly. One way to estimate the non-predation mortality rate is to measure the removal rate of carcasses, for which sinking is the primary removal mechanism, especially in quiescent shallow water bodies.
Objectives and Results
We used sediment traps to quantify in situ carcass sinking velocity and the non-predation mortality rate on eight consecutive days in 2013 for the cladoceran Bosmina longirostris in the oligo-mesotrophic Lake Stechlin; the outcomes were compared against estimates derived from in vitro carcass sinking velocity measurements and from an empirical model correcting in vitro sinking velocity for turbulent resuspension and microbial decomposition of carcasses. Our results show that the latter two approaches produced unrealistically high mortality rates of 0.58-1.04 d⁻¹, whereas the sediment trap approach, when used properly, yielded a mortality rate estimate of 0.015 d⁻¹, which is more consistent with concurrent population abundance data and comparable to physiological death rates reported in the literature.
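A minimal sketch of the sediment-trap calculation follows, under the simplifying assumption that carcasses leave the water column only by sinking, so the daily carcass flux into a trap divided by the live standing stock above it approximates the non-predation mortality rate; all input values are hypothetical and merely chosen to reproduce the order of magnitude reported above:

```python
# Sediment-trap estimate of the non-predation mortality rate (per day).
# Assumption: sinking is the only carcass removal pathway, so
#   mortality rate ≈ carcass flux / live standing stock.

carcasses_per_trap_per_day = 120.0   # carcasses counted in trap (hypothetical)
trap_area_m2 = 0.05                  # trap opening area (hypothetical)
live_abundance_per_m2 = 160_000.0    # depth-integrated live Bosmina (hypothetical)

carcass_flux = carcasses_per_trap_per_day / trap_area_m2   # carcasses m^-2 d^-1
mortality_rate = carcass_flux / live_abundance_per_m2      # d^-1
print(f"non-predation mortality rate ≈ {mortality_rate:.3f} d^-1")  # 0.015 d^-1
```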
Ecological implications
Zooplankton carcasses may be exposed to water column microbes for days before entering the benthos; therefore, non-predation mortality affects not only zooplankton population dynamics but also microbial and benthic food webs. This would be particularly important for carbon and nitrogen cycles in systems where recurring mid-summer declines of zooplankton populations due to non-predation mortality are observed.
This thesis rests on two large Active Galactic Nuclei (AGN) surveys. The first survey deals with galaxies that host low-level AGNs (LLAGNs) and aims at identifying such galaxies by quantifying their variability. While numerous studies have shown that AGNs can be variable at all wavelengths, the nature of this variability is still not well understood. Studying the properties of LLAGNs may help to better understand galaxy evolution and how AGNs transition between active and inactive states. In this thesis, we develop a method to extract the variability properties of AGNs. Using multi-epoch deep photometric observations, we subtract the contribution of the host galaxy at each epoch to extract the variability and estimate AGN accretion rates. This pipeline will be a powerful tool in connection with future deep surveys such as Pan-STARRS. The second study in this thesis describes a survey of X-ray selected AGN hosts at redshifts z > 1.5 and compares them to quiescent galaxies. This survey aims at studying the environments, sizes and morphologies of star-forming high-redshift AGN hosts in the COSMOS Survey at the epoch of peak AGN activity. Between redshifts 1.5 < z < 3.8, the COSMOS HST/ACS imaging probes the UV regime, where separating the AGN flux from its host galaxy is very challenging. Nevertheless, we successfully derived the structural properties of 249 AGN hosts using two-dimensional surface-brightness profile fitting with the GALFIT package; this is the largest sample of AGN hosts at redshift z > 1.5 to date. We analyzed the evolution of the structural parameters of AGN and non-AGN host galaxies with redshift, and compared their disturbance rates to identify the more probable AGN triggering mechanism in the luminosity range 43.5 < log_10 L_X < 45. We also conducted mock observations of AGN hosts and quiescent galaxies to determine errors and corrections for the derived parameters. We find that the size-absolute magnitude relations of AGN hosts and non-AGN galaxies are very similar, with estimated mean sizes in both samples decreasing by ~50% between redshifts z = 1.5 and z = 3.5. Morphological classification of both active and quiescent galaxies shows that the majority of AGN host galaxies are disc-dominated, with disturbance rates significantly lower than among the non-AGN galaxies. This finding suggests that major mergers are probably not responsible for triggering AGN accretion in most of these galaxies; secular mechanisms are more likely responsible.
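As an illustration of the host-subtraction idea behind the variability pipeline, here is a minimal sketch assuming aligned, flux-calibrated epoch images and a static host-galaxy model image; all names and the toy data are hypothetical, and the actual pipeline is considerably more involved:

```python
import numpy as np

def agn_lightcurve(epoch_images, host_model, aperture_mask):
    """Subtract a static host-galaxy model from each epoch image and sum the
    residual flux in a nuclear aperture; the scatter of these residuals
    traces the AGN variability."""
    fluxes = []
    for img in epoch_images:
        residual = img - host_model          # remove the host contribution
        fluxes.append(residual[aperture_mask].sum())
    return np.array(fluxes)

# Toy usage with synthetic 64x64 images (hypothetical data):
rng = np.random.default_rng(0)
host = rng.random((64, 64))
epochs = [host + rng.normal(0, 0.01, host.shape) for _ in range(5)]
mask = np.zeros_like(host, dtype=bool)
mask[28:36, 28:36] = True                    # central nuclear aperture
print(agn_lightcurve(epochs, host, mask))
```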