In recent years, the didactics of political education has increasingly engaged with the use of narratives in civics teaching, since fiction, alongside non-fiction texts, offers an opportunity to engage with political topics. The literature of Ferdinand von Schirach in particular has found growing resonance in society in recent years. Von Schirach's texts take up socially critical topics, examine them from different perspectives and challenge readers to form their own opinions. For this reason, von Schirach's narratives hold great potential for political education, which also encompasses legal education. Ferdinand von Schirach's Der Fall Collini addresses both legal and political topics in the sense of legal education. This Master's thesis investigates to what extent the novel Der Fall Collini by Ferdinand von Schirach, as a narrative, offers an opportunity for political-legal learning in civics teaching. To answer this research question, the learning opportunities and limits of the novel are worked out with regard to its subject matter, its genre and the competencies it fosters, and the interdisciplinary links it makes possible are outlined. By engaging with von Schirach's work, students explore political-legal topics such as the tension between law and justice, the course of criminal proceedings, and the theoretical claims of the constitutional state and its real-world weaknesses. Engaging with the novel Der Fall Collini also fosters the four subject-specific competencies of political education, as well as multiperspectivity and exemplary learning.
Furthermore, the novel links historical, political-legal and moral-ethical aspects, enabling interdisciplinary connections to the subjects history, German and L-E-R. Moreover, as a narrative, the courtroom novel also appeals to its readers emotionally and thus promotes holistic and lasting knowledge transfer in the sense of legal education. It has been shown that Der Fall Collini by Ferdinand von Schirach is particularly well suited for classroom use in political education.
(Auf) Humboldts Spuren
(2021)
Before his ascent of Antisana in Ecuador, Alexander von Humboldt and his expedition team spent the night of 15–16 March 1802 in a hacienda at the foot of the volcano, whose last surviving structure is a stone hut. Building-archaeological investigations by an international research team were able to reconstruct the multi-layered construction and repair history of this monument and, through an evaluation of the travel accounts of several Andean explorers, to clarify the history of use of the individual building and of the entire estate. This, in turn, yielded new insights into Humboldt's stay at Antisana.
The technology of 3D printing has developed rapidly in recent decades. In industry, ever more advanced and specialized printing processes are emerging, while in the hobby and consumer sector, increasingly affordable and easy-to-use devices are becoming available. Only in education does the topic still seem to play a marginal role, although numerous points of contact can be found for its use in a wide range of subjects. Especially in the subject Wirtschaft-Arbeit-Technik, the links to the Berlin/Brandenburg framework curriculum are evident, yet only isolated concrete and systematic didactic concepts and proposals for classroom integration exist so far. In this thesis, the author therefore seeks to demonstrate the relevance of the topic for technical education, to give a brief technical introduction to the FDM printing process, which is particularly well suited for use in schools, and, building on this, to present concrete implementation proposals: on the one hand, a general phase model for planning technology lessons, and on the other, an exemplary teaching concept. Using a chess set as an example, it is shown how students can use digital CAD programs to prepare the design documents and then manufacture the pieces additively with a 3D printer.
Over the past decades, natural hazards, many of which are aggravated by climate change and show an increasing trend in frequency and intensity, have caused significant human and economic losses and pose a considerable obstacle to sustainable development. Hence, dedicated action toward disaster risk reduction is needed to understand the underlying drivers and create efficient risk mitigation plans. Such action is requested by the Sendai Framework for Disaster Risk Reduction 2015-2030 (SFDRR), a global agreement launched in 2015 that establishes priorities for action, e.g. an improved understanding of disaster risk. Turkey is one of the SFDRR contracting countries and has been severely affected by many natural hazards, in particular earthquakes and floods. However, disproportionately little is known about flood hazards and risks in Turkey. Therefore, this thesis aims to carry out the first comprehensive analysis of flood hazards in Turkey, from triggering drivers to impacts. It is intended to contribute to a better understanding of flood risks, to improvements in flood risk mitigation and to easier monitoring of progress and achievements in implementing the SFDRR.
In order to investigate the occurrence and severity of flooding in comparison to other natural hazards in Turkey and to provide an overview of the temporal and spatial distribution of flood losses, the Turkey Disaster Database (TABB) was examined for the years 1960-2014. The TABB database was reviewed through comparison with the Emergency Events Database (EM-DAT), the Dartmouth Flood Observatory database, the scientific literature and news archives. In addition, data on the most severe flood events between 1960 and 2014 were retrieved. These served as a basis for analyzing triggering mechanisms (i.e. atmospheric circulation and precipitation amounts) and aggravating pathways (i.e. topographic features, catchment size, land use types and soil properties). For this, a new approach was developed and the events were classified using hierarchical cluster analyses to identify the main influencing factor per event and provide additional information about the dominant flood pathways for severe floods. The main idea of the study was to start from the event impacts in a bottom-up approach and identify the causes that created damaging events, instead of applying a model chain with long-term series as input and searching for potentially impacting events as model outcomes. However, within the frequency analysis of the flood-triggering circulation pattern types, it was discovered that some events involving heavy precipitation were not included in the list of most severe floods, i.e. their impacts were not recorded in national and international loss databases but were mentioned in news archives and reported by the Turkish State Meteorological Service. This finding challenges bottom-up modelling approaches and underlines the urgent need for consistent event and loss documentation.
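The hierarchical classification step can be sketched as follows; the three features and all event values below are invented for illustration (the thesis works with circulation, precipitation, topographic, land-use and soil descriptors):

```python
# Illustrative sketch of classifying flood events by driver/pathway features
# with hierarchical clustering. Feature choices and numbers are hypothetical.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

# rows = flood events; columns = [precipitation (mm), catchment size (km^2), mean slope (deg)]
events = np.array([
    [120.0,  250.0,  5.0],
    [115.0,  300.0,  4.5],
    [ 30.0, 5000.0, 12.0],
    [ 25.0, 4800.0, 11.0],
    [200.0,   80.0, 20.0],
])

# standardize each feature so no single unit dominates the distance metric
X = zscore(events, axis=0)

# agglomerative clustering with Ward linkage, cut into 3 event groups
Z = linkage(X, method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")
print(labels)
```

Cutting the dendrogram at a fixed number of clusters assigns each event to a group, which can then be inspected for its dominant influencing factor.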
Therefore, as a next step, the aim was to enhance the flood loss documentation by calibrating, validating and applying the United Nations Office for Disaster Risk Reduction (UNDRR) loss estimation method to the recent severe flood events (2015-2020). This provided a consistent flood loss estimation model for Turkey, allowing governments to estimate losses as quickly as possible after events, e.g. to better coordinate financial aid.
This thesis reveals that, after earthquakes, floods have the second most destructive effects in Turkey in terms of human and economic impacts, with over 800 fatalities and US$ 885.7 million in economic losses between 1960 and 2020, and that they deserve more attention on the national scale. The clustering results for the dominant flood-producing mechanisms (e.g. circulation pattern types, extreme rainfall, sudden snowmelt) present crucial information regarding source and pathway identification, which can be used as base information for hazard identification in the preliminary risk assessment process. The implementation of the UNDRR loss estimation model shows that a model with country-specific parameters, calibrated damage ratios and sufficient event documentation (i.e. physically damaged units) can be recommended for providing first estimates of the magnitude of direct economic losses, even shortly after events have occurred, since it performed well when estimates were compared to documented losses.
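A UNDRR-style first loss estimate essentially multiplies documented physically damaged units by calibrated damage ratios and unit replacement costs, summed over sectors; a minimal sketch, with sector names, ratios and costs that are purely illustrative assumptions:

```python
# Simplified sketch of a UNDRR-style direct economic loss estimate.
# All sectors, damage ratios and replacement costs are invented examples.
sectors = {
    # sector: (physically_damaged_units, calibrated_damage_ratio, replacement_cost_per_unit_usd)
    "housing":        (120, 0.4, 50_000.0),
    "agriculture_ha": (300, 0.6,  1_200.0),
    "roads_km":       ( 15, 0.5, 80_000.0),
}

def direct_loss(sectors):
    """Sum sectoral losses into a first estimate of direct economic loss."""
    return sum(units * ratio * cost for units, ratio, cost in sectors.values())

total = direct_loss(sectors)
print(f"estimated direct loss: US$ {total:,.0f}")
```

Because it needs only the damaged-unit counts recorded shortly after an event, such an estimate can be produced quickly, which is the point made above about coordinating financial aid.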
The presented results can contribute to improving the national disaster loss database in Turkey and thus enable a better monitoring of the national progress and achievements with regard to the targets stated by the SFDRR. In addition, the outcomes can be used to better characterize and classify flood events. Information on the main underlying factors and aggravating flood pathways further supports the selection of suitable risk reduction policies.
All input variables used in this thesis were obtained from publicly available data. The results are openly accessible and can be used for further research.
As an overall conclusion, it can be stated that consistent loss data collection and better event documentation should gain more attention for a reliable monitoring of the implementation of the SFDRR. Better event documentation should be established according to a globally accepted standard for disaster classification and loss estimation in Turkey. Ultimately, this enables stakeholders to create better risk mitigation actions based on clear hazard definitions, flood event classification and consistent loss estimations.
Exendin-4 is a pharmaceutical peptide used in the control of insulin secretion. Structural information on exendin-4 and related peptides especially on the level of quaternary structure is scarce. We present the first published association equilibria of exendin-4 directly measured by static and dynamic light scattering. We show that exendin-4 oligomerization is pH dependent and that these oligomers are of low compactness. We relate our experimental results to a structural hypothesis to describe molecular details of exendin-4 oligomers. Discussion of the validity of this hypothesis is based on NMR, circular dichroism and fluorescence spectroscopy, and light scattering data on exendin-4 and a set of exendin-4 derived peptides. The essential forces driving oligomerization of exendin-4 are helix–helix interactions and interactions of a conserved hydrophobic moiety. Our structural hypothesis suggests that key interactions of exendin-4 monomers in the experimentally supported trimer take place between a defined helical segment and a hydrophobic triangle constituted by the Phe22 residues of the three monomeric subunits. Our data rationalize that Val19 might function as an anchor in the N-terminus of the interacting helix-region and that Trp25 is partially shielded in the oligomer by C-terminal amino acids of the same monomer. Our structural hypothesis suggests that the Trp25 residues do not interact with each other, but with C-terminal Pro residues of their own monomers.
A few months before his death, A. v. Humboldt attended the celebration in honor of the 127th birthday of George Washington at the US legation in Berlin. A letter to the American Envoy, Joseph A. Wright (1810–1867), underlines Humboldt's admiration for the first president of the United States. At the same time, Humboldt asked the diplomat to mail a letter to the German-American Bernard Moses (1832–1897) in Clinton, Louisiana, who had named his son Alexander Humboldt Moses (grave on the Hebrew Rest Cemetery #2 in New Orleans, burial plot A, 12, 5). It appears possible that the Moses family still owns Humboldt's letter.
The large literature that aims to find evidence of climate migration delivers mixed findings. This meta-regression analysis i) summarizes direct links between adverse climatic events and migration, ii) maps patterns of climate migration, and iii) explains the variation in outcomes. Using a set of limited dependent variable models, we meta-analyze the most comprehensive sample to date of 3,625 estimates from 116 original studies and produce novel insights on climate migration. We find that extremely high temperatures and drying conditions increase migration. We do not find a significant effect of sudden-onset events. Climate migration is most likely to emerge due to contemporaneous events, to originate in rural areas and to take place in middle-income countries, internally, to cities. The likelihood of becoming trapped in affected areas is higher for women and in low-income countries, particularly in Africa. We uniquely quantify how pitfalls typical of the broader empirical climate impact literature affect climate migration findings. We also find evidence of different publication biases.
With the downscaling of CMOS technologies, radiation-induced Single Event Transient (SET) effects in combinational logic have become a critical reliability issue for modern integrated circuits (ICs) intended for operation under harsh radiation conditions. The SET pulses generated in combinational logic may propagate through the circuit and eventually result in soft errors. It has thus become imperative to address SET effects in the early phases of radiation-hard IC design. In general, soft error mitigation solutions should accommodate both static and dynamic measures to ensure the optimal utilization of available resources. An efficient soft-error-aware design should address three main aspects synergistically: (i) characterization and modeling of soft errors, (ii) multi-level soft error mitigation, and (iii) online soft error monitoring. Although significant results have been achieved, the effectiveness of SET characterization methods, the accuracy of predictive SET models, and the efficiency of SET mitigation measures are still critical issues. Therefore, this work addresses the following topics: (i) characterization and modeling of SET effects in standard combinational cells, (ii) static mitigation of SET effects in standard combinational cells, and (iii) online particle detection, as a support for dynamic soft error mitigation.
Since standard digital libraries are widely used in the design of radiation-hard ICs, the characterization of SET effects in standard cells and the availability of accurate SET models for Soft Error Rate (SER) evaluation are the main prerequisites for efficient radiation-hard design. This work introduces an approach for SPICE-based standard cell characterization with a reduced number of simulations, improved SET models and an optimized SET sensitivity database. It has been shown that the inherent similarities in the SET response of logic cells for different input levels can be utilized to reduce the number of required simulations. Based on the characterization results, fitting models for the SET sensitivity metrics (critical charge, generated SET pulse width and propagated SET pulse width) have been developed. The proposed models are based on the principle of superposition, and they express explicitly the dependence of the SET sensitivity of individual combinational cells on design, operating and irradiation parameters. In contrast to state-of-the-art characterization methodologies, which employ extensive look-up tables (LUTs) for storing the simulation results, this work proposes the use of LUTs for storing the fitting coefficients of the SET sensitivity models derived from the characterization results. In that way, the amount of characterization data in the SET sensitivity database is reduced significantly.
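The idea of storing fitting coefficients rather than full simulation LUTs can be illustrated with a toy example; the linear dependence of critical charge on load capacitance and all numbers below are assumptions for illustration, not characterization data:

```python
# Toy illustration: replace a per-point simulation LUT with a compact fitted
# model, storing only its coefficients. Data and model form are hypothetical.
import numpy as np

# pretend SPICE characterization results: load capacitance (fF) -> critical charge (fC)
c_load = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
q_crit = np.array([2.1, 3.0, 4.8, 8.5, 15.9])  # roughly linear in c_load here

# store two fitting coefficients instead of the five-point table
b, a = np.polyfit(c_load, q_crit, 1)  # q_crit ≈ a + b * c_load

def q_crit_model(c):
    """Evaluate the compact sensitivity model for an arbitrary load."""
    return a + b * c

# the compact model reproduces the characterized points closely
max_err = float(np.max(np.abs(q_crit_model(c_load) - q_crit)))
print(round(max_err, 3))
```

Here two coefficients replace a five-entry table; in a real flow the saving grows with the number of characterized design, operating and irradiation points, and the model can also be evaluated between characterized points.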
The initial step in enhancing the robustness of combinational logic is the application of gate-level mitigation techniques. As a result, significant improvement of the overall SER can be achieved with minimal area, delay and power overheads. For SET mitigation in standard cells, it is essential to employ techniques that do not require modifying the cell structure. This work introduces the use of decoupling cells for improving the robustness of standard combinational cells. By inserting two decoupling cells at the output of a target cell, the critical charge of the cell's output node is increased and the attenuation of short SETs is enhanced. In comparison to the most common gate-level techniques (gate upsizing and gate duplication), the proposed approach provides better SET filtering. However, as there is no single gate-level mitigation technique with optimal performance, a combination of multiple techniques is required. This work introduces a comprehensive characterization of gate-level mitigation techniques aimed at quantifying their impact on SET robustness improvement, as well as the introduced area, delay and power overheads per gate. By characterizing the gate-level mitigation techniques together with the standard cells, the required effort in subsequent SER analysis of a target design can be reduced. The characterization database of the hardened standard cells can be utilized as a guideline for selecting the most appropriate mitigation solution for a given design.
As a support for dynamic soft error mitigation techniques, it is important to enable the online detection of the energetic particles causing soft errors. This allows the power-greedy fault-tolerant configurations based on N-modular redundancy to be activated only at high radiation levels. To enable such functionality, it is necessary to monitor both the particle flux and the variation of particle LET, as these two parameters contribute significantly to the system SER. In this work, a particle detection approach based on custom-sized pulse stretching inverters is proposed. Employing pulse stretching inverters connected in parallel makes it possible to measure the particle flux in terms of the number of detected SETs, while the particle LET variations can be estimated from the distribution of SET pulse widths. This approach requires purely digital processing logic, in contrast to standard detectors, which require complex mixed-signal processing. Besides the possibility of LET monitoring, additional advantages of the proposed particle detector are low detection latency, low power consumption and immunity to error accumulation.
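The digital post-processing described above reduces to counting detections and summarizing the pulse-width distribution; a hypothetical sketch with invented measurements:

```python
# Hypothetical post-processing of a pulse-stretching SET detector: the count
# per window tracks particle flux, while the pulse-width statistics hint at
# the LET variation. All values below are invented for illustration.
import statistics

# detected (stretched) SET pulse widths (ns) within one monitoring window
pulse_widths_ns = [0.8, 1.1, 0.9, 2.4, 2.6, 1.0, 2.5]
window_s = 10.0  # monitoring window length (s)

flux = len(pulse_widths_ns) / window_s       # detections per second
mean_w = statistics.mean(pulse_widths_ns)    # shifts with the mean deposited charge/LET
spread = statistics.pstdev(pulse_widths_ns)  # tracks the LET variation

print(flux, round(mean_w, 2), round(spread, 2))
```

A threshold on `flux` could then trigger the switch to an N-modular-redundant configuration, keeping the power-greedy mode active only while the radiation level stays high.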
The results achieved in this thesis can serve as a basis for the establishment of an overall soft-error-aware database for a given digital library, and for a comprehensive multi-level radiation-hard design flow that can be implemented with standard IC design tools. The next step will be to evaluate the achieved results in irradiation experiments.
Atmospheric water vapour content is a key variable that controls the development of deep convective storms and rainfall extremes over the central Andes. Direct measurements of water vapour are challenging; however, recent developments in microwave processing allow the use of phase delays of L-band signals to measure the water vapour content throughout the atmosphere: Global Navigation Satellite System (GNSS)-based integrated water vapour (IWV) monitoring shows promising results for measuring vertically integrated water vapour at high temporal resolution. Previous work has also identified convective available potential energy (CAPE) as a key climatic variable for the formation of deep convective storms and rainfall in the central Andes. Our analysis relies on GNSS data from the Argentine Continuous Satellite Monitoring Network (Red Argentina de Monitoreo Satelital Continuo, RAMSAC) from 1999 to 2013. CAPE is derived from version 2.0 of the ECMWF (European Centre for Medium-Range Weather Forecasts) reanalysis (ERA-Interim) and rainfall from the TRMM (Tropical Rainfall Measuring Mission) product. In this study, we first analyse the rainfall characteristics of two GNSS-IWV stations by comparing their complementary cumulative distribution functions (CCDF). Second, we separately derive the relation of rainfall to CAPE and to GNSS-IWV. Based on our distribution fitting analysis, we observe an exponential relation of rainfall to GNSS-IWV. In contrast, we report a power-law relationship between the daily mean value of rainfall and CAPE at the GNSS-IWV station locations in the eastern central Andes that is close to the theoretical relationship based on parcel theory. Third, we generate a joint regression model through a multivariable regression analysis using CAPE and GNSS-IWV to explain the contribution of both variables, in the presence of each other, to extreme rainfall during the austral summer season.
We found that rainfall can be characterised with a higher statistical significance for higher rainfall quantiles, e.g., the 0.9 quantile based on goodness-of-fit criterion for quantile regression. We observed different contributions of CAPE and GNSS-IWV to rainfall for each station for the 0.9 quantile. Fourth, we identify the temporal relation between extreme rainfall (the 90th, 95th, and 99th percentiles) and both GNSS-IWV and CAPE at 6 h time steps. We observed an increase before the rainfall event and at the time of peak rainfall—both for GNSS-integrated water vapour and CAPE. We show higher values of CAPE and GNSS-IWV for higher rainfall percentiles (99th and 95th percentiles) compared to the 90th percentile at a 6-h temporal scale. Based on our correlation analyses and the dynamics of the time series, we show that both GNSS-IWV and CAPE had comparable magnitudes, and we argue to consider both climatic variables when investigating their effect on rainfall extremes.
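The joint-model idea, explaining rainfall with CAPE and IWV together in one multivariable regression, can be sketched with synthetic data; note that the study fits quantile regressions, whereas this minimal stand-in uses ordinary least squares, and all data below are simulated:

```python
# Minimal sketch of a joint multivariable regression of rainfall on CAPE and
# integrated water vapour. Data are synthetic; the linear form is an assumption.
import numpy as np

rng = np.random.default_rng(0)
n = 200
cape = rng.uniform(100, 2000, n)   # CAPE (J/kg)
iwv = rng.uniform(10, 50, n)       # integrated water vapour (kg/m^2)
# synthetic rainfall (mm) generated from known coefficients plus noise
rain = 0.002 * cape + 0.3 * iwv + rng.normal(0, 1.0, n)

# design matrix with intercept, CAPE and IWV columns
X = np.column_stack([np.ones(n), cape, iwv])
coef, *_ = np.linalg.lstsq(X, rain, rcond=None)
print(coef)  # [intercept, CAPE coefficient, IWV coefficient]
```

The fitted coefficients recover the contribution of each predictor in the presence of the other; replacing the least-squares fit with a pinball (quantile) loss at, e.g., the 0.9 quantile yields the kind of extreme-rainfall model described above.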
We consider a sequential cascade of molecular first-reaction events towards a terminal reaction centre in which each reaction step is controlled by diffusive motion of the particles. The model studied here represents a typical reaction setting encountered in diverse molecular biology systems, in which, e.g. a signal transduction proceeds via a series of consecutive 'messengers': the first messenger has to find its respective immobile target site triggering a launch of the second messenger, the second messenger seeks its own target site and provokes a launch of the third messenger and so on, resembling a relay race in human competitions. For such a molecular relay race taking place in infinite one-, two- and three-dimensional systems, we find exact expressions for the probability density function of the time instant of the terminal reaction event, conditioned on preceding successful reaction events on an ordered array of target sites. The obtained expressions pertain to the most general conditions: number of intermediate stages and the corresponding diffusion coefficients, the sizes of the target sites, the distances between them, as well as their reactivities are arbitrary.
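Schematically (under the independence of consecutive stages assumed in the model, and not reproducing the exact published expressions), if $f_k(t)$ denotes the conditional first-passage time density of the $k$-th messenger from its launch point to its target, the density of the terminal reaction time is the $N$-fold convolution:

```latex
P_N(t) \;=\; \left(f_1 * f_2 * \cdots * f_N\right)(t)
       \;=\; \int_0^{t}\!\mathrm{d}t_{N-1} \cdots \int_0^{t_2}\!\mathrm{d}t_1\,
             f_1(t_1)\, f_2(t_2 - t_1) \cdots f_N(t - t_{N-1})
```

In the actual model, each $f_k$ depends on the spatial dimension, the diffusion coefficient of the $k$-th messenger, the size of its target site and the distance it must travel, which is how the exact one-, two- and three-dimensional results are obtained.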
In an attempt to pave the way for more extensive Computer Science Education (CSE) coverage in K-12, this research developed and made a preliminary evaluation of a blended-learning Introduction to CS program based on an academic MOOC. Using an academic MOOC that is pedagogically effective and engaging, such a program may provide teachers with disciplinary scaffolds and allow them to focus their attention on enhancing students' learning experience and nurturing critical 21st-century skills such as self-regulated learning. As we demonstrate, this enabled us to introduce an academic-level course to middle-school students. In this research, we developed the principles and an initial version of such a program, targeting ninth-graders in science-track classes who learn CS as part of their standard curriculum. We found that the middle-schoolers who participated in the program achieved academic results on par with undergraduate students taking this MOOC for academic credit. Participating students also developed a more accurate perception of the essence of CS as a scientific discipline. The unplanned school closure due to the COVID-19 pandemic outbreak challenged the research but underlined the advantages of such a MOOC-based blended-learning program over classic pedagogy in times of global or local crises that lead to school closure. While most of the science-track classes seemed to stop learning CS almost entirely, and the end-of-year MoE exam was discarded, the program's classes moved smoothly to remote learning, and students continued to study at a pace similar to that experienced before the school shut down.
This thesis focuses on the study of marked Gibbs point processes, in particular presenting some results on their existence and uniqueness, with ideas and techniques drawn from different areas of statistical mechanics: the entropy method from large deviations theory, cluster expansion and the Kirkwood--Salsburg equations, the Dobrushin contraction principle and disagreement percolation.
We first present an existence result for infinite-volume marked Gibbs point processes. More precisely, we use the so-called entropy method (and large-deviation tools) to construct marked Gibbs point processes in R^d under quite general assumptions. In particular, the random marks belong to a general normed space S and are not bounded. Moreover, we allow for interaction functionals that may be unbounded and whose range is finite but random. The entropy method relies on showing that a family of finite-volume Gibbs point processes belongs to sequentially compact entropy level sets, and is therefore tight.
We then present infinite-dimensional Langevin diffusions, which we put in interaction via a Gibbsian description. In this setting, we are able to adapt the general result above to show the existence of the associated infinite-volume measure. We also study its correlation functions via cluster expansion techniques, and obtain the uniqueness of the Gibbs process for all inverse temperatures β and activities z below a certain threshold. This method relies on first showing that the correlation functions of the process satisfy a so-called Ruelle bound, and then using it to solve a fixed point problem in an appropriate Banach space. The uniqueness domain we obtain then consists of the model parameters z and β for which such a problem has exactly one solution.
Finally, we explore further the question of uniqueness of infinite-volume Gibbs point processes on R^d, in the unmarked setting. We present, in the context of repulsive interactions with a hard-core component, a novel approach to uniqueness by applying the discrete Dobrushin criterion to the continuum framework. We first fix a discretisation parameter a>0 and then study the behaviour of the uniqueness domain as a goes to 0. With this technique we are able to obtain explicit thresholds for the parameters z and β, which we then compare to existing results coming from the different methods of cluster expansion and disagreement percolation.
Throughout this thesis, we illustrate our theoretical results with various examples both from classical statistical mechanics and stochastic geometry.
Monoclonal antibodies are used worldwide as highly potent and efficient detection reagents for research and diagnostic applications. Nevertheless, the specific targeting of complex antigens such as whole microorganisms remains a challenge. To provide a comprehensive workflow, we combined bioinformatic analyses with novel immunization and selection tools to design monoclonal antibodies for the detection of whole microorganisms. In our initial study, we used the human pathogenic strain E. coli O157:H7 as a model target and identified 53 potential protein candidates using reverse vaccinology methodology. Five different peptide epitopes were selected for immunization using epitope-engineered viral proteins. The identification of antibody-producing hybridomas was performed using a novel screening technology based on transgenic fusion cell lines. Using an artificial cell surface receptor expressed by all hybridomas, the desired antigen-specific cells can be sorted quickly and efficiently out of the fusion cell pool. Selected antibody candidates were characterized and showed strong binding to the target strain E. coli O157:H7 with minor or no cross-reactivity to other relevant microorganisms such as Legionella pneumophila and Bacillus spp. This approach could serve as a highly efficient workflow for the generation of antibodies against microorganisms.
The spread of shrubs in Namibian savannas raises questions about the resilience of these ecosystems to global change. This makes it necessary to understand the past dynamics of the vegetation, since there is no consensus on whether shrub encroachment is a new phenomenon, nor on its main drivers. However, a lack of long-term vegetation datasets for the region and the scarcity of suitable palaeoecological archives make reconstructing the past vegetation and land cover of these savannas a challenge.
To help meet this challenge, this study addresses three main research questions: 1) Is pollen analysis a suitable tool to reflect the vegetation change associated with shrub encroachment in savanna environments? 2) Does the current encroached landscape correspond to an alternative stable state of savanna vegetation? 3) To what extent do pollen-based quantitative vegetation reconstructions reflect changes in past land cover?
The research focuses on north-central Namibia, the region most affected by shrub invasion, particularly since the beginning of the 21st century, yet one in which little is known about the dynamics of this phenomenon.
Field-based vegetation data were compared with modern pollen data to assess their correspondence in terms of composition and diversity along precipitation and grazing intensity gradients. In addition, two sediment cores from Lake Otjikoto were analysed to reveal changes in vegetation composition that have occurred in the region over the past 170 years and their possible drivers. For this, a multiproxy approach (fossil pollen, sedimentary ancient DNA (sedaDNA), biomarkers, compound-specific carbon (δ13C) and deuterium (δD) isotopes, bulk carbon isotopes (δ13Corg), grain size, geochemical properties) was applied at high taxonomic and temporal resolution. REVEALS modelling of the fossil pollen record from Lake Otjikoto was applied to quantitatively reconstruct past vegetation cover. For this, we first derived pollen productivity estimates (PPEs) of the most relevant savanna taxa in the region using the extended R-value model and two pollen dispersal options (Gaussian plume model and Lagrangian stochastic model). The REVEALS-based vegetation reconstruction was then validated using remote sensing-based regional vegetation data.
The results show that modern pollen reflects the composition of the vegetation well, but diversity less well. Interestingly, precipitation and grazing explain a significant amount of the compositional change in the pollen and vegetation spectra. The multiproxy record shows that a state change from open Combretum woodland to encroached Terminalia shrubland can occur over a century, and that the transition between states spans around 80 years and is characterized by a unique vegetation composition. This transition is supported by gradual environmental changes induced by management (i.e. broad-scale logging for the mining industry, selective grazing and reduced fire activity associated with intensified farming) and related land-use change. Derived environmental changes (i.e. reduced soil moisture, reduced grass cover, changes in species composition and competitiveness, reduced fire intensity) may have affected the resilience of Combretum open woodlands, making them more susceptible to change to an encroached state by stochastic events such as consecutive years of high precipitation or drought, and by high concentrations of pCO2. We assume that the resulting encroached state was further stabilized by feedback mechanisms that favour the establishment and competitiveness of woody vegetation.
The REVEALS-based quantitative estimates of plant taxa indicate the predominance of a semi-open landscape throughout the 20th century and a reduction in grass cover below 50% since the 21st century associated with the spread of encroacher woody taxa. Cover estimates show a close match with regional vegetation data, providing support for the vegetation dynamics inferred from multiproxy analyses. Reasonable PPEs were made for all woody taxa, but not for Poaceae.
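The core idea behind REVEALS-type correction can be illustrated with a minimal sketch: raw pollen proportions are divided by each taxon's pollen productivity estimate (PPE) and a dispersal-deposition factor before renormalizing. The full REVEALS model additionally accounts for basin size and the chosen dispersal model; here those effects are folded into a single precomputed factor, and all names and numbers are illustrative, not values from this study.

```python
def reveals_cover_estimate(pollen_counts, ppe, deposition):
    """Toy REVEALS-style correction: convert pollen counts into
    vegetation-cover proportions by dividing out each taxon's pollen
    productivity estimate (PPE) and a dispersal-deposition factor,
    then renormalizing. Simplified sketch only."""
    adjusted = {
        taxon: pollen_counts[taxon] / (ppe[taxon] * deposition[taxon])
        for taxon in pollen_counts
    }
    total = sum(adjusted.values())
    return {taxon: value / total for taxon, value in adjusted.items()}

# Poaceae over-produces pollen relative to the woody taxa in this toy
# example, so its corrected cover share drops below its raw pollen share.
cover = reveals_cover_estimate(
    pollen_counts={"Poaceae": 600, "Terminalia": 200, "Combretum": 200},
    ppe={"Poaceae": 3.0, "Terminalia": 1.0, "Combretum": 1.0},
    deposition={"Poaceae": 1.0, "Terminalia": 1.0, "Combretum": 1.0},
)
```

With equal deposition factors, the correction reduces to dividing by the PPEs, which is why high pollen producers such as grasses are down-weighted in the cover estimate.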
In conclusion, pollen analysis is a suitable tool to reconstruct past vegetation dynamics in savannas. However, because pollen cannot identify grasses beyond family level, a multiproxy approach, particularly the use of sedaDNA, is required. I was able to separate stable encroached states from mere woodland phases, and could identify drivers and speculate about related feedbacks. In addition, the REVEALS-based quantitative vegetation reconstruction clearly reflects the magnitude of the changes in the vegetation cover that occurred during the last 130 years, despite the limitations of some PPEs.
This research provides new insights into pollen-vegetation relationships in savannas and highlights the importance of multiproxy approaches when reconstructing past vegetation dynamics in semi-arid environments. It also provides the first time series with sufficient taxonomic resolution to show changes in vegetation composition during shrub encroachment, as well as the first quantitative reconstruction of past land cover in the region. These results help to identify the different stages in savanna dynamics and can be used to calibrate predictive models of vegetation change, which are highly relevant to land management.
Objective
The Caribbean is an important global biodiversity hotspot. Adaptive radiations there lead to many speciation events within a limited period and hence are particularly prominent biodiversity generators. A prime example is the freshwater fish genus Limia, endemic to the Greater Antilles. Within Hispaniola, nine species have been described from a single isolated site, Lake Miragoâne, pointing towards extraordinary sympatric speciation. This study examines the evolutionary history of the Limia species in Lake Miragoâne, relative to their congeners throughout the Caribbean.
Results
For 12 Limia species, we obtained almost complete sequences of the mitochondrial cytochrome b gene, a well-established marker for lower-level taxonomic relationships. We included sequences of six further Limia species from GenBank (total N = 18 species). Our phylogenies are in concordance with other published phylogenies of Limia. There is strong support that the species found in Lake Miragoâne in Haiti are monophyletic, confirming a recent local radiation. Within Lake Miragoâne, speciation is likely extremely recent, leading to incomplete lineage sorting in the mtDNA. Future studies using multiple unlinked genetic markers are needed to disentangle the relationships within the Lake Miragoâne clade.
Background
Relatively little is known about protective factors and the emergence and maintenance of positive outcomes in adolescents with chronic conditions. Therefore, the primary aim of the study is to acquire a deeper understanding of the dynamic interplay of resilience factors, coping strategies and psychosocial adjustment in adolescents living with chronic conditions.
Methods/design
We plan to consecutively recruit N = 450 adolescents (12–21 years) from three German patient registries for chronic conditions (type 1 diabetes, cystic fibrosis, or juvenile idiopathic arthritis). Based on screening for anxiety and depression, adolescents are assigned to two parallel groups – “inconspicuous” (PHQ-9 and GAD-7 < 7) vs. “conspicuous” (PHQ-9 or GAD-7 ≥ 7) – participating in a prospective online survey at baseline and 12-month follow-up. At two time points (T1, T2), we assess (1) intra- and interpersonal resiliency factors, (2) coping strategies, and (3) health-related quality of life, well-being, satisfaction with life, anxiety and depression. Using a cross-lagged panel design, we will examine the bidirectional longitudinal relations between resiliency factors and coping strategies, psychological adaptation, and psychosocial adjustment. To monitor Covid-19 pandemic effects, participants are also invited to take part in an intermediate online survey.
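A cross-lagged panel path can be sketched in a few lines: with standardized variables, the path from a T1 predictor to a T2 outcome, controlling for the outcome's own T1 level, follows from the zero-order correlations via the standard two-predictor regression formula. The variable names below are hypothetical illustrations, not measures from the study protocol.

```python
def cross_lagged_beta(r_outcome_pred, r_outcome_control, r_pred_control):
    """Standardized cross-lagged path: e.g., resiliency at T1 predicting
    adjustment at T2, controlling for adjustment at T1. Inputs are the
    zero-order correlations among the standardized variables."""
    return (r_outcome_pred - r_outcome_control * r_pred_control) / (
        1 - r_pred_control ** 2
    )
```

When the T1 predictor and the T1 control are uncorrelated, the cross-lagged path equals the raw correlation; stability of the outcome over time shrinks it, which is why controlling for baseline matters in this design.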
Discussion
The study will provide a deeper understanding of adaptive, potentially modifiable processes and will therefore help to develop novel, tailored interventions supporting a positive adaptation in youths with a chronic condition. These strategies should not only support those at risk but also promote the maintenance of a successful adaptation.
Trial registration
German Clinical Trials Register (DRKS), no. DRKS00025125. Registered on May 17, 2021.
A Secular Tradition
(2021)
This article focuses on the social philosopher Horace Kallen and the revisions he made to the concept of cultural pluralism that he first developed in the early 20th century, applying it to postwar America and the young State of Israel. It shows how he opposed the assumption that the United States’ social order was based on a “Judeo-Christian tradition.” By constructing pluralism as a civil religion and carving out space for secular self-understandings in midcentury America, Kallen attempted to preserve the integrity of his earlier political visions, developed during World War I, of pluralist societies in the United States and Palestine within an internationalist global order. While his perspective on the State of Israel was largely shaped by his American experiences, he revised his approach to politically functionalizing religious traditions as he tested his American understanding of a secular, pluralist society against the political theology effective in the State of Israel. The trajectory of Kallen’s thought points to fundamental questions about the compatibility of American and Israeli understandings of religion’s function in society and its relation to political belonging, especially in light of their transnational connection through American Jewish support for the recently established state.
In Systems Medicine, in addition to high-throughput molecular data (*omics), the wealth of clinical characterization plays a major role in the overall understanding of a disease. Unique problems and challenges arise from the heterogeneity of the data and require new solutions in software and analysis methods. The SMART and EurValve studies establish a Systems Medicine approach to valvular heart disease, the primary cause of subsequent heart failure.
With the aim of ascertaining a holistic understanding, different *omics data as well as the clinical picture of patients with aortic stenosis (AS) and mitral regurgitation (MR) are collected. Our task within the SMART consortium was to develop an IT platform for Systems Medicine as a basis for data storage, processing, and analysis, and as a prerequisite for collaborative research. Building on this platform, this thesis deals on the one hand with the transfer of established Systems Biology methods to the Systems Medicine context, and on the other hand with the clinical and biomolecular differences between the two heart valve diseases. To advance differential expression/abundance (DE/DA) analysis software for use in Systems Medicine, we state 21 general software requirements and features of automated DE/DA software, including a novel concept for the simple formulation of experimental designs that can represent complex hypotheses, such as comparisons of multiple experimental groups, and demonstrate our handling of the wealth of clinical data in two research applications, DEAME and Eatomics. In user interviews, we show that novice users are empowered to formulate and test their multiple DE hypotheses based on clinical phenotype. Furthermore, we describe insights into users' general impression and expectation of the software's performance and show their intention to continue using the software for their work in the future. Both research applications cover most of the features of existing tools or even extend them, especially with respect to complex experimental designs. Eatomics is freely available to the research community as a user-friendly R Shiny application.
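Formulating an experimental design for DE/DA analysis amounts to turning clinical factor annotations into a dummy-coded design matrix for a linear model, as in limma-style workflows. The sketch below is a generic illustration of that idea, not the actual design syntax used in DEAME or Eatomics; the factor names are invented.

```python
def design_matrix(samples, factors):
    """Dummy-code clinical factors into a design matrix for a linear
    DE/DA model: an intercept column plus one indicator column per
    non-reference level of each factor. Generic sketch only."""
    columns = ["(Intercept)"]
    encoders = []
    for factor in factors:
        levels = sorted({s[factor] for s in samples})
        for level in levels[1:]:  # first (sorted) level is the reference
            columns.append(f"{factor}{level}")
            encoders.append((factor, level))
    rows = [
        [1] + [1 if s[f] == lvl else 0 for f, lvl in encoders]
        for s in samples
    ]
    return columns, rows

# Hypothetical samples: disease (AS vs. MR) and sex as clinical factors.
samples = [
    {"disease": "AS", "sex": "F"},
    {"disease": "AS", "sex": "M"},
    {"disease": "MR", "sex": "F"},
    {"disease": "MR", "sex": "M"},
]
cols, X = design_matrix(samples, ["disease", "sex"])
```

Each coefficient fitted against such a matrix then answers one contrast, e.g. the "diseaseMR" column captures the MR-vs-AS effect adjusted for sex, which is the kind of sex-stratified comparison described above.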
Eatomics continued to help drive the collaborative analysis and interpretation of the proteomic profile of 75 human left myocardial tissue samples from the SMART and EurValve studies. Here, we investigate molecular changes within the two most common types of valvular heart disease: aortic valve stenosis (AS) and mitral valve regurgitation (MR). Through DE/DA analyses, we explore shared and disease-specific protein alterations, particularly signatures that could only be found in the sex-stratified analysis. In addition, we relate changes in the myocardial proteome to parameters from clinical imaging. We find comparable cardiac hypertrophy but differences in ventricular size, the extent of fibrosis, and cardiac function. We find that AS and MR show many shared remodeling effects, the most prominent of which is an increase in the extracellular matrix and a decrease in metabolism. Both effects are stronger in AS. In muscle and cytoskeletal adaptations, we see a greater increase in mechanotransduction in AS and an increase in cortical cytoskeleton in MR. The decrease in proteostasis proteins is mainly attributable to the signature of female patients with AS. We also find relevant therapeutic targets.
In addition to the new findings, our work confirms several concepts from animal and heart failure studies by providing the largest collection of human tissue from in vivo collected biopsies to date. Our dataset contributes a resource for isoform-specific protein expression in two of the most common valvular heart diseases. Apart from the general proteomic landscape, we demonstrate the added value of the dataset by showing proteomic and transcriptomic evidence for increased expression of the SARS-CoV-2 receptor under pressure load but not under volume load in the left ventricle, and we also provide the basis of a newly developed metabolic model of the heart.
A tale of shifting relations
(2021)
Understanding the dynamics between the East Asian summer monsoon (EASM) and winter monsoon (EAWM) is needed to predict their variability under future global warming scenarios. Here, we investigate the relationship between the EASM and EAWM as well as the mechanisms driving their variability during the last 10,000 years by stacking marine and terrestrial (non-speleothem) proxy records from the East Asian realm. This provides a regional and proxy-independent signal for both monsoonal systems. The respective signal was subsequently analysed using a linear regression model. We find that the phase relationship between the EASM and EAWM is not time-constant and depends significantly on orbital configuration changes. In addition, changes in the Atlantic Meridional Overturning Circulation, Arctic sea-ice coverage, the El Niño-Southern Oscillation and sunspot numbers contributed to millennial-scale changes in the EASM and EAWM during the Holocene. We also argue that the bulk signal of monsoonal activity captured by the stacked non-speleothem proxy records supports the previously argued bias of speleothem climatic archives towards moisture source changes and/or seasonality.
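Stacking heterogeneous proxy records typically means standardizing each record and averaging them on a common time axis, so that no single archive dominates the regional signal. The sketch below shows that core step under the simplifying assumption that the records have already been resampled to a shared time axis; the thesis' actual stacking procedure may differ in detail.

```python
from statistics import mean, stdev

def zscore(series):
    """Standardize one proxy record to zero mean and unit variance."""
    m, s = mean(series), stdev(series)
    return [(x - m) / s for x in series]

def stack_records(records):
    """Average z-scored proxy records (assumed resampled to a common
    time axis) into a single regional monsoon stack. Simplified sketch."""
    scored = [zscore(r) for r in records]
    return [mean(values) for values in zip(*scored)]
```

Because each record is z-scored first, a high-amplitude archive contributes no more weight to the stack than a subtle one, which is what makes the resulting signal proxy-independent.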
Iron-sulfur clusters are essential enzyme cofactors. The most common and stable clusters found in nature are [2Fe-2S] and [4Fe-4S]. They are involved in crucial biological processes like respiration, gene regulation, protein translation, replication and DNA repair in prokaryotes and eukaryotes. In Escherichia coli, Fe-S clusters are essential for molybdenum cofactor (Moco) biosynthesis, which is a ubiquitous and highly conserved pathway. The first step of Moco biosynthesis is catalyzed by the MoaA protein to produce cyclic pyranopterin monophosphate (cPMP) from 5'-GTP. MoaA is a [4Fe-4S] cluster-containing radical S-adenosyl-L-methionine (SAM) enzyme. The focus of this study was to investigate Fe-S cluster insertion into MoaA under nitrate and TMAO respiratory conditions using E. coli as a model organism. Nitrate and TMAO respiration usually occur under anaerobic conditions, where oxygen is depleted. Under these conditions, E. coli uses nitrate and TMAO as terminal electron acceptors. Previous studies revealed that Fe-S cluster insertion is performed by Fe-S cluster carrier proteins. In E. coli, phylogenomic and genetic studies have identified these proteins as A-type carrier proteins (ATCs). So far, three of them have been characterized in detail in E. coli, namely IscA, SufA, and ErpA. This study shows that ErpA and IscA are involved in Fe-S cluster insertion into MoaA under nitrate and TMAO respiratory conditions. ErpA and IscA can partially replace each other in their role to provide [4Fe-4S] clusters for MoaA. SufA is not able to replace the functions of IscA or ErpA under nitrate respiratory conditions.
Nitrate reductase is a molybdoenzyme that coordinates Moco and Fe-S clusters. Under nitrate respiratory conditions, the expression of nitrate reductase is significantly increased in E. coli. Nitrate reductase is encoded by the narGHJI genes, whose expression is regulated by the transcriptional regulator fumarate and nitrate reduction (FNR). The activation of FNR under conditions of nitrate respiration requires one [4Fe-4S] cluster. In this part of the study, we analyzed the insertion of Fe-S clusters into FNR for the expression of the narGHJI genes in E. coli. The results indicate that ErpA is essential for the FNR-dependent expression of the narGHJI genes, a role that can be partially replaced by IscA and SufA when they are sufficiently produced under the conditions tested. This observation suggests that ErpA indirectly regulates nitrate reductase expression by inserting Fe-S clusters into FNR.
Most molybdoenzymes are complex multi-subunit and multi-cofactor-containing enzymes that coordinate Fe-S clusters, which function as electron transfer chains for catalysis. In E. coli, periplasmic aldehyde oxidoreductase (PaoABC) is a heterotrimeric molybdoenzyme that consists of flavin, two [2Fe-2S] clusters, one [4Fe-4S] cluster and Moco. In the last part of this study, we investigated the insertion of Fe-S clusters into PaoABC. The results show that SufA and ErpA are involved in inserting the [4Fe-4S] and [2Fe-2S] clusters into PaoABC, respectively, under aerobic respiratory conditions.
Gravitational-wave (GW) astrophysics is a field in full blossom. Since the landmark detection of GWs from a binary black hole on September 14th 2015, fifty-two compact-object binaries have been reported by the LIGO-Virgo collaboration. Such events carry astrophysical and cosmological information: how black holes and neutron stars are formed, what neutron stars are composed of, and how the Universe expands; they also allow tests of general relativity in the highly dynamical strong-field regime. It is the goal of GW astrophysics to extract such information as accurately as possible. Yet, this is only possible if the tools and technology used to detect and analyze GWs are advanced enough. A key aspect of GW searches are waveform models, which encapsulate our best predictions for the gravitational radiation under a certain set of parameters, and which need to be cross-correlated with data to extract GW signals. Waveforms must be very accurate to avoid missing important physics in the data, which might be the key to answering the fundamental questions of GW astrophysics. The continuous improvements of the current LIGO-Virgo detectors, the development of next-generation ground-based detectors such as the Einstein Telescope or the Cosmic Explorer, as well as the development of the Laser Interferometer Space Antenna (LISA), demand accurate waveform models. While available models suffice to capture the low-spin, comparable-mass binaries routinely detected in LIGO-Virgo searches, models for sources seen by both current and next-generation ground-based and spaceborne detectors must be accurate enough to detect binaries with large spins and asymmetry in the masses. Moreover, the thousands of sources that we expect to detect with future detectors demand accurate waveforms to mitigate biases in the estimation of signals’ parameters due to the presence of a foreground of many sources that overlap in the frequency band.
This is recognized as one of the biggest challenges for the analysis of data from future detectors, since biases might hinder the extraction of important astrophysical and cosmological information. In the first part of this thesis, we discuss how to improve waveform models for binaries with high spins and asymmetry in the masses. In the second, we present the first generic metrics that have been proposed to predict biases in the presence of a foreground of many overlapping signals in GW data.
For the first task, we will focus on several classes of analytical techniques. Current models for LIGO and Virgo studies are based on the post-Newtonian (PN, weak-field, small velocities) approximation that is most natural for the bound orbits that are routinely detected in GW searches. However, two other approximations have risen in prominence: the post-Minkowskian (PM, weak-field only) approximation natural for unbound (scattering) orbits, and the small-mass-ratio (SMR) approximation typical of binaries in which the mass of one body is much bigger than that of the other. These are most appropriate to binaries with high asymmetry in the masses that challenge current waveform models. Moreover, they allow one to “cover” regions of the parameter space of coalescing binaries, thereby improving the interpolation (and faithfulness) of waveform models. The analytical approximations to the relativistic two-body problem can be synergistically included within the effective-one-body (EOB) formalism, in which the two-body information from each approximation can be recast into an effective problem of a mass orbiting a deformed Schwarzschild (or Kerr) black hole. The hope is that the resultant models can cover both the low-spin comparable-mass binaries that are routinely detected, and the ones that challenge current models. The first part of this thesis is dedicated to a study of how best to incorporate information from the PN, PM, SMR and EOB approaches in a synergistic way. We also discuss how accurate the resulting waveforms are, as compared against numerical-relativity (NR) simulations. We begin by comparing PM models, whether alone or recast in the EOB framework, against PN models and NR simulations. We will show that PM information has the potential to improve currently-employed models for LIGO and Virgo, especially if recast within the EOB formalism.
This is very important, as the PM approximation comes with a host of new computational techniques from particle physics to exploit. Then, we show how a combination of PM and SMR approximations can be employed to access previously-unknown PN orders, deriving the third subleading PN dynamics for spin-orbit and (aligned) spin1-spin2 couplings. Such new results can then be included in the EOB models currently used in GW searches and parameter estimation studies, thereby improving them when the binaries have high spins. Finally, we build an EOB model for quasi-circular nonspinning binaries based on the SMR approximation (rather than the PN one as usually done). We show how this is done in detail without incurring the divergences that had affected previous attempts, and compare the resultant model against NR simulations. We find that the SMR approximation is an excellent approximation for all (quasi-circular nonspinning) binaries, including both the equal-mass binaries that are routinely detected in GW searches and the ones with highly asymmetric masses. In particular, the SMR-based models compare much better than the PN models, suggesting that SMR-informed EOB models might be the key to modeling binaries in the future. For the second task of this thesis, we work within the linear-signal approximation and describe generic metrics to predict inference biases on the parameters of a GW source of interest in the presence of confusion noise from unfitted foregrounds and from residuals of other signals that have been incorrectly fitted out. We illustrate the formalism with simple (yet realistic) LISA sources, and demonstrate its validity against Monte-Carlo simulations. The metrics we describe pave the way for more realistic studies to quantify the biases with future ground-based and spaceborne detectors.
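The EOB mapping of the two-body problem onto an effective one rests on the standard energy map; in conventional notation (not taken verbatim from the thesis), the real Hamiltonian is obtained from the effective one via

```latex
H_{\rm EOB} = M\,\sqrt{1 + 2\nu\left(\frac{H_{\rm eff}}{\mu} - 1\right)},
\qquad
M = m_1 + m_2, \quad \mu = \frac{m_1 m_2}{M}, \quad \nu = \frac{\mu}{M},
```

where $H_{\rm eff}$ describes a particle of mass $\mu$ moving in a deformed Schwarzschild (or Kerr) background of mass $M$; the deformation, controlled by the symmetric mass ratio $\nu$, is precisely where PN, PM and SMR information can be resummed.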
Magmatic continental rifts often constitute the earliest stage of nascent plate boundaries. These extensional tectonic provinces are characterized by ubiquitous normal faulting and volcanic activity; the spatial pattern, the geometry, and the age of these normal faults can help to unravel the spatiotemporal relationships between extensional deformation, magmatism, and long-wavelength crustal deformation of continental rift provinces. This study examines active faulting in the Kenya Rift of the Cenozoic East African Rift System (EARS), with a focus on the mid-Pleistocene to the present day.
To examine the early stages of continental break-up in the EARS, this thesis presents a time-averaged minimum extension rate for the inner graben of the Northern Kenya Rift (NKR) for the last 0.5 m.y. Using the TanDEM-X digital elevation model, fault-scarp geometries and associated throws are determined across the volcano-tectonic axis of the inner graben of the NKR. By integrating existing geochronology of faulted units with new ⁴⁰Ar/³⁹Ar radioisotopic dates, time-averaged extension rates are calculated. This study reveals that in the inner graben of the NKR, the long-term extension rate based on mid-Pleistocene to recent brittle deformation has minimum values of 1.0 to 1.6 mm yr⁻¹, locally with values up to 2.0 mm yr⁻¹. In light of virtually inactive border faults of the NKR, we show that extension is focused in the region of the active volcano-tectonic axis in the inner graben, thus highlighting the maturing of continental rifting in the NKR.
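The conversion from fault-scarp throws to a time-averaged extension rate can be sketched simply: each vertical throw is converted to horizontal heave via an assumed fault dip, the heaves are summed across the transect, and the total is divided by the age of the faulted surface. The 60° dip and the numbers below are illustrative assumptions, not values from the thesis.

```python
import math

def extension_rate(throws_m, dip_deg, age_yr):
    """Minimum horizontal extension rate from fault-scarp throws:
    heave = throw / tan(dip) per fault, summed over the transect and
    divided by the age of the faulted unit. Sketch with an assumed
    uniform fault dip; returns m/yr."""
    heave = sum(throw / math.tan(math.radians(dip_deg)) for throw in throws_m)
    return heave / age_yr

# Hypothetical transect: cumulative throw equivalent to 500 m of heave
# across a 0.5 Ma surface gives a 1 mm/yr minimum extension rate.
rate = extension_rate([866.0254], 60.0, 500_000)
```

Because unrecognized faults and off-fault strain are not counted, such rates are minima, which is why the values quoted above are reported as minimum extension rates.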
The phenomenon of focused extension is further investigated with a structural analysis of the youngest volcanic manifestations of the Kenya Rift, their relationship with extensional structures, and their overprint by Holocene faulting. In this context I analyzed the fault characteristics at the ~36 ka old Menengai Caldera and adjacent areas in the Central Kenya Rift using detailed field mapping and a structure-from-motion-based DEM generated from UAV data. In general, the Holocene intra-rift normal faults are dip-slip faults which strike NNE and thus reflect the present-day tectonic stress field; however, inside Menengai Caldera persistent magmatic activity and magmatic resurgence overprint these young structures significantly. The caldera is located at the center of an actively extending rift segment, and this and the other volcanic edifices of the Kenya Rift may constitute nucleation points of faulting and magmatic extensional processes that ultimately lead to a future stage of magma-assisted rifting.
When viewed at the scale of the entire Kenya Rift, the protracted normal faulting in this region compartmentalizes the larger rift depressions and influences the sedimentology and the hydrology of the intra-rift basins at a scale of less than 100 km. In the present day, most of the fault-bounded sub-basins of the Kenya Rift are hydrologically isolated due to this combination of faulting and magmatic activity, which has generated efficient hydrological barriers that maintain these basins as semi-independent geomorphic entities. This isolation, however, was overcome during wetter climatic conditions in the past, when the basins were transiently connected. I therefore also investigated the hydrological connectivity of the rift basins during the African Humid Period of the early Holocene, when the climate was wetter. With the help of DEM analysis, lake-highstand indicators, radiocarbon dating, and a review of the fossil record, two lake-river cascades could be identified: one directed southward, and one directed northward. Both cascades connected presently isolated rift basins during the early Holocene via spillovers of lakes and incised river gorges. This hydrological connection fostered the dispersal of aquatic faunas along the rift; in addition, the water divide between the two river systems represented the only terrestrial dispersal corridor across the Kenya Rift. The reconstruction explains isolated distributions of Nilotic fish species in Kenya Rift lakes and of Guineo-Congolian mammal species in forests east of the Kenya Rift. On longer timescales, repeated episodes of connectivity and isolation must have occurred. To address this problem, I participated in research to analyze a sediment drill core from the Koora basin of the Southern Kenya Rift, which provides a paleo-environmental record of the last 1 Ma.
Based on this record it can be concluded that at ~400 ka relatively stable environmental conditions were disrupted by tectonic, hydrological, and ecological changes, resulting in increasingly large and frequent fluctuations in water availability, grassland communities, and woody plant cover. The major environmental shifts reflected in the drill core data coincide with phases where volcano-tectonic activity affected the basin. This thesis therefore shows how protracted extensional tectonic processes and the resulting geomorphologic conditions can affect the hydrology, the paleo-environment and the biodiversity of extensional zones in Kenya and elsewhere.
Background: High-intensity muscle actions have the potential to temporarily improve performance, a phenomenon that has been denoted postactivation performance enhancement.
Objectives: This study determined the acute effects of different stretch-shortening (fast vs. slow) and strength (dynamic vs. isometric) exercises executed during one training session on subsequent balance performance in youth weightlifters.
Materials and Methods: Sixteen male and female young weightlifters, aged 11.3 ± 0.6 years, performed four strength exercise conditions in randomized order, including dynamic strength (DYN; 3 sets of 3 repetitions at 10 RM) and isometric strength exercises (ISOM; 3 sets of maintaining 3 s of 10 RM back-squat), as well as fast (FSSC; 3 sets of 3 repetitions of 20-cm drop jumps) and slow (SSSC; 3 sets of 3 hurdle jumps over a 20-cm obstacle) stretch-shortening cycle protocols. Balance performance was tested before and after each of the four exercise conditions in bipedal stance on an unstable surface (i.e., BOSU ball with the flat side facing up) using two dependent variables, i.e., center of pressure surface area (CoP SA) and velocity (CoP V).
Results: There was a significant effect of time on CoP SA and CoP V [F(1,60) = 54.37, d = 1.88, p < 0.0001; F(1,60) = 9.07, d = 0.77, p = 0.003]. In addition, a statistically significant effect of condition on CoP SA and CoP V [F(3,60) = 11.81, d = 1.53, p < 0.0001; F(3,60) = 7.36, d = 1.21, p = 0.0003] was observed. Statistically significant condition-by-time interactions were found for the balance parameters CoP SA (p < 0.003, d = 0.54) and CoP V (p < 0.002, d = 0.70). Contrast analysis of the specified hypotheses demonstrated that FSSC yielded significantly greater improvements than all other conditions in CoP SA and CoP V [p < 0.0001 (d = 1.55); p = 0.0004 (d = 1.19), respectively]. In addition, FSSC yielded significantly greater improvements compared with the two strength exercise conditions for both balance parameters [p < 0.0001 (d = 2.03); p < 0.0001 (d = 1.45)].
Conclusion: Fast stretch-shortening cycle exercises appear to be more effective at improving short-term balance performance in young weightlifters. Given the importance of balance for overall competitive achievement in weightlifting, it is recommended that young weightlifters implement dynamic plyometric exercises in the fast stretch-shortening cycle during the warm-up to improve their balance performance.
Populations adapt to novel environmental conditions by genetic changes or phenotypic plasticity. Plastic responses are generally faster and can buffer fitness losses under variable conditions. Plasticity is typically modeled as random noise and linear reaction norms that assume simple one-to-one genotype–phenotype maps and no limits to the phenotypic response. Most studies on plasticity have focused on its effect on population viability. However, it is not clear whether the advantage of plasticity depends solely on environmental fluctuations or also on the genetic and demographic properties (life histories) of populations. Here we present an individual-based model and study the relative importance of adaptive and nonadaptive plasticity for populations of sexual species with different life histories experiencing directional stochastic climate change. Environmental fluctuations were simulated using differentially autocorrelated climatic stochasticity (noise color) and scenarios of directional climate change. Nonadaptive plasticity was simulated as a random environmental effect on trait development, while adaptive plasticity was simulated as a linear, saturating, or sinusoidal reaction norm. The last two imposed limits to the plastic response and emphasized flexible interactions of the genotype with the environment. Interestingly, this assumption led to (a) smaller phenotypic than genotypic variance in the population (many-to-one genotype–phenotype map) and the coexistence of polymorphisms, and (b) the maintenance of higher genetic variation (compared to linear reaction norms and genetic determinism) even when the population was exposed to a constant environment for several generations. Limits to plasticity led to genetic accommodation when costs were negligible, and to the appearance of cryptic variation when limits were exceeded. We found that adaptive plasticity promoted population persistence under red environmental noise and was particularly important for life histories with low fecundity. Populations producing more offspring could cope with environmental fluctuations solely by genetic changes or random plasticity, unless environmental change was too fast.
As competition over peer status becomes intense during adolescence, some adolescents develop insecure feelings regarding their social standing among their peers (i.e., social status insecurity). These adolescents sometimes use aggression to defend or promote their status. The aim of this study was to examine the relationships among social status insecurity, callous-unemotional (CU) traits, and popularity-motivated aggression and prosocial behaviors among adolescents, while controlling for gender. Another purpose was to examine the potential moderating role of CU traits in these relationships. Participants were 1,047 adolescents (49.2% girls; Mage = 12.44 years; age range 11 to 14 years) in the 7th or 8th grade from a large Midwestern city. They completed questionnaires on social status insecurity, CU traits, and popularity-motivated relational aggression, physical aggression, cyberaggression, and prosocial behaviors. A structural regression model was conducted, with gender as a covariate. The model had adequate fit. Social status insecurity was associated positively with callousness, unemotional traits, and popularity-motivated aggression, and related negatively to popularity-motivated prosocial behaviors. High social status insecurity was related to greater popularity-motivated aggression when adolescents had high callousness traits. The findings have implications for understanding the individual characteristics associated with social status insecurity.
In recent decades, there has been notable progress in solving the well-known Boolean satisfiability (Sat) problem, as witnessed by increasingly powerful Sat solvers. One reason these solvers are so fast is that they exploit structural properties of instances internally. This thesis deals with the well-studied structural property treewidth, which measures how close an instance is to being a tree. In fact, many problems are solvable in time polynomial in the instance size when parameterized by treewidth.
In this work, we study advanced treewidth-based methods and tools for problems in knowledge representation and reasoning (KR). Thereby, we provide means to establish precise runtime results (upper bounds) for canonical problems relevant to KR. Then, we present a new type of problem reduction, called decomposition-guided (DG), which
allows us to precisely monitor the treewidth when reducing from one problem to another problem. This new reduction type will be the basis for a long-open lower bound result for quantified Boolean formulas and allows us to design a new methodology for establishing runtime lower bounds for problems parameterized by treewidth.
Finally, despite these lower bounds, we provide an efficient implementation of algorithms that adhere to treewidth. Our approach finds suitable abstractions of instances, which are subsequently refined in a recursive fashion, and it uses Sat solvers for solving subproblems. It turns out that our resulting solver is quite competitive for two canonical counting problems related to Sat.
Precipitation forecasting has an important place in everyday life – over the course of a day we may have a dozen casual conversations about the likelihood that it will rain this evening or over the weekend. Should you take an umbrella for a walk? Or should you invite your friends for a barbecue? It will certainly depend on what your weather application shows.
While for years people were guided by the precipitation forecasts issued for a particular region or city several times a day, the widespread availability of weather radars allowed us to obtain forecasts at much higher spatiotemporal resolution of minutes in time and hundreds of meters in space. Hence, radar-based precipitation nowcasting, that is, very-short-range forecasting (typically up to 1–3 h), has become an essential technique, also in various professional application contexts, e.g., early warning, sewage control, or agriculture.
There are two major components comprising a system for precipitation nowcasting: radar-based precipitation estimates, and models to extrapolate that precipitation to the imminent future. While acknowledging the fundamental importance of radar-based precipitation retrieval for precipitation nowcasts, this thesis focuses only on model development: the establishment of open and competitive benchmark models, the investigation of the potential of deep learning, and the development of procedures for diagnosing and isolating nowcast errors that can guide model development.
The present landscape of computational models for precipitation nowcasting still struggles with the availability of open software implementations that could serve as benchmarks for measuring progress. Focusing on this gap, we have developed and extensively benchmarked a stack of models based on different optical flow algorithms for the tracking step and a set of parsimonious extrapolation procedures based on image warping and advection. We demonstrate that these models provide skillful predictions comparable with or even superior to state-of-the-art operational software. We distribute the corresponding set of models as a software library, rainymotion, which is written in the Python programming language and openly available at GitHub (https://github.com/hydrogo/rainymotion). That way, the library acts as a tool for providing fast, open, and transparent solutions that could serve as a benchmark for further model development and hypothesis testing.
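The tracking-plus-extrapolation idea behind such benchmark models can be illustrated with a deliberately minimal sketch. This is not the rainymotion implementation: real models estimate a dense motion field via optical flow, whereas here a single, hypothetical displacement per time step (`dx`, `dy`) stands in for the tracking step, and the extrapolation is a backward warp of the grid along that motion.

```python
# Toy advection nowcast: extrapolate a 2D rain field along a uniform
# motion vector. The motion (dx, dy) is an assumed, known displacement
# per time step; real systems derive it with optical flow.

def advect(field, dx, dy, fill=0.0):
    """Backward-warp a 2D grid by one step of uniform motion (dx, dy)."""
    rows, cols = len(field), len(field[0])
    out = [[fill] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            src_r, src_c = r - dy, c - dx  # where this pixel came from
            if 0 <= src_r < rows and 0 <= src_c < cols:
                out[r][c] = field[src_r][src_c]
    return out

def nowcast(field, dx, dy, steps):
    """Extrapolate the last observed field `steps` time steps ahead."""
    for _ in range(steps):
        field = advect(field, dx, dy)
    return field

# A single rain cell moving one grid cell east per step:
obs = [[0, 0, 0, 0],
       [0, 5, 0, 0],
       [0, 0, 0, 0]]
fc = nowcast(obs, dx=1, dy=0, steps=2)  # cell ends up two columns east
```

The Lagrangian persistence assumption is visible here: intensities are only moved, never grown or decayed, which is exactly why such parsimonious models make strong benchmarks.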
One of the promising directions for model development is to challenge the potential of deep learning – a subfield of machine learning that refers to artificial neural networks with deep architectures, which may consist of many computational layers. Deep learning showed promising results in many fields of computer science, such as image and speech recognition, or natural language processing, where it started to dramatically outperform reference methods.
The high benefit of using "big data" for training is among the main reasons for this success. Hence, the emerging interest in deep learning in the atmospheric sciences has also been driven by the increasing availability of data – both observational and model-based. The large archives of weather radar data provide a solid basis for investigating the potential of deep learning in precipitation nowcasting: one year of national 5-min composites for Germany comprises around 85 billion data points.
To this aim, we present RainNet, a deep convolutional neural network for radar-based precipitation nowcasting. RainNet was trained to predict continuous precipitation intensities at a lead time of 5 min, using several years of quality-controlled weather radar composites provided by the German Weather Service (DWD). That data set covers Germany with a spatial domain of 900 km x 900 km and has a resolution of 1 km in space and 5 min in time. Independent verification experiments were carried out on 11 summer precipitation events from 2016 to 2017. In these experiments, RainNet was applied recursively in order to achieve lead times of up to 1 h. In the verification experiments, trivial Eulerian persistence and a conventional model based on optical flow served as benchmarks. The latter is available in the previously developed rainymotion library.
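The recursive application described above can be sketched generically: a single-step predictor is fed its own output to reach longer lead times. The `model` below is a hypothetical stand-in for RainNet's 5-min forecast, not the network itself.

```python
# Recursive nowcasting sketch: any one-step model (input field -> field
# 5 min ahead) can be chained to produce a sequence of longer lead times.

def recursive_nowcast(model, field, n_steps):
    """Apply `model` recursively; returns forecasts at 5, 10, ... minutes."""
    forecasts = []
    for _ in range(n_steps):
        field = model(field)      # predict one 5-min step ahead
        forecasts.append(field)   # output becomes the next input
    return forecasts

# Hypothetical stand-in model: a uniform 10% decay of intensities.
decay = lambda f: [[v * 0.9 for v in row] for row in f]
fcsts = recursive_nowcast(decay, [[10.0]], n_steps=12)  # up to 60 min
```

The sketch also makes the caveat discussed later tangible: any systematic property of the one-step model (here, decay; for RainNet, smoothing) compounds with each recursion.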
RainNet significantly outperformed the benchmark models at all lead times up to 60 min for the routine verification metrics mean absolute error (MAE) and critical success index (CSI) at intensity thresholds of 0.125, 1, and 5 mm/h. However, rainymotion turned out to be superior in predicting the exceedance of higher intensity thresholds (here 10 and 15 mm/h). The limited ability of RainNet to predict high rainfall intensities is an undesirable property which we attribute to a high level of spatial smoothing introduced by the model. At a lead time of 5 min, an analysis of power spectral density confirmed a significant loss of spectral power at length scales of 16 km and below.
Obviously, RainNet had learned an optimal level of smoothing to produce a nowcast at 5 min lead time. In that sense, the loss of spectral power at small scales is informative, too, as it reflects the limits of predictability as a function of spatial scale. Beyond the lead time of 5 min, however, the increasing level of smoothing is a mere artifact – an analogue to numerical diffusion – that is not a property of RainNet itself but of its recursive application. In the context of early warning, the smoothing is particularly unfavorable since pronounced features of intense precipitation tend to get lost over longer lead times. Hence, we propose several options to address this issue in prospective research on model development for precipitation nowcasting, including an adjustment of the loss function for model training, model training for longer lead times, and the prediction of threshold exceedance.
The model development together with the verification experiments for both conventional and deep learning model predictions also revealed the need to better understand the sources of forecast errors. Understanding the dominant sources of error in specific situations should help in guiding further model improvement. The total error of a precipitation nowcast consists of an error in the predicted location of a precipitation feature and an error in the change of precipitation intensity over lead time. So far, verification measures have not allowed the location error to be isolated, making it difficult to specifically improve nowcast models with regard to location prediction.
To fill this gap, we introduced a framework to directly quantify the location error. To that end, we detect and track scale-invariant precipitation features (corners) in radar images. We then consider these observed tracks as the true reference in order to evaluate the performance (or, inversely, the error) of any model that aims to predict the future location of a precipitation feature. Hence, the location error of a forecast at any lead time ahead of the forecast time corresponds to the Euclidean distance between the observed and the predicted feature location at the corresponding lead time.
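The location error defined above reduces to a simple computation per feature track; the sketch below shows it for a single track with hypothetical coordinates (km on the radar grid), where each list entry corresponds to one lead time.

```python
# Location error of a nowcast: Euclidean distance between the observed
# (reference) and predicted positions of one tracked feature per lead time.
from math import hypot

def location_error(observed, predicted):
    """Per-lead-time Euclidean distance between (x, y) positions."""
    return [hypot(ox - px, oy - py)
            for (ox, oy), (px, py) in zip(observed, predicted)]

observed  = [(0.0, 0.0), (4.0, 3.0), (8.0, 6.0)]    # observed track
predicted = [(0.0, 0.0), (5.0, 3.0), (11.0, 10.0)]  # extrapolated track
errs = location_error(observed, predicted)          # [0.0, 1.0, 5.0]
```

Aggregating these distances over many tracks and lead times yields exactly the error statistics reported in the benchmarking study below.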
Based on this framework, we carried out a benchmarking case study using one year's worth of weather radar composites from the DWD. We evaluated the performance of four extrapolation models: two are based on the linear extrapolation of corner motion, while the remaining two are based on the Dense Inverse Search (DIS) method, in which motion vectors obtained from DIS are used to predict feature locations by linear and Semi-Lagrangian extrapolation.
For all competing models, the mean location error exceeds a distance of 5 km after 60 min, and 10 km after 110 min. At least 25% of all forecasts exceed an error of 5 km after 50 min, and of 10 km after 90 min. Even for the best models in our experiment, at least 5 percent of the forecasts will have a location error of more than 10 km after 45 min. When we relate such errors to application scenarios that are typically suggested for precipitation nowcasting, e.g., early warning, it becomes obvious that location errors matter: the order of magnitude of these errors is about the same as the typical extent of a convective cell. Hence, the uncertainty of precipitation nowcasts at such length scales – just as a result of locational errors – can be substantial already at lead times of less than 1 h. Being able to quantify the location error should hence guide any model development that is targeted towards its minimization. To that aim, we also consider the high potential of using deep learning architectures specific to the assimilation of sequential (track) data.
Last but not least, the thesis demonstrates the benefits of a general movement towards open science for model development in the field of precipitation nowcasting. All the presented models and frameworks are distributed as open repositories, thus enhancing transparency and reproducibility of the methodological approach. Furthermore, they are readily available to be used for further research studies, as well as for practical applications.
Children’s physical fitness development and the related moderating effects of age and sex are well documented, especially boys’ and girls’ divergence during puberty. The situation might be different during prepuberty. As girls mature approximately two years earlier than boys, we tested a possible convergence of performance with five tests representing four components of physical fitness in a large sample of 108,295 eight-year-old third-graders. Within this single prepubertal year of life and irrespective of the test, performance increased linearly with chronological age, and boys outperformed girls to a larger extent in tests requiring muscle mass for successful performance. Tests differed in the magnitude of age effects (gains), but there was no evidence for an interaction between age and sex. Moreover, the “physical fitness” of schools correlated at r = 0.48 with their age effect, which might imply that “fit schools” promote larger gains; expected secular trends from 2011 to 2019 were replicated.
Starting from a letter from Alexander von Humboldt to the writer Therese von Bacheracht (1804–1852), this article traces the history of the 1847 publication, after the death of both correspondents, of the letters of his brother Wilhelm to Charlotte Diede. Particular use is made of the hitherto unpublished "Tagesblätter" of Karl August Varnhagen von Ense, which reveal that Alexander von Humboldt abandoned his initially dismissive stance and entrusted Varnhagen with reviewing and correcting the manuscript, and that Therese von Bacheracht, through persistence and charm, achieved her goal of being awarded the not inconsiderable proceeds from the publication.
During his stay in Mexico in 1803, Humboldt made the acquaintance of Dupaix, a Spanish soldier of Luxembourgish origin and a lover of pre-Columbian antiquities. The discovery of several of Dupaix's manuscripts, together with the study of various archives, the baron's personal papers, and documents from institutions on both sides of the Atlantic, makes it possible to retrace the extraordinary journey of a famous Mexican object, the Chalchiuhtlicue, which the Prussian explorer described in 1810 as an "Aztec priestess" in his book Vues des cordillères …. This contribution attempts to reconstruct the successive owners and the circumstances that accompanied the migration of this emblematic pre-Hispanic statuette from Mexico City to London over the course of almost half a century.
Alles kann besser werden!
(2021)
Background
Rupture of the anterior cruciate ligament (ACL) can lead to impaired knee function. Reconstruction decreases the mechanical instability but might not have an impact on sensorimotor alterations.
Objective
Evaluation of the sensorimotor function measured with the active joint position sense (JPS) test in anterior cruciate ligament (ACL) reconstructed patients compared to the contralateral side and a healthy control group.
Methods
The databases MEDLINE, CINAHL, EMBASE, PEDro, Cochrane Library and SPORTDiscus were systematically searched from origin until April 2020. Studies published in English, German, French, Spanish or Italian language were included. Evaluation of the sensorimotor performance was restricted to the active joint position sense test in ACL reconstructed participants or healthy controls. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed. Study quality was evaluated using the Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies. Data were descriptively synthesized.
Results
Ten studies were included after application of the selection criteria. Higher angular deviation, reaching a significant difference (p < 0.001) in one study, was shown in the affected limb up to three months after surgery. Six months post-operatively, significantly less error (p < 0.01) was found in the reconstructed leg compared to the contralateral side and healthy controls. One or more years after ACL reconstruction, significant differences were inconsistent across the studies.
Conclusions
Altered sensorimotor function was present after ACL reconstruction. Due to inconsistencies and small magnitudes, the clinical relevance might be questionable. JPS testing can be performed in acutely injured persons, and prospective studies could enhance knowledge of sensorimotor function throughout the rehabilitative process.
As mid-19th-century American Jews introduced radical changes to their religious observance and began to define Judaism in new ways, to what extent did they engage with European Jewish ideas? Historians often approach religious change among Jews from German lands during this period as if Jewish immigrants had come to America with one set of ideas that then evolved solely in conversation with their American contexts. Historians have similarly cast the kinds of Judaism Americans created as both unique to America and uniquely American. These characterizations are accurate to an extent. But to what extent did Jewish innovations in the United States take place in conversation with European Jewish developments? Looking to the 19th-century American Jewish press, this paper seeks to understand how American Jews engaged European Judaism in formulating their own ideas, understanding themselves, and understanding their place in world Judaism.
Media artists have been struggling for financial survival ever since media art came into being. The non-material value of the artwork, a provocative attitude towards the traditional art world, and the movement's originally anti-capitalist mindset make it particularly difficult to provide a constructive solution. However, a cultural entrepreneurial approach can be used to build a framework for finding a balance between culture and business while ensuring that the cultural mission remains the top priority.
Fluids in the Earth's crust can move by creating and flowing through fractures, in a process called ‘hydraulic fracturing’. The tip-line of such fluid-filled fractures grows at locations where stress is larger than the strength of the rock. Where the tip stress vanishes, the fracture closes and the fluid-front retreats. If stress gradients exist on the fracture's walls, induced by fluid/rock density contrasts or topographic stresses, this results in an asymmetric shape and growth of the fracture, allowing for the contained batch of fluid to propagate through the crust.
The state-of-the-art analytical and numerical methods to simulate fluid-filled fracture propagation are two-dimensional (2D). In this work I extend these to three dimensions (3D). In my analytical method, I approximate the propagating 3D fracture as a penny-shaped crack that is influenced by both an internal pressure and stress gradients. In addition, I develop a numerical method to model propagation where curved fractures can be simulated as a mesh of triangular dislocations, with the displacement of faces computed using the displacement discontinuity method. I devise a rapid technique to approximate stress intensity and use this to calculate the advance of the tip-line. My 3D models can be applied to arbitrary stresses, topographic and crack shapes, whilst retaining short computation times.
I cross-validate my analytical and numerical methods and apply them to various natural and man-made settings, to gain additional insights into the movements of hydraulic fractures such as magmatic dikes and fluid injections in rock. In particular, I calculate the ‘volumetric tipping point’, which once exceeded allows a fluid-filled fracture to propagate in a ‘self-sustaining’ manner. I discuss implications this has for hydro-fracturing in industrial operations. I also present two studies combining physical models that define fluid-filled fracture trajectories and Bayesian statistical techniques. In these studies I show that the stress history of the volcanic edifice defines the location of eruptive vents at volcanoes. Retrieval of the ratio between topographic to remote stresses allows for forecasting of probable future vent locations. Finally, I address the mechanics of 3D propagating dykes and sills in volcanic regions. I focus on Sierra Negra volcano in the Galápagos islands, where in 2018, a large sill propagated with an extremely curved trajectory. Using a 3D analysis, I find that shallow horizontal intrusions are highly sensitive to topographic and buoyancy stress gradients, as well as the effects of the free surface.
Interoception is an often neglected but crucial aspect of the human minimal self. In this perspective, we extend the embodiment account of interoceptive inference to explain the development of the minimal self in humans. To do so, we first provide a comparative overview of the central accounts addressing the link between interoception and the minimal self. Grounding our arguments on the embodiment framework, we propose a bidirectional relationship between motor and interoceptive states, which jointly contribute to the development of the minimal self. We present empirical findings on interoception in development and discuss the role of interoception in the development of the minimal self. Moreover, we make theoretical predictions that can be tested in future experiments. Our goal is to provide a comprehensive view on the mechanisms underlying the minimal self by explaining the role of interoception in the development of the minimal self.
Background: The prevalence of diabetes worldwide is predicted to increase from 2.8% in 2000 to 4.4% in 2030. Diabetic neuropathy (DN) is associated with damage to nerve glial cells, their axons, and endothelial cells leading to impaired function and mobility.
Objective: We aimed to examine the effects of an endurance-dominated exercise program on maximum oxygen consumption (VO2max), ground reaction forces, and muscle activities during walking in patients with moderate DN.
Methods: Sixty male and female individuals aged 45–65 years with DN were randomly assigned to an intervention (IG, n = 30) or a waiting control (CON, n = 30) group. The research protocol of this study was registered with the Local Clinical Trial Organization (IRCT20200201046326N1). IG conducted an endurance-dominated exercise program including exercises on a bike ergometer and gait therapy. The progressive intervention program lasted 12 weeks with three sessions per week, each 40–55 min. CON received the same treatment as IG after the post-tests. Pre- and post-training, VO2max was tested during a graded exercise test using spiroergometry. In addition, ground reaction forces and lower limbs muscle activities were recorded while walking at a constant speed of ∼1 m/s.
Results: No statistically significant between-group baseline differences were observed for any of the analyzed variables. Significant group-by-time interactions were found for VO2max (p < 0.001; d = 1.22). The post-hoc test revealed a significant increase in IG (p < 0.001; d = 1.88) but not CON. Significant group-by-time interactions were observed for peak lateral and vertical ground reaction forces during heel contact and peak vertical ground reaction force during push-off (p = 0.001–0.037; d = 0.56–1.53). For IG, post-hoc analyses showed decreases in peak lateral (p < 0.001; d = 1.33) and vertical (p = 0.004; d = 0.55) ground reaction forces during heel contact and increases in peak vertical ground reaction force during push-off (p < 0.001; d = 0.92). In terms of muscle activity, significant group-by-time interactions were found for vastus lateralis and gluteus medius during the loading phase, for vastus medialis during the mid-stance phase, and for gastrocnemius medialis during the push-off phase (p = 0.001–0.044; d = 0.54–0.81). Post-hoc tests indicated significant intervention-related increases in vastus lateralis (p = 0.001; d = 1.08) and gluteus medius (p = 0.008; d = 0.67) during the loading phase and in vastus medialis activity during mid-stance (p = 0.001; d = 0.86). In addition, post-hoc tests showed decreases in gastrocnemius medialis during the push-off phase in IG only (p < 0.001; d = 1.28).
Conclusions: This study demonstrated that an endurance-dominated exercise program has the potential to improve VO2max and diabetes-related gait abnormalities in patients with DN. The observed decreases in peak vertical ground reaction force during the heel contact phase of walking could be due to increased vastus lateralis and gluteus medius activities during the loading phase. Accordingly, we recommend implementing endurance-dominated exercise programs in patients with type 2 diabetes, as they are feasible and safe and effectively improve aerobic capacity and gait characteristics.
The noble way to substantiate decisions that affect many people is to ask these people for their opinions. For governments that run whole countries, this means asking all citizens for their views to consider their situations and needs.
Organizations such as Africa's Voices Foundation, who want to facilitate communication between decision-makers and citizens of a country, have difficulty mediating between these groups. To enable understanding, statements need to be summarized and visualized. Accomplishing these goals in a way that does justice to the citizens' voices and situations proves challenging. Standard charts do not help this cause as they fail to create empathy for the people behind their graphical abstractions. Furthermore, these charts do not create trust in the data they are representing as there is no way to see or navigate back to the underlying code and the original data. To fulfill these functions, visualizations would benefit greatly from interactions for exploring the displayed data, which standard charts often provide only to a limited extent.
To help improve the understanding of people's voices, we developed and categorized 80 ideas for new visualizations, new interactions, and better connections between different charts, which we present in this report. From those ideas, we implemented 10 prototypes and two systems that integrate different visualizations. We show that this integration allows consistent appearance and behavior of visualizations. The visualizations all share the same main concept: representing each individual with a single dot. To realize this idea, we discuss technologies that efficiently allow the rendering of a large number of these dots. With these visualizations, direct interactions with representations of individuals are achievable by clicking on them or by dragging a selection around them. This direct interaction is only possible with a bidirectional connection from the visualization to the data it displays. We discuss different strategies for bidirectional mappings and the trade-offs involved. Having unified behavior across visualizations enhances exploration. For our prototypes, that includes grouping, filtering, highlighting, and coloring of dots. Our prototyping work was enabled by the development environment Lively4. We explain which parts of Lively4 facilitated our prototyping process. Finally, we evaluate our approach to domain problems and our developed visualization concepts.
Our work provides inspiration and a starting point for visualization development in this domain. Our visualizations can improve communication between citizens and their government and motivate empathetic decisions. Our approach, combining low-level entities to create visualizations, provides value to an explorative and empathetic workflow. We show that the design space for visualizing this kind of data has a lot of potential and that it is possible to combine qualitative and quantitative approaches to data analysis.
3D point clouds are a universal and discrete digital representation of three-dimensional objects and environments. For geospatial applications, 3D point clouds have become a fundamental type of raw data acquired and generated using various methods and techniques. In particular, 3D point clouds serve as raw data for creating digital twins of the built environment.
This thesis concentrates on the research and development of concepts, methods, and techniques for preprocessing, semantically enriching, analyzing, and visualizing 3D point clouds for applications around transport infrastructure. It introduces a collection of preprocessing techniques that aim to harmonize raw 3D point cloud data, such as point density reduction and scan profile detection. Metrics such as local density, verticality, and planarity are calculated for later use. One of the key contributions tackles the problem of analyzing and deriving semantic information in 3D point clouds. Three different approaches are investigated: a geometric analysis, a machine learning approach operating on synthetically generated 2D images, and a machine learning approach operating on 3D point clouds without intermediate representation.
In the first application case, 2D image classification is applied and evaluated for mobile mapping data focusing on road networks to derive road marking vector data. The second application case investigates how 3D point clouds can be merged with ground-penetrating radar data for a combined visualization and to automatically identify atypical areas in the data. For example, the approach detects pavement regions with developing potholes. The third application case explores the combination of a 3D environment based on 3D point clouds with panoramic imagery to improve visual representation and the detection of 3D objects such as traffic signs.
The presented methods were implemented and tested based on software frameworks for 3D point clouds and 3D visualization. In particular, modules for metric computation, classification procedures, and visualization techniques were integrated into a modular pipeline-based C++ research framework for geospatial data processing, extended by Python machine learning scripts. All visualization and analysis techniques scale to large real-world datasets such as road networks of entire cities or railroad networks.
The thesis shows that some use cases allow taking advantage of established image vision methods to analyze images rendered from mobile mapping data efficiently. The two presented semantic classification methods working directly on 3D point clouds are use case independent and show similar overall accuracy when compared to each other. While the geometry-based method requires less computation time, the machine learning-based method supports arbitrary semantic classes but requires training the network with ground truth data. Both methods can be used in combination to gradually build this ground truth with manual corrections via a respective annotation tool.
This thesis contributes results for IT system engineering of applications, systems, and services that require spatial digital twins of transport infrastructure such as road networks and railroad networks based on 3D point clouds as raw data. It demonstrates the feasibility of fully automated data flows that map captured 3D point clouds to semantically classified models. This provides a key component for seamlessly integrated spatial digital twins in IT solutions that require up-to-date, object-based, and semantically enriched information about the built environment.
We use ultrafast x-ray diffraction to investigate the effect of expansive phononic and contractive magnetic stress driving the picosecond strain response of a metallic perovskite SrRuO3 thin film upon femtosecond laser excitation. We exemplify how the anisotropic bulk equilibrium thermal expansion can be used to predict the response of the thin film to ultrafast deposition of energy. It is key to consider that the laterally homogeneous laser excitation changes the strain response compared to the near-equilibrium thermal expansion because the balanced in-plane stresses suppress the Poisson stress on the picosecond timescale. We find a very large negative Grüneisen constant describing the large contractive stress imposed by a small amount of energy in the spin system. The temperature and fluence dependence of the strain response for a double-pulse excitation scheme demonstrates the saturation of the magnetic stress in the high-fluence regime.
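In the two-subsystem Grüneisen picture used in such experiments, the total laser-induced stress is commonly written as a sum of subsystem contributions weighted by their Grüneisen constants; schematically (generic symbols, not necessarily the paper's notation):

```latex
\sigma_{zz}(t) = \Gamma_{\mathrm{ph}}\,\rho^{E}_{\mathrm{ph}}(t)
              + \Gamma_{\mathrm{mag}}\,\rho^{E}_{\mathrm{mag}}(t),
\qquad \Gamma_{\mathrm{ph}} > 0, \quad \Gamma_{\mathrm{mag}} < 0,
```

where \(\rho^{E}\) denotes the energy density deposited in the phonon and spin systems, respectively; a large negative \(\Gamma_{\mathrm{mag}}\) lets even a small spin-system energy density impose a sizable contractive stress that competes with the expansive phononic term.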
Anerkennung und Macht
(2021)
In the present study, I have pursued the goal of making a substantive, independent contribution to a debate directed against Honneth's critical social theory. In this debate, Honneth is criticized on the grounds that, contrary to his own systematic aims, his critical social theory does not succeed in critically questioning all phenomena of social domination in modern liberal-democratic societies. For social recognition, which Honneth treats as the key concept for this critical questioning and which links social domination to social disrespect (understood as a lack of social recognition), can, according to this criticism, in fact itself be a medium for establishing social subjection. This occurs in processes of identity formation in which social recognition grants individuals, as recognized subjects, certain identity possibilities and thereby simultaneously excludes other identity possibilities, so that it acts restrictively on this identity and, to that extent, dominatingly. This is a form of social domination brought about by social recognition. According to the objection, Honneth does not consider that social recognition can have such a negative effect on recognized individuals. This raises the questions of whether social recognition in processes of identity formation always goes hand in hand with social domination and how this type of social domination can be criticized. Honneth most recently answered these questions in a personal conversation with Allen and Cooke (two participants in the debate against him).
There, together with his two interlocutors, he takes the view that the operation of restricting identity possibilities is not in itself an operation that, as is otherwise claimed in the debate against his critical social theory, amounts to social domination. This view rests on the idea that, in that practical context, social recognition proves to generate domination only under the condition that it violates immanent principles which define substantively critical standards.
My contribution to this debate against Honneth consists, on the one hand, in showing that both this view and this idea are argumentatively deficient and, on the other hand, in carrying out the project of remedying this argumentative deficiency myself. Against the view, I argue that the three authors do not explain in their conversation in what respect social recognition does not act as domination when it restricts the identity possibilities of recognized individuals; with this restriction, these individuals are in fact being dominated, and the debate against Honneth, in support of this position, is built primarily on precisely this fact. Against the idea, I have posed and answered five problematic questions, which relate not only to this idea itself but also to further, closely related ideas that the three authors have touched upon.
Arbeit, Religion, Ruf
(2021)
In nineteenth-century Europe, work as a domestic servant was the most widespread form of paid employment for women. It often proved to be the only way to earn a living despite a lack of schooling and vocational qualifications. As a rule, the applicants were young girls who wanted to earn money before founding a household of their own. But older women who remained unmarried were also, in some cases, dependent on work as a domestic servant for their entire lives.
The situation was no different in the Jewish middle-class households of the Netherlands, particularly in the Jewish communities of Amsterdam and other large cities, which were flourishing at the time. There, too, maids cleaned, cooked, and embroidered. They took on the raising of children and interacted with colleagues and employers. Above all because of a lack of written sources, however, little has so far been known about this chapter of Jewish and female labor history.
Using job advertisements for and by Jewish maids, the present study sheds light on this occupational group in the years between 1894 and 1925. A corpus of 540 advertisements from the widely read Dutch weekly Nieuw Israelietisch Weekblad is examined by means of discourse analysis, yielding new insights into the lives and work of female domestic servants. The advertisements address the women's social standing, their duties, qualifications, and financial demands, as well as their religiosity. By comparing advertisements from three decades, the study is able to show how attitudes toward domestic service changed and how both employees and employers applied new standards to domestic work over the course of time.
The present study aims to identify the optimal body-size/shape and maturity characteristics associated with superior fitness test performances, having controlled for body-size, sex, and chronological-age differences. The sample consisted of 597 Tunisian children (396 boys and 201 girls) aged 8 to 15 years. Sprint speeds recorded at 10, 20, and 30 m, two vertical and two horizontal jump tests, a change-of-direction test, and a handgrip-strength test were assessed during physical-education classes. Allometric modelling was used to identify the benefit of being an early or late maturer. Findings showed that being tall and light is the ideal shape for success at most physical fitness tests, but the height-to-weight “shape” ratio seems to be test-dependent. Having controlled for body-size/shape, sex, and chronological age, the model identified maturity offset as an additional predictor. Boys who reach peak height velocity (PHV) at an earlier/younger age outperform those who reach it at a later/older age. However, performance in most of the girls’ physical-fitness tests peaked at the age at PHV and declined thereafter. Girls whose age at PHV was near the middle of the age range would appear to have an advantage compared to early or late maturers. These findings have important implications for talent scouts and coaches wishing to recruit children into their sports/athletic clubs.
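Allometric models of the kind mentioned above are typically fitted as log-linear regressions, estimating mass and height exponents alongside a maturity term; the sketch below uses simulated data and illustrative variable names, not the study's actual data or model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
mass = rng.uniform(25, 70, n)       # body mass in kg (simulated)
height = rng.uniform(1.2, 1.8, n)   # height in m (simulated)
offset = rng.uniform(-3, 3, n)      # maturity offset in years from PHV (simulated)

# Simulated allometric relation: perf = a * mass^k1 * height^k2 * exp(c*offset) * error
perf = (2.0 * mass**0.67 * height**0.5 * np.exp(0.05 * offset)
        * rng.lognormal(0.0, 0.05, n))

# Taking logs linearizes the model:
# ln(perf) = ln(a) + k1*ln(mass) + k2*ln(height) + c*offset
X = np.column_stack([np.ones(n), np.log(mass), np.log(height), offset])
coef, *_ = np.linalg.lstsq(X, np.log(perf), rcond=None)
ln_a, k1, k2, c = coef   # recovered exponents and maturity coefficient
```

The fitted exponents k1 and k2 recover the simulated scaling, and the sign of c indicates whether earlier or later maturers benefit once body size is controlled for.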
Background
Building on the Realistic Accuracy Model, this paper explores whether it is easier for teachers to assess the achievement of some students than others. Accordingly, we suggest that certain individual characteristics of students, such as extraversion, academic self-efficacy, and conscientiousness, may guide teachers' evaluations of student achievement, resulting in more appropriate judgements and a stronger alignment of assigned grades with students' actual achievement level (as measured using standardized tests).
Aims
We examine whether extraversion, academic self-efficacy, and conscientiousness moderate the relations between teacher-assigned grades and students' standardized test scores in mathematics.
Sample
This study uses a representative sample of N = 5,919 seventh-grade students in Germany (48.8% girls; mean age: M = 12.5, SD = 0.62) who participated in a national, large-scale assessment focusing on students' academic development.
Methods
We specified structural equation models to examine the inter-relations of teacher-assigned grades with students' standardized test scores in mathematics, Big Five personality traits, and academic self-efficacy, while controlling for students' socioeconomic status, gender, and age.
Results
The correlation between teacher-assigned grades and standardized test scores in mathematics was r = .40. Teacher-assigned grades were more closely related to standardized test scores when students reported higher levels of conscientiousness (β = .05, p = .002). Students' extraversion and academic self-efficacy did not moderate the relationship between teacher-assigned grades and standardized test scores.
Conclusions
Our findings indicate that students' conscientiousness is a personality trait that seems to be important when it comes to how closely mathematics teachers align their grades to standardized test scores.
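The moderation pattern reported above corresponds to an interaction term in a regression of grades on test scores; a simplified simulated sketch (ordinary least squares instead of the study's structural equation model, with illustrative variable names):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
test = rng.normal(0.0, 1.0, n)    # standardized test score (z-scored, simulated)
consc = rng.normal(0.0, 1.0, n)   # conscientiousness (z-scored, simulated)

# Grades align more closely with test scores at higher conscientiousness:
grade = 0.40 * test + 0.05 * test * consc + rng.normal(0.0, 0.9, n)

# OLS with interaction: grade ~ 1 + test + consc + test:consc
X = np.column_stack([np.ones(n), test, consc, test * consc])
beta, *_ = np.linalg.lstsq(X, grade, rcond=None)
# beta[3] estimates the moderation (interaction) effect
```

A positive interaction coefficient means the grade-test alignment strengthens as the moderator increases, which is the pattern the study reports for conscientiousness.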
Are we good friends?
(2021)
Empirical studies have already examined various facets of the friendship construct. Building on this, the present study examines how the number of friendships and their quality differ between students with and without SEN and whether a homophily effect can be identified. The sample consists of 455 fourth-graders from 28 inclusive classes in Austria. The results indicate that students with SEN have fewer friends than students without SEN. Furthermore, students without SEN preferred peers without SEN as friends. This homophily effect was shown for students with SEN, too. However, students with and without SEN rated the quality of their friendships similarly, and no interactions between the SEN status of oneself or of the friend were found for the quality of the friendship. The results show that, in the context of inclusion, the issue of friendship needs to be increasingly addressed to improve the situation of students with SEN.
This project describes the nominal, verbal, and ‘truncation’ systems of Awing and explains the syntactic and semantic functions of the multifunctional LE morpheme in copular and wh-focused constructions. Awing is a Bantu Grassfields language spoken in the North West region of Cameroon. The work begins with morphological processes, viz. deverbals, compounding, reduplication, and borrowing, and a thorough presentation of the pronominal system, and then takes on verbal categories, viz. tense, aspect, mood, verbal extensions, negation, adverbs, and the triggers of a homorganic N(asal)-prefix that attaches to the verb and other verbal categories. Awing grammar also exhibits a very unusual phenomenon whereby nouns and verbs take long and short forms. A chapter entitled ‘Truncation’ is dedicated to this phenomenon. It is observed that the truncation process does not apply to bare singular NPs, proper names, and nouns derived via morphological processes. On the other hand, with the exception of the 1st person non-emphatic possessive determiner and the class 7 noun prefix, nouns generally take the truncated form with modifiers (i.e., articles, demonstratives, and other possessives). It is concluded that nominal truncation reflects movement within the DP system (Abney 1987). Truncation of the verb occurs in three contexts: a mass/plurality conspiracy (or lattice structuring in terms of Link 1983) between the verb and its internal argument (i.e., direct object); a means to align (exhaustive) focus (in terms of Féry 2013); and a means to form polar questions.
The second part of the work focuses on the role of the LE morpheme in copular and wh-focused clauses. Firstly, the syntax of the Awing copular clause is presented, and it is shown that copular clauses in Awing have ‘subject-focus’ vs. ‘topic-focus’ partitions and that the LE morpheme indirectly relates such functions. Semantically, it is shown that LE does not express contrast or exhaustivity in copular clauses. Turning to wh-constructions, the work adheres to Hamblin’s (1973) idea that the meaning of a question is the set of its possible answers and, based on Rooth’s (1985) underspecified semantic notion of alternative focus, concludes that the LE morpheme is not a focus marker (FM) in Awing: LE does not generate or indicate the presence of alternatives (Krifka 2007); rather, the LE morpheme can associate with wh-elements as a focus-sensitive operator with semantic import that operates on the focus alternatives by presupposing an exhaustive answer, among other notions. With focalized categories, the project further substantiates, via a number of diagnostics, the claim in Fominyam & Šimík (2017) that exhaustivity is part of the semantics of the LE morpheme and not derived via contextual implicature. Hence, unlike in copular clauses, the LE morpheme with wh-focused categories is analysed as a morphological exponent of a functional head Exh corresponding to Horvath’s (2010) EI (Exhaustive Identification). The work ends with the syntax of verb focus and negation and modifies the idea in Fominyam & Šimík (2017) that the focalized verb associating with the exhaustive (LE) particle is a lower copy of the finite verb that has been moved to Agr. It is argued that the LE-focused verb ‘cluster’ is an instantiation of adjunction. The conclusion is that verb doubling with verb focus in Awing is neither a realization of two copies of one and the same verb (Fominyam and Šimík 2017) nor the result of a copy triggered by a focus marker (Aboh and Dyakonova 2009).
Rather, the focalized copy is said to be merged directly as the complement of LE forming a type of adjoining cluster.
Adaptive Force (AF) reflects the capability of the neuromuscular system to adapt adequately to external forces with the intention of maintaining a position or motion. One specific approach to assessing AF is to measure force and limb position during a pneumatically applied, increasing external force. Through this method, the highest (AFmax), the maximal isometric (AFisomax), and the maximal eccentric Adaptive Force (AFeccmax) can be determined. The main question of the study was whether AFisomax is a specific and independent parameter of muscle function compared to other maximal forces. In 13 healthy subjects (9 male and 4 female), the maximal voluntary isometric contraction (pre- and post-MVIC), the three AF parameters, and the MVIC with a prior concentric contraction (MVICpri-con) of the elbow extensors were measured 4 times on two days. Arithmetic mean (M) and maximal (Max) torques of all force types were analyzed. Regarding the reliability of the AF parameters between days, the mean changes were 0.31–1.98 Nm (0.61%–5.47%, p = 0.175–0.552), the standard errors of measurement (SEM) were 1.29–5.68 Nm (2.53%–15.70%), and the ICCs(3,1) were 0.896–0.996. M and Max of AFisomax, AFmax, and pre-MVIC correlated highly (r = 0.85–0.98). The M and Max of AFisomax were significantly lower (by 6.12–14.93 Nm; p ≤ 0.001–0.009) and more variable between trials (coefficients of variation (CVs) ≥ 21.95%) compared to those of pre-MVIC and AFmax (CVs ≤ 5.4%). The results suggest that the novel measuring procedure is suitable to reliably quantify the AF, whereby the presented measurement errors should be taken into consideration. AFisomax seems to reflect a strength capacity of its own and should be recorded separately. It is suggested that its normalization to the MVIC or AFmax could serve as an indicator of neuromuscular function.
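The between-day reliability statistics quoted above rest on the two-way intraclass correlation; a minimal sketch of ICC(3,1) as commonly defined (two-way mixed effects, consistency, single measurement), assuming a subjects-by-sessions data matrix and not reproducing the study's actual analysis code:

```python
import numpy as np

def icc_3_1(data):
    """ICC(3,1): two-way mixed effects, consistency, single measurement.

    data: (n_subjects, k_sessions) array of repeated measurements.
    """
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()    # between subjects
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()    # between sessions
    ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
```

A constant offset between sessions (e.g., a systematic day effect) does not lower ICC(3,1), because the session effect is removed by the two-way model; only subject-by-session inconsistency does.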
Previous studies have not considered the potential influence of maturity status on the relationship between mental imagery and change-of-direction (CoD) speed in youth soccer. Accordingly, this cross-sectional study examined the association between mental imagery and CoD performance in young elite soccer players of different maturity status. Forty young male soccer players, aged 10–17 years, were assigned to two groups according to their predicted age at peak height velocity (PHV) (pre-PHV; n = 20 and post-PHV; n = 20). Participants were evaluated on soccer-specific tests of CoD with (CoDBall-15m) and without (CoD-15m) the ball. Participants completed the movement imagery questionnaire (MIQ) with its three-dimensional structure: internal visual imagery (IVI), external visual imagery (EVI), and kinesthetic imagery (KI). The post-PHV players achieved significantly better results than the pre-PHV players in EVI (ES = 1.58, large; p < 0.001), CoD-15m (ES = 2.09, very large; p < 0.001), and CoDBall-15m (ES = 1.60, large; p < 0.001). Correlations differed significantly between maturity groups: for the pre-PHV group, a very large negative correlation was observed between CoDBall-15m and KI (r = –0.73, p = 0.001). For the post-PHV group, large negative correlations were observed between CoD-15m and IVI (r = –0.55, p = 0.011), EVI (r = –0.62, p = 0.003), and KI (r = –0.52, p = 0.020). A large negative correlation of CoDBall-15m with EVI (r = –0.55, p = 0.012) and a very large correlation with KI (r = –0.79, p = 0.001) were also observed. This study provides evidence for the theoretical and practical value of combining imagery with CoD tasks. We recommend that sport psychology specialists, coaches, and athletes integrate imagery for CoD tasks in pre-pubertal soccer players to further improve CoD-related performance.