The document "Forschungsdatenmanagement bei personenbezogenen Daten - eine Handreichung" ("Research data management for personal data - a practical guide") compiles key content, references, and procedures for researchers who collect personal data in a study and want to process, archive, or publish them. At the relevant sections, the guide points to further materials, in particular the "Datenschutz" (data protection) guide of the Rat für die Sozial-, Verhaltens-, Bildungs- und Wirtschaftswissenschaften (RatSWD).
Sulfur is an important element that is incorporated into many biomolecules in humans. The incorporation and transfer of sulfur into biomolecules is facilitated by a series of different sulfurtransferases. Among these is the human mercaptopyruvate sulfurtransferase (MPST), also designated tRNA thiouridine modification protein (TUM1). The human TUM1 protein has been implicated in a wide range of physiological processes in the cell, including, but not limited to, molybdenum cofactor (Moco) biosynthesis, cytosolic tRNA thiolation, and the generation of H2S as a signaling molecule in both mitochondria and the cytosol. Previous interaction studies showed that TUM1 interacts with the L-cysteine desulfurase NFS1 and the molybdenum cofactor biosynthesis protein 3 (MOCS3). Here, we examine the roles of TUM1 in human cells using CRISPR/Cas9 genetically modified human embryonic kidney cells. We show that TUM1 is involved in sulfur transfer for molybdenum cofactor synthesis and tRNA thiomodification, based on spectrophotometric measurement of sulfite oxidase activity and liquid chromatography quantification of sulfur-modified tRNA levels. Further, we show that TUM1 plays a role in hydrogen sulfide production and cellular bioenergetics.
In late summer, migratory bats of the temperate zone face the challenge of accomplishing two energy-demanding tasks almost at the same time: migration and mating. Both require information and involve search efforts, such as localizing prey or finding potential mates. In non-migrating bat species, playback studies showed that listening to vocalizations of other bats, both con- and heterospecifics, may help a recipient bat find foraging patches and mating sites. However, we still do not know to what degree migrating bats depend on con- or heterospecific vocalizations to identify potential feeding or mating opportunities during nightly transit flights. Here, we investigated the vocal responses of Nathusius' pipistrelle bats, Pipistrellus nathusii, to simulated feeding and courtship aggregations at a coastal migration corridor. We presented migrating bats with either feeding buzzes or courtship calls of their own or a heterospecific migratory species, the common noctule, Nyctalus noctula. We expected that during migratory transit flights, simulated feeding opportunities would be particularly attractive to bats, as would simulated mating opportunities, which may indicate suitable roosts for a stopover. However, we found that, compared to the natural silence of both pre- and post-playback phases, bat call activity did not change during playback of conspecific feeding sounds, whereas P. nathusii echolocation call activity increased during simulated feeding of N. noctula. In contrast, the call activity of P. nathusii decreased during playback of conspecific courtship calls, while no response could be detected when heterospecific call types were broadcast. Our results suggest that while on migratory transits, P. nathusii circumnavigate conspecific mating aggregations, possibly to save time or to reduce the risks associated with social interactions where aggression due to territoriality might be expected.
This avoidance behavior could be a result of optimization strategies by P. nathusii when performing long-distance migratory flights, and it could also explain the lack of a response to simulated conspecific feeding. However, the observed increase in activity in response to simulated feeding of N. noctula suggests that P. nathusii individuals may eavesdrop on other aerial-hawking insectivorous species during migration, especially if these occupy a slightly different foraging niche.
The central gas in half of all galaxy clusters shows short cooling times. Assuming unimpeded cooling, this should lead to high star formation and mass cooling rates, which are not observed. Instead, it is believed that condensing gas is accreted by the central black hole that powers an active galactic nucleus jet, which heats the cluster. The detailed heating mechanism remains uncertain. A promising mechanism invokes cosmic ray protons that scatter on self-generated magnetic fluctuations, i.e., Alfvén waves. Continuous damping of Alfvén waves provides heat to the intracluster medium. Previous work has found steady-state solutions for a large sample of clusters where cooling is balanced by Alfvénic wave heating. To verify modeling assumptions, we set out to study cosmic ray injection in three-dimensional magnetohydrodynamical simulations of jet feedback in an idealized cluster with the moving-mesh code arepo. We analyze the interaction of jet-inflated bubbles with the turbulent magnetized intracluster medium.
Furthermore, jet dynamics and heating are closely linked to the largely unconstrained jet composition. Interactions of electrons with photons of the cosmic microwave background result in observational signatures that depend on the bubble content. Recent observations provided evidence for underdense bubbles with a relativistic filling, while adopting simplifying modeling assumptions for the bubbles. By reproducing these observations with our simulations, we confirm the validity of those modeling assumptions and thereby the important finding of low-(momentum-)density jets.
In addition, the velocity and magnetic field structure of the intracluster medium have profound consequences for bubble evolution and heating processes. As velocity and magnetic fields are physically coupled, we demonstrate that numerical simulations can help link and thereby constrain their respective observables. Finally, we implement the currently preferred accretion model, cold accretion, into the moving-mesh code arepo and study feedback by light jets in a radiatively cooling magnetized cluster. While self-regulation is attained independently of the accretion model, jet density, and feedback efficiencies, we find that light jets are preferred in order to reproduce the observed cold gas morphology.
The light reactions of photosynthesis are carried out by a series of multiprotein complexes embedded in thylakoid membranes. Among them, photosystem I (PSI), acting as plastocyanin-ferredoxin oxidoreductase, catalyzes the final reaction. Together with light-harvesting antenna I, PSI forms a high-molecular-weight supercomplex of ~600 kDa, consisting of eighteen subunits and nearly two hundred cofactors. Assembly of the various components into a functional thylakoid membrane complex requires precise coordination, which is provided by the assembly machinery. Although a small number of proteins (PSI assembly factors) have been shown to play a role in the formation of PSI, the process as a whole, as well as the interplay of its members, remains largely unexplored.
In the present work, two approaches were used to find candidate PSI assembly factors. First, EnsembleNet was used to select proteins thought to be functionally related to known PSI assembly factors in Arabidopsis thaliana (approach I), and second, co-immunoprecipitation (Co-IP) of tagged PSI assembly factors in Nicotiana tabacum was performed (approach II).
Here, the novel PSI assembly factors designated CO-EXPRESSED WITH PSI ASSEMBLY 1 (CEPA1) and Ycf4-INTERACTING PROTEIN 1 (Y4IP1) were identified. A. thaliana null mutants for CEPA1 and Y4IP1 showed a growth phenotype and pale leaves compared with the wild type. Biophysical experiments using pulse amplitude modulation (PAM) revealed insufficient electron transport on the PSII acceptor side. Biochemical analyses revealed that both CEPA1 and Y4IP1 are specifically involved in PSI accumulation in A. thaliana at the post-translational level but are not essential. Consistent with their roles as factors in the assembly of a thylakoid membrane protein complex, the two proteins localize to thylakoid membranes. Remarkably, cepa1 y4ip1 double mutants exhibited lethal phenotypes in early developmental stages under photoautotrophic growth. Finally, co-IP and native gel experiments supported a possible role for CEPA1 and Y4IP1 in mediating PSI assembly in conjunction with other PSI assembly factors (e.g., PPD1- and PSA3-CEPA1 and Ycf4-Y4IP1). The fact that CEPA1 and Y4IP1 are found exclusively in green algae and higher plants suggests eukaryote-specific functions. Although the specific mechanisms need further investigation, CEPA1 and Y4IP1 are two novel assembly factors that contribute to PSI formation.
In recent decades, astronomy has seen a boom in large-scale stellar surveys of the Galaxy. The detailed information obtained about millions of individual stars in the Milky Way is bringing us a step closer to answering one of the most outstanding questions in astrophysics: how do galaxies form and evolve? The Milky Way is the only galaxy in which we can dissect many stars into their high-dimensional chemical composition and complete phase space, which, like fossil records, can unveil the history of the Galaxy's genesis. The processes that lead to the formation of large structures such as the Milky Way are critical for constraining cosmological models; this line of study is called Galactic archaeology or near-field cosmology.
At the core of this work, we present a collection of efforts to chemically and dynamically characterise the disks and bulge of our Galaxy. The results presented in this thesis have only been possible thanks to the advent of the Gaia astrometric satellite, which has revolutionised the field of Galactic archaeology by precisely measuring the positions, parallax distances and motions of more than a billion stars. Another, no less important, breakthrough is the APOGEE survey, which has observed spectra in the near-infrared, peering into the dusty regions of the Galaxy and allowing us to determine detailed chemical abundance patterns for hundreds of thousands of stars. To accurately depict the Milky Way's structure, we use and develop the Bayesian isochrone-fitting code StarHorse; this software predicts stellar distances, extinctions and ages by combining astrometry, photometry and spectroscopy with stellar evolutionary models. The StarHorse code is pivotal for calculating distances where Gaia parallaxes alone do not allow accurate estimates.
We show that by combining Gaia, APOGEE and photometric surveys with StarHorse, we can produce a chemical cartography of the Milky Way disks from their outermost to innermost parts. Such a map is unprecedented in the inner Galaxy. It reveals a continuity of the bimodal chemical pattern previously detected in the solar neighbourhood, indicating two populations with distinct formation histories. Furthermore, the data reveal a chemical gradient within the thin disk, where the content of 𝛼-process elements and metals is higher towards the centre. Focusing on a sample in the inner Milky Way, we confirm that the chemical duality extends to the innermost regions of the Galaxy. We find that stars on bar-shaped orbits show both high- and low-𝛼 abundances, suggesting that the bar formed by secular evolution, trapping stars that already existed. By analysing the chemical-orbital space of the inner Galactic regions, we disentangle the multiple populations that inhabit this complex region. We reveal the presence of the thin disk, thick disk, bar, and a counter-rotating population that resembles the outcome of a perturbed proto-Galactic disk. Our study also finds that the inner Galaxy holds a large number of super-metal-rich stars, with metallicities up to three times solar, suggesting it is a possible origin of the old super-metal-rich stars found in the solar neighbourhood.
We also take on the complicated task of deriving individual stellar ages. With StarHorse, we calculate the ages of main-sequence turn-off and subgiant stars for several public spectroscopic surveys. We validate our results by investigating linear relations between chemical abundances and time, since the 𝛼 and neutron-capture elements are sensitive to age, reflecting the different enrichment timescales of these elements. To study the disks in the solar neighbourhood further, we use an unsupervised machine learning algorithm to delineate a multidimensional separation of chrono-chemical stellar groups, revealing the chemical thick disk, the thin disk, and young 𝛼-rich stars. The thick disk is shown to have a small age dispersion, indicating its fast formation, in contrast to the thin disk, which spans a wide range of ages.
With groundbreaking data, this thesis presents a detailed chemo-dynamical view of the disk and bulge of our Galaxy. Our findings on the Milky Way can be linked to the evolution of high-redshift disk galaxies, helping to solve the conundrum of galaxy formation.
Cosmic rays (CRs) constitute an important component of the interstellar medium (ISM) of galaxies and are thought to play an essential role in governing their evolution. In particular, they are able to impact the dynamics of a galaxy by driving galactic outflows or heating the ISM, thereby affecting the efficiency of star formation. Hence, in order to understand galaxy formation and evolution, we need to accurately model this non-thermal constituent of the ISM. Except in our local environment within the Milky Way, however, we cannot measure CRs directly. There are nevertheless many ways to observe CRs indirectly via the radiation they emit through their interaction with magnetic and interstellar radiation fields as well as with the ISM.
In this work, I develop a numerical framework to calculate the spectral distribution of CRs in simulations of isolated galaxies where a steady-state between injection and cooling is assumed. Furthermore, I calculate the non-thermal emission processes arising from the modelled CR proton and electron spectra ranging from radio wavelengths up to the very high-energy gamma-ray regime.
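The steady-state assumption can be sketched numerically. Balancing a power-law injection spectrum Q(E) against a cooling rate b(E) = |dE/dt| gives N(E) = (1/b(E)) ∫_E^∞ Q(E') dE'; for Q ∝ E^(-γ) and synchrotron/inverse-Compton-like cooling b ∝ E², the spectrum steepens by one power to N ∝ E^(-(γ+1)). The following is a minimal illustration of that scaling; the grid, the index γ = 2.2 and the E² cooling law are assumptions for the example, not the thesis implementation.

```python
import numpy as np

# Illustrative steady-state cosmic-ray spectrum: injection Q(E) ~ E^-gamma
# balanced against cooling at a rate b(E) = |dE/dt| ~ E^2. All units and
# normalisations are arbitrary.
E = np.logspace(0.0, 4.0, 400)   # energy grid
gamma = 2.2
Q = E**-gamma                    # injection spectrum
b = E**2                         # cooling rate

def integral_to_end(y, x):
    """Trapezoidal integral from x[i] to x[-1], for every index i."""
    seg = 0.5 * (y[1:] + y[:-1]) * np.diff(x)
    out = np.zeros_like(x)
    out[:-1] = np.cumsum(seg[::-1])[::-1]
    return out

# Steady state: N(E) = (1 / b(E)) * integral_E^inf Q(E') dE'
N = integral_to_end(Q, E) / b

# For gamma > 1 the integral scales as E^(1-gamma), so the steady-state
# spectrum is one power steeper than the injection: N(E) ~ E^-(gamma+1).
slope = np.polyfit(np.log(E[50:250]), np.log(N[50:250]), 1)[0]
```

Fitting the log-log slope over the interior of the grid (away from the truncated upper bound) recovers the expected index -(γ+1).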
I apply this code to a number of high-resolution magneto-hydrodynamical (MHD) simulations of isolated galaxies, where CRs are included. This allows me to study their CR spectra and compare them to observations of the CR proton and electron spectra by the Voyager-1 satellite and the AMS-02 instrument in order to reveal the origin of the measured spectral features.
Furthermore, I provide detailed emission maps, luminosities and spectra of the non-thermal emission from our simulated galaxies, which range from dwarfs to Milky Way analogues to starburst galaxies at different evolutionary stages. I successfully reproduce the observed relations of both the radio and gamma-ray luminosities with the far-infrared (FIR) emission of star-forming (SF) galaxies, where the latter is a good tracer of the star-formation rate. I find that highly SF galaxies are close to the limit where their CR population would lose all of its energy to the emission of radiation, whereas CRs tend to escape more quickly from weakly SF galaxies. On top of that, I investigate the properties of CR transport that are needed in order to match the observed gamma-ray spectra.
Furthermore, I uncover the underlying processes that enable the FIR-radio correlation (FRC) to be maintained even in starburst galaxies, and find that thermal free-free emission naturally explains the observed radio spectra in SF galaxies like M82 and NGC 253, thus solving the riddle of flat radio spectra that had been proposed to contradict the observed tight FRC.
Lastly, I scrutinise the steady-state modelling of the CR proton component by investigating, for the first time, the influence of spectrally resolved CR transport in MHD simulations on the hadronic gamma-ray emission of SF galaxies, revealing new insights into the observational signatures of CR transport, both spectrally and spatially.
Many widely used observational data sets are composed of several overlapping instrument records. While data inter-calibration techniques often yield continuous and reliable data for trend analysis, less attention is generally paid to maintaining higher-order statistics such as variance and autocorrelation. A growing body of work uses these metrics to quantify the stability or resilience of a system under study, and potentially to anticipate an approaching critical transition in the system. In this context, it is important to explore the degree to which changes in resilience indicators such as variance or autocorrelation can be attributed to non-stationary characteristics of the measurement process rather than actual changes in the dynamical properties of the system. In this work we use both synthetic and empirical data to explore how changes in the noise structure of a data set propagate into the commonly used resilience metrics lag-one autocorrelation and variance. We focus on examples from remotely sensed vegetation indicators such as vegetation optical depth and the normalized difference vegetation index from different satellite sources. We find that time series resulting from mixing signals from sensors with varied uncertainties and covering overlapping time spans can lead to biases in inferred resilience changes. These biases are typically more pronounced when resilience metrics are aggregated (for example, by land-cover type or region), whereas estimates for individual time series remain reliable at reasonable sensor signal-to-noise ratios. Our work provides guidelines for the treatment and aggregation of multi-instrument data in studies of critical transitions and resilience.
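How measurement noise propagates into these two metrics can be illustrated with a toy example: an AR(1) "system" signal observed through a record stitched from two sensors with different noise levels. All parameters are illustrative and unrelated to the satellite data discussed above.

```python
import numpy as np

def ar1_series(n, phi, rng):
    """AR(1) process x_t = phi * x_{t-1} + e_t, a stand-in for slow system dynamics."""
    x = np.zeros(n)
    e = rng.standard_normal(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

def lag1_autocorr(x):
    """Lag-one autocorrelation of a time series."""
    x = x - x.mean()
    return np.sum(x[1:] * x[:-1]) / np.sum(x * x)

rng = np.random.default_rng(0)
signal = ar1_series(20_000, phi=0.8, rng=rng)

# Observed record stitched from two sensors: the second half carries
# larger uncorrelated measurement noise than the first half.
noise = np.concatenate([
    0.2 * rng.standard_normal(10_000),
    1.0 * rng.standard_normal(10_000),
])
observed = signal + noise
half1, half2 = observed[:10_000], observed[10_000:]

# White noise inflates the variance and dilutes the lag-one
# autocorrelation, even though the underlying dynamics never changed.
print(np.var(half1), np.var(half2))
print(lag1_autocorr(half1), lag1_autocorr(half2))
```

In this sketch the apparent jump in variance and drop in autocorrelation at the sensor seam is purely a measurement artifact, which is exactly the kind of bias that would be misread as a resilience change.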
In this work, binding interactions between biomolecules were analyzed by a technique based on electrically controllable DNA nanolevers. The technique was applied to virus-receptor interactions for the first time. As receptors, primarily peptides on DNA nanostructures and antibodies were utilized. The DNA nanostructures were integrated into the measurement technique and enabled the presentation of the peptides in a controllable geometrical order. The number of peptides could be varied to match the binding sites of the viral surface proteins.
Influenza A virus served as a model system with which the general feasibility of the measurements was demonstrated. Variations of the receptor peptide, the surface ligand density, the measurement temperature and the virus subtypes showed the sensitivity and applicability of the technology. Additionally, the immobilization of virus particles enabled the measurement of differences in oligovalent binding of DNA-peptide nanostructures to the viral proteins in their native environment.
When the coronavirus pandemic broke out in 2020, work on the binding interactions of a peptide from the hACE2 receptor and the spike protein of the SARS-CoV-2 virus revealed that oligovalent binding can be quantified with the switchSENSE technology. It could also be shown that small changes in the amino acid sequence of the spike protein resulted in a complete loss of binding. Interactions of the peptide with inactivated virus material as well as pseudovirus particles could be measured. Additionally, the switchSENSE technology was utilized to rank six antibodies by their binding affinity towards the nucleocapsid protein of SARS-CoV-2 for the development of a rapid antigen test device.
The technique was furthermore employed to show binding of a non-enveloped virus (adenovirus) and a virus-like particle (norovirus-like particle) to antibodies. Apart from binding interactions, the use of DNA origami levers with a length of around 50 nm enabled the switching of virus material. This proved that the technology is also able to size objects with a hydrodynamic diameter larger than 14 nm.
A theoretical work on diffusion and reaction-limited binding interactions revealed that the technique and the chosen parameters enable the determination of binding rate constants in the reaction-limited regime.
Overall, the applicability of the switchSENSE technique to virus-receptor binding interactions was demonstrated on multiple examples. While challenges remain, the setup enables the determination of affinities between viruses and receptors in their native environment. In particular, the possibilities for quantifying oligo- and multivalent binding interactions were demonstrated.
Selenium (Se) is an essential trace element that is ubiquitously present in the environment in small concentrations. The essential functions of Se in the human body are carried out by a wide range of proteins containing selenocysteine as their active center. Such proteins, called selenoproteins, are involved in multiple physiological processes such as antioxidative defense and the regulation of thyroid hormone functions. Se deficiency is therefore known to cause a broad spectrum of physiological impairments, especially in endemic regions with low Se content. Nevertheless, despite being an essential trace element, Se can exhibit toxic effects if its intake exceeds tolerable levels. The range between deficiency and overexposure thus represents the optimal Se supply; however, this range is narrower than for any other essential trace element. Together with strongly varying Se concentrations in soil and the presence of specific bioaccumulation factors, this makes the assessment of the epidemiological Se status noticeably difficult. While Se acts in the body through multiple selenoproteins, its intake occurs mainly in the form of small organic or inorganic species. Thus, Se exposure depends not only on daily intake but also on the chemical form in which it is present.
The essential functions of selenium have been known for a long time, and its primary forms in different food sources have been described. Nevertheless, analytical capabilities for a comprehensive investigation of Se species and their derivatives have been introduced only in recent decades. A new Se compound was identified in 2010 in the blood and tissues of bluefin tuna. It was called selenoneine (SeN), since it is an isologue of the naturally occurring antioxidant ergothioneine (ET) in which Se replaces sulfur. In the following years, SeN was identified in a number of edible fish species and attracted attention as a new dietary Se source and a potentially strong antioxidant. Studies in populations whose diet relies largely on fish revealed that SeN represents the main non-protein-bound Se pool in their blood. First studies, conducted with enriched fish extracts, already demonstrated the high antioxidative potential of SeN and its possible function in the detoxification of methylmercury in fish. Cell culture studies demonstrated that SeN can utilize the same transporter as ergothioneine, and a SeN metabolite was found in human urine.
Until recently, studies of SeN properties were severely limited by the lack of ways to obtain the pure compound. A prerequisite for this work was the successful synthesis of SeN at the University of Graz using genetically modified yeasts. In the current study, using HepG2 liver carcinoma cells, it was demonstrated that SeN does not cause toxic effects in hepatocytes up to a concentration of 100 μM. Uptake experiments showed that SeN is not bioavailable to the liver cells used.
In the next part, a blood-brain barrier (BBB) model based on capillary endothelial cells from the porcine brain was used to describe the possible transfer of SeN into the central nervous system (CNS). The assessment of toxicity markers in these endothelial cells and the monitoring of barrier conditions during transfer experiments demonstrated the absence of toxic effects of SeN on the BBB endothelium up to a concentration of 100 μM. Transfer data showed slow but substantial transfer of SeN. A statistically significant increase was observed 48 hours after SeN incubation from the blood-facing side of the barrier. However, an increase in Se content was already clearly visible after 6 hours of incubation with 1 μM of SeN. While the transfer rate of SeN after application of a 0.1 μM dose was very close to that for 1 μM, incubation with 10 μM of SeN resulted in a significantly decreased transfer rate. Double-sided application of SeN caused no side-specific transfer, suggesting a passive diffusion mechanism of SeN across the BBB. These data are in accordance with animal studies in which ET accumulation was observed in the rat brain, even though the rat BBB does not have the primary ET transporter, OCTN1. Investigation of capillary endothelial cell monolayers after incubation with SeN and reference selenium compounds showed no significant increase in intracellular selenium concentration. Species-specific Se measurements in medium samples from the apical and basolateral compartments, as well as in cell lysates, showed no SeN metabolization. It can therefore be concluded that SeN may reach the brain without significant transformation.
In the third part of this work, the antioxidant properties of SeN were assessed in Caco-2 human colorectal adenocarcinoma cells. Previous studies demonstrated that the intestinal epithelium is able to actively transport SeN from the intestinal lumen to the blood side and to accumulate SeN. Further investigation within the current work showed a much higher antioxidant potential of SeN compared to ET. The radical scavenging activity after incubation with SeN was close to that observed for selenite and selenomethionine. However, the effect of SeN on the viability of intestinal cells under oxidative conditions was close to that caused by ET. To answer the question of whether SeN can be used as a dietary Se source and induce the activity of selenoproteins, the activity of glutathione peroxidase (GPx) and the secretion of selenoprotein P (SelenoP) were additionally measured in Caco-2 cells. As expected, the reference selenium compounds selenite and selenomethionine caused efficient induction of GPx activity. In contrast, SeN had no effect on GPx activity. To examine the possibility of SeN being embedded into the selenoproteome, SelenoP was measured in the culture medium. Even though Caco-2 cells effectively take up SeN in quantities much higher than selenite or selenomethionine, no secretion of SelenoP was observed after SeN incubation.
In summary, we can conclude that SeN can hardly serve as a Se source for selenoprotein synthesis. However, SeN exhibits strong antioxidative properties, which arise when the sulfur in ET is exchanged for Se. SeN is therefore of particular interest for research not as part of Se metabolism, but as an important endemic dietary antioxidant.
Air pollution has been a persistent global problem in the past several hundred years. While some industrialized nations have shown improvements in their air quality through stricter regulation, others have experienced declines as they rapidly industrialize. The WHO’s 2021 update of their recommended air pollution limit values reflects the substantial impacts on human health of pollutants such as NO2 and O3, as recent epidemiological evidence suggests substantial long-term health impacts of air pollution even at low concentrations. Alongside developments in our understanding of air pollution's health impacts, the new technology of low-cost sensors (LCS) has been taken up by both academia and industry as a new method for measuring air pollution. Due primarily to their lower cost and smaller size, they can be used in a variety of different applications, including in the development of higher resolution measurement networks, in source identification, and in measurements of air pollution exposure. While significant efforts have been made to accurately calibrate LCS with reference instrumentation and various statistical models, accuracy and precision remain limited by variable sensor sensitivity. Furthermore, standard procedures for calibration still do not exist and most proprietary calibration algorithms are black-box, inaccessible to the public. This work seeks to expand the knowledge base on LCS in several different ways: 1) by developing an open-source calibration methodology; 2) by deploying LCS at high spatial resolution in urban environments to test their capability in measuring microscale changes in urban air pollution; 3) by connecting LCS deployments with the implementation of local mobility policies to provide policy advice on resultant changes in air quality.
In a first step, it was found that LCS can be consistently calibrated with good performance against reference instrumentation using seven general steps: 1) assessing raw data distribution, 2) cleaning data, 3) flagging data, 4) model selection and tuning, 5) model validation, 6) exporting final predictions, and 7) calculating associated uncertainty. By emphasizing the need for consistent reporting of details at each step, most crucially on model selection, validation, and performance, this work pushed forward with the effort towards standardization of calibration methodologies. In addition, with the open-source publication of code and data for the seven-step methodology, advances were made towards reforming the largely black-box nature of LCS calibrations.
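A minimal sketch of steps 2) through 7) with synthetic co-location data is shown below. An ordinary least-squares model with a temperature covariate stands in for the model selection and tuning step; the pollutant, coefficients and thresholds are invented for illustration and do not reproduce the published methodology.

```python
import numpy as np

# Synthetic co-location: a low-cost sensor (LCS) signal generated from a
# "true" reference concentration plus a temperature interference.
# All numbers are illustrative assumptions.
rng = np.random.default_rng(42)
reference = rng.uniform(5, 80, 500)                  # reference NO2, ug/m3
temperature = rng.uniform(0, 30, 500)                # deg C
raw = 0.6 * reference + 0.3 * temperature + 4 + rng.normal(0, 1.5, 500)

# 2)-3) clean/flag: drop physically implausible raw readings
ok = (raw > 0) & (raw < 200)
X = np.column_stack([raw[ok], temperature[ok], np.ones(ok.sum())])
y = reference[ok]

# 4)-5) fit a candidate model on a training split, validate on a holdout
n_train = int(0.7 * len(y))
coef, *_ = np.linalg.lstsq(X[:n_train], y[:n_train], rcond=None)

# 6)-7) final predictions and an associated uncertainty estimate (RMSE)
pred = X[n_train:] @ coef
rmse = np.sqrt(np.mean((pred - y[n_train:]) ** 2))
print(f"holdout RMSE: {rmse:.2f} ug/m3")
```

Reporting the model choice, the validation split and the holdout RMSE alongside the predictions corresponds to the consistent-reporting requirement emphasised above.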
With a transparent and reliable calibration methodology established, LCS were then deployed in various street canyons between 2017 and 2020. Using two types of LCS, metal oxide (MOS) and electrochemical (EC), their performance in capturing expected patterns of urban NO2 and O3 pollution was evaluated. Results showed that calibrated concentrations from MOS and EC sensors matched general diurnal patterns in NO2 and O3 pollution measured using reference instruments. While MOS proved to be unreliable for discerning differences among measured locations within the urban environment, the concentrations measured with calibrated EC sensors matched expectations from modelling studies on NO2 and O3 pollution distribution in street canyons. As such, it was concluded that LCS are appropriate for measuring urban air quality, including for assisting urban-scale air pollution model development, and can reveal new insights into air pollution in urban environments.
To achieve the last goal of this work, two measurement campaigns were conducted in connection with the implementation of three mobility policies in Berlin. The first involved the construction of a pop-up bike lane on Kottbusser Damm in response to the COVID-19 pandemic, the second surrounded the temporary implementation of a community space on Böckhstrasse, and the last focused on the closure of a portion of Friedrichstrasse to all motorized traffic. In all cases, measurements of NO2 were collected before and after the measure was implemented to assess changes in air quality resulting from these policies. Results from the Kottbusser Damm experiment showed that the bike lane reduced the NO2 concentrations that cyclists were exposed to by 22 ± 19%. On Friedrichstrasse, the street closure reduced NO2 concentrations to the level of the urban background without worsening the air quality on side streets. These valuable results were communicated swiftly to partners in the city administration responsible for evaluating the policies' success and future, highlighting the ability of LCS to provide policy-relevant results.
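The general form of such a before/after estimate can be sketched with first-order error propagation on the ratio of mean concentrations. The means and standard deviations below are invented for illustration; they are not the campaign data.

```python
import numpy as np

# Invented before/after NO2 statistics (ug/m3) for one measurement site.
before_mean, before_sd = 38.0, 6.0
after_mean, after_sd = 29.6, 5.5

ratio = after_mean / before_mean
reduction = 1 - ratio            # relative reduction

# First-order (Gaussian) propagation of the two uncertainties into the ratio.
ratio_sd = ratio * np.sqrt((after_sd / after_mean) ** 2
                           + (before_sd / before_mean) ** 2)
print(f"NO2 reduction: {100 * reduction:.0f} ± {100 * ratio_sd:.0f} %")
```

With these invented inputs the estimate comes out near 22 ± 19%, illustrating why a reduction of that size can still carry a comparably large uncertainty.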
As LCS are a new technology, much is still to be learned about them and their value to academic research in the atmospheric sciences. Nevertheless, this work has advanced the state of the art in several ways. First, it contributed a novel open-source calibration methodology that can be used by LCS end-users for various air pollutants. Second, it strengthened the evidence base on the reliability of LCS for measuring urban air quality, finding through novel deployments in street canyons that LCS can be used at high spatial resolution to understand microscale air pollution dynamics. Last, it is the first work of its kind to connect LCS measurements directly with mobility policies to understand their influence on local air quality, resulting in policy-relevant findings valuable for decision-makers. It serves as an example of the potential of LCS to expand our understanding of air pollution at various scales, as well as of their ability to serve as valuable tools in transdisciplinary research.
Both horizontal-to-vertical (H/V) spectral ratios and the spatial autocorrelation method (SPAC) have proven to be valuable tools for gaining insight into local site effects from ambient noise measurements. Here, the two methods are employed to assess the subsurface velocity structure of the Piano delle Concazze area on Mt Etna. Volcanic tremor records from an array of 26 broadband seismometers are processed, and a strong variability of H/V ratios during periods of increased volcanic activity is found. From the spatial distribution of H/V peak frequencies, a geologic structure in the north-east of Piano delle Concazze is imaged, which is interpreted as the Ellittico caldera rim. The method is extended to include both velocity data from the broadband stations and distributed acoustic sensing data from a co-located 1.5 km long fibre optic cable. High maximum amplitude values of the resulting ratios along the trajectory of the cable coincide with known faults. The outcome also indicates previously unmapped parts of a fault. The geologic interpretation is in good agreement with inversion results from magnetic survey data. Using the neighborhood algorithm, spatial autocorrelation curves obtained from the modified SPAC are inverted alone and jointly with the H/V peak frequencies for 1D shear wave velocity profiles. The obtained models are largely consistent with published models and validate the results from the fibre optic cable.
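At its core, the H/V technique takes the ratio of horizontal to vertical spectral amplitudes of ambient noise. A minimal single-window sketch is shown below; the thesis's actual processing (windowing, smoothing, averaging over many records and stations) is considerably more involved, and the function name and conventions here are assumptions.

```python
import numpy as np

def hv_ratio(z, n, e, fs, nfft=None):
    """Single-window H/V spectral ratio from three-component noise records.

    z, n, e: vertical, north and east component samples; fs: sampling rate (Hz).
    Combines the horizontals as a quadratic mean, a common convention.
    Returns the frequency axis and the H/V ratio per frequency bin.
    """
    nfft = nfft or len(z)
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    Z = np.abs(np.fft.rfft(z, nfft))
    H = np.sqrt((np.abs(np.fft.rfft(n, nfft)) ** 2 +
                 np.abs(np.fft.rfft(e, nfft)) ** 2) / 2.0)
    ratio = H / np.maximum(Z, 1e-12)  # guard against division by zero
    return freqs, ratio
```

The frequency of the H/V peak is then read off the returned ratio, as done for the peak-frequency maps described above.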
Digitalisation in industry – also called “Industry 4.0” – is seen by numerous actors as an opportunity to reduce the environmental impact of the industrial sector. Scientific assessments of the effects of digitalisation in industry on environmental sustainability are, however, ambivalent. This cumulative dissertation uses three empirical studies to examine the expected and observed effects of digitalisation in industry on environmental sustainability. The aim of this dissertation is to identify opportunities and risks of digitalisation at different system levels and to derive options for action in politics and industry for a more sustainable design of digitalisation in industry. I use an interdisciplinary, socio-technical approach and look at selected countries of the Global South (Study 1) and the example of China (all studies). In the first study (section 2, joint work with Marcel Matthess), I use qualitative content analysis to examine digital and industrial policies from seven countries in Africa and Asia for expectations regarding the impact of digitalisation on sustainability and compare these with the potential of digitalisation for sustainability in the respective country contexts. The analysis reveals that the documents express a wide range of vague expectations that relate more to positive indirect impacts of information and communication technology (ICT) use, such as improved energy efficiency and resource management, and less to negative direct impacts of ICT, such as the electricity consumption of ICT itself. In the second study (section 3, joint work with Marcel Matthess, Grischa Beier and Bing Xue), I conduct and analyse interviews with 18 industry representatives of the electronics industry from Europe, Japan and China on digitalisation measures in supply chains, using qualitative content analysis.
I find that while there are positive expectations regarding the effects of digital technologies on supply chain sustainability, their actual use and observable effects are still limited. Interview partners can provide only a few examples from their own companies in which sustainability goals have already been pursued through digitalisation of the supply chain or in which sustainability effects, such as resource savings, have been demonstrably achieved. In the third study (section 4, joint work with Peter Neuhäusler, Melissa Dachrodt and Marcel Matthess), I conduct an econometric panel data analysis, examining the relationship between the degree of Industry 4.0, energy consumption and energy intensity in ten manufacturing sectors in China between 2006 and 2019. The results suggest that, overall, there is no significant relationship between the degree of Industry 4.0 and energy consumption or energy intensity in manufacturing sectors in China. However, differences can be found in subgroups of sectors. I find a negative correlation between Industry 4.0 and energy intensity in highly digitalised sectors, indicating an efficiency-enhancing effect of Industry 4.0 in these sectors. On the other hand, there is a positive correlation between Industry 4.0 and energy consumption in sectors with low energy consumption, which could be explained by the fact that digitalisation, such as the automation of previously mainly labour-intensive sectors, itself requires energy and also induces growth effects. In the discussion section (section 6) of this dissertation, I use the classification scheme of the three levels macro, meso and micro, as well as of direct and indirect environmental effects, to sort the empirical observations into opportunities and risks, for example with regard to the probability of rebound effects of digitalisation at the three levels.
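The sector-level panel relationship described above can be illustrated with a within (fixed-effects) estimator, which removes time-invariant sector effects by demeaning within each sector before fitting the slope. This single-regressor sketch is far simpler than the study's actual econometric specification, and all names are assumptions.

```python
import numpy as np

def fe_ols(y, x, groups):
    """Within (fixed-effects) estimator: demean y and x by group, then OLS slope.

    y: e.g. log energy intensity; x: e.g. an Industry 4.0 degree index;
    groups: sector identifiers. Illustrative only -- the study's specification
    (controls, standard errors, subgroup splits) is richer than this.
    """
    y = np.asarray(y, float)
    x = np.asarray(x, float)
    groups = np.asarray(groups)
    yd = np.empty_like(y)
    xd = np.empty_like(x)
    for g in np.unique(groups):
        m = groups == g
        yd[m] = y[m] - y[m].mean()  # remove sector fixed effect from y
        xd[m] = x[m] - x[m].mean()  # and from x
    beta = (xd @ yd) / (xd @ xd)    # slope of the demeaned regression
    return beta
```

With a panel where each sector has its own level shift, the estimator recovers the common slope regardless of those shifts.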
I link the investigated actor perspectives (policy makers, industry representatives), statistical data and additional literature across the system levels and consider political economy aspects to suggest fields of action for more sustainable (digitalised) industries. The dissertation thus makes two overarching contributions to the academic and societal discourse. First, my three empirical studies expand the limited state of research at the interface between digitalisation in industry and sustainability, especially by considering selected countries in the Global South and the example of China. Second, exploring the topic through data and methods from different disciplinary contexts and taking a socio-technical point of view enables an analysis of (path) dependencies, uncertainties, and interactions in the socio-technical system across different system levels, which have often not been sufficiently considered in previous studies. The dissertation thus aims to create a scientifically and practically relevant knowledge base for a value-guided, sustainability-oriented design of digitalisation in industry.
Stars under influence: evidence of tidal interactions between stars and substellar companions
(2023)
Tidal interactions occur between gravitationally bound astrophysical bodies. If their spatial separation is sufficiently small, the bodies can induce tides on each other, leading to angular momentum transfer and altering the evolutionary paths the bodies would have followed had they been single objects. Tidal processes are well established in the planet-moon systems of the Solar System and in close stellar binary systems. But how do stars behave if they are orbited by a substellar companion (e.g. a planet or a brown dwarf) on a tight orbit?
Typically, a substellar companion inside the corotation radius of a star will migrate toward the star as it loses orbital angular momentum. On the other hand, the star will gain angular momentum which has the potential to increase its rotation rate. The effect should be more pronounced if the substellar companion is more massive. As the stellar rotation rate and the magnetic activity level are coupled, the star should appear more magnetically active under the tidal influence of the orbiting substellar companion. However, the difficulty in proving that a star has a higher magnetic activity level due to tidal interactions lies in the fact that (I) substellar companions around active stars are easier to detect if they are more massive, leading to a bias toward massive companions around active stars and mimicking the tidal interaction effect, and that (II) the age of a main-sequence star cannot be easily determined, leaving the possibility that a star is more active due to its young age.
In our work, we overcome these issues by employing wide stellar binary systems in which one star hosts a substellar companion and the other star provides the magnetic activity baseline for the host star, assuming the two have coevolved; the companion star thereby indicates the host's activity level in the absence of any tidal effect. Firstly, we find that extrasolar planets can noticeably increase the host star's X-ray luminosity and that the effect is more pronounced if the exoplanet is at least Jupiter-like in mass and close to the star. Further, we find that a brown dwarf has an even stronger effect, as expected, and that the X-ray surface flux difference between the host star and the wide stellar companion is a significant outlier when compared to a large sample of similar wide binary systems without any known substellar companions. This result proves that substellar-hosting wide binary systems can be good tools for revealing the tidal effect on host stars, and also shows that typical stellar age indicators such as activity or rotation cannot be used for these stars. Finally, knowing that the activity difference is a good tracer of the substellar companion's tidal impact, we develop an analytical method to calculate the modified tidal quality factor Q' of individual host stars, which defines the tidal dissipation efficiency in the convective envelope of a given main-sequence star.
In the present thesis, AC electrokinetic forces, such as dielectrophoresis and AC electroosmosis, were demonstrated as a simple and fast method to functionalize the surface of nanoelectrodes with submicrometer-sized biological objects. These nanoelectrodes have a cylindrical shape with a diameter of 500 nm and are arranged in an array of 6256 electrodes. Due to their medical relevance, influenza virus and anti-influenza antibodies were chosen as model organisms. Common methods for bringing antibodies or proteins to biosensor surfaces are complex and time-consuming. In the present work, it was demonstrated that by applying AC electric fields, influenza viruses and antibodies can be immobilized onto the nanoelectrodes within seconds, without any prior chemical modification of either the surface or the immobilized biological object. The distribution of these immobilized objects is not uniform over the entire array; it exhibits a decreasing gradient from the outer rows to the inner ones. Different causes of this gradient are discussed, such as the vortex-shaped fluid motion above the nanoelectrodes generated by, among others, electrothermal fluid flow. It was demonstrated that part of the accumulated material is permanently immobilized on the electrodes. This is a unique characteristic of the presented system, since in the literature AC electrokinetic immobilization is almost exclusively presented as a method for temporary immobilization. The spatial distribution of the immobilized viral material or anti-influenza antibodies at the electrodes was observed either by the combination of fluorescence microscopy and deconvolution or by super-resolution microscopy (STED). On-chip immunoassays were performed to examine the suitability of the functionalized electrodes as a potential affinity-based biosensor. Two approaches were pursued: (A) the influenza virus as the bio-receptor or (B) the influenza virus as the analyte.
Different sources of error were eliminated by ELISA and passivation experiments. The activity of the immobilized object was then inspected by incubation with the analyte. This resulted in the successful detection of anti-influenza antibodies by the immobilized viral material. Conversely, detection of influenza virus particles by the immobilized anti-influenza antibodies was not possible. The latter might be due to lost activity or wrong orientation of the antibodies. Thus, further examination of the activity of antibodies immobilized by AC electric fields should follow. When combined with microfluidics and an electrical read-out system, the functionalized chips have the potential to serve as a rapid, portable, and cost-effective point-of-care (POC) device. Such a device could serve as a basis for diverse applications in diagnosing and treating influenza, as well as various other pathogens.
The global climate crisis is significantly contributing to changing ecosystems, loss of biodiversity and is putting numerous species on the verge of extinction. In principle, many species are able to adapt to changing conditions or shift their habitats to more suitable regions. However, change is progressing faster than some species can adjust, or potential adaptation is blocked and disrupted by direct and indirect human action. Unsustainable anthropogenic land use in particular is one of the driving factors, besides global heating, for these ecologically critical developments. Precisely because land use is anthropogenic, it is also a factor that could be quickly and immediately corrected by human action.
In this thesis, I therefore assess the impact of three climate change scenarios of increasing intensity in combination with differently scheduled mowing regimes on the long-term development and dispersal success of insects in Northwest German grasslands. The large marsh grasshopper (LMG, Stethophyma grossum, Linné 1758) is used as a species of reference for the analyses. It inhabits wet meadows and marshes and has a limited, yet fairly good ability to disperse. Mowing and climate conditions affect the development and mortality of the LMG differently depending on its life stage.
The specifically developed simulation model HiLEG (High-resolution Large Environmental Gradient) serves as a tool for investigating and projecting viability and dispersal success under different climate conditions and land use scenarios. It is a spatially explicit, stage- and cohort-based model that can be individually configured to represent the life cycle and characteristics of terrestrial insect species, as well as high-resolution environmental data and the occurrence of external disturbances. HiLEG is freely available, adjustable software that can be used to support conservation planning in cultivated grasslands.
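The stage- and cohort-based logic of a model like HiLEG can be illustrated with a minimal three-stage update in which survival, stage transitions, reproduction, and a mowing disturbance act on a population vector. Stage names, rates, and the mowing effect below are invented for illustration and do not reproduce HiLEG's actual implementation, which is spatially explicit and driven by climate and land-use data.

```python
import numpy as np

# Illustrative stages for a grasshopper-like insect (assumed, not HiLEG's).
STAGES = ["egg", "nymph", "adult"]

def step(pop, survival, transition, fecundity, mowing=False):
    """Advance a population vector [eggs, nymphs, adults] by one time step.

    survival: per-stage survival probabilities.
    transition: [egg->nymph, nymph->adult] transition probabilities.
    fecundity: eggs produced per surviving adult per step.
    mowing: if True, an extra mortality is applied to mobile stages.
    """
    s = np.array(survival, float)
    if mowing:                        # mowing hits nymphs and adults hardest
        s = s * np.array([1.0, 0.5, 0.5])
    eggs, nymphs, adults = pop * s    # stage-specific survival
    new = np.zeros(3)
    new[0] = eggs * (1 - transition[0]) + adults * fecundity
    new[1] = eggs * transition[0] + nymphs * (1 - transition[1])
    new[2] = nymphs * transition[1] + adults
    return new
```

Iterating `step` over a season, with `mowing=True` on mowing dates, gives the kind of long-term viability trajectory the case studies evaluate.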
In the three case studies of this thesis, I explore various aspects related to the structure of simulation models per se, their importance in conservation planning in general, and insights regarding the LMG in particular. It became apparent that the detailed resolution of model processes and components is crucial to project the long-term effect of spatially and temporally confined events. Taking into account conservation measures at the regional level has further proven relevant, especially in light of the climate crisis. I found that the LMG is benefiting from global warming in principle, but continues to be constrained by harmful mowing regimes. Land use measures could, however, be adapted in such a way that they allow the expansion and establishment of the LMG without overly affecting agricultural yields.
Overall, simulation models like HiLEG can make an important contribution and add value to conservation planning and policy-making. Properly used, simulation results shed light on aspects that might be overlooked by subjective judgment and the experience of individual stakeholders. Even though it is in the nature of models that they are subject to limitations and only represent fragments of reality, this should not keep stakeholders from using them, as long as these limitations are clearly communicated. Similar to HiLEG, models could further be designed in such a way that not only the parameterization can be adjusted as required, but also the implementation itself can be improved and changed as desired. This openness and flexibility should become more widespread in the development of simulation models.
Properties of Arctic aerosol in the transition between Arctic haze to summer season derived by lidar
(2023)
During the Arctic haze period, the Arctic troposphere contains larger, yet fewer, aerosol particles than during the summer (Tunved et al., 2013; Quinn et al., 2007). Interannual variability (Graßl and Ritter, 2019; Rinke et al., 2004), as well as unknown origins (Stock et al., 2014) and properties of the aerosol, complicates modeling these annual aerosol cycles. This thesis investigates the modification of the microphysical properties of Arctic aerosol in the transition from Arctic haze to the summer season. To this end, lidar measurements from Ny-Ålesund between April 2021 and the end of July 2021 are evaluated based on the aerosol's optical properties, and an overview of those properties is provided. Furthermore, parallel radiosonde data are considered as an indicator of hygroscopic growth.
The annual aerosol cycle in 2021 differs from expectations based on the previous studies of Tunved et al. (2013) and Quinn et al. (2007). The development of backscatter, extinction, aerosol depolarisation, lidar ratio and color ratio shows a return of the Arctic haze in May: the haze had already declined in April but intensified again afterwards.
The average Arctic aerosol displays hygroscopic behaviour, i.e. growth due to water uptake. Determining such behaviour is generally laborious because various meteorological circumstances need to be considered. Two case studies provide further information on these possible events; in particular, a day with a rare ice cloud and highly variable water cloud layers is observed.
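Two of the optical quantities evaluated above can be written down compactly. The helper functions below are an illustrative sketch; definitions and wavelength conventions for the color ratio vary between studies, so the specific form here is an assumption.

```python
import numpy as np

def color_ratio(beta_short, beta_long):
    """Backscatter color ratio beta(short wavelength) / beta(long wavelength).

    Higher values indicate relatively more small particles; a decrease over
    time is consistent with particle growth, e.g. by water uptake.
    (Which wavelength goes in the numerator varies between studies.)
    """
    return np.asarray(beta_short, float) / np.asarray(beta_long, float)

def lidar_ratio(alpha, beta):
    """Extinction-to-backscatter (lidar) ratio in sr, an aerosol-type proxy."""
    return np.asarray(alpha, float) / np.asarray(beta, float)
```

Tracking these ratios through the haze-to-summer transition is what reveals the changes in particle size and type discussed above.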
The Andean Cordillera is a mountain range located at the western South American margin and is part of the eastern Circum-Pacific orogenic belt. The ~7000 km long mountain range is one of the longest on Earth and hosts the second largest orogenic plateau in the world, the Altiplano-Puna plateau. The Andes are known as a non-collisional subduction-type orogen which developed as a result of the interaction between the subducted oceanic Nazca plate and the South American continental plate. The different Andean segments exhibit along-strike variations of morphotectonic provinces characterized by different elevations, volcanic activity, deformation styles, crustal thickness, shortening magnitude and oceanic plate geometry. Most of the present-day elevation can be explained by crustal shortening in the last ~50 Ma, with the shortening magnitude decreasing from ~300 km in the central (15°S-30°S) segment to less than half that in the southern part (30°S-40°S). Several factors have been proposed that might control the magnitude and acceleration of shortening of the Central Andes in the last 15 Ma. One important factor is likely the slab geometry. At 27-33°S, the slab dips horizontally at ~100 km depth due to the subduction of the buoyant Juan Fernandez Ridge, forming the Pampean flat-slab. This horizontal subduction is thought to influence the thermo-mechanical state of the Sierras Pampeanas foreland, for instance, by strengthening the lithosphere and promoting the thick-skinned propagation of deformation to the east, resulting in the uplift of the Sierras Pampeanas basement blocks. The flat-slab has migrated southwards from the Altiplano latitude at ~30 Ma to its present-day position, and the processes associated with its passage and their consequences for the contemporaneous acceleration of the shortening rate in the Central Andes remain unclear.
Although the passage of the flat-slab could offer an explanation for the acceleration of the shortening, the timing does not explain the two pulses of shortening at about 15 Ma and 4 Ma suggested by geological observations. I hypothesize that deformation in the Central Andes is controlled by a complex interaction between the subduction dynamics of the Nazca plate and the dynamic strengthening and weakening of the South American plate due to several upper-plate processes. To test this hypothesis, a detailed investigation into the role of the flat-slab, the structural inheritance of the continental plate, and the subduction dynamics in the Andes is needed. Therefore, I have built two classes of numerical thermo-mechanical models: (i) The first class is a series of generic E-W-oriented high-resolution 2D subduction models that include flat subduction, designed to investigate the role of subduction dynamics in the temporal variability of the shortening rate in the Central Andes at Altiplano latitudes (~21°S); the shortening rate from these models was then validated against the observed tectonic shortening rate in the Central Andes. (ii) The second class is a series of 3D data-driven models of the present-day Pampean flat-slab configuration and the Sierras Pampeanas (26-42°S), which aim to investigate the relative contributions of the present-day flat subduction and of inherited structures in the continental lithosphere to strain localization. Both model classes were built using the advanced finite element geodynamic code ASPECT.
The first main finding of this work is that the temporal variability of shortening in the Central Andes is primarily controlled by the subduction dynamics of the Nazca plate as it penetrates the mantle transition zone. These dynamics depend on the westward velocity of the South American plate, which provides the main crustal shortening force to the Andes and forces the trench to retreat. When the subducting plate reaches the lower mantle, it buckles on itself until the forced trench retreat causes the slab to steepen in the upper mantle, in contrast with the classical slab-anchoring model. The steepening of the slab hinders the trench, causing it to resist the advancing South American plate and resulting in the pulsatile shortening. This buckling-and-steepening subduction regime could have been initiated by the overall decrease in the westward velocity of the South American plate. In addition, the passage of the flat-slab is required to promote the shortening of the continental plate, because flat subduction scrapes the mantle lithosphere and thus weakens the continental plate. This process contributes to efficient shortening when the trench is hindered, followed by mantle lithosphere delamination at ~20 Ma. Finally, the underthrusting of the Brazilian cratonic shield beneath the orogen occurs at ~11 Ma due to the mechanical weakening caused by the thick sediments covering the shield margin, and due to the decreasing resistance of the weakened lithosphere of the orogen.
The second main finding of this work is that the cold flat-slab strengthens the overriding continental lithosphere and prevents strain localization. The deformation is therefore transmitted to the eastern front of the flat-slab segment by the shear stress operating at the subduction interface; the flat-slab thus acts like an indenter that “bulldozes” the mantle keel of the continental lithosphere. The offset in the eastward propagation of deformation between the flat and the steeper slab segments in the south causes the formation of a transpressive dextral shear zone. Here, inherited faults from past tectonic events are reactivated and further localize the deformation in an en-echelon strike-slip shear zone, through a mechanism that I refer to as the “flat-slab conveyor”. Specifically, the shallowing of the flat-slab causes the lateral deformation, which explains the timing of multiple geological events preceding the arrival of the flat-slab at 33°S. These include the onset of compression and the transition from thin- to thick-skinned deformation styles resulting from the contraction of the crust in the Sierras Pampeanas some 10 and 6 Myr before the Juan Fernandez Ridge collision at that latitude, respectively.
Here, we demonstrate the utility of native membrane derived vesicles (nMVs) as tools for expeditious electrophysiological analysis of membrane proteins. We used a cell-free (CF) and a cell-based (CB) approach for preparing protein-enriched nMVs. We utilized the Chinese Hamster Ovary (CHO) lysate-based cell-free protein synthesis (CFPS) system to enrich ER-derived microsomes in the lysate with the primary human cardiac voltage-gated sodium channel 1.5 (hNaV1.5; SCN5A) in 3 h. Subsequently, CB-nMVs were isolated from fractions of nitrogen-cavitated CHO cells overexpressing the hNaV1.5. In an integrative approach, nMVs were micro-transplanted into Xenopus laevis oocytes. CB-nMVs expressed native lidocaine-sensitive hNaV1.5 currents within 24 h; CF-nMVs did not elicit any response. Both the CB- and CF-nMV preparations evoked single-channel activity on the planar lipid bilayer while retaining sensitivity to lidocaine application. Our findings suggest a high usability of the quick-synthesis CF-nMVs and maintenance-free CB-nMVs as ready-to-use tools for in-vitro analysis of electrogenic membrane proteins and large, voltage-gated ion channels.
Late-type stars are by far the most frequent stars in the universe and of fundamental interest to various fields of astronomy – most notably to Galactic archaeology and exoplanet research. However, such stars barely change during their main sequence lifetime; their temperature, luminosity, or chemical composition evolve only very slowly over the course of billions of years. As such, it is difficult to obtain the age of such a star, especially when it is isolated and no other indications (like cluster association) can be used. Gyrochronology offers a way to overcome this problem.
Stars, just like all other objects in the universe, rotate, and the rate at which a star rotates impacts many aspects of its appearance and evolution. Gyrochronology leverages the observed rotation rate of a late-type main sequence star and its systematic evolution to estimate the star's age. Unlike the above-mentioned parameters, the rotation rate of a main sequence star changes drastically throughout its main sequence lifetime: stars spin down. The youngest stars rotate once every few hours, whereas much older stars rotate only about once a month or, in the case of some late M-stars, once in a hundred days. Given that this spindown is systematic (with an additional mass dependence), it gave rise to the idea of using the observed rotation rate of a star (and its mass or a suitable proxy thereof) to estimate the star's age. This has been explored widely in young stellar open clusters but remains essentially unconstrained for stars older than the Sun, and for K and M stars older than 1 Gyr.
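The principle can be illustrated with the classical Skumanich relation, P ∝ t^1/2, anchored here to the Sun. Real gyrochronology relations add a mass/color dependence and empirically calibrated exponents, so this one-parameter form is only a toy model of the idea, not the calibration developed in the thesis.

```python
def skumanich_age(period_days, period_ref_days=26.0, age_ref_gyr=4.57):
    """Toy gyrochronology age from the Skumanich relation P ∝ t^0.5.

    Inverting P ∝ t^0.5 gives t = t_ref * (P / P_ref)^2, calibrated here
    to the Sun (P ≈ 26 d at 4.57 Gyr). Illustration only: real relations
    include a mass/color term and fitted exponents.
    """
    return age_ref_gyr * (period_days / period_ref_days) ** 2
```

A star rotating half as fast as the Sun would come out four times younger under this toy relation, which shows why measured periods can serve as age proxies at all.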
This thesis focuses on the continued exploration of the spindown behavior to assess whether gyrochronology remains applicable for stars of old ages, whether it is universal for late-type main sequence stars (including field stars), and to provide calibration mileposts for spindown models. To accomplish this, I have analyzed data from the Kepler space telescope for the open clusters Ruprecht 147 (2.7 Gyr old) and M 67 (4 Gyr old). Time series photometry data (light curves) were obtained for both clusters during Kepler's K2 mission. However, due to technical limitations and telescope malfunctions, extracting usable data from the K2 mission to identify (especially long) rotation periods requires extensive data preparation.
For Ruprecht 147, I compiled a list of about 300 cluster members from the literature and adopted preprocessed light curves from the Kepler archive where available. These had been cleaned of the worst data artifacts but still contained systematics. After correcting them for said artifacts, I was able to identify rotation periods in 31 of them.
For M 67, more effort was required. My work on Ruprecht 147 had shown the limitations imposed by the preselection of Kepler targets. Therefore, I used the time series of full-frame images directly and performed photometry at a much higher spatial resolution to obtain data for as many stars as possible. This also meant dealing with the ubiquitous artifacts in Kepler data. For that, I devised a method that correlates the artificial flux variations with the ongoing drift of the telescope pointing in order to remove them. This process was a great success, and I was able to create light curves whose quality matches and even exceeds that of those created by the Kepler mission, all while operating at higher spatial resolution and processing fainter stars. Ultimately, I identified signs of periodic variability in the created light curves of 31 and 47 stars in Ruprecht 147 and M 67, respectively. My data connect well to bluer stars of clusters of the same age and extend for the first time to stars redder than early-K and older than 1 Gyr. The cluster data show a clear flattening in the distribution for Ruprecht 147 and even a downturn for M 67, resulting in a somewhat sinusoidal shape. With that, I have shown that the systematic spindown of stars continues at least until 4 Gyr and that stars continue to live on a single surface in age-rotation period-mass space, which allows gyrochronology to be used at least up to that age. However, the shape of the spindown, as exemplified by the newly discovered sinusoidal shape of the cluster sequence, deviates strongly from expectations.
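The drift-correlation correction can be sketched as a regression of the raw flux on a low-order polynomial in the pointing offsets, which is then divided out. This is a simplified stand-in for the method described above; the function name, polynomial form, and interface are assumptions.

```python
import numpy as np

def detrend_by_drift(flux, dx, dy, deg=2):
    """Remove pointing-drift systematics by regressing flux on centroid drift.

    flux: raw light curve; dx, dy: detector-position offsets per cadence.
    Fits a low-order 2D polynomial in (dx, dy) by least squares and divides
    it out, returning a normalized, drift-corrected light curve.
    """
    cols = [np.ones_like(flux)]
    for i in range(1, deg + 1):
        for j in range(i + 1):
            cols.append(dx ** (i - j) * dy ** j)  # all terms of total degree i
    A = np.vstack(cols).T
    coef, *_ = np.linalg.lstsq(A, flux, rcond=None)
    model = A @ coef
    return flux / model  # systematics divided out
```

Any astrophysical signal (such as a rotation period) that is uncorrelated with the drift survives the division, while the pointing-induced trend is removed.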
I then compiled an extensive sample of rotation data in open clusters – very much including my own work – and used the resulting cluster skeleton (with each cluster forming a rib in color-rotation period-mass space) to investigate whether field stars follow the same spindown as cluster stars. For the field stars, I used wide binaries, which – with their shared origin and coevality – are in a sense the smallest possible open clusters. I devised an empirical method to evaluate the consistency between the rotation rates of the wide binary components and found that the vast majority of them are in fact consistent with what is observed in open clusters. This leads me to conclude that gyrochronology – calibrated on open clusters – can be applied to determine the ages of field stars.
The HDI conferences (Hochschuldidaktik Informatik) address the various aspects of computer science education in higher education. Besides general topics such as different forms of teaching and learning, the use of computing systems in university teaching, or questions of attracting suitable students, their acquisition of competencies, and the supervision of students, each HDI also dedicates itself to a focus topic.
In 2021, this was the consideration of diversity in teaching. Topics discussed included, for example, the inclusion of students' particular subject-specific and generic competencies, support for permeability from non-academic professions, the design of inclusive teaching and learning scenarios, aspects of lifelong learning, and teaching systems adapted or adapting to the diversity of students.
This volume contains selected contributions of the 9th conference in 2021 that are particularly representative of the conference and the topics discussed there.
The 16th Herbsttreffen Patholinguistik, with the focus topic »Schnittstelle Alltag: Transfer und Teilhabe in der Sprachtherapie« ("Interface everyday life: transfer and participation in speech-language therapy"), took place on 19 November 2022 as an online event. The Herbsttreffen has been held annually since 2007 by the Verband für Patholinguistik e.V. (vpl), and since 2021 by the Deutscher Bundesverband für akademische Sprachtherapie und Logopädie (dbs), in cooperation with the University of Potsdam. The present proceedings volume contains the talks on the focus topic as well as the poster presentations on further topics from speech-language therapy research and practice.
Complex emulsions are dispersions of kinetically stabilized multiphasic emulsion droplets composed of two or more immiscible liquids that provide a novel material platform for the generation of active and dynamic soft materials. In recent years, the intrinsic reconfigurable morphological behavior of complex emulsions, which can be attributed to the unique force equilibrium between the interfacial tensions acting at the various interfaces, has become of fundamental and applied interest. In particular, biphasic Janus droplets have been investigated as structural templates for the generation of anisotropic precision objects and dynamic optical elements, or as transducers and signal amplifiers in chemo- and biosensing applications. In the present thesis, switchable internal morphological responses of complex droplets, triggered by stimuli-induced alterations of the balance of interfacial tensions, have been explored as a universal building block for the design of multiresponsive, active, and adaptive liquid colloidal systems. A series of underlying principles and mechanisms that influence the equilibrium of interfacial tensions have been uncovered, which allowed the targeted design of emulsion bodies that can alter their shape, bind to and roll on surfaces, or change their geometry in response to chemical stimuli. Combining the unique triggerable behavior of Janus droplets with designer surfactants, such as the stimuli-responsive photosurfactant AzoTAB, resulted for instance in shape-changing soft colloids that exhibited a jellyfish-inspired buoyant motion, holding great promise for the design of biologically inspired active material architectures and transformable soft robotics.
In situ observations of spherical Janus emulsion droplets using a customized side-view microscopic imaging setup, with accompanying pendant drop measurements, disclosed the sensitivity regime of the unique chemical-morphological coupling inside complex emulsions and enabled the recording of calibration curves for the extraction of critical parameters of surfactant effectiveness. The resulting new "responsive drop" method permitted a convenient and cost-efficient quantification and comparison of the critical micelle concentrations (CMCs) and effectiveness of various cationic, anionic, and nonionic surfactants. Moreover, the method allowed insightful characterization of stimuli-responsive surfactants and monitoring of the impact of inorganic salts on the CMC and surfactant effectiveness of ionic and nonionic surfactants. Droplet functionalization with synthetic crown-ether surfactants yielded a synthetically minimal material platform capable of autonomous and reversible adaptation to its chemical environment through different supramolecular host-guest recognition events. Addition of metal or ammonium salts resulted in the uptake of the resulting hydrophobic complexes into the hydrocarbon hemisphere, whereas addition of hydrophilic ammonium compounds such as amino acids or polypeptides resulted in supramolecular assemblies at the hydrocarbon-water interface of the droplets. The multiresponsive material platform enabled interfacial complexation and thus triggered responses of the droplets to a variety of chemical triggers, including metal ions, ammonium compounds, amino acids, antibodies, carbohydrates, as well as amino-functionalized solid surfaces.
In the final chapter, the first documented optical logic gates and combinatorial logic circuits based on complex emulsions are presented. More specifically, the unique reconfigurable and multiresponsive properties of complex emulsions were exploited to realize droplet-based logic gates of varying complexity using different stimuli-responsive surfactants in combination with diverse readout methods. In summary, different designs for multiresponsive, active, and adaptive liquid colloidal systems were presented and investigated, enabling the design of novel transformative chemo-intelligent soft material platforms.
Technologically important, environmentally friendly InP quantum dots (QDs), typically used as green and red emitters in display devices, can achieve exceptional photoluminescence quantum yields (PL QYs) near unity (95-100%) when the state-of-the-art core/shell heterostructure with a ZnSe inner and ZnS outer shell is carefully applied. Nevertheless, this has so far led to only a few industrial applications, such as QD liquid crystal displays (QD–LCDs) built on blue backlight units, even though the functionalizable character of QDs opens up further industrially feasible applications, such as QD light-emitting diodes (QD‒LEDs) and luminescent solar concentrators (LSCs).
Before introducing the main research, the theoretical basis and fundamentals of QDs are described in detail on the basis of quantum mechanics and experimental synthetic results; the concepts of QDs and colloidal QDs, the type-I core/shell structure, transition-metal-doped semiconductor QDs, QD surface chemistry, and their applications (LSCs, QD‒LEDs, and EHD jet printing) are elucidated in turn for better understanding. This doctoral thesis focuses mainly on the connectivity between QD materials and QD devices, based on the synthesis of InP QDs composed of an inorganic core (the core/shell heterostructure) and an organic shell (the surface ligands on the QD surface). Regarding the former (the core/shell heterostructure), a ZnCuInS mid-shell is newly introduced as an intermediate layer between a Cu-doped InP core and a ZnS shell for LSC devices. Regarding the latter (the surface ligands), the ligand effects of 1-octanethiol and chloride ions are investigated with respect to device stability in QD‒LEDs and printability in an electrohydrodynamic (EHD) jet printing system; here the behavior of the surface ligands is explored on the basis of a proton-transfer mechanism on the QD surface.
Chapter 3 demonstrates the synthesis of strain-engineered, highly emissive Cu:InP/Zn–Cu–In–S (ZCIS)/ZnS core/shell/shell heterostructure QDs via a one-pot approach. When this unconventional combination of a ZCIS/ZnS double-shelling scheme is applied to a series of Cu:InP cores of different sizes, the resulting Cu:InP/ZCIS/ZnS QDs, with a tunable near-IR PL range of 694–850 nm, yield the highest PL QYs reported to date (71.5–82.4%). These outcomes strongly point to the efficacy of the ZCIS interlayer, which effectively alleviates the core/shell interfacial strain, in achieving high emissivity. The presence of such an intermediate ZCIS layer is further examined by comparative size, structural, and compositional analyses. The end of this chapter briefly introduces the research on LSC devices fabricated from Cu:InP/ZCIS/ZnS QDs, which is currently in progress.
Chapter 4 mainly deals with the ligand effect of 1-octanethiol passivation of InP/ZnSe/ZnS QDs in terms of incomplete surface passivation during synthesis. This chapter demonstrates the lack of anionic carboxylate ligands on the surface of InP/ZnSe/ZnS QDs, where zinc carboxylate ligands can be converted to carboxylic acid or carboxylate ligands via proton transfer with 1-octanethiol. The as-synthesized QDs initially have an under-coordinated, vacancy-rich surface, which is passivated by solvent ligands such as ethanol and acetone. Upon exposure of the QD surface to 1-octanethiol, the thiol effectively induces the surface binding of anionic carboxylate ligands (derived from zinc carboxylate ligands) by proton transfer, which consequently displaces the ethanol and acetone ligands bound to the incomplete QD surface. Systematic chemical analyses, such as thermogravimetric analysis-mass spectrometry and proton nuclear magnetic resonance spectroscopy, directly reveal this interplay of surface ligands and relate it to QD light-emitting diodes (QD‒LEDs).
Chapter 5 shows the relation between the material stability of QDs and the device stability of QD‒LEDs through an investigation of surface chemistry and shell thickness. In typical III–V colloidal InP QDs, an inorganic ZnS outermost shell is used to provide stability when overcoated onto the InP core. However, this work presents a faster photodegradation of InP/ZnSe/ZnS QDs with a thicker ZnS shell than with a thin ZnS shell when 1-octanethiol is applied as the sulfur source to form the outermost ZnS shell. Here, 1-octanethiol induces the formation of weakly bound carboxylate ligands via proton transfer on the QD surface, resulting in faster degradation under UV light even though a thicker ZnS shell was grown onto the InP/ZnSe QDs. Detailed insight into the surface chemistry was obtained from proton nuclear magnetic resonance spectroscopy and thermogravimetric analysis–mass spectrometry. Surprisingly, however, the lifetimes of electroluminescence devices fabricated from InP/ZnSe/ZnS QDs with a thick or a thin ZnS shell show the opposite trend to the material stability of the QDs: the QD light-emitting diodes (QD‒LEDs) with thick-shelled QDs maintained their luminance more stably than those with thin-shelled QDs. This study elucidates the degradation mechanisms of the QDs and the QD‒LEDs on the basis of these results and discusses why the material stability of the QDs differs from the lifetime of the QD‒LEDs.
Chapter 6 suggests a method to improve the printability of EHD jet printing when QD materials are applied in QD ink formulations, introducing GaP mid-shelled InP QDs and the role of surface charge in the EHD jet printing technique. In general, a GaP intermediate shell has been introduced into III–V colloidal InP QDs to enhance their thermal stability and quantum efficiency, as in the type-I core/shell/shell heterostructure InP/GaP/ZnSeS QDs. Here, these highly luminescent InP/GaP/ZnSeS QDs were synthesized and applied to EHD jet printing, by which this study demonstrates that unreacted Ga and Cl ions on the QD surface reduce the operating voltage of the cone jet and stabilize cone-jet formation. This result indicates that the GaP intermediate shell not only improves the PL QY and thermal stability of InP QDs but also adjusts the critical flow rate required for cone-jet formation. In other words, the surface charges of quantum dots can play a significant role in forming the cone apex in the EHD capillary nozzle. For an industrially convenient validation of the surface charges on the QD surface, zeta-potential analyses of the QD solutions were performed as a simple method, along with inductively coupled plasma optical emission spectrometry (ICP-OES) for elemental composition.
Beyond the generation of highly emissive InP QDs with narrow FWHM, these studies address the connection between QD materials and QD devices, not only to provide a vital jumping-off point for industrially feasible applications but also to reveal, from chemical and physical standpoints and both experimentally and theoretically, the origins that obstruct the improvement of device performance.
Facing the environmental crisis, new technologies are needed to sustain our society. In this context, this thesis aims to describe the properties and applications of carbon-based sustainable materials. In particular, it reports the synthesis and characterization of a wide set of porous carbonaceous materials with high nitrogen content obtained from nucleobases. These materials are used as cathodes for Li-ion capacitors, and a major focus is put on the cathode preparation, highlighting the oxidation resistance of nucleobase-derived materials. Furthermore, their catalytic properties for acid/base and redox reactions are described, pointing to the role of nitrogen speciation on their surfaces. Finally, these materials are used as supports for highly dispersed nickel loading, activating the materials for carbon dioxide electroreduction.
Functional materials, also called "smart materials", are characterized by their ability to fulfill a desired task through targeted interaction with their environment. Owing to this functional integration, such materials are of increasing interest, especially in areas requiring the progressive miniaturization of components. Modern manufacturing processes (e.g., microfluidics) and the availability of a wide variety of functional materials (e.g., shape-memory materials) now enable the production of particle-based switching components. This category includes micropumps and microvalves, whose basic function is the active control of liquid flows. The approach to realizing such microcomponents pursued in this work enables variable size-switching of water-filled microballoons by implementing a stimulus-sensitive switching motif in the capsule's membrane shell while the capsule is under the influence of a constant driving force. The switching motif, with its gatekeeper function, has a critical influence on one or more material parameters that modulate the capsule's resistance against the driving force during microballoon expansion. The advantage of this concept is that even non-variable analyte conditions, such as constant ion concentrations, can be exploited to generate external force fields that, under the control of the membrane, inflate the microballoon by an osmotically driven water influx. With osmotic pressure gradients as the driving force for capsule expansion, the material parameters associated with the gatekeeper function are specifically the permeability and the mechanical stiffness of the shell material. While a modulation of the shell permeability could be used to kinetically impede the water influx on long time scales, a modulation of the shell's mechanical stiffness might even completely prevent capsule inflation through non-deformability below a certain threshold pressure.
In polymer networks, which are a suitable material class for the required capsule shell because of their excellent elasticity, both the permeability and the mechanical properties are strongly influenced by the crystallinity of the material. Since the permeability is effectively reduced with increasing crystallinity while the mechanical stiffness is simultaneously greatly increased, both effects point in the same direction in terms of their functional relationship. For this reason, and because the membrane crystallinity can be modulated reversibly and contactlessly by heat input, crystallites may be suitable switching motifs for controlling capsule expansion. As the second design element of reversibly expandable microballoons, the capsule geometry, defined by an aqueous core enveloped by the temperature-sensitive polymer-network membrane, should allow an osmotic pressure gradient across the membrane layer. The strength of the inflation pressure, and the associated inflation velocity upon membrane melting, should be controlled by the salt concentration within the aqueous core, while a reversal of the osmotic gradient should furthermore allow the reverse process of capsule deflation. It should therefore be possible to build either microvalves or micropumps, with the intended action of pumping or valving determined by the state of expansion and the direction of the osmotic pressure gradient. Microballoons of approximately 300 µm in diameter were formed via droplet-based microfluidics from double-emulsion templates (w/o/w). The elastomeric capsule membrane was formed by photo-crosslinking of methacrylate (MA)-functionalized oligo(ε-caprolactone) precursors (≈ 3.8 MA arms, Mn ≈ 12000 g mol-1) within the organic medium layer (o) via UV exposure after droplet formation.
After removal of the toluene/chloroform mixture by slow extraction via the continuous aqueous phase, the capsules solidified, developing a characteristic "mushroom"-like shape under specific experimental conditions (e.g., λ = 308 nm, 57 mJ·s-1·cm-2, 16 min). It was furthermore shown that, depending on the process parameters oligomer concentration and curing time, spherical capsules were also accessible. Long curing times and high oligomer concentrations at a fixed light intensity favored the formation of "mushroom"-like capsules, whereas the opposite led to spherically shaped capsules. A comparative study on thin polymer-network films of the same composition and equal treatment established a correlation between the films' crosslink density and their contraction capability, with more strongly crosslinked polymer networks showing a stronger contraction after solvent removal. Combined with light-microscopy observations during capsule solidification, where a continuous shape evolution from almost spherical crosslinked templates to "mushroom"-shaped solidified capsules was observed, the following mechanism was proposed. At low oligomer contents and short curing times, the contraction of the capsule shell during solvent removal is strongly diminished owing to a low degree of crosslinking; the solidifying shell can therefore freely collapse onto the aqueous core. In the other case, high oligomer concentrations and long curing times favor the formation of highly crosslinked capsule membranes with a strong contraction capability. Owing to the observed off-center location of the aqueous core within the swollen polymer network, an uneven radial stress along the capsule's circumference is exerted on the incompressible core. This led to an uneven contraction during solvent removal and a directed flow of the core fluid toward the minimal stress vector.
In consequence, the initially thicker spherical cap contracts, whereas the opposing thinner spherical cap gets stretched. The "mushroom" shape offers several advantages over its spherical counterpart, which is why it was selected for the further experiments. Besides the necessity of a high crosslink density for extraordinary elasticity and toughness, the shape anisotropy promotes faster microballoon expandability owing to a partial reduction of the membrane thickness. Additionally, pre-stretched regions of small thickness might provide better resistance against the inflation pressure than spherical but non-stretched capsules of equal membrane thickness. The resulting "mushroom"-shaped microcapsules exhibited a melting point of Tm ≈ 50-60 °C and a degree of crystallinity of Xc ≈ 29-38 %, depending on the membrane thickness and internal salt content; this is slightly lower than for the non-crosslinked oligomer and is attributed to limited chain mobility upon crosslinking. Nonetheless, the melting transition of the polymer network was associated with a strong drop in its mechanical stiffness, which was shown to strongly influence the osmotically driven expansion of the microcapsules. Capsules subjected to osmotic pressures between 1.5 and 4.7 MPa did not expand when the temperature was well below the melting point of the capsule membrane, i.e., at room temperature. In contrast, a continuous expansion, asymptotically approaching a final capsule size, was observed when the temperature exceeded the melting point, i.e., at 60 °C. Microballoons kept for 56 days at ∆Π = 1.5 MPa and room temperature did not change significantly in diameter, which is why the impact of the mechanical stiffness on the expansion behavior is considered greater than the influence of the shell permeability.
The time-resolved expansion behavior of the microballoons above their Tm was subsequently modeled using diffusion equations corrected for shape anisotropy and elastic restoring forces. A shape-related, expansion-dependent prefactor was used to dynamically account for the influence of shell-thickness differences along the circumference on the inflation velocity, whereas the microballoon's elastic contraction upon inflation was captured by including a hyperelastic constitutive model. An important finding of this model was the pronounced increase in inflation velocity compared to hypothetical capsules with a homogeneous shell thickness, which underlines the benefit of employing shape-anisotropic, balloon-like capsules in this study. Furthermore, the model was able to predict the finite expandability on the basis of entropy-elastic recovery forces and strain-hardening effects. A comparison of six microballoons with different shell thicknesses and internal salt contents showed the linear relationship between the volumetric expansion, the shell thickness, and the applied osmotic pressure, as represented by the model. Since the proposed model facilitates the prediction of the expansion kinetics depending on the membrane's mechanical and diffusional characteristics, it might serve as a screening tool for future material selection. In the course of the microballoon expansion process, capsules of intermediate diameter could be fixed by recrystallization of the membrane, which mainly restores the membrane's mechanical stiffness and is otherwise difficult to achieve with other stimuli-sensitive systems. The crystallinity of capsules in intermediate expansion states was nearly unchanged, whereas the lamellar crystal size tended to decrease with the expansion ratio. It was therefore assumed that the elastic modulus was only minimally altered and might even have increased owing to the extension of the network's chain segments.
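The general structure of such a model, i.e. a water influx driven by the net of osmotic and elastic membrane pressure, can be sketched in a strongly simplified form. The sketch below assumes a spherical thin-shell membrane and a Gent-type strain-hardening law as a stand-in for the thesis's anisotropy-corrected hyperelastic model; all parameter values are illustrative, not fitted data:

```python
import numpy as np

# Illustrative parameters (assumed, not thesis values)
R0 = 150e-6        # initial radius (m), ~300 um capsule
H0 = 10e-6         # initial shell thickness (m)
G  = 5e4           # shear modulus of the molten network (Pa)
JM = 30.0          # Gent strain-hardening limit (dimensionless)
LP = 1e-12         # effective hydraulic permeability (m s^-1 Pa^-1)
DPI = 1.5e6        # osmotic pressure difference (Pa)

def shell_pressure(lam):
    """Inflation pressure of a thin spherical Gent shell at stretch lam = R/R0.
    The denominator diverges as the strain-hardening limit is approached,
    which is what makes the expandability finite."""
    i1 = 2.0 * lam**2 + lam**-4          # first strain invariant, equibiaxial
    if i1 - 3.0 >= JM:                   # limiting stretch reached
        return float("inf")
    return 2.0 * G * H0 / R0 * (lam**-1 - lam**-7) / (1.0 - (i1 - 3.0) / JM)

def inflate(t_end, dt=0.2):
    """Explicit-Euler integration of dR/dt = Lp * (dPi - P_shell),
    clamped so the capsule never shrinks below its current size."""
    r = R0
    for _ in range(int(t_end / dt)):
        net = DPI - shell_pressure(r / R0)
        r += LP * max(net, 0.0) * dt
    return r

print(inflate(600.0) / R0)  # stretch grows, then approaches a finite limit
```

This reproduces only the qualitative behavior described above (continuous inflation with asymptotic approach to a final size); the actual model additionally carries the shape-anisotropy prefactor for the inhomogeneous shell thickness.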
In addition to the volume increase achieved by inflation, a reversal of the osmotic gradient also enabled reversible deflation, which was demonstrated in inflation/deflation cycles. Both of these characteristics of the introduced microballoons are important parameters for the realization of micropumps and microvalves. The fixation of expanded microcapsules via recrystallization enabled the storage of entropy-elastic strain energy, which could be utilized for pumping actions in non-aqueous media. Here, the pumping velocity depended both on the type of surrounding medium and on the applied temperature. Surrounding media that supported the fast transport of the pumped liquid showed an accelerated deflation, while high temperatures further accelerated the pumping velocity. Very fast ejection of the incorporated payload was furthermore realized with pierced expanded microballoons subjected to temperatures above their Tm. The possible fixation of intermediate particle sizes provides opportunities for vent constructions that allow the precise adjustment of specific flow rates and multiple valve openings and closings. A valve construction was realized by inserting one or more microballoons into a microfluidic channel. Complete and partial closing of the microballoon valves was demonstrated as a function of the heating period. In this context, a difference between the inflation and deflation velocities was noted, with the expansion kinetics being the slower of the two. Overall, microballoons that provide both on-demand pumping and reversible valving via a temperature-triggered change in capsule volume might be suitable components for the design of fully integrated lab-on-a-chip (LOC) devices, owing to the implemented control switch and the controllable inflation/deflation kinetics.
Compared with other state-of-the-art stimuli-sensitive materials, the microballoons' capability of stabilizing almost any intermediate capsule size by simple recrystallization of the membrane must be highlighted.
Reflection is a key category for the professional development of teachers and is anchored as a training goal in the educational standards for teacher education. Consolidating university-based research and modeling in practice-oriented application in the school context offers potential for sustainable professionalization. Strengthening reflection-related competencies through empirical research and application appears to be a cross-phase challenge of teacher education that must be met. The aims of the conference volume Reflexion in der Lehrkräftebildung are a theoretical sharpening of the concept of "reflexive professionalization" and an exchange on questions of embedding effective reflection-related learning opportunities in teacher education. Researchers and lecturers from the three phases of teacher education (university studies, the induction phase, and continuing education) present and discuss teaching concepts and research projects on reflection in teacher education. Together with participants from all phases and from various teacher-education institutions, future challenges are identified and possible solutions developed.
The subject-specific training of the induction phase in the state of Brandenburg offers teacher candidates (Lehramtskandidat:innen, LAK) joint subject-group work, individual subject consultations, and classroom observations as supporting measures in their development process. To make this professionalization process visible and to enable the development and pursuit of target perspectives, two practical tools for reflection and diagnosis were developed from a spider chart.
(1) Reflection tool: Transferring a tabular competency profile (Arnold & Iffert after MBJS [Ministerium für Bildung, Jugend und Sport des Landes Brandenburg], 2014) into the spider chart offers the LAK a low-threshold opportunity for continuous, process-accompanying self-reflection. Self-perception of positive developments can strengthen self-efficacy while at the same time raising awareness of individual challenges. Mapping individual development tasks and professionalization needs enables a needs-oriented design of the subject-group work.
(2) Diagnostic tool: Analogously, transferring the observation criteria of the MBJS (2014) into the chart gives subject trainers a clear overview of individual teaching situations for diagnosing the basic coordinates of instruction. In this way, subject trainers can identify possible blind spots and give feedback on the selection of observation criteria. In addition, aspects for the design of subject consultations and bases for discussion in group observations emerge.
School-based practical phases are an important practice-oriented learning opportunity in teacher-education programs, as they offer room for extensive reflection on one's own learning experience. The theoretical-formal knowledge acquired during university studies here stands opposite practical knowledge and skills. Given the professional development during the induction phase, especially in the competency area of teaching, it can be concluded that reflection on primarily subject-specific aspects among university students widens, during the induction phase, to reflection on primarily interdisciplinary and pedagogical aspects. Through the analysis of N = 55 written reflections on other teachers' lessons by prospective physics teachers from university studies and the induction phase, this hypothesis was supported for the area of lesson analysis and reflection. Furthermore, a workshop offering for teachers in the second and third phases of teacher education was developed, tested, and evaluated on the basis of the video vignette.
High growth firms (HGFs) are important for job creation and considered to be precursors of economic growth. We investigate how formal institutions, like product- and labor-market regulations, as well as the quality of regional governments that implement these regulations, affect HGF development across European regions. Using data from Eurostat, OECD, WEF, and Gothenburg University, we show that both regulatory stringency and the quality of the regional government influence the regional shares of HGFs. More importantly, we find that the effect of labor- and product-market regulations ultimately depends on the quality of regional governments: in regions with high quality of government, the share of HGFs is neither affected by the level of product market regulation, nor by more or less flexibility in hiring and firing practices. Our findings contribute to the debate on the effects of regulations by showing that regulations are not, per se, “good, bad, and ugly”, rather their impact depends on the efficiency of regional governments. Our paper offers important building blocks to develop tailored policy measures that may influence the development of HGFs in a region.
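The moderation logic described above, that the effect of regulation on regional HGF shares depends on government quality, corresponds to an interaction term in a regression model. The sketch below illustrates this with simulated data and ordinary least squares via NumPy; the variable names and coefficient values are illustrative assumptions, not the Eurostat/OECD/WEF/Gothenburg data or estimates of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
reg = rng.normal(size=n)          # regulatory stringency (standardized)
qog = rng.normal(size=n)          # quality of government (standardized)
# Assumed data-generating process: regulation hurts HGF shares only
# where government quality is low.
hgf = 2.0 - 0.5 * reg + 0.3 * qog + 0.5 * reg * qog \
      + rng.normal(scale=0.1, size=n)

# OLS with an interaction term: hgf ~ 1 + reg + qog + reg:qog
X = np.column_stack([np.ones(n), reg, qog, reg * qog])
beta, *_ = np.linalg.lstsq(X, hgf, rcond=None)
b0, b_reg, b_qog, b_int = beta

# Marginal effect of regulation at a given level of government quality:
effect_low_qog = b_reg + b_int * (-1.0)   # strongly negative
effect_high_qog = b_reg + b_int * (+1.0)  # near zero
print(effect_low_qog, effect_high_qog)
```

The estimated marginal effect of regulation is negative where government quality is low and vanishes where it is high, which is the qualitative pattern the paper reports for European regions.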
The Königlich Preußische Seehandlung, after which today's "Stiftung Preußische Seehandlung" is named, has a long and varied history. This volume, published on the occasion of the foundation's anniversary, looks at the founding constellation of 1772, when King Frederick II sought to promote trade and industry in Prussia. It traces the activities of the men at the head of the Seehandlung, such as finance minister Carl August von Struensee and the entrepreneurially minded career official Christian Rother. The Seehandlung's building was rebuilt after 1900 and today lives on as part of the Berlin-Brandenburg Academy on Gendarmenmarkt. In the 19th century, the Seehandlung received ambivalent judgments from its contemporaries. An outlook on the history of the Stiftung Preußische Seehandlung since 1983 shows its commitment to promoting art and culture as its central task.
For the development of prospective teachers' professional competencies, lesson reflection is an important instrument for relating theoretical knowledge to practical experience. The evaluation of lesson reflections and the corresponding feedback, however, pose practical as well as theoretical challenges for researchers and lecturers. Methods developed in the context of research on artificial intelligence (AI) offer new potential here. This contribution gives an overview of two sub-studies that use AI methods such as machine learning to investigate the extent to which lesson reflections by prospective physics teachers can be evaluated on the basis of a theoretically derived reflection model and automated feedback on them can be given. Different machine-learning approaches were used to implement model-based classification and the exploration of topics in lesson reflections. The accuracy of the results was increased above all by so-called large language models, which also enable transfer to other institutions and subjects. For subject-didactics research, however, these in turn pose new challenges, such as systematic biases and the opacity of decisions. Nevertheless, we recommend exploring the potential of AI-based methods more thoroughly and implementing them consistently in practice (for example, in the form of web applications).
Algorithms as Lecturers?
(2023)
Tools based on machine learning have long since found their way into our everyday lives, and first applications have also been developed, tested, and evaluated in teacher education. In the physics-education sub-project of focus area 2, "Schulpraktische Studien", automated analysis methods (Wulff et al., 2020) were developed on the basis of a framework model for reflection (Nowak et al., 2019) and found their way into university subject-didactics teaching (Mientus et al., 2021a). The project demonstrated and consolidated the potential of AI-based support and identified specific challenges. This contribution outlines selected application possibilities and further research from the perspective of the acceptance of computer-supported teaching.
Reflection is unquestionably one of the most important words in the context of teacher education. Firmly anchored in Germany's educational standards, the search for evidence and the support of (prospective) teachers are, in research and teaching alike, a constant driving force for countless actors in all phases of teacher education. Although conceptual ambiguities do not always make the communication of research results intuitive or the support in teaching practicable, there is agreement that a discourse on the reflexive professionalization of teachers must be conducted. For this reason, the two QLB projects PSI-Potsdam of the University of Potsdam and K2teach of the Freie Universität Berlin held the online conference "Reflexion in der Lehrkräftebildung. Empirisch – Phasenübergreifend – Interdisziplinär" from 5 to 7 October 2022. Coming from a wide range of disciplines, actors from all phases of teacher education and from different institutions discussed results of empirical studies and experiences from working with (prospective) teachers. Contributions to the conference are recorded in this book and are to be understood as a snapshot of a constantly evolving field. With this snapshot, researchers and lecturers can take up impressions for their own work and develop them further.
The core responsibilities of municipalities in Germany include providing public services, public transport, economic development, access to adequate broadband infrastructure, health and social care, and access to cultural life. At the same time, municipalities in rural regions face numerous societal, economic, social, and political challenges. Novel approaches and innovative actors and networks are therefore welcomed by municipalities as a response to these challenges in the context of creating social or digital innovations, but they also encounter barriers in some cases.
In this edited volume, the editors examine how digital forerunners, whom we call "Digitale Pioniere" (digital pioneers), network in rural regions in order to make a positive contribution to rural regional development. The focus lies mainly on the level of municipal politics and on the question of how digital pioneers act as key actors in rural governance. The findings on municipal governance were developed on the basis of rural study subregions in Baden-Württemberg and Mecklenburg-Vorpommern within the research project "DigPion – Digitale Pioniere in der ländlichen Regionalentwicklung" (2020–2023). Finally, the volume examines how the findings and the recommendations for action derived from them can be transferred to the federal state of Brandenburg.
Within the PSI project, a course was designed to give student teachers in-depth insight into the research process as well as the opportunity to work on an experimental research task of their own. The impetus was the inclusion of "knowledge about the generation of scientific knowledge in the discipline" in the PSI model of "extended content knowledge for the school context," together with findings from empirical studies showing the relevance of first-hand research experience for teaching scientific inquiry processes. Here we present a new course (4 hours per week; seminar and laboratory practicum) that provides prospective teachers with research experience. The course conveys insights into research and the nature of science, enables students to conduct their own scientifically and school-relevant experiments, and provides for appropriate reflection on the various course elements. The evaluation results are predominantly positive, but also show that perceived school relevance and subject-didactic aspects are important criteria for the students' positive assessment.
In the 19th century, encyclopedias for general readers (Konversationslexika) were, as the name suggests, intended to enrich conversation in salons and clubs with information. The individual articles emphasized precision, accuracy, and verifiability so that readers could also form their own judgment. The "Seehandlungs-Societät in Preußen," or "Seehandlung, preußische," as it appears in German encyclopedias, transformed itself into a state bank over the course of the 19th century. In the first half of the 19th century, the encyclopedias' judgments were mostly negative: the Seehandlung appeared as a catastrophic aberration of economic policy. A special role was played by its president Christian (von) Rother, who had turned the Seehandlung into an independent enterprise. The transformation of general encyclopedias in the second half of the 19th century also changed the view of the Seehandlung. The bank's business was highlighted positively, and assessments referred to statistics and balance sheets. The focus shifted from the Seehandlung's leading figures to the struggle over trade monopolies and to the Prussian Landtag as a public forum. The devastating verdict of the first half of the 19th century had given way to a differentiated assessment of the bank's activities.
Carbon dioxide removal (CDR) from the atmosphere is becoming an important option for achieving net-zero climate targets. This paper develops a welfare and public economics perspective on optimal policies for carbon removal and storage in non-permanent sinks such as forests, soil, oceans, wood products, or chemical products. We derive a new metric for the valuation of non-permanent carbon storage, the social cost of carbon removal (SCC-R), which also embeds the conventional social cost of carbon emissions. We show that the contribution of CDR is to create new carbon sinks that should be used to reduce transition costs, even if the stored carbon is eventually released to the atmosphere. Importantly, CDR does not raise the ambition of optimal temperature levels unless initial atmospheric carbon stocks are excessively high. For high initial atmospheric carbon stocks, CDR makes it possible to reduce the optimal temperature below initial levels. Finally, we characterize three different policy regimes that ensure an optimal deployment of carbon removal: downstream carbon pricing, upstream carbon pricing, and carbon storage pricing. The policy regimes differ in their informational and institutional requirements regarding monitoring, liability, and financing.
From the contents:
- Human rights lawsuits before civil courts in Germany: taking stock of the methodological and legal-policy approaches in private international law (PIL)
- The precautionary principle: an underestimated component of human-rights-based climate litigation?
- The United Nations Human Rights Committee and the climate crisis: the decision in Billy et al. v. Australia and its contribution to the "greening" of human rights protection
Recent years have been affected by Covid-19 and the international emergency mechanism for dealing with health-related threats. The effects of this period manifested differently worldwide, depending on matters such as international relations, national policies, and power dynamics. Additionally, the impact of this time will likely have long-term effects which are yet to be known. This paper gives a critical overview of the Public Health Emergency of International Concern (PHEIC) mechanism in the context of Covid-19. It does so by explaining the legal framework for states of emergency, specifically in the context of a PHEIC, while considering its restrictions and limitations on human rights. It further outlines issues in the manifestation of global protections of and limitations on human rights during Covid-19. Lastly, considering the likelihood of future PHEICs and the known systemic obstructions, this paper offers ways to improve this mechanism from a holistic, non-zero-sum perspective.
Fontanes Medien
(2023)
Theodor Fontane was, in a thoroughly modern sense, a media worker: as a press agent in London he came to know the most innovative press landscape of his time; as an editor in Berlin he did the heavy lifting of day-to-day journalism; he wrote criticism on theater, the visual arts, and literature; and his novels, like his travel books, were always media products, placed as serials in newspapers and magazines before they appeared on the book market.
This volume documents the results of an international conference held in 2019 by the Theodor-Fontane-Archiv in Potsdam. The rapid and comprehensive mediatization and interconnection of society over the course of the 19th century is understood here as a productive precondition of Fontane's work as a writer. Embedded in a widely branching network of correspondence and postal text circulation, familiar with the routines and audiences of the periodical mass press for which he wrote all his life, and shaped in manifold ways by the visual culture of his time, Theodor Fontane emerges as a border crosser who was as journalistically adept as he was aesthetically sensitive.
No other actor shaped the first decades of the Prussian Seehandlung as much as Carl August von Struensee. As its director and then as Prussian finance minister, between 1782 and his death in 1804 he decisively initiated the long transformation of the Seehandlung from a royal wax and salt monopoly into a state bank, a process that would not be completed until the 20th century. This contribution situates Struensee as an economic theorist in the Enlightenment's economic discourses between physiocracy and early liberalism, and portrays him as a fiscal policymaker with a consistently European horizon of action against the background of accelerating global and colonial great-power competition.
Vorwort
(2023)
The Königlich Preußische Seehandlung, after which today's "Stiftung Preußische Seehandlung" is named, has a long and varied history. This volume, published on the occasion of the foundation's anniversary, looks back at the founding constellation of 1772, when King Friedrich II sought to promote trade and industry in Prussia. It traces the activities of the men at the head of the Seehandlung, such as finance minister Carl August von Struensee and the entrepreneurially minded career civil servant Christian Rother.
The Seehandlung's building was newly constructed after 1900 and remains a living presence today as part of the Berlin-Brandenburg Academy of Sciences and Humanities on Gendarmenmarkt. In the 19th century, the Seehandlung received ambivalent judgments from its contemporaries. A look at the history of the Stiftung Preußische Seehandlung since 1983 shows its commitment to supporting art and culture as its central mission.
From the contents:
- Dimensions of power: a consideration of the ethical professional code, professional stance, and system-inherent dilemmas in the Hungarian child protection system
- Protocol No. 12 to the ECHR: opportunities and potential of a general and comprehensive prohibition of discrimination
- The current state of, and current questions concerning, the human rights protection of LGBTQI+ persons
- Questions of just distribution: a human rights analysis of allocation, using the example of COVID-19 vaccines for the elderly
- Extraterritorial Constitutional Rights: A Comparative Case Study of the United States and Germany
- Report on the activities of the United Nations Human Rights Committee in 2022 – Part II: Individual Communications
Recent research suggests that design thinking practices may foster the development of the capabilities needed in new digitalised landscapes. However, existing publications represent individual contributions, and we lack a holistic understanding of the value of design thinking in a digital world. No review to date has offered a holistic retrospective of this research. In response, in this bibliometric review, we aim to shed light on the intellectual structure of the multidisciplinary design thinking literature related to capabilities relevant to the digital world in higher education and business settings, highlight current trends, and suggest further studies to advance theoretical and empirical underpinnings. Our study addresses this aim using bibliometric methods (bibliographic coupling and co-word analysis), as they are particularly suitable for identifying current trends and future research priorities at the forefront of the research. Overall, bibliometric analyses of the publications on these topics published in the last 10 years (extracted from the Web of Science database) reveal six trends and two possible future research developments, highlighting the expanding scope of the design thinking scientific field related to the capabilities required for the (more sustainable and human-centric) digital world. Relatedly, design thinking becomes a relevant approach to include in higher education curricula and human resources training to prepare students and workers for changing work demands. This paper is well suited for education and business practitioners seeking to embed design thinking capabilities in their curricula, and for design thinking and other scholars wanting to understand the field and possible directions for future research.
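Co-word analysis, one of the bibliometric methods named above, can be illustrated with a minimal sketch: count how often keyword pairs co-occur across publication records and rank the pairs. The keyword lists below are invented for illustration only; a real analysis would draw them from Web of Science records.

```python
from collections import Counter
from itertools import combinations

# Invented keyword lists, one per publication record (illustrative only)
records = [
    ["design thinking", "higher education", "capabilities"],
    ["design thinking", "digital transformation", "capabilities"],
    ["digital transformation", "higher education"],
]

# Co-word analysis: count how often each keyword pair co-occurs within a record
cooccurrence = Counter()
for keywords in records:
    for pair in combinations(sorted(set(keywords)), 2):
        cooccurrence[pair] += 1

# The strongest link approximates a thematic cluster in the co-word network
strongest = cooccurrence.most_common(1)[0]
```

In practice the resulting co-occurrence matrix is fed into clustering or science-mapping software; this sketch only shows the counting step that underlies such maps.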
Extreme flooding displaces an average of 12 million people every year. Marginalized populations in low-income countries are at particularly high risk, but industrialized countries, too, are susceptible to displacement and its inherent societal impacts. The risk of being displaced results from a complex interaction of flood hazard, the population exposed in the floodplains, and socio-economic vulnerability. Ongoing global warming changes the intensity, frequency, and duration of flood hazards, undermining existing protection measures. Meanwhile, settlement in attractive yet hazardous flood-prone areas has led to a higher degree of population exposure. Finally, vulnerability to displacement is altered by demographic and social change, shifting economic power, urbanization, and technological development. These risk components have been investigated intensively in the context of loss of life and economic damage; however, little is known about the risk of displacement under global change.
This thesis aims to improve our understanding of flood-induced displacement risk under global climate change and socio-economic change. This objective is tackled by addressing the following three research questions. First, focusing on the choice of input data, how well can a global flood modeling chain reproduce the flood hazards of historic events that led to displacement? Second, what are the socio-economic characteristics that shape vulnerability to displacement? Finally, to what degree has climate change potentially contributed to recent flood-induced displacement events?
To answer the first question, a global flood modeling chain is evaluated by comparing simulated flood extent with satellite-derived inundation information for eight major flood events. A focus is set on the sensitivity to different combinations of the underlying climate reanalysis datasets and the global hydrological models that serve as input for the global hydraulic model. An evaluation scheme of performance scores shows that simulated flood extent is mostly overestimated when flood protection is not considered, and depends on the choice of global hydrological model for only a few events. Results are more sensitive to the underlying climate forcing, with two datasets differing substantially from a third. In contrast, incorporating flood protection standards results in an underestimation of flood extent, pointing to potential deficiencies in the protection level estimates or the flood frequency distribution within the modeling chain.
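Performance scoring of this kind, comparing simulated flood extent against satellite-derived inundation, is commonly done with contingency-table metrics such as the hit rate and the critical success index. The sketch below uses tiny hypothetical binary masks, not the thesis's actual evaluation code, to show one way such scores are computed:

```python
import numpy as np

def flood_extent_scores(simulated, observed):
    """Contingency-table scores for two binary flood masks of equal shape."""
    sim = np.asarray(simulated, dtype=bool)
    obs = np.asarray(observed, dtype=bool)
    hits = np.sum(sim & obs)            # flooded in both simulation and observation
    false_alarms = np.sum(sim & ~obs)   # simulated flooded, observed dry
    misses = np.sum(~sim & obs)         # simulated dry, observed flooded
    return {
        "hit_rate": hits / (hits + misses),
        "false_alarm_ratio": false_alarms / (hits + false_alarms),
        "csi": hits / (hits + false_alarms + misses),  # critical success index
    }

# Tiny illustrative masks (1 = inundated cell)
sim = np.array([[1, 1, 0], [1, 0, 0]])
obs = np.array([[1, 0, 0], [1, 1, 0]])
scores = flood_extent_scores(sim, obs)
```

An overestimated extent shows up as a high false-alarm ratio alongside a comparatively high hit rate, the pattern described for runs without flood protection.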
Following the analysis of a physical flood hazard model, the socio-economic drivers of vulnerability to displacement are investigated in the next step. For this purpose, a satellite-based, global collection of flood footprints is linked with two disaster inventories to match societal impacts with the corresponding flood hazard. For each event, the affected population, assets, and critical infrastructure, as well as socio-economic indicators, are computed. The resulting datasets are made publicly available and contain 335 displacement events and 695 mortality/damage events. Based on this new data product, event-specific displacement vulnerabilities are determined and multiple (national) dependencies on the socio-economic predictors are derived. The results suggest that economic prosperity only partially shapes vulnerability to displacement; urbanization, infant mortality rate, the share of elderly, population density, and critical infrastructure exhibit a stronger functional relationship, suggesting that higher levels of development are generally associated with lower vulnerability.
Besides examining the contextual drivers of vulnerability, the role of climate change in the context of human displacement is also explored. An impact attribution approach is applied to the example of Cyclone Idai and the associated extreme coastal flooding in Mozambique. A combination of coastal flood modeling and satellite imagery is used to construct factual and counterfactual flood events. This storyline-type attribution method allows the isolated and combined effects of sea level rise and the intensification of cyclone wind speeds on coastal flooding to be investigated. The results suggest that displacement risk has increased by 3.1 to 3.5% due to the total effects of climate change on coastal flooding, with the increase in wind speed being the dominant factor.
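At its core, the storyline comparison reduces to a relative change between the factual and counterfactual impact estimates. The numbers below are invented placeholders chosen only to illustrate the arithmetic; the 3.1-3.5% range reported above comes from the thesis's own simulations.

```python
# Invented displacement counts from a factual run (observed sea level and winds)
# and a counterfactual run with the climate-change contribution removed
factual_displaced = 103_400
counterfactual_displaced = 100_000

# Attributable share of displacement risk, in percent
relative_change = (factual_displaced - counterfactual_displaced) / counterfactual_displaced * 100
```

The same ratio can be computed separately for the sea-level-only and wind-only counterfactuals to isolate each driver's contribution.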
In conclusion, this thesis highlights the potential and challenges of modeling flood-induced displacement risk. While this work explores the sensitivity of global flood modeling to the choice of input data, new questions arise on how to effectively improve the reproduction of flood return periods and the representation of protection levels. It is also demonstrated that disentangling displacement vulnerabilities is feasible, with the results providing useful information for risk assessments, effective humanitarian aid, and disaster relief. The impact attribution study is a first step in assessing the effects of global warming on displacement risk, leading to new research challenges, e.g., coupling fluvial and coastal flood models or attributing other hazard types and displacement events. This thesis is one of the first to address flood-induced displacement risk from a global perspective. The findings motivate further development of the global flood modeling chain to improve our understanding of displacement vulnerability and the effects of global warming.
The Andean Plateau is the second-largest orogenic plateau in the world. It is located in the Central Andes and developed in a non-collisional orogenic system. It extends from southern Peru (15°S) to northern Argentina and Chile (27°30'S). From 24°S southward, the Andean Plateau is known as the Puna and is characterized by a system of endorheic basins and salt flats bounded by mountain ranges. Between 26° and 27°30'S, the Puna reaches its southern limit in a transition zone between a segment of normal subduction and a flat-slab subduction segment that extends to 33°S. Various studies document increased crustal thickness and episodic, diachronous surface uplift, with the plateau reaching its present configuration during the late Miocene. Subsequently, the plateau appears to have experienced a change in deformation style, now dominated by extensional processes, as evidenced by normal faults and normal-faulting earthquakes. However, at the southern margin of the Puna Plateau and in the areas bordering the rest of the orogen, the variation of the stress field is not fully understood, offering an excellent opportunity to evaluate how the stress field can evolve during orogenic development and how it may be affected by the presence or absence of an orogenic plateau, as well as by structural anisotropies specific to each morphotectonic unit.
This thesis investigates the relationship between shallow crustal deformation and the spatio-temporal evolution of the stress field in the southern sector of the Andean Plateau during the late Cenozoic. The study combines uranium-lead (U-Pb) radiometric dating; analysis of mesoscopic faults to derive stress tensors and constrain the orientation of the principal stress axes; analysis of the anisotropy of magnetic susceptibility in sedimentary and volcaniclastic rocks to estimate shortening directions or sediment transport directions; kinematic modeling techniques to approximate the deep crustal structures associated with the deformation recorded there; and morphometric analysis to identify geomorphic indicators of deformation resulting from Quaternary tectonic activity.
Combining these results with previously documented evidence, the study reveals a complex variation of the stress field characterized by changes in the orientation of, and vertical permutations among, the principal stress axes within each deformation regime over the last ~24 Ma. The evolution of the stress field can be temporally associated with three orogenic phases involved in the evolution of the Central Andes at this latitude: (1) a first phase with a compressive stress regime of E-W shortening, documented in the area from the Eocene and late Oligocene to the middle Miocene, which coincides with the phase of Andean construction, crustal thickening and growth, and topographic uplift; (2) a second phase characterized by a strike-slip stress regime from ~11 Ma at the western margin, and by compression and strike-slip from ~5 Ma at the eastern margin of the Puna Plateau, together with a compressive stress regime in Famatina and the Sierras Pampeanas, interpreted as a transition between Neogene orogenic construction and the maximum accumulation of deformation and topographic uplift of the Puna Plateau; and (3) a third phase, after ~5-4 Ma, in which the regime is characterized by strike-slip faulting in the Puna, at its western margin, and at its eastern margin with the Sierras Pampeanas, interpreted as a stress regime controlled by the crustal thickening developed along the southern margin of the Altiplano-Puna Plateau prior to orogenic collapse. The results show that the plateau margin underwent a transition from a compressive to a strike-slip regime, which differs from the extension documented further north on the Andean Plateau for the same period.
Similar stress changes have been documented during the construction of the Tibetan Plateau, where a predominantly compressive stress regime changed to a strike-slip regime when the plateau had reached about half of its present elevation, and subsequently evolved into an extensional regime between 14 and 4 Ma, when the plateau's altitude exceeded 80% of its present value. This may indicate that strike-slip regimes represent transitional stages between the outer zones of a plateau under compression and the inner zones, where extensional regimes are more likely to occur.
Twenty-four scientists met for the annual Auxological conference held at Krobielowice castle, Poland, to discuss the diverse influences of the environment and of social behavior on growth, following last year's focus on growth and public health concerns (Hermanussen et al., 2022b). Growth and final body size exhibit marked plastic responses to ecological conditions. Among the shortest people are the pygmoid people of Rampasasa, Flores, Indonesia, who still live under highly secluded insular conditions. Genetics and nutrition are usually considered responsible for poor growth in many parts of the world, but evidence is accumulating on the prominent impact of social embedding on child growth. Secular trends not only in height but also in body proportions accompany the secular changes in social, economic, and political conditions, with major influences on the emotional and educational circumstances under which children grow up (Bogin, 2021). Aspects of developmental tempo and of sports were discussed, as was the impact of migration, using the example of women from Bangladesh who grew up in the UK. Child growth was considered in particular from the point of view of strategic adjustments of individual size within the network of its social group. Theoretical considerations on network characteristics were presented and related to the evolutionary conservation of growth-regulating hypothalamic neuropeptides that have been shown to link behavior and physical growth across vertebrate species. New statistical approaches were presented for the evaluation of short-term growth measurements that permit monitoring child growth at intervals of a few days and weeks.