Flexible all-perovskite tandem photovoltaics open up new opportunities for application compared to rigid devices, yet their performance lags behind. Now, researchers show that molecule-bridged interfaces mitigate charge recombination and crack formation, improving the efficiency and mechanical reliability of flexible devices.
X-rays are integral to furthering our knowledge of exoplanetary systems. In this work we discuss the use of X-ray observations to understand star-planet interactions, the mass-loss rates of exoplanet atmospheres, and the study of atmospheric components using future X-ray spectroscopy.
The low-mass star GJ 1151 was reported to display variable low-frequency radio emission, which is an indication of coronal star-planet interactions with an unseen exoplanet. In chapter 5 we report the first X-ray detection of GJ 1151’s corona based on XMM-Newton data. Averaged over the observation, we detect the star with a low coronal temperature of 1.6 MK and an X-ray luminosity of L_X = 5.5 × 10^26 erg/s. This is compatible with the coronal assumptions for a sub-Alfvénic star-planet interaction origin of the observed radio signals from this star.
In chapter 6, we aim to characterise the high-energy environment of known exoplanets and estimate their mass-loss rates. This work is based on the soft X-ray instrument on board the Spectrum Roentgen Gamma (SRG) mission, eROSITA, along with archival data from ROSAT, XMM-Newton, and Chandra. We use these four X-ray source catalogues to derive X-ray luminosities of exoplanet host stars in the 0.2-2 keV energy band. A catalogue of the mass-loss rates of 287 exoplanets is presented, with 96 of these planets characterised for the first time using new eROSITA detections. Of these first-time detections, 14 are of transiting exoplanets that undergo irradiation from their host stars at a level known to cause observable evaporation signals in other systems, making them suitable for follow-up observations.
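The abstract does not spell out the escape formalism used; a widely applied approximation for converting a host star's high-energy output into a planetary mass-loss rate is the energy-limited one, sketched below. The heating efficiency eta, the Roche-lobe factor K, and all planetary parameters are illustrative assumptions, not values from this work.

```python
import math

# Energy-limited atmospheric escape (a common approximation; the
# thesis's exact formalism may differ). All quantities in CGS units.
# eta (heating efficiency) and K (Roche-lobe reduction factor) are
# assumed illustrative values, not numbers from the abstract.

G = 6.674e-8          # gravitational constant [cm^3 g^-1 s^-2]
AU = 1.496e13         # astronomical unit [cm]
R_JUP = 7.149e9       # Jupiter radius [cm]
M_JUP = 1.898e30      # Jupiter mass [g]

def mass_loss_rate(L_x, a, R_p, M_p, eta=0.15, K=1.0):
    """Energy-limited mass-loss rate [g/s].

    L_x : stellar X-ray/XUV luminosity [erg/s]
    a   : orbital distance [cm]
    R_p, M_p : planetary radius [cm] and mass [g]
    """
    F_xuv = L_x / (4.0 * math.pi * a**2)   # flux at the planet's orbit
    return eta * math.pi * R_p**3 * F_xuv / (G * M_p * K)

# hypothetical hot Jupiter around a moderately active star
mdot = mass_loss_rate(L_x=1e28, a=0.05 * AU, R_p=1.4 * R_JUP, M_p=0.7 * M_JUP)
print(f"mass-loss rate: {mdot:.2e} g/s")
```

For these illustrative inputs the rate comes out near 10^10 g/s, the order of magnitude typically quoted for strongly irradiated hot Jupiters.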
In the next generation of space observatories, X-ray transmission spectroscopy of an exoplanet’s atmosphere will be possible, allowing for a detailed look into the atmospheric composition of these planets. In chapter 7, we model sample spectra using a toy model of an exoplanetary atmosphere to predict what exoplanet transit observations with future X-ray missions such as Athena will look like. We then estimate the observable X-ray transmission spectrum for a typical Hot Jupiter-type exoplanet, giving us insights into the advances in X-ray observations of exoplanets in the decades to come.
We study theoretically the quantum dynamics and spectroscopy of rovibrational polaritons formed in a model system composed of a single rovibrating diatomic molecule, which interacts with two degenerate, orthogonally polarized modes of an optical Fabry-Perot cavity. We employ an effective rovibrational Pauli-Fierz Hamiltonian in length gauge representation and identify three-state vibro-polaritonic conical intersections (VPCIs) between singly excited vibro-polaritonic states in a two-dimensional angular coordinate branching space. The lower and upper vibrational polaritons are of mixed light-matter hybrid character, whereas the intermediate state is purely photonic in nature. The VPCIs provide effective population transfer channels between singly excited vibrational polaritons, which manifest in rich interference patterns in rotational densities. Spectroscopically, three bright singly excited states are identified when an external infrared laser field couples to both a molecular and a cavity mode. The non-trivial VPCI topology manifests as pronounced multi-peak progression in the spectral region of the upper vibrational polariton, which is traced back to the emergence of rovibro-polaritonic light-matter hybrid states. Experimentally, ubiquitous spontaneous emission from cavity modes induces a dissipative reduction of intensity and peak broadening, which mainly influences the purely photonic intermediate state peak as well as the rovibro-polaritonic progression. Published under an exclusive license by AIP Publishing.
A task-based parallel elliptic solver for numerical relativity with discontinuous Galerkin methods
(2022)
Elliptic partial differential equations are ubiquitous in physics. In numerical relativity---the study of computational solutions to the Einstein field equations of general relativity---elliptic equations govern the initial data that seed every simulation of merging black holes and neutron stars. In the quest to produce detailed numerical simulations of these most cataclysmic astrophysical events in our Universe, numerical relativists resort to the vast computing power offered by current and future supercomputers. To leverage these computational resources, numerical codes for the time evolution of general-relativistic initial value problems are being developed with a renewed focus on parallelization and computational efficiency. Their capability to solve elliptic problems for accurate initial data must keep pace with the increasing detail of the simulations, but elliptic problems are traditionally hard to parallelize effectively.
In this thesis, I develop new numerical methods to solve elliptic partial differential equations on computing clusters, with a focus on initial data for orbiting black holes and neutron stars. I develop a discontinuous Galerkin scheme for a wide range of elliptic equations, and a stack of task-based parallel algorithms for their iterative solution. The resulting multigrid-Schwarz preconditioned Newton-Krylov elliptic solver proves capable of parallelizing over 200 million degrees of freedom to at least a few thousand cores, and already solves initial data for a black hole binary about ten times faster than the numerical relativity code SpEC. I also demonstrate the applicability of the new elliptic solver across physical disciplines, simulating the thermal noise in thin mirror coatings of interferometric gravitational-wave detectors to unprecedented accuracy. The elliptic solver is implemented in the new open-source SpECTRE numerical relativity code, and set up to support simulations of astrophysical scenarios for the emerging era of gravitational-wave and multimessenger astronomy.
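The multigrid-Schwarz preconditioned Newton-Krylov scheme of the thesis is far too involved to reproduce here, but its core building block, an iterative Krylov solve of a discretized elliptic equation, can be illustrated on a 1D Poisson problem with finite differences. This is a deliberate simplification: the thesis uses discontinuous Galerkin discretizations in SpECTRE, not the scheme below.

```python
import math

# Conjugate-gradient solve of the 1D Poisson problem -u'' = f with
# u(0) = u(1) = 0, discretized by second-order finite differences.
# Only an illustration of Krylov iteration for elliptic equations,
# not the DG multigrid-Schwarz scheme of the thesis.

n = 63                      # interior grid points
h = 1.0 / (n + 1)

def apply_A(v):
    """Matrix-free application of the tridiagonal Poisson operator."""
    out = []
    for i in range(n):
        left = v[i - 1] if i > 0 else 0.0
        right = v[i + 1] if i < n - 1 else 0.0
        out.append((2.0 * v[i] - left - right) / h**2)
    return out

def cg(b, tol=1e-10, max_iter=500):
    """Unpreconditioned conjugate gradients for A x = b."""
    x = [0.0] * n
    r = b[:]                # residual for the initial guess x0 = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / sum(pi * Api for pi, Api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * Api for ri, Api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if math.sqrt(rs_new) < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# f = pi^2 sin(pi x) has the exact solution u = sin(pi x)
xs = [(i + 1) * h for i in range(n)]
b = [math.pi**2 * math.sin(math.pi * x) for x in xs]
u = cg(b)
err = max(abs(ui - math.sin(math.pi * x)) for ui, x in zip(u, xs))
print(f"max error vs exact solution: {err:.2e}")
```

The residual norm drives the stopping criterion, so the remaining error is dominated by the O(h^2) discretization error rather than the linear solve.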
The complex hierarchical structure of bone undergoes a lifelong remodeling process, through which it adapts to mechanical needs. In this process, bone resorption by osteoclasts and bone formation by osteoblasts must be balanced to sustain a healthy and stable organ. Osteocytes orchestrate this interplay by sensing mechanical strains and translating them into biochemical signals. The osteocytes are located in lacunae and are connected to one another and to other bone cells via cell processes running through small channels, the canaliculi. Lacunae and canaliculi form a network (LCN) of extracellular spaces that is able to transport ions and enables cell-to-cell communication. Osteocytes might also contribute to mineral homeostasis through direct interactions with the surrounding matrix. If the LCN is acting as a transport system, this should be reflected in the mineralization pattern. The central hypothesis of this thesis is that osteocytes actively change their material environment. Characterization methods of materials science are used to detect traces of this interaction between osteocytes and the extracellular matrix. First, healthy murine bones were characterized. The properties analyzed were then compared with three murine model systems: 1) a loading model, where a bone of the mouse was loaded during its lifetime; 2) a healing model, where a bone of the mouse was cut to induce a healing response; and 3) a disease model, where the Fbn1 gene is dysfunctional, causing defects in the formation of the extracellular tissue.
The measurement strategy included routines that make it possible to analyze the organization of the LCN and the material components (i.e., the organic collagen matrix and the mineral particles) in the same bone volumes and compare the spatial distribution of different data sets. The three-dimensional network architecture of the LCN is visualized by confocal laser scanning microscopy (CLSM) after rhodamine staining and is then subsequently quantified. The calcium content is determined via quantitative backscattered electron imaging (qBEI), while small- and wide-angle X-ray scattering (SAXS and WAXS) are employed to determine the thickness and length of local mineral particles.
First, tibiae cortices of healthy mice were characterized to investigate how changes in LCN architecture can be attributed to interactions of osteocytes with the surrounding bone matrix. The tibial mid-shaft cross-sections showed two main regions, consisting of a band with unordered LCN surrounded by a region with ordered LCN. The unordered region is a remnant of early bone formation and exhibited short and thin mineral particles. The surrounding, more aligned bone showed ordered and dense LCN as well as thicker and longer mineral particles. The calcium content was unchanged between the two regions.
In the mouse loading model, the left tibia underwent two weeks of mechanical stimulation, which results in increased bone formation and decreased resorption in skeletally mature mice. The specific research question addressed here was how bone material characteristics change at (re)modeling sites. The new bone formed in response to mechanical stimulation showed mineral-particle properties similar to those of the ordered region, but a lower calcium content compared to the right, non-loaded control bone of the same mice. There was a clear, recognizable border between mature and newly formed bone. Nevertheless, some canaliculi crossed this border, connecting the LCN of mature and newly formed bone.
Additionally, the question of whether the LCN topology and the bone matrix material properties adapt to loading was addressed. Although mechanically stimulated bones did not show differences in calcium content compared to controls, different correlations were found between the local LCN density and the local Ca content depending on whether the bone was loaded or not. These results suggest that the LCN may serve as a mineral reservoir.
For the healing model, the femurs of mice underwent an osteotomy, stabilized with an external fixator, and were allowed to heal for 21 days. Thus, the spatial variations in the LCN topology together with mineral properties within different tissue types and their interfaces, namely calcified cartilage, bony callus and cortex, could be simultaneously visualized and compared in this model. All tissue types showed structural differences across multiple length scales. Calcium content increased and became more homogeneous from calcified cartilage to bony callus to lamellar cortical bone. The degree of LCN organization increased as well, while the lacunae became smaller, as did the lacunar density, between these different tissue types that make up the callus. In the calcified cartilage, the mineral particles were short and thin. The newly formed callus exhibited thicker mineral particles, which still had a low degree of orientation. While most of the callus had a woven-like structure, it also served as a scaffold for more lamellar tissue at the edges. The lamellar bone of the callus showed thinner mineral particles, but a higher degree of alignment in both the mineral particles and the LCN. The cortex showed the highest values for mineral length, thickness and degree of orientation. At the same time, the lacunar number density was 34% lower and the lacunar volume 40% smaller compared to the bony callus. The transition zone between cortical and callus regions showed a continuous convergence of bone mineral properties and lacunae shape. Although only a few canaliculi connected the callus and the cortical region, their presence indicates that communication between osteocytes of both tissues should be possible. The presented correlations between LCN architecture and mineral properties across tissue types suggest that osteocytes may have an active role in the mineralization processes of healing.
A mouse model for the disease Marfan syndrome, which involves a genetic defect in the fibrillin-1 gene, was investigated. In humans, Marfan syndrome is characterized by a range of clinical symptoms such as long-bone overgrowth, loose joints, reduced bone mineral density, compromised bone microarchitecture, and increased fracture rates. Thus, fibrillin-1 seems to play a role in skeletal homeostasis. Therefore, the present work studied how Marfan syndrome alters the LCN architecture and the surrounding bone matrix. The mice with Marfan syndrome showed longer tibiae than their healthy littermates from an age of seven weeks onwards. In contrast, the cortical development appeared retarded, which was observed across all measured characteristics, i.e., lower endocortical bone formation, a looser and less organized lacuno-canalicular network, less collagen orientation, and thinner and shorter mineral particles.
In each of the three model systems, this study found that changes in the LCN architecture spatially correlated with bone matrix material parameters. While not knowing the exact mechanism, these results provide indications that osteocytes can actively manipulate a mineral reservoir located around the canaliculi to make a quickly accessible contribution to mineral homeostasis. However, this interaction is most likely not one-sided, but could be understood as an interplay between osteocytes and extra-cellular matrix, since the bone matrix contains biochemical signaling molecules (e.g. non-collagenous proteins) that can change osteocyte behavior. Bone (re)modeling can therefore not only be understood as a method for removing defects or adapting to external mechanical stimuli, but also for increasing the efficiency of possible osteocyte-mineral interactions during bone homeostasis. With these findings, it seems reasonable to consider osteocytes as a target for drug development related to bone diseases that cause changes in bone composition and mechanical properties. It will most likely require the combined effort of materials scientists, cell biologists, and molecular biologists to gain a deeper understanding of how bone cells respond to their material environment.
In the present thesis I investigate the lattice dynamics of thin-film heterostructures of magnetically ordered materials upon femtosecond laser excitation as a probing and manipulation scheme for the spin system. The quantitative assessment of laser-induced thermal dynamics, as well as of generated picosecond acoustic pulses and their respective impact on the magnetization dynamics of thin films, is a challenging endeavor. Hence, the development and implementation of effective experimental tools and comprehensive models are paramount to propel future academic and technological progress.
In all experiments in the scope of this cumulative dissertation, I examine the crystal lattice of nanoscale thin films upon excitation with femtosecond laser pulses. The relative change of the lattice constant due to thermal expansion or picosecond strain pulses is directly monitored by an ultrafast X-ray diffraction (UXRD) setup with a femtosecond laser-driven plasma X-ray source (PXS). Phonons and spins alike exert stress on the lattice, which responds according to the elastic properties of the material, rendering the lattice a versatile sensor for all sorts of ultrafast interactions. On the one hand, I investigate materials with strong magneto-elastic properties: the highly magnetostrictive rare-earth compound TbFe2, elemental dysprosium, and the technologically relevant Invar material FePt. On the other hand, I conduct a comprehensive study of the lattice dynamics of Bi1Y2Fe5O12 (Bi:YIG), which exhibits high-frequency coherent spin dynamics upon femtosecond laser excitation according to the literature. Higher-order standing spin waves (SSWs) are triggered by coherent and incoherent motion of atoms, in other words phonons, which I quantified with UXRD. We are able to unite the experimental observations of the lattice and magnetization dynamics qualitatively and quantitatively, using a combination of multi-temperature, elastic, magneto-elastic, anisotropy and micro-magnetic modeling.
The collective data from UXRD, to probe the lattice, and time-resolved magneto-optical Kerr effect (tr-MOKE) measurements, to monitor the magnetization, were previously collected at different experimental setups. To improve the precision of the quantitative assessment of lattice and magnetization dynamics alike, our group combined UXRD and tr-MOKE in a single experimental setup, which is, to my knowledge, the first of its kind. I helped with the conception and commissioning of this novel experimental station, which allows the simultaneous observation of lattice and magnetization dynamics on an ultrafast timescale under identical excitation conditions. Furthermore, I developed a new X-ray diffraction measurement routine which reduces the measurement time of UXRD experiments by up to an order of magnitude. It is called reciprocal space slicing (RSS) and utilizes an area detector to monitor the angular motion of X-ray diffraction peaks, which is associated with lattice constant changes, without a time-consuming scan of the diffraction angles with the goniometer. RSS is particularly useful for ultrafast diffraction experiments, since measurement time at large-scale facilities like synchrotrons and free-electron lasers is a scarce and expensive resource. However, RSS is not limited to ultrafast experiments and can even be extended to other diffraction techniques with neutrons or electrons.
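The abstract does not give the RSS conversion explicitly; the underlying geometry, however, follows from differentiating Bragg's law lambda = 2 d sin(theta) at fixed wavelength, which yields delta_d / d = -cot(theta) * delta_theta. A minimal sketch with illustrative numbers:

```python
import math

# Relating an X-ray diffraction peak shift to a lattice-constant
# change via Bragg's law (lambda = 2 d sin(theta)). Differentiating
# at fixed wavelength gives delta_d / d = -cot(theta) * delta_theta,
# the quantity an angular peak shift on an area detector encodes.
# The numbers below are illustrative, not taken from the thesis.

def strain_from_peak_shift(theta_deg, dtheta_deg):
    """Relative lattice-constant change from a Bragg-angle shift."""
    theta = math.radians(theta_deg)
    dtheta = math.radians(dtheta_deg)
    return -dtheta / math.tan(theta)

# a 0.01 deg shift to smaller angles at a 22.5 deg Bragg peak
eta = strain_from_peak_shift(22.5, -0.01)
print(f"strain: {eta:.3e}")   # positive sign means lattice expansion
```

A shift to smaller diffraction angles corresponds to a larger lattice spacing, i.e. expansion, which is why the sign is flipped in the formula.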
Elementary particle physics is a contemporary topic in science that is slowly being integrated into high-school education. These new implementations are challenging teachers’ professional knowledge worldwide. Therefore, physics education research faces two important questions, namely, how particle physics can be integrated into high-school physics curricula and how best to support teachers in enhancing their professional knowledge of particle physics. This doctoral research project set out to provide better guidelines for answering these two questions by conducting three studies on high-school particle physics education.
First, an expert concept mapping study was conducted to elicit experts’ expectations on what high-school students should learn about particle physics. Overall, 13 experts in particle physics, computing, and physics education participated in 9 concept mapping rounds. The broad knowledge base of the experts ensured that the final expert concept map covers all major particle physics aspects. Specifically, the final expert concept map includes 180 concepts and examples, connected with 266 links and crosslinks. Among them are also several links to students’ prior knowledge in topics such as mechanics and thermodynamics. The high interconnectedness of the concepts shows possible opportunities for including particle physics as a context for other curricular topics. As such, the resulting expert concept map is showcased as a well-suited tool for teachers to scaffold their instructional practice.
Second, a review of 27 high-school physics curricula was conducted. The review uncovered which concepts related to particle physics can be identified in most curricula. Each curriculum was reviewed by two reviewers who followed a codebook with 60 concepts related to particle physics. The analysis showed that most curricula mention cosmology, elementary particles, and charges, all of which are considered theoretical particle physics concepts. None of the experimental particle physics concepts appeared in more than half of the reviewed curricula. Additional analysis was done on two curricular subsets, namely curricula with and curricula without an explicit particle physics chapter. Curricula with an explicit particle physics chapter mention several additional explicit particle physics concepts, namely the Standard Model of particle physics, fundamental interactions, antimatter research, and particle accelerators; the latter is an example of an experimental particle physics concept. Additionally, the analysis revealed that, overall, most curricula include Nature of Science and the history of physics, albeit typically as context or as a teaching tool, respectively.
Third, a Delphi study was conducted to investigate stakeholders’ expectations regarding what teachers should learn in particle physics professional development programmes. Over 100 stakeholders from 41 countries represented four stakeholder groups, namely physics education researchers, research scientists, government representatives, and high-school teachers. The study resulted in a ranked list of the 13 most important topics to be included in particle physics professional development programmes. The highest-ranked topics are cosmology, the Standard Model, and real-life applications of particle physics. All stakeholder groups agreed on the overall ranking of the topics. While the highest-ranked topics are again more theoretical, stakeholders also expect teachers to learn about experimental particle physics topics, which are ranked as medium importance topics.
The three studies addressed the two research aims of this doctoral project. The first research aim was to explore to what extent particle physics is featured in high-school physics curricula. The comparison of the outcomes of the curricular review and the expert concept map showed that curricula cover significantly less than what experts expect high-school students to learn about particle physics. For example, most curricula do not include concepts that could be classified as experimental particle physics. However, the strong connections between the different concepts show that experimental particle physics can be used as context for theoretical particle physics concepts, Nature of Science, and other curricular topics. In doing so, particle physics can be introduced in classrooms even though it is not (yet) explicitly mentioned in the respective curriculum.
The second research aim was to identify which aspects of content knowledge teachers are expected to learn about particle physics. The comparison of the Delphi study results to the outcomes of the curricular review and the expert concept map showed that stakeholders generally expect teachers to enhance their school knowledge as defined by the curricula. Furthermore, teachers are also expected to enhance their deeper school knowledge by learning how to connect concepts from their school knowledge to other concepts in particle physics and beyond. As such, professional development programmes that focus on enhancing teachers’ school knowledge and deeper school knowledge best support teachers in building relevant context in their instruction.
Overall, this doctoral research project reviewed the current state of high-school particle physics education and provided guidelines for future enhancements of the particle physics content in high-school student and teacher education. The outcomes of the project support further implementations of particle physics in high-school education, both as explicit content and as context for other curricular topics. Furthermore, the mixed-methods approach and the outcomes of this research project lead to several implications for professional development programmes and science education research that are discussed in the final chapters of this dissertation.
How do different reset protocols affect the ergodicity of a diffusion process in single-particle-tracking experiments? We here address the problem of resetting of an arbitrary stochastic anomalous-diffusion process (ADP) from a general mathematical point of view and assess the ergodicity of such reset ADPs for an arbitrary resetting protocol. The process of stochastic resetting describes the events of the instantaneous restart of a particle’s motion via randomly distributed returns to a preset initial position (or a set of those). The waiting times of such resetting events obey Poissonian, Gamma, or more generic distributions with specified conditions regarding the existence of moments. Within these general approaches, we derive general analytical results and support them by computer simulations for the behavior of the reset mean-squared displacement (MSD), the new reset increment-MSD (iMSD), and the mean reset time-averaged MSD (TAMSD). For parental nonreset ADPs with MSD(t) ∝ t^μ we find a generic behavior and a switch of the short-time growth of the reset iMSD and mean reset TAMSDs from ∝ t^μ for subdiffusive to ∝ t^1 for superdiffusive reset ADPs. The critical condition for a reset ADP that recovers its ergodicity is found to be more general than that for the nonequilibrium stationary state, where obviously the iMSD and the mean TAMSD are equal. The consideration of the new statistical quantifier, the iMSD—as compared to the standard MSD—restores the ergodicity of an arbitrary reset ADP in all situations when the μth moment of the waiting-time distribution of resetting events is finite. Potential applications of these new resetting results are, inter alia, in the area of biophysical and soft-matter systems.
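For the simplest member of this family of reset processes, ordinary Brownian motion under Poissonian resetting to the origin, the nonequilibrium steady state is a Laplace distribution with stationary MSD 2D/r, a textbook result that a short simulation can verify. This is a minimal sketch, not the general ADP framework of the abstract, and all parameter values are illustrative.

```python
import random

# Brownian motion with Poissonian stochastic resetting to the origin,
# the simplest member of the reset-process family discussed above.
# For resetting rate r, the process reaches a non-equilibrium steady
# state (a Laplace distribution) with stationary MSD = 2 D / r.

random.seed(1)

D, r = 0.5, 1.0          # diffusivity, resetting rate
dt, T = 0.01, 10.0       # time step, total time (T >> 1/r)
n_traj = 2000
sigma = (2.0 * D * dt) ** 0.5

final_sq = []
for _ in range(n_traj):
    x, t = 0.0, 0.0
    while t < T:
        if random.random() < r * dt:   # Poissonian reset event
            x = 0.0
        x += random.gauss(0.0, sigma)  # free-diffusion step
        t += dt
    final_sq.append(x * x)

msd = sum(final_sq) / n_traj
print(f"simulated stationary MSD: {msd:.3f}  (theory: {2 * D / r:.3f})")
```

With D = 0.5 and r = 1 the theoretical stationary MSD is 1.0, and the Monte Carlo estimate settles near that value once T greatly exceeds the mean reset interval 1/r.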
Over the past decades, there has been a growing interest in ‘extreme events’ owing to the increasing threats that climate-related extremes such as floods, heatwaves, and droughts pose to society. While extreme events have diverse definitions across various disciplines, ranging from earth science to neuroscience, they are characterized mainly as dynamic occurrences within a limited time frame that impede the normal functioning of a system. Although extreme events are rare in occurrence, it has been found in various hydro-meteorological and physiological time series (e.g., river flows, temperatures, heartbeat intervals) that they may exhibit recurrent behavior, i.e., they do not end the lifetime of the system. The aim of this thesis is to develop sophisticated methods to study various properties of extreme events.
One of the main challenges in analyzing such extreme event-like time series is that they have large temporal gaps due to the paucity of observations of extreme events. As a result, existing time series analysis tools are usually not helpful to decode the underlying information. I use the edit distance (ED) method to analyze extreme event-like time series in their unaltered form. ED is a specific distance metric, mainly designed to measure the similarity/dissimilarity between point process-like data. I combine ED with recurrence plot techniques to identify the recurrence property of flood events in the Mississippi River in the United States. I also use recurrence quantification analysis to show the deterministic properties and serial dependency of flood events.
After that, I use this non-linear similarity measure (ED) to compute the pairwise dependency in extreme precipitation event series. I incorporate the similarity measure within the framework of complex network theory to study the collective behavior of climate extremes. Under this architecture, the nodes are defined by the spatial grid points of the given spatio-temporal climate dataset. Each node is associated with a time series corresponding to the temporal evolution of the climate observation at that grid point. Finally, the network links are functions of the pairwise statistical interdependence between the nodes. Various network measures, such as degree, betweenness centrality, clustering coefficient, etc., can be used to quantify the network’s topology. We apply the methodology mentioned above to study the spatio-temporal coherence pattern of extreme rainfall events in the United States and the Ganga River basin, which reveals its relation to various climate processes and the orography of the region.
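The exact ED variant used in the thesis is not specified in this summary; a standard formulation of an edit distance between event series is the Victor-Purpura-type metric, in which deleting or inserting an event costs 1 and shifting an event in time costs lam times the shift. A dynamic-programming sketch:

```python
# An edit distance between event (point-process) time series, in the
# spirit of the ED measure described above. This is the standard
# Victor-Purpura formulation: deleting or inserting an event costs 1,
# and shifting an event by |t_a - t_b| costs lam * |t_a - t_b|.
# The thesis's exact ED variant may differ in its cost structure.

def edit_distance(a, b, lam=1.0):
    """Dynamic-programming edit distance between two sorted event-time lists."""
    m, n = len(a), len(b)
    G = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        G[i][0] = float(i)            # delete all events of `a`
    for j in range(n + 1):
        G[0][j] = float(j)            # insert all events of `b`
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            G[i][j] = min(
                G[i - 1][j] + 1.0,                                 # delete a[i-1]
                G[i][j - 1] + 1.0,                                 # insert b[j-1]
                G[i - 1][j - 1] + lam * abs(a[i - 1] - b[j - 1]),  # shift
            )
    return G[m][n]

# identical event series are at distance zero; a small time shift is
# cheaper than a deletion plus an insertion when lam is small
print(edit_distance([1.0, 2.5, 4.0], [1.0, 2.5, 4.0]))  # 0.0
print(edit_distance([1.0], [1.4], lam=1.0))             # 0.4
```

The parameter lam sets the time scale of the comparison: large lam makes any shift more expensive than delete-plus-insert, so only near-coincident events are matched.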
The identification of precursors associated with the occurrence of extreme events in the near future is extremely important to prepare the public for an upcoming disaster and mitigate the potential risks associated with such events. Under this motivation, I propose an in-data prediction recipe for predicting the data structures that typically occur prior to extreme events using the echo state network, a type of recurrent neural network that belongs to the reservoir computing framework. However, unlike previous works that identify precursory structures in the same variable in which extreme events are manifested (the active variable), I try to predict these structures using data from another dynamic variable (the passive variable), which does not show large excursions from the nominal condition but carries imprints of these extreme events. Furthermore, my results demonstrate that the quality of prediction depends on the magnitude of the events: the higher the magnitude of the extreme, the better its predictability. I show quantitatively that this is because the input signals collectively form a more coherent pattern for an extreme event of higher magnitude, which enhances the efficiency of the machine in predicting forthcoming extreme events.
There is a large variety of goals instructors have for laboratory courses, with different courses focusing on different subsets of goals. An often implicit, but crucial, goal is to develop students’ attitudes, views, and expectations about experimental physics to align with those of practicing experimental physicists. The assessment of laboratory courses along this one dimension of learning has been intensively studied at U.S. institutions using the Colorado Learning Attitudes about Science Survey for Experimental Physics (E-CLASS). However, no such instrument is available for use in Germany, and the influence of laboratory courses on students’ views about the nature of experimental physics is still unexplored at German-speaking institutions. Motivated by the lack of an assessment tool to investigate this goal in laboratory courses at German-speaking institutions, we present a translated version of the E-CLASS adapted to the context of German-speaking institutions. We call the German version the GE-CLASS. We describe the translation process and the creation of an automated web-based system for instructors to assess their laboratory courses. We also present first results using GE-CLASS obtained at the University of Potsdam. A first comparison between E-CLASS and GE-CLASS results shows clear differences between University of Potsdam and U.S. students’ views and beliefs about experimental physics.
We introduce and study a Lévy walk (LW) model of particle spreading with a finite propagation speed combined with soft resets: stochastically occurring periods in which a harmonic external potential is switched on and forces the particle towards a specific position. Soft resets avoid the instantaneous relocation of particles, which in certain physical settings may be considered unphysical. Moreover, soft resets do not have a specific resetting point but lead the particle towards one via a restoring Hookean force. Depending on the exact choice of the LW waiting-time density and the probability density of the periods when the harmonic potential is switched on, we demonstrate a rich emerging response behaviour including ballistic motion and superdiffusion. When the confinement periods of the soft-reset events are dominant, we observe particle localisation with an associated non-equilibrium steady state, in which case the stationary probability density function of the particle becomes multimodal. Our derivations are based on Markov-chain ideas and LWs with multiple internal states, an approach that may be useful and flexible for the investigation of other generalised random walks with soft and hard resets. The spreading efficiency of soft-reset LWs is characterised by the first-passage time statistics.
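A minimal simulation of such a soft-reset Lévy walk, with Pareto-distributed flight times and exponentially distributed confinement periods in a harmonic potential, illustrates the localisation regime (all parameter values are illustrative assumptions, not those of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def soft_reset_lw(n_steps=200, v=1.0, alpha=1.5, k=1.0, t_fly=1.0, t_conf=2.0):
    """One trajectory alternating Levy-walk flights with soft resets.

    Flights last a heavy-tailed Pareto(alpha) time at speed v; confinement
    periods are exponentially distributed, during which an overdamped
    harmonic force -k*x relaxes the particle towards the resetting point
    x = 0. All parameter values are illustrative, not those of the paper.
    """
    x = 0.0
    for _ in range(n_steps):
        tau = t_fly * (rng.pareto(alpha) + 1.0)    # heavy-tailed flight time
        x += rng.choice([-1.0, 1.0]) * v * tau     # ballistic flight
        tc = rng.exponential(t_conf)               # confinement period
        x *= np.exp(-k * tc)                       # relaxation towards x = 0
    return x

# Ensemble: with dominant confinement the walk localises; the stationary
# spread stays bounded instead of growing superdiffusively with time.
xs = np.array([soft_reset_lw() for _ in range(2000)])
print(f"median |x| in the steady state: {np.median(np.abs(xs)):.2f}")
```

The median absolute displacement is used instead of the standard deviation because the flight-time distribution with 1 < alpha < 2 has infinite variance.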
Stellar interferometry is the only method in observational astronomy for obtaining the highest-resolution images of astronomical targets. The method is based on combining light from two or more separate telescopes to obtain the complex visibility, which contains information about the brightness distribution of an astronomical source. The applications of stellar interferometry have made significant contributions to exciting research areas of astronomy and astrophysics, including the precise measurement of stellar diameters, imaging of stellar surfaces, observations of circumstellar disks around young stellar objects, tests of the predictions of Einstein's general relativity at the Galactic Center, and the direct search for exoplanets, to name a few. One important related technique is aperture masking interferometry, pioneered in the 1960s, which uses a mask with holes at the re-imaged pupil of the telescope, where the light from the holes is combined using the principle of stellar interferometry. While this can increase the resolution, it comes with a disadvantage: due to the finite size of the holes, the majority of the starlight (typically > 80 %) is lost at the mask, limiting the signal-to-noise ratio (SNR) of the output images. This restriction of aperture masking to bright targets can be avoided using pupil-remapping interferometry, a technique combining aperture masking interferometry with advances in photonic technologies based on single-mode fibers. Owing to their inherent spatial-filtering properties, the single-mode fibers can be placed at the focal plane of the re-imaged pupil, allowing the whole telescope pupil to be utilized to produce a high dynamic range along with high-resolution images. Pupil-remapping interferometry is thus one of the most promising applications in the emerging field of astrophotonics.
At the heart of an interferometric facility lies a beam combiner, whose primary function is to combine light to obtain high-contrast fringes. A beam combiner can be as simple as a beam splitter or an anamorphic lens combining light from 2 apertures (or telescopes), or as complex as a cascade of beam splitters and lenses combining light from > 2 apertures. With the emergence of astrophotonics, interferometric facilities across the globe are increasingly employing photonic technologies, using single-mode fibers or integrated optics (IO) chips as an efficient way to combine light from several apertures. The state-of-the-art instrument GRAVITY at the Very Large Telescope Interferometer (VLTI) uses an IO-based beam combiner reaching visibility accuracies better than 0.25 %, roughly 50× more precise than a few decades ago.
Therefore, in the context of IO-based components for applications in stellar interferometry, this Thesis describes work towards the development of a 3-dimensional (3-D) IO device: a monolithic astrophotonic component containing both pupil remappers and a discrete beam combiner (DBC). In this work, the pupil remappers are 3-D single-mode waveguides in a glass substrate that collect light from the re-imaged pupil of the telescope and feed it to a DBC, where the combination takes place. The DBC is a lattice of 3-D single-mode waveguides that interact through evanescent coupling. By observing the output powers of the single-mode waveguides of the DBC, the visibilities are retrieved using a calibrated transfer matrix U of the device.
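The visibility retrieval from the DBC outputs can be sketched as a linear inversion. For a hypothetical 2-input, 4-output combiner, the output powers are linear in the coherence vector, and applying the pseudo-inverse of the calibrated matrix recovers the complex visibility (the coupling matrix below is random, standing in for a calibrated device):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 2-input, 4-output combiner: M[k, j] is the complex field
# coupling from input j to output waveguide k (random here; in a real
# device it is fixed by the evanescent coupling and calibrated).
M = rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2))

# Output powers are linear in the coherence vector J = (P1, P2, Re G, Im G)
# with G = <E1 E2*>; this linear map is the calibrated form of U.
c = M[:, 0] * np.conj(M[:, 1])
U = np.column_stack([np.abs(M[:, 0])**2, np.abs(M[:, 1])**2,
                     2 * c.real, -2 * c.imag])

# Simulate a measurement for a known complex visibility.
P1, P2, V = 1.0, 0.8, 0.6 * np.exp(1j * 0.4)
G = V * np.sqrt(P1 * P2)
J_true = np.array([P1, P2, G.real, G.imag])
powers = U @ J_true                       # what the detectors record

# Retrieval: pseudo-inverse of the calibrated matrix applied to the powers.
J = np.linalg.pinv(U) @ powers
V_ret = (J[2] + 1j * J[3]) / np.sqrt(J[0] * J[1])
print(f"|V| retrieved: {abs(V_ret):.3f}")   # → 0.600
```

In this noise-free sketch the inversion is exact; in practice the conditioning of the calibrated matrix and the photon noise on the recorded powers limit the accuracy of the retrieved visibilities.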
The feasibility of the DBC in retrieving visibilities had already been studied theoretically and experimentally in the literature, but only in laboratory tests with monochromatic light sources. A part of this work therefore extends these studies by investigating the response of a 4-input DBC to a broad-band light source. The objectives of this Thesis are thus the following: 1) to design an IO device for broad-band operation such that accurate and precise visibilities can be retrieved experimentally in the astronomical H band (1.5-1.65 μm), and 2) to validate the DBC as a possible beam-combination scheme for future interferometric facilities through on-sky testing at the William Herschel Telescope (WHT).
This work consisted of designing three different 3-D IO devices. One popular method for fabricating 3-D photonic components in a glass substrate is ultra-fast laser inscription (ULI). The manufacturing of the designed devices was therefore outsourced to Politecnico di Milano as part of an iterative fabrication process using their state-of-the-art ULI facility. The devices were then characterized using a 2-beam Michelson interferometric setup, obtaining both monochromatic and polychromatic visibilities. The retrieved visibilities for all devices were in good agreement with those predicted by simulations of a DBC, which confirms both the repeatability of the ULI process and the stability of the Michelson setup, thus fulfilling the first objective.
The best-performing device was then selected for the pupil remapping of the WHT using a different optical setup consisting of a deformable mirror and a microlens array. The device successfully collected stellar photons from Vega and Altair. The visibilities were retrieved using a previously calibrated transfer matrix U but showed significant deviations from the expected results. Based on the analysis of comparable simulations, it was found that these deviations were primarily caused by the limited SNR of the stellar observations, thus constituting a first step towards the fulfillment of the second objective.
Understanding the changes that follow UV excitation in thionucleobases is of great importance for the study of light-induced DNA lesions and, in a broader context, for their applications in medicine and biochemistry. Their ultrafast photophysical reactions can alter the chemical structure of DNA, leading to damage to the genetic code, as proven by the increased skin-cancer risk observed for patients treated with thiouracil for its immunosuppressant properties.
In this thesis, I present four research papers resulting from an investigation of the ultrafast dynamics of 2-thiouracil by means of ultrafast X-ray probing combined with electron spectroscopy. A molecular jet in the gas phase is excited with a UV pulse and then ionized with X-ray radiation from a free-electron laser. The kinetic energy of the emitted electrons is measured in a magnetic-bottle spectrometer. The spectra of the measured photo- and Auger electrons are used to derive a picture of the changes in the geometrical and electronic configurations. The results allow us to look at the dynamical processes from a new perspective, thanks to the element and site sensitivity of X-rays. The custom-built URSA-PQ apparatus used in the experiment is described. It has been commissioned and used at the FL24 beamline of the FLASH2 FEL, showing an electron kinetic-energy resolution of ∆E/E ~ 40 and a pump-probe timing resolution of 190 fs. X-ray-only photoelectron and Auger spectra of 2-thiouracil are extracted from the data and used as reference. Photoelectrons following the formation of a 2p core hole are identified, as well as resonant and non-resonant Auger electrons. At the L1 edge, Coster-Kronig decay from the 2s core hole is observed.
The UV-induced changes in the 2p photoline allow the study of the electronic-state dynamics. Using an Excited-State Chemical Shift (ESCS) model, we observe an ultrafast ground-state relaxation within 250 fs. Furthermore, an oscillation with a 250 fs period is observed in the 2p binding energy, revealing a coherent population exchange between electronic states. Auger electrons from the 2p core hole are analyzed and used to deduce an ultrafast C–S bond expansion on a sub-100 fs scale. A simple Coulomb model, coupled to quantum-chemical calculations, can be used to infer the geometrical changes in the molecular structure.
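The point-charge Coulomb picture invoked here can be written down in a few lines: two holes left on neighbouring atoms repel with energy e²/(4πε₀r), so a shift in Auger kinetic energy maps onto an internuclear distance (the bond lengths below are hypothetical round numbers, not the measured values):

```python
# Point-charge Coulomb model (illustrative): two holes at distance r repel
# with energy e^2/(4*pi*eps0*r); the resulting shift of the Auger electron
# kinetic energy maps onto a C-S internuclear distance.
E2_COULOMB = 14.3996  # e^2/(4*pi*eps0) in eV*Angstrom

def hole_hole_energy(r_angstrom):
    """Coulomb repulsion (eV) of two unit point charges at distance r."""
    return E2_COULOMB / r_angstrom

def distance_from_shift(delta_e):
    """Invert the model: distance (Angstrom) from an energy shift (eV)."""
    return E2_COULOMB / delta_e

# Example numbers (hypothetical, for illustration only): a bond expanding
# from 1.65 A to 2.2 A lowers the two-hole repulsion by about 2.2 eV.
shift = hole_hole_energy(1.65) - hole_hole_energy(2.2)
print(f"energy shift: {shift:.2f} eV")
```

This bare point-charge estimate ignores screening and orbital relaxation, which is why the thesis couples it to quantum-chemical calculations.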
Organic solar cells offer an efficient and cost-effective alternative for solar-energy harvesting. This type of photovoltaic cell typically consists of a blend of two organic semiconductors, an electron-donating polymer and a low-molecular-weight electron acceptor, which together form what is known as a bulk-heterojunction (BHJ) morphology. Traditionally, fullerene-based acceptors have been used for this purpose. In recent years, the development of new acceptor molecules, so-called non-fullerene acceptors (NFAs), has breathed new life into organic solar cell research, enabling record efficiencies close to 19%. Today, NFA-based solar cells approach their inorganic competitors in terms of photocurrent generation but lag in terms of open-circuit voltage (V_OC). Interestingly, the V_OC of these cells benefits from small offsets of the orbital energies at the donor-NFA interface, although large energy offsets were previously considered critical for efficient charge-carrier generation. In addition, several other electronic and structural features distinguish NFAs from fullerenes.
My thesis focuses on understanding the interplay between the unique attributes of NFAs and the physical processes occurring in solar cells. By combining various experimental techniques with drift-diffusion simulations, the generation of free charge carriers as well as their recombination in state-of-the-art NFA-based solar cells are characterized. For this purpose, solar cells based on the donor polymer PM6 and the NFA Y6 have been investigated. The generation of free charge carriers in PM6:Y6 is efficient and independent of electric field and excitation energy. Temperature-dependent measurements show a very low activation energy for photocurrent generation (about 6 meV), indicating barrierless charge-carrier separation. Theoretical modeling suggests that Y6 molecules have large quadrupole moments, leading to band bending at the donor-acceptor interface and thereby reducing the electrostatic Coulomb dissociation barrier. In this regard, this work identifies the poor extraction of free charges in competition with nongeminate recombination as a dominant loss process in PM6:Y6 devices. Subsequently, the spectral characteristics of PM6:Y6 solar cells were investigated with respect to the dominant process of charge-carrier recombination. It was found that the photon emission under open-circuit conditions can be almost entirely attributed to the occupation and recombination of Y6 singlet excitons. Nevertheless, the recombination pathway via the singlet state contributes only 1% to the total recombination, which is dominated by the charge-transfer state (CT state) at the donor-acceptor interface. Further V_OC gains can therefore only be expected if the density and/or recombination rate of these CT states can be significantly reduced. Finally, the role of energetic disorder in NFA solar cells is investigated by comparing Y6 with a structurally related derivative, named N4.
Layer morphology studies combined with temperature-dependent charge transport experiments show significantly lower structural and energetic disorder in the case of the PM6:Y6 blend. For both PM6:Y6 and PM6:N4, disorder determines the maximum achievable V_OC, with PM6:Y6 benefiting from improved morphological order. Overall, the obtained findings point to avenues for the realization of NFA-based solar cells with even smaller V_OC losses. Further reduction of nongeminate recombination and energetic disorder should result in organic solar cells with efficiencies above 20% in the future.
Weather extremes pose a persistent threat to society on multiple levels. Besides causing an average of ~37,000 deaths per year, climate-related disasters destroy property and impair economic activity, eroding people's livelihoods and prosperity. As global temperature rises due to anthropogenic greenhouse gas emissions, the direct impacts of climatic extreme events increase and will intensify further without proper adaptation measures. Moreover, weather extremes do not only have local direct effects: the resulting economic repercussions can propagate upstream or downstream along trade chains, causing indirect effects. One approach to analyzing these indirect effects within the complex global supply network is the agent-based model Acclimate. Using and extending this loss-propagation model, I focus in this thesis on three aspects of the relation between weather extremes and economic repercussions.
First, extreme weather events directly impact local economic performance. I compute daily local direct output-loss time series for heat stress, river floods, tropical cyclones, and their consecutive occurrence using (near-future) climate-projection ensembles. These regional impacts are estimated based on physical drivers and the local distribution of productivity. The direct effects of the aforementioned disaster categories are highly heterogeneous in their regional and temporal distribution, and their intensities change differently under future warming. Focusing on hurricane-impacted capital, I find that long-term growth losses increase with the heterogeneity of a shock ensemble.
Second, repercussions are distributed sectorally and regionally via economic ripples within the trading network, causing higher-order effects. I use Acclimate to identify three phases of these economic ripples. Furthermore, I compute indirect impacts and analyze overall regional and global changes in production and consumption. For heat stress, global consumer losses double while direct output losses increase by a factor of 1.5 between 2000 and 2039. In my research I identify the effect of economic ripple resonance and introduce it to climate impact research. This effect occurs when the economic ripples of consecutive disasters overlap, which amplifies economic responses such as consumption losses. These loss enhancements can be amplified further by increasing direct output losses, e.g. caused by climate crises.
Transport disruptions can cause economic repercussions as well. To study them, I extend Acclimate with geographical transportation routes and expand the decision horizon of the economic agents. Using this, I show that policy-induced sudden trade restrictions (e.g. a no-deal Brexit) can significantly reduce the longer-term economic prosperity of the affected regions. Analyses of transportation disruptions during typhoon seasons indicate that severely affected regions must reduce production as demand falls during a storm. Substituting suppliers may compensate for fluctuations at the beginning of a storm, but fails for prolonged disruptions.
Third, possible coping mechanisms and adaptation strategies arise from the direct and indirect economic responses to weather extremes. Analyzing annual trade changes due to typhoon-induced transport disruptions shows that overall exports rise. This trade resilience increases with higher diversification of network nodes. Further, my research shows that a basic insurance scheme may diminish hurricane-induced long-term growth losses through faster reconstruction in the aftermath of disasters. I find that insurance coverage could be an economically reasonable coping scheme in the face of the higher losses caused by the climate crisis. The indirect effects of weather extremes within the global economic network point to further adaptation possibilities. For one, diversifying linkages reduces the hazard of sharp price increases. In addition, close economic interconnections with regions that do not share the same extreme-weather season can be economically beneficial in the medium run. Furthermore, economic ripple-resonance effects should be considered when computing costs. Overall, an increase in local adaptation measures reduces economic ripples within the trade network and possible losses elsewhere. In conclusion, adaptation measures are necessary and potentially available, but it seems hardly possible to avoid all direct or indirect losses.
As I show in this thesis, dynamical modeling gives valuable insights into how direct and indirect economic impacts arise from different categories of weather extremes. Further, it highlights the importance of resolving individual extremes and reflecting amplifying effects caused by incomplete recovery or consecutive disasters.
In this thesis, the dependencies of charge localization and itinerancy in two classes of aromatic molecules are investigated: pyridones and porphyrins. The focus lies on the effects of isomerism, complexation, solvation, and optical excitation, which accompany crucial biological functions of specific members of these groups of compounds. Several porphyrins play key roles in the metabolism of plants and animals. The nucleobases, which store the genetic information in DNA and RNA, are pyridone derivatives. Additionally, a number of vitamins are based on these two groups of substances.
This thesis aims to answer the question of how the electronic structure of these classes of molecules is modified, enabling the versatile natural functionality. The resulting insights into the effect of constitutional and external factors are expected to facilitate the design of new processes for medicine, light-harvesting, catalysis, and environmental remediation.
The common denominator of pyridones and porphyrins is their aromatic character. As aromaticity was an early topic in chemical physics, the overview of relevant theoretical models in this work also mirrors the development of this scientific field in the 20th century. The spectroscopic investigation of these compounds has long been centered on their global optical transition between frontier orbitals.
The utilization and advancement of X-ray spectroscopic methods characterizing the local electronic structure of molecular samples form the core of this thesis. The element selectivity of the near-edge X-ray absorption fine structure (NEXAFS) is employed to probe the unoccupied density of states at the nitrogen site, which is key for the chemical reactivity of pyridones and porphyrins. The results contribute to the growing database of NEXAFS features and their interpretation, e.g., by advancing the debate on the porphyrin N K-edge through systematic experimental and theoretical arguments. Further, a state-of-the-art laser pump – NEXAFS probe scheme is used to characterize the relaxation pathway of a photoexcited porphyrin on the atomic level.
Resonant inelastic X-ray scattering (RIXS) provides complementary results by accessing the highest occupied valence levels including symmetry information. It is shown that RIXS is an effective experimental tool to gain detailed information on charge densities of individual species in tautomeric mixtures. Additionally, the hRIXS and METRIXS high-resolution RIXS spectrometers, which have been in part commissioned in the course of this thesis, will gain access to the ultra-fast and thermal chemistry of pyridones, porphyrins, and many other compounds.
With respect to both classes of bio-inspired aromatic molecules, this thesis establishes that, even though pyridones and porphyrins differ greatly in their optical absorption bands and hydrogen-bonding abilities, they share a global stabilization of local constitutional changes and of relevant external perturbations. It is this wide-ranging response that allows pyridones and porphyrins to be applied in a manifold of biological and technical processes.
Proteins play a central role in virtually all processes in living cells, and they are also used in many ways in biotechnology. A protein consists of a chain of amino acids. Frequently, several of these chains assemble into larger structures and functional units, so-called protein complexes. It was recently shown that protein complex formation can already take place during the biosynthesis of the proteins (co-translationally) and does not always occur only afterwards (post-translationally). Since misassembly of proteins leads to loss of function and adverse effects, precise and reliable protein complex formation is essential both for cellular processes and for biotechnological applications. While experimental methods can determine, among other things, the stoichiometry and structure of protein complexes, they have so far not been able to resolve the dynamics of complex formation on different time scales. Fundamental mechanisms of protein complex formation are therefore not yet fully understood. The computational modelling of protein complex formation presented here, which builds on experimental findings, allows a comprehensive analysis of the influence of physico-chemical parameters on the assembly process. The models represent the experimental systems of the cooperation partners (Bar-Ziv, Weizmann Institute, Israel; Bukau and Kramer, Heidelberg University) as realistically as possible, in order to study the assembly of protein complexes both in a quasi-two-dimensional synthetic expression system (in vitro) and in the bacterium Escherichia coli (in vivo). The theoretical model is parameterized using a simplified expression system in which the proteins can bind only to the chip surface but not to each other. In this simplified in-vitro system, the efficiency of complex formation passes through three regimes: a binding-dominated regime, a mixed regime, and a production-dominated regime. The efficiency reaches its maximum shortly after the transition from the binding-dominated to the mixed regime and decreases monotonically thereafter. In both the full in-vitro system and the in-vivo system, two competing assembly pathways coexist: in the in-vitro system, complex formation occurs either spontaneously in aqueous solution (solution assembly) or in a defined sequence of steps on the chip surface (surface assembly); in the in-vivo system, co- and post-translational complex formation compete. It turns out that the dominance of the assembly pathways in the in-vitro system is time-dependent and can be influenced, among other things, by the limitation and strength of the binding sites on the chip surface. In the in-vivo system, the spatial distance between the synthesis sites of the two protein components influences complex formation only if the subunits degrade rapidly. In this case, co-translational assembly clearly dominates even on short time scales, whereas for stable subunits there is a shift from a dominance of post-translational assembly to a slight dominance of co-translational assembly.
With the in-silico models, not only the dynamics but also, among other things, the localization of complex formation and binding can be represented, which allows a comparison of the theoretical predictions with experimental data and thus a validation of the models. The in-silico approach presented here complements the experimental methods and thereby allows their results to be interpreted and new insights to be derived from them.
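The competition between co- and post-translational assembly under subunit degradation can be caricatured by a per-molecule Monte Carlo model (all rates below are illustrative assumptions, not the parameterization of the thesis):

```python
import numpy as np

rng = np.random.default_rng(3)

def co_translational_fraction(k_deg, k_co=2.0, k_post=1.0,
                              t_nascent=0.5, n=20000):
    """Toy per-molecule competition model (illustrative, not the thesis model).

    Each subunit can be captured co-translationally during its nascent
    window (rate k_co over time t_nascent); otherwise it enters a free pool
    where post-translational binding (rate k_post) races degradation
    (rate k_deg). Returns the co-translational share of completed complexes.
    """
    # Probability of co-translational capture during the nascent window.
    p_co = 1.0 - np.exp(-k_co * t_nascent)
    co = rng.random(n) < p_co
    # For the remainder, the first event wins: binding vs degradation.
    post = (~co) & (rng.random(n) < k_post / (k_post + k_deg))
    return co.sum() / max(co.sum() + post.sum(), 1)

# Rapid degradation starves the post-translational pathway, so the
# co-translational share rises, mirroring the in vivo finding above.
for k_deg in (0.1, 10.0):
    print(f"k_deg={k_deg:>5}: co-translational fraction "
          f"{co_translational_fraction(k_deg):.2f}")
```
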
Poly(vinylidene fluoride) (PVDF)-based homo-, co- and ter-polymers are well-known for their ferroelectric and relaxor-ferroelectric properties. Their semi-crystalline morphology consists of crystalline and amorphous phases, plus interface regions in between, and governs the relevant electro-active properties. In this work, the influence of chemical, thermal and mechanical treatments on the structure and morphology of PVDF-based polymers and on the related ferroelectric/relaxor-ferroelectric properties is investigated. Polymer films were prepared in different ways and subjected to various treatments such as annealing, quenching and stretching. The resulting changes in the transitions and relaxations of the polymer samples were studied by means of dielectric, thermal, mechanical and optical techniques. In particular, the origins behind the mysterious mid-temperature transition (T_{mid}) that is observed in all PVDF-based polymers were assessed. A new hypothesis is proposed to describe the T_{mid} transition as a result of multiple processes taking place within the temperature range of the transition. The contribution of the individual processes to the observed overall transition depends on both the chemical structure of the monomer units and the processing conditions, which also affect the melting transition. Quenching results in a decrease of the overall crystallinity and in smaller crystallites. On samples quenched after annealing, notable differences in the fractions of different crystalline phases have been observed when compared to samples that had been slowly cooled. Stretching of poly(vinylidene fluoride-tetrafluoroethylene) (P(VDF-TFE)) films causes an increase in the fraction of the ferroelectric β-phase with simultaneous increases in the melting point (T_m) and the crystallinity (\chi_c) of the copolymer.
While an increase in the stretching temperature does not have a profound effect on the amount of the ferroelectric phase, its stability appears to improve.
Measurements of the non-linear dielectric permittivity \varepsilon_2^\prime in a poly(vinylidenefluoride-trifluoroethylene-chlorofluoroethylene) (P(VDF-TrFE-CFE)) relaxor-ferroelectric (R-F) terpolymer reveal peaks at 30 and 80 °C that cannot be identified in conventional dielectric spectroscopy. The former peak is associated with T_{mid} and may help to understand the non-zero \varepsilon_2^\prime values that are found for the paraelectric terpolymer phase. The latter peak can also be observed at 100 °C during the cooling of P(VDF-TrFE) copolymer samples and is due to conduction processes and space-charge polarization as a result of the accumulation of real charges at the electrode-sample interface. Annealing lowers the Curie-transition temperature of the terpolymer as a consequence of its smaller ferroelectric-phase fraction, which by default exists even in terpolymers with relatively high CFE content. Changes in the transition temperatures are in turn related to the behavior of the hysteresis curves observed on differently heat-treated samples. Upon heating, the hysteresis curves evolve from those known for a ferroelectric to those of a typical relaxor-ferroelectric material. Comparing dielectric-hysteresis loops obtained at various temperatures, we find that annealed terpolymer films show higher electric-displacement values and lower coercive fields than the non-annealed samples, irrespective of the measurement temperature, and also exhibit ideal relaxor-ferroelectric behavior at ambient temperatures, which makes them excellent candidates for related applications at or near room temperature. However, non-annealed films, by virtue of their higher ferroelectric activity, show a larger and more stable remanent polarization at room temperature, while annealed samples need to be poled below 0 °C to induce a well-defined polarization.
Overall, by modifying the three phases in PVDF-based polymers, it has been demonstrated how the preparation steps and processing conditions can be tailored to achieve the desired properties that are optimal for specific applications.
The current generation of ground-based instruments has rapidly extended the limits of the range accessible to us with very-high-energy (VHE) gamma-rays, and more than a hundred sources have now been detected in the Milky Way. These sources represent only the tip of the iceberg, but their number has reached a level that allows population studies. In this work, a model of the global population of VHE gamma-ray sources based on the most comprehensive census of Galactic sources in this energy regime, the H.E.S.S. Galactic plane survey (HGPS), will be presented. A population-synthesis approach was followed in the construction of the model. Particular attention was paid to correcting for the strong observational bias inherent in the sample of detected sources. The methods developed for estimating the model parameters have been validated with extensive Monte Carlo simulations and will be shown to provide unbiased estimates of the model parameters. With these methods, five models for different spatial distributions of sources have been constructed. To test the validity of these models, their predictions for the composition of sources within the sensitivity range of the HGPS are compared with the observed sample. With one exception, similar results are obtained for all spatial distributions: the modelled longitude profile and the source distribution over photon flux are in fair agreement with observation. Regarding the latitude profile and the source distribution over angular extent, however, it becomes apparent that the model needs further adjustment to bring its predictions into agreement with observation. Based on the model, predictions of the global properties of the Galactic population of VHE gamma-ray sources and the prospects of the Cherenkov Telescope Array (CTA) will be presented.
CTA will significantly increase our knowledge of VHE gamma-ray sources by lowering the threshold for source detection, primarily through a larger detection area compared to current-generation instruments. In ground-based gamma-ray astronomy, the sensitivity of an instrument depends strongly, in addition to the detection area, on the ability to distinguish images of air showers produced by gamma-rays from those produced by cosmic rays, which are a strong background. This means that the number of detectable sources depends on the background rejection algorithm used and therefore may also be increased by improving the performance of such algorithms. In this context, in addition to the population model, this work presents a study on the application of deep-learning techniques to the task of gamma-hadron separation in the analysis of data from ground-based gamma-ray instruments. Based on a systematic survey of different neural-network architectures, it is shown that robust classifiers can be constructed with competitive performance compared to the best existing algorithms. Despite the broad coverage of neural-network architectures discussed, only part of the potential offered by the
application of deep-learning techniques to the analysis of gamma-ray data is exploited in the context of this study. Nevertheless, it provides an important basis for further research on this topic.
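The gamma-hadron separation task described above is, at its core, binary classification of air-shower image parameters. As a minimal illustration of that framing (a toy stand-in, not one of the neural-network architectures surveyed in this work), the sketch below trains a logistic classifier on synthetic Hillas-style width/length parameters; all distributions are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for Hillas-style parameters (width, length):
# gamma-ray showers are drawn narrower than hadronic ones. Purely illustrative.
n = 1000
gammas = rng.normal([0.10, 0.30], 0.03, size=(n, 2))
hadrons = rng.normal([0.25, 0.45], 0.05, size=(n, 2))
X = np.vstack([gammas, hadrons])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = gamma, 0 = hadron

# Tiny logistic-regression classifier trained by gradient descent.
w, b = np.zeros(2), 0.0
lr = 5.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid output
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = np.mean(pred == y)
print(f"training accuracy: {accuracy:.3f}")
```

A deep network replaces the hand-picked parameters with features learned directly from the camera images, which is where the performance gains over classical algorithms come from.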
In recent years, organic solar cells (OSCs) have achieved high efficiencies through the development of novel non-fullerene acceptors (NFAs). Fullerene derivatives had been the centerpiece of the accepting materials used throughout organic photovoltaic (OPV) research, but since 2015 novel NFAs have been a game-changer and have overtaken fullerenes. However, the current understanding of the properties of NFAs for OPV is still relatively limited, and critical mechanisms defining the performance of OPVs remain topics of debate.
In this thesis, attention is paid to understanding reduced-Langevin recombination with respect to the device physics of fullerene and non-fullerene systems. The work comprises four closely linked studies. The first is a detailed exploration of the fill factor (FF), expressed in terms of transport and recombination properties, in a comparison of fullerene and non-fullerene acceptors. We identified the key reason behind the reduced FF in the NFA (ITIC-based) devices: faster non-geminate recombination relative to the fullerene (PCBM[70]-based) devices. This is followed by a consideration of a newly synthesized NFA Y-series derivative, which exhibited the highest power conversion efficiency for OSCs at the time. In the second study, we illustrated the role of disorder in the non-geminate recombination and charge extraction of thick NFA (Y6-based) devices. As a result, we enhanced the FF of thick PM6:Y6 by reducing the disorder, which suppresses non-geminate recombination toward a non-Langevin regime. In the third work, we revealed the reason behind the thickness independence of the short-circuit current of PM6:Y6 devices, caused by the extraordinarily long diffusion length of Y6. The fourth study entails a broad comparison of a selection of fullerene and non-fullerene blends with respect to charge-generation efficiency and recombination, unveiling the importance of efficient charge generation for achieving reduced recombination.
I employed transient measurements such as Time Delayed Collection Field (TDCF) and Resistance-dependent Photovoltage (RPV), and steady-state techniques such as Bias-Assisted Charge Extraction (BACE), Temperature-Dependent Space-Charge-Limited Current (T-SCLC), Capacitance-Voltage (CV), and Photo-Induced Absorption (PIA), to analyze the OSCs.
The outcomes of this thesis together draw a complex picture of the multiple factors that affect reduced-Langevin recombination and thereby the FF and overall performance. This provides a suitable platform for identifying important parameters when designing new blend systems. As a result, we succeeded in improving the overall performance by enhancing the FF of a thick NFA device through adjustment of the amount of solvent additive in the active blend solution. The work also highlights potentially critical gaps in the current experimental understanding of fundamental charge interaction and recombination dynamics.
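The reduced-Langevin picture invoked above can be made concrete with the textbook Langevin expression, where the encounter-limited recombination coefficient is gamma_L = q(mu_n + mu_p)/(eps0*eps_r) and a non-Langevin blend shows a measured coefficient gamma = zeta * gamma_L with zeta << 1. The sketch below uses generic textbook-scale mobilities, permittivity, and reduction factor, not values measured in this thesis.

```python
# Physical constants
q = 1.602e-19      # elementary charge, C
eps0 = 8.854e-12   # vacuum permittivity, F/m

# Assumed, generic organic-blend parameters (illustrative only)
eps_r = 3.5        # relative permittivity
mu_n = 1e-8        # electron mobility, m^2/(V s)
mu_p = 1e-8        # hole mobility, m^2/(V s)

# Classical Langevin recombination coefficient
gamma_L = q * (mu_n + mu_p) / (eps0 * eps_r)

# Reduced-Langevin: gamma = zeta * gamma_L with an assumed reduction factor
zeta = 0.01
gamma = zeta * gamma_L

# Bimolecular (non-geminate) recombination rate R = gamma * n * p
n = p = 1e22       # carrier densities, m^-3 (illustrative)
R = gamma * n * p
print(f"gamma_L = {gamma_L:.2e} m^3/s, reduced gamma = {gamma:.2e} m^3/s")
print(f"recombination rate R = {R:.2e} m^-3 s^-1")
```

The smaller zeta is, the slower carriers are lost before extraction, which is why pushing a blend toward the non-Langevin regime directly raises the fill factor of thick devices.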
Extending synchrotron X-ray refraction techniques to the quantitative analysis of metallic materials
(2022)
In this work, two X-ray refraction-based imaging methods, namely synchrotron X-ray refraction radiography (SXRR) and synchrotron X-ray refraction computed tomography (SXRCT), are applied to quantitatively analyze cracks and porosity in metallic materials. SXRR and SXRCT exploit the refraction of X-rays at inner surfaces of the material, e.g., the surfaces of cracks and pores, for image contrast. Both methods are therefore sensitive to smaller defects than their absorption-based counterparts, X-ray radiography and computed tomography, and can detect defects of nanometric size. So far, the methods have been applied to the analysis of ceramic materials and fiber-reinforced plastics. The analysis of metallic materials requires higher photon energies to achieve sufficient X-ray transmission due to their higher density. Because the refraction index depends on the photon energy, this causes smaller refraction angles and, thus, lower image contrast. Here, for the first time, a conclusive study is presented exploring the possibility of applying SXRR and SXRCT to metallic materials. It is shown that both methods can be optimized to overcome the reduced contrast due to smaller refraction angles; hence, the only remaining limitation is the achievable X-ray transmission, which is common to all X-ray imaging methods. Further, a model for the quantitative analysis of the inner surfaces is presented and verified.
For this purpose, four case studies are conducted, each posing a specific challenge to the imaging task. Case study A investigates cracks in a coupon taken from an aluminum weld seam. This case study primarily serves to verify the model for quantitative analysis and to prove the sensitivity to sub-resolution features. In case study B, the damage evolution in an aluminum-based particle-reinforced metal-matrix composite is analyzed. Here, the accuracy and repeatability of subsequent SXRR measurements is investigated, showing that measurement errors of less than 3 % can be achieved. Further, case study B marks the first application of SXRR in combination with in-situ tensile loading. Case study C is from the highly topical field of additive manufacturing. Here, porosity in additively manufactured Ti-Al6-V4 is analyzed with a special interest in the pore morphology. A classification scheme based on SXRR measurements is devised which allows binding defects to be distinguished from keyhole pores even if the defects cannot be spatially resolved. In case study D, SXRCT is applied to the analysis of hydrogen-assisted cracking in steel. Due to the high X-ray attenuation of steel, a comparatively high photon energy of 50 keV is required here. This causes increased noise and lower contrast in the data compared to the other case studies. However, despite the lower data quality, a quantitative analysis of the occurrence of cracks as a function of hydrogen content and applied mechanical load is possible.
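The contrast penalty at high photon energy mentioned above follows from the X-ray refractive index n = 1 - delta, whose decrement delta scales approximately as 1/E^2 far from absorption edges, so the refraction angles carrying the image contrast shrink quadratically with energy. A short sketch of that scaling (the reference value delta_ref is illustrative, not a measured material property):

```python
def delta(E_keV, delta_ref=1e-6, E_ref_keV=25.0):
    """Refraction-index decrement, assuming the far-from-edge 1/E^2 scaling."""
    return delta_ref * (E_ref_keV / E_keV) ** 2

# Decrement at a lower and a higher working energy
for E in (25.0, 50.0):
    print(f"E = {E:4.0f} keV: delta = {delta(E):.2e}")

# Doubling the energy (e.g. going to 50 keV for steel) cuts the
# refraction contrast by a factor of four:
ratio = delta(25.0) / delta(50.0)
print(f"contrast ratio 25 keV / 50 keV: {ratio:.1f}")
```

This quadratic trade-off between transmission (which improves at high energy) and refraction contrast (which degrades) is exactly the optimization problem the case studies, in particular the 50 keV steel measurements of case study D, have to navigate.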