Institut für Physik und Astronomie
This work introduces an embedded approach for the prediction of Solar Particle Events (SPEs) in space applications by combining real-time Soft Error Rate (SER) measurements from an SRAM-based detector with an offline-trained machine learning model. The proposed approach is intended for self-adaptive fault-tolerant multiprocessing systems employed in space applications. With respect to the state of the art, our solution allows the SER to be predicted 1 h in advance and enables fine-grained hourly tracking of SER variations during SPEs as well as under normal conditions. The target system can therefore activate appropriate radiation-hardening mechanisms before the onset of high radiation levels. Based on a comparison of five different machine learning algorithms trained with the public space flux database, the preliminary results indicate that the best prediction accuracy is achieved with a recurrent neural network (RNN) with long short-term memory (LSTM).
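The abstract names the model family but not an implementation. Purely as an illustration of the kind of predictor described, here is a minimal sketch of an LSTM network that forecasts the next hourly SER value from the previous 24 hours; the window length, layer sizes, and the synthetic SER series are assumptions introduced here, not details from the paper.

```python
# Minimal sketch (not the authors' code): predict the next hourly SER value
# from the previous 24 hourly measurements with an LSTM network.
import numpy as np
from tensorflow import keras

WINDOW = 24  # hours of SER history used per prediction (assumed)

def make_windows(series, window=WINDOW):
    """Split a 1-D SER time series into (history, next-value) pairs."""
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X[..., None], y  # LSTM expects (samples, timesteps, features)

# Placeholder data standing in for the public space flux / SER database.
rng = np.random.default_rng(0)
ser = rng.lognormal(mean=0.0, sigma=0.3, size=5000).astype("float32")
X, y = make_windows(ser)

model = keras.Sequential([
    keras.layers.Input(shape=(WINDOW, 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),  # SER predicted 1 h ahead
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2, verbose=0)

next_hour_ser = model.predict(X[-1:], verbose=0)
```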
The radiation belts of the Earth, filled with energetic electrons, are complex and dynamic systems that pose a significant threat to satellite operation. While various models of electron flux have been developed for both low and relativistic energies, the behavior of medium-energy (120-600 keV) electrons, especially in the medium Earth orbit (MEO) region, remains poorly quantified. At these energies, electrons are driven by both convective and diffusive transport, and their prediction usually requires sophisticated 4D modeling codes. In this paper, we present an alternative approach using the Light Gradient Boosting Machine (LightGBM) machine learning algorithm. The Medium Energy electRon fLux In Earth's outer radiatioN belt (MERLIN) model takes as input the satellite position and a combination of geomagnetic indices and solar wind parameters, including the time history of velocity, and does not use persistence. MERLIN is trained on more than 15 years of GPS electron flux data and tested on more than 1.5 years of measurements. Tenfold cross-validation shows that the model predicts the MEO radiation environment well, both in terms of dynamics and flux amplitudes. Evaluation on the test set shows high correlation (0.8) between the predicted and observed electron flux and low absolute errors. The MERLIN model can have wide space weather applications, providing radiation belt reconstructions for the scientific community, as well as supporting industry in satellite mission design, nowcasting of the MEO environment, and surface-charging analysis.
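As a rough illustration of the gradient-boosting approach described (not the MERLIN code), the sketch below fits a LightGBM regressor to a synthetic flux dataset; the feature set (L-shell, local time, a geomagnetic index, lagged solar wind speed) and all numbers are placeholders standing in for the inputs named in the abstract.

```python
# Minimal sketch: gradient-boosted regression of electron flux from position,
# geomagnetic indices and solar wind history (all data here are synthetic).
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 10_000

# Placeholder features: satellite position (L, MLT), a geomagnetic index,
# and solar wind speed at several past time lags (no flux persistence).
X = np.column_stack([
    rng.uniform(4, 7, n),                 # L-shell
    rng.uniform(0, 24, n),                # magnetic local time
    rng.normal(-20, 10, n),               # SYM-H-like index
    rng.uniform(300, 700, (n, 6)),        # solar wind speed at 6 lags
])
y = rng.lognormal(mean=10, sigma=1, size=n)  # electron flux proxy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = lgb.LGBMRegressor(n_estimators=500, learning_rate=0.05,
                          num_leaves=63, subsample=0.8)
model.fit(X_tr, np.log10(y_tr))          # fit flux in log space
pred = 10 ** model.predict(X_te)

corr = np.corrcoef(np.log10(pred), np.log10(y_te))[0, 1]
print(f"log-flux correlation on held-out data: {corr:.2f}")
```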
In the last century, several astronomical measurements have indicated that a significant fraction (about 22%) of the total mass of the Universe, on galactic and extragalactic scales, is composed of a mysterious "dark" matter (DM). DM does not interact via the electromagnetic force; in other words, it does not reflect, absorb, or emit light. It is possible that DM particles are weakly interacting massive particles (WIMPs) that can annihilate (or decay) into Standard Model (SM) particles, and modern very-high-energy (VHE; > 100 GeV) instruments such as imaging atmospheric Cherenkov telescopes (IACTs) can play an important role in constraining the main properties of such DM particles by detecting these products. Among the most promising targets in which to look for a DM signal are dwarf spheroidal galaxies (dSphs), as they are expected to be highly DM-dominated objects with a clean, gas-free environment. Given the angular resolution of IACTs, some dSphs should be treated as extended rather than point-like sources, and that resolution is adequate to detect extended emission from them. For this reason, we performed an extended-source analysis, taking into account in the unbinned maximum likelihood estimation both the energy and the angular-extension dependence of the observed events. The goal was to set more constraining upper limits on the velocity-averaged WIMP annihilation cross-section with VERITAS data. VERITAS is an array of four IACTs able to detect γ-ray photons between 100 GeV and 30 TeV. The results of this extended analysis were compared against the traditional spectral analysis. We found that a 2D analysis may lead to more constraining results, depending on the DM mass, channel, and source. Moreover, this thesis also presents the results of a multi-instrument project whose goal was to combine already published data on 20 dSphs from five different experiments (Fermi-LAT, MAGIC, H.E.S.S., VERITAS, and HAWC) in order to set upper limits on the WIMP annihilation cross-section over the widest mass range ever reported.
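To make the idea of a 2D (energy plus angular extension) unbinned likelihood concrete, here is a toy sketch of an upper limit on a signal normalisation; it is not the VERITAS or multi-instrument pipeline, and the event distributions, signal/background shapes, and the 2.71 threshold for a one-sided 95% CL limit are all assumptions introduced for illustration.

```python
# Toy 2D unbinned likelihood upper limit on a signal normalisation, using
# event energies and angular offsets (all distributions are placeholders).
import numpy as np
from scipy.optimize import brentq
from scipy.stats import expon, norm, uniform

rng = np.random.default_rng(2)

# Toy background-only "observation": energies (TeV) and offsets (deg).
E_obs = expon(scale=0.5).rvs(200, random_state=rng)
th_obs = uniform(0, 0.2).rvs(200, random_state=rng)

def pdf_sig(E, th):
    # assumed signal shape: harder spectrum, slightly extended source
    return expon(scale=1.0).pdf(E) * 2 * norm(0, 0.05).pdf(th)

def pdf_bkg(E, th):
    # assumed background: softer spectrum, flat in offset up to 0.2 deg
    return expon(scale=0.5).pdf(E) * (1 / 0.2)

def neg2lnL(s, b):
    """Extended unbinned -2 lnL for expected signal s and background b."""
    rate = s * pdf_sig(E_obs, th_obs) + b * pdf_bkg(E_obs, th_obs)
    return 2 * (s + b - np.sum(np.log(rate)))

b_hat = len(E_obs)            # crude background normalisation (assumed)
ref = neg2lnL(0.0, b_hat)

# One-sided 95% CL upper limit: smallest s with -2 Delta lnL = 2.71.
s_up = brentq(lambda s: neg2lnL(s, b_hat) - ref - 2.71, 1e-3, 200.0)
print(f"toy 95% CL upper limit on signal counts: {s_up:.1f}")
```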
A comet is a highly dynamic object, undergoing a permanent state of change. These changes have to be carefully classified and considered according to their intrinsic temporal and spatial scales. The Rosetta mission has, through its contiguous in-situ and remote sensing coverage of comet 67P/Churyumov-Gerasimenko (hereafter 67P) over the time span of August 2014 to September 2016, monitored the emergence, culmination, and winding down of the gas and dust comae. This provided an unprecedented data set and has spurred a large effort to connect in-situ and remote sensing measurements to the surface. In this review, we address our current understanding of cometary activity and the challenges involved when linking comae data to the surface. We give the current state of research by describing what we know about the physical processes involved from the surface to a few tens of kilometres above it with respect to the gas and dust emission from cometary nuclei. Further, we describe how complex multidimensional cometary gas and dust models have developed from the Halley encounter of 1986 to today. This includes the study of inhomogeneous outgassing and determination of the gas and dust production rates. Additionally, the different approaches used and results obtained to link coma data to the surface will be discussed. We discuss forward and inversion models and we describe the limitations of the respective approaches. The current literature suggests that there does not seem to be a single uniform process behind cometary activity. Rather, activity seems to be the consequence of a variety of erosion processes, including the sublimation of both water ice and more volatile material, but possibly also more exotic processes such as fracture and cliff erosion under thermal and mechanical stress, sub-surface heat storage, and a complex interplay of these processes. Seasons and the nucleus shape are key factors for the distribution and temporal evolution of activity and imply that the heliocentric evolution of activity can be highly individual for every comet, and generalisations can be misleading.
In X-ray computed tomography (XCT), an X-ray beam of intensity I0 is transmitted through an object and its attenuated intensity I is measured when it exits the object. The attenuation of the beam depends on the attenuation coefficients along its path. The attenuation coefficients provide information about the structure and composition of the object and can be determined through mathematical operations that are referred to as reconstruction.
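In the notation of this abstract, the dependence of the attenuated intensity on the coefficients along the beam path is the standard line-integral (Beer-Lambert) relation; stating it makes the reconstruction problem concrete (the symbols beyond I0 and I, i.e. the attenuation coefficient μ and path L, are introduced here for illustration):

```latex
I = I_0 \exp\!\left(-\int_{L} \mu(x)\,\mathrm{d}s\right)
\quad\Longleftrightarrow\quad
-\ln\frac{I}{I_0} = \int_{L} \mu(x)\,\mathrm{d}s .
```

Each measurement therefore constrains one line integral of μ, and reconstruction amounts to inverting the collection of such line integrals.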
The standard reconstruction algorithms are based on the filtered backprojection (FBP) of the measured data. While these algorithms are fast and relatively simple, they do not always succeed in computing a precise reconstruction, especially from under-sampled data.
Alternatively, an image or volume can be reconstructed by solving a system of linear equations. Typically, the system of equations is too large to be solved directly, but its solution can be approximated by iterative methods such as the Simultaneous Iterative Reconstruction Technique (SIRT) and Conjugate Gradient Least Squares (CGLS).
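As an illustration of such an iterative scheme (not the dissertation's implementation), the sketch below applies a textbook SIRT update to a tiny dense toy system; the projection matrix, phantom, and iteration count are assumptions standing in for the much larger, sparse CT problem.

```python
# Minimal SIRT sketch for a small dense system A x = b.
import numpy as np

def sirt(A, b, n_iter=200):
    """SIRT update: x <- x + C A^T R (b - A x),
    with R, C the inverse row/column sums of A."""
    row_sums = A.sum(axis=1)
    col_sums = A.sum(axis=0)
    R = np.where(row_sums > 0, 1.0 / row_sums, 0.0)
    C = np.where(col_sums > 0, 1.0 / col_sums, 0.0)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x += C * (A.T @ (R * (b - A @ x)))
    return x

# Tiny toy system standing in for the CT projection geometry.
rng = np.random.default_rng(3)
A = rng.uniform(0, 1, size=(40, 20))   # projection weights (assumed)
x_true = rng.uniform(0, 1, size=20)    # attenuation coefficients
b = A @ x_true                          # measured line integrals
x_rec = sirt(A, b)
print("max reconstruction error:", np.abs(x_rec - x_true).max())
```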
This dissertation focuses on the development of a novel iterative algorithm, the Direct Iterative Reconstruction of Computed Tomography Trajectories (DIRECTT). After its reconstruction principle is explained, its performance is assessed for real parallel- and cone-beam CT data (including under-sampled data) and compared to that of other established algorithms. Finally, it is demonstrated how the shape of the measured object can be incorporated into DIRECTT as a model to achieve even better reconstruction results.
The random nature of self-amplified spontaneous emission (SASE) is a well-known challenge for X-ray core-level spectroscopy at SASE free-electron lasers (FELs). Especially in time-resolved experiments that require a combination of good temporal and spectral resolution, jitter and drifts in the spectral characteristics and relative arrival time, as well as power fluctuations, can smear out spectro-temporal features. We present a combination of methods for the analysis of time-resolved photoelectron spectra based on power and time corrections as well as self-referencing of a strong photoelectron line. Based on sulfur 2p photoelectron spectra of 2-thiouracil taken at the SASE FEL FLASH2, we show that it is possible to correct for some of the photon-energy drift and jitter even when reliable shot-to-shot photon-energy data are not available. The quality of the pump-probe difference spectra improves, as random jumps in energy between delay points are significantly reduced. The data analysis makes it possible to identify coherent oscillations with a 1 eV shift of the mean of a photoelectron line of 4 eV width, with an error of less than 0.1 eV.
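The self-referencing idea can be illustrated with a short sketch (not the published analysis code): each single-shot spectrum is shifted so that the centroid of a strong photoelectron line sits at a fixed reference energy before averaging. The spectra, the reference window, and the energy axis below are placeholders.

```python
# Toy self-referencing of a strong photoelectron line to remove
# shot-to-shot photon-energy jitter (all data are synthetic).
import numpy as np

def line_centroid(energy, spectrum, window):
    """Intensity-weighted centroid of the line inside an energy window."""
    m = (energy >= window[0]) & (energy <= window[1])
    return np.sum(energy[m] * spectrum[m]) / np.sum(spectrum[m])

def self_reference(energy, spectra, window, e_ref):
    """Shift each single-shot spectrum so the strong line sits at e_ref."""
    aligned = []
    for s in spectra:
        shift = e_ref - line_centroid(energy, s, window)
        aligned.append(np.interp(energy, energy + shift, s))
    return np.mean(aligned, axis=0)

# Synthetic single-shot spectra: a Gaussian line with random energy jitter.
rng = np.random.default_rng(4)
energy = np.linspace(160, 175, 600)   # binding-energy axis in eV (assumed)
shots = [np.exp(-((energy - (168 + rng.normal(0, 0.5))) ** 2) / (2 * 0.8 ** 2))
         + rng.normal(0, 0.02, energy.size) for _ in range(50)]

avg = self_reference(energy, shots, window=(165, 171), e_ref=168.0)
```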
Global quantum thermometry
(2021)
A paradigm shift in quantum thermometry is proposed. To date, thermometry has relied on local estimation, which is useful for reducing statistical fluctuations once the temperature is already very well known. In order to estimate temperatures in cases where few measurement data or no substantial prior knowledge are available, we instead build a method for global quantum thermometry. Based on scaling arguments, a mean logarithmic error is shown here to be the correct figure of merit for thermometry. Its full minimization provides an operational and optimal rule to post-process measurements into a temperature reading, and it establishes a global precision limit. We apply these results to the simulated outcomes of measurements on a spin gas, finding that the local approach can lead to biased temperature estimates in cases where the global estimator converges to the true temperature. The global framework thus enables a reliable approach to data analysis in thermometry experiments.
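The abstract does not spell out the figure of merit; one illustrative way to write a mean logarithmic error, with the prior p(T), outcome probabilities p(x|T), and estimator T̃(x) being notation introduced here rather than quoted from the paper, is

```latex
% Illustrative form of a mean logarithmic error for a temperature
% estimator \tilde{T}(x) built from measurement outcomes x:
\bar{\epsilon} \;=\; \int \mathrm{d}T\, p(T) \sum_{x} p(x \mid T)\,
\ln^{2}\!\left[\frac{\tilde{T}(x)}{T}\right].
```

A logarithmic deviation of this kind penalises relative rather than absolute errors, which is what makes it suitable when the temperature scale itself is unknown a priori.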
This paper presents an experimental procedure for the characterization of granitic rocks in a Mars-like environment. To gain a better understanding of drilling conditions on Mars, the dynamic tensile behavior of two granitic rocks was studied using the Brazilian disc test and a Split Hopkinson Pressure Bar. Room-temperature tests were performed on specimens that had gone through thermal cycling between room temperature and -70 °C for 0, 10, 15, and 20 cycles. In addition, high-strain-rate Brazilian disc tests were carried out on samples without thermal cyclic loading at test temperatures of -30 °C, -50 °C, and -70 °C. Microscopy results show that rocks with different microstructures respond differently to cyclic thermal loading. However, decreasing the test temperature leads to an increase in the tensile strength of both studied rocks, although softening is observed for both as the temperature reaches -70 °C. This paper presents a quantitative assessment of the effects of thermal cyclic loading and temperature on the mechanical behavior of the studied rocks in a Mars-like environment. The results of this work will bring new insight into the mechanical response of rock material in extreme environments.
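For context, the Brazilian disc test infers an indirect tensile strength from the peak diametral load; the standard relation, which is not quoted in the abstract and is given here only as background (P: peak load, D: disc diameter, t: disc thickness), is

```latex
% Standard Brazilian disc relation for indirect tensile strength:
\sigma_t \;=\; \frac{2P}{\pi D t}.
```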
Isoflux tension propagation (IFTP) theory and Langevin dynamics (LD) simulations are employed to study the dynamics of channel-driven polymer translocation, in which a polymer translocates into a narrow channel and the monomers in the channel experience a driving force f_c. In the high driving force limit, regardless of the channel width, IFTP theory predicts τ ∝ f_c^β for the translocation time, where β = −1 is the force scaling exponent. Moreover, LD data show that for a very narrow channel fitting only a single file of monomers, the entropic force due to the subchain inside the channel does not play a significant role in the translocation dynamics, and the force exponent is β = −1 regardless of the force magnitude. As the channel width increases, the number of possible spatial configurations of the subchain inside the channel becomes significant, and the resulting entropic force causes the force exponent to drop below unity.