With the rising complexity of today's software and hardware systems and the hypothesized increase in autonomous, intelligent, and self-* systems, developing correct systems remains an important challenge. Testing, although an important part of the development and maintenance process, cannot usually establish the definite correctness of a software or hardware system, especially when systems have arbitrarily large or infinite state spaces or an infinite number of initial states. This is where formal verification comes in: given a representation of the system in question in a formal framework, verification approaches and tools can be used to establish the system's adherence to its similarly formalized specification, and to complement testing.
One such formal framework is the field of graphs and graph transformation systems. Both are powerful formalisms with well-established foundations and ongoing research that can be used to describe complex hardware or software systems with varying degrees of abstraction. Since their inception in the 1970s, graph transformation systems have continuously evolved; related research spans extensions of expressive power, graph algorithms and their implementation, application scenarios, and verification approaches, to name just a few topics.
This thesis focuses on a verification approach for graph transformation systems called k-inductive invariant checking, which is an extension of previous work on 1-inductive invariant checking. Instead of exhaustively computing a system's state space, which is a common approach in model checking, 1-inductive invariant checking symbolically analyzes graph transformation rules - i.e. system behavior - in order to draw conclusions with respect to the validity of graph constraints in the system's state space. The approach is based on an inductive argument: if a system's initial state satisfies a graph constraint and if all rules preserve that constraint's validity, we can conclude the constraint's validity in the system's entire state space - without having to compute it.
However, inductive invariant checking also comes with a specific drawback: the locality of graph transformation rules leads to a lack of context information during the symbolic analysis of potential rule applications. This thesis argues that this lack of context can be partly addressed by using k-induction instead of 1-induction. A k-inductive invariant is a graph constraint whose validity in a path of k-1 rule applications implies its validity after any subsequent rule application - as opposed to a 1-inductive invariant where only one rule application is taken into account. Considering a path of transformations then accumulates more context of the graph rules' applications.
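To make the contrast between 1-induction and k-induction concrete, the following Python sketch checks k-inductiveness on a deliberately tiny explicit transition system. The toy states and all names are illustrative assumptions; the thesis works symbolically on graph transformation rules, not on enumerated states.

```python
def k_inductive(states, succ, init, inv, k):
    """Return True iff `inv` is a k-inductive invariant.

    Base case: inv holds on every state reachable from `init`
    within k-1 steps.  Step case: every path of k consecutive
    inv-satisfying states can only be extended to inv-states.
    """
    # Base case: breadth-first exploration, k layers deep.
    frontier, seen = set(init), set()
    for _ in range(k):
        if not all(inv(s) for s in frontier):
            return False
        seen |= frontier
        frontier = {t for s in frontier for t in succ(s)} - seen
    # Step case: enumerate all paths of k inv-states and check
    # that every one-step extension still satisfies inv.
    paths = [[s] for s in states if inv(s)]
    for _ in range(k - 1):
        paths = [p + [t] for p in paths for t in succ(p[-1]) if inv(t)]
    return all(inv(t) for p in paths for t in succ(p[-1]))

# State 6 satisfies the invariant "s != 3" but steps into 3, so
# 1-induction fails; since no state can reach 6, 2-induction succeeds:
# the extra context (one predecessor) rules the spurious path out.
succ_map = {0: [1], 1: [2], 2: [0], 3: [3], 6: [3]}
succ = lambda s: succ_map[s]
inv = lambda s: s != 3

print(k_inductive([0, 1, 2, 3, 6], succ, [0], inv, 1))  # False
print(k_inductive([0, 1, 2, 3, 6], succ, [0], inv, 2))  # True
```

The unreachable state 6 plays the role of the "lack of context" in the symbolic setting: 1-induction must consider it, while a longer path prefix excludes it.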
As such, this thesis extends existing research and implementation on 1-inductive invariant checking for graph transformation systems to k-induction. In addition, it proposes a technique to perform the base case of the inductive argument in a symbolic fashion, which allows verification of systems with an infinite set of initial states. Both k-inductive invariant checking and its base case are described in formal terms. Based on that, this thesis formulates theorems and constructions to apply this general verification approach for typed graph transformation systems and nested graph constraints - and to formally prove the approach's correctness.
Since unrestricted graph constraints may lead to non-termination or impracticably high execution times in a hypothetical implementation, this thesis also presents a restricted verification approach, which limits the form of graph transformation systems and graph constraints. It is formalized and proven correct, and its procedures terminate by construction. This restricted approach has been implemented in an automated tool and has been evaluated with respect to its applicability to test cases, its performance, and its degree of completeness.
"How Wenzel and Cassie were wrong" – this was the eye-catching title of an article published by Lichao Gao and Thomas McCarthy in 2007, in which fundamental interpretations of wetting behavior were put into question. The authors initiated a discussion on a subject, which had been generally accepted a long time ago and they showed that wetting phenomena were not as fully understood as imagined. Similarly, this thesis tries to put a focus on certain aspects of liquid wetting, which so far have been widely neglected in terms of interpretation and experimental proof. While the effect of surface roughness on the macroscopically observed wetting behavior is commonly and reliably interpreted according to the well-known models of Wenzel and Cassie/Baxter, the size-scale of the structures responsible for the surface's rough texture has not been of further interest. Analogously, the limits of these models have not been described and exploited. Thus, the question arises, what will happen when the size of surface structures is reduced to the size of the contacting liquid molecules itself? Are common methods still valid or can deviations from macroscopic behavior be observed?
This thesis aims to provide a starting point for these questions. To investigate the effect of smallest-scale surface structures on liquid wetting, a suitable model system is developed by means of self-assembled monolayer (SAM) formation from (fluoro)organic thiols with differing alkyl chain lengths. Surface topographies are created that rely on size differences of a few Ångströms and exhibit surprising wetting behavior depending on the choice of the individual precursor system. Contact angles are measured that deviate considerably from theoretical calculations based on the Wenzel and Cassie/Baxter models, confirming that sub-nm surface topographies affect wetting. Moreover, the experimentally determined wetting properties are found to correlate well with an assumed scale-dependent surface tension of the contacting liquid. This behavior has already been described for scattering experiments that take into account temperature-induced capillary waves on the liquid surface, and it had been predicted earlier by theoretical calculations.
However, the investigation of model surfaces requires suitable precursor molecules, which are not commercially available, and thus opens the door to the exotic chemistry of fluoro-organic materials. During the course of this work, the synthesis of long-chain precursors is examined with a particular focus on oligomerically pure semi-fluorinated n-alkyl thiols and n-alkyl trichlorosilanes. General protocols for the syntheses of the desired compounds are developed, and product mixtures are separated into fractions of individual chain lengths by fluorous-phase high-performance liquid chromatography (F-HPLC).
The transition from model systems to technically more relevant surfaces and applications is initiated through the deposition of SAMs from long-chain fluorinated n-alkyl trichlorosilanes. Depositions are accomplished by a vapor-phase process conducted on a pilot-scale set-up, which enables exact control of the relevant process parameters. The influence of varying deposition conditions on the properties of the final coating is thus examined and analyzed for the most important parameters. The strongest effect is observed for the partial pressure of reactive water vapor, which directly controls the extent of precursor hydrolysis during the deposition process. The experimental results suggest that the formation of ordered monolayers relies on the amount of hydrolyzed silanol species present in the deposition system, irrespective of the exact degree of hydrolysis. However, at increased amounts of species able to form cross-linked molecules through condensation reactions, film quality deteriorates. This effect is presumably caused by the introduction of defects within the film and the adsorption of cross-linked agglomerates. Deposition conditions are also investigated for chain-extended precursor species and reveal distinct differences caused by chain elongation.
This thesis is concerned with Data Assimilation, the process of combining model predictions with observations. So-called filters are of special interest: one is interested in computing the probability distribution of the state of a physical process in the future, given (possibly) imperfect measurements. This is done using Bayes' rule. The first part focuses on hybrid filters, which bridge the two main groups of filters: ensemble Kalman filters (EnKF) and particle filters. The former are a group of very stable and computationally cheap algorithms, but they require certain strong assumptions. Particle filters, on the other hand, are more generally applicable, but computationally expensive and as such not always suitable for high-dimensional systems. There is therefore a need to combine both groups to benefit from the advantages of each. This can be achieved by splitting the likelihood function when assimilating a new observation, treating one part of it with an EnKF and the other part with a particle filter.
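As a point of reference for the EnKF half of such a hybrid, the following Python sketch implements a stochastic (perturbed-observations) ensemble Kalman analysis step. The one-dimensional toy setup and all names are illustrative assumptions, not the hybrid filter developed in the thesis.

```python
import numpy as np

def enkf_analysis(ensemble, y, H, R, rng):
    """Update an (n, N) ensemble with observation y = H x + noise(R)."""
    n, N = ensemble.shape
    X = ensemble - ensemble.mean(axis=1, keepdims=True)    # state anomalies
    HX = H @ ensemble
    HA = HX - HX.mean(axis=1, keepdims=True)               # obs-space anomalies
    S = HA @ HA.T / (N - 1) + R                            # innovation covariance
    K = (X @ HA.T / (N - 1)) @ np.linalg.inv(S)            # Kalman gain
    # One independently perturbed copy of the observation per member
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return ensemble + K @ (Y - HX)

rng = np.random.default_rng(0)
prior = rng.normal(0.0, 1.0, size=(1, 500))   # 1-D state, 500 members
post = enkf_analysis(prior, np.array([1.0]), np.eye(1), 0.01 * np.eye(1), rng)
```

With a precise observation (small R) the analysis ensemble contracts around the observed value, which is the behaviour the likelihood-splitting construction exploits for the Gaussian-friendly part of the update.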
The second part of this thesis deals with the application of Data Assimilation to multi-scale models and the problems that arise from it. One of the main areas of application for Data Assimilation techniques is predicting the development of oceans and the atmosphere. These processes involve several scales and often balance relations between the state variables. The use of Data Assimilation procedures most often violates relations of that kind, which eventually leads to unrealistic and non-physical predictions of the future development of the process. This work discusses the inclusion of a post-processing step after each assimilation step, in which a minimisation problem is solved that penalises the imbalance. This method is tested on four different models: two Hamiltonian systems and two spatially extended models, which add even more difficulties.
Orogenic peridotites represent portions of the upper subcontinental mantle now incorporated in mountain belts. They often contain layers, lenses and irregular bodies of pyroxenite and eclogite. The origin of this heterogeneity and the nature of these layers are still debated, but they likely involve processes such as transient melts coming from the crust or the mantle and segregating in magma conduits, crust-mantle interaction, upwelling of the asthenosphere, and metasomatism. All these processes occur in the lithospheric mantle and are often related to the subduction of crustal rocks to mantle depths. In fact, during subduction, fluids and melts are released from the slab and can interact with the overlying mantle, making the study of deep melts in this environment crucial to understanding mantle heterogeneity and crust-mantle interaction. The aim of this thesis is precisely to better constrain how such processes take place by directly studying the melt trapped as primary inclusions in pyroxenites and eclogites. The Bohemian Massif, the crystalline core of the Variscan belt, is targeted for these purposes because it contains orogenic peridotites with layers of pyroxenite and eclogite, and other mafic rocks enclosed in felsic high-pressure and ultra-high-pressure crustal rocks. Within this massif, mafic rocks from two areas were selected: the garnet clinopyroxenite in orogenic peridotite of the Granulitgebirge and the ultra-high-pressure eclogite in the diamond-bearing gneisses of the Erzgebirge. In both areas, primary melt inclusions were recognized in the garnet, ranging in size from 2 to 25 µm and showing different degrees of crystallization, from glassy to polycrystalline.
The inclusions were investigated with micro-Raman spectroscopy and EDS mapping; the mineral assemblage comprises kumdykolite, phlogopite, quartz, kokchetavite, a phase with a main Raman peak at 430 cm-1, a phase with a main Raman peak at 412 cm-1, white mica, and calcite, with some variability in relative abundance depending on the case study. In the Granulitgebirge, osumilite and pyroxene are also present, whereas calcite is one of the main phases in the Erzgebirge. The presence of glass and the mineral assemblage in the nanogranitoids suggest that they were former droplets of melt trapped in the garnet while it was growing. Glassy inclusions and re-homogenized nanogranitoids show a silicate melt that is granitic, hydrous, high in alkalis and weakly peraluminous. In both case studies the melt is also enriched in Cs, Pb, Rb, U, Th, Li and B, suggesting the involvement of a crustal component, i.e. white mica (the main carrier of Cs, Pb, Rb, Li and B), and of a fluid (Cs, Th and U) in the melt-producing reaction. The whole rock in both cases consists mainly of garnet and clinopyroxene with, in the Erzgebirge samples, the additional presence of quartz both in the matrix and as a polycrystalline inclusion in the garnet. The latter is interpreted as a quartz pseudomorph after coesite and occurs in the same microstructural position as the melt inclusions. Both rock types show a crustal and subduction zone signature, with garnet and clinopyroxene in equilibrium. Melt was likely present during the metamorphic peak of the rock, as it occurs in garnet.
Our data suggest that the process most likely responsible for the formation of the investigated rocks is a metasomatic reaction between a melt produced in the crust and mafic layers, located in the mantle wedge in the case of the Granulitgebirge and in the subducted continental crust itself in the Erzgebirge. Thus metasomatism in the first case took place in the mantle overlying the slab, whereas in the second case it took place in continental crust that already contained mafic layers before subduction. Moreover, the presence of former coesite in the same microstructural position as the melt inclusions in the Erzgebirge garnets suggests that metasomatism took place at ultra-high-pressure conditions.
Summarizing, in this thesis we provide new insights into the geodynamic evolution of the Bohemian Massif based on the study of melt inclusions in garnet in two different mafic rock types, combining the direct microstructural and geochemical investigation of the inclusions with the whole-rock and mineral geochemistry. We report for the first time data, directly extracted from natural rocks, on the metasomatic melt responsible for the metasomatism of several areas of the Bohemian Massif. Besides the two locations here investigated, belonging to the Saxothuringian Zone, a signature similar to the investigated melt is clearly visible in pyroxenite and peridotite of the T-7 borehole (again Saxothuringian Zone) and the durbachite suite located in the Moldanubian Zone.
Single-column data profiling
(2020)
The research area of data profiling consists of a large set of methods and processes to examine a given dataset and determine metadata about it. Typically, different data profiling tasks address different kinds of metadata, comprising either various statistics about individual columns (Single-column Analysis) or relationships among them (Dependency Discovery). Among the basic statistics about a column are data type, header, the number of unique values (the column's cardinality), maximum and minimum values, the number of null values, and the value distribution. Dependencies involve, for instance, functional dependencies (FDs), inclusion dependencies (INDs), and their approximate versions.
Data profiling has a wide range of conventional use cases, namely data exploration, cleansing, and integration. The produced metadata is also useful for database management and schema reverse engineering. Data profiling also has more novel use cases, such as big data analytics. The generated metadata describes the structure of the data at hand, how to import it, what it is about, and how much of it there is. Thus, data profiling can be considered an important preparatory task for many data analysis and mining scenarios, helping to assess which data might be useful and to reveal and understand a new dataset's characteristics.
In this thesis, the main focus is on the single-column analysis class of data profiling tasks. We study the impact and the extraction of three of the most important types of metadata about a column, namely the cardinality, the header, and the number of null values.
First, we present a detailed experimental study of twelve cardinality estimation algorithms. We classify the algorithms and analyze their efficiency, scaling far beyond the original experiments and testing theoretical guarantees. Our results highlight their trade-offs and point out the possibility of creating parallel or distributed versions of these algorithms to cope with the growing size of modern datasets.
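To give a flavour of this algorithm family, here is a hedged sketch of the classic K-Minimum-Values (KMV) estimator; it is illustrative only and not necessarily one of the twelve algorithms benchmarked in the study.

```python
import hashlib

def kmv_estimate(values, k=256):
    """K-Minimum-Values cardinality estimate.

    Hash every value into (0, 1); if the k-th smallest distinct hash
    is h_k, the distinct count is roughly (k - 1) / h_k, because k
    uniform points spread over (0, h_k) imply spacing h_k / k overall.
    """
    def h(v):
        digest = hashlib.md5(str(v).encode()).digest()
        return int.from_bytes(digest[:8], "big") / 2**64
    mins = sorted({h(v) for v in values})[:k]
    if len(mins) < k:
        return len(mins)          # fewer than k distinct values: exact
    return int(round((k - 1) / mins[-1]))

print(kmv_estimate(["a", "b", "a"]))   # exact for small sets
est = kmv_estimate(range(10_000))      # close to the true count of 10,000
```

Only the k smallest hashes are retained, so the memory footprint is constant regardless of the column size, which is the trade-off such sketches make against exact counting.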
Then, we present a fully automated, multi-phase system to discover human-understandable, representative, and consistent headers for a target table in cases where headers are missing, meaningless, or unrepresentative for the column values. Our evaluation on Wikipedia tables shows that 60% of the automatically discovered schemata are exact and complete. Considering more schema candidates, top-5 for example, increases this percentage to 72%.
Finally, we formally and experimentally show the ghost and fake FDs phenomenon caused by FD discovery over datasets with missing values. We propose two efficient scores, probabilistic and likelihood-based, for estimating the genuineness of a discovered FD. Our extensive set of experiments on real-world and semi-synthetic datasets shows the effectiveness and efficiency of these scores.
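The root of the phenomenon can be illustrated with a minimal sketch: the same functional dependency check gives different answers depending on how NULLs are compared. The data and names below are illustrative assumptions; the thesis' scores go well beyond this simple boolean check.

```python
def fd_holds(rows, lhs, rhs, null_equals_null=True):
    """Check the FD lhs -> rhs on a list of dicts; None marks a NULL.

    With null_equals_null=False every NULL is treated as a distinct
    value, so rows with NULLs on the left-hand side can never clash.
    """
    fresh = iter(range(10**9))  # unique stand-ins for distinct NULLs
    def cell(r, a):
        v = r[a]
        if v is None and not null_equals_null:
            return ("null", next(fresh))
        return v
    mapping = {}
    for r in rows:
        key = tuple(cell(r, a) for a in lhs)
        val = tuple(cell(r, a) for a in rhs)
        if mapping.setdefault(key, val) != val:
            return False
    return True

rows = [
    {"zip": "14482", "city": "Potsdam"},
    {"zip": None,    "city": "Berlin"},
    {"zip": None,    "city": "Hamburg"},
]
# zip -> city looks valid when distinct NULLs hide the clash ("fake" FD) ...
print(fd_holds(rows, ["zip"], ["city"], null_equals_null=False))  # True
# ... and invalid when NULLs compare equal.
print(fd_holds(rows, ["zip"], ["city"], null_equals_null=True))   # False
```

Whether the FD is genuine depends on the unknown values behind the NULLs, which is exactly the uncertainty the proposed genuineness scores quantify.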
Lately, the integration of upconverting nanoparticles (UCNP) into industrial, biomedical and scientific applications has been accelerating, owing to the exceptional photophysical properties that UCNP offer. Some of the most promising applications lie in the fields of medicine and bioimaging, due to advantages such as deeper tissue penetration, reduced optical background, the possibility of multicolor imaging, and lower toxicity compared to many known luminophores. However, some questions remain unanswered, regarding not only the fundamental photophysical processes but also the interaction of UCNP with other luminescent reporters frequently used for bioimaging and with biological media. These issues were the primary motivation for the presented work.
This PhD thesis investigated several aspects of various properties and possibilities for bioapplications of Yb3+,Tm3+-doped NaYF4 upconverting nanoparticles. First, the effect of Gd3+ doping on the structure and upconverting behaviour of the nanocrystals was assessed. The ageing process of the UCNP in cyclohexane was studied over 24 months on the samples with different Gd3+ doping concentrations. Structural information was gathered by means of X-ray diffraction (XRD), transmission electron microscopy (TEM), dynamic light scattering (DLS), and discussed in relation to spectroscopic results, obtained through multiparameter upconversion luminescence studies at various temperatures (from 4 K to 295 K). Time-resolved and steady-state emission spectra recorded over this ample temperature range allowed for a deeper understanding of photophysical processes and their dependence on structural changes of UCNP.
A new protocol using a commercially available high boiling solvent allowed for faster and more controlled production of very small and homogeneous UCNP with better photophysical properties, and the advantages of a passivating NaYF4 shell were shown.
Förster resonance energy transfer (FRET) between four different species of NaYF4: Yb3+, Tm3+ UCNP (synthesized using the improved protocol) and a small organic dye was studied. The influence of UCNP composition and the proximity of Tm3+ ions (donors in the process of FRET) to acceptor dye molecules have been assessed. The brightest upconversion luminescence was observed in the UCNP with a protective inert shell. UCNP with Tm3+ ions only in the shell were the least bright, but showed the most efficient energy transfer.
In the final part, two surface modification strategies were applied to make UCNP water-soluble, which simultaneously allowed for linking them via a non-toxic copper-free click reaction to liposomes, which served as models for further cell experiments. The results were assessed on a confocal microscope system, which was made possible by the lesser-known downshifting properties of Yb3+, Tm3+-doped UCNP. Preliminary antibody-staining tests using two primary and one dye-labelled secondary antibody were performed on MDCK-II cells.
Over the last decades, the Arctic regions of the earth have warmed at a rate 2–3 times faster than the global average, a phenomenon called Arctic Amplification. A complex, non-linear interplay of physical processes and unique peculiarities of the Arctic climate system is responsible for this, but the relative role of individual processes remains debated. This thesis focuses on climate change and related processes on Svalbard, an archipelago in the North Atlantic sector of the Arctic, which is shown to be a "hotspot" for the amplified recent warming during winter. In this highly dynamic region, both oceanic and atmospheric large-scale transports of heat and moisture interact with spatially inhomogeneous surface conditions, and the corresponding energy exchange strongly shapes the atmospheric boundary layer. In the first part, pan-Svalbard gradients in the surface air temperature (SAT) and sea ice extent (SIE) in the fjords are quantified and characterized. This analysis is based on observational data from meteorological stations, operational sea ice charts, and hydrographic observations from the adjacent ocean, which cover the 1980–2016 period. It is revealed that typical estimates of SIE during late winter range from 40–50% (80–90%) in the western (eastern) parts of Svalbard. However, strong SAT warming during winter, on the order of 2–3 K per decade, dictates excessive ice loss, leaving fjords in the western parts essentially ice-free in recent winters. It is further demonstrated that warm water currents on the west coast of Svalbard, as well as meridional winds, contribute to regional differences in the SIE evolution. In particular, the proximity to warm water masses of the West Spitsbergen Current can explain 20–37% of SIE variability in fjords on west Svalbard, while meridional winds and associated ice drift may regionally explain 20–50% of SIE variability in the north and northeast. Strong SAT warming has overruled these impacts in recent years, though.
In the next part of the analysis, the contribution of large-scale atmospheric circulation changes to the Svalbard temperature development over the last 20 years is investigated. A study employing kinematic backward air trajectories for Ny-Ålesund reveals a shift in the source regions of lower-tropospheric air over time for both the winter and the summer season. In winter, air in the recent decade is more often of lower-latitude Atlantic origin and less frequently of Arctic origin. This affects heat and moisture advection towards Svalbard, potentially modifying clouds and longwave downward radiation in that region. A closer investigation indicates that this shift during winter is associated with a strengthened Ural blocking high and Icelandic low, and contributes about 25% to the observed winter warming on Svalbard over the last 20 years. Conversely, circulation changes during summer include a strengthened Greenland blocking high, which leads to more frequent cold air advection from the central Arctic towards Svalbard and less frequent air mass origins in the lower latitudes of the North Atlantic. Hence, circulation changes during winter are shown to have an amplifying effect on the recent warming on Svalbard, while summer circulation changes tend to mask warming.
An observational case study using upper air soundings from the AWIPEV research station in Ny-Ålesund during May–June 2017 underlines that such circulation changes during summer are associated with tropospheric anomalies in temperature, humidity and boundary layer height.
In the last part of the analysis, the regional representativeness of the changes described above for the broader Arctic is investigated. To this end, the terms of the diagnostic temperature equation in the Arctic-wide lower troposphere are examined for the ERA-Interim atmospheric reanalysis product. Significant positive trends in diabatic heating rates, consistent with latent heat transfer to the atmosphere over regions of increasing ice melt, are found for all seasons over the Barents/Kara Seas, and in individual months in the vicinity of Svalbard. The warm (cold) advection trends during winter (summer) on Svalbard introduced above are successfully reproduced. In winter, they are regionally confined to the Barents Sea and Fram Strait, between 70°–80°N, constituting a unique feature in the whole Arctic. Summer cold advection trends are confined to the area between eastern Greenland and Franz Josef Land, enclosing Svalbard.
Cleft exhaustivity
(2020)
In this dissertation, a series of experimental studies is presented which demonstrate that the exhaustive inference of focus-background it-clefts in English and their cross-linguistic counterparts in Akan, French, and German is neither robust nor systematic. The inter-speaker and cross-linguistic variability is accounted for with a discourse-pragmatic approach to cleft exhaustivity, in which, following Pollard & Yasavul (2016), the exhaustive inference is derived from an interaction with another layer of meaning, namely the existence presupposition encoded in clefts.
To investigate the reliability and stability of spherical harmonic models based on archeo-/paleomagnetic data, 2000 geomagnetic models were calculated. All models are based on the same data set but with randomized uncertainties. Comparison of these models to the geomagnetic field model gufm1 showed that large-scale magnetic field structures up to spherical harmonic degree 4 are stable throughout all models. By ranking all models according to the agreement of their dipole coefficients with gufm1, more realistic uncertainty estimates were derived than those provided by the authors of the data.
The derived uncertainty estimates were used in further modelling, which combines archeo-/paleomagnetic and historical data. The huge differences in data count, accuracy, and coverage between these two very different data sources made it necessary to introduce a time-dependent spatial damping, constructed to constrain the spatial complexity of the model. Finally, 501 models were calculated, treating each data point as a Gaussian random variable whose mean is the original value and whose standard deviation is its uncertainty. The final model, arhimag1k, is the mean of the 501 sets of Gauss coefficients. arhimag1k fits different dependent and independent data sets well. It shows an early reverse flux patch at the core-mantle boundary between 1000 AD and 1200 AD at the location of today's South Atlantic Anomaly. Another interesting feature is a high-latitude flux patch over Greenland between 1200 and 1400 AD. The dipole moment shows constant behaviour between 1600 and 1840 AD.
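The ensemble strategy itself, perturbing each datum within its quoted uncertainty, refitting, and averaging the coefficient sets, can be sketched as follows. A quadratic least-squares fit stands in for the spherical harmonic inversion, and all names are illustrative assumptions.

```python
import numpy as np

def ensemble_fit(x, y, sigma, n_models=501, seed=0):
    """Refit n_models times with data perturbed by its uncertainties;
    return the mean and spread of the coefficient sets."""
    rng = np.random.default_rng(seed)
    A = np.vander(x, 3)                                    # toy basis: x^2, x, 1
    coeff_sets = []
    for _ in range(n_models):
        y_pert = y + rng.normal(0.0, sigma, size=y.shape)  # one draw per datum
        c, *_ = np.linalg.lstsq(A, y_pert, rcond=None)
        coeff_sets.append(c)
    coeff_sets = np.array(coeff_sets)
    return coeff_sets.mean(axis=0), coeff_sets.std(axis=0)

x = np.linspace(-1.0, 1.0, 50)
y = 2.0 * x**2 + 1.0                # "true" signal with coefficients (2, 0, 1)
mean_c, std_c = ensemble_fit(x, y, sigma=0.1)
```

The coefficient spread across the ensemble gives an uncertainty estimate for each coefficient, which is the role the 501-member ensemble plays for the Gauss coefficients of arhimag1k.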
In the second part of the thesis, four new paleointensities from four different lava flows on the island of Fogo (Cape Verde) are presented. The data are fitted well by arhimag1k, with the exception of the value of 28.3 microtesla at 1663, which is approximately 10 microtesla lower than the model suggests.
The goal of this thesis was to thoroughly investigate the behavior of multimode fibres to aid the development of modern and forthcoming fibre-fed spectrograph systems. Based on the Eigenmode Expansion Method, a field propagation model was created that can emulate effects in fibres relevant for astronomical spectroscopy, such as modal noise, scrambling, and focal ratio degradation. These effects are of major concern for any fibre-coupled spectrograph used in astronomical research. Changes in the focal ratio, modal distribution of light or non-perfect scrambling limit the accuracy of measurements, e.g. the flux determination of the astronomical object, the sky-background subtraction and detection limit for faint galaxies, or the spectral line position accuracy used for the detection of extra-solar planets.
Usually, fibres used for astronomical instrumentation are characterized empirically through tests. The results of this work make it possible to predict fibre behaviour under various conditions, using sophisticated software tools to simulate the waveguide behaviour and mode transport of fibres.
The simulation environment works with two software interfaces. The first is the mode solver module FemSIM from RSoft, used to calculate all the propagation modes and effective refractive indices of a given system. The second interface consists of Python scripts which enable the simulation of the near- and far-field outputs of a given fibre. The characteristics of the input field can be manipulated to emulate real conditions. Focus variations, spatial translation, angular fluctuations, and disturbances through the mode coupling factor can also be simulated.
To date, complete coherent propagation or complete incoherent propagation can be simulated. Partial coherence was not addressed in this work. Another limitation of the simulations is that they work exclusively for the monochromatic case and that the loss coefficient of the fibres is not considered. Nevertheless, the simulations were able to match the results of realistic measurements.
To test the validity of the simulations, real fibre measurements were used for comparison. Two fibres with different cross-sections were characterized. The first fibre had a circular cross-section, and the second one had an octagonal cross-section. The utilized test-bench was originally developed for the prototype fibres of the 4MOST fibre feed characterization. It allowed for parallel laser beam measurements, light cone measurements, and scrambling measurements. Through the appropriate configuration, the acquisition of the near- and/or far-field was feasible.
By means of modal noise analysis, it was possible to compare the near-field speckle patterns of simulations and measurements as a function of the input angle. The spatial frequencies that originate from the modal interference could be analyzed by using the power spectral density analysis. Measurements and simulations yielded similar results. Measurements with induced modal scrambling were compared to simulations using incoherent propagation and once again similar results were achieved. Through both measurements and simulations, the enlargement of the near-field distribution could be observed and analyzed. The simulations made it possible to explain incoherent intensity fluctuations that appear in real measurements due to the field distribution of the active propagation modes.
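The qualitative difference between coherent and incoherent propagation can be sketched with a toy superposition. Sinusoidal patterns with random phases stand in for the true fibre modes here, so this is only an illustration of why speckle contrast appears in the coherent case, not the thesis' Eigenmode Expansion model.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 256)
# Toy transverse "modes" with random relative phases
modes = [np.sin((m + 1) * np.pi * x) for m in range(10)]
phases = rng.uniform(0.0, 2.0 * np.pi, size=10)

# Coherent: fields add, then intensity -> interference speckle
coherent = np.abs(sum(np.exp(1j * p) * E for p, E in zip(phases, modes))) ** 2
# Incoherent: intensities add -> smooth distribution
incoherent = sum(np.abs(E) ** 2 for E in modes)

contrast = lambda I: I.std() / I.mean()
```

The speckle contrast (standard deviation over mean of the intensity) is markedly higher for the coherent sum, mirroring how modal scrambling suppresses modal noise in the measurements.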
By using a Voigt analysis of the far-field distribution, it was possible to separate the modal diffusion component in order to compare it with the simulations. Through an appropriate assessment, the modal diffusion component as a function of the input angle could be translated into angular divergence. The simulations gave the minimal angular divergence of the system. From the mean difference between simulations and measurements, a figure of merit is derived which can be used to characterize the angular divergence of real fibres using the simulations. Furthermore, it was possible to simulate light cone measurements. Due to the overall consistent results, it can be stated that the simulations represent a good tool to assist the fibre characterization process for fibre-fed spectrograph systems.
This work was possible through the BMBF Grant 05A14BA1 which was part of the phase A study of the fibre system for MOSAIC, a multi-object spectrograph for the Extremely Large Telescope (ELT-MOS).