The "Lomonosov" space project is lead by Lomonosov Moscow State University in collaboration with the following key partners: Joint Institute for Nuclear Research, Russia, University of California, Los Angeles (USA), University of Pueblo (Mexico), Sungkyunkwan University (Republic of Korea) and with Russian space industry organi-zations to study some of extreme phenomena in space related to astrophysics, astroparticle physics, space physics, and space biology. The primary goals of this experiment are to study:
- Ultra-high energy cosmic rays (UHECR) in the energy range of the Greisen-Zatsepin-Kuzmin (GZK) cutoff;
- Ultraviolet (UV) transient luminous events in the upper atmosphere;
- Multi-wavelength study of gamma-ray bursts in visible, UV, gamma, and X-rays;
- Energetic trapped and precipitated radiation (electrons and protons) at low-Earth orbit (LEO) in connection with global geomagnetic disturbances;
- Multicomponent radiation doses along the orbit of the spacecraft under different geomagnetic conditions and testing of space segments of optical observations of space debris and other space objects;
- Instrumental vestibular-sensor conflict of zero-gravity phenomena during space flight.
This paper gives a general description of both the scientific goals of the project and the scientific equipment on board the satellite. The following papers of this issue are devoted to detailed descriptions of the scientific instruments.
Nanostructured inorganic materials are routinely synthesized by the use of templates. Depending on the synthesis conditions of the product material, either “soft” or “hard” templates can be applied. For sol-gel processes, usually “soft” templating techniques are employed, while “hard” templates are used for high-temperature synthesis pathways. In classical templating approaches, the template has the unique role of structure-directing agent, in the sense that it does not participate in the chemical formation of the resulting material. This work investigates a new templating pathway to nanostructured materials, where the template is also a reagent in the formation of the final material. This concept is described as “reactive templating” and opens a synthetic path toward materials which cannot be synthesized on a nanometre scale by classical templating approaches. Metal nitrides are one such class of materials. They are usually produced by the conversion of metals or metal oxides in ammonia flow at high temperature (T > 1000°C), which makes the application of classical templating techniques difficult. Graphitic carbon nitride, g-C3N4, is, beyond its fundamental and theoretical importance, probably one of the most promising materials to complement carbon in materials science, and much effort has been put into its synthesis. A simple polyaddition/elimination reaction path at high temperature (T = 550°C) allows the polymerisation of cyanamide toward graphitic carbon nitride solids. By hard templating, using nanostructured silica or aluminium oxide as nanotemplates, a variety of nanostructured graphitic carbon nitrides such as nanorods, nanotubes, and meso- and macroporous powders could be obtained by nanocasting or nanocoating. Due to the special semiconducting properties of the graphitic carbon nitride matrix, the nanostructured graphitic carbon nitrides show unexpected catalytic activity for the activation of benzene in Friedel-Crafts type reactions, making this material an interesting metal-free catalyst. Furthermore, due to the chemical composition of g-C3N4 and the fact that it is totally decomposed at temperatures between 600°C and 800°C even under inert atmosphere, g-C3N4 was shown to be a good nitrogen donor for the synthesis of early transition metal nitrides at high temperatures. Thus, using the nanostructured carbon nitrides as “reactive templates” or “nanoreactors”, various metal nitride nanostructures, such as nanoparticles and porous frameworks, could be obtained at high temperature. In this approach the carbon nitride nanostructure played both the role of the nitrogen source and of the exotemplate, imprinting its size and shape on the resulting metal nitride nanostructure.
In its practical outlook, interdisciplinary colonial discourse theory is often criticized for its totalizing tendencies regarding the structure of the examined discourse and the power relations prevailing in this framework. As a result of this structural totalization, the subjects concerned are disempowered and degraded to mere passive objects incapable of raising their voices within the discourse. Based on this justified criticism, this thesis investigates the role colonial subjects played in the emergence and distribution, as well as in the questioning and critique, of the colonial discourse during the initial phase of British colonialism in West Africa. The focus lies on three themes relevant to the period between 1874 and 1914: the Ashanti Wars, the creation of an educational system, and the issue of the so-called "Europeanized Africans." Newspapers published by the colonial elite serve as the central source material for reconstructing African perspectives on these subjects. First, the discursive trajectory of the first two themes is reconstructed; it is then shown why the initial support of the elite gradually declined towards the end of the century. Eventually, the analyzed tendencies culminated in the emergence of the "African Regeneration" discourse, which was able to reverse the colonial discourse's basic assumptions, at least on a theoretical level. Consequently, Africans were portrayed as the "civilizers" of Europe. On the structural level, however, this discourse likewise employed a totalizing picture of African and European societies, respectively.
This article investigates a public debate in Germany that put a special spotlight on the interaction of standard language ideologies with social dichotomies, centering on the question of whether Kiezdeutsch, a new way of speaking in multilingual urban neighbourhoods, is a legitimate German dialect. Based on a corpus of emails and postings to media websites, I analyse central topoi in this debate and an underlying narrative on language and identity. Central elements of this narrative are claims of cultural elevation and cultural unity for an idealised standard language 'High German', a view of German dialects as part of a national folk culture, and the construction of an exclusive in-group of 'German' speakers who own this language and its dialects. The narrative provides a potent conceptual frame for the Othering of Kiezdeutsch and its speakers, and for the projection of social and sometimes racist delimitations onto the linguistic plane.
"Unavoidably side by side"
(2011)
This MA thesis examines novels by Native American authors of the 20th century in regard to their representation of conflicts between the indigenous population of North America and the dominant Christian religion of the mainstream society. Several major points can be followed throughout the century, which have been presented repeatedly and discussed from various perspectives. Historical conflicts of colonization and Christianization, as well as the perpetual question of Native American Christians -- 'How can you go to a church that killed so many Indians?' [Alexie, Reservation Blues] -- are debated in these novels and analyzed in this paper. Furthermore, I have tried to position and classify the works according to their representation of these problems within literary history. Following Charles Larson's chronological and thematic examination of American Indian fiction, the categories rejection, (syncretic) adaptation, and postmodern-ironic revision are introduced to describe the various forms of representation. On the basis of five main examples, we can observe an evolution of contemporary Native American literature, which has liberated itself from the narrow definition of the 1960s and 1970s in favor of a broader and more varied approach. In so doing, and by means of intercultural and intertextual referencing, postmodern irony, and a new Indian self-confidence, it has also taken a new position towards the religion of the former colonizer.
We present novel experimental evidence on the availability and the status of exhaustivity inferences with focus partitioning in German, English, and Hungarian. Results suggest that German and English focus-background clefts and Hungarian focus share important properties (É. Kiss 1998, 1999; Szabolcsi 1994; Percus 1997; Onea & Beaver 2009). Those constructions are anaphoric devices triggering an existence presupposition. EXH-inferences are not obligatory in such constructions in English, German, or Hungarian, contrary to some previous literature (Percus 1997; Büring & Križ 2013; É. Kiss 1998), but in line with pragmatic analyses of EXH-inferences in clefts (Horn 1981, 2016; Pollard & Yasavul 2016). The cross-linguistic differences in the distribution of EXH-inferences are attributed to properties of the Hungarian number marking system.
.NET Gadgeteer Workshop
(2013)
A new sedimentary sequence from Lago di Venere on Pantelleria Island, located in the Strait of Sicily between Tunisia and Sicily, was recovered. The lake is located in the coastal infra-Mediterranean vegetation belt at 2 m a.s.l. Pollen, charcoal and sedimentological analyses are used to explore linkages among vegetation, fire and climate at a decadal scale over the past 1200 years. A dry period from AD 800 to 1000 that corresponds to the 'Medieval Warm Period' (MWP) is inferred from sedimentological analysis. The high content of carbonate recorded in this period suggests a dry phase, when the ratio of evaporation/precipitation was high. During this period the island was dominated by thermophilous and drought-tolerant taxa, such as Quercus ilex, Olea, Pistacia and Juniperus. A marked shift in the sediment properties is recorded at AD 1000, when carbonate content became very low, suggesting wetter conditions until AD 1850-1900. Broadly, this period coincides with the 'Little Ice Age' (LIA), which was characterized by wetter and colder conditions in Europe. During this time rather mesic conifers (i.e. Pinus pinaster), shrubs and herbs (e.g. Erica arborea and Selaginella denticulata) expanded, whereas more drought-adapted species (e.g. Q. ilex) declined. Charcoal data suggest enhanced fire activity during the LIA, probably as a consequence of anthropogenic burning and/or more flammable fuel (e.g. resinous Pinus biomass). The last century was characterized by a shift to high carbonate content, indicating a change towards drier conditions, and re-expansion of Q. ilex and Olea. The post-LIA warming is in agreement with historical documents and meteorological time series. Vegetation dynamics were co-determined by agricultural activities on the island. Anthropogenic indicators (e.g. Cerealia-type, Sporormiella) reveal the importance of crops and grazing on the island. Our pollen data suggest that extensive logging caused the local extinction of deciduous Quercus pubescens around AD 1750.
The Earth’s shallow subsurface with sedimentary cover acts as a waveguide to any incoming wavefield. Within the framework of my thesis, I focused on the characterization of this shallow subsurface within tens to a few hundred meters of sediment cover. I imaged the 1D seismic shear-wave velocity (and, where possible, the 1D compressional-wave velocity). This information is not only required for seismic risk assessment, geotechnical engineering, and microzonation activities, but also for exploration and global seismology, where site effects are often neglected in seismic waveform modeling.
First, the conventional frequency-wavenumber (f-k) technique is used to derive the dispersion characteristics of the propagating surface waves recorded using distinct arrays of seismometers in 1D and 2D configurations. Further, the cross-correlation technique is applied to seismic array data to estimate the Green's function between receiver pairs, treating one station as a virtual source and the other as a receiver. Assuming a 1D medium, the estimated cross-correlation Green's functions are sorted by interstation distance into a virtual 1D active seismic experiment. The f-k technique is then used to estimate the dispersion curves. This integrated analysis is important for the interpretation of a large bandwidth of the phase velocity dispersion curves and therefore improves the resolution of the estimated 1D Vs profile.
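As a minimal illustration of the interferometric step described above (a sketch only, not the processing chain used in the thesis; preprocessing such as spectral whitening and temporal normalization is omitted, and all names are illustrative), cross-correlating two normalized noise records approximates the inter-station Green's function when many such correlograms are stacked:

```python
import numpy as np

def noise_crosscorrelation(trace_a, trace_b, max_lag):
    """Cross-correlate two ambient-noise records (equal length, in samples).

    Stacking such correlograms over many time windows converges, under the
    assumptions of seismic interferometry, to the inter-station Green's
    function, with one station acting as a virtual source."""
    a = (trace_a - trace_a.mean()) / trace_a.std()
    b = (trace_b - trace_b.mean()) / trace_b.std()
    cc = np.correlate(a, b, mode="full") / len(a)
    mid = len(cc) // 2  # zero-lag index
    return cc[mid - max_lag : mid + max_lag + 1]
```

Sorting such virtual-source gathers by interstation distance yields the virtual 1D active experiment to which the f-k analysis is then applied.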
Second, a new theoretical approach based on the Diffuse Field Assumption (DFA) is used for the interpretation of the observed microtremor H/V spectral ratio. The theory is further extended in this research work to include not only the interpretation of H/V measured at the surface, but also H/V measured at depth and in marine environments. Modeling and inversion of synthetic H/V spectral ratio curves on simple predefined geological structures show an almost perfect recovery of the model parameters (mainly Vs and, to a lesser extent, Vp). These results are obtained after information from a receiver at depth has been included in the inversion.
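For orientation, the single-station H/V observable referred to above can be sketched as the ratio of the combined horizontal to the vertical amplitude spectrum (a simplified sketch; the smoothing and windowing that matter in practice, as well as the DFA-based interpretation itself, are omitted):

```python
import numpy as np

def hv_spectral_ratio(north, east, vert, dt):
    """Classical single-station H/V: quadratic mean of the horizontal
    amplitude spectra divided by the vertical amplitude spectrum."""
    freqs = np.fft.rfftfreq(len(vert), d=dt)
    n, e, z = (np.abs(np.fft.rfft(c)) for c in (north, east, vert))
    h = np.sqrt((n**2 + e**2) / 2.0)
    return freqs, h / z
```

Under the DFA, this ratio is related to the imaginary parts of the Green's function at the receiver, which is what enables a physics-based inversion for Vs (and, to a lesser extent, Vp).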
Finally, the Rayleigh wave phase velocity information, estimated from array data, and the H/V(z, f) spectral ratio, estimated from single-station data, are combined and inverted for the velocity profile. The obtained results indicate an improved depth resolution in comparison to estimations using the phase velocity dispersion curves alone. The overall estimated sediment thickness is comparable to estimations obtained by inverting the full microtremor H/V spectral ratio.
Monolithic perovskite silicon tandem solar cells can overcome the theoretical efficiency limit of silicon solar cells. This requires an optimum bandgap, high quantum efficiency, and high stability of the perovskite. Herein, a silicon heterojunction bottom cell is combined with a perovskite top cell with an optimum bandgap of 1.68 eV in a planar p-i-n tandem configuration. A methylammonium-free FA0.75Cs0.25Pb(I0.8Br0.2)3 perovskite with high Cs content is investigated for improved stability. A 10% molarity increase to 1.1 M of the perovskite precursor solution results in approximately 75 nm thicker absorber layers and a 0.7 mA cm⁻² higher short-circuit current density. With the optimized absorber, tandem devices reach a high fill factor of 80% and up to 25.1% certified efficiency. The unencapsulated tandem device shows an efficiency improvement of 2.3% (absolute) over 5 months, showing the robustness of the absorber against degradation. Moreover, a photoluminescence quantum yield analysis reveals that with adapted charge transport materials and surface passivation, along with improved antireflection measures, the high-bandgap perovskite absorber has the potential for 30% tandem efficiency in the near future.
The Sea of Marmara, in northwestern Turkey, is a transition zone where the dextral North Anatolian Fault zone (NAFZ) propagates westward from the Anatolian Plate to the Aegean Sea Plate. The area is of interest in the context of the seismic hazard of Istanbul, a metropolitan area with about 15 million inhabitants. Geophysical observations indicate that the crust is heterogeneous beneath the Marmara basin, but a detailed characterization of the crustal heterogeneities is still missing. To assess if and how crustal heterogeneities are related to the NAFZ segmentation below the Sea of Marmara, we develop new crustal-scale 3-D density models which integrate geological and seismological data and are additionally constrained by 3-D gravity modeling. For the latter, we use two different gravity datasets: global satellite data and local marine gravity observations. Considering the two different datasets and the general non-uniqueness in potential field modeling, we suggest three possible “end-member” solutions that are all consistent with the observed gravity field and illustrate the spectrum of possible solutions. These models indicate that the observed gravitational anomalies originate from significant density heterogeneities within the crust. Two layers of sediments, one syn-kinematic and one pre-kinematic with respect to the formation of the Sea of Marmara, are underlain by a heterogeneous crystalline crust. A felsic upper crystalline crust (average density of 2720 kg m⁻³) and an intermediate to mafic lower crystalline crust (average density of 2890 kg m⁻³) appear to be cross-cut by two large, dome-shaped mafic high-density bodies (density of 2890 to 3150 kg m⁻³) of considerable thickness above a rather uniform lithospheric mantle (3300 kg m⁻³). The spatial correlation between two major bends of the main Marmara fault and the location of the high-density bodies suggests that the distribution of lithological heterogeneities within the crust controls the rheological behavior along the NAFZ and, consequently, may influence fault segmentation and thus seismic hazard assessment in the region.
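To give a sense of scale for why such bodies are detectable (a textbook approximation, not part of the modeling workflow above): the gravity effect of an infinite horizontal slab of thickness \(h\) and density contrast \(\Delta\rho\) is

\[ \Delta g = 2\pi G\,\Delta\rho\,h, \]

so a body 1 km thick and 300 kg m⁻³ denser than its surroundings produces roughly \(4.2\times10^{-10}\times 300\times 1000 \approx 1.3\times10^{-4}\ \mathrm{m\,s^{-2}} \approx 13\) mGal, well above the noise level of both satellite-derived and marine gravity data.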
The study of outcrop modeling is located at the interface between two fields of expertise, Sedimentology and Computing Geoscience, which respectively investigate and simulate the geological heterogeneity observed in the sedimentary record. Over the past years, modeling tools and techniques have been constantly improved. In parallel, the study of Phanerozoic carbonate deposits has emphasized the common occurrence of a random facies distribution within a single depositional domain. Although both fields of expertise are intrinsically linked during outcrop simulation, their respective advances have not been combined in the literature to enhance carbonate modeling studies. The present study re-examines the modeling strategy adapted to the simulation of shallow-water carbonate systems, based on a close relationship between field sedimentology and modeling capabilities. In the present study, the evaluation of three commonly used algorithms, Truncated Gaussian Simulation (TGSim), Sequential Indicator Simulation (SISim), and Indicator Kriging (IK), was performed for the first time using visual and quantitative comparisons on an ideally suited carbonate outcrop. The results show that the heterogeneity of carbonate rocks cannot be fully simulated using a single algorithm. The operating mode of each algorithm involves capabilities as well as drawbacks that cannot match all field observations carried out across the modeling area. Two end members in the spectrum of carbonate depositional settings, a low-angle Jurassic ramp (High Atlas, Morocco) and a Triassic isolated platform (Dolomites, Italy), were investigated to obtain a complete overview of the geological heterogeneity in shallow-water carbonate systems. Field sedimentology and statistical analysis of the type, morphology, distribution, and association of carbonate bodies, combined with palaeodepositional reconstructions, yield similar results. At the basin scale (~1 km), facies associations, composed of facies recording similar depositional conditions, display linear and ordered transitions between depositional domains. In contrast, at the bedding scale (~0.1 km), individual lithofacies types show a mosaic-like distribution consisting of an arrangement of spatially independent lithofacies bodies along the depositional profile. The increase of spatial disorder from the basin to the bedding scale results from the influence of autocyclic factors on the transport and deposition of carbonate sediments. Scale-dependent types of carbonate heterogeneity are linked with the evaluation of the algorithms in order to establish a modeling strategy that considers both the sedimentary characteristics of the outcrop and the modeling capabilities. A surface-based modeling approach was used to model depositional sequences. Facies associations were populated using TGSim to preserve ordered trends between depositional domains. At the lithofacies scale, a fully stochastic approach with SISim was applied to simulate a mosaic-like lithofacies distribution. This new workflow is designed to improve the simulation of carbonate rocks by modeling each scale of heterogeneity individually. In contrast to simulation methods applied in the literature, the present study considers that the use of a single simulation technique is unlikely to correctly model the natural patterns and variability of carbonate rocks. The implementation of different techniques customized for each level of the stratigraphic hierarchy provides the essential computing flexibility to model carbonate systems. Closer feedback between advances in Sedimentology and Computing Geoscience should be promoted in future outcrop simulations to enhance 3-D geological models.
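The contrast between the two algorithm families can be illustrated with a toy example (illustrative only; the grid, smoothing, and thresholds are arbitrary assumptions): truncating a spatially correlated Gaussian field at fixed thresholds, as TGSim does, necessarily produces ordered facies belts, which is precisely why it suits the facies-association scale but not the mosaic-like lithofacies scale:

```python
import numpy as np
from scipy.signal import convolve2d

# Toy truncated-Gaussian simulation: smooth white noise to impose spatial
# correlation, then truncate at thresholds to obtain ordered facies bands.
rng = np.random.default_rng(42)
noise = rng.standard_normal((200, 200))
kernel = np.ones((15, 15)) / 15**2               # crude moving-average correlation
field = convolve2d(noise, kernel, mode="same", boundary="symm")
facies = np.digitize(field, bins=[-0.05, 0.05])  # three ordered facies: 0, 1, 2
```

Because the thresholds slice a single continuous field, facies 0 and 2 can never be in contact without facies 1 in between; indicator-based methods such as SISim impose no such ordering, which is what the lithofacies-scale mosaic requires.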
3D from 2D touch
(2013)
While interaction with computers used to be dominated by mice and keyboards, new types of sensors now allow users to interact through touch, speech, or using their whole body in 3D space. These new interaction modalities are often referred to as "natural user interfaces" or "NUIs." While 2D NUIs have experienced major success on billions of mobile touch devices sold, 3D NUI systems have so far been unable to deliver a mobile form factor, mainly due to their use of cameras. The fact that cameras require a certain distance from the capture volume has prevented 3D NUI systems from reaching the flat form factor mobile users expect. In this dissertation, we address this issue by sensing 3D input using flat 2D sensors. The systems we present observe the input from 3D objects as 2D imprints upon physical contact. By sampling these imprints at very high resolutions, we obtain the objects' textures. In some cases, a texture uniquely identifies a biometric feature, such as the user's fingerprint. In other cases, an imprint stems from the user's clothing, such as when walking on multitouch floors. By analyzing from which part of the 3D object the 2D imprint results, we reconstruct the object's pose in 3D space. While our main contribution is a general approach to sensing 3D input on 2D sensors upon physical contact, we also demonstrate three applications of our approach. (1) We present high-accuracy touch devices that allow users to reliably touch targets that are a third of the size of those on current touch devices. We show that different users and 3D finger poses systematically affect touch sensing, which current devices perceive as random input noise. We introduce a model for touch that compensates for this systematic effect by deriving the 3D finger pose and the user's identity from each touch imprint. We then investigate this systematic effect in detail and explore how users conceptually touch targets. Our findings indicate that users aim by aligning visual features of their fingers with the target. We present a visual model for touch input that eliminates virtually all systematic effects on touch accuracy. (2) From each touch, we identify users biometrically by analyzing their fingerprints. Our prototype Fiberio integrates fingerprint scanning and a display into the same flat surface, solving a long-standing problem in human-computer interaction: secure authentication on touchscreens. Sensing 3D input and authenticating users upon touch allows Fiberio to implement a variety of applications that traditionally require the bulky setups of current 3D NUI systems. (3) To demonstrate the versatility of 3D reconstruction on larger touch surfaces, we present a high-resolution pressure-sensitive floor that resolves the texture of objects upon touch. Using the same principles as before, our system GravitySpace analyzes all imprints and identifies users based on their shoe soles, detects furniture, and enables accurate touch input using feet. By classifying all imprints, GravitySpace detects the users' body parts that are in contact with the floor and then reconstructs their 3D body poses using inverse kinematics. GravitySpace thus enables a range of applications for future 3D NUI systems based on a flat sensor, such as smart rooms in future homes. We conclude this dissertation by projecting into the future of mobile devices. Focusing on the mobility aspect of our work, we explore how NUI devices may one day augment users directly in the form of implanted devices.
We present results of full 3D hydrodynamical and radiative transfer simulations of the colliding stellar winds in the massive binary system η Carinae. We accomplish this by applying the SimpleX algorithm for 3D radiative transfer on an unstructured Voronoi-Delaunay grid to recent 3D smoothed particle hydrodynamics (SPH) simulations of the binary colliding winds. We use SimpleX to obtain detailed ionization fractions of hydrogen and helium, in 3D, at the resolution of the original SPH simulations. We investigate several computational domain sizes and Luminous Blue Variable primary star mass-loss rates. We furthermore present new methods of visualizing and interacting with output from complex 3D numerical simulations, including 3D interactive graphics and 3D printing. While we initially focus on η Car, the methods employed can be applied to numerous other colliding wind (WR 140, WR 137, WR 19) and dusty 'pinwheel' (WR 104, WR 98a) binary systems. Coupled with 3D hydrodynamical simulations, SimpleX simulations have the potential to help determine the regions where various observed time-variable emission and absorption lines form in these unique objects.
Massive stars usually form in groups such as OB associations. Their fast stellar winds collectively sweep up the surrounding interstellar medium (ISM) to generate superbubbles. Observations suggest that the evolution of superbubbles in the surrounding ISM can be very irregular. Numerical simulations considering these conditions could help to understand the evolution of these superbubbles and to clarify the dynamics of these objects, as well as the difference between observed X-ray luminosities and those predicted by the standard model (Weaver et al. 1977).
We present 3D numerical simulations of the NGC6888 nebula considering the proper motion and the evolution of the star, from the red supergiant (RSG) to the Wolf-Rayet (WR) phase. Our simulations reproduce the limb-brightened morphology observed in [OIII] and X-ray emission maps. The synthetic maps computed by the numerical simulations show filamentary and clumpy structures produced by instabilities triggered in the interaction between the WR wind and the RSG shell.
Hepcidin-25 (Hep-25) plays a crucial role in the control of iron homeostasis. Since dysfunction of the hepcidin pathway leads to multiple diseases as a result of iron imbalance, hepcidin represents a potential target for the diagnosis and treatment of disorders of iron metabolism. Despite intense research in the last decade targeted at developing a selective immunoassay for iron disorder diagnosis and treatment and at better understanding the ferroportin-hepcidin interaction, questions remain. The key to resolving these underlying questions is exact knowledge of the 3D structure of native Hep-25. Since it was determined that the N-terminus, which is responsible for the bioactivity of Hep-25, contains a small Cu(II)-binding site known as the ATCUN motif, it was assumed that the Hep-25-Cu(II) complex is the native, bioactive form of hepcidin. This structure has thus far not been elucidated in detail. Owing to the lack of structural information on metal-bound Hep-25, little is known about its possible biological role in iron metabolism. Therefore, this work is focused on structurally characterizing metal-bound Hep-25 by NMR spectroscopy and molecular dynamics simulations. For the present work, a protocol was developed to prepare and purify properly folded Hep-25 in high quantities. In order to overcome the low solubility of Hep-25 at neutral pH, we introduced the C-terminal DEDEDE solubility tag. The metal binding was investigated through a series of NMR spectroscopic experiments to identify the most affected amino acids that mediate metal coordination. Based on the obtained NMR data, a structural calculation was performed in order to generate a model structure of the Hep-25-Ni(II) complex. The DEDEDE tag was excluded from the structural calculation due to a lack of NMR restraints. The dynamic nature and fast exchange of some of the amide protons with solvent reduced the overall number of NMR restraints available for a high-quality structure. The NMR data revealed that the 20 C-terminal Hep-25 amino acids experienced no significant conformational changes, compared to published results, as a result of a pH change from pH 3 to pH 7 and metal binding. A 3D model of the Hep-25-Ni(II) complex was constructed from NMR data recorded for the hexapeptide-Ni(II) complex and the Hep-25-DEDEDE-Ni(II) complex in combination with the fixed conformation of the 19 C-terminal amino acids. The NMR data of the Hep-25-DEDEDE-Ni(II) complex indicate that the ATCUN motif moves independently of the rest of the structure. The 3D model structure of metal-bound Hep-25 allows future work to elucidate hepcidin’s interaction with its receptor ferroportin and should serve as a starting point for the development of antibodies with improved selectivity.
The scientific drilling campaign PALEOVAN was conducted in the summer of 2010 as part of the International Continental Scientific Drilling Program (ICDP). The main goal of the campaign was the recovery of a sensitive climate archive in eastern Anatolia: the lacustrine deposits underneath the floor of Lake Van. The drilled core material was recovered from two locations, the Ahlat Ridge and the Northern Basin. A composite core was constructed from cored material of seven parallel boreholes at the Ahlat Ridge and covers an almost complete lacustrine history of Lake Van. The composite record offers sensitive climate proxies such as variations of total organic carbon, K/Ca ratios, or the relative abundance of arboreal pollen. These proxies revealed patterns that are similar to climate proxy variations from Greenland ice cores. Climate variations in Greenland ice cores have been dated by modelling the timing of the orbital forces that affect climate. Volatiles from melted ice aliquots are often taken as high-resolution proxies and provide a basis for fitting the corresponding temporal models.
The ICDP PALEOVAN scientific team fitted proxy data from the lacustrine drilling record to ice core data and constructed an age model. Embedded volcaniclastic layers had to be dated radiometrically in order to provide independent age constraints for the climate-stratigraphic age model. Solving this task by application of the 40Ar/39Ar method was the main objective of this thesis. Earlier efforts to apply 40Ar/39Ar dating resulted in inaccuracies that could not be explained satisfactorily.
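For reference, the age equation underlying the method (standard 40Ar/39Ar geochronology, not specific to this thesis) is

\[ t = \frac{1}{\lambda}\,\ln\!\left(1 + J\,\frac{^{40}\mathrm{Ar^{*}}}{^{39}\mathrm{Ar}_{K}}\right), \]

where \(\lambda\) is the total decay constant of \(^{40}\mathrm{K}\), \(^{40}\mathrm{Ar^{*}}\) is the radiogenic argon, \(^{39}\mathrm{Ar}_{K}\) is the argon produced from potassium during neutron irradiation, and \(J\) is the irradiation parameter determined from a co-irradiated mineral standard of known age. This dependence on a standard is why the Alder Creek sanidine standard reappears below, and why excess (non-radiogenic) \(^{40}\mathrm{Ar}\) or inherited crystals bias ages towards older values.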
The absence of K-rich feldspars in suitable tephra layers implied that feldspar crystals needed to be at least 500 μm in size in order to apply single-crystal 40Ar/39Ar dating. Some of the samples contained no crystals of this size, or only very few. In order to overcome this problem, this study applied a combined single-crystal and multi-crystal approach with different crystal fractions from the same sample. The preferred method, stepwise heating analysis of an aliquot of feldspar crystals, was applied to three samples. The Na-rich crystals and their young geological age required 20 mg of inclusion-free, non-corroded feldspars. Small sample volumes (usually 25 % aliquots of 5 cm3 of sample material – a spoonful of tephra) and the widespread presence of melt inclusions led to the application of combined single- and multigrain total fusion analyses. 40Ar/39Ar analyses on single crystals have the advantage of being able to monitor the presence of excess 40Ar and of detrital or xenocrystic contamination in the samples. Multigrain analyses may hide these effects. The results from the multigrain analyses are therefore discussed with respect to the findings from the respective cogenetic single-crystal ages. Some of the samples in this study were dated by 40Ar/39Ar on feldspar multigrain separates and (if available) in combination with only a few single crystals. 40Ar/39Ar ages from two of the samples deviated statistically from the age model; all other samples yielded identical ages. The deviating samples displayed older ages than those obtained from the age model. t-Tests compared radiometric ages with available age control points from various proxies and from the relative paleointensity of the Earth's magnetic field within a stratigraphic range of ± 10 m. Concordant age control points from different relative chronometers indicated that the deviations are a result of erroneous 40Ar/39Ar ages. The thesis argues for two potential reasons for these ages: (1) the irregular appearance of excess 40Ar from rare melt and fluid inclusions, and (2) the contamination of the samples with older crystals due to a rapid combination of assimilation and ejection.
Another aliquot of feldspar crystals that underwent separation for 40Ar/39Ar dating was investigated for geochemical inhomogeneities. Magmatic zoning is ubiquitous in the volcaniclastic feldspar crystals. Four different types of magmatic zoning were detected: compositional zoning (C-type), pseudo-oscillatory zoning of trace element concentrations (PO-type), chaotic and patchy zoning of major and trace element concentrations (R-type), and concentric zoning of trace elements (CC-type). Samples with deviating 40Ar/39Ar ages showed C-type zoning, R-type zoning, or a mix of different zoning types (C-type and PO-type). Feldspars showing PO-type zoning typically represent the smallest grain-size fractions in the samples. The constant major element compositions of these crystals are interpreted to represent the latest stages in the compositional evolution of feldspars in a peralkaline melt. PO-type crystals contain fewer melt inclusions than other zoning types and are rarely corroded. This thesis concludes that feldspars showing PO-type zoning are the most promising chronometers for the 40Ar/39Ar method if samples provide mixed zoning types of Quaternary anorthoclase feldspars.
Five samples were dated by applying the 40Ar/39Ar method to volcanic glass. High fractions of atmospheric Ar (typically > 98%) significantly hampered the precision of the 40Ar/39Ar ages and resulted in rough age estimates that widely overlap with the age model. Ar isotopes indicated that the glasses bore a chlorine-rich Ar end-member. The chlorine-derived 38Ar indicated chlorine-rich fluid inclusions or hydration of the volcanic glass shards. This strengthened the evidence that irregularly distributed melt inclusions, and thus irregularly distributed excess 40Ar, influenced the problematic feldspar 40Ar/39Ar ages. Whether a connection exists between a corrected initial 40Ar/36Ar ratio from the glasses and the 40Ar/36Ar ratios from pore waters remains unclear.
This thesis offers another age model, which is similarly based on the interpolation of temporal tie points from geophysical and climate-stratigraphic data. The model used a PCHIP interpolation (piecewise cubic Hermite interpolating polynomial), whereas the older age model used a spline interpolation. Samples whose feldspar 40Ar/39Ar ages match the earlier published age model were additionally assigned an age from the PCHIP interpolation. These modelled ages allowed a recalculation of the Alder Creek sanidine mineral standard. The climate-stratigraphic calibration of a 40Ar/39Ar mineral standard proved that the age-versus-depth interpolations from the PALEOVAN drilling cores are accurate, and that the applied chronometers recorded the temporal evolution of Lake Van synchronously.
A petrochemical discrimination of the sampled volcaniclastic material is also given in this thesis. 41 of the 57 sampled volcaniclastic layers indicate Nemrut as their provenance. The criteria that served for the provenance assignment are provided and reviewed critically. Detailed correlations of selected PALEOVAN volcaniclastics to onshore samples that were described in detail by earlier studies are also discussed. The sampled volcaniclastics dominantly have a thickness of < 40 cm and were ejected by small to medium-sized eruptions. Onshore deposits from these types of eruptions are potentially eroded due to the predominant strong winds on the Nemrut and Süphan slopes. An exact correlation with the data presented here is therefore equivocal or not possible at all.
Deviating feldspar 40Ar/39Ar ages can possibly be explained by inherited 40Ar from feldspar xenocrysts contaminating the samples. In order to test this hypothesis, diffusion couples of Ba were investigated in compositionally zoned feldspar crystals. The diffusive behaviour of Ba in feldspar is known, and gradients in the changing concentrations allowed the calculation of the duration of the crystal’s magmatic development since the formation of the zoning interface. Durations were compared with degassing scenarios that model the Ar loss during assimilation and subsequent ejection of the xenocrysts. Diffusive equilibration of the contrasting Ba concentrations is assumed to yield maximum durations, as the gradient could have developed over several growth and heating stages. The modelling does not show any indication of an involvement of inherited 40Ar in any of the deviating samples. However, the analytical set-up represents the lower limit of the required spatial resolution. Therefore, it cannot be excluded that the degassing modelling relies on a significant overestimation of the maximum duration of the magmatic history. Nevertheless, the modelling of xenocryst degassing shows that the irregular incorporation of excess 40Ar from melt and fluid inclusions represents the most critical problem that needs to be overcome in dating volcaniclastic feldspars from the PALEOVAN drill cores. This thesis provides the complete background for generating and presenting 40Ar/39Ar ages that are compared to age data from a climate-stratigraphic model. Deviations are identified statistically and then discussed in order to find explanations from the age model and/or from 40Ar/39Ar geochronology. Most of the PALEOVAN stratigraphy provides several chronometers that have been proven to be synchronous. Lacustrine deposits from Lake Van represent a key archive for reconstructing climate evolution in the eastern Mediterranean and the Near East. The PALEOVAN record offers a climate-stratigraphic age model with remarkable accuracy and resolution.
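The diffusion-chronometry reasoning used here rests on a standard order-of-magnitude relation (textbook diffusion physics, not the specific model of the thesis): a compositional step relaxes over a characteristic length

\[ x \approx \sqrt{D\,t}, \qquad D = D_{0}\,\exp\!\left(-\frac{E_{a}}{RT}\right), \]

so the measured width of a Ba gradient across a zoning interface, combined with the Arrhenius parameters \(D_{0}\) and \(E_{a}\) for Ba diffusion in feldspar at magmatic temperature \(T\), bounds the time \(t\) available for Ar loss from a xenocryst between assimilation and ejection.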
Planar bis(1,2-dithiooxalato)nickelate(II), [Ni(dto)2]2−, reacts in aqueous solutions with lanthanide ions (Ln3+) to form pentanuclear, hetero-bimetallic complexes of the general composition [{Ln(H2O)n}2{Ni(dto)2}3]·xH2O (n = 4 or 5; x = 9–12). The complex [{Ho(H2O)5}2{Ni(dto)2}3]·10H2O, Ho2Ni3, was synthesized and characterized by single-crystal X-ray structure analysis and powder diffraction. The Ho2Ni3 complex crystallizes as monoclinic crystals in the space group P21/c. The channels and cavities appearing in the crystal packing of the complex molecules are occupied by a varying amount of non-coordinated water molecules.
The main objective of this work is to investigate the evolution of massive stars, and the interplay between them and the ionized gas, for a sample of local metal-poor Wolf-Rayet galaxies. Optical integral field spectroscopy was used in combination with multi-wavelength radio data. Combining optical and radio data, we locate Wolf-Rayet stars and supernova remnants across the Wolf-Rayet galaxies to study the spatial correlation between them. This study will shed light on massive star formation and its feedback, and will help us to better understand distant star-forming galaxies.
The potential increase in frequency and magnitude of extreme floods is currently discussed in terms of global warming and the intensification of the hydrological cycle. Profound knowledge of the past natural variability of floods is of utmost importance in order to assess flood risk for the future. Since instrumental flood series cover only the last ~150 years, other approaches to reconstruct historical and pre-historical flood events are needed. Annually laminated (varved) lake sediments are meaningful natural geoarchives because they provide continuous records of environmental changes over more than 10000 years, down to a seasonal resolution. Since lake basins additionally act as natural sediment traps, the riverine sediment supply, which is preserved as detrital event layers in the lake sediments, can be used as a proxy for extreme discharge events. Within my thesis I examined a ~8.50 m long sedimentary record from the pre-Alpine Lake Mondsee (Northeast European Alps), which covers the last 7000 years. This sediment record consists of calcite varves and intercalated detrital layers, which range in thickness from 0.05 to 32 mm. Detrital layer deposition was analysed by a combined method of microfacies analysis via thin sections, Scanning Electron Microscopy (SEM), μX-ray fluorescence (μXRF) scanning and magnetic susceptibility. This approach allows characterizing individual detrital event layers and assigning a corresponding input mechanism and catchment. Based on varve counting and controlled by 14C age dates, the main goals of this thesis are (i) to identify seasonal runoff processes, which lead to significant sediment supply from the catchment into the lake basin, and (ii) to investigate flood frequency under changing climate boundary conditions. This thesis follows a line of different time slices, presenting an integrative approach linking instrumental and historical flood data from Lake Mondsee in order to evaluate the flood record inferred from the Lake Mondsee sediments. The investigation of eleven short cores covering the last 100 years reveals 12 detrital layers. Two types of detrital layers are distinguished by grain size, geochemical composition and distribution pattern within the lake basin. Detrital layers that are enriched in siliciclastic and dolomitic material reveal sediment supply from the Flysch sediments and the Northern Calcareous Alps into the lake basin. These layers are thicker in the northern lake basin (0.1-3.9 mm) and thinner in the southern lake basin (0.05-1.6 mm). Detrital layers that are enriched in dolomitic components, forming graded detrital layers (turbidites), indicate provenance from the Northern Calcareous Alps. These layers are generally thicker (0.65-32 mm) and are solely recorded within the southern lake basin. In comparison with instrumental data, thicker graded layers result from local debris flow events in summer, whereas thin layers are deposited during regional flood events in spring/summer. Extreme summer floods recorded as flood layers are principally caused by cyclonic activity from the Mediterranean Sea, e.g. in July 1954, July 1997 and August 2002. During the last two millennia, Lake Mondsee sediments reveal two significant flood intervals with decadal-scale flood episodes, during the Dark Ages Cold Period (DACP) and during the transition from the Medieval Climate Anomaly (MCA) into the Little Ice Age (LIA), suggesting a link between transitions to climate cooling and summer flood recurrence in the Northeastern Alps. In contrast, intermediate or decreased flood episodes appeared during the MCA and the LIA. This indicates a non-straightforward relationship between temperature and flood recurrence, suggesting higher cyclonic activity during climate transitions in the Northeastern Alps. The 7000-year flood chronology reveals 47 debris flows and 269 floods, with increased flood activity shifting around 3500 and 1500 varve yr BP (varve yr BP = varve years before present, present = AD 1950). This significant increase in flood activity coincides with millennial-scale climate cooling that is reported from major Alpine glacier advances and lower tree lines in the European Alps since about 3300 cal. yr BP (calibrated years before present). Despite relatively low flood occurrence prior to 1500 varve yr BP, floods at Lake Mondsee could also have influenced human life in early Neolithic lake dwellings (5750-4750 cal. yr BP). While the first lake dwellings were constructed on wetlands, the later lake dwellings were built on piles in the water, suggesting an early flood-risk adaptation of humans and/or a general change of the Late Neolithic culture of lake-dwellers for socio-economic reasons. However, a direct relationship between the final abandonment of the lake dwellings and higher flood frequencies is not evidenced.
Data stream processing systems (DSPSs) are a key enabler for integrating continuously generated data, such as sensor measurements, into enterprise applications. DSPSs make it possible to analyze information from data streams continuously, e.g., to monitor manufacturing processes and enable fast reactions to anomalous behavior. Moreover, DSPSs continuously filter, sample, and aggregate incoming streams of data, which reduces the data size, and thus data storage costs.
The growing volumes of generated data have increased the demand for high-performance DSPSs, leading to a higher interest in these systems and to the development of new DSPSs. While having more DSPSs to choose from is favorable for users, as it allows choosing the system that best satisfies their requirements, it also introduces the challenge of identifying the most suitable DSPS for current needs as well as future demands. Solving this challenge is important because replacing a DSPS requires the costly rewriting of applications if no abstraction layer is used for application development. However, quantifying performance differences between DSPSs is a difficult task. Existing benchmarks fail to integrate all core functionalities of DSPSs and lack tool support, which hinders objective result comparisons. Moreover, no current benchmark covers the combination of streaming data with existing structured business data, which is particularly relevant for companies.
This thesis proposes a performance benchmark for enterprise stream processing called ESPBench. With enterprise stream processing, we refer to the combination of streaming and structured business data. Our benchmark design represents real-world scenarios and allows for an objective result comparison as well as scaling of data. The defined benchmark query set covers all core functionalities of DSPSs. The benchmark toolkit automates the entire benchmark process and provides important features, such as query result validation and a configurable data ingestion rate.
To validate ESPBench and to ease the use of the benchmark, we propose an example implementation of the ESPBench queries leveraging the Apache Beam software development kit (SDK). The Apache Beam SDK is an abstraction layer designed for developing stream processing applications and is used in academic as well as enterprise contexts. It allows the defined applications to run on any of the supported DSPSs. The performance impact of Apache Beam is studied in this dissertation as well. The results show that there is a significant influence that differs among DSPSs and stream processing applications. For validating ESPBench, we use the example implementation of the ESPBench queries developed using the Apache Beam SDK. We benchmark the implemented queries executed on three modern DSPSs: Apache Flink, Apache Spark Streaming, and Hazelcast Jet. The results of the study demonstrate the functioning of ESPBench and its toolkit. ESPBench is capable of quantifying performance characteristics of DSPSs and of unveiling differences among systems.
The benchmark proposed in this thesis covers all requirements to be applied in enterprise stream processing settings, and thus represents an improvement over the current state-of-the-art.
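To illustrate the abstraction-layer idea (a minimal sketch using the public Apache Beam Python SDK; it is not taken from the ESPBench query implementation, and the data are invented): the same pipeline definition can be executed by different runners, e.g., the local DirectRunner, a FlinkRunner, or a SparkRunner, without rewriting the application.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# A per-key aggregation expressed once against the Beam model; switching the
# underlying DSPS only means switching the runner in the pipeline options.
options = PipelineOptions(runner="DirectRunner")
with beam.Pipeline(options=options) as p:
    (p
     | "Create" >> beam.Create([("m1", 2.0), ("m2", 3.5), ("m1", 4.0)])
     | "SumPerKey" >> beam.CombinePerKey(sum)
     | "Print" >> beam.Map(print))
```

This portability is exactly what makes a Beam-based reference implementation attractive for a cross-system benchmark, at the cost of the runner-dependent overhead quantified in the dissertation.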
Background: With increasing age, neuromuscular deficits (e.g., sarcopenia) may result in impaired physical performance and an increased risk for falls. Prominent intrinsic fall-risk factors are age-related decreases in balance and strength / power performance as well as cognitive decline. Additional studies are needed to develop specifically tailored exercise programs for older adults that can easily be implemented into clinical practice. Thus, the objective of the present trial is to assess the effects of a fall prevention program, developed by an interdisciplinary expert panel, on measures of balance, strength / power, body composition, cognition, psychosocial well-being, and falls self-efficacy in healthy older adults. Additionally, the time-related effects of detraining are tested.
Methods/Design: Healthy older adults (n = 54) between 65 and 80 years of age will participate in this trial. The testing protocol comprises tests for the assessment of static / dynamic steady-state balance (i.e., Sharpened Romberg Test, instrumented gait analysis), proactive balance (i.e., Functional Reach Test; Timed Up and Go Test), reactive balance (i.e., perturbation test during bipedal stance; Push and Release Test), strength (i.e., hand grip strength test; Chair Stand Test), and power (i.e., Stair Climb Power Test; countermovement jump). Further, body composition will be analysed using a bioelectrical impedance analysis system. In addition, questionnaires for the assessment of psychosocial (i.e., World Health Organisation Quality of Life Assessment-Bref), cognitive (i.e., Mini Mental State Examination), and fall risk determinants (i.e., Falls Efficacy Scale - International) will be included in the study protocol. Participants will be randomized into two intervention groups or the control / waiting group. After baseline measures, participants in the intervention groups will conduct a 12-week balance and strength / power exercise intervention 3 times per week, with each training session lasting 30 min (actual training time). One intervention group will complete an extensive supervised training program, while the other intervention group will complete a short version ('3 times 3') that is home-based and controlled by weekly phone calls. Post-tests will be conducted right after the intervention period. Additionally, detraining effects will be measured 12 weeks after program cessation. The control / waiting group will not participate in any specific intervention during the experimental period, but will receive the extensive supervised program after the experimental period.
Discussion: It is expected that particularly the supervised combination of balance and strength / power training will improve performance in variables of balance, strength / power, body composition, cognitive function, psychosocial well-being, and falls self-efficacy of older adults. In addition, information regarding fall risk assessment, dose-response relations, detraining effects, and supervision of training will be provided. Further, training-induced health-relevant changes, such as improved performance in activities of daily living, cognitive function, and quality of life, as well as a reduced risk for falls, may help to lower costs in the health care system. Finally, practitioners, therapists, and instructors will be provided with a scientifically evaluated, feasible, safe, and easy-to-administer exercise program for fall prevention.
Abiotic stresses cause oxidative damage in plants. Here, we demonstrate that foliar application of an extract from the seaweed Ascophyllum nodosum, SuperFifty (SF), largely prevents paraquat (PQ)-induced oxidative stress in Arabidopsis thaliana. While PQ-stressed plants develop necrotic lesions, plants pre-treated with SF (i.e., primed plants) were unaffected by PQ. Transcriptome analysis revealed induction of reactive oxygen species (ROS) marker genes, genes involved in ROS-induced programmed cell death, and autophagy-related genes after PQ treatment. These changes did not occur in PQ-stressed plants primed with SF. In contrast, upregulation of several carbohydrate metabolism genes, growth, and hormone signaling as well as antioxidant-related genes were specific to SF-primed plants. Metabolomic analyses revealed accumulation of the stress-protective metabolite maltose and the tricarboxylic acid cycle intermediates fumarate and malate in SF-primed plants. Lipidome analysis indicated that those lipids associated with oxidative stress-induced cell death and chloroplast degradation, such as triacylglycerols (TAGs), declined upon SF priming. Our study demonstrated that SF confers tolerance to PQ-induced oxidative stress in A. thaliana, an effect achieved by modulating a range of processes at the transcriptomic, metabolic, and lipid levels.
On 6 June 1982, Israel invaded Lebanon to fight the Palestine Liberation Organization (PLO). Between August 1982 and February 1984, the US, France, Britain and Italy deployed a Multinational Force (MNF) to Beirut. Its task was to act as an interposition force to bolster the government and to bring peace to the people. The mission is often forgotten or merely remembered in connection with the bombing of the US Marines’ barracks. However, an analysis of the Italian contingent shows that the MNF was not doomed to fail and could accomplish its task when operational and diplomatic efforts were coordinated. The Italian commander in Beirut, General Franco Angioni, followed a successful approach that sustained neutrality, respectful behaviour and minimal force, which resulted in a qualified success of the Italian efforts.
Abdominal and general adiposity are independently associated with mortality, but there is no consensus on how best to assess abdominal adiposity. We compared the ability of alternative waist indices to complement body mass index (BMI) when assessing all-cause mortality. We used data from 352,985 participants in the European Prospective Investigation into Cancer and Nutrition (EPIC) and Cox proportional hazards models adjusted for other risk factors. During a mean follow-up of 16.1 years, 38,178 participants died. Combining in one model BMI and a strongly correlated waist index altered the association patterns with mortality, towards a predominantly negative association for BMI and a stronger positive association for the waist index, while combining BMI with the uncorrelated A Body Shape Index (ABSI) preserved the association patterns. Sex-specific cohort-wide quartiles of waist indices correlated with BMI could not separate high-risk from low-risk individuals within the underweight (BMI < 18.5 kg/m²) or obese (BMI ≥ 30 kg/m²) categories, while the highest quartile of ABSI separated 18-39% of the individuals within each BMI category, who had a 22-55% higher risk of death. In conclusion, only a waist index independent of BMI by design, such as ABSI, complements BMI and enables efficient risk stratification, which could facilitate personalisation of screening, treatment and monitoring.
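For readers unfamiliar with the index: ABSI is constructed to be approximately uncorrelated with BMI by design. Using its published definition (Krakauer & Krakauer 2012),

\[ \mathrm{ABSI} = \frac{\mathrm{WC}}{\mathrm{BMI}^{2/3}\,\mathrm{height}^{1/2}}, \]

with waist circumference WC and height in metres and BMI in kg/m². The exponents normalize waist circumference for the body size already captured by BMI and height, which is why combining ABSI with BMI preserves the association patterns described above.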
In 1914, Bohr proved that there is an r ∈ (0, 1) such that if a power series converges in the unit disk and its sum has modulus less than 1, then, for |z| < r, the sum of the absolute values of its terms is again less than 1. Recently, analogous results were obtained for functions of several variables. The aim of this paper is to comprehend the theorem of Bohr in the context of solutions to second-order elliptic equations satisfying the maximum principle.
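In its classical one-variable form (a known result, stated here for context): if \(f(z)=\sum_{n\ge 0} a_n z^n\) is analytic in the unit disk with \(|f(z)|<1\), then

\[ \sum_{n=0}^{\infty} |a_n|\,r^{n} \le 1 \qquad \text{for } 0 \le r \le \tfrac{1}{3}, \]

and the Bohr radius \(r=1/3\) is best possible. The question addressed in the paper is to what extent such an inequality survives when analytic functions are replaced by solutions of second-order elliptic equations satisfying the maximum principle.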
The paper is devoted to pseudodifferential boundary value problems in domains with singular points on the boundary. The tangent cone at a singular point is allowed to degenerate. In particular, the boundary may rotate and oscillate in a neighbourhood of such a point. We show a criterion for the Fredholm property of a boundary value problem and derive estimates of solutions close to singular points.
A Case for Serious Play
(2017)
The present study approaches the Spanish postposed constructions creo Ø and creo yo '[p], [I] think' from a cognitive-constructionist perspective. It is argued that the two constructions are to be distinguished from one another because creo Ø has a subjective function, while in creo yo the intersubjective dimension is particularly prominent. The investigation takes both a qualitative and a quantitative perspective; with regard to the latter, the problem of quantitative representativeness is addressed. The discussion raises the question of how empirical research can feed back into theory, more precisely into the framework of Cognitive Construction Grammar. The data analyzed here are retrieved from the corpora Corpus de Referencia del Español Actual and Corpus del Español.
The aim of this doctoral thesis was to establish a technique for the analysis of biomolecules with infrared matrix-assisted laser desorption/ionization (IR-MALDI) ion mobility (IM) spectrometry. The main components of the work were the characterization of the IR-MALDI process, the development and characterization of different ion mobility spectrometers, the use of IR-MALDI-IM spectrometry as a robust, standalone analytical method, and the development of a collision cross-section estimation approach for peptides based on molecular dynamics and thermodynamic reweighting.
First, the IR-MALDI source was studied with atmospheric pressure ion mobility spectrometry and shadowgraphy. It consisted of a metal capillary, at the tip of which a self-renewing droplet of analyte solution was met by an IR laser beam. A relationship between peak shape, ion desolvation, diffusion and extraction pulse delay time (pulse delay) was established. First order desolvation kinetics were observed and related to peak broadening by diffusion, both influenced by the pulse delay. The transport mechanisms in IR-MALDI were then studied by relating different laser impact positions on the droplet surface to the corresponding ion mobility spectra. Two different transport mechanisms were determined: phase explosion due to the laser pulse and electrical transport due to delayed ion extraction. The velocity of the ions stemming from the phase explosion was then measured by ion mobility and shadowgraphy at different time scales and distances from the source capillary, showing an initially very high but rapidly decaying velocity. Finally, the anatomy of the dispersion plume was observed in detail with shadowgraphy and general conclusions over the process were drawn.
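As a hedged aside (the thesis' exact rate expressions are not reproduced in the abstract above), first-order desolvation kinetics and diffusive peak broadening of the kind referred to here take the generic forms

N(t) = N_0\,e^{-kt}, \qquad \sigma^2(t) = \sigma_0^2 + 2Dt,

so a longer pulse delay t allows more complete desolvation at the cost of broader, diffusion-limited peaks.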
Understanding the IR-MALDI process enabled the optimization of the different IM spectrometers at atmospheric and reduced pressure (AP and RP, respectively). At reduced pressure, both an AP and an RP IR-MALDI source were used. The influence of the pulsed ion extraction parameters (pulse delay, width and amplitude) on peak shape, resolution and area was systematically studied in both AP and RP IM spectrometers and discussed in the context of the IR-MALDI process. Under RP conditions, the influence of the closing field and of the pressure was also examined for both AP and RP sources. For the AP ionization RP IM spectrometer, the influence of the inlet field (IF) in the source region was also examined. All of these studies led to the determination of the optimal analytical parameters as well as to a better understanding of the initial ion cloud anatomy.
The analytical performance of the spectrometer was then studied. Limits of detection (LOD) and linear ranges were determined under static and pulsed ion injection conditions and interpreted in the context of the IR-MALDI mechanism. Applications in the separation of simple mixtures were also illustrated, demonstrating good isomer separation capabilities and the advantages of singly charged peaks. The possibility to couple high performance liquid chromatography (HPLC) to IR-MALDI-IM spectrometry was also demonstrated. Finally, the reduced pressure spectrometer was used to study the effect of high reduced field strength on the mobility of polyatomic ions in polyatomic gases.
The last focus point was on the study of peptide ions. A dataset obtained with electrospray IM spectrometry was characterized and used for the calibration of a collision cross-section (CCS) determination method based on molecular dynamics (MD) simulations at high temperature. Instead of producing candidate structures which are evaluated one by one, this semi-automated method uses the simulation as a whole to determine a single average collision cross-section value by reweighting the CCS of a few representative structures. The method was compared to the intrinsic size parameter (ISP) method and to experimental results. Additional MD data obtained from the simulations was also used to further analyze the peptides and understand the experimental results, an advantage with regard to the ISP method. Finally, the CCS of peptide ions analyzed by IR-MALDI were also evaluated with both ISP and MD methods and the results compared to experiment, resulting in a first validation of the MD method. Thus, this thesis brings together the soft ionization technique that is IR-MALDI, which produces mostly singly charged peaks, with ion mobility spectrometry, which can distinguish between isomers, and a collision cross-section determination method which also provides structural information on the analyte at hand.
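A minimal sketch, in Python, of the thermodynamic reweighting idea behind such a CCS estimate (the force field, structure selection and temperatures used in the thesis are not given in the abstract; all numbers and the 800 K/300 K pair below are illustrative assumptions):

import numpy as np

# Hypothetical energies (kJ/mol) and CCS values (Å²) of a few
# representative structures extracted from a high-temperature MD run.
energies = np.array([120.0, 124.5, 131.2])
ccs_values = np.array([242.0, 251.3, 265.8])

R = 8.314e-3                     # gas constant in kJ/(mol·K)
T_sim, T_target = 800.0, 300.0   # simulation vs. target temperature (assumed)

# Reweight structures sampled at T_sim so that the ensemble average
# reflects the Boltzmann distribution at T_target.
log_w = -energies * (1.0 / (R * T_target) - 1.0 / (R * T_sim))
w = np.exp(log_w - log_w.max())  # subtract max for numerical stability
w /= w.sum()

ccs_avg = float(np.dot(w, ccs_values))  # single average CCS value
print(f"Reweighted average CCS: {ccs_avg:.1f} Å²")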
Downscaling of microfluidic cell culture and detection devices for electrochemical monitoring has mostly focused on miniaturization of the microfluidic chips, which are often designed for specific applications and therefore lack functional flexibility. We present a compact microfluidic cell culture and electrochemical analysis platform with in-built fluid handling and detection, enabling complete cell-based assays comprising on-line electrode cleaning, sterilization, surface functionalization, cell seeding, cultivation and electrochemical real-time monitoring of cellular dynamics. To demonstrate the versatility and multifunctionality of the platform, we explored amperometric monitoring of intracellular redox activity in yeast (Saccharomyces cerevisiae) and detection of exocytotically released dopamine from rat pheochromocytoma cells (PC12). Electrochemical impedance spectroscopy was used in both applications for monitoring cell sedimentation and adhesion, as well as proliferation in the case of PC12 cells. The influence of flow rate on the signal amplitude in the detection of redox metabolism, as well as the effect of mechanical stimulation on dopamine release, were demonstrated using the programmable fluid handling capability. The platform presented here is aimed at applications utilizing cell-based assays, ranging from monitoring of drug effects in pharmacological studies, characterization of neural stem cell differentiation, and screening of genetically modified microorganisms to environmental monitoring.
In this paper, two groups supporting different views on the mechanism of light-induced polymer deformation argue about the respective underlying theoretical conceptions, in order to bring this interesting debate to the attention of the scientific community. The group of Prof. Nicolae Hurduc supports the model claiming that the cyclic isomerization of azobenzenes may cause an athermal transition of the glassy azobenzene-containing polymer into a fluid state, the so-called photo-fluidization concept. This concept is quite convenient for an intuitive understanding of the deformation process as an anisotropic flow of the polymer material. The group of Prof. Svetlana Santer supports the re-orientational model, in which the mass transport of the polymer material during deformation is generated by the light-induced re-orientation of the azobenzene side chains and, as a consequence, of the polymer backbone, which in turn results in local mechanical stress sufficient to irreversibly deform an azobenzene-containing material even in the glassy state. For the debate we chose three polymers differing in glass transition temperature (32 °C, 87 °C and 95 °C), representing extreme cases of flexible and rigid materials. Polymer film deformation occurring during irradiation with different interference patterns is recorded using a homemade set-up combining an optical part for the generation of interference patterns with an atomic force microscope for acquiring the kinetics of film deformation. We also demonstrate the unique ability of azobenzene-containing polymer films to switch their topography in situ and reversibly by changing the irradiation conditions. We discuss the results of reversible deformation of the three polymers induced by irradiation with intensity (IIP) and polarization (PIP) interference patterns, and with light of homogeneous intensity, in terms of two approaches: the re-orientational and the photo-fluidization concepts. Both agree that the formation of opto-mechanically induced stresses is a necessary prerequisite for the deformation process; on this basis, the deformation can be characterized either as a flow or as mass transport.
Flooding is assessed as the most important natural hazard in Europe, causing thousands of deaths, affecting millions of people and accounting for large economic losses in the past decade. Little is known about the damage processes associated with extreme rainfall in cities, due to a lack of accurate, comparable and consistent damage data. The objective of this study is to investigate the impacts of extreme rainfall on residential buildings and how affected households coped with these impacts in terms of precautionary and emergency actions. Analyses are based on a unique dataset of damage characteristics and a wide range of potential damage-explaining variables at the household level, collected through computer-aided telephone interviews (CATI) and an online survey. Exploratory data analyses based on a total of 859 completed questionnaires in the cities of Münster (Germany) and Amsterdam (the Netherlands) revealed that the uptake of emergency measures is related to characteristics of the hazardous event. In the case of high water levels, more efforts are made to reduce damage, while emergency response aiming to prevent damage is less likely to be effective. The difference in magnitude of the events in Münster and Amsterdam, in terms of rainfall intensity and water depth, is probably also the most important cause of the differences between the cities in terms of the financial losses suffered. Factors that significantly contributed to damage in at least one of the case studies are water contamination, the presence of a basement in the building and people's awareness of the upcoming event. Moreover, this study confirms conclusions of previous studies that people's experience with damaging events positively correlates with precautionary behaviour. For improving future damage data acquisition, we recommend the inclusion of cell phones in CATI surveys to avoid sampling biased towards certain age groups.
A comparison of current trends within computer science teaching in school in Germany and the UK
(2013)
In the last two years, CS as a school subject has gained a lot of attention worldwide, although different countries have differing approaches to, and experiences of, introducing CS in schools. This paper reports on a study comparing current trends in CS at school, with a major focus on two countries, Germany and the UK. A survey of teaching professionals and experts from the UK and Germany was carried out with regard to the content and delivery of CS in school. An analysis of the quantitative data reveals a difference in foci between the two countries; putting this into the context of curricular developments, we are able to offer interpretations of these trends and suggest ways in which school CS curricula should be moving forward.
Situated in an active tectonic region, Santiago de Chile, the country's capital with more than six million inhabitants, faces a tremendous earthquake hazard. Macroseismic data for the 1985 Valparaiso and the 2010 Maule events show large variations in the distribution of damage to buildings within short distances, indicating a strong influence of the local sediments and of the shape of the sediment-bedrock interface on ground motion. Therefore, a temporary seismic network was installed in the urban area for recording earthquake activity, and a study was carried out aiming to estimate site amplification derived from earthquake data and ambient noise. The analysis of earthquake data shows significant dependence on the local geological structure with regard to amplitude and duration. Moreover, the analysis of noise spectral ratios shows that they can provide a lower bound in amplitude for site amplification and, since no variability in terms of time and amplitude is observed, that it is possible to map the fundamental resonance frequency of the soil for a 26 km x 12 km area in the northern part of the Santiago de Chile basin. By inverting the noise spectral ratios, local shear wave velocity profiles could be derived under the constraint of the thickness of the sedimentary cover, which had previously been determined by gravimetric measurements. The resulting 3D model was derived by interpolation between the single shear wave velocity profiles; it shows locally good agreement with the few existing velocity profile data, but allows the entire area, as well as deeper parts of the basin, to be represented in greater detail. The wealth of available data further allowed checking whether any correlation between the shear wave velocity in the uppermost 30 m (vs30) and the slope of topography, a technique recently proposed by Wald and Allen (2007), exists on a local scale. While one lithology might produce considerable scatter in the velocity values for the investigated area, almost no correlation between topographic gradient and calculated vs30 exists, whereas a better link is found between vs30 and the local geology. When comparing the vs30 distribution with the MSK intensities for the 1985 Valparaiso event, it becomes clear that high intensities are found where the expected vs30 values are low and the sedimentary cover is thick. Although this evidence cannot be generalized for all possible earthquakes, it indicates the influence of site effects modifying the ground motion when earthquakes occur well outside the Santiago basin. Using the attained knowledge on the basin characteristics, simulations of strong ground motion within the Santiago Metropolitan area were carried out by means of the spectral element technique. The simulation of a regional event, which had also been recorded by a dense network installed in the city of Santiago for recording aftershock activity following the 27 February 2010 Maule earthquake, shows that the model is capable of realistically reproducing ground motion in terms of amplitude, duration, and frequency and, moreover, that the surface topography and the shape of the sediment-bedrock interface strongly modify ground motion in the Santiago basin.
An examination of the dependence of ground motion on hypocenter location for a hypothetical event occurring along the active San Ramón fault, which crosses the eastern outskirts of the city, shows that the unfavorable interaction between fault rupture, radiation mechanism, and complex geological conditions in the near-field may give rise to large values of peak ground velocity and therefore considerably increase the level of seismic risk for Santiago de Chile.
Introduction
Varus knee alignment has been identified as a risk factor for the progression of medial knee osteoarthritis. However, the underlying mechanisms have not yet been elucidated in children. Thus, the aims of the present study were to examine differences in ground reaction forces, loading rates, impulses, and free moment values during running in children with and without genu varus.
Methods
Thirty-six boys aged 9–14 volunteered to participate in this study. They were divided into two age-matched groups (genu varus versus healthy controls). Body-weight-adjusted three-dimensional kinetic data (Fx, Fy, Fz) were collected during running at preferred speed using two Kistler force plates for the dominant and non-dominant limbs.
Results
Individuals with knee genu varus produced significantly higher (p = .01; d = 1.09; 95%) body-weight-adjusted ground reaction forces in the lateral direction (Fx) of the dominant limb compared to controls. On the non-dominant limb, genu varus patients showed significantly higher body-weight-adjusted ground reaction force values in the lateral (p = .01; d = 1.08; 86%) and medial (p < .001; d = 1.55; 102%) directions (Fx). Further, genu varus patients demonstrated 55% and 36% greater body-weight-adjusted loading rates in the dominant (p < .001; d = 2.09) and non-dominant (p < .001; d = 1.02) leg, respectively. No significant between-group differences were observed for adjusted free moment values (p > .05).
Discussion
Higher mediolateral ground reaction forces and vertical loading rate amplitudes in boys with genu varus during running at preferred speed may accelerate the development of progressive joint degeneration in terms of the age at knee osteoarthritis onset. Therefore, practitioners and therapists are advised to conduct balance and strength training programs to improve lower limb alignment and mediolateral control during dynamic movements.
Accurate time series representation of paleoclimatic proxy records is challenging because such records involve dating errors in addition to proxy measurement errors. Rigorous attention is rarely given to age uncertainties in paleoclimatic research, although the latter can severely bias the results of proxy record analysis. Here, we introduce a Bayesian approach to represent layer-counted proxy records - such as ice cores, sediments, corals, or tree rings - as sequences of probability distributions on absolute, error-free time axes. The method accounts for both proxy measurement errors and uncertainties arising from layer-counting-based dating of the records. An application to oxygen isotope ratios from the North Greenland Ice Core Project (NGRIP) record reveals that the counting errors, although seemingly small, lead to substantial uncertainties in the final representation of the oxygen isotope ratios. In particular, for the older parts of the NGRIP record, our results show that the total uncertainty originating from dating errors has been seriously underestimated. Our method is next applied to deriving the overall uncertainties of the Suigetsu radiocarbon comparison curve, which was recently obtained from varved sediment cores at Lake Suigetsu, Japan. This curve provides the only terrestrial radiocarbon comparison for the time interval 12.5-52.8 kyr BP. The uncertainties derived here can be readily employed to obtain complete error estimates for arbitrary radiometrically dated proxy records of this recent part of the last glacial interval.
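A toy Monte Carlo illustration, in Python, of why seemingly small counting errors matter (this is not the Bayesian machinery of the paper itself; the per-layer error rates below are made-up assumptions):

import numpy as np

rng = np.random.default_rng(0)

n_layers = 5000                  # true number of annual layers (illustrative)
p_miss, p_double = 0.01, 0.01    # assumed rates of missed/doubly counted layers

# Each true layer contributes 0 (missed), 1, or 2 (doubly counted) layers
# to the count, so the counted age drifts away from the true age.
counts = rng.choice([0, 1, 2], size=(1000, n_layers),
                    p=[p_miss, 1.0 - p_miss - p_double, p_double])
counted_age = counts.sum(axis=1)

print("true age:", n_layers)
print("counted age: mean %.0f, std %.1f" %
      (counted_age.mean(), counted_age.std()))
# The spread grows with record length: tiny per-layer errors accumulate
# into substantial absolute-age uncertainty in the older parts of a record.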
Companies develop process models to explicitly describe their business operations. At the same time, these operations, i.e. business processes, must adhere to various types of compliance requirements. Regulations, e.g. the Sarbanes-Oxley Act of 2002, internal policies, and best practices are just a few sources of compliance requirements. In some cases, non-adherence to compliance requirements makes the organization subject to legal punishment; in other cases, it leads to loss of competitive advantage and thus loss of market share. Unlike the classical, domain-independent behavioral correctness of business processes, compliance requirements are domain-specific. Moreover, compliance requirements change over time: new requirements might appear due to changes in laws and the adoption of new policies. Compliance requirements are issued or enforced by different entities that have different objectives behind these requirements. Finally, compliance requirements might affect different aspects of business processes, e.g. control flow and data flow. As a result, it is infeasible to hard-code compliance checks in tools. Rather, a repeatable process of modeling compliance rules and checking them against business processes automatically is needed. This thesis provides a formal approach to support design-time compliance checking of processes. Using visual patterns, it is possible to model compliance requirements concerning control flow, data flow and conditional flow rules; each pattern is mapped into a temporal logic formula. The thesis addresses the problem of consistency checking among various compliance requirements, as they might stem from divergent sources. The thesis also contributes an automated check of compliance requirements against process models using model checking. We show that extra domain knowledge, beyond what is expressed in compliance rules, is needed to reach correct decisions. In case of violations, we are able to provide useful feedback to the user, in the form of the parts of the process model whose execution causes the violation. In some cases, our approach is capable of providing an automated remedy of the violation.
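For orientation, a typical control-flow rule of the kind such visual patterns capture is a "leads to" requirement; an illustrative mapping into linear temporal logic (the concrete rule is an assumption for exposition, not one from the thesis) would be

\mathbf{G}\,(\mathit{execute\_payment} \rightarrow \mathbf{F}\,\mathit{archive\_record}),

read as: globally, whenever a payment is executed, a corresponding record must eventually be archived. A model checker can verify such a formula against the state space of a process model and, on failure, return a counterexample path, which is the kind of violation feedback described above.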
Over the past decades, natural hazards, many of which are aggravated by climate change and reveal an increasing trend in frequency and intensity, have caused significant human and economic losses and pose a considerable obstacle to sustainable development. Hence, dedicated action toward disaster risk reduction is needed to understand the underlying drivers and create efficient risk mitigation plans. Such action is requested by the Sendai Framework for Disaster Risk Reduction 2015-2030 (SFDRR), a global agreement launched in 2015 that establishes priorities for action, e.g. an improved understanding of disaster risk. Turkey is one of the SFDRR contracting countries and has been severely affected by many natural hazards, in particular earthquakes and floods. However, disproportionately little is known about flood hazards and risks in Turkey. Therefore, this thesis aims to carry out, for the first time for Turkey, a comprehensive analysis of flood hazards from triggering drivers to impacts. It is intended to contribute to a better understanding of flood risks, to improvements in flood risk mitigation, and to facilitated monitoring of progress and achievements in implementing the SFDRR.
In order to investigate the occurrence and severity of flooding in comparison to other natural hazards in Turkey and to provide an overview of the temporal and spatial distribution of flood losses, the Turkey Disaster Database (TABB) was examined for the years 1960-2014. The TABB database was reviewed through comparison with the Emergency Events Database (EM-DAT), the Dartmouth Flood Observatory database, the scientific literature and news archives. In addition, data on the most severe flood events between 1960 and 2014 were retrieved. These served as a basis for analyzing triggering mechanisms (i.e. atmospheric circulation and precipitation amounts) and aggravating pathways (i.e. topographic features, catchment size, land use types and soil properties). For this, a new approach was developed and the events were classified using hierarchical cluster analyses to identify the main influencing factor per event and provide additional information about the dominant flood pathways for severe floods. The main idea of the study was to start from the event impacts in a bottom-up approach and identify the causes that created damaging events, instead of applying a model chain with long-term series as input and searching for potentially impacting events as model outcomes. However, the frequency analysis of the flood-triggering circulation pattern types revealed that some events with heavy precipitation were not included in the list of most severe floods, i.e. their impacts were not recorded in national and international loss databases but were mentioned in news archives and reported by the Turkish State Meteorological Service. This finding challenges bottom-up modelling approaches and underlines the urgent need for consistent event and loss documentation. Therefore, as a next step, the aim was to enhance the flood loss documentation by calibrating, validating and applying the United Nations Office for Disaster Risk Reduction (UNDRR) loss estimation method to the recent severe flood events (2015-2020). This provided a consistent flood loss estimation model for Turkey, allowing governments to estimate losses as quickly as possible after events, e.g. to better coordinate financial aid.
This thesis reveals that, after earthquakes, floods have the second most destructive effects in Turkey in terms of human and economic impacts, with over 800 fatalities and US$ 885.7 million in economic losses between 1960 and 2020, and that they deserve more attention on the national scale. The clustering results for the dominant flood-producing mechanisms (e.g. circulation pattern types, extreme rainfall, sudden snowmelt) provide crucial information regarding source and pathway identification, which can be used as base information for hazard identification in the preliminary risk assessment process. The implementation of the UNDRR loss estimation model shows that the model, with country-specific parameters, calibrated damage ratios and sufficient event documentation (i.e. physically damaged units), can be recommended for providing first estimates of the magnitude of direct economic losses, even shortly after events have occurred, since it performed well when estimates were compared to documented losses.
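A minimal sketch, in Python, of a damage-ratio loss estimate in the spirit of the method described above (the sector names, unit costs and damage ratios are illustrative assumptions, not the calibrated values from the thesis):

# Direct economic loss estimated from physically damaged units,
# a replacement cost per unit, and a calibrated damage ratio.
# All numbers below are made up for illustration.
damaged_units = {"housing": 120, "roads_km": 14.0, "cropland_ha": 350.0}
unit_cost = {"housing": 45_000.0, "roads_km": 250_000.0, "cropland_ha": 1_200.0}
damage_ratio = {"housing": 0.35, "roads_km": 0.20, "cropland_ha": 0.60}

loss = sum(damaged_units[s] * unit_cost[s] * damage_ratio[s]
           for s in damaged_units)
print(f"Estimated direct economic loss: US$ {loss:,.0f}")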
The presented results can contribute to improving the national disaster loss database in Turkey and thus enable a better monitoring of the national progress and achievements with regard to the targets stated by the SFDRR. In addition, the outcomes can be used to better characterize and classify flood events. Information on the main underlying factors and aggravating flood pathways further supports the selection of suitable risk reduction policies.
All input variables used in this thesis were obtained from publicly available data. The results are openly accessible and can be used for further research.
As an overall conclusion, it can be stated that consistent loss data collection and better event documentation should gain more attention for a reliable monitoring of the implementation of the SFDRR. Better event documentation should be established according to a globally accepted standard for disaster classification and loss estimation in Turkey. Ultimately, this enables stakeholders to create better risk mitigation actions based on clear hazard definitions, flood event classification and consistent loss estimations.
Successful sentence comprehension requires the comprehender to correctly figure out who did what to whom. For example, in the sentence John kicked the ball, the comprehender has to figure out who did the kicking and what was kicked. This process of identifying and connecting the syntactically related words in a sentence is called dependency completion. What are the cognitive constraints that determine dependency completion? A widely accepted theory is cue-based retrieval. The theory maintains that dependency completion is driven by a content-addressable search for the co-dependents in memory. Cue-based retrieval explains a wide range of empirical data from several constructions, including subject-verb agreement, subject-verb non-agreement, plausibility mismatch configurations, and negative polarity items.
However, there are two major empirical challenges to the theory: (i) Grammatical sentences’ data from subject-verb number agreement dependencies, where the theory predicts a slowdown at the verb in sentences like the key to the cabinet was rusty compared to the key to the cabinets was rusty, but the data are inconsistent with this prediction; and, (ii) Data from antecedent-reflexive dependencies, where a facilitation in reading times is predicted at the reflexive in the bodybuilder who worked with the trainers injured themselves vs. the bodybuilder who worked with the trainer injured themselves, but the data do not show a facilitatory effect.
The work presented in this dissertation is dedicated to building a more general theory of dependency completion that can account for the above two datasets without losing the original empirical coverage of the cue-based retrieval assumption. In two journal articles, I present computational modeling work that addresses the above two empirical challenges.
To explain the grammatical sentences’ data from subject-verb number agreement dependencies, I propose a new model that assumes that the cue-based retrieval operates on a probabilistically distorted representation of nouns in memory (Article I). This hybrid distortion-plus-retrieval model was compared against the existing candidate models using data from 17 studies on subject-verb number agreement in 4 languages. I find that the hybrid model outperforms the existing models of number agreement processing suggesting that the cue-based retrieval theory must incorporate a feature distortion assumption.
To account for the absence of facilitatory effect in antecedent-reflexive dependencies, I propose an individual difference model, which was built within the cue-based retrieval framework (Article II). The model assumes that individuals may differ in how strongly they weigh a syntactic cue over a number cue. The model was fitted to data from two studies on antecedent-reflexive dependencies, and the participant-level cue-weighting was estimated. We find that one-fourth of the participants, in both studies, weigh the syntactic cue higher than the number cue in processing reflexive dependencies and the remaining participants weigh the two cues equally. The result indicates that the absence of predicted facilitatory effect at the level of grouped data is driven by some, not all, participants who weigh syntactic cues higher than the number cue. More generally, the result demonstrates that the assumption of differential cue weighting is important for a theory of dependency completion processes. This differential cue weighting idea was independently supported by a modeling study on subject-verb non-agreement dependencies (Article III).
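A minimal sketch, in Python, of the differential cue-weighting idea (a simplified match score rather than the fitted model of Article II; the feature coding and weights are illustrative assumptions):

# Candidate antecedents for the reflexive, coded on two retrieval cues:
# a syntactic cue (is it the local subject?) and a number cue (does its
# number match the reflexive?). Values are illustrative.
candidates = {
    "bodybuilder": {"syntactic": 1.0, "number": 0.0},  # subject, singular
    "trainers":    {"syntactic": 0.0, "number": 1.0},  # non-subject, plural
}

def match_score(features, w_syn, w_num):
    # Weighted sum of cue matches; the highest-scoring item is retrieved.
    return w_syn * features["syntactic"] + w_num * features["number"]

# With (0.8, 0.2) the syntactic cue dominates and the subject wins outright;
# with equal weights (0.5, 0.5) the candidates tie, so the number-matching
# competitor can intrude on a share of trials.
for w_syn, w_num in [(0.8, 0.2), (0.5, 0.5)]:
    scores = {c: match_score(f, w_syn, w_num) for c, f in candidates.items()}
    print(f"w_syn={w_syn}, w_num={w_num}: {scores}")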
Overall, the cue-based retrieval, which is a general theory of dependency completion, needs to incorporate two new assumptions: (i) the nouns stored in memory can undergo probabilistic feature distortion, and (ii) the linguistic cues used for retrieval can be weighted differentially. This is the cumulative result of the modeling work presented in this dissertation.
The dissertation makes an important theoretical contribution: Sentence comprehension in humans is driven by a mechanism that assumes cue-based retrieval, probabilistic feature distortion, and differential cue weighting. This insight is theoretically important because there is some independent support for these three assumptions in sentence processing and the broader memory literature. The modeling work presented here is also methodologically important because for the first time, it demonstrates (i) how the complex models of sentence processing can be evaluated using data from multiple studies simultaneously, without oversimplifying the models, and (ii) how the inferences drawn from the individual-level behavior can be used in theory development.
The Net Reclassification Improvement (NRI) has become a popular metric for evaluating improvement in disease prediction models in recent years. The concept is relatively straightforward, but its usage and interpretation have differed across studies. While no thresholds exist for evaluating the degree of improvement, many studies have relied solely on the significance of the NRI estimate. However, recent studies recommend that statistical testing with the NRI should be avoided. We propose using confidence ellipses around the estimated values of the event and non-event NRIs, which might provide the best measure of variability around the point estimates. Our developments are illustrated using practical examples from the EPIC-Potsdam study.
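For orientation, the event and non-event components around which such confidence ellipses are drawn are the standard ones:

\mathrm{NRI}_{e} = P(\mathrm{up} \mid \mathrm{event}) - P(\mathrm{down} \mid \mathrm{event}), \qquad \mathrm{NRI}_{ne} = P(\mathrm{down} \mid \mathrm{nonevent}) - P(\mathrm{up} \mid \mathrm{nonevent}),

where "up" and "down" denote reclassification into a higher or lower risk category under the new model; the overall NRI is their sum, \mathrm{NRI} = \mathrm{NRI}_{e} + \mathrm{NRI}_{ne}.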
A Conjunction of Mysteries
(2016)