Quantum theory (QT) is usually formulated in terms of abstract mathematical postulates involving Hilbert spaces, state vectors, and unitary operators. In this paper, we show that the full formalism of QT can instead be derived from five simple physical requirements, based on elementary assumptions regarding preparations, transformations, and measurements. This is very similar to the usual formulation of special relativity, where two simple physical requirements (the principles of relativity and light-speed invariance) are used to derive the mathematical structure of Minkowski space-time. Our derivation provides insights into the physical origin of the structure of quantum state spaces (including a group-theoretic explanation of the Bloch ball and its three-dimensionality) and suggests several natural possibilities for constructing consistent modifications of QT.
With the emergence of the Internet of things (IoT), plenty of battery-powered and energy-harvesting devices are being deployed to fulfill sensing and actuation tasks in a variety of application areas, such as smart homes, precision agriculture, smart cities, and industrial automation. In this context, a critical issue is that of denial-of-sleep attacks. Such attacks temporarily or permanently deprive battery-powered, energy-harvesting, or otherwise energy-constrained devices of entering energy-saving sleep modes, thereby draining their charge. At the very least, a successful denial-of-sleep attack causes a long outage of the victim device. Moreover, to put battery-powered devices back into operation, their batteries have to be replaced. This is tedious and may even be infeasible, e.g., if a battery-powered device is deployed at an inaccessible location. While the research community has come up with numerous defenses against denial-of-sleep attacks, most present-day IoT protocols include no denial-of-sleep defenses at all, presumably due to a lack of awareness and unsolved integration problems. Furthermore, although many denial-of-sleep defenses exist, effective defenses against certain kinds of denial-of-sleep attacks have yet to be found.
The overall contribution of this dissertation is to propose a denial-of-sleep-resilient medium access control (MAC) layer for IoT devices that communicate over IEEE 802.15.4 links. Internally, our MAC layer comprises two main components. The first main component is a denial-of-sleep-resilient protocol for establishing session keys among neighboring IEEE 802.15.4 nodes. The established session keys serve the dual purpose of implementing (i) basic wireless security and (ii) complementary denial-of-sleep defenses that belong to the second main component. The second main component is a denial-of-sleep-resilient MAC protocol. Notably, this MAC protocol not only incorporates novel denial-of-sleep defenses, but also state-of-the-art mechanisms for achieving low energy consumption, high throughput, and high delivery ratios. Altogether, our MAC layer resists, or at least greatly mitigates, all denial-of-sleep attacks against it we are aware of. Furthermore, our MAC layer is self-contained and thus can act as a drop-in replacement for IEEE 802.15.4-compliant MAC layers. In fact, we implemented our MAC layer in the Contiki-NG operating system, where it seamlessly integrates into an existing protocol stack.
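The session-key establishment mentioned above can be illustrated with a minimal sketch. The derivation pattern below (an HMAC over sorted node identifiers and nonces, so both neighbors compute the same key) is a common textbook construction and only a stand-in for the dissertation's actual protocol, whose message exchange additionally embeds denial-of-sleep countermeasures.

```python
import hashlib
import hmac
import os

def derive_session_key(network_key: bytes, id_a: bytes, id_b: bytes,
                       nonce_a: bytes, nonce_b: bytes) -> bytes:
    """Derive a pairwise session key from a pre-shared key and fresh nonces.

    Sorting the (identifier, nonce) pairs makes the derivation symmetric,
    so both neighbors arrive at the same key regardless of argument order.
    """
    first, second = sorted([(id_a, nonce_a), (id_b, nonce_b)])
    material = first[0] + second[0] + first[1] + second[1]
    return hmac.new(network_key, material, hashlib.sha256).digest()

# Both ends of a link derive the key independently from exchanged nonces.
network_key = os.urandom(16)
nonce_a, nonce_b = os.urandom(8), os.urandom(8)
k_ab = derive_session_key(network_key, b"node-A", b"node-B", nonce_a, nonce_b)
k_ba = derive_session_key(network_key, b"node-B", b"node-A", nonce_b, nonce_a)
```

In a real protocol the nonces would be exchanged over the air and the pre-shared key provisioned at deployment; the point here is only the symmetric derivation step.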
The phase behavior of a dendritic amphiphile containing a Newkome-type dendron as the hydrophilic moiety and a cholesterol unit as the hydrophobic segment is investigated at the air-liquid interface. The amphiphile forms stable monomolecular films at the air-liquid interface on different subphases. Furthermore, the mineralization of calcium phosphate beneath the monolayer at different calcium and phosphate concentrations versus mineralization time shows that at low calcium and phosphate concentrations needles form, whereas flakes and spheres dominate at higher concentrations. Energy-dispersive X-ray spectroscopy, X-ray photoelectron spectroscopy, and electron diffraction confirm the formation of calcium phosphate. High-resolution transmission electron microscopy and electron diffraction confirm the predominant formation of octacalcium phosphate and hydroxyapatite. The data also indicate that the final products form via a complex multistep reaction, including an association step, where nano-needles aggregate into larger flake-like objects.
We study those nonlinear partial differential equations which appear as Euler-Lagrange equations of variational problems. By defining weak boundary values of solutions to such equations we initiate the theory of Lagrangian boundary value problems in spaces of appropriate smoothness. We also analyse whether the concept of mapping degree of current importance applies to Lagrangian problems.
The molecular biomarker composition of two sediment cores from Sanabria Lake (NW Iberian Peninsula) and a survey of modern plants in the watershed provide a reconstruction of past vegetation and landscape dynamics since deglaciation. During a proglacial stage in Lake Sanabria (prior to 14.7 cal ka BP), very low biomarker concentrations and carbon preference index (CPI) values of ∼1 suggest that the n-alkanes could have derived from eroded ancient sediment sources or older organic matter with a high degree of maturity. During the Late glacial (14.7-11.7 cal ka BP) and the Holocene (last 11.7 cal ka BP), intervals with higher biomarker and triterpenoid concentrations (high %n-C29 and n-C31 alkanes), higher CPI and average chain length (ACL), and lower P-aq (proportion of aquatic plants) are indicative of a major contribution of vascular land plants from a more forested watershed (e.g. the Mid Holocene period, 7.0-4.0 cal ka BP). Lower biomarker concentrations (high %n-C27 alkanes) and lower CPI and ACL values responded to short phases with decreased allochthonous contribution into the lake that correspond to centennial-scale periods of regional forest decline (e.g. 4-3 ka BP, Roman deforestation after 2.0 ka, and some phases of the LIA, seventeenth-nineteenth centuries). Human activities in the watershed were significant during early medieval times (1.3-1.0 cal ka BP) and since 1960 CE, in both cases associated with relatively higher productivity stages in the lake (lower biomarker and triterpenoid concentrations, high %n-C23 and %n-C31 respectively, lower ACL and CPI values, and higher P-aq). The lipid composition of Sanabria Lake sediments indicates a major allochthonous (watershed-derived) contribution to the organic matter budget since deglaciation, and a dominant oligotrophic status during the lake's history.
The study constrains the climate and anthropogenic forcings and distinguishes watershed versus lake sources in organic matter accumulation processes, and it helps to design conservation and management policies for mountain oligotrophic lakes.
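The indices discussed above (CPI, ACL, P-aq) are computed directly from n-alkane abundances. The sketch below uses formulas as they commonly appear in the n-alkane literature (e.g. P-aq = (C23+C25)/(C23+C25+C29+C31)); the carbon ranges, the exact CPI variant used in the study, and the abundance values here are all illustrative assumptions.

```python
def acl(abundances):
    """Average chain length: abundance-weighted mean carbon number."""
    return sum(n * a for n, a in abundances.items()) / sum(abundances.values())

def p_aq(abundances):
    """Proportion of aquatic-plant n-alkanes: (C23+C25)/(C23+C25+C29+C31)."""
    aquatic = abundances.get(23, 0) + abundances.get(25, 0)
    land = abundances.get(29, 0) + abundances.get(31, 0)
    return aquatic / (aquatic + land)

def cpi(abundances, lo=25, hi=33):
    """Carbon preference index (simple odd/even form) over the lo..hi range."""
    odd = sum(abundances.get(n, 0) for n in range(lo, hi + 1, 2))
    even = sum(abundances.get(n, 0) for n in range(lo + 1, hi, 2))
    return odd / even

# Invented abundances {carbon number: relative amount}, odd-dominated as is
# typical for fresh vascular-plant input.
alkanes = {23: 5, 25: 8, 27: 20, 29: 45, 31: 50, 33: 15,
           26: 4, 28: 6, 30: 7, 32: 3}

acl_val = acl(alkanes)    # high ACL: long-chain (land-plant) dominance
paq_val = p_aq(alkanes)   # low P-aq: minor aquatic contribution
cpi_val = cpi(alkanes)    # CPI well above 1: fresh, immature organic matter
```

A CPI near 1, as in the proglacial stage described above, would instead indicate degraded or ancient organic matter with no odd-over-even preference.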
Multimodal representation learning has gained increasing importance in various real-world multimedia applications. Most previous approaches focused on exploring inter-modal correlation by learning a common or intermediate space in a conventional way, e.g. Canonical Correlation Analysis (CCA). These works neglected the fusion of multiple modalities at a higher semantic level. In this paper, inspired by the success of deep networks in multimedia computing, we propose a novel unified deep neural framework for multimodal representation learning. To capture the high-level semantic correlations across modalities, we adopt deep learning features as the image representation and topic features as the text representation. For joint model learning, a 5-layer neural network is designed and enforced with supervised pre-training in the first 3 layers for intra-modal regularization. Extensive experiments on the benchmark Wikipedia and MIR Flickr 25K datasets show that our approach achieves state-of-the-art results compared with both shallow and deep models in multimodal and cross-modal retrieval.
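The core idea of mapping two modalities into a shared space for cross-modal retrieval can be sketched minimally as below. The projections are random stand-ins rather than the paper's learned 5-layer network, and all dimensions and data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reduced stand-in dimensions: image features, text (topic) features, shared space.
d_img, d_txt, d_common = 64, 32, 16

# In the actual framework these projections are learned jointly; here they
# are random matrices that merely demonstrate the data flow.
W_img = rng.normal(size=(d_img, d_common)) / np.sqrt(d_img)
W_txt = rng.normal(size=(d_txt, d_common)) / np.sqrt(d_txt)

def embed(x, W):
    """Project modality-specific features into the shared space, L2-normalized."""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# Five toy image/text "pairs".
imgs = rng.normal(size=(5, d_img))
txts = rng.normal(size=(5, d_txt))

zi = embed(imgs, W_img)
zt = embed(txts, W_txt)

# Cross-modal retrieval: rank all texts by cosine similarity to an image query.
sims = zi @ zt.T
ranking = np.argsort(-sims[0])
```

With learned projections, matched pairs would rank first; the mechanics of embedding and ranking are identical.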
In today's production, fluctuations in demand, shortening product life-cycles, and highly configurable products require an adaptive and robust control approach to maintain competitiveness. This approach must not only optimise desired production objectives but also cope with unforeseen machine failures, rush orders, and changes in short-term demand. Previous control approaches were often implemented using a single operations layer and a standalone deep learning approach, which may not adequately address the complex organisational demands of modern manufacturing systems. To address this challenge, we propose a hyper-heuristics control model within a semi-heterarchical production system, in which multiple manufacturing and distribution agents are spread across pre-defined modules. The agents employ a deep reinforcement learning algorithm to learn a policy for selecting low-level heuristics in a situation-specific manner, thereby improving system performance and adaptability. We tested our approach in simulation and transferred it to a hybrid production environment. In doing so, we demonstrated its multi-objective optimisation capabilities compared to conventional approaches in terms of mean throughput time, tardiness, and processing of prioritised orders in a multi-layered production system. The modular design is promising for reducing overall system complexity and facilitates quick and seamless integration into other scenarios.
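The hyper-heuristic idea, an agent learning to choose among low-level dispatching rules rather than choosing jobs directly, can be sketched in a toy single-machine setting. The state (queue length), the three rules, and the tabular Monte-Carlo update below are drastic simplifications of the paper's deep reinforcement learning setup.

```python
import random

random.seed(1)

# Low-level dispatching heuristics over jobs = (processing_time, due_date).
HEURISTICS = {
    "SPT": lambda jobs: min(jobs, key=lambda j: j[0]),   # shortest processing time
    "EDD": lambda jobs: min(jobs, key=lambda j: j[1]),   # earliest due date
    "FIFO": lambda jobs: jobs[0],                        # first in, first out
}

# Q[state][heuristic] estimates episode tardiness; state = queue length.
Q = {n: {h: 0.0 for h in HEURISTICS} for n in range(1, 7)}

def run_episode(eps):
    """One scheduling episode; epsilon-greedy heuristic choice per decision."""
    jobs = [(random.randint(1, 5), random.randint(3, 15)) for _ in range(6)]
    t = tardiness = 0
    queue, trace = list(jobs), []
    while queue:
        s = len(queue)
        a = (random.choice(list(HEURISTICS)) if random.random() < eps
             else min(Q[s], key=Q[s].get))       # lower estimated tardiness wins
        trace.append((s, a))
        job = HEURISTICS[a](queue)
        queue.remove(job)
        t += job[0]
        tardiness += max(0, t - job[1])
    for s, a in trace:                           # crude Monte-Carlo update
        Q[s][a] += 0.1 * (tardiness - Q[s][a])
    return tardiness

for _ in range(300):
    run_episode(eps=0.3)     # explore while learning
final_tardiness = run_episode(eps=0.0)  # greedy policy after training
```

The paper's agents replace the Q-table with a deep network and the toy state with rich production features, but the select-a-heuristic control loop is the same shape.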
Earthquake localization is both a necessity within the field of seismology and a prerequisite for further analysis such as source studies and hazard assessment. Traditional localization methods often rely on manually picked phases. We present an alternative approach using deep learning that, once trained, can predict hypocenter locations efficiently. In seismology, neural networks have typically been trained with either single-station records or features extracted previously from the waveforms. We use three-component full-waveform records of multiple stations directly. This means no information is lost during preprocessing, and preparation of the data does not require expert knowledge. The first convolutional layer of our deep convolutional neural network (CNN) becomes sensitive to features that characterize the waveforms it is trained on. We show that this layer can therefore additionally be used as an event detector. As a test case, we trained our CNN using more than 2000 earthquake swarm events from West Bohemia, recorded by nine local three-component stations. The CNN successfully located 908 validation events with standard deviations of 56.4 m in the east-west, 123.8 m in the north-south, and 136.3 m in the vertical direction, compared to a double-difference relocated reference catalog. The detector is sensitive to events with magnitudes down to M_L = -0.8, with 3.5% false positive detections.
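A rough sketch of the front end of such a network: stack the multi-station, three-component waveforms as channels, apply a 1-D convolutional layer with ReLU, pool, and regress a location. All shapes and weights below are arbitrary stand-ins; the actual CNN is deeper and trained on real events.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, kernels):
    """Valid-mode 1-D convolution of a multichannel signal x (C, T)
    with a filter bank (K, C, W), followed by ReLU."""
    K, C, W = kernels.shape
    T_out = x.shape[1] - W + 1
    out = np.empty((K, T_out))
    for k in range(K):
        # np.convolve flips its kernel; flipping back gives cross-correlation.
        out[k] = sum(np.convolve(x[c], kernels[k, c][::-1], mode="valid")
                     for c in range(C))
    return np.maximum(out, 0.0)

# Toy input: 9 stations x 3 components = 27 channels, 256 samples each.
waveforms = rng.normal(size=(27, 256))
filters = rng.normal(size=(8, 27, 16)) * 0.1     # 8 learned filters (stand-ins)

features = conv1d_relu(waveforms, filters)       # (8, 241) feature maps
pooled = features.max(axis=1)                    # global max pooling per filter
W_head = rng.normal(size=(8, 3)) * 0.1
hypocenter = pooled @ W_head                     # (east, north, depth) regression
```

Because the first-layer feature maps respond strongly only when trained waveform patterns are present, thresholding their activations is one plausible way to reuse the layer as the event detector described above.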
A very sensitive X-ray investigation of the giant HII region N11 in the Large Magellanic Cloud was performed using the Chandra X-ray Observatory. The 300 ks observation reveals X-ray sources with luminosities down to 10^32 erg s^-1, increasing the number of known point sources in the field by more than a factor of five. Among these detections are 13 massive stars (3 compact groups of massive stars, 9 O stars, and one early B star) with log(L_X/L_bol) ∼ -6.5 to -7, which may suggest that they are highly magnetic or colliding-wind systems. On the other hand, the stacked signal for regions corresponding to undetected O stars yields log(L_X/L_bol) ∼ -7.3, i.e., an emission level comparable to similar Galactic stars despite the lower metallicity. Other point sources coincide with 11 foreground stars, 6 late-B/A stars in N11, and many background objects. This observation also uncovers the extent and detailed spatial properties of the soft, diffuse emission regions, but the presence of some hotter plasma in their spectra suggests contamination by the unresolved stellar population.
We present an experimental approach to study the three-dimensional microstructure of gas diffusion layer (GDL) materials under realistic compression conditions. A dedicated compression device was designed that allows for synchrotron-tomographic investigation of circular samples under well-defined compression conditions. The tomographic data provide the experimental basis for stochastic modeling of nonwoven GDL materials. A plain compression tool is used to study the fiber courses in the material at different compression stages. Transport-relevant geometrical parameters, such as porosity, pore size, and tortuosity distributions, are exemplarily evaluated for a GDL sample in the uncompressed state and for a compression of 30 vol.%. To mimic the geometry of the flow-field, we employed a compression punch with an integrated channel-rib profile. It turned out that the GDL material is homogeneously compressed under the ribs but much less compressed underneath the channel. GDL fibers extend far into the channel volume, where they might interfere with the convective gas transport and the removal of liquid water from the cell. (C) 2015 AIP Publishing LLC.
A function has vanishing mean oscillation (VMO) on R^n if its mean oscillation (the local average of its pointwise deviation from its mean value) is both uniformly bounded over all cubes within R^n and convergent to zero with the volume of the cube. The more restrictive class of functions with vanishing lower oscillation (VLO) arises when the mean value is replaced by the minimum value in this definition. It is shown here that each VMO function is the difference of two functions in VLO.
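In symbols, a standard formulation of the definitions paraphrased above reads:

```latex
% Mean value and mean oscillation of f over a cube Q \subset \mathbb{R}^n:
f_Q = \frac{1}{|Q|}\int_Q f(y)\,dy, \qquad
\operatorname{MO}(f,Q) = \frac{1}{|Q|}\int_Q \bigl|f(x) - f_Q\bigr|\,dx .

% f \in \mathrm{VMO}(\mathbb{R}^n) \iff
% \sup_{Q} \operatorname{MO}(f,Q) < \infty
% \quad\text{and}\quad \operatorname{MO}(f,Q) \to 0 \text{ as } |Q| \to 0.

% VLO: the same two conditions with f_Q replaced by \min_{Q} f.
```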
Faced with the increasing needs of companies, optimally dimensioning IT hardware is becoming challenging for decision makers. For analytical infrastructures, a highly evolving environment causes volatile, time-dependent workloads in its components, making intelligent, flexible task distribution between local systems and cloud services attractive. With the aim of developing a flexible and efficient design for analytical infrastructures, this paper proposes a flexible architecture model which allocates tasks following a machine-specific decision heuristic. A simulation benchmarks this system against existing strategies and identifies the new decision maxim as superior in a first scenario-based simulation.
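A machine-specific decision heuristic of the kind described can be sketched as a per-task cost comparison. The cost model below (current local queue load versus task cost plus a flat cloud surcharge) is a hypothetical stand-in for the paper's actual heuristic, and the numbers are invented.

```python
def allocate(task_cost, local_load, cloud_surcharge=0.3):
    """Route a task to whichever placement has the lower estimated cost.

    local: the task waits behind the current local queue load.
    cloud: the task runs immediately but pays a proportional surcharge.
    """
    local_estimate = local_load + task_cost
    cloud_estimate = task_cost * (1 + cloud_surcharge)
    return "local" if local_estimate <= cloud_estimate else "cloud"

# Distribute a burst of equal-cost tasks; in this toy the local queue never
# drains, so overflow spills to the cloud once local wait exceeds the surcharge.
local_load = 0.0
placement = []
for cost in [2.0, 2.0, 2.0, 2.0]:
    target = allocate(cost, local_load)
    if target == "local":
        local_load += cost
    placement.append(target)
```

A realistic variant would drain the local queue over time and use machine-specific cost parameters, which is where the "machine-specific" aspect of the heuristic enters.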
Acetanilides can be deacetylated and diazotized in situ, and subsequently used in Pd-catalyzed coupling reactions without isolation of the diazonium intermediate. Heck reactions, Suzuki cross-coupling reactions, and a Pd-catalyzed [2+2+1] cycloaddition have been investigated as terminating C-C bond-forming steps of this one-flask sequence. The sequence does not require the exchange of solvents or removal of by-products between the individual steps, but proceeds by addition of reagents and catalysts in due course.
In this study we investigate a dayside, midlatitude plasma depletion (DMLPD) encountered on 22 May 2014 by the Swarm and GRACE satellites, as well as ground-based instruments. The DMLPD was observed near Puerto Rico by Swarm near 10 LT under quiet geomagnetic conditions at altitudes of 475-520 km and magnetic latitudes of ∼25°-30°. The DMLPD was also revealed in total electron content observations by the Saint Croix station and by the GRACE satellites (430 km) near 16 LT and near the same geographic location. The unique Swarm constellation enables the horizontal tilt of the DMLPD to be measured (35° clockwise from the geomagnetic east-west direction). Ground-based airglow images at Arecibo showed no evidence for plasma density depletions during the night prior to this dayside event. The C/NOFS equatorial satellite showed evidence for very modest plasma density depletions that had rotated into the morningside from nightside. However, the equatorial depletions do not appear related to the DMLPD, for which the magnetic apex height is about 2500 km. The origins of the DMLPD are unknown, but may be related to gravity waves.
Compound natural hazards like El Niño events cause great damage to society, and managing them requires reliable risk assessments. Damage modelling is a prerequisite for quantitative risk estimation, yet many procedures still rely on expert knowledge, and empirical studies investigating damage from compound natural hazards hardly exist. A nationwide building survey in Peru after the 2017 El Niño event, which caused intense rainfall, ponding water, flash floods and landslides, enables us to apply data-mining methods for statistical groundwork, using explanatory features generated from remote sensing products and open data. We separate regions of different dominant characteristics through unsupervised clustering, and investigate feature importance rankings for classifying damage via supervised machine learning. Besides the expected effect of precipitation, the classification algorithms select the topographic wetness index as the most important feature, especially in low-elevation areas. The slope length and steepness factor ranks high for mountains and canyons. Partial dependence plots further hint at amplified vulnerability in rural areas. An example of an empirical damage probability map, developed with a random forest model, is provided to demonstrate the technical feasibility.
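The feature-ranking step can be illustrated with permutation importance on synthetic data: shuffle one feature and measure the resulting accuracy drop. The classifier below is a simple logistic model rather than the random forest used in the study, and the data are invented, with the wetness-index stand-in deliberately made dominant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in features: [precipitation, wetness_index, slope_factor].
n = 400
X = rng.normal(size=(n, 3))
# Damage label depends strongly on the wetness index (col 1), mildly on
# precipitation (col 0), and not at all on the slope factor (col 2).
y = (0.5 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(scale=0.5, size=n)) > 0

def fit_logreg(X, y, lr=0.1, steps=500):
    """Plain gradient-ascent logistic regression (no regularization)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w += lr * X.T @ (y - p) / len(y)
    return w

def accuracy(w, X, y):
    return float(np.mean(((X @ w) > 0) == y))

w = fit_logreg(X, y)
baseline = accuracy(w, X, y)

# Permutation importance: accuracy drop after shuffling one feature column.
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(baseline - accuracy(w, Xp, y))
```

The same procedure applied to a fitted random forest yields the kind of importance ranking reported above; here the wetness-index column dominates by construction.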
We consider an extension of the Standard Model within the framework of Noncommutative Geometry. The model is based on an older model [C. A. Stephan, Phys. Rev. D 79, 065013 (2009)] which extends the Standard Model by new fermions, a new U(1)-gauge group and, crucially, a new scalar field which couples to the Higgs field. This new scalar field allows the Higgs mass to be lowered from ∼170 GeV, as predicted by the Spectral Action for the Standard Model, to a value of 120-130 GeV. The shortcoming of the previous model lay in its inability to meet all the constraints on the gauge couplings implied by the Spectral Action. These shortcomings are cured in the present model, which also features a "dark sector" containing fermions and scalar particles.
Dark matter (DM) has not yet been directly observed, but it has a very solid theoretical basis. There are observations that provide indirect evidence, such as galactic rotation curves showing that galaxies rotate too fast to hold their constituent parts together, and galaxy clusters that bend the light coming from galaxies behind them more strongly than expected from the mass that can be calculated from what is visibly seen. These observations, among many others, can be explained by theories that include DM. The missing piece is to detect something that can exclusively be explained by DM. Direct observation in a particle accelerator is one way, and indirect detection using telescopes is another. This thesis focuses on the latter method.
The Very Energetic Radiation Imaging Telescope Array System (VERITAS) is a telescope array that detects Cherenkov radiation. Theory predicts that DM particles annihilate into, e.g., a γγ pair and create a distinctive energy spectrum when detected by such telescopes, i.e., a monoenergetic line at the same energy as the particle mass. This so-called "smoking-gun" signature is sought with a sliding-window line search within the sub-range ∼0.3-10 TeV of the VERITAS energy range, ∼0.01-30 TeV.
Standard analysis within the VERITAS collaboration uses Hillas analysis and look-up tables, acquired by analysing particle simulations, to calculate the energy of the particle causing the Cherenkov shower. In this thesis, an improved analysis method has been used: modelling each shower as a 3D Gaussian should increase the quality of the energy reconstruction. Five dwarf spheroidal galaxies were chosen as targets, with a total of ∼224 hours of data. The targets were analysed individually and stacked. Particle simulations were based on two simulation packages, CARE and GrISU.
Improvements of up to a few percent each have been made to the energy resolution and bias correction in comparison to the standard analysis. Nevertheless, no line with a relevant significance has been detected. The most promising line is at an energy of ∼422 GeV, with an upper-limit cross section of 8.10 · 10^-24 cm^3 s^-1 and a significance of ∼2.73 σ before trials correction and ∼1.56 σ after. Upper-limit cross sections have also been calculated for the γγ annihilation process and four other outcomes. The limits are in line with current limits from other methods, ranging from ∼8.56 · 10^-26 to 6.61 · 10^-23 cm^3 s^-1. Future larger telescope arrays, like the upcoming Cherenkov Telescope Array (CTA), will provide better results with the help of this analysis method.
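At its core, a sliding-window line search scans binned counts for a localized excess over a smooth background. The toy below (power-law background, one injected line, simple Gaussian significance) illustrates only the principle; the actual analysis folds in the instrument response and applies trials correction and upper-limit construction. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy spectrum: 40 logarithmic energy bins between 0.3 and 10 TeV.
edges = np.geomspace(0.3, 10.0, 41)
background = 1000.0 * edges[:-1] ** -2.5   # smooth power-law expectation
counts = rng.poisson(background)
counts[20] += 200                          # injected monoenergetic "line"

def scan(counts, background, width=3):
    """Slide a window across the bins; return the start index and the
    largest excess significance (Gaussian approximation (n-b)/sqrt(b))."""
    best_i, best_sig = 0, -np.inf
    for i in range(len(counts) - width):
        n = counts[i:i + width].sum()
        b = background[i:i + width].sum()
        sig = (n - b) / np.sqrt(b)
        if sig > best_sig:
            best_i, best_sig = i, sig
    return best_i, best_sig

best_i, best_sig = scan(counts, background)
```

Because many windows are tested, the most significant one must be trials-corrected, which is exactly why the ∼2.73 σ pre-trials line above drops to ∼1.56 σ after correction.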
Rainfall erosivities as defined by the R factor from the universal soil loss equation were determined for all events during a two-year period at the station La Cuenca in western Amazonia. Three methods based on a power relationship between rainfall amount and erosivity were then applied to estimate event and daily rainfall erosivities from the respective rainfall amounts. A test of the resulting regression equations against an independent data set proved all three methods equally adequate for predicting rainfall erosivity from daily rainfall amount. We recommend the Richardson model for testing in the Amazon Basin, and its use with the coefficients from La Cuenca in western Amazonia.
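A power relationship R = a · P^b between rainfall amount P and erosivity R is typically fitted by ordinary least squares on log-transformed values, since log R = log a + b · log P is linear. A sketch with synthetic data (the coefficients are invented, not the La Cuenca values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observed" events following R = a * P^b with lognormal scatter.
a_true, b_true = 0.3, 1.6
P = rng.uniform(5, 80, size=60)                       # daily rainfall, mm
R = a_true * P ** b_true * np.exp(rng.normal(scale=0.1, size=60))

# Fit log R = log a + b log P by ordinary least squares.
b_hat, log_a_hat = np.polyfit(np.log(P), np.log(R), 1)
a_hat = np.exp(log_a_hat)

def predict_erosivity(P_daily):
    """Estimate event erosivity from daily rainfall via the fitted power law."""
    return a_hat * P_daily ** b_hat
```

Validating such a regression against an independent data set, as done above, guards against the fit merely reproducing the scatter of the calibration period.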
A cytoplasmically inherited chlorophyll-deficient mutant of barley (Hordeum vulgare) termed cytoplasmic line 3 (CL3), displaying a viridis (homogeneously light-green colored) phenotype, has previously been shown to be affected by elevated temperatures. In this article, biochemical, biophysical, and molecular approaches were used to study the CL3 mutant under different temperature and light conditions. The results lead to the conclusion that an impaired assembly of photosystem I (PSI) under higher temperatures and certain light conditions is the primary cause of the CL3 phenotype. Compromised splicing of ycf3 transcripts, particularly at elevated temperature, caused by a mutation in a noncoding region (intron 1) of the mutant ycf3 gene, results in defective synthesis of Ycf3, a chaperone involved in PSI assembly. The defective PSI assembly causes severe photoinhibition and degradation of PSII.
Microservice Architectures (MSA) structure applications as collections of loosely coupled services that implement business capabilities. The key advantages of MSA include inherent support for continuous deployment of large, complex applications, agility, and enhanced productivity. However, studies indicate that most MSA are homogeneous and introduce shared vulnerabilities, leaving them open to multi-step attacks and offering economies-of-scale incentives to attackers. In this paper, we address the issue of shared vulnerabilities in microservices with a novel solution based on the concept of Moving Target Defenses (MTD). Our mechanism works by performing risk analysis against microservices to detect and prioritize vulnerabilities. Thereafter, security-risk-oriented software diversification is employed, guided by a defined diversification index. The diversification is performed at runtime, leveraging both model- and template-based automatic code generation techniques to automatically transform the programming languages and container images of the microservices. Consequently, the microservices' attack surfaces are altered, thereby introducing uncertainty for attackers while reducing the attackability of the microservices. Our experiments demonstrate the efficiency of our solution, with an average success rate of over 70% for attack surface randomization.
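A toy version of risk-guided diversification: rank services by a vulnerability score and reassign runtime variants, highest risk first, until a diversification target is met. The service fields, CVSS-style scores, and the index definition below are illustrative stand-ins, not the paper's actual model.

```python
# A homogeneous deployment: every service shares the same runtime, so one
# vulnerability in that runtime is shared by all of them.
services = [
    {"name": "auth", "lang": "python", "cvss": 9.1},
    {"name": "cart", "lang": "python", "cvss": 5.3},
    {"name": "pay",  "lang": "python", "cvss": 7.8},
]
VARIANTS = ["python", "go", "java"]   # hypothetical available runtimes

def diversification_index(svcs):
    """Fraction of distinct runtimes across services (1.0 = fully diversified)."""
    return len({s["lang"] for s in svcs}) / len(svcs)

def diversify(svcs, target=1.0):
    """Reassign runtimes to the riskiest services first until the target index
    is reached, preferring variants not yet used by any other service."""
    for svc in sorted(svcs, key=lambda s: -s["cvss"]):
        if diversification_index(svcs) >= target:
            break
        in_use = {other["lang"] for other in svcs if other is not svc}
        unused = [v for v in VARIANTS if v not in in_use]
        if unused:
            svc["lang"] = unused[0]
    return svcs

idx_before = diversification_index(services)
diversify(services)
idx_after = diversification_index(services)
```

In the paper's setting, each reassignment corresponds to regenerating the service (code and container image) at runtime, so the attack surface an adversary mapped earlier no longer matches the deployment.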
Ten square-based pyramidal molybdenum complexes with different sulfur donor ligands, that is, a variety of dithiolenes and sulfides, were prepared, which mimic coordination motifs of the molybdenum cofactors of molybdenum-dependent oxidoreductases. The model compounds were investigated by Mo K-edge X-ray absorption spectroscopy (XAS) and (with one exception) their molecular structures were analyzed by X-ray diffraction to derive detailed information on bond lengths and geometries of the first coordination shell of molybdenum. Only small variations in Mo=O and Mo-S bond lengths and their respective coordination angles were observed for all complexes, including those containing Mo(CO)2 or Mo(μ-S)2Mo motifs. XAS analysis (edge energy) revealed higher relative oxidation levels of the molybdenum ion in compounds with innocent sulfur-based ligands relative to those in dithiolene complexes, which are known to exhibit noninnocence, that is, donation of substantial electron density from ligand to metal. In addition, longer average Mo-S and Mo=O bonds and consequently lower ν(Mo=O) stretching frequencies in the IR spectra were observed for complexes with dithiolene-derived ligands. The results emphasize that the noninnocent character of the dithiolene ligand influences the electronic structure of the model compounds, but does not significantly affect their metal coordination geometry, which is largely determined by the Mo(IV) or (V) ion itself. The latter conclusion also holds for the molybdenum site geometries in the oxidized Mo(VI) cofactor of DMSO reductase and the reduced Mo(IV) cofactor of arsenite oxidase. The innocent behavior of the dithiolene molybdopterin ligands observed in the enzymes is likely to be related to cofactor-protein interactions.
Protection of natural or semi-natural ecosystems is an important part of societal strategies for maintaining biodiversity, ecosystem services, and achieving overall sustainable development. The assessment of multiple emerging land use trade-offs is complicated by the fact that land use changes occur and have consequences at local, regional, and even global scale. Outcomes also depend on the underlying socio-economic trends. We apply a coupled, multi-scale modelling system to assess an increase in nature protection areas as a key policy option in the European Union (EU). The main goal of the analysis is to understand the interactions between policy-induced land use changes across different scales and sectors under two contrasting future socio-economic pathways. We demonstrate how complementary insights into land system change can be gained by coupling land use models for agriculture, forestry, and urban areas for Europe, in connection with other world regions. The simulated policy case of nature protection shows how the allocation of a certain share of total available land to newly protected areas, with specific management restrictions imposed, may have a range of impacts on different land-based sectors until the year 2040. Agricultural land in Europe is slightly reduced, which is partly compensated for by higher management intensity. As a consequence of higher costs, total calorie supply per capita is reduced within the EU. While wood harvest is projected to decrease, carbon sequestration rates increase in European forests. At the same time, imports of industrial roundwood from other world regions are expected to increase. Some of the aggregate effects of nature protection have very different implications at the local to regional scale in different parts of Europe. Due to nature protection measures, agricultural production is shifted from more productive land in Europe to on average less productive land in other parts of the world. 
This increases, at the global level, the allocation of land resources for agriculture, leading to a decrease in tropical forest areas, reduced carbon stocks, and higher greenhouse gas emissions outside of Europe. The integrated modelling framework provides a method to assess the land use effects of a single policy option while accounting for the trade-offs between locations, and between regional, European, and global scales.
Diagnostics of autoimmune diseases involve screening patient samples for autoantibodies against various antigens. To ensure the quality of diagnostic assays, a calibrator is needed in each assay system. Various calibrators, such as recombinant human monoclonal antibodies as well as chimeric antibodies against the autoantigens of interest, have been described. A less cost-intensive and also more representative possibility, covering different targets on the antigens, is the utilization of polyclonal sera from other species. Nevertheless, detecting both human autoantibodies and a calibration reagent containing antibodies from another species in one assay constitutes a challenge for assay calibration. We therefore developed a cross-reactive monoclonal antibody which binds human as well as rabbit sera with similar affinities in the nanomolar range. We tested our monoclonal antibody S38CD11B12 successfully in the commercial Serazym (R) Anti-Cardiolipin-beta 2-GPI IgG/IgM assay and could thereby prove the eligibility of S38CD11B12 as a detection antibody in autoimmune diagnostic assays using rabbit-derived sera as reference material.
A protected derivative of (3R,4R)-hexa-1,5-diene-3,4-diol, a conveniently accessible C2-symmetric building block, undergoes single or double cross metathesis with methyl acrylate. The cross metathesis products are amenable to stereoselective conjugate addition reactions and can be converted into either gamma-butyrolactones or gamma-lactams.
The aim of this dissertation was to conduct a larger-scale cross-linguistic empirical investigation of similarity-based interference effects in sentence comprehension.
Interference studies can offer valuable insights into the mechanisms that are involved in long-distance dependency completion.
Many studies have investigated similarity-based interference effects, showing that syntactic and semantic information are employed during long-distance dependency formation (e.g., Arnett & Wagers, 2017; Cunnings & Sturt, 2018; Van Dyke, 2007; Van Dyke & Lewis, 2003; Van Dyke & McElree, 2011). Nevertheless, there are some important open questions in the interference literature that are critical to our understanding of the constraints involved in dependency resolution.
The first research question concerns the relative timing of syntactic and semantic interference in online sentence comprehension. Only a few interference studies have investigated this question, and, to date, there is not enough data to draw conclusions with regard to their time course (Van Dyke, 2007; Van Dyke & McElree, 2011).
Our first cross-linguistic study explores the relative timing of syntactic and semantic interference in two eye-tracking reading experiments that implement the study design used in Van Dyke (2007). The first experiment tests English sentences. The second, larger-sample experiment investigates the two interference types in German.
Overall, the data suggest that syntactic and semantic interference can arise simultaneously during retrieval.
The second research question concerns a special case of semantic interference: We investigate whether cue-based retrieval interference can be caused by semantically similar items which are not embedded in a syntactic structure.
This second interference study builds on a landmark study by Van Dyke & McElree (2006). The study design used in their study is unique in that it is able to pin down the source of interference as a consequence of cue overload during retrieval, when semantic retrieval cues do not uniquely match the retrieval target. Unlike most other interference studies, this design is able to rule out encoding interference as an alternative explanation. Encoding accounts postulate that it is not cue overload at the retrieval site but the erroneous encoding of similar linguistic items in memory that leads to interference (Lewandowsky et al., 2008; Oberauer & Kliegl, 2006). While Van Dyke & McElree (2006) reported cue-based retrieval interference from sentence-external distractors, the evidence for this effect was weak. A subsequent study did not show interference of this type (Van Dyke et al., 2014). Given these inconclusive findings, further research is necessary to investigate semantic cue-based retrieval interference.
The second study in this dissertation provides a larger-scale cross-linguistic investigation of cue-based retrieval interference from sentence-external items. Three larger-sample eye-tracking studies in English, German, and Russian tested cue-based interference in the online processing of filler-gap dependencies. This study further extends the previous research by investigating interference in each language under varying task demands (Logačev & Vasishth, 2016; Swets et al., 2008).
Overall, we see some very modest support for proactive cue-based retrieval interference in English. Unexpectedly, this was observed only under a low task demand. In German and Russian, there is some evidence against the interference effect. It is possible that interference is attenuated in languages with richer case marking.
In sum, the cross-linguistic experiments on the time course of syntactic and semantic interference from sentence-internal distractors support existing evidence of syntactic and semantic interference during sentence comprehension. Our data further show that both types of interference effects can arise simultaneously. Our cross-linguistic experiments investigating semantic cue-based retrieval interference from sentence-external distractors suggest that this type of interference may arise only in specific linguistic contexts.
This study compares the duration and first two formants (F1 and F2) of 11 nominal monophthongs and five nominal diphthongs in Standard Southern British English (SSBE) and a Northern English dialect. F1 and F2 trajectories were fitted with parametric curves using the discrete cosine transform (DCT): the zeroth DCT coefficient represented the formant trajectory mean, and the first DCT coefficient represented the magnitude and direction of formant trajectory change, characterizing vowel inherent spectral change (VISC). Cross-dialectal comparisons involving these measures revealed significant differences for the phonologically back monophthongs /D, , , u:/ and also /3z:/ and the diphthongs /eI, e, aI, I/. Most cross-dialectal differences lie in the zeroth DCT coefficients, suggesting that formant trajectory means tend to characterize such differences, while first DCT coefficient differences were more numerous for diphthongs. With respect to VISC, the most striking differences are that /u:/ is considerably more diphthongized in the Northern dialect and that the F2 trajectory of /e/ proceeds in opposite directions in the two dialects. Cross-dialectal differences were found to be largely unaffected by the consonantal context in which the vowels were produced. The implications of the results are discussed in relation to VISC, consonantal context effects and speech perception. (c) 2014 Acoustical Society of America.
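As a sketch of the trajectory parameterization described above, the leading DCT-II coefficients of a sampled formant track can be computed directly. The function name and the 1/N-versus-2/N normalization are illustrative assumptions, not taken from the paper: under this convention C_0 equals the trajectory mean, and the sign and size of C_1 capture the direction and magnitude of spectral change.

```python
import numpy as np

def dct_coefficients(trajectory, k_max=2):
    """Leading DCT-II coefficients of a formant trajectory (sketch).

    C_0 equals the trajectory mean; C_1 reflects the magnitude and
    direction of change along the trajectory (negative for a rising
    track under this basis convention).
    """
    x = np.asarray(trajectory, dtype=float)
    n = len(x)
    t = (np.arange(n) + 0.5) * np.pi / n  # DCT-II sample phases
    coeffs = []
    for k in range(k_max):
        basis = np.cos(k * t)
        scale = 1.0 / n if k == 0 else 2.0 / n
        coeffs.append(float(np.sum(x * basis) * scale))
    return coeffs
```

For a flat F2 track the first coefficient vanishes, while a monotonically rising track yields a nonzero C_1 of opposite sign to a falling one, which is how the study's "magnitude and direction of formant trajectory change" measure works.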
Although development aid has broadened from economic growth theory to include human and social capital, there is no general agreement as to its benefits. This critical review and analysis of the academic and institutional development aid discourse identifies some major shortcomings. The dominance of economics at the expense of politics, and the imposition of neoliberal development aid conditionalities, act as barriers to socio-economic development in aid recipient countries. An inference is offered to recast development aid through reconciliation within critical frameworks of different sides of the political spectrum.
The last years have been affected by Covid-19 and the international emergency mechanism to deal with health-related threats. The effects of this period manifested differently worldwide, depending on matters such as international relations, national policies, power dynamics, etc. Additionally, the impact of this time will likely have long-term effects which are yet to be known. This paper gives a critical overview of the Public Health Emergency of International Concern (PHEIC) mechanism in the context of Covid-19. It does so by explaining the legal framework for states of emergency, specifically in the context of a PHEIC, while considering its restrictions and limitations on human rights. It further outlines issues in the manifestation of global protections and limitations on human rights during Covid-19. Lastly, considering the likelihood of future PHEICs and the known systemic obstructions, this paper offers ways to improve this mechanism from a holistic, non-zero-sum perspective.
Monsoon systems around the world are governed by the so-called moisture-advection feedback. Here we show that, in a minimal conceptual model, this feedback implies a critical threshold with respect to the atmospheric specific humidity q_o over the ocean adjacent to the monsoon region. If q_o falls short of this critical value q_o^c, monsoon rainfall over land cannot be sustained. Such a case could occur if evaporation from the ocean was reduced, e.g. due to low sea surface temperatures. Within the restrictions of the conceptual model, we estimate q_o^c from present-day reanalysis data for four major monsoon systems, and demonstrate how this concept can help understand abrupt variations in monsoon strength on orbital timescales as found in proxy records.
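The threshold behaviour can be illustrated with a deliberately crude toy model; this is not the authors' model, and the closure W = a*P - b and the parameter values are invented for illustration only. Precipitation P is fed by moisture advection, P = q_o*W, while the advecting wind W strengthens with the latent heating released by P. Eliminating W shows that a self-sustained P > 0 branch exists only when q_o exceeds a critical value q_c = 1/a.

```python
def monsoon_precip(q_o: float, a: float = 2.0, b: float = 1.0) -> float:
    """Toy moisture-advection feedback (illustrative assumptions only).

    Assumed closure: P = q_o * W (advective moisture supply) and
    W = a * P - b (winds driven by latent heating, with an offset).
    Eliminating W gives P = b * q_o / (a * q_o - 1), which is positive
    only when q_o exceeds the critical humidity q_c = 1 / a.
    """
    q_c = 1.0 / a
    if q_o <= q_c:
        return 0.0  # feedback cannot sustain rainfall: monsoon "off" state
    return b * q_o / (a * q_o - 1.0)
```

With a = 2 the threshold sits at q_c = 0.5; for any q_o at or below it the only consistent solution is P = 0, mimicking the abrupt loss of monsoon rainfall that the abstract relates to proxy records.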
National Action Plans (NAPs) have been increasingly adopted worldwide after the Vienna Declaration in 1993, which urged states to consider the improvement and promotion of human rights. In this paper, we discuss their usefulness and success by analysing the challenges presented during NAP processes as well as the benefits this set of actions entails: the challenges for their implementation outweigh the actual benefits. Nevertheless, NAPs have great potential. Based on new research, we elaborate a set of recommendations for improving the design and implementation of national action planning. In order to bring a NAP into practice effectively, we consider it crucial to plan for and analyse every state's local circumstances in detail. The latter is important, since the implementation of a concrete set of actions is intended to directly transform and improve the local living conditions of the people. In a long-term perspective, we defend the benefit of NAP implementation for complying with obligations set out by human rights treaties.
Nonlinear force-free field (NLFFF) models are thought to be viable tools for investigating the structure, dynamics, and evolution of the coronae of solar active regions. In a series of NLFFF modeling studies, we have found that NLFFF models are successful in application to analytic test cases, and relatively successful when applied to numerically constructed Sun-like test cases, but they are less successful in application to real solar data. Different NLFFF models have been found to have markedly different field line configurations and to provide widely varying estimates of the magnetic free energy in the coronal volume, when applied to solar data. NLFFF models require consistent, force-free vector magnetic boundary data. However, vector magnetogram observations sampling the photosphere, which is dynamic and contains significant Lorentz and buoyancy forces, do not satisfy this requirement, thus creating several major problems for force-free coronal modeling efforts. In this paper, we discuss NLFFF modeling of NOAA Active Region 10953 using Hinode/SOT-SP, Hinode/XRT, STEREO/SECCHI-EUVI, and SOHO/MDI observations, and in the process illustrate three such issues we judge to be critical to the success of NLFFF modeling: (1) vector magnetic field data covering larger areas are needed so that more electric currents associated with the full active regions of interest are measured, (2) the modeling algorithms need a way to accommodate the various uncertainties in the boundary data, and (3) a more realistic physical model is needed to approximate the photosphere-to-corona interface in order to better transform the forced photospheric magnetograms into adequate approximations of nearly force-free fields at the base of the corona. We make recommendations for future modeling efforts to overcome these as yet unsolved problems.
Transverse dispersion, or tracer spreading orthogonal to the mean flow direction, which is relevant e.g. for quantifying biodegradation of contaminant plumes or mixing of reactive solutes, has been studied in the literature less than the longitudinal one. Inferring transverse dispersion coefficients from field experiments is a difficult and error-prone task, requiring a spatial resolution of solute plumes which is not easily achievable in applications. In the absence of field data, it is a questionable common practice to set transverse dispersivities as a fraction of the longitudinal one, with the ratio 1/10 being the most prevalent. We collected estimates of field-scale transverse dispersivities from existing publications and explored possible scale relationships as guidance criteria for applications. Our investigation showed that a large number of estimates available in the literature are of low reliability and should be discarded from further analysis. The remaining reliable estimates are formation-specific, span three orders of magnitude and do not show any clear scale-dependence on the plume traveled distance. The ratios with the longitudinal dispersivity are also site-specific and vary widely. The reliability of transverse dispersivities depends significantly on the type of field experiment and method of data analysis. In applications where transverse dispersion plays a significant role, inference of transverse dispersivities should be part of site characterization, with the transverse dispersivity estimated as an independent parameter rather than related heuristically to the longitudinal dispersivity.
The Upper Cretaceous (Campanian-Maastrichtian) bioclastic wedge of the Orfento Formation in the Montagna della Maiella, Italy, is compared to newly discovered contourite drifts in the Maldives. Like the drift deposits in the Maldives, the Orfento Formation fills a channel and builds a Miocene delta-shaped and mounded sedimentary body in the basin that is similar in size to the approximately 350 km(2) large coarse-grained bioclastic Miocene delta drifts in the Maldives. The composition of the bioclastic wedge of the Orfento Formation is also exclusively bioclastic debris sourced from the shallow-water areas and reworked clasts of the Orfento Formation itself. In the near mud-free succession, age-diagnostic fossils are sparse. The depositional textures vary from wackestone to float-rudstone and breccia/conglomerates, but rocks with grainstone and rudstone textures are the most common facies. In the channel, lensoid convex-upward breccias, cross-cutting channelized beds and thick grainstone lobes with abundant scours indicate alternating erosion and deposition from a high-energy current. In the basin, the mounded sedimentary body contains lobes with a divergent progradational geometry. The lobes are built by decametre thick composite megabeds consisting of sigmoidal clinoforms that typically have a channelized topset, a grainy foreset and a fine-grained bottomset with abundant irregular angular clasts. Up to 30 m thick channels filled with intraformational breccias and coarse grainstones pinch out downslope between the megabeds. In the distal portion of the wedge, stacked grainstone beds with foresets and reworked intraclasts document continuous sediment reworking and migration. The bioclastic wedge of the Orfento Formation has been variously interpreted as a succession of sea-level controlled slope deposits, a shoaling shoreface complex, or a carbonate tidal delta. 
Current-controlled delta drifts in the Maldives, however, offer a new interpretation because of their similarity in architecture and composition. These similarities include: (i) a feeder channel opening into the basin; (ii) an excavation moat at the exit of the channel; (iii) an overall mounded geometry with an apex that is in shallower water depth than the source channel; (iv) progradation of stacked lobes; (v) channels that pinch out in a basinward direction; and (vi) smaller channelized intervals that are arranged in a radial pattern. As a result, the Upper Cretaceous (Campanian-Maastrichtian) bioclastic wedge of the Orfento Formation in the Montagna della Maiella, Italy, is here interpreted as a carbonate delta drift.
For each compact subset K of the complex plane C which does not surround zero, the Riemann surface Sζ of the Riemann zeta function restricted to the critical half-strip 0 < Re s < 1/2 contains infinitely many schlicht copies of K lying ‘over’ K. If Sζ also contains at least one such copy for some K which surrounds zero, then the Riemann hypothesis fails.
Children's participation in legal proceedings affecting them personally has been gaining importance. So far, a primary research concern has been how children experience their participation in court proceedings. However, little is known about the child's voice itself: Are children able to clearly express their wishes, and if so, what do they say in child protection cases? In this study, we extracted information about children's statements from court file data of 220 child protection cases in Germany. We found 182 children were asked about their wishes. The majority of the statements found came either from reports of the guardians ad litem or from judicial records of the child hearings. Using content analysis, three main aspects of the statements were extracted: wishes concerning the main place of residence, wishes about whom to have contact with or not, and children granting decision-making authority to someone else. Children's main focus was on their parents, but others (e.g., relatives and foster care providers) were also mentioned. Intercoder agreement was substantial. Making sure that child hearings are as informative as possible is in the child's best interest. Therefore, the categories developed herein might help professionals ask questions that are more precisely relevant to the child.
The literature contains a sizable number of publications where weather types are used to decompose climate shifts or trends into contributions of frequency and mean of those types. They are all based on the product rule, that is, a transformation of a product of sums into a sum of products, the latter providing the decomposition. While there is nothing to argue about the transformation itself, its interpretation as a climate shift or trend decomposition is bound to fail. While the case of a climate shift may be viewed as an incomplete description of a more complex behaviour, trend decomposition indeed produces bogus trends, as demonstrated by a synthetic counterexample with well-defined trends in type frequency and mean. Consequently, decompositions based on that transformation, be it for climate shifts or trends, must not be used.
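The product-rule transformation at issue can be written out on a small hypothetical example (all numbers invented): the shift in a type-weighted climate mean splits into a frequency term, a within-type mean term, and a cross term. The identity is exact only when the cross term is kept, which makes concrete the abstract's point that reading the first two terms alone as a complete decomposition of a climate shift is misleading.

```python
import numpy as np

# Hypothetical two-period climate with three weather types.
f1 = np.array([0.5, 0.3, 0.2])     # type frequencies, period 1 (sum to 1)
f2 = np.array([0.4, 0.4, 0.2])     # type frequencies, period 2
x1 = np.array([10.0, 15.0, 20.0])  # within-type means, period 1
x2 = np.array([11.0, 14.0, 22.0])  # within-type means, period 2

# Overall climate shift between the two periods.
total_shift = np.sum(f2 * x2) - np.sum(f1 * x1)

# Product-rule decomposition of that shift:
freq_term = np.sum((f2 - f1) * x1)          # attributed to frequency changes
mean_term = np.sum(f1 * (x2 - x1))          # attributed to within-type mean changes
cross_term = np.sum((f2 - f1) * (x2 - x1))  # interaction term, often silently dropped

# total_shift == freq_term + mean_term + cross_term holds exactly only
# with the cross term included.
```

In this invented example the frequency and mean terms alone (0.5 + 0.6) overstate the actual shift (0.9) because the interaction term is negative, illustrating why such decompositions are at best incomplete descriptions.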
The manuscript describes the phytochemical investigation of the roots, leaves and stem bark of Millettia lasiantha resulting in the isolation of twelve compounds including two new isomeric isoflavones, lascoumestan and lascoumaronochromone. The structures of the new compounds were determined using different spectroscopic techniques.
This paper addresses semantic/pragmatic variability of tag questions in German and makes three main contributions. First, we document the prevalence and variety of question tags in German across three different types of conversational corpora. Second, by annotating question tags according to their syntactic and semantic context, discourse function, and pragmatic effect, we demonstrate the existing overlap and differences between the individual tag variants. Finally, we distinguish several groups of question tags by identifying the factors that influence the speakers’ choices of tags in the conversational context, such as clause type, function, speaker/hearer knowledge, as well as conversation type and medium. These factors provide the limits of variability by constraining certain question tags in German against occurring in specific contexts or with individual functions.
Eclipsing systems of massive stars allow one to explore the properties of their components in great detail. We perform a multi-wavelength, non-LTE analysis of the three components of the massive multiple system delta Ori A, focusing on the fundamental stellar properties, stellar winds, and X-ray characteristics of the system. The primary's distance-independent parameters turn out to be characteristic for its spectral type (O9.5 II), but usage of the Hipparcos parallax yields surprisingly low values for the mass, radius, and luminosity. Consistent values follow only if delta Ori lies at about twice the Hipparcos distance, in the vicinity of the sigma-Orionis cluster. The primary and tertiary dominate the spectrum and leave the secondary only marginally detectable. We estimate the V-band magnitude difference between primary and secondary to be ΔV ≈ 2.8 mag. The inferred parameters suggest that the secondary is an early B-type dwarf (≈ B1 V), while the tertiary is an early B-type subgiant (≈ B0 IV). We find evidence for rapid turbulent velocities (~200 km s⁻¹) and wind inhomogeneities, partially optically thick, in the primary's wind. The bulk of the X-ray emission likely emerges from the primary's stellar wind (log L_X/L_bol ≈ -6.85), initiating close to the stellar surface at R_0 ~ 1.1 R_*. Accounting for clumping, the mass-loss rate of the primary is found to be log Ṁ ≈ -6.4 (M_⊙ yr⁻¹), which agrees with hydrodynamic predictions and provides a consistent picture across the X-ray, UV, optical, and radio spectral domains.
We report on both high-precision photometry from the Microvariability and Oscillations of Stars (MOST) space telescope and ground-based spectroscopy of the triple system delta Ori A, consisting of a binary O9.5II+early-B (Aa1 and Aa2) with P = 5.7 days, and a more distant tertiary (O9 IV, P > 400 years). These data were collected in concert with X-ray spectroscopy from the Chandra X-ray Observatory. Thanks to continuous coverage for three weeks, the MOST light curve reveals clear eclipses between Aa1 and Aa2 for the first time in non-phased data. From the spectroscopy, we have a well-constrained radial velocity (RV) curve of Aa1. While we are unable to recover RV variations of the secondary star, we are able to constrain several fundamental parameters of this system and determine an approximate mass of the primary using apsidal motion. We also detected second-order modulations at 12 separate frequencies with spacings indicative of tidally influenced oscillations. These spacings have never been seen in a massive binary, making this system one of only a handful of such binaries that show evidence for tidally induced pulsations.
We present time-resolved and phase-resolved variability studies of an extensive X-ray high-resolution spectral data set of the delta Ori Aa binary system. The four observations, obtained with Chandra ACIS HETGS, have a total exposure time of approximately 479 ks and provide nearly complete binary phase coverage. Variability of the total X-ray flux in the 5-25 Å range is confirmed, with a maximum amplitude of about ±15% within a single ≈125 ks observation. Periods of 4.76 and 2.04 days are found in the total X-ray flux, as well as an apparent overall increase in the flux level throughout the nine-day observational campaign. Using 40 ks contiguous spectra derived from the original observations, we investigate the variability of emission line parameters and ratios. Several emission lines are shown to be variable, including S XV, Si XIII, and Ne IX. For the first time, variations of the X-ray emission line widths as a function of the binary phase are found in a binary system, with the smallest widths at φ = 0.0, when the secondary delta Ori Aa2 is at the inferior conjunction. Using 3D hydrodynamic modeling of the interacting winds, we relate the emission line width variability to the presence of a wind cavity created by a wind-wind collision, which is effectively void of embedded wind shocks and is carved out of the X-ray-producing primary wind, thus producing phase-locked X-ray variability.
We present an overview of four deep phase-constrained Chandra HETGS X-ray observations of delta Ori A. Delta Ori A is actually a triple system that includes the nearest massive eclipsing spectroscopic binary, delta Ori Aa, the only such object that can be observed with little phase-smearing with the Chandra gratings. Since the fainter star, delta Ori Aa2, has a much lower X-ray luminosity than the brighter primary (delta Ori Aa1), delta Ori Aa provides a unique system with which to test the spatial distribution of the X-ray emitting gas around delta Ori Aa1 via occultation by the photosphere of, and wind cavity around, the X-ray dark secondary. Here we discuss the X-ray spectrum and X-ray line profiles for the combined observation, having an exposure time of nearly 500 ks and covering nearly the entire binary orbit. The companion papers discuss the X-ray variability seen in the Chandra spectra, present new space-based photometry and ground-based radial velocities obtained simultaneously with the X-ray data to better constrain the system parameters, and model the effects of X-rays on the optical and UV spectra. We find that the X-ray emission is dominated by embedded wind shock emission from star Aa1, with little contribution from the tertiary star Ab or the shocked gas produced by the collision of the wind of Aa1 against the surface of Aa2. We find a similar temperature distribution to previous X-ray spectrum analyses. We also show that the line half-widths are about 0.3-0.5 times the terminal velocity of the wind of star Aa1. We find a strong anti-correlation between line widths and the line excitation energy, which suggests that longer-wavelength, lower-temperature lines form farther out in the wind. Our analysis also indicates that the ratio of the intensities of the strong and weak lines of Fe XVII and Ne X are inconsistent with model predictions, which may be an effect of resonance scattering.