In Arabidopsis thaliana and Oryza sativa, two types of PI 4-kinases (PI4Ks) have been isolated and functionally characterized. The alpha-type PI4Ks (~220 kDa) contain a PH domain, which is lacking in the beta-type PI4Ks (~120 kDa). Beta-type PI4Ks, exemplified by Arabidopsis AtPI4Kβ and rice OsPI4K2, contain a highly charged repetitive segment designated the PPC (Plant PI4K Charged) region, a unique domain so far found only in plant beta-type PI4Ks. The PPC region is ~300 amino acids long and harbors 11 (AtPI4Kβ) and 14 (OsPI4K2) repeats, respectively, of a 20-aa motif. Studies employing a modified yeast-based "Sequence of Membrane-Targeting Detection" system demonstrate that the PPC(OsPI4K2) region, as well as the first 8 and last 6 repetitive motifs within it, can target fusion proteins to the plasma membrane. Further analysis of transiently expressed GFP fusion proteins in onion epidermal cells showed that the PPC(OsPI4K2) region alone, as well as the region containing repetitive motifs 1-8, directed GFP to the plasma membrane, while regions containing fewer repetitive motifs (6, 4, 2, or a single motif) led to predominantly intracellular localization. Agrobacterium-mediated transient expression of the PPC-GFP fusion protein further confirms the membrane-targeting capacity of the PPC region. In addition, the predominant plasma-membrane localization of AtPI4Kβ was mediated by the PPC region. Recombinant PPC peptide expressed in E. coli strongly binds phosphatidic acid, PI, and PI4P, but not phosphatidylcholine, PI5P, or PI(4,5)P2 in vitro, providing insights into potential mechanisms regulating subcellular localization and lipid binding of the plant beta-type PI4Ks.
Gamma-ray bursts (GRBs) are some of the Universe’s most enigmatic and exotic events. However, at energies above 10 GeV their behaviour remains largely unknown. Although space-based telescopes such as the Fermi-LAT have been able to detect GRBs in this energy range, their photon statistics are limited by the small detector size. Such limitations are not present in ground-based gamma-ray telescopes such as the H.E.S.S. experiment, which has now entered its second phase with the addition of a large 600 m² telescope at the centre of the array. Such a large telescope allows H.E.S.S. to access the sub-100 GeV energy range while still maintaining a large effective collection area, helping to potentially probe the short-timescale emission of these events.
We present a description of the H.E.S.S. GRB observation programme, summarising the performance of the rapid GRB repointing system and the conditions under which GRB observations are initiated. Additionally, we report on the GRB follow-ups made during the 2014-15 observation campaigns.
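Since the expected photon count scales linearly with the effective collection area, a back-of-the-envelope sketch shows why a large ground-based collection area matters for photon statistics. The flux, areas, and burst window below are invented for illustration; they are not H.E.S.S. or Fermi-LAT specifications:

```python
def expected_counts(flux, area_m2, time_s):
    """Expected photon count N = F * A * T for a constant flux F
    [photons / m^2 / s]; all numbers below are invented for illustration."""
    return flux * area_m2 * time_s

flux, t_obs = 1e-6, 100.0                      # hypothetical burst flux and window
n_space  = expected_counts(flux, 1.0, t_obs)   # ~1 m^2 space-based detector
n_ground = expected_counts(flux, 1e5, t_obs)   # ~1e5 m^2 Cherenkov light pool
print(n_space < 1 < n_ground)  # True: only the larger collection area yields photons
```

The same flux that leaves a small orbital detector with a fraction of a photon gives a ground-based array a usable handful, which is the statistical argument the abstract makes.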
The German Enlightenment
(2017)
The term Enlightenment (or Aufklärung) remains heavily contested. Even when historians delimit the remit of the concept, assigning it to a particular historical period rather than to an intellectual or moral programme, the public resonance of the Enlightenment remains high and problematic—especially when equated in an essentialist manner with modernity or some core values of ‘the West’. This Forum has been convened to discuss recent research on the Enlightenment in Germany, different views of the term and its ideological use in public discourse outside academia (and sometimes within it).
Massive Open Online Courses (MOOCs) have left their mark on the face of education in recent years. At the Hasso Plattner Institute (HPI) in Potsdam, Germany, we are actively developing a MOOC platform, which provides our research with a plethora of e-learning topics, such as learning analytics, automated assessment, peer assessment, teamwork, online proctoring, and gamification. We run several instances of this platform. On openHPI, we provide our own courses from within the HPI context. Further instances are openSAP, openWHO, and mooc.HOUSE, which is the smallest of these platforms, targeting customers with a less extensive course portfolio. In 2013, we started to work on the gamification of our platform. By now, we have implemented about two thirds of the features that we initially evaluated as useful for our purposes. About a year ago, we activated the implemented gamification features on mooc.HOUSE. Before activating the features on openHPI as well, we examined and re-evaluated our initial considerations based on the data collected so far and on the changes in other contexts of our platforms.
In an effort to explain the formation of a narrow third radiation belt at ultra-relativistic energies detected during a solar storm in September 2012 (ref. 1), Mann et al. (ref. 2) present simulations from which they conclude it arises from a process of outward radial diffusion alone, without the need for additional loss processes from higher frequency waves. The comparison of observations with the model in Figs 2 and 3 of their Article clearly shows that even with strong radial diffusion rates, the model predicts a third belt near L* = 3 that is twice as wide as observed and approximately an order of magnitude more intense. We therefore disagree with their interpretation that “the agreement between the absolute fluxes from the model and those observed by REPT [the Relativistic Electron Proton Telescope] shown on Figs 2 and 3 is excellent.”
Previous studies3 have shown that outward radial diffusion plays a very important role in the dynamics of the outer belt and is capable of explaining rapid reductions in the electron flux. It has also been shown that it can produce remnant belts (Fig. 2 of a long-term simulation study4). However, radial diffusion alone cannot explain the formation of the narrow third belt at multi-MeV during September 2012. An additional loss mechanism is required.
Higher radial diffusion rates cannot improve the comparison of the model presented by Mann et al. with observations. A further increase in the radial diffusion rates (reported in Fig. 4 of the Supplementary Information of ref. 2) results in an overestimation of the outer-belt fluxes by up to three orders of magnitude at an energy of 3.4 MeV.
Observations at 2 MeV, where belts show only a two-zone structure, were not presented by Mann et al. Moreover, simulations of electrons with energies below 2 MeV with the same diffusion rates and boundary conditions used by the authors would probably produce very strong depletions down to L = 3–3.5, where L is radial distance from the centre of the Earth to the given field line in the equatorial plane. Observations do not show a non-adiabatic loss below L ∼ 4.5 for 2 MeV. Such different dynamics between 2 MeV and above 4 MeV at around L = 3.5 are another indication that particles are scattered by electromagnetic ion cyclotron (EMIC) waves that affect only energies above a certain threshold.
Observations of the phase space density (PSD) provide additional evidence for the local loss of electrons. Around L* = 3.5–4, PSD shows a significant decrease, by an order of magnitude, starting in the afternoon of 3 September (Fig. 1a), while PSD above L* = 4 is increasing. The minimum in PSD between L* = 3.5 and 4 continues to decrease until 4 September. This evolution demonstrates that the loss is not produced by outward diffusion. Radial diffusion cannot produce deepening minima, as it works to smooth gradients. Just as growing peaks in PSD show the presence of localized acceleration5, deepening minima show the presence of localized loss.
Figure 1: Time evolution of radiation profiles in electron PSD at relativistic and ultra-relativistic energies.
a, Similar to Supplementary Fig. 3 of ref. 2, but using the TS07D model (ref. 10) and for μ = 2,500 MeV G−1, K = 0.05 RE G^0.5 (where RE is the radius of the Earth). b, Similar to Supplementary Fig. 3 of ref. 2, but using the TS07D model and for μ = 700 MeV G−1, corresponding to MeV energies in the heart of the belt. The minimum in PSD in the heart of the multi-MeV electron radiation belt between 3.5 and 4 RE deepens from the afternoon of 3 September to 5 September, clearly showing that the narrow remnant belt at multi-MeV energies below 3.5 RE is produced by local loss.
The minimum in the outer boundary is reached on the evening of 2 September. After that, the outer boundary moves up, while the minimum decreases by approximately an order of magnitude, clearly showing that this main decrease cannot be explained by outward diffusion, and requires additional loss processes. The analysis of profiles of PSD is a standard tool used, for example, in the study about electron acceleration5 and routinely used by the entire Van Allen Probes team. In the Supplementary Information, we show that this analysis is validated by using different magnetic field models. The Supplementary Information also shows that measurements are above background noise.
Deepening minima at multi-MeV during the times when the boundary flux increases are clearly seen in Fig. 1a. They show that there must be localized loss, as radial diffusion cannot produce a minimum that becomes lower with time. At lower energies of 1–2 MeV, which corresponds to lower values of the first adiabatic invariant μ (Fig. 1b), the profiles are monotonic between L* = 3–3.5, consistent with the absence of scattering by EMIC waves that affect only electrons above a certain energy threshold6,7,8,9.
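For readers outside the field, the first adiabatic invariant μ used above has the standard definition (this is textbook magnetospheric physics, not specific to ref. 2):

```latex
\mu = \frac{p_{\perp}^{2}}{2\, m_{0} B}
```

where \(p_\perp\) is the electron momentum perpendicular to the magnetic field, \(m_0\) the electron rest mass and \(B\) the local magnetic field strength; expressing the energy-like numerator in MeV and \(B\) in gauss gives the MeV G−1 units quoted for Fig. 1.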
In summary, the results of the modelling and observations presented by Mann et al. do not lend support to the claim of explaining the dynamics of the ultra-relativistic third Van Allen radiation belt in terms of an outward radial diffusion process alone. While the outward radial diffusion driven by the loss to the magnetopause2 is certainly operating during this storm, there is compelling observational and modelling2,6 evidence that shows that very efficient localized electron loss operates during this storm at multi-MeV energies, consistent with localized loss produced by EMIC waves.
The "Bachelor Project"
(2019)
One of the challenges of educating the next generation of computer scientists is teaching them to become team players who are able to communicate and interact not only with different IT systems but also with coworkers and customers from a non-IT background. The “bachelor project” is a format based on teamwork and close collaboration with selected industry partners. The authors have hosted some of these teams since the spring term 2014/15. In the paper at hand, we explain and discuss this concept and evaluate its success based on students' evaluations and reports. Furthermore, the technology stack used by the teams is evaluated to understand how self-organized students work in IT-related projects. We show that, and why, the bachelor project is the most successful educational format in the perception of the students, and how these positive results can be further improved by the mentors.
An essential, respected, and critical aspect of the modern practice of science and scientific publishing is peer review. The process of peer review facilitates best practices in scientific conduct and communication, ensuring that the manuscripts published are as accurate, valuable, and clearly communicated as possible. The over 216 papers published in Tectonics in 2018 benefit from the time, effort, and expertise of our reviewers, who have provided thoughtfully considered advice on each manuscript. This role is critical to advancing our understanding of the evolution of the continents and their margins, as these reviews lead to even clearer and higher-quality papers. In 2018, the over 443 papers submitted to Tectonics were the beneficiaries of more than 1,010 reviews provided by 668 members of the tectonics community and related disciplines. To everyone who has volunteered their time and intellect to peer reviewing, thank you for helping Tectonics and all other AGU Publications provide the best science possible.
An essential, respected, and critical aspect of the modern practice of science and scientific publishing is peer review. The process of peer review facilitates best practices in scientific conduct and communication, ensuring that the manuscripts published are as accurate, valuable, and clearly communicated as possible. The over 152 papers published in Tectonics in 2017 benefit from the time, effort, and expertise of our reviewers, who have provided thoughtfully considered advice on each manuscript. This role is critical to advancing our understanding of the evolution of the continents and their margins, as these reviews lead to even clearer and higher-quality papers. In 2017, the over 423 papers submitted to Tectonics were the beneficiaries of more than 786 reviews provided by 562 members of the tectonics community and related disciplines. To everyone who has volunteered their time and intellect to peer reviewing, thank you for helping Tectonics and all other AGU Publications provide the best science possible.
Teen dating violence
(2021)
Subject-oriented learning
(2019)
The transformation to a digitized company changes not only the work but also the social context for employees, and requires, inter alia, new knowledge and skills from them. Additionally, individual action problems arise. This contribution proposes the subject-oriented learning theory, in which the employees' action problems are the starting point of training activities in learning factories. The subject-oriented learning theory is exemplified, and its advantages for vocational training in learning factories are pointed out both theoretically and practically. In particular, the individual action problems of learners and the infrastructure are emphasized as starting points for learning processes and competence development.
Receiver functions are a good tool for investigating the seismotectonic structure beneath a seismic station. In this study we apply the method to stations situated on or near Sumatra to find constraints for a more detailed velocity model, which should improve earthquake localisation. We estimate shallow Moho depths (~21 km) close to the trench and depths of ~30 km at greater distances. First evidence for a slab dip direction of ~60° is provided. Receiver functions were calculated for 20 stations and a total of 110 earthquakes in the distance range between 30° and 95° from the receiver. However, the number of receiver functions per station varies strongly, as it depends on the installation date, the signal-to-noise ratio of the station, and the reliability of the acquisition.
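As a rough illustration of the underlying computation, a receiver function can be sketched as a frequency-domain water-level deconvolution of the radial by the vertical component. This is a common textbook recipe, not the study's actual processing chain (rotation, Gaussian filtering and quality control are omitted):

```python
import numpy as np

def receiver_function(radial, vertical, water_level=0.01):
    """Frequency-domain water-level deconvolution of radial by vertical
    (a common textbook recipe; real processing adds rotation and a
    Gaussian low-pass, which are omitted here)."""
    n = len(radial)
    R = np.fft.rfft(radial)
    Z = np.fft.rfft(vertical)
    denom = (Z * np.conj(Z)).real
    denom = np.maximum(denom, water_level * denom.max())  # stabilise the division
    return np.fft.irfft(R * np.conj(Z) / denom, n)

# Synthetic check: vertical is a spike at t = 0, radial repeats it at the
# P-to-S conversion delay, so the receiver function should peak there.
n, delay = 256, 40
z = np.zeros(n); z[0] = 1.0
r = np.zeros(n); r[delay] = 0.5
rf = receiver_function(r, z)
print(int(np.argmax(rf)))  # 40 — the conversion delay in samples
```

The peak position of the receiver function is what maps to interface depth (e.g. the Moho) once a velocity model is assumed.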
Twenty-four scientists met at Aschauhof, Altenhof, Germany, to discuss the associations between child growth and development, and nutrition, health, environment and psychology. Meta-analyses of body height, height variability and household inequality in historic and modern growth studies published since 1794 highlighted the enormously flexible patterns of child and adolescent height and weight increments throughout history, which depend not only on genetics, prenatal development, nutrition, health and economic circumstances but also reflect social interactions. A Quality of Life in Short Stature Youth Questionnaire was presented for the cross-cultural assessment of health-related quality of life in children. Changes in child body proportions in recent history, the relation between height and longevity in historic Dutch samples, and measures of body height in skeletal remains were among the topics of this meeting. Bayesian approaches and Monte Carlo simulations offer new statistical tools for the study of human growth.
Structural health monitoring (SHM) activities are of primary importance for managing transport infrastructure; however, most SHM methodologies are based on point sensors, which have limitations in terms of spatial positioning requirements, development cost and measurement range. This paper describes progress on the SENSKIN EC project, whose objective is to develop a dielectric-elastomer and microelectronics-based sensor, formed from a large, highly extensible capacitance-sensing membrane supported by advanced microelectronic circuitry, for monitoring transport-infrastructure bridges. Such a sensor can provide spatial measurements of strain in excess of 10%. The sensor, along with the data-acquisition module, the communication module and the power electronics, is integrated into a compact unit, the SENSKIN device, which is energy-efficient, requires simple signal processing and is easy to install on various surface types. In terms of communication, SENSKIN devices interact with each other to form the SENSKIN system: a fully distributed and autonomous wireless sensor network that is able to self-monitor. The SENSKIN system utilizes Delay-/Disruption-Tolerant Networking technologies to ensure that strain measurements are received by the base station even under extreme conditions where normal communications are disrupted. This paper describes the architecture of the SENSKIN system and the development and testing of the first SENSKIN prototype sensor, the data-acquisition system and the communication system.
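To illustrate how a capacitance-sensing membrane can report strain, here is a minimal sketch under idealized assumptions (an incompressible dielectric under uniaxial stretch with constrained width, for which C/C0 = λ²); the actual SENSKIN calibration is not described here and may differ:

```python
import math

def strain_from_capacitance(c, c0):
    """Engineering strain from a capacitance ratio for an idealized
    incompressible dielectric-elastomer membrane under uniaxial stretch
    with constrained width, where C/C0 = lambda**2 (illustrative model,
    not the SENSKIN sensor's actual calibration)."""
    return math.sqrt(c / c0) - 1.0

# In this model, a 21% capacitance increase corresponds to 10% strain:
print(round(strain_from_capacitance(1.21, 1.0), 3))  # 0.1
```

The monotone capacitance–stretch relation is what lets such a membrane cover the large (>10%) strain range quoted above, far beyond what foil gauges tolerate.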
Stress and bone health
(2019)
The present work is part of a collaborative H2020 European funded research project called SENSKIN, which aims to improve Structural Health Monitoring (SHM) for transport infrastructure through the development of an innovative monitoring and management system for bridges based on a novel, inexpensive, skin-like sensor. The integrated SENSKIN technology will be implemented on steel and concrete bridges, and tested, field-evaluated and benchmarked in an actual bridge environment against a conventional health-monitoring solution developed by Mistras Group Hellas. The main objective of the present work is to implement an autonomous, fully functional strain-monitoring system based on commercially available off-the-shelf components, which will be used to enable a direct comparison between the performance of the innovative SENSKIN sensors and the conventional strain sensors commonly used for structural monitoring of bridges. For this purpose, the mini Structural Monitoring System (mini SMS) of Physical Acoustics Corporation, a comprehensive data-acquisition unit designed specifically for long-term unattended operation in outdoor environments, was selected. To complete the conventional system, appropriate foil-type strain sensors were selected, driven by special conditioners manufactured by Mistras Group. A comprehensive description of the strain-monitoring system and its peripheral components is provided in this paper. To evaluate the integrated system's performance and the effect of various parameters on the long-term behavior of the sensors, several steel test pieces instrumented with different strain-sensor configurations were prepared and tested in both laboratory and field ambient conditions. Furthermore, loading tests were performed to validate the response of the system in monitoring the strains developed in steel beam elements subjected to bending regimes.
Representative results obtained from the above experimental tests have been included in this paper as well.
Rising rents?
(2022)
Against the background of rapidly rising rents in large German cities, we use a quantitative model for Berlin in a new study to examine the effects of gentrification, and of policy countermeasures, on different income groups. We find that rent control (such as the Berlin "Mietendeckel") harms all households, but especially poorer ones. Other measures, such as new construction or direct subsidies, perform better.
JavaScript is the most popular programming language for web applications. Static analysis of JavaScript applications is highly challenging due to the language's dynamic constructs and event-driven asynchronous executions, which also give rise to many security-related bugs. Several static analysis tools to detect such bugs exist; however, research has not yet reported much on the precision and scalability trade-off of these analyzers. As a further obstacle, JavaScript programs structured as Node.js modules need to be collected for analysis, but existing bundlers are either specific to their respective analysis tools or not particularly suitable for static analysis.
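The module-collection step mentioned above can be illustrated with a toy dependency-graph builder (sketched in Python for consistency with the other examples here; the regex matching is a deliberate simplification — real bundlers resolve file paths and parse the AST rather than pattern-matching source text):

```python
import re

# Matches require('...') / require("...") calls in source text.
REQUIRE_RE = re.compile(r"require\(\s*['\"]([^'\"]+)['\"]\s*\)")

def module_graph(sources):
    """Map each module to the modules it requires (toy version of the
    module-collection step; real bundlers resolve paths and parse ASTs
    instead of pattern-matching)."""
    return {name: REQUIRE_RE.findall(code) for name, code in sources.items()}

graph = module_graph({
    "app.js": "const db = require('./db'); const http = require('http');",
    "db.js":  "module.exports = {};",
})
print(graph["app.js"])  # ['./db', 'http']
```

A static analyzer would then traverse this graph to decide which files form one analyzable program unit.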
Stable covalently photo-cross-linked porous poly(ionic liquid) membrane with gradient pore size
(2018)
Porous polyelectrolyte membranes stable in a highly ionic environment are obtained by covalent crosslinking of an imidazolium-based poly(ionic liquid). The crosslinking reaction involves UV light-induced thiol-ene (click) chemistry, and the phase separation occurring during the crosslinking step generates a fully interconnected porous structure in the membrane. The pores are on the micrometer scale, and the membrane shows a gradient of pore size across its cross-section. The membrane can separate polystyrene latex particles of different sizes and undergoes actuation in contact with acetone due to its asymmetric porous structure.
SpringFit
(2019)
Joints are crucial to laser cutting, as they allow making three-dimensional objects; mounts are crucial because they allow embedding technical components, such as motors. Unfortunately, mounts and joints tend to fail when trying to fabricate a model on a different laser cutter or from a different material. The reason for this lies in the way mounts and joints hold objects in place, which is by forcing them into slightly smaller openings. Such "press fit" mechanisms are unfortunately susceptible to the small changes in diameter that occur when switching to a machine that removes more or less material ("kerf"), as well as to the changes in stiffness that occur when switching to a different material. We present a software tool called SpringFit that resolves this problem by replacing the problematic press-fit-based mounts and joints with what we call cantilever-based mounts and joints. A cantilever spring is simply a long, thin piece of material that pushes against the object to be held. Unlike press fits, cantilever springs are robust against variations in kerf and material; they can even handle very high variations, simply by using longer springs. SpringFit converts models in the form of 2D cutting plans by replacing all contained mounts, notch joints, finger joints, and T-joints. In our technical evaluation, we used SpringFit to convert 14 models downloaded from the web.
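The claim that longer springs tolerate higher variations follows from standard beam theory: a cantilever's tip stiffness falls with the cube of its length, so a kerf-induced change in deflection produces a much smaller change in holding force. A minimal sketch (modulus and dimensions are illustrative, not values from the paper):

```python
def cantilever_stiffness(E, w, t, L):
    """Tip stiffness of a rectangular cantilever, k = 3*E*I / L**3 with
    I = w * t**3 / 12 (standard beam theory; the modulus and dimensions
    used below are illustrative, not taken from the paper)."""
    I = w * t ** 3 / 12.0
    return 3.0 * E * I / L ** 3

# Acrylic-like modulus, 4 mm wide, 1 mm thick spring at two lengths:
k_short = cantilever_stiffness(3.2e9, 4e-3, 1e-3, 10e-3)  # 10 mm spring
k_long  = cantilever_stiffness(3.2e9, 4e-3, 1e-3, 20e-3)  # 20 mm spring
print(round(k_short / k_long))  # 8: doubling the length softens it eightfold
```

Because holding force is roughly stiffness times deflection, an eightfold softer spring turns the same kerf variation into an eightfold smaller force error, which is why "simply using longer springs" works.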
Despite ample research, understanding plant spread and predicting plants' ability to track projected climate changes remain a formidable challenge. We modelled the spread of North American wind-dispersed trees in current and future (c. 2060) conditions, accounting for variation in 10 key dispersal, demographic and environmental factors affecting population spread. Predicted spread rates vary substantially among the 12 study species, primarily due to inter-specific variation in maturation age, fecundity and seed terminal velocity. Future spread is predicted to be faster if atmospheric CO2 enrichment increases fecundity and advances maturation, irrespective of the projected changes in mean surface wind speed. Yet for only a few species will predicted wind-driven spread match future climate changes, and then only if seed abscission occurs in strong winds and environmental conditions favour high survival of the farthest-dispersed seeds. Because such conditions are unlikely, North American wind-dispersed trees are expected to lag behind the projected climate range shift.
Short-period double-degenerate white dwarf (WD) binaries with periods of less than ~1 day are considered to be one of the likely progenitors of type Ia supernovae. These binaries have undergone a period of common-envelope evolution. If the core ignites helium before the envelope is ejected, then a hot subdwarf remains prior to contracting into a WD. Here we present a comparison of two very rare systems that each contain two hot subdwarfs in a short-period orbit. We provide a quantitative spectroscopic analysis of the systems using synthetic spectra from state-of-the-art non-LTE models to constrain the atmospheric parameters of the stars. We also use these models to determine the radial velocities, and thus calculate dynamical masses for the stars in each system.
Voice onset time (VOT), a primary cue for voicing in many languages including English and German, is known to vary greatly between speakers but also displays robust within-speaker consistencies, at least in English. The current analysis extends these findings to German. VOT measures were investigated from voiceless alveolar and velar stops in CV syllables cued by a visual prompt in a cue-distractor task. As in English, a considerable portion of German VOT variability can be attributed to the syllable's vowel length and the stop's place of articulation. Individual differences in VOT remain irrespective of speech rate. However, significant correlations across places of articulation and between speaker-specific mean VOTs and standard deviations indicate that talkers employ a relatively unified VOT profile across places of articulation. This could allow listeners to adapt more efficiently to speaker-specific realisations.
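The reported relationship between speaker-specific mean VOTs and their standard deviations can be illustrated with a toy computation. The VOT samples below are invented; only the analysis pattern (per-speaker summary statistics, then a Pearson correlation across speakers) mirrors the study:

```python
import statistics as st

# Invented VOT samples (ms) for three hypothetical speakers.
vot = {
    "S1": [45, 50, 48, 52, 55],
    "S2": [60, 68, 64, 72, 66],
    "S3": [80, 90, 85, 95, 100],
}
means = [st.mean(v) for v in vot.values()]   # per-speaker mean VOT
sds   = [st.stdev(v) for v in vot.values()]  # per-speaker variability

# Pearson correlation between speaker means and SDs, computed by hand.
mm, ms = st.mean(means), st.mean(sds)
num = sum((a - mm) * (b - ms) for a, b in zip(means, sds))
den = (sum((a - mm) ** 2 for a in means)
       * sum((b - ms) ** 2 for b in sds)) ** 0.5
r = num / den
print(r > 0)  # True: in this toy data, longer-VOT speakers also vary more
```

A positive r of this kind is what the abstract describes as a "relatively unified VOT profile" across speakers.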
High-storage-density magnetic devices rely on precise, reliable and ultrafast switching of magnetic states. Optical control of magnetization using femtosecond lasers, without applying any external magnetic field, offers the advantage of switching magnetic states on ultrashort time scales and has therefore attracted significant attention. Recently, the so-called all-optical helicity-dependent switching (AO-HDS), in which a circularly polarized femtosecond laser pulse switches the magnetization of a ferromagnetic thin film as a function of laser helicity, has been reported and demonstrated [1]. More recent studies have reported that AO-HDS is a general phenomenon in magnetic materials, ranging from rare earth - transition metal ferrimagnets (e.g. alloys, multilayers and heterostructure systems) to ferromagnetic thin films. Among the numerous studies in the literature discussing the microscopic origin of AO-HDS in ferromagnets and ferrimagnetic alloys, the most renowned concepts are momentum transfer via the Inverse Faraday Effect (IFE) [1-3] and preferential thermal demagnetization of one magnetization direction by heating close to Tc (the Curie temperature) in the presence of magnetic circular dichroism (MCD) [4-6]. In this study, we investigate all-optical magnetic switching using a stationary femtosecond laser spot (3-5 μm) on TbFe alloys via photoemission electron microscopy (PEEM) and x-ray magnetic circular dichroism (XMCD) with a spatial resolution of approximately 30 nm. We spatially characterize the effect of laser heating and of the local temperature profile created across the laser spot on AO-HDS in TbFe thin films. We find that AO-HDS occurs only in a 'ring'-shaped region surrounding the thermally demagnetized region formed by the laser spot, and that the formation of switched domains further relies on thermally induced domain-wall motion.
Our temperature-dependent measurements highlight the importance of attainin...
The northward movement and collision of the Arabian plate with Eurasia generates compressive stresses and resulting shortening in Iran. Within the Alborz Mountains, North Iran, a complex and not well understood system of strike-slip and thrust faults accommodates a fundamental part of the NNE-SSW oriented shortening. On 28 May 2004, the Mw 6.3 Baladeh earthquake hit the north-central Alborz Mountains. It is one of the few large events in this region in modern times and thus a rare opportunity to study earthquake mechanisms and the local ongoing deformation processes. It also demonstrated the high vulnerability of this densely populated region.
Since the Shallow Structure Hypothesis (SSH) was first put forward in 2006, it has inspired a growing body of research on grammatical processing in nonnative (L2) speakers. More than 10 years later, we think it is time for the SSH to be reconsidered in the light of new empirical findings and current theoretical assumptions about human language processing. The purpose of our critical commentary is twofold: to clarify some issues regarding the SSH and to sketch possible ways in which this hypothesis might be refined and improved to better account for L1 and L2 speakers’ performance patterns.
We are currently observing a transformation of our technical world into a networked one, in which not only do embedded systems interact with the physical world, but the interconnection of these nodes in the cyber world also becomes a reality. In parallel, there is a strong trend to employ artificial intelligence techniques, in particular machine learning, to make software behave smartly. Often, cyber-physical systems must be self-adaptive at the level of the individual system in order to operate as elements in open, dynamic and deviating overall structures, and to adapt to open and dynamic contexts, while being developed, operated, evolved and governed independently.
In this presentation, we first discuss the envisioned future scenarios for cyber-physical systems, with an emphasis on the synergies networking can offer, and then characterize the resulting challenges for the design, production and operation of these systems. We then discuss to what extent our current capabilities, in particular concerning software engineering, match these challenges, and where substantial improvements to software engineering are crucial. In today's software engineering for embedded systems, models are used to plan systems upfront, to maximize envisioned properties on the one hand and to minimize cost on the other. When applying the same ideas to software for smart cyber-physical systems, it soon turned out that these systems often exhibit subtler links between the involved models and the requirements, users and environment. Self-adaptation and runtime models have been advocated as concepts to cover the demands that result from these subtler links. Lately, both trends have been brought together more thoroughly by the notion of self-aware computing systems. We review the underlying causes, discuss some of our work in this direction, and outline related open challenges and potential future approaches to software engineering for smart cyber-physical systems.
This thesis examines the (self-)presentations of founders of non-governmental organizations (NGOs) working on children's and women's rights in Tamil Nadu, South India. To analyse these (self-)presentations adequately, an analytical approach is first developed which assumes that existing sociological concepts, which emerged primarily from engagement with a specific (Western European) context, cannot be transferred to other contexts unquestioned. This complicates the use of terms such as "civil society" and "development", as well as the seemingly clear dichotomy of modernity and tradition. Eisenstadt highlighted this problem in the debate on "Multiple Modernities" that he initiated. The present study connects to this discussion with action-theoretical arguments in order to analyse actors' perspectives adequately. After the theoretical framework and methodological basis of the study are explained, contextual knowledge is developed to embed the analysis of the interviews. Discourses around caste and the status of women, as well as aspects of the current political situation in Tamil Nadu, are considered. The (self-)presentations can then be broken down along the threefold division indicated in the title: First, the founders engage with their own role. They describe themselves as "social workers" and, in their self-descriptions, partly draw on populist elements of the political environment. Second, they describe their own position vis-à-vis their "target groups". Here it becomes clear that the relationships between NGO and "community" oscillate between participation and paternalism.
Zum dritten formulieren sie Zielsetzungen in Abgrenzung zu anderen (lokalen) politischen Akteuren: Sie grenzen sich zum Beispiel von einem ihrem Verständnis nach „westlichen“ Begriff von Entwicklung ab und formulieren demgegenüber „eigene“ Ziele. Sie reflektieren über lokale Kooperationen, z.B. mit politischen Persönlichkeiten, Kastenassoziationen, aber auch über Abgrenzungen oder Zusammenstöße, die sich dabei ergeben. Insgesamt wird deutlich, dass die (Selbst-)Darstellungen der Gründer_innen sich spannungsgeladen und ambivalent auf unterschiedliche Diskurse, Ideen und soziale Praktiken beziehen. Sie lassen sich insbesondere nicht in eine Perspektive von „Entwicklung“ einordnen, welche auf der Dichotomie von Moderne und Tradition aufbaut.
Studies indicate that reliable access to power is an important enabler for economic growth. To this end, modern energy management systems have seen a shift from reliance on time-consuming manual procedures to highly automated management, with current energy provisioning systems being run as cyber-physical systems. Operating energy grids as cyber-physical systems offers the advantage of increased reliability and dependability, but also raises issues of security and privacy. In this chapter, we provide an overview of the contents of this book, showing the interrelation between the topics of the chapters in terms of smart energy provisioning. We begin by discussing the concept of smart grids in general, then narrow our focus to smart micro-grids in particular. Lossy networks also provide an interesting framework for implementing smart micro-grids in remote/rural areas, where deploying standard smart grids is economically and structurally infeasible. To this end, we consider an architectural design for a smart micro-grid suited to devices with low processing capabilities. We model malicious behaviour and propose mitigation measures based on properties that distinguish normal from malicious behaviour.
The most recent intense earthquake swarm in the Vogtland lasted from 6 October 2008 until January 2009. The largest magnitudes exceeded M3.5 several times in October, making it the strongest swarm since 1985/86. In contrast to the swarms of 1985 and 2000, seismic moment release was concentrated near the swarm onset. The focal area and temporal evolution are similar to those of the swarm in 2000. Our working hypothesis is that uprising upper-mantle fluids trigger swarm earthquakes at low stress levels. To monitor the seismicity, the University of Potsdam operated a small-aperture seismic array at 10 km epicentral distance between 18 October 2008 and 18 March 2009. Consisting of 12 seismic stations and 3 additional microphones, the array is capable of detecting earthquakes from larger down to very low magnitudes (M < -1) as well as associated air waves. We use array techniques to determine properties of the incoming wavefield: noise, direct P and S waves, and converted phases.
Background:
Inflammatory bowel disease (IBD) represents a dysregulation of the mucosal immune system. The pathogenesis of Crohn’s disease (CD) and ulcerative colitis (UC) is linked to the loss of intestinal tolerance and barrier function. The healthy mucosal immune system has previously been shown to be inert against food antigens. Since the small intestine is the main contact surface for food antigens and therefore a key site of the immunological response, the present study served to analyse food-antigen-specific T cells in the peripheral blood of IBD patients.
Methods:
Peripheral blood mononuclear cells of CD patients with an affected small intestine and of UC (colitis) patients, either active or in remission, were stimulated with the following food antigens: gluten, soybean, peanut and ovalbumin. Healthy controls and celiac disease patients were included as controls. Antigen-activated CD4+ T cells in the peripheral blood were analysed by magnetic enrichment of CD154+ effector T cells and a cytometric antigen-reactive T-cell analysis (‘ARTE’ technology), followed by characterisation of the effector response.
Results:
The effector T-cell responses of antigen-specific T cells were compared between CD with small intestinal inflammation and UC, where inflammation is restricted to the colon. Among all tested food antigens, the highest frequency of antigen-specific T cells (CD4+CD154+) was found for gluten. Celiac disease patients were included as a control, since gluten has been identified as the disease-causing antigen. The highest frequency of gluten-specific T cells was revealed in active CD when compared with UC, celiac disease on a gluten-free diet (GFD) and healthy controls. Ovalbumin-specific T cells were almost undetectable, whereas the reaction to soybean and peanut was slightly higher. Again, however, the strongest reaction was observed in CD with small intestinal involvement compared with UC. Remarkably, in celiac disease on a GFD, antigen-specific cells were detected only for gluten. These gluten-specific T cells were characterised by up-regulation of the pro-inflammatory cytokines IFN-γ, IL-17A and TNF-α. IFN-γ was exclusively elevated in CD patients with active disease. Gluten-specific T cells expressing IL-17A were increased in all IBD patients. Furthermore, T cells of CD patients, independent of disease activity, revealed a high expression of the pro-inflammatory cytokine TNF-α.
Conclusion:
The ‘ARTE’ technique allows the analysis and quantification of food-antigen-specific T cells in the peripheral blood of IBD patients, indicating potential therapeutic insights. These data provide evidence that small intestinal inflammation in CD is key for the development of a systemic pro-inflammatory effector T-cell response driven by food antigens.
The coupling between molecular excitations and nanoparticles leads to promising applications. It is for example used to enhance the optical cross-section of molecules in surface enhanced Raman scattering, Purcell enhancement or plasmon enhanced dye lasers. In a coupled system new resonances emerge resulting from the original plasmon (ωpl) and exciton (ωex) resonances as
ω± = ½(ωpl + ωex) ± √[¼(ωpl − ωex)² + g²],   (1)
where g is the coupling parameter. Hence, the new resonances show a separation of Δ = ω+ − ω−; the coupling strength can be deduced from the minimum separation of the two resonances, Ω = Δ(ωpl = ωex), which is reached at zero detuning.
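Eq. (1) can be evaluated directly; a minimal numerical sketch (frequencies in arbitrary units; damping is neglected, as in the lossless form of the formula):

```python
import math

def hybrid_resonances(w_pl, w_ex, g):
    """Hybrid mode frequencies of Eq. (1): the mean of the plasmon
    and exciton resonances, split by the detuning and coupling g."""
    mean = 0.5 * (w_pl + w_ex)
    root = math.sqrt(0.25 * (w_pl - w_ex) ** 2 + g ** 2)
    return mean + root, mean - root  # (omega_plus, omega_minus)

def splitting(w_pl, w_ex, g):
    """Separation Delta = omega_plus - omega_minus of the hybrid modes."""
    w_plus, w_minus = hybrid_resonances(w_pl, w_ex, g)
    return w_plus - w_minus

# At zero detuning (w_pl == w_ex) the separation is minimal and equals
# the Rabi splitting Omega = 2g.
print(round(splitting(2.0, 2.0, 0.05), 6))  # 0.1
```

Detuning one resonance (w_pl != w_ex) only increases the separation, reproducing the anticrossing behaviour described in the text.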
Surface acoustic wave (SAW) devices are well known for gravimetric sensor applications. In biosensing applications, chemically and biochemically evoked adsorption processes at surfaces are detected in liquid environments using delay-line or resonator sensor configurations, preferably in combination with appropriate microfluidic devices. In this paper, a novel SAW-based impedance sensor type is introduced which uses only one interdigital electrode transducer (IDT) simultaneously as SAW generator and sensor element. It is shown that the amplitude of the reflected S11 signal directly depends on the input impedance of the SAW device. The input impedance is strongly influenced by mass adsorption, which causes a characteristic and measurable impedance mismatch.
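The dependence of the reflected signal on the input impedance follows the standard one-port reflection coefficient S11 = (Z_in − Z0)/(Z_in + Z0); a minimal sketch (the 50 Ω reference impedance is an assumed convention, not taken from the paper):

```python
def s11_magnitude(z_in, z0=50.0):
    """Magnitude of the one-port reflection coefficient
    S11 = (Z_in - Z0) / (Z_in + Z0): 0 for a matched load,
    approaching 1 as the impedance mismatch grows."""
    gamma = (z_in - z0) / (z_in + z0)
    return abs(gamma)

print(s11_magnitude(50 + 0j))        # 0.0 (matched load: no reflection)
print(s11_magnitude(75 + 10j) > 0)   # True (mismatch: reflected signal)
```

Mass adsorption that shifts the device's input impedance away from the reference thus shows up directly as a change in the reflected S11 amplitude.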
Sikum Hilkhot Shabat
(2010)
This document summarises the commandments of Shabbat.
The electromagnetic coupling of molecular excitations to plasmonic nanoparticles offers a promising method to manipulate light-matter interaction at the nanoscale. Plasmonic nanoparticles foster exceptionally high coupling strengths due to their capacity to strongly concentrate the light field into sub-wavelength mode volumes. A particularly interesting coupling regime occurs if the coupling increases to a level at which the coupling strength surpasses all damping rates in the system. In this so-called strong-coupling regime, hybrid light-matter states emerge which can no longer be divided into separate light and matter components. These hybrids unite the features of the original components and possess new resonances whose positions are separated by the Rabi splitting energy ℏΩ. Detuning the resonance of one of the components leads to an anticrossing of the two arising branches of the new resonances ω+ and ω−, with a minimal separation of Ω = ω+ − ω−.
Acute ankle sprain leads in 40% of all cases to chronic ankle instability (CAI). CAI is related to a variety of motor adaptations at the lower extremities. Previous investigations identified increased muscle activities during landing in CAI compared to healthy control participants. However, it remains unclear whether muscular alterations at the knee muscles are limited to the involved (unstable) ankle or are also present at the uninvolved leg. The latter might potentially indicate a risk of ankle sprain or future injury on the uninvolved leg. Purpose: To assess whether knee muscle activities differ between the involved and uninvolved leg in participants with CAI during perturbed walking. Method: 10 participants (6 females; 4 males; 26±4 years; 169±9 cm; 65±7 kg) with unilateral CAI walked on a split-belt treadmill (1 m/s) for 5 minutes of baseline walking and 6 minutes of perturbed walking (left and right side, 10 perturbations each). Electromyography (EMG) measurements were performed at biceps femoris (BF) and rectus femoris (RF). EMG amplitudes (RMS; normalized to MVIC) were analyzed for 200 ms pre-heel contact (Pre200), 100 ms post heel contact (Post100) and 200 ms after perturbation (Pert200). Data were analyzed by paired t-test/Wilcoxon test depending on the presence or absence of normal distribution (Bonferroni-adjusted α level p ≤ 0.0125). Results: No statistical difference was found between the involved and uninvolved leg for RF (Pre200: 4±2% and 11±22%, respectively, p=0.878; Post100: 10±5% and 18±31%, p=0.959; Pert200: 6±3% and 13±24%, p=0.721) or for BF (Pre200: 12±7% and 11±6%, p=0.576; Post100: 10±7% and 9±7%, p=0.732; Pert200: 7±4% and 7±7%, p=0.386). Discussion: No side differences in muscle activity could be revealed for the assessed feedforward and feedback responses (perturbed and unperturbed) in unilateral CAI. Reduced inter-individual variability of muscular activities at the involved leg might indicate a rather stereotypical response pattern.
It remains to be investigated whether muscular control at the knee is unaffected by CAI, or whether both sides adapted in a similar way to the chronic condition at the ankle.
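The EMG normalization described above (RMS amplitude expressed as a percentage of MVIC) can be sketched as follows; the sample values are purely hypothetical and not taken from the study:

```python
import math

def rms(signal):
    """Root-mean-square amplitude of an EMG window."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

def pct_mvic(window, mvic_rms):
    """EMG amplitude normalized to the RMS of a maximal voluntary
    isometric contraction (MVIC), expressed in percent."""
    return 100.0 * rms(window) / mvic_rms

# Hypothetical samples (mV): a pre-heel-contact window and an MVIC
# reference trial (values invented for illustration only).
pre200 = [0.02, -0.03, 0.04, -0.02, 0.03]
mvic = [0.5, -0.6, 0.55, -0.5, 0.6]
print(round(pct_mvic(pre200, rms(mvic)), 1))  # 5.3
```

In practice each window (Pre200, Post100, Pert200) would be normalized against the same participant-specific MVIC trial before the paired comparisons are run.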
Currently we are witnessing profound changes in the geospatial domain. Driven by recent ICT developments such as web services, service-oriented computing, and open-source software, by an explosion of geodata and geospatial applications, and by rapidly growing communities of non-specialist users, the crucial issue is the provision and integration of geospatial intelligence into these rapidly changing, heterogeneous developments. This paper introduces the concept of Servicification into geospatial data processing. Its core idea is the provision of expertise through a flexible number of web-based software service modules. Selecting these services and linking them to user profiles, application tasks, data resources, or additional software allows for the compilation of flexible, time-sensitive geospatial data handling processes. Encapsulated in a string of discrete services, the approach presented here aims to provide non-specialist users with the geospatial expertise required for the effective, professional solution of a defined application problem. Providing users with geospatial intelligence in the form of web-based, modular services is a fundamentally different approach to geospatial data processing. This novel concept puts geospatial intelligence, made available through services encapsulating rule bases and algorithms, at the centre and at the disposal of users, regardless of their expertise.
The keynote article (Mayberry & Kluender, 2017) makes an important contribution to questions concerning the existence and characteristics of sensitive periods in language acquisition. Specifically, by comparing groups of non-native L1 and L2 signers, the authors have been able to ingeniously disentangle the effects of maturation from those of early language exposure. Based on L1 versus L2 contrasts, the paper convincingly argues that L2 learning is a less clear test of sensitive periods. Nevertheless, we believe Mayberry and Kluender underestimate the evidence for maturational factors in L2 learning, especially that coming from recent research.
The availability of detailed virtual 3D building models, including representations of indoor elements, allows for a wide range of applications requiring effective exploration and navigation functionality. Depending on the application context, users should be enabled to focus on specific Objects-of-Interest (OOIs) or important building elements. This requires approaches to filtering building parts as well as techniques to visualize important building objects and their relations. To this end, this paper explores the application and combination of interactive rendering techniques as well as their semantically driven configuration in the context of 3D indoor models.
Detecting and recognizing text in natural scene images is a challenging, yet not completely solved task. In recent years several new systems that try to solve at least one of the two sub-tasks (text detection and text recognition) have been proposed. In this paper we present SEE, a step towards semi-supervised neural networks for scene text detection and recognition, that can be optimized end-to-end. Most existing works consist of multiple deep neural networks and several pre-processing steps. In contrast to this, we propose to use a single deep neural network, that learns to detect and recognize text from natural images, in a semi-supervised way. SEE is a network that integrates and jointly learns a spatial transformer network, which can learn to detect text regions in an image, and a text recognition network that takes the identified text regions and recognizes their textual content. We introduce the idea behind our novel approach and show its feasibility, by performing a range of experiments on standard benchmark datasets, where we achieve competitive results.
Cloud storage brokerage is an abstraction aimed at providing value-added services. However, Cloud Service Brokers are challenged by several security issues, including enlarged attack surfaces due to the integration of disparate components and API interoperability issues. Therefore, appropriate security risk assessment methods are required to identify and evaluate these security issues, and to examine the efficiency of countermeasures. A possible approach for satisfying these requirements is the employment of threat modeling concepts, which have been successfully applied in traditional paradigms. In this work, we employ threat models including attack trees, attack graphs and Data Flow Diagrams against a Cloud Service Broker (CloudRAID) and analyze these security threats and risks. Furthermore, we propose an innovative technique for combining Common Vulnerability Scoring System (CVSS) and Common Configuration Scoring System (CCSS) base scores in probabilistic attack graphs to cater for configuration-based vulnerabilities, which are typically leveraged for attacking cloud storage systems. This approach is necessary since existing schemes do not provide sufficient security metrics, which are imperative for comprehensive risk assessments. We demonstrate the efficiency of our proposal by devising CCSS base scores for two common attacks against cloud storage: the Cloud Storage Enumeration Attack and the Cloud Storage Exploitation Attack. These metrics are then used in attack-graph-metric-based risk assessment. Our experimental evaluation shows that our approach caters for the aforementioned gaps and provides efficient security hardening options. Therefore, our proposals can be employed to improve cloud security.
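One common way to use CVSS/CCSS base scores in probabilistic attack graphs (a sketch of the general technique, not necessarily the paper's exact model) is to map each score to an exploitation probability and combine probabilities at AND/OR nodes; all scores below are hypothetical:

```python
def to_probability(base_score):
    """Map a CVSS/CCSS base score (0-10) to an exploitation
    probability via the widely used score/10 convention."""
    return base_score / 10.0

def and_node(child_probs):
    """All preconditions must hold: multiply the probabilities."""
    p = 1.0
    for c in child_probs:
        p *= c
    return p

def or_node(child_probs):
    """Any precondition suffices: noisy-OR combination."""
    p_none = 1.0
    for c in child_probs:
        p_none *= (1.0 - c)
    return 1.0 - p_none

# Hypothetical example: an API vulnerability (CVSS 7.5) AND a storage
# misconfiguration (CCSS 6.0) jointly enable an enumeration attack.
p_enum = and_node([to_probability(7.5), to_probability(6.0)])
print(round(p_enum, 2))  # 0.45
```

Folding configuration-based CCSS scores into the same probability scale is what lets misconfigurations appear as first-class nodes next to CVSS-scored software flaws in the attack graph.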
Mobile operating systems, such as Google's Android, have become a fixed part of our daily lives and are entrusted with a plethora of private information. Accordingly, their data protection mechanisms have been improved steadily over the last decade and, in particular for Android, the research community has explored various enhancements and extensions to the access control model. However, the vast majority of those solutions have been concerned with controlling access to data; equally important is the question of how to control the flow of data once it has been released. Ignoring control over the dissemination of data between applications, or between components of the same app, opens the door for attacks such as permission re-delegation or privacy-violating third-party libraries. Controlling information flows is a long-standing problem, and one of the most recent and practice-oriented approaches to information flow control is secure multi-execution.
In this paper, we present Ariel, the design and implementation of an IFC architecture for Android based on the secure multi-execution of apps. Ariel demonstrably extends Android's system with support for executing multiple instances of apps, and it is equipped with a policy lattice derived from the protection levels of Android's permissions as well as an I/O scheduler to achieve control over data flows between application instances. We demonstrate how secure multi-execution with Ariel can help to mitigate two prominent attacks on Android, permission re-delegations and malicious advertisement libraries.
Scrum2kanban
(2018)
Using university capstone courses to teach agile software development methodologies has become commonplace, as agile methods have gained support in professional software development. This usually means students are introduced to and work with the currently most popular agile methodology: Scrum. However, as the agile methods employed in the industry change and are adapted to different contexts, university courses must follow suit. A prime example of this is the Kanban method, which has recently gathered attention in the industry. In this paper, we describe a capstone course design, which adds the hands-on learning of the lean principles advocated by Kanban into a capstone project run with Scrum. This both ensures that students are aware of recent process frameworks and ideas as well as gain a more thorough overview of how agile methods can be employed in practice. We describe the details of the course and analyze the participating students' perceptions as well as our observations. We analyze the development artifacts, created by students during the course in respect to the two different development methodologies. We further present a summary of the lessons learned as well as recommendations for future similar courses. The survey conducted at the end of the course revealed an overwhelmingly positive attitude of students towards the integration of Kanban into the course.
Screeninginstrumente
(2018)
Schulgesetz Rheinland-Pfalz
(2000)
Scenograph
(2018)
When developing a real-walking virtual reality experience, designers generally create virtual locations to fit a specific tracking volume. Unfortunately, this prevents the resulting experience from running in a smaller or differently shaped tracking volume. To address this, we present a software system called Scenograph. The core of Scenograph is a tracking-volume-independent representation of real-walking experiences. Scenograph instantiates the experience to a tracking volume of given size and shape by splitting the locations into smaller ones while maintaining the narrative structure. In our user study, participants' ratings of realism decreased significantly when existing techniques were used to map a 25 m² experience to 9 m² and an L-shaped 8 m² tracking volume. In contrast, ratings did not differ when Scenograph was used to instantiate the experience.
No other means of communication determines our everyday life through its seemingly unrestricted possibilities more than the internet. From the mid-90s onwards, more and more technical advancements in the field of communication have appeared on the market, which in turn call for new terminology. It is first of all the internet (essentially based on the interaction between users and experts) which requires effective nomenclature in order to mediate between lay users with their restricted knowledge on the one hand, and experts with their sophisticated terminology on the other. At the interface between new, complex realities and the need for simple linguistic access, a huge quantity of metaphoric denominations is used, making abstract innovations more comprehensible. Metaphor in internet discourse serves to "reduce verticality" (Stenschke 2006) between specialized terminology and common language. The paper deals with metaphors based on spatial concepts. Space and spatiality play a key role in cognitive theories of metaphor, as these theories themselves (according to Lakoff/Johnson 1980) are often based on the application of spatial concepts to non-spatial relations. After describing spatial concepts in general (with reference to the internet), the paper explores which kinds of metaphor take advantage of the complexity present in the internet and how the medial space is linguistically recaptured in terms of spatial perception.
We study the rupture propagation of the 2008/05/12 Ms8.0 Wenchuan earthquake. We apply array techniques such as semblance vespagram analysis to P waves recorded at seismic broadband stations within 30-100° epicentral distance. By combining multiple large-aperture station groups, spatial and temporal resolution is enhanced and problems due to source directivity and source mechanism are avoided. We find that seismic energy was released for at least 110 s. Propagating unilaterally at a sub-shear rupture velocity of about 2.5 km/s in NE direction, the earthquake reached a lateral extent of more than 300 km. Whereas high semblance within 70 s of rupture start indicates simple propagation, more complex source processes are indicated thereafter by decreased coherency in the seismograms. At this stage of the event, coherency is low but significantly above noise level. We emphasize that first results of our computations were obtained within 30 minutes after source time using an automated algorithm. This procedure has been routinely and globally applied to major earthquakes. Results are made public through the internet.
We study the spatio-temporal evolution of three recent tsunamigenic earthquakes (TsE): off-coast N-Sumatra on 26/12/2004 (Mw9.3), off-coast Nias on 28/03/2005 (Mw8.5), and off-coast Java on 17/07/2006 (Mw7.7). Start time, duration, and propagation of the rupture are retrieved. All parameters can be obtained rapidly after recording of the first-arrival phases in near-real-time processing. We exploit semblance analysis, backpropagation and broadband seismograms within 30°-95° distance. Image enhancement is achieved by stacking the semblance of arrays from different directions. For the three events, the rupture extends over about 1150, 150, and 200 km, respectively. The events in 2004, 2005, and 2006 had source durations of at least 480 s, 120 s, and 180 s, respectively. We observe unilateral rupture propagation for all events except for the rupture onset and the Nias event, where there is evidence for a bilateral start of the rupture. Whereas the average rupture speed of the events in 2004 and 2005 is in the order of the S-wave speed (≈2.5-3 km/s), unusually slow rupturing (≈1.5 km/s) is indicated for the July 2006 event. For the July 2006 event we find rupturing of a 200 x 100 km wide area in at least two phases, with propagation from NW to SE. The event has some characteristics of a circular rupture followed by unilateral faulting with a change in slip rate. Fault area and aftershock distribution coincide. Spatial and temporal resolution are frequency dependent. Studies of a Mw6.0 earthquake on 2006/09/21 and one synthetic source show a ≈1° limit in resolution. The retrieved source area and source duration, as well as peak values for semblance and beam power, generally increase with the size of the earthquake, making possible an automatic detection and classification of large and small earthquakes.
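The semblance used for coherence imaging in these rupture studies is, in its standard (Neidell-Taner) form, the beam energy divided by the summed trace energies; a minimal sketch over pre-aligned (beam-steered) traces:

```python
def semblance(traces):
    """Semblance of N time-aligned traces (lists of equal length):
    energy of the stack divided by N times the summed energy of the
    individual traces. Returns 1.0 for perfectly coherent signals
    and values near 0 for destructively interfering ones."""
    n = len(traces)
    num = sum(sum(tr[t] for tr in traces) ** 2
              for t in range(len(traces[0])))
    den = n * sum(x * x for tr in traces for x in tr)
    return num / den

coherent = [[0.0, 1.0, -1.0, 0.5]] * 3   # three identical traces
print(semblance(coherent))               # 1.0
```

In array processing, the traces would first be shifted according to trial slowness and back-azimuth; scanning those parameters and mapping high-semblance beams back to the source region yields the rupture images described above.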
Rapid and robust characterization of large earthquakes in terms of their spatial extent and temporal duration is of high importance for disaster mitigation and early warning applications. Backtracking of seismic P-waves was successfully used by several authors to image the rupture process of the great Sumatra earthquake (26.12.2004) using short period and broadband arrays. We follow here an approach of Walker et al. to backtrack and stack broadband waveforms from global network stations using traveltimes for a global Earth model to obtain the overall spatio-temporal development of the energy radiation of large earthquakes in a quick and robust way. We present results for selected events with well studied source processes (Kokoxili 14.11.2001, Tokachi-Oki 25.09.2003, Nias 28.03.2005). Further, we apply the technique in a semi-real time fashion to broadband data of earthquakes with a broadband magnitude >= 7 (roughly corresponding to Mw 6.5). Processing is based on first automatic detection messages from the GEOFON extended virtual network (GEVN).
Root infinitives on Twitter
(2017)