Kijko et al. (2016) present various methods to estimate parameters that are relevant for probabilistic seismic-hazard assessment. One of these parameters, although not the most influential, is the maximum possible earthquake magnitude m_max. I show that the proposed estimation of m_max is based on an erroneous equation related to a misuse of the estimator in Cooke (1979) and leads to unstable results. So far, reported finite estimates of m_max arise from data selection, because the estimator in Kijko et al. (2016) diverges with finite probability. This finding is independent of the assumed distribution of earthquake magnitudes. For the specific choice of the doubly truncated Gutenberg-Richter distribution, I illustrate the problems by deriving explicit equations. Finally, I conclude that point estimators are generally not a suitable approach to constrain m_max.
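The estimator family at issue can be made concrete with a small numerical sketch. The following is a schematic fixed-point implementation of a Cooke/Kijko-Sellevoll-type point estimator, m̂_max = m_obs + ∫ F(m; m_max)^n dm over [m_min, m_obs], for a doubly truncated Gutenberg-Richter distribution. It is an illustration under simplifying assumptions (known b-value, trapezoid-rule integration, synthetic catalog), not the exact formulation criticized in the abstract:

```python
import numpy as np

def gr_cdf(m, m_min, m_max, b=1.0):
    """CDF of the doubly truncated Gutenberg-Richter distribution."""
    beta = b * np.log(10.0)
    return (1.0 - np.exp(-beta * (m - m_min))) / (1.0 - np.exp(-beta * (m_max - m_min)))

def mmax_point_estimate(catalog, m_min, b=1.0, tol=1e-6, max_iter=200):
    """Fixed-point iteration for a Cooke/Kijko-Sellevoll-type estimator:
    m_max = m_obs + integral_{m_min}^{m_obs} F(m; m_max)^n dm.
    For an unbounded magnitude distribution the correction integral need
    not stay small, which is the instability discussed in the abstract."""
    m_obs, n = np.max(catalog), len(catalog)
    grid = np.linspace(m_min, m_obs, 2001)
    dm = grid[1] - grid[0]
    m_max = m_obs  # start at the observed maximum
    for _ in range(max_iter):
        f = gr_cdf(grid, m_min, m_max, b) ** n
        new = m_obs + np.sum(0.5 * (f[:-1] + f[1:])) * dm  # trapezoid rule
        if abs(new - m_max) < tol:
            break
        m_max = new
    return m_max

# Synthetic catalog drawn from a truncated GR law on [4, 8] (b = 1)
# via inverse-transform sampling.
rng = np.random.default_rng(1)
beta = np.log(10.0)
u = rng.random(500)
catalog = 4.0 - np.log(1.0 - u * (1.0 - np.exp(-4.0 * beta))) / beta
est = mmax_point_estimate(catalog, m_min=4.0)
```

The correction term depends strongly on the assumed m_max itself and on the largest observed magnitudes, which is why such point estimates are sensitive to data selection.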
Data limitations can lead to unrealistic fits of predictive species distribution models (SDMs) and spurious extrapolation to novel environments. Here, we want to draw attention to novel combinations of environmental predictors that are within the sampled range of individual predictors but are nevertheless outside the sample space. These tend to be overlooked when visualizing model behaviour. They may be a cause of differing model transferability and environmental change predictions between methods, a problem described in some studies but generally not well understood. We use a simple simulated-data example to illustrate the problem and provide new and complementary visualization techniques to explore model behaviour and predictions in novel environments. We then apply these in a more complex real-world example. Our results underscore the necessity of scrutinizing model fits, ecological theory and environmental novelty.
Domain-specific physical activity patterns and cardiorespiratory fitness among adults in Germany
(2019)
Background Studies show that occupational physical activity (OPA) has less health-enhancing effects than leisure-time physical activity (LTPA). The sparse available data suggest that OPA rarely includes aerobic PAs, with little or no enhancing effect on cardiorespiratory fitness (CRF), as a possible explanation. This study aims to investigate the associations between patterns of OPA and LTPA and CRF among adults in Germany. Methods 1,204 men and 1,303 women (18-64 years), who participated in the German Health Interview and Examination Survey 2008-2011, completed a standardized sub-maximal cycle ergometer test to estimate maximal oxygen consumption (VO2max). Job positions were coded according to the level of physical effort to construct an occupational PA index and categorized as low vs. high OPA. LTPA was assessed via questionnaires and dichotomized into no vs. any LTPA participation. A combined LTPA/OPA variable was used (high OPA/LTPA, low OPA/LTPA, high OPA/no LTPA, low OPA/no LTPA). Information on potential confounders was obtained via questionnaires (e.g., smoking and education) or physical measurements (e.g., waist circumference). Multivariable logistic regression was used to analyze associations between OPA/LTPA patterns and VO2max. Results Preliminary analyses showed that less-active men were more likely to have a low VO2max, with odds ratios (ORs) of 0.80 for low OPA/LTPA, 1.84 for high OPA/no LTPA and 3.46 for low OPA/no LTPA compared to high OPA/LTPA. The corresponding ORs for women were 1.11 for low OPA/LTPA, 3.99 for high OPA/no LTPA and 2.44 for low OPA/no LTPA, indicating the highest likelihood of low fitness for women working in physically demanding jobs and not engaging in LTPA. Conclusions Findings confirm a strong association between LTPA and CRF and suggest an interaction between OPA and LTPA patterns on CRF within the workforce in Germany.
Women without LTPA are at high risk of having a low CRF, especially if they work in physically demanding jobs. Key messages Women not practicing leisure-time physical activity are at risk of having a low cardiorespiratory fitness, especially if they work in physically demanding jobs. The differing impact of physical activity domains should be considered when planning interventions to enhance fitness among the adult population.
Zero-shot learning in Language & Vision is the task of correctly labelling (or naming) objects of novel categories. Another strand of work in L&V aims at pragmatically informative rather than "correct" object descriptions, e.g. in reference games. We combine these lines of research and model zero-shot reference games, where a speaker needs to successfully refer to a novel object in an image. Inspired by models of "rational speech acts", we extend a neural generator to become a pragmatic speaker reasoning about uncertain object categories. As a result of this reasoning, the generator produces fewer nouns and names of distractor categories as compared to a literal speaker. We show that this conversational strategy for dealing with novel objects often improves communicative success, in terms of resolution accuracy of an automatic listener.
Reply to Peng et al.: Archaeological contexts should not be ignored for early chicken domestication
(2015)
Xenikoudakis et al. report a partial mitochondrial genome of the extinct giant beaver Castoroides and estimate the origin of aquatic behavior in beavers at approximately 20 million years ago. This time estimate coincides with the extinction of terrestrial beavers and raises the question whether the two events had a common cause.
The origin of ambling horses
(2016)
Horseback riding is the most fundamental use of domestic horses and has had a huge influence on the development of human societies for millennia. Over time, riding techniques and the style of riding improved. Therefore, horses with the ability to perform comfortable gaits (e.g. ambling or pacing), so-called ‘gaited’ horses, have been highly valued by humans, especially for long distance travel. Recently, the causative mutation for gaitedness in horses has been linked to a substitution causing a premature stop codon in the DMRT3 gene (DMRT3_Ser301STOP) [1]. In mice, Dmrt3 is expressed in spinal cord interneurons and plays an important role in the development of limb movement coordination [1]. Genotyping the position in 4396 modern horses from 141 breeds revealed that nowadays the mutated allele is distributed worldwide with an especially high frequency in gaited horses and breeds used for harness racing [2]. Here, we examine historic horse remains for the DMRT3 SNP, tracking the origin of gaitedness to Medieval England between 850 and 900 AD. The presence of the corresponding allele in Icelandic horses (9th–11th century) strongly suggests that ambling horses were brought from the British Isles to Iceland by Norse people. Considering the high frequency of the ambling allele in early Icelandic horses, we believe that Norse settlers selected for this comfortable mode of horse riding soon after arrival. The absence of the allele in samples from continental Europe (including Scandinavia) at this time implies that ambling horses may have spread from Iceland and maybe also the British Isles across the continent at a later date.
DOES AGE INFLUENCE BRAIN POTENTIALS DURING AFFECTIVE PICTURE PROCESSING IN MIDDLE-AGED WOMEN?
(2017)
BACK PAIN: THE STUDY OF MECHANISMS AND THE TRANSLATION IN INTERVENTIONS WITHIN THE MISPEX NETWORK
(2016)
Stress and bone health
(2019)
Speech scientists have long noted that the qualities of naturally produced vowels do not remain constant over their durations, regardless of whether they are nominally "monophthongs" or "diphthongs". Recent acoustic corpora show that there are consistent patterns of first (F1) and second (F2) formant frequency change across different vowel categories. The three Australian English (AusE) close front vowels /iː, ɪ, ɪə/ provide a striking example: while their midpoint or mean F1 and F2 frequencies are virtually identical, their spectral change patterns distinctly differ. The results indicate that, despite the distinct patterns of spectral change of AusE /iː, ɪ, ɪə/ in production, their perceptual relevance is not uniform, but rather vowel-category dependent.
Recent research indicates that non-invasive stimulation of the afferent auricular vagal nerve (tVNS) may modulate various cognitive and affective functions, likely via activation of the locus coeruleus-norepinephrine (LC-NE) system. In a series of ERP studies we found that the attention-related P300 component is enhanced during continuous vagal stimulation, compared to sham, which is also related to increased salivary alpha-amylase levels (a putative indirect marker for central NE activation). In another study, we investigated the effect of continuous tVNS on the late positive potential (LPP), an electrophysiological index of motivated attention toward emotionally evocative cues, and the effects of tVNS on later recognition memory (1-week delay). Here, vagal stimulation prompted earlier LPP differences (300-500 ms) between unpleasant and neutral scenes. During retrieval, vagal stimulation significantly improved memory performance for unpleasant, but not neutral, pictures compared to sham stimulation, which was also related to enhanced salivary alpha-amylase levels. In line with this, unpleasant images encoded under tVNS compared to sham stimulation also produced enhanced ERP old/new differences (500-800 ms) during retrieval, indicating better recollection. Taken together, our studies suggest that tVNS facilitates attention, learning and episodic memory, likely via afferent projections to the arousal-modulating LC-NE system. We will, however, also show data that point to critical stimulation parameters (likely duration and frequency) that need to be considered when applying tVNS.
THE P300 AND THE LC-NE SYSTEM: NEW INSIGHTS FROM TRANSCUTANEOUS VAGUS NERVE STIMULATION (TVNS)
(2017)
Predicting macroscopic elastic rock properties requires detailed information on microstructure
(2017)
Predicting variations in macroscopic mechanical rock behaviour due to microstructural changes, driven by mineral precipitation and dissolution, is necessary to couple chemo-mechanical processes in geological subsurface simulations. We apply 3D numerical homogenization models to estimate Young’s moduli for five synthetic microstructures, and successfully validate our results for comparable geometries with the analytical Mori-Tanaka approach. Further, we demonstrate that considering specific rock microstructures is of paramount importance, since calculated elastic properties may deviate by up to 230 % for the same mineral composition. Moreover, agreement between simulated and experimentally determined Young’s moduli is significantly improved when detailed spatial information is employed.
Preface to BPM 2014
(2016)
JavaScript is the most popular programming language for web applications. Static analysis of JavaScript applications is highly challenging due to its dynamic language constructs and event-driven asynchronous executions, which also give rise to many security-related bugs. Several static analysis tools to detect such bugs exist; however, research has not yet reported much on the precision and scalability trade-off of these analyzers. As a further obstacle, JavaScript programs structured in Node.js modules need to be collected for analysis, but existing bundlers are either specific to their respective analysis tools or not particularly suitable for static analysis.
As a potentially toxic agent on nervous system and bone, the safety of aluminium exposure from adjuvants in vaccines and subcutaneous immune therapy (SCIT) products has to be continuously reevaluated, especially regarding concomitant administrations. For this purpose, knowledge on absorption and disposition of aluminium in plasma and tissues is essential. Pharmacokinetic data after vaccination in humans, however, are not available, and for methodological and ethical reasons difficult to obtain. To overcome these limitations, we discuss the possibility of an in vitro-in silico approach combining a toxicokinetic model for aluminium disposition with biorelevant kinetic absorption parameters from adjuvants. We critically review available kinetic aluminium-26 data for model building and, on the basis of a reparameterized toxicokinetic model (Nolte et al., 2001), we identify main modelling gaps. The potential of in vitro dissolution experiments for the prediction of intramuscular absorption kinetics of aluminium after vaccination is explored. It becomes apparent that there is need for detailed in vitro dissolution and in vivo absorption data to establish an in vitro-in vivo correlation (IVIVC) for aluminium adjuvants. We conclude that a combination of new experimental data and further refinement of the Nolte model has the potential to fill a gap in aluminium risk assessment.
Previous work has shown that surface modification with orthophosphoric acid can significantly enhance the charge stability on polypropylene (PP) surfaces by generating deeper traps. In the present study, thermally stimulated potential-decay measurements revealed that the chemical treatment may also significantly increase the number of available trapping sites on the surface. Thus, as a consequence, the so-called "cross-over" phenomenon, which is observed on as-received and thermally treated PP electrets, may be overcome in a certain range of initial charge densities. Furthermore, the discharge behavior of chemically modified samples indicates that charges can be injected from the treated surface into the bulk, and/or charges of opposite polarity can be pulled from the rear electrode into the bulk at elevated temperatures and at the high electric fields that are caused by the deposited charges. In the bulk, a lack of deep traps causes rapid charge decay already at temperatures around 95 °C.
The maximum entropy method is used to predict flows on water distribution networks. This analysis extends the water distribution network formulation of Waldrip et al. (2016) Journal of Hydraulic Engineering (ASCE), by the use of a continuous relative entropy defined on a reduced parameter set. This reduction in the parameters that the entropy is defined over ensures consistency between different representations of the same network. The performance of the proposed reduced parameter method is demonstrated with a one-loop network case study.
The maximum entropy method is used to derive an alternative gravity model for a transport network. The proposed method builds on previous methods which assign the discrete value of a maximum entropy distribution to equal the traffic flow rate. The proposed method, however, uses a distribution to represent each flow rate. It is shown to handle uncertainty more elegantly and to give similar results to traditional methods. It is able to incorporate more of the observed data through the entropy function, prior distribution and integration limits, potentially allowing better inferences to be made.
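The principle underlying both network studies above can be shown with a toy sketch (a hypothetical junction, not the authors' reduced-parameter or gravity-model formulations): with only mass conservation as a constraint, maximizing the Shannon entropy of the flow shares selects the least-informative, uniform split.

```python
import numpy as np

def flow_entropy(q, q_total):
    """Shannon entropy of the relative flow shares q_i / q_total."""
    p = q / q_total
    return -np.sum(p * np.log(p))

Q_IN = 10.0  # hypothetical total inflow into the junction
# One free parameter: q1 on branch 1, with q2 = Q_IN - q1 by conservation.
q1 = np.linspace(0.5, 9.5, 1801)
H = np.array([flow_entropy(np.array([a, Q_IN - a]), Q_IN) for a in q1])
q1_maxent = q1[np.argmax(H)]  # entropy-maximizing split
```

Adding observed constraints (measured flows, head losses) as Lagrange conditions shifts the maximum away from the uniform split, which is the setting the two abstracts address.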
The nature restoration project ‘Lenzener Elbtalaue’, realised from 2002 to 2011 at the river Elbe, included the first large-scale dike relocation in Germany (420 ha). Its aim was to initiate the development of endangered natural wetland habitats and processes, accompanied by greater biodiversity in the formerly grassland-dominated area. Monitoring the spatial and temporal variations of soil moisture in this dike relocation area is therefore particularly important for estimating the restoration success. The topsoil moisture monitoring from 1990 to 2017 is based on the Soil Moisture Index (SMI) derived with the triangle method from optical remotely sensed data: land surface temperature and the Normalized Difference Vegetation Index are calculated from Landsat 4/5/7/8 data and atmospherically corrected using MODIS data. Spatial and temporal soil moisture variations in the restored dike relocation area are compared to the agricultural and pasture area behind the new dike. Ground-truth data in the dike relocation area were obtained from field measurements in October 2017 with an FDR device. Additionally, data from a TERENO soil moisture sensor network (SoilNet) and mobile cosmic-ray neutron sensing (CRNS) rover measurements are compared to the results of the triangle method for a region in the Harz Mountains (Germany). The SMI time series illustrates that the dike relocation area has become significantly wetter between 1990 and 2017 due to the restructuring measures, whereas the SMI of the dike hinterland reflects constant, drier conditions. An influence of climate is unlikely. However, validation of the dimensionless index with ground-truth measurements is very difficult, mostly due to large differences in scale.
Editorial
(2019)
Background: Evidence that home telemonitoring (HTM) for patients with chronic heart failure (CHF) offers clinical benefit over usual care is controversial, as is evidence of a health economic advantage. Therefore, the CardioBBEAT trial was designed to prospectively assess the health economic impact of a dedicated home monitoring system for patients with CHF based on actual costs directly obtained from patients’ health care providers.
Methods: Between January 2010 and June 2013, 621 patients (mean age 63.0 ± 11.5 years, 88% male) with a confirmed diagnosis of CHF (LVEF ≤ 40%) were enrolled and randomly assigned to two study groups comprising usual care with and without an interactive bi-directional HTM (Motiva®). The primary endpoint was the Incremental Cost-Effectiveness Ratio (ICER) established by the groups’ difference in total cost and in the combined clinical endpoint “days alive and not in hospital nor inpatient care per potential days in study” within the follow-up of 12 months. Secondary outcome measures were total mortality and health-related quality of life (SF-36, WHO-5 and KCCQ).
Results: In the intention-to-treat analysis, total mortality (HR 0.81; 95% CI 0.45 – 1.45) and days alive and not in hospital (343.3 ± 55.4 vs. 347.2 ± 43.9; p = 0.909) were not significantly different between HTM and usual care. While the resulting primary endpoint ICER was not positive (-181.9; 95% CI −1626.2 to 1628.9), quality of life assessed by SF-36, WHO-5 and KCCQ as a secondary endpoint was significantly higher in the HTM group at 6 and 12 months of follow-up.
Conclusions: The first simultaneous assessment of clinical and economic outcome of HTM in patients with CHF did not demonstrate superior incremental cost effectiveness compared to usual care. On the other hand, quality of life was improved. It remains open whether the tested HTM solution represents a useful innovative approach in the recent health care setting.
Cain and Abel
(2021)
The biblical story of Cain and Abel in Genesis 4:1–16 appears as the first case of sibling rivalry in the Torah. It is the starting point of a socio-ethical process of human development within the book of Genesis. The sibling narrative also includes the first report of homicide, more precisely a fratricide, as Cain slays his own brother Abel (Gen 4:8). The Jewish and Christian reception discourse of the Cain and Abel story developed early on to deal with a range of open questions and difficult passages provided by the biblical text. The basic assumptions of Jewish and Christian interpretations are initially similar in terms of attempting to explain God’s preference for Abel’s sacrifice and Cain’s motivation for killing his brother.
Jonah
(2023)
In the Masoretic canon of the Tanakh the book of Jonah appears as the fifth part of Tre Assar, or Twelve Minor Prophets, between Obadiah and Micah. In the Septuagint, on the other hand, Jonah appears as the sixth book in the series, and is followed immediately by Nahum. As both Jonah and Nahum speak out against the city of Nineveh, their chronology became an issue early in their discourses of reception (Liv. Pro. 11:1; Josephus, Ant. 9:239–242; Tg.Nah 1:1).
Social institutions
(2024)
Social institutions are a system of behavioral and relationship patterns that are densely interwoven and enduring and function across an entire society. They order and structure the behavior of individuals in core areas of society and thus have a strong impact on the quality of life of individuals. Institutions regulate the following: (a) family and relationship networks carry out social reproduction and socialization; (b) institutions in the realm of education and training ensure the transmission and cultivation of knowledge, abilities, and specialized skills; (c) institutions in the labor market and economy provide for the production and distribution of goods and services; (d) institutions in the realm of law, governance, and politics provide for the maintenance of the social order; and (e) cultural, media, and religious institutions further the development of contexts of meaning, value orientations, and symbolic codes.
The keynote article (Mayberry & Kluender, 2017) makes an important contribution to questions concerning the existence and characteristics of sensitive periods in language acquisition. Specifically, by comparing groups of non-native L1 and L2 signers, the authors have been able to ingeniously disentangle the effects of maturation from those of early language exposure. Based on L1 versus L2 contrasts, the paper convincingly argues that L2 learning is a less clear test of sensitive periods. Nevertheless, we believe Mayberry and Kluender underestimate the evidence for maturational factors in L2 learning, especially that coming from recent research.
The Gradient Symbolic Computation (GSC) model presented in the keynote article (Goldrick, Putnam & Schwarz) constitutes a significant theoretical development, not only as a model of bilingual code-mixing, but also as a general framework that brings together symbolic grammars and graded representations. The authors are to be commended for successfully integrating a theory of grammatical knowledge with the voluminous research on lexical co-activation in bilinguals. It is, however, unfortunate that a certain conception of bilingualism was inherited from this latter research tradition, one in which the contrast between native and non-native language takes a back seat.
Audit - and then what?
(2019)
Current trends such as digital transformation, the Internet of Things, and Industry 4.0 are challenging the majority of learning factories. Regardless of whether a conventional learning factory, a model factory, or a digital learning factory, traditional approaches such as the monotonous execution of specific instructions do not satisfy learners’ needs, market requirements, or, especially, current technological developments. Contemporary teaching environments need a clear strategy, a road to follow for being able to successfully cope with the changes and develop towards digitized learning factories. This demand-driven necessity of transformation leads to another obstacle: assessing the status quo and developing and implementing adequate action plans. Within this paper, details of a maturity-based audit of the hybrid learning factory in the Research and Application Centre Industry 4.0 and a roadmap derived therefrom for the digitization of a learning factory are presented.
Introduction
(2019)
This book started as a conversation about successful societies and human development. It was originally based on a simple idea— it would be unusual if, in a society that might be reasonably deemed as successful, its citizens were deeply unhappy. This combination— successful societies and happy citizens— raised immediate and obvious problems. How might one define “success” when dealing, for example, with a society as large and as complex as the United States? We ran into equally major problems when trying to understand “happiness.” Yet one constantly hears political analysts talking about the success or failure of various democratic institutions. In ordinary conversations one constantly hears people talking about being happy or unhappy. In the everyday world, conversations about living in a successful society or about being happy do not appear to cause bewilderment or confusion. “Ordinary people” do not appear to find questions like— is your school successful or are you happily married?— meaningless or absurd. Yet, in the social sciences, both “successful societies” and “happy lives” are seen to be troublesome.
As our research into happiness and success unfolded, the conundrums we discussed were threefold: societal conditions, measurements and concepts. What are the key social factors that are indispensable for the social and political stability of any given society? Is it possible to develop precise measures of social success that would give us reliable data? There are a range of economic indicators that might be associated with success, such as labor productivity, economic growth rates, low inflation and a robust GDP. Are there equally reliable political and social measures of a successful society and human happiness? For example, rule of law and the absence of large- scale corruption might be relevant to the assessment of societal happiness. These questions about success led us inexorably to what seems to be a futile notion: happiness. Economic variables such as income or psychological measures of well- being in terms of mental health could be easily analyzed; however, happiness is a dimension that has been elusive to the social sciences.
In our unfolding conversation, there was also another stream of thought, namely that the social sciences appeared to be more open to the study of human unhappiness rather than happiness.
Interactive Close-Up Rendering for Detail plus Overview Visualization of 3D Digital Terrain Models
(2019)
This paper presents an interactive rendering technique for detail+overview visualization of 3D digital terrain models using interactive close-ups. A close-up is an alternative presentation of input data varying with respect to geometrical scale, mapping, appearance, as well as the Level-of-Detail (LOD) and Level-of-Abstraction (LOA) used. The presented 3D close-up approach enables in-situ comparison of multiple Regions-of-Interest (ROIs) simultaneously. We describe a GPU-based rendering technique for the image synthesis of multiple close-ups in real-time.
A fundamental task in 3D geovisualization and GIS applications is the visualization of vector data that can represent features such as transportation networks or land use coverage. Mapping or draping vector data represented by geometric primitives (e.g., polylines or polygons) onto 3D digital elevation or 3D digital terrain models is a challenging task. We present an interactive GPU-based approach that performs geometry-based draping of vector data on a per-frame basis, using only an image-based representation of a 3D digital elevation or terrain model.
CSBAuditor
(2018)
Cloud Storage Brokers (CSB) provide seamless and concurrent access to multiple Cloud Storage Services (CSS) while abstracting cloud complexities from end-users. However, this multi-cloud strategy faces several security challenges, including enlarged attack surfaces, malicious insider threats, security complexities due to the integration of disparate components, and API interoperability issues. Novel security approaches are imperative to tackle these security issues. Therefore, this paper proposes CSBAuditor, a novel cloud security system that continuously audits CSB resources to detect malicious activities and unauthorized changes, e.g. bucket policy misconfigurations, and remediates these anomalies. The cloud state is maintained via a continuous snapshotting mechanism, thereby ensuring fault tolerance. We adopt the principles of chaos engineering by integrating Broker Monkey, a component that continuously injects failure into our reference CSB system, Cloud RAID. Hence, CSBAuditor is continuously tested for efficiency, i.e. its ability to detect the changes injected by Broker Monkey. CSBAuditor employs security metrics for risk analysis by computing severity scores for detected vulnerabilities using the Common Configuration Scoring System, thereby overcoming the limitation of insufficient security metrics in existing cloud auditing schemes. CSBAuditor has been tested using various strategies, including chaos-engineering failure-injection strategies. Our experimental evaluation validates the efficiency of our approach against the aforementioned security issues, with a detection and recovery rate of over 96%.
Cloud storage brokerage is an abstraction aimed at providing value-added services. However, Cloud Service Brokers are challenged by several security issues, including enlarged attack surfaces due to the integration of disparate components and API interoperability issues. Therefore, appropriate security risk assessment methods are required to identify and evaluate these security issues, and to examine the efficiency of countermeasures. A possible approach for satisfying these requirements is the employment of threat modeling concepts, which have been successfully applied in traditional paradigms. In this work, we employ threat models including attack trees, attack graphs and Data Flow Diagrams against a Cloud Service Broker (CloudRAID) and analyze these security threats and risks. Furthermore, we propose an innovative technique for combining Common Vulnerability Scoring System (CVSS) and Common Configuration Scoring System (CCSS) base scores in probabilistic attack graphs to cater for configuration-based vulnerabilities, which are typically leveraged for attacking cloud storage systems. This approach is necessary since existing schemes do not provide sufficient security metrics, which is imperative for comprehensive risk assessments. We demonstrate the efficiency of our proposal by devising CCSS base scores for two common attacks against cloud storage: the Cloud Storage Enumeration Attack and the Cloud Storage Exploitation Attack. These metrics are then used in Attack Graph Metric-based risk assessment. Our experimental evaluation shows that our approach caters for the aforementioned gaps and provides efficient security hardening options. Therefore, our proposals can be employed to improve cloud security.
Microservice Architectures (MSA) structure applications as a collection of loosely coupled services that implement business capabilities. The key advantages of MSA include inherent support for continuous deployment of large complex applications, agility and enhanced productivity. However, studies indicate that most MSA are homogeneous and introduce shared vulnerabilities, and are thus vulnerable to multi-step attacks, which provide economies-of-scale incentives to attackers. In this paper, we address the issue of shared vulnerabilities in microservices with a novel solution based on the concept of Moving Target Defenses (MTD). Our mechanism works by performing risk analysis against microservices to detect and prioritize vulnerabilities. Thereafter, security risk-oriented software diversification is employed, guided by a defined diversification index. The diversification is performed at runtime, leveraging both model- and template-based automatic code generation techniques to automatically transform the programming languages and container images of the microservices. Consequently, the microservice attack surfaces are altered, thereby introducing uncertainty for attackers while reducing the attackability of the microservices. Our experiments demonstrate the efficiency of our solution, with an average success rate of over 70% attack surface randomization.
This paper discusses a new approach for designing and deploying Security-as-a-Service (SecaaS) applications using cloud native design patterns. Current SecaaS approaches do not efficiently handle the increasing threats to computer systems and applications. For example, requests for security assessments drastically increase after a high-risk security vulnerability is disclosed. In such scenarios, SecaaS applications are unable to dynamically scale to serve requests. A root cause of this challenge is the employment of architectures not specifically fitted to cloud environments. Cloud native design patterns resolve this challenge by enabling certain properties, e.g. massive scalability and resiliency, via the combination of microservice patterns and cloud-focused design patterns. However, adopting these patterns is a complex process, during which several security issues are introduced. In this work, we investigate these security issues, and we redesign and deploy a monolithic SecaaS application using cloud native design patterns while considering appropriate, layered security counter-measures, i.e. at the application and cloud networking layers. Our prototype implementation outperforms traditional, monolithic applications with an average Scanner Time of 6 minutes, without compromising security. Our approach can be employed for designing secure, scalable and performant SecaaS applications that effectively handle unexpected increases in security assessment requests.
The ionospheric delay of global navigation satellite system (GNSS) signals is typically compensated by adding a single correction value to the pseudorange measurement of a GNSS receiver. Yet, this neglects the dispersive nature of the ionosphere. In this context, we analyze the ionospheric signal distortion beyond a constant delay. These effects become increasingly significant with the signal bandwidth and hence more important for new broadband navigation signals. Using measurements of the Galileo E5 signal, captured with a high-gain antenna, we verify that the expected influence can indeed be observed and compensated. A new method to estimate the total electron content (TEC) from a single-frequency high-gain antenna measurement of a broadband GNSS signal is proposed and described in detail. Because of the narrow aperture angle of the antenna, the received signal is practically unaffected by multipath and interference, which should reduce these error sources in general. We would like to point out that such measurements are independent of code correlation, as used in standard receiver applications, and the method is therefore also usable without knowledge of the signal coding. Results of the TEC estimation process are shown and discussed in comparison with common TEC products such as TEC maps and dual-frequency receiver estimates.
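For background, the f^-2 dispersion that TEC estimation relies on can be illustrated with the classic two-frequency inversion; the single-frequency broadband method described above exploits the same dispersion across one signal's bandwidth. The carrier frequencies are the Galileo E5a/E5b values, while the TEC value is an illustrative assumption:

```python
C = 299_792_458.0    # speed of light, m/s
K = 40.3             # ionospheric dispersion constant, m^3 s^-2

def group_delay(tec, f):
    """First-order ionospheric group delay (s) at carrier frequency f (Hz)."""
    return K * tec / (C * f * f)

def tec_from_delays(tau1, tau2, f1, f2):
    """Invert the delay difference between two frequencies into TEC (electrons/m^2)."""
    return C * (tau1 - tau2) / (K * (1.0 / f1 ** 2 - 1.0 / f2 ** 2))

f_e5a, f_e5b = 1176.45e6, 1207.14e6   # Galileo E5a/E5b carrier frequencies, Hz
tec_true = 2.0e17                      # 20 TECU, an illustrative ionospheric state
tau_a = group_delay(tec_true, f_e5a)
tau_b = group_delay(tec_true, f_e5b)
print(tec_from_delays(tau_a, tau_b, f_e5a, f_e5b) / 1e16)  # recovers ~20 TECU
```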
What Stays in Mind?
(2018)
Recent advances in high-throughput sequencing experiments and their theoretical descriptions have driven the fast dynamics of the "chromatin and epigenetics" field, with new concepts appearing at a high rate. This field includes, but is not limited to, the study of DNA-protein-RNA interactions, chromatin packing properties at different scales, regulation of gene expression and protein trafficking in the cell nucleus, binding site search in the crowded chromatin environment, and modulation of physical interactions by covalent chemical modifications of the binding partners. The current special issue does not pretend to cover the field in full, but rather aims to capture its development and provide a snapshot of the most recent concepts and approaches. The eighteen open-access articles comprising this issue provide a delicate balance between current theoretical and experimental biophysical approaches to uncovering chromatin structure and understanding epigenetic regulation, allowing free flow of new ideas and preliminary results.
Subject-oriented learning
(2019)
The transformation to a digitized company changes not only the work but also the social context for the employees and requires, inter alia, new knowledge and skills from them. Additionally, individual action problems arise. This contribution proposes the subject-oriented learning theory, in which the employees' action problems are the starting point of training activities in learning factories. In this contribution, the subject-oriented learning theory is exemplified and the respective advantages for vocational training in learning factories are pointed out both theoretically and practically. In particular, the learners' individual action problems and the infrastructure are emphasized as starting points for learning processes and competence development.
We study the parameter sensitivity of hetero-polymeric DNA within the purview of DNA breathing dynamics. The degree of correlation between the mean bubble size and the model parameters is estimated for this purpose for three different DNA sequences. The analysis leads us to a better understanding of the sequence-dependent nature of the breathing dynamics of hetero-polymeric DNA. Out of the 14 model parameters for DNA stability in the statistical Poland-Scheraga approach, the hydrogen bond interaction epsilon(hb)(AT) for an AT base pair and the ring factor turn out to be the most sensitive parameters. In addition, the stacking interaction epsilon(st)(TA-TA) for a TA-TA nearest-neighbor pair of base pairs is found to be the most sensitive one among all stacking interactions. Moreover, we also establish that it is the nature of a stacking interaction, not the number of times it appears in a sequence, that has a deciding effect on the DNA breathing dynamics. We show that the sensitivity analysis can be used as an effective measure to guide a stochastic optimization technique to find the kinetic rate constants related to the dynamics, as opposed to the case where the rate constants are found using the conventional, unbiased way of optimization.
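The correlation-based sensitivity measure can be sketched as follows. The linear toy function standing in for the simulated mean bubble size, and its coefficients, are assumptions for illustration only; in the study the observable comes from Poland-Scheraga breathing-dynamics simulations:

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient between two samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def mean_bubble_size(eps_hb_at, ring_factor):
    """Toy stand-in for the simulated observable (coefficients are invented)."""
    return 3.0 - 1.5 * eps_hb_at + 0.2 * ring_factor

# Sample the two parameters over the same range and correlate with the output
random.seed(1)
eps = [random.uniform(0.5, 1.5) for _ in range(200)]
ring = [random.uniform(0.5, 1.5) for _ in range(200)]
sizes = [mean_bubble_size(e, r) for e, r in zip(eps, ring)]
print(pearson(eps, sizes), pearson(ring, sizes))  # the eps_hb(AT) analogue dominates
```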
The relentless improvement of silicon photonics is making optical interconnects and networks appealing for use in miniaturized systems, where electrical interconnects cannot keep up with the growing levels of core integration due to bandwidth density and power efficiency limitations. At the same time, solutions such as 3D stacking or 2.5D integration open the door to a fully dedicated process optimization for the photonic die. However, an architecture-level integration challenge arises between the electronic network and the optical one in such tightly-integrated parallel systems. It consists of adapting signaling rates, matching the different levels of communication parallelism, handling cross-domain flow control, addressing re-synchronization concerns, and avoiding protocol-dependent deadlock. The associated energy and performance overhead may offset the inherent benefits of the emerging technology itself. This paper explores a hybrid CMOS-ECL bridge architecture between 3D-stacked technology-heterogeneous networks-on-chip (NoCs). The different ways of overcoming the serialization challenge (i.e., through an improvement of the signaling rate and/or through space-/wavelength division multiplexing options) give rise to a configuration space that the paper explores, in search of the most energy-efficient configuration for high performance.
A Cloud Storage Broker (CSB) provides value-added cloud storage services for enterprise usage by leveraging a multi-cloud storage architecture. However, this raises several challenges for managing resources and their access control across multiple Cloud Service Providers (CSPs) for authorized CSB stakeholders. In this paper we propose a unified cloud access control model that provides an abstraction of the CSPs' services for centralized and automated cloud resource and access control management in multiple CSPs. Our proposal offers role-based access control for CSB stakeholders to access cloud resources by assigning the necessary privileges and access control lists for cloud resources and CSB stakeholders, respectively, following the privilege separation concept and the least privilege principle. We implement our unified model in a CSB system called CloudRAID for Business (CfB), and the evaluation shows that it provides system- and cloud-level security services for CfB as well as centralized resource and access control management in multiple CSPs.
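A minimal sketch of the role-based, least-privilege idea: each role carries only the minimal privilege set it needs, and any action outside that set is denied. Role names and privilege strings are hypothetical, not CfB's actual interface:

```python
# Each role maps to its minimal privilege set (least privilege principle);
# administrative and operational duties are kept apart (privilege separation).
ROLE_PRIVILEGES = {
    "csb-admin":    {"bucket:create", "bucket:delete", "object:read", "object:write"},
    "csb-operator": {"object:read", "object:write"},
    "csb-auditor":  {"object:read"},
}

def is_allowed(role, action):
    """Grant an action only if the role's privilege set explicitly contains it."""
    return action in ROLE_PRIVILEGES.get(role, set())

print(is_allowed("csb-auditor", "object:read"))    # True
print(is_allowed("csb-auditor", "object:write"))   # False: deny by default
```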
Unified logging system for monitoring multiple cloud storage providers in cloud storage broker
(2018)
With the increasing demand for personal and enterprise data storage services, a Cloud Storage Broker (CSB) provides cloud storage service using multiple Cloud Service Providers (CSPs) with guaranteed Quality of Service (QoS), such as data availability and security. However, monitoring cloud storage usage across multiple CSPs has become a challenge for CSBs due to the lack of a standardized logging format for cloud services, which causes each CSP to implement its own format. In this paper we propose a unified logging system that can be used by a CSB to monitor cloud storage usage across multiple CSPs. We gather cloud storage log files from three different CSPs and normalise these into our proposed log format, which can then be used for further analysis. We show that our work enables a coherent view suitable for data navigation, monitoring, and analytics.
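The normalisation step can be sketched as a field mapping into a unified schema. The provider identifiers and field names below are hypothetical placeholders, not the actual CSP log formats:

```python
# Hypothetical per-provider field names mapped onto a unified schema
FIELD_MAP = {
    "csp_a": {"time": "eventTime", "operation": "eventName", "object": "key"},
    "csp_b": {"time": "timestamp", "operation": "method",    "object": "resource"},
    "csp_c": {"time": "ts",        "operation": "op",        "object": "path"},
}

def normalize(provider, raw):
    """Translate a provider-specific log entry into the unified schema."""
    mapping = FIELD_MAP[provider]
    entry = {"provider": provider}
    for unified, source in mapping.items():
        entry[unified] = raw[source]
    return entry

a = normalize("csp_a", {"eventTime": "2018-01-01T00:00:00Z", "eventName": "GET", "key": "report.pdf"})
b = normalize("csp_b", {"timestamp": "2018-01-01T00:00:05Z", "method": "PUT", "resource": "report.pdf"})
print(a["operation"], b["object"])  # -> GET report.pdf
```

Once all entries share one schema, cross-provider navigation and analytics reduce to queries over a single log stream.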
Prevention of Cognitive Decline: A Physical Exercise Perspective on Brain Health in the Long Run
(2016)
S-test results for the USGS and RELM forecasts. The differences between the simulated log-likelihoods and the observed log-likelihood are labelled on the horizontal axes, with scaling adjustments for the 40year.retro experiment. The horizontal lines represent the confidence intervals, at the 0.05 significance level, for each forecast and experiment. If this range contains a log-likelihood difference of zero, the forecasted log-likelihoods are consistent with the observed ones, and the forecast passes the S-test (denoted by thin lines). If this range does not contain zero, the forecast fails the S-test for that particular experiment (denoted by thick lines). Colours distinguish between experiments (see Table 2 for an explanation of experiment durations). Due to anomalously large likelihood differences, S-test results for Wiemer-Schorlemmer.ALM during the 10year.retro and 40year.retro experiments are not displayed. The range of log-likelihoods for the Holliday-et-al.PI forecast is lower than for the other forecasts due to relatively homogeneous forecasted seismicity rates and the use of a small fraction of the RELM testing region.
The Schwarzenberg mining district in the western Erzgebirge hosts numerous skarn-hosted tin-polymetallic deposits, such as Breitenbrunn. The St. Christoph mine is located in the Breitenbrunn deposit and is the locus typicus of christophite, an iron-rich sphalerite variety that can be associated with indium enrichment. This study presents a revision of the paragenetic scheme, a contribution to understanding the indium behavior and potential, and a discussion of the origin of the sulfur. This was achieved through reflected light microscopy, SEM-based MLA, EPMA, and bulk mineral sulfur isotope analysis on 37 sulfide-rich skarn samples from a mineral collection. The paragenetic scheme includes: a pre-mineralization stage of anhydrous calc-silicates and hydrous minerals; an oxide stage, dominated by magnetite; and a sulfide stage of predominantly sphalerite with minor pyrite, chalcopyrite, arsenopyrite, and galena. Some sphalerite samples present elevated indium contents of up to 0.44 wt%. Elevated iron contents (4-10 wt%) in sphalerite can be tentatively linked to increased indium incorporation, but further analyses are required. The analyzed sulfides exhibit homogeneous δ34S values (−1 to +2 ‰ VCDT), assumed to be post-magmatic. They correlate with other Fe-Sn-Zn-Cu-In skarn deposits in the western Erzgebirge and with Permian vein-hosted associations throughout the Erzgebirge region.
The rapid digitalization of the Facility Management (FM) sector has increased the demand for mobile, interactive analytics approaches concerning the operational state of a building. These approaches provide the key to increasing stakeholder engagement associated with Operation and Maintenance (O&M) procedures of living and working areas, buildings, and other built environment spaces. We present a generic and fast approach to process and analyze given 3D point clouds of typical indoor office spaces to create corresponding up-to-date approximations of classified segments and object-based 3D models that can be used to analyze, record and highlight changes of spatial configurations. The approach is based on machine-learning methods used to classify the scanned 3D point cloud data using 2D images. This approach can be used to primarily track changes of objects over time for comparison, allowing for routine classification, and presentation of results used for decision making. We specifically focus on classification, segmentation, and reconstruction of multiple different object types in a 3D point-cloud scene. We present our current research and describe the implementation of these technologies as a web-based application using a services-oriented methodology.
The supercritical Hopf bifurcation is one of the simplest ways in which a stationary state of a nonlinear system can undergo a transition to stable self-sustained oscillations. At the bifurcation point, a small-amplitude limit cycle is born, which already at onset displays a finite frequency. If we consider a reaction-diffusion system that undergoes a supercritical Hopf bifurcation, its dynamics is described by the complex Ginzburg-Landau equation (CGLE). Here, we study such a system in the parameter regime where the CGLE shows spatio-temporal chaos. We review a type of time-delay feedback methods which is suitable to suppress chaos and replace it by other spatio-temporal solutions such as uniform oscillations, plane waves, standing waves, and the stationary state.
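A minimal explicit-Euler sketch of the 1D CGLE with a global time-delay feedback term of the form mu*(mean A(t − tau) − mean A(t)), illustrating how such a control term enters the equation. All parameter values are illustrative assumptions and are not tuned to achieve chaos suppression:

```python
import random

# dA/dt = A + (1 + i c1) A_xx - (1 + i c2) |A|^2 A + F(t), periodic domain
N, dx, dt = 64, 1.0, 0.01
c1, c2 = 2.0, -1.2            # Benjamin-Feir unstable regime (1 + c1*c2 < 0)
mu, delay_steps = 0.3, 50     # feedback gain and delay tau/dt (illustrative)

random.seed(0)
A = [0.1 * complex(random.random() - 0.5, random.random() - 0.5) for _ in range(N)]
mean_history = [sum(A) / N]   # record of the spatial mean, used for the delay term

def step(field, feedback):
    """One explicit-Euler step of the CGLE with a spatially uniform feedback."""
    new = []
    for j in range(N):
        lap = (field[(j - 1) % N] - 2 * field[j] + field[(j + 1) % N]) / dx ** 2
        rhs = (field[j] + (1 + 1j * c1) * lap
               - (1 + 1j * c2) * abs(field[j]) ** 2 * field[j] + feedback)
        new.append(field[j] + dt * rhs)
    return new

for _ in range(1000):
    delayed = mean_history[max(0, len(mean_history) - 1 - delay_steps)]
    A = step(A, mu * (delayed - mean_history[-1]))
    mean_history.append(sum(A) / N)

print(max(abs(a) for a in A))  # amplitude stays bounded by the cubic saturation
```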
The coupling between molecular excitations and nanoparticles leads to promising applications. It is, for example, used to enhance the optical cross-section of molecules in surface enhanced Raman scattering, Purcell enhancement or plasmon enhanced dye lasers. In a coupled system new resonances emerge, resulting from the original plasmon (ωpl) and exciton (ωex) resonances, as

ω± = ½(ωpl + ωex) ± √( ¼(ωpl − ωex)² + g² ),   (1)

where g is the coupling parameter. Hence, the new resonances show a separation of Δ = ω+ − ω−, and the coupling strength can be deduced from the minimum separation of the two resonances, Ω = Δ(ωpl = ωex).
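At zero detuning, Eq. (1) makes the minimal separation explicit: setting ωpl = ωex ≡ ω0 removes the detuning term under the square root, leaving

```latex
\omega_\pm = \omega_0 \pm g, \qquad
\Omega = \omega_+ - \omega_- = 2g ,
```

so the minimum splitting Ω measures the coupling parameter g directly.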
The electromagnetic coupling of molecular excitations to plasmonic nanoparticles offers a promising method to manipulate light-matter interaction at the nanoscale. Plasmonic nanoparticles foster exceptionally high coupling strengths due to their capacity to strongly concentrate the light field into sub-wavelength mode volumes. A particularly interesting coupling regime occurs if the coupling increases to a level at which the coupling strength surpasses all damping rates in the system. In this so-called strong-coupling regime, hybrid light-matter states emerge which can no longer be divided into separate light and matter components. These hybrids unite the features of the original components and possess new resonances whose positions are separated by the Rabi splitting energy ħΩ. Detuning the resonance of one of the components leads to an anticrossing of the two arising branches of the new resonances ω+ and ω−, with a minimal separation of Ω = ω+ − ω−.
Massive Open Online Courses (MOOCs) have left their mark on the face of education during recent years. At the Hasso Plattner Institute (HPI) in Potsdam, Germany, we are actively developing a MOOC platform, which provides our research with a plethora of e-learning topics, such as learning analytics, automated assessment, peer assessment, teamwork, online proctoring, and gamification. We run several instances of this platform. On openHPI, we provide our own courses from within the HPI context. Further instances are openSAP, openWHO, and mooc.HOUSE, which is the smallest of these platforms, targeting customers with a less extensive course portfolio. In 2013, we started to work on the gamification of our platform. By now, we have implemented about two thirds of the features that we had initially evaluated as useful for our purposes. About a year ago, we activated the implemented gamification features on mooc.HOUSE. Before activating the features on openHPI as well, we examined and re-evaluated our initial considerations based on the data we had collected so far and the changes in other contexts of our platforms.
MOOCs in Secondary Education
(2019)
Computer science education in German schools is often less than optimal. It is only mandatory in a few of the federal states and there is a lack of qualified teachers. As a MOOC (Massive Open Online Course) provider with a German background, we developed the idea to implement a MOOC addressing pupils in secondary schools to fill this gap. The course targeted high school pupils and enabled them to learn the Python programming language. In 2014, we successfully conducted the first iteration of this MOOC with more than 7000 participants. However, the share of pupils in the course was not quite satisfactory. So we conducted several workshops with teachers to find out why they had not used the course to the extent that we had imagined. The paper at hand explores and discusses the steps we have taken in the following years as a result of these workshops.
The ability to work in teams is an important skill in today's work environments. In MOOCs, however, team work, team tasks, and graded team-based assignments play only a marginal role. To close this gap, we have been exploring ways to integrate graded team-based assignments in MOOCs. Some goals of our work are to determine simple criteria to match teams in a volatile environment and to enable a frictionless online collaboration for the participants within our MOOC platform. The high dropout rates in MOOCs pose particular challenges for team work in this context. By now, we have conducted 15 MOOCs containing graded team-based assignments in a variety of topics. The paper at hand presents a study that aims to establish a solid understanding of the participants in the team tasks. Furthermore, we attempt to determine which team compositions are particularly successful. Finally, we examine how several modifications to our platform's collaborative toolset have affected the dropout rates and performance of the teams.
This Research-to-Practice paper examines the practical application of various forms of collaborative learning in MOOCs. Since 2012, about 60 MOOCs in the wider context of Information Technology and Computer Science have been conducted on our self-developed MOOC platform. The platform is also used by several customers, who either run their own platform instances or use our white label platform. We, as well as some of our partners, have experimented with different approaches to collaborative learning in these courses. Based on the results of early experiments, surveys amongst our participants, and requests by our business partners, we have integrated several options for collaborative learning into the system. The results of our experiments are fed directly back into platform development, allowing us to fine-tune existing tools and to add new ones where necessary. In the paper at hand, we discuss the benefits and disadvantages of design decisions in a MOOC with regard to the various forms of collaborative learning. While the focus of the paper is on forms of large-group collaboration, two types of small-group collaboration on our platforms are briefly introduced.
Dielectrophoretic functionalization of nanoelectrode arrays for the detection of influenza viruses
(2017)
Our Conclusions
(2018)
Charges dropped
(2015)
Cold regulated protein 15A (COR15A) is a nuclear-encoded, intrinsically disordered protein found in Arabidopsis thaliana. It belongs to the Late Embryogenesis Abundant (LEA) family of proteins and is responsible for increased freezing tolerance in plants. COR15A is intrinsically disordered in dilute solutions and adopts a helical structure upon dehydration or in the presence of co-solutes such as TFE and ethylene glycol. This helical structure is thought to be important for protecting plants from dehydration induced by freezing. Multiple protein sequence alignments revealed the presence of several conserved glycine residues that we hypothesize keep COR15A from becoming helical in dilute solutions. Using AGADIR, the change in helical content of COR15A when these conserved glycine residues were mutated to alanine residues was predicted. Based on the predictions, glycine-to-alanine mutants were made at position 68 and at positions 54, 68, 81, and 84. Labeled samples of wild-type COR15A and the mutant proteins were purified, and NMR experiments were performed to examine any structural changes induced by the mutations. To test the effects of dehydration on the structure of COR15A, trifluoroethanol (TFE), an alcohol-based co-solvent proposed to induce or stabilize helical structure in peptides, was added to the NMR samples; the results showed an increase in helical content compared to the samples without TFE. To test the functional differences between wild-type and the mutants, liposome leakage assays were performed. The results from these assays suggest that the more helical mutants may augment membrane stability.
High Mountain Asia provides water for more than a billion downstream users. Many catchments receive the majority of their yearly water budget in the form of snow, the vast majority of which is not monitored by the sparse weather networks. We leverage passive microwave data from the SSMI series of satellites (SSMI, SSMI/S, 1987-2016), reprocessed to 3.125 km resolution, to examine trends in the volume and spatial distribution of snow-water equivalent (SWE) in the Indus Basin. We find that the majority of the Indus has seen an increase in snow-water storage. There exists a strong elevation-trend relationship, where high-elevation zones have more positive SWE trends. Negative trends are confined to the Himalayan foreland and deeply-incised valleys which run into the Upper Indus. This implies a temperature-dependent cutoff below which precipitation increases are not translated into increased SWE. Earlier snowmelt or a higher percentage of liquid precipitation could both explain this cutoff [1]. Earlier work [2] found a negative snow-water storage trend for the entire Indus catchment over the period 1987-2009 (-4 x 10^-3 mm/yr). In this study, based on seven additional years of data, the average trend reverses to 1.4 x 10^-3 mm/yr. This implies that the decade since the mid-2000s was likely wetter, which positively impacted long-term SWE trends. This conclusion is supported by an analysis of snowmelt onset and end dates, which found that while long-term trends are negative, more recent (since 2005) trends are positive (moving later in the year) [3].
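Trend estimates of this kind reduce to a least-squares slope over a yearly time series. A sketch with synthetic SWE values (not SSMI data); the series shape, a mild decline followed by a wetter decade, only mimics the behaviour described above:

```python
def ols_slope(years, values):
    """Ordinary least-squares trend (units of `values` per year)."""
    n = len(years)
    my = sum(years) / n
    mv = sum(values) / n
    num = sum((y - my) * (v - mv) for y, v in zip(years, values))
    den = sum((y - my) ** 2 for y in years)
    return num / den

years = list(range(1987, 2017))
# synthetic SWE series (mm): slight decline until 2005, wetter decade afterwards
swe = [100.0 - 0.2 * (y - 1987) if y < 2005 else 96.4 + 0.5 * (y - 2005)
       for y in years]
print(ols_slope(years, swe))  # overall trend in mm/yr
```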
Capsella
(2018)