This chapter investigates the trajectory of establishing the Forest Stewardship Council (FSC) in the early 1990s as the first private transnational certification organization with an antagonistic stakeholder body. Its main contribution is a micro-analysis of the founding assembly in 1993. By investigating the role of brokers within the negotiation as one institutional scope condition for ‘arguing’ having occurred, the chapter adopts a dramaturgical approach. It contends that the authority of brokers is not necessarily institutionally given, but needs to be gained: brokers have to prove situationally that their knowledge is relevant and that they are speaking impartially in the interest of progress rather than their own. The chapter stresses the importance of procedural knowledge which brokers provide in contrast to policy knowledge.
It has been known for decades that the winds of massive stars are inhomogeneous (i.e. clumped). To properly model observed spectra of massive star winds, it is necessary to incorporate the 3-D nature of clumping into radiative transfer calculations. In this paper we present our full 3-D Monte Carlo radiative transfer code for inhomogeneous expanding stellar winds. We use a set of parameters to describe both the dense and the rarefied wind components. At the same time, we account for non-monotonic velocity fields. We show how 3-D density and velocity wind inhomogeneities strongly affect resonance line formation. We also show how wind clumping can resolve the discrepancy between P v and H alpha mass-loss rate diagnostics.
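The porosity effect behind the last point can be illustrated with a toy Monte Carlo model (not the paper's code; the cell count, opacity, and volume filling factor below are arbitrary illustrative values): concentrating the same mean opacity into a few dense clumps lets more photons escape between them.

```python
import math
import random

def mean_escape(n_cells, tau_mean, filling_factor, n_rays=20000, seed=1):
    """Average photon escape probability along random sight lines.

    Each sight line crosses n_cells cells; a cell hosts a clump with
    probability filling_factor.  The clump opacity is scaled so the mean
    optical depth per cell equals tau_mean regardless of clumping."""
    rng = random.Random(seed)
    tau_clump = tau_mean / filling_factor
    total = 0.0
    for _ in range(n_rays):
        # optical depth accumulated along one sight line
        tau = sum(tau_clump for _ in range(n_cells)
                  if rng.random() < filling_factor)
        total += math.exp(-tau)
    return total / n_rays

# Same mean optical depth (20 cells x 0.2 = 4), very different escape:
smooth = mean_escape(20, 0.2, filling_factor=1.0)
clumped = mean_escape(20, 0.2, filling_factor=0.1)
```

With the mean optical depth held fixed, the clumped medium transmits far more radiation than the smooth one, which is the qualitative reason clumping changes line-based mass-loss diagnostics.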
A balance to death
(2018)
Leaf senescence plays a crucial role in nutrient recovery in late-stage plant development and requires vast transcriptional reprogramming by transcription factors such as ORESARA1 (ORE1). A proteolytic mechanism is now found to control ORE1 degradation, and thus senescence, during nitrogen starvation.
We analyze the problem of response suggestion in a closed domain along a real-world scenario of a digital library. We present a text-processing pipeline to generate question-answer pairs from chat transcripts. On this limited amount of training data, we compare retrieval-based, conditioned-generation, and dedicated representation learning approaches for response suggestion. Our results show that retrieval-based methods that strive to find similar, known contexts are preferable over parametric approaches from the conditioned-generation family, when the training data is limited. We, however, identify a specific representation learning approach that is competitive to the retrieval-based approaches despite the training data limitation.
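A minimal sketch of the retrieval-based idea, assuming a simple bag-of-words representation (the paper's actual pipeline and models are more elaborate): suggest the stored answer whose question is most similar to the incoming query.

```python
import math
from collections import Counter

def _vec(text):
    """Bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def _cosine(a, b):
    dot = sum(n * b[t] for t, n in a.items())
    norm = math.sqrt(sum(n * n for n in a.values())) * \
           math.sqrt(sum(n * n for n in b.values()))
    return dot / norm if norm else 0.0

def suggest_response(query, qa_pairs):
    """Return the stored answer whose question best matches the query."""
    q = _vec(query)
    question, answer = max(qa_pairs, key=lambda p: _cosine(q, _vec(p[0])))
    return answer
```

This is exactly the "find similar, known contexts" strategy that the paper reports to work well under limited training data.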
Increasing demand for analytical processing capabilities can be managed by replication approaches. However, evenly balancing the replicas' workload shares while at the same time minimizing the data replication factor is a highly challenging allocation problem. As optimal solutions are only applicable for small problem instances, effective heuristics are indispensable. In this paper, we test and compare state-of-the-art allocation algorithms for partial replication. By visualizing and exploring their (heuristic) solutions for different benchmark workloads, we are able to derive structural insights and to detect an algorithm's strengths as well as its potential for improvement. Further, our application enables end-to-end evaluations of different allocations to verify their theoretical performance.
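The allocation problem can be illustrated with a deliberately simple greedy heuristic (a sketch over assumed inputs, not one of the state-of-the-art algorithms compared in the paper): assign each query to the replica that needs the fewest new fragments, breaking ties by the lower current load.

```python
def allocate(queries, n_replicas):
    """Greedy allocation of query workloads to replica nodes.

    queries: list of (cost, fragment_set) tuples.  Each query goes to
    the replica that must add the fewest new fragments (keeping the
    replication factor low); ties are broken by the lower current load
    (balancing the workload shares)."""
    replicas = [{"load": 0.0, "fragments": set()} for _ in range(n_replicas)]
    for cost, fragments in sorted(queries, key=lambda q: -q[0]):
        i = min(range(n_replicas),
                key=lambda r: (len(fragments - replicas[r]["fragments"]),
                               replicas[r]["load"]))
        replicas[i]["load"] += cost
        replicas[i]["fragments"] |= fragments
    return replicas
```

Even this crude heuristic exposes the core tension: storing fewer fragment copies concentrates queries on few nodes, while balancing load tends to spread fragments.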
Microservice Architectures (MSA) structure applications as a collection of loosely coupled services that implement business capabilities. The key advantages of MSA include inherent support for continuous deployment of large complex applications, agility, and enhanced productivity. However, studies indicate that most MSA are homogeneous and introduce shared vulnerabilities, leaving them open to multi-step attacks and offering economies-of-scale incentives to attackers. In this paper, we address the issue of shared vulnerabilities in microservices with a novel solution based on the concept of Moving Target Defense (MTD). Our mechanism works by performing risk analysis against microservices to detect and prioritize vulnerabilities. Thereafter, security risk-oriented software diversification is employed, guided by a defined diversification index. The diversification is performed at runtime, leveraging both model- and template-based automatic code generation techniques to automatically transform the programming languages and container images of the microservices. Consequently, the microservices' attack surfaces are altered, introducing uncertainty for attackers while reducing the attackability of the microservices. Our experiments demonstrate the efficiency of our solution, with an average attack-surface randomization success rate of over 70%.
A Landscape for Case Models
(2019)
Case Management is a paradigm to support knowledge-intensive processes. The different approaches developed for modeling these types of processes tend to result in scattered models due to the low abstraction level at which the inherently complex processes are therein represented. Thus, readability and understandability are more challenging than for traditional process models. By reviewing existing proposals in the field of process overviews and case models, this paper extends a case modeling language - the fragment-based Case Management (fCM) language - with the goal of modeling knowledge-intensive processes from a higher abstraction level - to generate a so-called fCM landscape. This proposal is empirically evaluated via an online experiment. Results indicate that interpreting an fCM landscape might be more effective and efficient than interpreting an informationally equivalent case model.
Low back pain (LBP) is a leading cause of activity limitation. Objective assessment of the spinal motion plays a key role in diagnosis and treatment of LBP. We propose a method that facilitates clinical assessment of lower back motions by means of a wireless inertial sensor network. The sensor units are attached to the right and left side of the lumbar region, the pelvis and the thighs, respectively. Since magnetometers are known to be unreliable in indoor environments, we use only 3D accelerometer and 3D gyroscope readings. Compensation of integration drift in the horizontal plane is achieved by estimating the gyroscope biases from automatically detected initial rest phases. For the estimation of sensor orientations, both a smoothing algorithm and a filtering algorithm are presented. From these orientations, we determine three-dimensional joint angles between the thighs and the pelvis and between the pelvis and the lumbar region. We compare the orientations and joint angles to measurements of an optical motion tracking system that tracks each skin-mounted sensor by means of reflective markers. Eight subjects perform a neutral initial pose, then flexion/extension, lateral flexion, and rotation of the trunk. The root mean square deviation between inertial and optical angles is about one degree for angles in the frontal and sagittal plane and about two degrees for angles in the transverse plane (both values averaged over all trials). We choose five features that characterize the initial pose and the three motions. Interindividual differences of all features are found to be clearly larger than the observed measurement deviations. These results indicate that the proposed inertial sensor-based method is a promising tool for lower back motion assessment.
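The rest-phase bias estimation can be sketched in one dimension (illustrative only; the paper estimates full 3D orientations with dedicated smoothing and filtering algorithms): average the gyroscope readings during the detected rest phase and subtract that bias before integrating.

```python
def estimate_bias(gyro, rest_end):
    """Mean angular rate over the automatically detected initial rest
    phase (samples 0..rest_end), used as the gyroscope bias estimate."""
    rest = gyro[:rest_end]
    return sum(rest) / len(rest)

def integrate_angle(gyro, dt, bias=0.0):
    """Integrate the bias-corrected angular rate to an angle (rad)."""
    angle = 0.0
    for omega in gyro:
        angle += (omega - bias) * dt
    return angle

# 1 s at rest (bias only), then 1 s rotating at 1 rad/s, sampled at 100 Hz:
gyro = [0.05] * 100 + [1.05] * 100
bias = estimate_bias(gyro, rest_end=100)
```

Without the bias correction, the integrated angle drifts by the bias times the elapsed time, which is exactly the heading drift the rest-phase estimation suppresses.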
General intelligence has a substantial genetic background in children, adolescents, and adults, but environmental factors also strongly correlate with cognitive performance, as evidenced by a strong (up to one SD) increase in average intelligence test results in the second half of the previous century. This change occurred in a period apparently too short to accommodate radical genetic changes. This strongly suggests that environmental factors interact with the genotype by modifying epigenetic factors that regulate gene expression and thus contribute to individual malleability. This modification may also be reflected in recent observations of an association between dopamine-dependent encoding of reward prediction errors and cognitive capacity, which was modulated by adverse life events.
Recently, Kocyan & Wiland-Szymańska (2016) published a thorough research article on one of the outstanding members of the family Hypoxidaceae on the Seychelles, which resulted in the establishment of a new genus (Friedmannia Kocyan & Wiland-Szymańska 2016: 60) to accommodate the former Curculigo seychellensis Bojer ex Baker (1877: 368). However, it has turned out that the name Friedmannia Chantanachat & Bold (1962: 45) already exists in the literature for a green alga, which renders the new hypoxid genus illegitimate (Melbourne Code; McNeill et al. 2012). Therefore, we assign a new generic name to Curculigo seychellensis.
3D point cloud technology facilitates the automated and highly detailed digital acquisition of real-world environments such as assets, sites, cities, and countries; the acquired 3D point clouds represent an essential category of geodata used in a variety of geoinformation applications and systems. In this paper, we present a web-based system for the interactive and collaborative exploration and inspection of arbitrarily large 3D point clouds. Our approach is based on standard WebGL on the client side and is able to render 3D point clouds with billions of points. It uses spatial data structures and level-of-detail representations to manage the 3D point cloud data and to deploy out-of-core and web-based rendering concepts. By providing functionality for both thin-client and thick-client applications, the system scales for client devices that are vastly different in computing capabilities. Different 3D point-based rendering techniques and post-processing effects are provided to enable task-specific and data-specific filtering and highlighting, e.g., based on per-point surface categories or temporal information. A set of interaction techniques allows users to collaboratively work with the data, e.g., by measuring distances and areas, by annotating, or by selecting and extracting data subsets. Additional value is provided by the system's ability to display additional, context-providing geodata alongside 3D point clouds and to integrate task-specific processing and analysis operations. We have evaluated the presented techniques and the prototype system with different data sets from aerial, mobile, and terrestrial acquisition campaigns with up to 120 billion points to show their practicality and feasibility.
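A level-of-detail representation of the kind mentioned above can be sketched with simple grid-based subsampling (an illustrative stand-in; the system itself relies on more sophisticated spatial data structures and out-of-core management): keep one representative point per grid cell, with coarser cells for cheaper levels.

```python
def lod_subsample(points, cell_size):
    """One level of a level-of-detail hierarchy: keep a single
    representative point per grid cell.  Larger cells give a coarser,
    cheaper-to-render approximation of the 3D point cloud."""
    cells = {}
    for p in points:
        key = tuple(int(c // cell_size) for c in p)
        cells.setdefault(key, p)  # the first point in a cell is kept
    return list(cells.values())
```

A renderer can precompute such levels for increasing cell sizes and pick the coarsest level whose on-screen point density still satisfies the target quality, which is the essence of scaling from thick clients down to thin ones.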
The rapid digitalization of the Facility Management (FM) sector has increased the demand for mobile, interactive analytics approaches concerning the operational state of a building. These approaches provide the key to increasing stakeholder engagement associated with Operation and Maintenance (O&M) procedures of living and working areas, buildings, and other built environment spaces. We present a generic and fast approach to process and analyze given 3D point clouds of typical indoor office spaces to create corresponding up-to-date approximations of classified segments and object-based 3D models that can be used to analyze, record, and highlight changes of spatial configurations. The approach is based on machine-learning methods used to classify the scanned 3D point cloud data using 2D images. This approach can be primarily used to track changes of objects over time for comparison, allowing for routine classification and presentation of results used for decision making. We specifically focus on classification, segmentation, and reconstruction of multiple different object types in a 3D point-cloud scene. We present our current research and describe the implementation of these technologies as a web-based application using a services-oriented methodology.
Rapid advances in location-acquisition technologies have led to large amounts of trajectory data. This data is the foundation for a broad spectrum of services driven and improved by trajectory data mining. However, for hybrid transactional and analytical workloads, the storing and processing of rapidly accumulated trajectory data is a non-trivial task. In this paper, we present a detailed survey of state-of-the-art trajectory data management systems. To determine the relevant aspects and requirements for such systems, we developed a trajectory data mining framework, which summarizes the different steps in the trajectory data mining process. Based on the derived requirements, we analyze different concepts to store, compress, index, and process spatio-temporal data. There are various trajectory management systems, which are optimized for scalability, data footprint reduction, elasticity, or query performance. To get a comprehensive overview, we describe and compare different existing systems. Additionally, the observed similarities in the general structure of different systems are consolidated in a general blueprint of trajectory management systems.
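Trajectory compression, one of the concepts analyzed, can be illustrated with the classic Douglas-Peucker algorithm (a common line-simplification method, shown here as a 2-D sketch rather than any particular system's implementation): points that deviate less than a tolerance from the simplified line are dropped.

```python
def _dist(p, a, b):
    """Perpendicular distance of point p from the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    return abs(dy * px - dx * py + bx * ay - by * ax) / (dx * dx + dy * dy) ** 0.5

def douglas_peucker(points, epsilon):
    """Recursively keep only points deviating more than epsilon."""
    if len(points) < 3:
        return list(points)
    dists = [_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= epsilon:          # everything close to the chord
        return [points[0], points[-1]]
    left = douglas_peucker(points[:i + 1], epsilon)
    return left[:-1] + douglas_peucker(points[i:], epsilon)
```

Such lossy compression trades positional accuracy for data footprint, which is exactly the kind of design decision the surveyed systems make differently.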
We present a prototype of an integrated reasoning environment for educational purposes. The presented tool is a fragment of a proof assistant and automated theorem prover. We describe the existing and planned functionality of the theorem prover and especially the functionality of the educational fragment. This currently supports working with terms of the untyped lambda calculus and addresses both undergraduate students and researchers. We show how the tool can be used to support the students' understanding of functional programming and discuss general problems related to the process of building theorem proving software that aims at supporting both research and education.
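Working with terms of the untyped lambda calculus, as the educational fragment supports, can be sketched with a tiny normal-order evaluator (illustrative only, and deliberately naive: it assumes distinct variable names and performs no capture-avoiding renaming).

```python
# Terms: ('var', name) | ('lam', name, body) | ('app', fn, arg)

def substitute(term, name, value):
    """Replace free occurrences of `name` in `term` by `value`."""
    kind = term[0]
    if kind == 'var':
        return value if term[1] == name else term
    if kind == 'lam':
        if term[1] == name:              # the binder shadows `name`
            return term
        return ('lam', term[1], substitute(term[2], name, value))
    return ('app', substitute(term[1], name, value),
                   substitute(term[2], name, value))

def _step(term):
    """One leftmost-outermost reduction step, or None if in normal form."""
    kind = term[0]
    if kind == 'app':
        fn, arg = term[1], term[2]
        if fn[0] == 'lam':               # beta redex
            return substitute(fn[2], fn[1], arg)
        r = _step(fn)
        if r is not None:
            return ('app', r, arg)
        r = _step(arg)
        if r is not None:
            return ('app', fn, r)
    elif kind == 'lam':
        r = _step(term[2])
        if r is not None:
            return ('lam', term[1], r)
    return None

def normalize(term, fuel=1000):
    """Reduce to normal form, if one is reached within `fuel` steps."""
    for _ in range(fuel):
        nxt = _step(term)
        if nxt is None:
            return term
        term = nxt
    raise RuntimeError("no normal form found within the fuel limit")
```

Stepping a term one beta reduction at a time, rather than jumping to the result, is what makes such a fragment useful for teaching functional programming.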
High-throughput RNA sequencing (RNAseq) produces large data sets containing expression levels of thousands of genes. The analysis of RNAseq data leads to a better understanding of gene functions and interactions, which eventually helps to study diseases like cancer and develop effective treatments. Large-scale RNAseq expression studies on cancer comprise samples from multiple cancer types and aim to identify their distinct molecular characteristics. Analyzing samples from different cancer types implies analyzing samples of different tissue origin. Such multi-tissue RNAseq data sets require a meaningful analysis that accounts for the inherent tissue-related bias: The identified characteristics must not originate from the differences in tissue types, but from the actual differences in cancer types. However, current analysis procedures do not incorporate that aspect. Therefore, we propose to integrate tissue-awareness into the analysis of multi-tissue RNAseq data. We introduce an extension for gene selection that provides a tissue-wise context for every gene and can be flexibly combined with any existing gene selection approach. We suggest expanding conventional evaluation by additional metrics that are sensitive to the tissue-related bias. Evaluations show that especially low-complexity gene selection approaches profit from introducing tissue-awareness.
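The tissue-wise context can be illustrated by standardizing each gene within each tissue group before selection (a minimal sketch of the general idea; the extension proposed in the paper is more flexible than this):

```python
from statistics import mean, pstdev

def tissuewise_zscore(expression, tissues):
    """Standardize each gene within each tissue group, so tissue-level
    offsets cannot dominate a subsequent gene selection step.

    expression: gene -> list of values; tissues: tissue label per sample."""
    groups = {}
    for i, t in enumerate(tissues):
        groups.setdefault(t, []).append(i)
    result = {}
    for gene, values in expression.items():
        z = [0.0] * len(values)
        for idx in groups.values():
            sub = [values[i] for i in idx]
            m, s = mean(sub), pstdev(sub)
            for i in idx:
                z[i] = (values[i] - m) / s if s else 0.0
        result[gene] = z
    return result
```

A gene whose variation is purely a tissue offset becomes flat after this step, while a gene that also varies within tissues keeps its signal, so a downstream variance-based selector no longer rewards the tissue-related bias.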
The selection of initial points, the number of clusters, and proper cluster centers are still the main challenges in clustering processes. In this paper, we suggest a genetic algorithm-based method which searches several solution spaces simultaneously. The solution spaces are population groups consisting of elements with similar structure. Elements in a group have the same size, while elements in different groups are of different sizes. The proposed algorithm processes the population in groups of chromosomes with one gene, two genes, up to k genes. These genes hold the corresponding information about the cluster centers. In the proposed method, the crossover and mutation operators can accept parents of different sizes; this can lead to versatility in the population and information transfer among sub-populations. We implemented the proposed method and evaluated its performance against some random datasets as well as the Ruspini dataset. The experimental results show that the proposed method can effectively determine the appropriate number of clusters and recognize their centers. Overall, this research implies that using a heterogeneous population in the genetic algorithm can lead to better results.
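The size-mixing crossover can be sketched as follows (an illustrative one-point variant; the operators and representation details in the paper may differ): each parent is cut at its own point, so children can have lengths different from both parents, moving cluster-center information between sub-populations.

```python
import random

def variable_crossover(parent_a, parent_b, rng=None):
    """One-point crossover between parents that may hold different
    numbers of cluster centers (genes).  Child lengths can differ from
    both parents, transferring information among sub-populations."""
    rng = rng or random.Random()
    cut_a = rng.randint(1, len(parent_a))   # each parent has its own cut
    cut_b = rng.randint(1, len(parent_b))
    child1 = parent_a[:cut_a] + parent_b[cut_b:]
    child2 = parent_b[:cut_b] + parent_a[cut_a:]
    return child1, child2
```

Whatever the cut points, the two children together carry exactly the genes of the two parents, so no cluster-center information is lost by the operator itself.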
Abrupt monsoon transitions as seen in paleorecords can be explained by moisture-advection feedback
(2016)
Chronic ankle instability (CAI) is not only an ankle issue, but also affects the sensorimotor system. People with CAI show altered muscle activation in proximal joints such as the hip and knee. However, evidence is limited, as controversial results have been presented regarding changes in the activation of hip muscles in the CAI population. PURPOSE: To investigate the effect of CAI on the activity of hip muscles during normal walking and walking with perturbations. METHODS: 8 subjects with CAI (23 ± 2 years, 171 ± 7 cm and 65 ± 4 kg) and 8 controls (CON) matched by age, height, weight and dominant leg (25 ± 3 years, 172 ± 7 cm and 65 ± 6 kg) walked shod on a split-belt treadmill (1 m/s). Subjects performed 5 minutes of baseline walking and 6 minutes of walking with 10 perturbations (at 200 ms after heel contact, with a 42 m/s² deceleration impulse) on each side. Electromyography signals from gluteus medius (Gmed) and gluteus maximus (Gmax) were recorded while walking. Muscle amplitudes (root mean square normalized to maximum voluntary isometric contraction) were calculated at 200 ms before heel contact (Pre200) and 100 ms after heel contact (Post100) during normal walking, and 200 ms after perturbations (Pert200). Differences between groups were examined using the Mann-Whitney U test with Bonferroni correction to account for multiple testing (adjusted α level p ≤ 0.0125). RESULTS: In Gmed, the CAI group showed lower muscle amplitude than the CON group after heel contact (Post100: 18±7 % and 47±21 %, p< .01) and after walking perturbations (Pert200: 31±13 % and 62±26 %, p< .01), but not before heel contact (Pre200: 5±2 % and 11±10 %, p= 0.195). In Gmax, no difference was found between the CAI and CON groups at any of the three time points (Pre200: 12±5 % and 17±12 %, p= 0.574; Post100: 41±21 % and 41±13 %, p= 1.00; Pert200: 79±46 % and 62±35 %, p= 0.505).
CONCLUSION: People with CAI activated Gmed less than healthy controls in the feedback mechanism (after heel contact and after walking perturbations), but not in the feedforward mechanism (before heel contact). Lower activation of Gmed may affect balance in the frontal plane and increase the risk of recurrent ankle sprains, episodes of giving way, or feelings of ankle instability in patients with CAI during walking. Future studies should investigate the effect of Gmed strengthening or neuromuscular training on CAI rehabilitation.
Industry 4.0 and the Internet of Things are recent developments that have led to the creation of new kinds of manufacturing data. Linking this new kind of sensor data to traditional business information is crucial for enterprises to take advantage of the data's full potential. In this paper, we present a demo which allows experiencing this data integration, both vertically between technical and business contexts and horizontally along the value chain. The tool simulates a manufacturing company, continuously producing both business and sensor data, and supports issuing ad-hoc queries that answer specific questions related to the business. In order to adapt to different environments, users can configure the sensor characteristics to their needs.
Eighteen scientists met at Jurata, Poland, to discuss various aspects of the transition from adolescence to adulthood. This transition is a delicate period marked by complex interactions between adolescents and the social group they belong to. Social identity, group identification and identity signalling, but also stress affecting basal salivary cortisol rhythms, hypertension, inappropriate nutrition causing latent and manifest obesity, and, in developing and under-developed countries, parasitosis causing anaemia and thereby impairing growth and development, are issues to be dealt with during this period of human development. In addition, some new aspects of the association between weight, height and head circumference in newborns were discussed, as well as intrauterine head growth and head circumference as health risk indicators.
Adsorption of amino acids on the magnetite-(111)-surface: a force field study (vol 19, 851, 2013)
(2016)
Working in iterations and repeatedly improving team workflows based on collected feedback is fundamental to agile software development processes. Scrum, the most popular agile method, provides dedicated retrospective meetings to reflect on the last development iteration and to decide on process improvement actions. However, agile methods do not prescribe how these improvement actions should be identified, managed or tracked in detail. The approaches to detect and remove problems in software development processes are therefore often based on intuition and prior experiences and perceptions of team members. Previous research in this area has focused on approaches to elicit a team's improvement opportunities as well as measurements regarding the work performed in an iteration, e.g. Scrum burn-down charts. Little research deals with the quality and nature of identified problems or how progress towards removing issues is measured. In this research, we investigate how agile development teams in the professional software industry organize their feedback and process improvement approaches. In particular, we focus on the structure and content of improvement and reflection meetings, i.e. retrospectives, and their outcomes. Researching how the vital mechanism of process improvement is implemented in practice in modern software development leads to a more complete picture of agile process improvement.
alt'ai is an agent-based simulation inspired by the aesthetics, culture and environmental conditions of the Altai mountain region on the borders between Russia, Kazakhstan, China and Mongolia. It is set in a scenario of a remote automated landscape populated by sentient machines, where biological species, machines and environments autonomously interact to produce unforeseeable visual outputs. It poses the question of designing future machine-to-machine authentication protocols based on the use of images encoding agent behavior, and the simulation provides a rich visual perspective on this challenge. The project pleads for a heavily aestheticized approach to design practice and highlights the importance of productively inefficient and information-redundant systems.
An energy consumption model for multimodal wireless sensor networks based on wake-up radio receivers
(2018)
Energy consumption is a major concern in Wireless Sensor Networks. A significant waste of energy occurs due to the idle listening and overhearing problems, which are typically avoided by turning off the radio, while no transmission is ongoing. The classical approach for allowing the reception of messages in such situations is to use a low-duty-cycle protocol, and to turn on the radio periodically, which reduces the idle listening problem, but requires timers and usually unnecessary wakeups. A better solution is to turn on the radio only on demand by using a Wake-up Radio Receiver (WuRx). In this paper, an energy model is presented to estimate the energy saving in various multi-hop network topologies under several use cases, when a WuRx is used instead of a classical low-duty-cycling protocol. The presented model also allows for estimating the benefit of various WuRx properties like using addressing or not.
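The flavor of such an energy model can be sketched with two simplified formulas (the power and timing figures below are illustrative assumptions, not from the paper; a full model would also cover transmission, wake-up decoding, and state-transition costs):

```python
def duty_cycle_energy(t_total, period, t_on, p_rx, p_sleep):
    """Node that wakes every `period` seconds and listens for `t_on`,
    paying the idle-listening cost even when no message arrives."""
    t_listen = (t_total / period) * t_on
    return t_listen * p_rx + (t_total - t_listen) * p_sleep

def wurx_energy(t_total, n_messages, t_msg, p_rx, p_wurx):
    """Node with an always-on wake-up receiver: the main radio is only
    powered for the n_messages actual receptions."""
    t_listen = n_messages * t_msg
    return t_listen * p_rx + t_total * p_wurx

# One hour, 10 received messages, illustrative power figures in watts:
e_ldc = duty_cycle_energy(3600, period=1.0, t_on=0.01, p_rx=0.06, p_sleep=1e-5)
e_wurx = wurx_energy(3600, n_messages=10, t_msg=0.01, p_rx=0.06, p_wurx=1e-5)
```

For sparse traffic, the WuRx node's energy is dominated by the tiny always-on receiver power, while the duty-cycled node keeps paying for wakeups that receive nothing, which is the saving the model quantifies across topologies and use cases.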
Mobile sensing technology allows us to investigate human behaviour on a daily basis. In this study, we examined temporal orientation, which refers to the capacity of thinking or talking about personal events in the past and future. We utilise the mksense platform, which allows us to use the experience-sampling method. Individuals' thoughts and their relationship with smartphone Bluetooth data are analysed to understand in which contexts people are influenced by social environments, such as the people they spend the most time with. As an exploratory study, we analyse the influence of social conditions through a collection of Bluetooth data and survey information from participants' smartphones. Preliminary results show that people are likely to focus on past events when interacting with closely related people, and on future planning when interacting with strangers. Similarly, people experience present temporal orientation when accompanied by known people. We believe that these findings are linked to emotions since, in its most basic state, emotion is a state of physiological arousal combined with an appropriate cognition. In this contribution, we envision a smartphone application for automatically inferring human emotions based on the user's temporal orientation using Bluetooth sensors, briefly elaborate on the influential factors of temporal orientation episodes, and conclude with a discussion and lessons learned.
An Information System Supporting the Eliciting of Expert Knowledge for Successful IT Projects
(2018)
In order to guarantee the success of an IT project, it is necessary for a company to possess expert knowledge. The difficulty arises when experts no longer work for the company but their knowledge is still needed to realise an IT project. In this paper, we present the ExKnowIT information system, which supports the eliciting of expert knowledge for successful IT projects and consists of the following modules: (1) the identification of experts for successful IT projects, (2) the eliciting of expert knowledge on completed IT projects, (3) the expert knowledge base on completed IT projects, (4) the Group Method of Data Handling (GMDH) algorithm, and (5) new knowledge in support of decisions regarding the selection of a manager for a new IT project. The added value of our system is that these three approaches, namely, the elicitation of expert knowledge, the success of an IT project, and the discovery of new knowledge gleaned from the expert knowledge base, otherwise known as the decision model, complement each other.
E-commerce marketplaces are highly dynamic, with constant competition. While this competition is challenging for many merchants, it also provides plenty of opportunities, e.g., by allowing them to automatically adjust prices in order to react to changing market situations. For practitioners, however, testing automated pricing strategies is time-consuming and potentially hazardous when done in production. Researchers, on the other hand, struggle to study how pricing strategies interact under heavy competition. As a consequence, we built an open continuous-time framework to simulate dynamic pricing competition, called Price Wars. The microservice-based architecture provides a scalable platform for large competitions with dozens of merchants and a large random stream of consumers. Our platform stores each event in a distributed log. This makes it possible to provide various performance measures, enabling users to compare the profit and revenue of different repricing strategies in real time. For researchers, price trajectories are shown, which eases evaluating the mutual price reactions of competing strategies. Furthermore, merchants can access historical marketplace data and apply machine learning. By providing a set of customizable, artificial merchants, users can easily simulate both simple rule-based strategies and sophisticated data-driven strategies that use demand learning to optimize their pricing.
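A simple rule-based repricing strategy of the kind the platform supports can be sketched as follows (`step` and `margin` are hypothetical parameters for illustration, not taken from the Price Wars implementation): undercut the cheapest competitor, but never price below a minimum margin over cost.

```python
def undercut_strategy(own_cost, competitor_prices, step=0.01, margin=0.05):
    """Price one `step` below the cheapest competitor, but never below a
    minimum margin over cost; with no competitors, charge a high price."""
    floor = own_cost * (1 + margin)
    if not competitor_prices:
        return round(floor * 2, 2)  # monopoly case: take a high price
    return round(max(min(competitor_prices) - step, floor), 2)
```

Two merchants running this same rule against each other drive prices down step by step until both sit at their cost floors, a price-war dynamic that the simulated price trajectories make directly visible.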
Xenikoudakis et al. report a partial mitochondrial genome of the extinct giant beaver Castoroides and estimate the origin of aquatic behavior in beavers to approximately 20 million years. This time estimate coincides with the extinction of terrestrial beavers and raises the question whether the two events had a common cause.
The one-tube osmotic fragility (OF) test is a rapid test widely used for thalassemia screening in countries with limited resources. The test has an important limitation in that its accuracy relies on the observer's experience.
The iCheck Turbidity is a prototype of a portable nephelometer developed by BioAnalyt (BioAnalyt GmbH, Germany). In this study, we assessed the applicability of the iCheck Turbidity for checking the turbidity of the OF test.
ASEDS
(2018)
The massive adoption of social media has provided new ways for individuals to express their opinions and emotions online. In 2016, Facebook introduced a new feature that allows users to express their psychological emotions regarding published content using so-called Facebook reactions. In this paper, a framework for predicting the distribution of Facebook post reactions is presented. For this purpose, we collected a large number of Facebook posts together with their reaction labels using the proposed scalable Facebook crawler. The training process utilizes 3 million labeled posts from more than 64,000 unique Facebook pages in diverse categories. The evaluation on standard benchmarks using the proposed features shows promising results compared to previous research. The final model is able to predict the reaction distribution of Facebook posts with a recall score of 0.90 for the "Joy" emotion.
Manufacturing industries are undergoing a major paradigm shift towards more autonomy, and automated planning and scheduling is becoming a necessity. The Planning and Execution Competition for Logistics Robots in Simulation held at ICAPS is based on this scenario and provides an interesting testbed. However, the posed problem is challenging, as also demonstrated by the somewhat weak results in 2017. The domain requires temporal reasoning and dealing with uncertainty. We propose a novel planning system based on Answer Set Programming and the Clingo solver to tackle these problems and to incentivize robot cooperation. Our results show a significant performance improvement, both in terms of lower computational requirements and better game metrics.
Aspirin inhibits release of platelet-derived sphingosine-1-phosphate in acute myocardial infarction
(2013)
Cost models play an important role in the efficient implementation of software systems. These models can be embedded in operating systems and execution environments to optimize execution at run time. Even though non-uniform memory access (NUMA) architectures dominate today's server landscape, there is still a lack of parallel cost models that represent NUMA systems sufficiently. Therefore, the existing NUMA models are analyzed, and a two-step performance assessment strategy is proposed that incorporates low-level hardware counters as performance indicators. To support the two-step strategy, multiple tools are developed, all accumulating and enriching specific hardware event counter information, to explore, measure, and visualize these low-overhead performance indicators. The tools are showcased and discussed alongside specific experiments in the realm of performance assessment.
High-dimensional data is particularly useful for data analytics research. In the healthcare domain, for instance, high-dimensional data analytics has been used successfully for drug discovery. Yet, in order to adhere to privacy legislation, data analytics service providers must guarantee anonymity for data owners. In the context of high-dimensional data, ensuring privacy is challenging because increased data dimensionality must be matched by an exponential growth in the size of the data to avoid sparse datasets. Syntactically anonymising sparse datasets with methods that rely on statistical significance makes obtaining sound and reliable results a challenge. As such, strong privacy is only achievable at the cost of high information loss, rendering the data unusable for data analytics. In this paper, we make two contributions to addressing this problem from both the privacy and information loss perspectives. First, we show that by identifying dependencies between attribute subsets we can eliminate privacy-violating attributes from the anonymised dataset. Second, to minimise information loss, we employ a greedy search algorithm to determine and eliminate maximal partial unique attribute combinations. Thus, one only needs to find the minimal set of identifying attributes to prevent re-identification. Experiments on a health cloud based on the SAP HANA platform using a semi-synthetic medical history dataset comprised of 109 attributes demonstrate the effectiveness of our approach.
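The core idea of searching for minimal identifying attribute combinations can be sketched as follows (the function names, the enumeration strategy, and the toy records are illustrative assumptions, not the paper's actual greedy algorithm):

```python
# Sketch: find the smallest attribute subsets under which some record's
# projection is unique, i.e. the subsets that enable re-identification.

from itertools import combinations

def unique_rows(records, attrs):
    """Return indices of records whose projection onto `attrs` is unique."""
    seen = {}
    for i, rec in enumerate(records):
        key = tuple(rec[a] for a in attrs)
        seen.setdefault(key, []).append(i)
    return [ids[0] for ids in seen.values() if len(ids) == 1]

def minimal_identifying_sets(records, attributes, max_size=2):
    """Smallest attribute subsets that re-identify at least one record."""
    found = []
    for size in range(1, max_size + 1):
        for attrs in combinations(attributes, size):
            # skip supersets of an already-found identifying set (minimality)
            if any(set(f) <= set(attrs) for f in found):
                continue
            if unique_rows(records, attrs):
                found.append(attrs)
    return found

records = [
    {"age": 34, "zip": "14469", "sex": "f"},
    {"age": 34, "zip": "14469", "sex": "m"},
    {"age": 51, "zip": "14469", "sex": "f"},
]
print(minimal_identifying_sets(records, ["age", "zip", "sex"]))
# [('age',), ('sex',)]
```

With 109 attributes, exhaustive enumeration is infeasible, which is why a greedy search over partial unique attribute combinations is needed in practice.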
Audit - and then what?
(2019)
Current trends such as digital transformation, the Internet of Things, and Industry 4.0 are challenging the majority of learning factories. Regardless of whether it is a conventional learning factory, a model factory, or a digital learning factory, traditional approaches such as the monotonous execution of specific instructions no longer satisfy learners' needs, market requirements, or current technological developments. Contemporary teaching environments need a clear strategy, a road to follow, in order to cope successfully with these changes and develop towards digitized learning factories. This demand-driven necessity of transformation leads to another obstacle: assessing the status quo and developing and implementing adequate action plans. Within this paper, details of a maturity-based audit of the hybrid learning factory in the Research and Application Centre Industry 4.0, and a roadmap for the digitization of a learning factory derived from it, are presented.
Since 2008, an automated procedure has been in use at the University of Potsdam to determine the rupture parameters of large earthquakes in quasi-real time, i.e. a few minutes after an event has occurred, and to make them available to the public via the Internet. The system is intended to be integrated into the German-Indonesian Tsunami Early Warning System (GITEWS), for which it is specifically configured. In particular, we determine the duration and extent of an earthquake as well as its rupture velocity and direction. For this we use the seismograms of the first-arriving P waves recorded at broadband stations at teleseismic distances from the event, together with conventional array techniques in partly modified form. Semblance is used as a similarity measure to compare the seismograms of a station network. In the case of an earthquake, the semblance, referenced to the hypocentre at origin time and during the rupture process, is clearly elevated and concentrated in both time and space. By combining the results of several station networks, we achieve independence from the source characteristics and a spatio-temporal resolution that allows the above parameters to be derived. In our contribution we outline the method. Using the two M8.0 Bengkulu earthquakes (Sumatra, Indonesia) of 12 September 2007 and the M8.0 Sichuan event (China) of 12 May 2008, we demonstrate the achievable resolution and compare the results of the automated real-time application with subsequent offline calculations. Furthermore, we provide a web page that presents and animates the results; it can be shown, for example, on computer terminals in geoscience institutions. The web pages have the following addresses: http://www.geo.uni-potsdam.de/arbeitsgruppen/Geophysik_Seismologie/forschung/ruptrack/openday http://www.geo.uni-potsdam.de/arbeitsgruppen/Geophysik_Seismologie/forschung/ruptrack
We use seismic array methods (semblance analysis) to image areas of seismic energy release in the Sunda Arc region and world-wide. Broadband seismograms at teleseismic distances (30° ≤ Δ ≤ 100°) are compared at several subarrays, and the semblance maps of the different subarrays are multiplied. High semblance tracked over long times (tens of seconds to minutes) and long distances indicates the locations of earthquakes. The method allows resolution of rupture characteristics important for tsunami early warning: start and duration, velocity and direction, length and area. The method has been successfully applied to recent and historic events (M > 6.5) and is now operational in real time. Results are obtained shortly after source time (see http://www.geo.uni-potsdam.de/Forschung/Geophysik/GITEWS/tsunami.htm). Manual and automatic processing are in good agreement, and the computational effort is small. Automatic results may be obtained within 15-20 minutes after event occurrence.
The classification of vulnerabilities is a fundamental step in deriving formal attributes that allow a deeper analysis. It is therefore required that this classification be performed in a timely and accurate manner. Since the current situation demands manual interaction in the classification process, timely processing becomes a serious issue. We thus propose an automated alternative to manual classification, because the number of vulnerabilities identified per day can no longer be processed manually. We implemented two different approaches that automatically classify vulnerabilities based on the vulnerability description. We evaluated our approaches, which use Neural Networks and Naive Bayes methods respectively, on publicly known vulnerabilities.
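One of the two mentioned approaches, Naive Bayes over vulnerability descriptions, can be sketched from scratch as a simple bag-of-words model (the example descriptions and class labels below are invented; the paper's actual features and preprocessing may differ):

```python
# Minimal multinomial Naive Bayes classifier over whitespace-tokenised text,
# with Laplace smoothing to handle words unseen during training.

import math
from collections import Counter, defaultdict

class NaiveBayesText:
    def fit(self, texts, labels):
        self.class_counts = Counter(labels)          # documents per class
        self.word_counts = defaultdict(Counter)      # word counts per class
        self.vocab = set()
        for text, label in zip(texts, labels):
            words = text.lower().split()
            self.word_counts[label].update(words)
            self.vocab.update(words)
        return self

    def predict(self, text):
        total_docs = sum(self.class_counts.values())
        best, best_score = None, -math.inf
        for label, n_docs in self.class_counts.items():
            score = math.log(n_docs / total_docs)    # log prior
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for word in text.lower().split():
                # +1 Laplace smoothing avoids zero probabilities
                score += math.log((self.word_counts[label][word] + 1) / denom)
            if score > best_score:
                best, best_score = label, score
        return best

clf = NaiveBayesText().fit(
    ["buffer overflow in parser", "sql injection in login form",
     "stack overflow via crafted input", "injection through unsanitised query"],
    ["memory", "injection", "memory", "injection"],
)
print(clf.predict("heap overflow in image parser"))  # memory
```

A production system would replace the toy corpus with CVE-style descriptions and richer features, but the classification principle is the same.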
BACK PAIN: THE STUDY OF MECHANISMS AND THE TRANSLATION IN INTERVENTIONS WITHIN THE MISPEX NETWORK
(2016)
Beacon in the Dark
(2018)
The large amount of heterogeneous data in such email corpora renders experts' investigations by hand infeasible. Auditors or journalists, e.g., who are looking for irregular or inappropriate content or suspicious patterns, are in desperate need of computer-aided exploration tools to support their investigations.
We present our Beacon system for the exploration of such corpora at different levels of detail. A distributed processing pipeline combines text mining methods and social network analysis to augment the already semi-structured nature of emails. The user interface ties into the resulting cleaned and enriched dataset. For the interface design we identify three objectives expert users have: gain an initial overview of the data to identify leads to investigate, understand the context of the information at hand, and have meaningful filters to iteratively focus onto a subset of emails. To this end we make use of interactive visualisations based on rearranged and aggregated extracted information to reveal salient patterns.
Beware of SMOMBIES
(2018)
Several studies have evaluated a user's style of walking for the verification of a claimed identity and showed high authentication accuracies in many settings. In this paper we present a system that successfully verifies a user's identity based on many real-world smartphone placements and previously unconsidered interactions while walking. Our contribution is the division of all considered activities into three distinct subsets and a specific one-class Support Vector Machine per subset. Using sensor data of 30 participants collected in a semi-supervised study, we show that unsupervised verification is possible with very low false-acceptance and false-rejection rates. We furthermore show that these subsets can be distinguished with high accuracy and demonstrate that the system can be deployed on off-the-shelf smartphones.
Beyond Surveys
(2018)
Seek and destroy: Filtration schemes and self-detoxifying protective fabrics based on the Zr(IV)-containing metal-organic frameworks (MOFs) MOF-808 and UiO-66 doped with LiOtBu have been developed that capture and hydrolytically detoxify simulants of nerve agents and mustard gas. Both MOFs function as highly catalytic elements in these applications.
We study the parameter sensitivity of hetero-polymeric DNA within the purview of DNA breathing dynamics. The degree of correlation between the mean bubble size and the model parameters is estimated for this purpose for three different DNA sequences. The analysis leads us to a better understanding of the sequence-dependent nature of the breathing dynamics of hetero-polymeric DNA. Out of the 14 model parameters for DNA stability in the statistical Poland-Scheraga approach, the hydrogen bond interaction epsilon(hb)(AT) for an AT base pair and the ring factor turn out to be the most sensitive parameters. In addition, the stacking interaction epsilon(st)(TA-TA) for a TA-TA nearest-neighbor pair of base pairs is found to be the most sensitive one among all stacking interactions. Moreover, we also establish that it is the nature of a stacking interaction, not the number of times a particular stacking interaction appears in a sequence, that has a deciding effect on the DNA breathing dynamics. We show that the sensitivity analysis can be used as an effective measure to guide a stochastic optimization technique to find the kinetic rate constants related to the dynamics, as opposed to the conventional unbiased way of optimization.
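The correlation-based sensitivity idea can be illustrated with a toy example (the observable below is synthetic; in the paper it would be the mean bubble size obtained from breathing-dynamics simulations, and the parameter an interaction energy such as epsilon(hb)(AT)):

```python
# Toy sensitivity analysis: sample a model parameter, compute a mock
# observable that depends on it, and measure the Pearson correlation.
# A correlation near 1 or -1 marks the parameter as highly sensitive.

import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(1)
# sampled "parameter" values and a mock observable that depends strongly on them
eps_hb = [random.uniform(0.0, 1.0) for _ in range(200)]
observable = [2.0 * e + random.gauss(0.0, 0.1) for e in eps_hb]
print(round(pearson(eps_hb, observable), 2))  # close to 1: highly sensitive
```

Repeating this per parameter, while the others vary too, ranks the parameters by how strongly they steer the observable, which is the essence of the screening described above.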
Bridging the Gap
(2019)
The recent restructuring of the electricity grid (i.e., the smart grid) introduces a number of challenges for today's large-scale computing systems. To operate reliably and efficiently, computing systems must not only adhere to technical limits (i.e., thermal constraints) but also reduce operating costs, for example by increasing their energy efficiency. Efforts to improve energy efficiency, however, are often hampered by inflexible software components that hardly adapt to the underlying hardware characteristics. In this paper, we propose an approach to bridge the gap between inflexible software and heterogeneous hardware architectures. Our proposal introduces adaptive software components that dynamically adapt to heterogeneous processing units (i.e., accelerators) at runtime to improve the energy efficiency of computing systems.
Pancreatic secretory zymogen-granule membrane glycoprotein 2 (GP2) has been identified as a major autoantigenic target in Crohn's disease patients. It was recently reported that a long and a short isoform of GP2 exist, whereby the short isoform is more often detected by GP2-specific autoantibodies. In the context of inflammatory bowel diseases, these GP2-specific autoantibodies are discussed as new serological markers for diagnosis and therapeutic monitoring. To investigate this further, camelid nanobodies were generated by phage display and selected against the short isoform of GP2 in order to obtain specific tools for discriminating between the two isoforms. Nanobodies are single-domain antibodies derived from camelid heavy-chain-only antibodies and are characterized by high stability and solubility. The selected candidates were expressed, purified, and validated regarding their binding properties in different enzyme-linked immunosorbent assay formats, immunofluorescence, immunohistochemistry, and surface plasmon resonance spectroscopy. Four different nanobodies could be selected, of which three recognize the short isoform of GP2 very specifically and one showed a high binding capacity for both isoforms. The KD values measured for all nanobodies were between 2.3 pM and 1.3 nM, indicating highly specific binders suitable for application as diagnostic tools in inflammatory bowel disease.
New Public Governance (NPG) as a paradigm for collaborative forms of public service delivery and Blockchain governance are trending topics for researchers and practitioners alike. Thus far, each topic has, on the whole, been discussed separately. This paper presents the preliminary results of ongoing research which aims to shed light on the more concrete benefits of Blockchain for the purpose of NPG. For the first time, a conceptual analysis is conducted on process level to spot benefits and limitations of Blockchain-based governance. Per process element, Blockchain key characteristics are mapped to functional aspects of NPG from a governance perspective. The preliminary results show that Blockchain offers valuable support for governments seeking methods to effectively coordinate co-producing networks. However, the extent of benefits of Blockchain varies across the process elements. It becomes evident that there is a need for off-chain processes. It is, therefore, argued in favour of intensifying research on off-chain governance processes to better understand the implications for and influences on on-chain governance.
Capsella
(2018)
Cardiovascular drift response over two different constant-load exercises in healthy non-athletes
(2019)
Cardiovascular drift (CV-d) is a steady increase in heart rate (HR) over time while performing constant load moderate intensity exercise (CME) > 20 min. CV-d presents problems for the prescription of exercise intensity by means of HR, because the work rate (WR) during exercise must be adjusted to maintain target HR, thus disturbing the intended effect of the exercise intervention. It has been shown that the increase in HR during CME is due to changes in WR and not to CV-d.
Catholicism
(2019)
Organic or inorganic (A) metal (M) halide (X) perovskites (AMX(3)) are semiconductor materials forming the basis for the development of highly efficient, low-cost and multijunction solar energy conversion devices. The best efficiencies nowadays are obtained with mixed compositions containing methylammonium, formamidinium, Cs and Rb as cations, as well as iodine, bromine and chlorine as anions. The understanding of fundamental properties such as the crystal structure and its effect on the band gap, as well as phase stability, is essential. In this systematic study, X-ray diffraction and photoluminescence spectroscopy were applied to evaluate structural and optoelectronic properties of hybrid perovskites with mixed compositions.
Eccentric (ECC) exercises might cause muscle damage, characterized by delayed-onset muscle soreness, elevated creatine kinase (CK) levels and local muscle oedema, shown by elevated T2 times in magnetic resonance imaging (MRI) scans. Previous research suggests high inter-individual differences regarding these systemic and local responses to eccentric workload. PURPOSE: To analyze ECC exercise-induced muscle damage in lumbar paraspinal muscles assessed via MRI. METHODS: Ten participants (3f/7m; 33±6y; 174±8cm; 71±12kg) were included in the study. Quantitative paraspinal muscle constitution of M. erector spinae and M. multifidus was assessed in supine position before and 72h after an intense eccentric trunk exercise bout in a mobile 1.5 tesla MRI device. MRI scans were recorded at spinal level L3 (T2-weighted TSE echo sequences, 11 slices, 2mm slice thickness, 3mm gap, echo times: 20, 40, 60, 80, 100ms, TR time: 2500ms). Muscle T2 times were calculated for manually traced regions of interest of the respective muscles with an imaging software. The exercise protocol was performed in an isokinetic device and consisted of 120s of alternating ECC trunk flexion-extension with maximal effort. Venous blood samples were taken before and 72h after the ECC exercise. Descriptive statistics (mean±SD) and t-tests for pre-post ECC comparisons were performed. RESULTS: T2 times increased from pre- to post-ECC MRI measurements from 55±3ms to 79±28ms in M. erector spinae and from 62±5ms to 78±24ms in M. multifidus (p<0.001). CK increased from 126±97 U/L to 1447±20579 U/L. The high SDs of T2 times and CK in post-ECC measures could be due to inter-individual reactions to ECC exercises. 3 participants showed high local and systemic reactions (HR) with T2 time increases of 120±24% (M. erector spinae) and 73±50% (M. multifidus). In comparison, the remaining 7 participants showed increases of 11±12% (M. erector spinae) and 7±9% (M. multifidus) in T2 time.
Mean CK increased 9.5-fold in the 3 HR subjects compared with the remaining 7 subjects. CONCLUSIONS: The 120s maximal ECC trunk flexion-extension protocol induced a high amount of muscle damage in 3 participants. Moderate to low responses were found in the remaining 7 subjects, suggesting that inter-individual predictors play a role in the physiological responses to ECC workload.
Monoclonal antibodies are highly valuable tools in biomedicine, but their generation by hybridoma technology is very time-consuming and elaborate. In order to circumvent the existing drawbacks, an in vitro immunization approach was established by which murine as well as human monoclonal antibodies against a viral coat protein could be developed. The in vitro immunization was performed by isolating murine hematopoietic stem cells or human monocytes and differentiating them in vitro into immature dendritic cells. After antigen loading, the cells were co-cultivated with naive T and B lymphocytes for three days in order to obtain antigen-specific B lymphocytes in culture, followed by fusion with murine myeloma cells or human/murine heteromyeloma cells. Antigen-specific hybridomas were selected, and the generated antibodies were purified and characterized in this study by ELISA, western blot, gene sequencing, and affinity measurements. Furthermore, the characteristics were compared to those of a monoclonal antibody against the same target generated by conventional hybridoma technology. Isotype detection revealed a murine IgM and a human IgG4 antibody, in comparison to an IgG1 for the conventionally generated antibody. The antibodies derived from in vitro immunization indeed showed a lower affinity for the antigen compared to the conventionally generated one, which is probably due to the significantly shorter B cell maturation (3 days) during the immunization process. Nevertheless, they were suitable for building a sandwich-based detection system. The in vitro immunization approach therefore seems to be a good and particularly fast alternative to conventional hybridoma technology.
Eccentric exercises (ECC) induce reversible muscle damage, delayed-onset muscle soreness and an inflammatory reaction that is often followed by a systemic anti-inflammatory response. Thus, ECC might be beneficial for treatment of metabolic disorders which are frequently accompanied by a low-grade systemic inflammation. However, extent and time course of a systemic immune response after repeated ECC bouts are poorly characterized.
PURPOSE: To analyze the (anti-)inflammatory response after repeated ECC loading of the trunk.
METHODS: Ten healthy participants (33 ± 6 y; 173 ± 14 cm; 74 ± 16 kg) performed three isokinetic strength measurements of the trunk (concentric (CON), ECC1, ECC2, each 2 wks apart; flexion/extension, velocity 60°/s, 120s MVC). Pre- and 4, 24, 48, 72, 168h post-exercise, muscle soreness (numeric rating scale, NRS) was assessed and blood samples were taken and analyzed [Creatine kinase (CK), C-reactive protein (CRP), Interleukin-6 (IL-6), IL-10, Tumor necrosis factor-α (TNF-α)]. Statistics were done by Friedman‘s test with Dunn‘s post hoc test (α=.05).
RESULTS: Mean peak torque was higher during ECC1 (319 ± 142 Nm) than during CON (268 ± 108 Nm; p<.05) and not different between ECC1 and ECC2 (297 ± 126 Nm; p>.05). Markers of muscle damage (peaks post-ECC1: NRS 48h, 4.4±2.9; CK 72h, 14407 ± 19991 U/l) were higher after ECC1 than after CON and ECC2 (p<.05). The responses over 72h (stated as Area under the Curve, AUC) were abolished after ECC2 compared to ECC1 (p<.05) indicating the presence of the repeated bout effect. CRP levels were not changed. IL-6 levels increased 2-fold post-ECC1 (pre: 0.5 ± 0.4 vs. 72h: 1.0 ± 0.8 pg/ml). The IL-6 response was enhanced after ECC1 (AUC 61 ± 37 pg/ml*72h) compared to CON (AUC 33 ± 31 pg/ml*72h; p<.05). After ECC2, the IL-6 response (AUC 43 ± 25 pg/ml*72h) remained lower than post-ECC1, but the difference was not statistically significant. Serum levels of TNF-α and of the anti-inflammatory cytokine IL-10 were below detection limits. Overall, markers of muscle damage and immune response showed high inter-individual variability.
CONCLUSION: Despite maximal ECC loading of a large muscle group, no anti-inflammatory and just weak inflammatory responses were detected in healthy adults. Whether ECC elicits a different reaction in inflammatory clinical conditions is unclear.
Charges dropped
(2015)
Recent advances in high-throughput sequencing experiments and their theoretical descriptions have determined the fast dynamics of the "chromatin and epigenetics" field, with new concepts appearing at a high rate. This field includes but is not limited to the study of DNA-protein-RNA interactions, chromatin packing properties at different scales, regulation of gene expression and protein trafficking in the cell nucleus, binding site search in the crowded chromatin environment, and modulation of physical interactions by covalent chemical modifications of the binding partners. The current special issue does not claim full coverage of the field, but rather aims to capture its development and provide a snapshot of the most recent concepts and approaches. The eighteen open-access articles comprising this issue provide a delicate balance between current theoretical and experimental biophysical approaches to uncovering chromatin structure and understanding epigenetic regulation, allowing a free flow of new ideas and preliminary results.
Clause typing in Germanic
(2018)
The questionnaire investigates the functional left periphery of various finite clauses in Germanic languages, with particular attention paid to clause-typing elements and the combinations thereof. The questionnaire is mostly concerned with clause typing in embedded clauses, but main clause counterparts are also considered for comparative purposes. The chief aim was to achieve comparable results across Germanic languages, though the standardised questionnaire may also be helpful in the study of other languages. Most questions examine the availability of various complementisers and clause-typing operators, and in some cases the movement of verbs to the left periphery is also taken into account. The questionnaire is split into seven major parts according to the types of clauses under scrutiny.
All instructions were given in English, and the individual questions either concern translations of given sentences from English into the target language and/or ask for specific details about the constructions in the target language.
The present document contains the questionnaire itself (together with the instructions given at the beginning of the questionnaire and at the beginning of the individual sections, as well as the questions asking for personal data), the sociolinguistic data of the speakers, and the actual results for the individual languages. Five Germanic languages are included: Dutch, Danish, Icelandic, Norwegian and Swedish. For each language, two informants were recruited. Given the small number of informants, the present study serves as a qualitative investigation and as a basis for further, quantitative and experimental studies.
It has been observationally established that the winds of hot massive stars have highly variable characteristics. The variability evident in the winds is believed to be caused by structures on a broad range of spatial scales. Small-scale structures (clumping) in the stellar winds of hot stars are a possible consequence of an instability appearing in their radiation hydrodynamics. To understand how clumping may influence the calculation of theoretical spectra, different clumping properties and their 3D nature have to be taken into account. The properties of clumping have been examined using our 3D radiative transfer calculations. The effects of clumping for the case of the B[e] phenomenon are discussed.
This Research-to-Practice paper examines the practical application of various forms of collaborative learning in MOOCs. Since 2012, about 60 MOOCs in the wider context of Information Technology and Computer Science have been conducted on our self-developed MOOC platform. The platform is also used by several customers, who either run their own platform instances or use our white-label platform. We, as well as some of our partners, have experimented with different approaches to collaborative learning in these courses. Based on the results of early experiments, surveys amongst our participants, and requests by our business partners, we have integrated several options for collaborative learning into the system. The results of our experiments are fed directly back into platform development, allowing us to fine-tune existing tools and to add new ones where necessary. In this paper, we discuss the benefits and disadvantages of MOOC design decisions with regard to the various forms of collaborative learning. While the focus is on forms of large-group collaboration, two types of small-group collaboration on our platforms are also briefly introduced.
Kijko et al. (2016) present various methods to estimate parameters that are relevant for probabilistic seismic-hazard assessment. One of these parameters, although not the most influential, is the maximum possible earthquake magnitude m(max). I show that the proposed estimation of m(max) is based on an erroneous equation related to a misuse of the estimator in Cooke (1979) and leads to unstable results. So far, reported finite estimations of m(max) arise from data selection, because the estimator in Kijko et al. (2016) diverges with finite probability. This finding is independent of the assumed distribution of earthquake magnitudes. For the specific choice of the doubly truncated Gutenberg-Richter distribution, I illustrate the problems by deriving explicit equations. Finally, I conclude that point estimators are generally not a suitable approach to constrain m(max).
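The estimation problem criticised above can be made concrete with a toy simulation (this is not Kijko's or Cooke's estimator, and the parameters are invented): magnitudes sampled from a doubly truncated Gutenberg-Richter distribution show that the observed maximum systematically underestimates the true m_max, which is exactly the bias that add-on point estimators try to correct and where their stability becomes decisive.

```python
# Toy simulation: inverse-transform sampling from the doubly truncated
# Gutenberg-Richter distribution F(m) = (1 - 10^{-b(m - m_min)}) / c,
# with c = 1 - 10^{-b(m_max - m_min)}, for m_min <= m <= m_max.

import math
import random

def sample_gr(n, b=1.0, m_min=4.0, m_max=8.0, rng=random):
    """Draw n magnitudes from the doubly truncated GR distribution."""
    c = 1.0 - 10.0 ** (-b * (m_max - m_min))
    return [m_min - math.log10(1.0 - rng.random() * c) / b for _ in range(n)]

random.seed(0)
# 100 catalogues of 500 events each: how close does the observed
# maximum magnitude get to the true m_max of 8.0?
obs_max = [max(sample_gr(500)) for _ in range(100)]
mean_obs_max = sum(obs_max) / len(obs_max)
print(round(mean_obs_max, 2))  # well below the true m_max of 8.0
```

The persistent gap between the observed maximum and the true upper bound illustrates why m_max estimation is delicate, and why an estimator whose correction term can diverge, as argued above, is problematic.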