In the present study, the charge distribution and the charge transport across the thickness of 2- and 3-dimensional polymer nanodielectrics were investigated. Chemically surface-treated polypropylene (PP) films and low-density polyethylene nanocomposite films with 3 wt % of magnesium oxide (LDPE/MgO) served as examples of 2-D and 3-D nanodielectrics, respectively. Surface charges were deposited onto the non-metallized surfaces of the one-side metallized polymer films and were found to broaden, and thus to enter the bulk of the films, upon thermal stimulation at suitably elevated temperatures. The resulting space-charge profiles in the thickness direction were probed by means of Piezoelectrically-generated Pressure Steps (PPSs). It was observed that the chemical surface treatment of PP, which leads to the formation of nano-structures, and the use of bulk nanoparticles in the LDPE/MgO nanocomposites both enhance charge trapping on or in the respective polymer films and also reduce charge transport inside the samples.
In Memoriam Siegfried Bauer
(2019)
Siegfried Bauer, an internationally renowned and highly creative applied physicist, who was also a prolific materials scientist and engineer, died on December 30, 2018, in Linz, Austria, after a one-year battle with cancer. He was full professor of soft-matter physics at the Johannes Kepler University Linz, Austria, and a scientific leader and innovator across fields, most notably in the areas of electro-active materials (including electrets) and stretchable and imperceptible electronics.
Dielectric materials for electro-active (electret) and/or electro-passive (insulation) applications
(2019)
Dielectric materials for electret applications usually have to contain a quasi-permanent space charge or dipole polarization that is stable over large temperature ranges and time periods. For electrical-insulation applications, on the other hand, a quasi-permanent space charge or dipole polarization is usually considered detrimental. In recent years, however, with the advent of high-voltage direct-current (HVDC) transmission and high-voltage capacitors for energy storage, new possibilities are being explored in the area of high-voltage dielectrics. Stable charge trapping (as e.g. found in nano-dielectrics) or large dipole polarizations (as e.g. found in relaxor ferroelectrics and high-permittivity dielectrics) are no longer considered to be necessarily detrimental in electrical-insulation materials. On the other hand, recent developments in electro-electrets (dielectric elastomers), i.e. very soft dielectrics with large actuation strains and high breakdown fields, and in ferroelectrets, i.e. polymers with electrically charged cavities, have resulted in new electret materials that may also be useful for HVDC insulation systems. Furthermore, 2-dimensional (nano-particles on surfaces or interfaces) and 3-dimensional (nano-particles in the bulk) nano-dielectrics have been found to provide very good charge-trapping properties that may not only be used for more stable electrets and ferroelectrets, but also for better HVDC electrical-insulation materials with the possibility to optimize charge-transport and field-gradient behavior. In view of these and other recent developments, a first attempt will be made to review a small selection of electro-active (i.e. electret) and electro-passive (i.e. insulation) dielectrics in direct comparison. Such a comparative approach may lead to synergies in materials concepts and research methods that will benefit both areas. 
Furthermore, electrets may be very useful for sensing and monitoring applications in electrical-insulation systems, while high-voltage technology is essential for more efficient charging and poling of electret materials.
Nowadays, structural health monitoring (SHM) of critical infrastructures is considered of prime importance, especially for managing transport infrastructure. However, most current SHM methodologies are based on point sensors that show various limitations relating to their spatial positioning capabilities, cost of development, and measurement range. This publication describes the progress in the SENSKIN EC co-funded research project, which is developing a dielectric-elastomer sensor, formed from a large, highly extensible capacitance-sensing membrane supported by advanced micro-electronic circuitry, for monitoring transport-infrastructure bridges. The sensor under development provides spatial measurements of strain in excess of 10%, while the sensing system is being designed to be easy to install, require low power in operation, require simple signal processing, and have the ability to self-monitor and report. An appropriate wireless sensor network, supported by local gateways, is also being designed and developed for the required data collection and exploitation. SENSKIN also develops a Decision-Support System (DSS) for proactive condition-based structural interventions under normal operating conditions and reactive emergency interventions following an extreme event. The latter is supported by a life-cycle-costing (LCC) and life-cycle-assessment (LCA) module that accounts for the total internal and external costs of the identified bridge rehabilitation, analyses the options, and yields figures for assessing the economic implications of the bridge rehabilitation work and the environmental impacts of the rehabilitation options and their associated secondary effects, respectively. The overall monitoring system will be evaluated and benchmarked on actual bridges of the Egnatia Highway (Greece) and the Bosporus Bridge (Turkey).
It has been known for decades that the winds of massive stars are inhomogeneous (i.e. clumped). To properly model observed spectra of massive-star winds, it is necessary to incorporate the 3-D nature of clumping into radiative transfer calculations. In this paper we present our full 3-D Monte Carlo radiative transfer code for inhomogeneous expanding stellar winds. We use a set of parameters to describe both the dense and the rarefied wind components. At the same time, we account for non-monotonic velocity fields. We show how the 3-D density and velocity wind inhomogeneities strongly affect the resonance line formation. We also show how wind clumping can resolve the discrepancy between P v and H alpha mass-loss rate diagnostics.
Industry 4.0 and the Internet of Things are recent developments that have led to the creation of new kinds of manufacturing data. Linking this new kind of sensor data to traditional business information is crucial for enterprises to take advantage of the data’s full potential. In this paper, we present a demo that lets users experience this data integration, both vertically between technical and business contexts and horizontally along the value chain. The tool simulates a manufacturing company, continuously producing both business and sensor data, and supports issuing ad-hoc queries that answer specific questions related to the business. In order to adapt to different environments, users can configure sensor characteristics to their needs.
In cloud computing, users are able to use their own operating system (OS) image to run a virtual machine (VM) on a remote host. In a public or private cloud, the VM's OS is started by the user via interfaces provided by the cloud provider; in a peer-to-peer cloud, the VM is started by the host admin. Once the VM is running, the user can get remote access to it to install, configure, and run services. For security reasons, the user needs to verify the integrity of the running VM, because a malicious host admin could modify the image, or even replace it with a similar image, in order to extract sensitive data from the VM. We propose an approach to verify the integrity of a running VM on a remote host without using any specific hardware such as a Trusted Platform Module (TPM). Our approach is implemented on a Linux platform, where the kernel files (vmlinuz and initrd) can be replaced with new files while the VM is running. kexec is used to reboot the VM with the new kernel files. The new kernel contains secret codes that are used to verify whether the VM was started with the new kernel files. The new kernel is then used to further measure the integrity of the running VM.
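The verification step described above can be pictured as a challenge-response exchange: only a VM that was actually booted with the new kernel files knows the embedded secret. The sketch below is an illustrative assumption, not the paper's code; the secret value, function names, and the HMAC construction are invented to show the idea.

```python
import hashlib
import hmac
import os

# Placeholder for the secret code embedded in the replacement vmlinuz/initrd
# (hypothetical value; in the real scheme it would be baked into the kernel).
KERNEL_SECRET = b"secret-embedded-in-new-kernel"

def vm_respond(challenge: bytes, secret: bytes = KERNEL_SECRET) -> bytes:
    """Runs inside the VM: prove knowledge of the kernel secret for a nonce."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def verifier_check(challenge: bytes, response: bytes,
                   expected_secret: bytes = KERNEL_SECRET) -> bool:
    """Runs on the verifier: accept only if the VM booted the new kernel."""
    expected = hmac.new(expected_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

nonce = os.urandom(16)
assert verifier_check(nonce, vm_respond(nonce))
assert not verifier_check(nonce, vm_respond(nonce, b"wrong-secret"))
```

A fresh nonce per check prevents a malicious host from replaying an earlier valid response.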
High-dimensional data is particularly useful for data analytics research. In the healthcare domain, for instance, high-dimensional data analytics has been used successfully for drug discovery. Yet, in order to adhere to privacy legislation, data analytics service providers must guarantee anonymity for data owners. In the context of high-dimensional data, ensuring privacy is challenging because increased data dimensionality must be matched by an exponential growth in the size of the data to avoid sparse datasets. Syntactically, anonymising sparse datasets with methods that rely on statistical significance makes obtaining sound and reliable results a challenge. As such, strong privacy is only achievable at the cost of high information loss, rendering the data unusable for data analytics. In this paper, we make two contributions to addressing this problem, from both the privacy and the information-loss perspectives. First, we show that by identifying dependencies between attribute subsets, we can eliminate privacy-violating attributes from the anonymised dataset. Second, to minimise information loss, we employ a greedy search algorithm to determine and eliminate maximal partial unique attribute combinations. Thus, one only needs to find the minimal set of identifying attributes to prevent re-identification. Experiments on a health cloud based on the SAP HANA platform, using a semi-synthetic medical-history dataset comprising 109 attributes, demonstrate the effectiveness of our approach.
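The core notion behind the greedy search can be illustrated on toy data: an attribute combination is identifying when some record's values on those attributes are unique. The sketch below is a generic illustration under that definition, not the paper's algorithm; function names and the toy records are invented.

```python
# Illustrative sketch: greedily grow an attribute set until the combined
# values uniquely identify at least one record, i.e. the set acts as a
# quasi-identifier that anonymisation would need to suppress or generalise.

def has_unique_combination(rows, attrs):
    """True if some record is uniquely identified by its values on attrs."""
    counts = {}
    for row in rows:
        key = tuple(row[a] for a in attrs)
        counts[key] = counts.get(key, 0) + 1
    return any(c == 1 for c in counts.values())

def greedy_identifying_set(rows, attributes):
    """Greedily add the most distinguishing attribute until uniqueness."""
    chosen, remaining = [], list(attributes)
    while remaining and not has_unique_combination(rows, chosen):
        # pick the attribute that maximises distinct value combinations
        best = max(remaining, key=lambda a: len(
            {tuple(r[x] for x in chosen + [a]) for r in rows}))
        chosen.append(best)
        remaining.remove(best)
    return chosen

rows = [
    {"age": 34, "zip": "14482", "sex": "F"},
    {"age": 34, "zip": "14482", "sex": "M"},
    {"age": 51, "zip": "14482", "sex": "M"},
]
minimal = greedy_identifying_set(rows, ["age", "zip", "sex"])
assert has_unique_combination(rows, minimal)
```

Here "age" alone already singles out the third record, so the greedy search stops after one attribute.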
Devices on the Internet of Things (IoT) are usually battery-powered and have limited resources. Hence, energy-efficient and lightweight protocols were designed for IoT devices, such as the popular Constrained Application Protocol (CoAP). Yet, CoAP itself does not include any defenses against denial-of-sleep attacks, which are attacks that aim at depriving victim devices of entering low-power sleep modes. For example, a denial-of-sleep attack against an IoT device that runs a CoAP server is to send plenty of CoAP messages to it, thereby forcing the IoT device to expend energy for receiving and processing these CoAP messages. All current security solutions for CoAP, namely Datagram Transport Layer Security (DTLS), IPsec, and OSCORE, fail to prevent such attacks. To fill this gap, Seitz et al. proposed a method for filtering out inauthentic and replayed CoAP messages "en-route" on 6LoWPAN border routers. In this paper, we expand on Seitz et al.'s proposal in two ways. First, we revise Seitz et al.'s software architecture so that 6LoWPAN border routers can not only check the authenticity and freshness of CoAP messages, but can also perform a wide range of further checks. Second, we propose a couple of such further checks, which, as compared to Seitz et al.'s original checks, more reliably protect IoT devices that run CoAP servers from remote denial-of-sleep attacks, as well as from remote exploits. We prototyped our solution and successfully tested its compatibility with Contiki-NG's CoAP implementation.
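One concrete example of an en-route freshness check of the kind discussed above is a sliding-window replay filter: the border router drops messages whose sequence number was already seen, so a sleeping CoAP server is never woken by replayed traffic. The sketch below is a generic anti-replay window, not Seitz et al.'s implementation; the class name and window size are illustrative.

```python
# Generic sliding-window anti-replay filter (illustrative sketch).
# bit i of `bitmap` set  <=>  sequence number (highest - i) already seen.

WINDOW = 32  # assumed window size

class ReplayFilter:
    def __init__(self):
        self.highest = -1
        self.bitmap = 0

    def accept(self, seq: int) -> bool:
        if seq > self.highest:
            shift = seq - self.highest
            self.bitmap = ((self.bitmap << shift) | 1) & ((1 << WINDOW) - 1)
            self.highest = seq
            return True
        offset = self.highest - seq
        if offset >= WINDOW or (self.bitmap >> offset) & 1:
            return False  # too old or already seen: drop en-route
        self.bitmap |= 1 << offset
        return True

f = ReplayFilter()
assert f.accept(5) and f.accept(7)
assert not f.accept(5)   # replayed message is filtered out
assert f.accept(6)       # late but fresh message still passes
```

Dropping such traffic at the border router, rather than at the constrained device, is what saves the server's energy budget.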
LoANs
(2019)
Recently, deep neural networks have achieved remarkable performance on the task of object detection and recognition. The reason for this success is mainly grounded in the availability of large-scale, fully annotated datasets, but the creation of such datasets is a complicated and costly task. In this paper, we propose a novel method for weakly supervised object detection that simplifies the process of gathering data for training an object detector. We train an ensemble of two models that work together in a student-teacher fashion. Our student (localizer) is a model that learns to localize an object; the teacher (assessor) assesses the quality of the localization and provides feedback to the student. The student uses this feedback to learn how to localize objects and is thus entirely supervised by the teacher, as we use no labels for training the localizer. In our experiments, we show that our model is very robust to noise and reaches competitive performance compared to a state-of-the-art fully supervised approach. We also show the simplicity of creating a new dataset, based on a few videos (e.g. downloaded from YouTube) and artificially generated data.
A Cloud Storage Broker (CSB) provides value-added cloud storage services for enterprise usage by leveraging a multi-cloud storage architecture. However, this raises several challenges for managing resources and their access control across multiple Cloud Service Providers (CSPs) for authorized CSB stakeholders. In this paper we propose a unified cloud access control model that provides an abstraction of the CSPs' services for centralized and automated cloud resource and access control management in multiple CSPs. Our proposal offers role-based access control for CSB stakeholders to access cloud resources, assigning the necessary privileges to stakeholders and access control lists to cloud resources, following the privilege-separation concept and the least-privilege principle. We implement our unified model in a CSB system called CloudRAID for Business (CfB); the evaluation shows that it provides system- and cloud-level security services for CfB and centralized resource and access control management in multiple CSPs.
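The combination of role-based privileges and per-resource access control lists can be sketched in a few lines. The role names, privileges, and resource paths below are invented for illustration and are not taken from CfB.

```python
# Minimal sketch of the two-sided check described above: a request is
# authorized only if the stakeholder's role holds the privilege (RBAC)
# AND the role appears in the resource's access control list (ACL).

ROLE_PRIVILEGES = {  # hypothetical roles and privileges
    "csb-admin": {"create-bucket", "delete-bucket", "grant-access"},
    "csb-user":  {"read-object", "write-object"},
}

RESOURCE_ACL = {  # hypothetical per-resource ACLs: resource -> allowed roles
    "bucket/reports": {"csb-admin", "csb-user"},
    "bucket/billing": {"csb-admin"},
}

def is_authorized(role: str, privilege: str, resource: str) -> bool:
    """Least privilege: both the RBAC check and the ACL check must pass."""
    return (privilege in ROLE_PRIVILEGES.get(role, set())
            and role in RESOURCE_ACL.get(resource, set()))

assert is_authorized("csb-user", "read-object", "bucket/reports")
assert not is_authorized("csb-user", "read-object", "bucket/billing")
assert not is_authorized("csb-user", "delete-bucket", "bucket/reports")
```

Keeping the privilege sets and the ACLs separate is what realizes privilege separation: neither table alone grants access.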
While the IEEE 802.15.4 radio standard has many features that meet the requirements of Internet of things applications, IEEE 802.15.4 leaves the whole issue of key management unstandardized. To address this gap, Krentz et al. proposed the Adaptive Key Establishment Scheme (AKES), which establishes session keys for use in IEEE 802.15.4 security. Yet, AKES does not cover all aspects of key management. In particular, AKES comprises no means for key revocation and rekeying. Moreover, existing protocols for key revocation and rekeying seem limited in various ways. In this paper, we hence propose a key revocation and rekeying protocol, which is designed to overcome various limitations of current protocols for key revocation and rekeying. For example, our protocol seems unique in that it routes around IEEE 802.15.4 nodes whose keys are being revoked. We successfully implemented and evaluated our protocol using the Contiki-NG operating system and aiocoap.
Detect me if you can
(2019)
Spam bots have become a threat to online social networks with their malicious behavior, posting misinformation and influencing online platforms to fulfill their motives. As spam bots have become more advanced over time, creating algorithms to identify them remains an open challenge. Learning low-dimensional embeddings for nodes in graph-structured data has proven to be useful in various domains. In this paper, we propose a model based on graph convolutional neural networks (GCNNs) for spam bot detection. Our hypothesis is that, to better detect spam bots, the social graph must be taken into consideration in addition to a feature set. GCNNs are able to leverage both the features of a node and the aggregated features of its neighborhood. We compare our approach with two methods that work solely on a feature set or solely on the structure of the graph. To our knowledge, this work is the first attempt at using graph convolutional neural networks for spam bot detection.
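The node-plus-neighborhood aggregation that GCNNs perform can be shown with a single toy layer: each node's new representation mixes its own features with its neighbors' (symmetrically normalized, with self-loops), followed by a linear transform and ReLU, in the style of standard GCN layers. This pure-Python sketch uses invented sizes and weights and is not the paper's model.

```python
# One toy graph-convolution layer: out = ReLU(A_hat @ feats @ w), where
# A_hat is the adjacency matrix with self-loops, symmetrically normalized.

def gcn_layer(adj, feats, w):
    n, d_out = len(adj), len(w[0])
    deg = [sum(adj[v]) + 1.0 for v in range(n)]  # degree incl. self-loop
    out = []
    for v in range(n):
        # aggregate normalized features of v and its neighbours
        agg = [0.0] * len(feats[0])
        for u in range(n):
            a = adj[v][u] + (1.0 if u == v else 0.0)  # self-loop term
            if a:
                coeff = a / (deg[v] ** 0.5 * deg[u] ** 0.5)
                for k in range(len(feats[0])):
                    agg[k] += coeff * feats[u][k]
        # linear transform followed by ReLU
        out.append([max(0.0, sum(agg[k] * w[k][j] for k in range(len(w))))
                    for j in range(d_out)])
    return out

# 3-node path graph, one-hot node features, toy 3x2 weight matrix
adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
feats = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
w = [[1.0, 0.5], [1.0, 0.5], [1.0, 0.5]]
h = gcn_layer(adj, feats, w)
assert len(h) == 3 and len(h[0]) == 2
```

Stacking such layers lets information from a node's wider neighborhood flow into its embedding, which is what a features-only classifier cannot exploit.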
The ability to work in teams is an important skill in today's work environments. In MOOCs, however, team work, team tasks, and graded team-based assignments play only a marginal role. To close this gap, we have been exploring ways to integrate graded team-based assignments in MOOCs. Some goals of our work are to determine simple criteria to match teams in a volatile environment and to enable a frictionless online collaboration for the participants within our MOOC platform. The high dropout rates in MOOCs pose particular challenges for team work in this context. By now, we have conducted 15 MOOCs containing graded team-based assignments in a variety of topics. The paper at hand presents a study that aims to establish a solid understanding of the participants in the team tasks. Furthermore, we attempt to determine which team compositions are particularly successful. Finally, we examine how several modifications to our platform's collaborative toolset have affected the dropout rates and performance of the teams.
The "Bachelor Project"
(2019)
One of the challenges of educating the next generation of computer scientists is to teach them to become team players who are able to communicate and interact not only with different IT systems, but also with coworkers and customers with a non-IT background. The “bachelor project” is a project based on team work and close collaboration with selected industry partners. The authors have hosted some of the teams since the spring term of 2014/15. In the paper at hand, we explain and discuss this concept and evaluate its success based on students' evaluations and reports. Furthermore, the technology stack used by the teams is evaluated to understand how self-organized students work in IT-related projects. We will show that, and why, the bachelor project is the most successful educational format in the perception of the students, and how these positive results can be further improved by the mentors.
MOOCs in Secondary Education
(2019)
Computer science education in German schools is often less than optimal. It is only mandatory in a few of the federal states and there is a lack of qualified teachers. As a MOOC (Massive Open Online Course) provider with a German background, we developed the idea to implement a MOOC addressing pupils in secondary schools to fill this gap. The course targeted high school pupils and enabled them to learn the Python programming language. In 2014, we successfully conducted the first iteration of this MOOC with more than 7000 participants. However, the share of pupils in the course was not quite satisfactory. So we conducted several workshops with teachers to find out why they had not used the course to the extent that we had imagined. The paper at hand explores and discusses the steps we have taken in the following years as a result of these workshops.
The emergence of cloud computing allows users to easily host their virtual machines with no up-front investment and the guarantee of availability anytime, anywhere. But when the Virtual Machine (VM) is hosted outside the user's premises, the user loses physical control of the VM, as it could be running on untrusted host machines in the cloud. A malicious host administrator could launch live memory dumping, Spectre, or Meltdown attacks in order to extract sensitive information from the VM's memory, e.g. passwords or cryptographic keys of applications running in the VM. In this paper, inspired by the moving-target-defense (MTD) scheme, we propose a novel approach to increase the security of an application's sensitive data in the VM by continuously moving the sensitive data among several memory allocations (blocks) in Random Access Memory (RAM). A movement function is added to the application source code so that it runs concurrently with the application's main function. Our approach reduces the possibility of the VM's sensitive in-memory data leaking into a memory dump file by 25% and secures the sensitive data against Spectre and Meltdown attacks. The approach's overhead depends on the number and the size of the sensitive data items.
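The movement function can be pictured with a small simulation. Python cannot control physical RAM placement, so the block indices below merely stand in for separate memory allocations; the class and its parameters are invented for illustration and do not reproduce the paper's C-level implementation.

```python
import random

# Conceptual sketch of the moving-target idea: the secret lives in exactly
# one of several pre-allocated blocks, is periodically copied to a randomly
# chosen block, and the stale copy is zeroized so a point-in-time memory
# dump only catches the secret if it hits the currently active block.

NUM_BLOCKS = 8  # assumed number of allocations

class MovingSecret:
    def __init__(self, secret: bytes):
        self.blocks = [bytearray(len(secret)) for _ in range(NUM_BLOCKS)]
        self.active = 0
        self.blocks[self.active][:] = secret

    def move(self) -> None:
        """Copy the secret to a random other block and wipe the old one."""
        new = random.randrange(NUM_BLOCKS)
        while new == self.active:
            new = random.randrange(NUM_BLOCKS)
        self.blocks[new][:] = self.blocks[self.active]
        for i in range(len(self.blocks[self.active])):
            self.blocks[self.active][i] = 0  # zeroize the stale copy
        self.active = new

    def read(self) -> bytes:
        return bytes(self.blocks[self.active])

s = MovingSecret(b"api-key-123")
for _ in range(5):
    s.move()
assert s.read() == b"api-key-123"  # secret survives every move
# ... and exactly one block holds it at any instant
assert sum(bytes(b) == b"api-key-123" for b in s.blocks) == 1
```

In the real scheme the movement runs concurrently with the application's main function; here the moves are simply invoked in a loop.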
High-throughput RNA sequencing produces large gene expression datasets whose analysis leads to a better understanding of diseases like cancer. The nature of RNA-Seq data poses challenges to its analysis in terms of its high dimensionality, noise, and the complexity of the underlying biological processes. Researchers apply traditional machine learning approaches, e.g. hierarchical clustering, to analyze this data. Until the validation of the results, the analysis is based on the provided data only and completely misses the biological context. However, gene expression data follows particular patterns determined by the underlying biological processes. In our research, we aim to integrate the available biological knowledge earlier in the analysis process. We want to adapt state-of-the-art data mining algorithms to consider the biological context in their computations and to deliver meaningful results for researchers.
BIOMEX (BIOlogy and Mars EXperiment) is an ESA/Roscosmos space exposure experiment housed within the exposure facility EXPOSE-R2 outside the Zvezda module on the International Space Station (ISS). The design of the multiuser facility supports-among others-the BIOMEX investigations into the stability and level of degradation of space-exposed biosignatures such as pigments, secondary metabolites, and cell surfaces in contact with a terrestrial and Mars analog mineral environment. In parallel, analysis on the viability of the investigated organisms has provided relevant data for evaluation of the habitability of Mars, for the limits of life, and for the likelihood of an interplanetary transfer of life (theory of lithopanspermia). In this project, lichens, archaea, bacteria, cyanobacteria, snow/permafrost algae, meristematic black fungi, and bryophytes from alpine and polar habitats were embedded, grown, and cultured on a mixture of martian and lunar regolith analogs or other terrestrial minerals. The organisms and regolith analogs and terrestrial mineral mixtures were then exposed to space and to simulated Mars-like conditions by way of the EXPOSE-R2 facility. In this special issue, we present the first set of data obtained in reference to our investigation into the habitability of Mars and limits of life. This project was initiated and implemented by the BIOMEX group, an international and interdisciplinary consortium of 30 institutes in 12 countries on 3 continents. Preflight tests for sample selection, results from ground-based simulation experiments, and the space experiments themselves are presented and include a complete overview of the scientific processes required for this space experiment and postflight analysis. The presented BIOMEX concept could be scaled up to future exposure experiments on the Moon and will serve as a pretest in low Earth orbit.
This is a correction notice for ‘Post-adiabatic supernova remnants in an interstellar magnetic field: oblique shocks and non-uniform environment’ (DOI: https://doi.org/10.1093/mnras/sty1750), which was published in MNRAS 479, 4253–4270 (2018). The publisher regrets to inform that the colour was missing from the colour scales in Figs 8(a)–(d) and Figs 9(a) and (b). This has now been corrected online. The publisher apologizes for this error.
Editorial
(2019)
This paper investigates the applicability of CMOS decoupling cells for mitigating the Single Event Transient (SET) effects in standard combinational gates. The concept is based on the insertion of two decoupling cells between the gate's output and the power/ground terminals. To verify the proposed hardening approach, extensive SPICE simulations have been performed with standard combinational cells designed in IHP's 130 nm bulk CMOS technology. Obtained simulation results have shown that the insertion of decoupling cells results in the increase of the gate's critical charge, thus reducing the gate's soft error rate (SER). Moreover, the decoupling cells facilitate the suppression of SET pulses propagating through the gate. It has been shown that the decoupling cells may be a competitive alternative to gate upsizing and gate duplication for hardening the gates with lower critical charge and multiple (3 or 4) inputs, as well as for filtering the short SET pulses induced by low-LET particles.
Monte-Carlo calculations are carried out to simulate light transport in dense materials. The focus lies on the calculation of diffuse light transmission through films of scattering and absorbing media, additionally considering the effect of dependent scattering. Different influences, such as the interaction type between particles, particle size, and composition, can be studied with this program. The simulations in this study show major influences on the diffuse transmission. Further simulations are carried out to model a sunscreen film and to determine the best compositions of such a film; these results will be presented.
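The basic mechanics of such a simulation can be sketched as a photon random walk through a slab: free path lengths are sampled from the exponential distribution set by the extinction coefficient, and each interaction either absorbs or isotropically re-scatters the photon. The optical parameters and simplifications below (1-D depth tracking, isotropic scattering, no dependent-scattering correction) are illustrative assumptions, not the study's model.

```python
import math
import random

# Toy Monte-Carlo estimate of diffuse transmission through a slab of a
# scattering/absorbing medium. mu_s, mu_a: scattering and absorption
# coefficients (illustrative units of 1/thickness).

def transmission(mu_s=5.0, mu_a=0.5, thickness=1.0,
                 n_photons=20000, seed=1):
    rng = random.Random(seed)
    mu_t = mu_s + mu_a                       # extinction coefficient
    transmitted = 0
    for _ in range(n_photons):
        z, cos_theta = 0.0, 1.0              # start at front face, forward
        while True:
            # sample free path length from exp. distribution, mean 1/mu_t
            step = -math.log(1.0 - rng.random()) / mu_t
            z += step * cos_theta
            if z >= thickness:
                transmitted += 1             # photon exits the back face
                break
            if z < 0.0:
                break                        # back-scattered out of the film
            if rng.random() < mu_a / mu_t:
                break                        # absorbed at this interaction
            cos_theta = rng.uniform(-1.0, 1.0)  # isotropic re-scattering
    return transmitted / n_photons

t = transmission()
assert 0.0 < t < 1.0
```

Raising the absorption coefficient (as a stronger UV filter in a sunscreen film would) drives the estimated transmission toward zero, which is the kind of composition effect the simulations probe.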
Cardiovascular drift response over two different constant-load exercises in healthy non-athletes
(2019)
Cardiovascular drift (CV-d) is a steady increase in heart rate (HR) over time while performing constant load moderate intensity exercise (CME) > 20 min. CV-d presents problems for the prescription of exercise intensity by means of HR, because the work rate (WR) during exercise must be adjusted to maintain target HR, thus disturbing the intended effect of the exercise intervention. It has been shown that the increase in HR during CME is due to changes in WR and not to CV-d.
Foreword
(2019)
Auf dem Sprung
(2019)
Gefahr an jeder Ecke
(2019)
Blick in die Zukunft
(2019)
Vielfalt in der Uckermark
(2019)
Mut macht einsam
(2019)
Short-period double degenerate white dwarf (WD) binaries with periods of less than ~1 day are considered to be one of the likely progenitors of type Ia supernovae. These binaries have undergone a period of common-envelope evolution. If the core ignites helium before the envelope is ejected, then a hot subdwarf remains prior to contracting into a WD. Here we present a comparison of two very rare systems that contain two hot subdwarfs in short-period orbits. We provide a quantitative spectroscopic analysis of the systems using synthetic spectra from state-of-the-art non-LTE models to constrain the atmospheric parameters of the stars. We also use these models to determine the radial velocities, and thus calculate dynamical masses for the stars in each system.
In self-incompatible plants the female style rejects self pollen, yet the extent to which the female style in the many self-compatible species can still select between different pollen genotypes and thus bias fertilization success is unclear. A new study identifies the molecular basis for how styles of the self-compatible coyote tobacco bias the fertilization success of pollen genotypes using matching gene expression patterns in a manner analogous to cryptic female choice in animals.