Literary criticism, particularly ecocriticism, occupies an uneasy position with regard to activism: reading books (or plays, or poems) seems like a rather leisurely activity to be undertaking if our environment—our planet—is in crisis. And yet, critiquing the narratives that structure worlds and discourses is key to the activities of the (literary) critic in this time of crisis. If this crisis manifests as a ‘crisis of imagination’ (e.g. Ghosh), I argue that this is not so much a crisis of the absence of texts that address the environmental disaster, but rather a failure to comprehend the presences of the Anthropocene in the present. To interpret (literary) texts in this framework must entail acknowledging and scrutinising the extent of the privileged reader's incapacity to comprehend the crisis as presence and present rather than as spatially or temporally remote. The readings of the novels Carpentaria (2006) and The Swan Book (2013) by Waanyi writer Alexis Wright (Australia) trace the uneven presences of Anthropocenes in the present by bringing future worlds (The Swan Book) into the contemporary (Carpentaria). In both novels, protagonists must forge survival amongst ruins of the present and future: the depicted worlds, in particular the representations of the disenfranchisement of the indigenous inhabitants of the far north of the Australian continent, emerge as a critique of the intersections of the capitalist and colonial projects that define modernity and its impact on the global climate.
We have developed a method for deriving systems of closed equations for the dynamics of order parameters in the ensembles of phase oscillators. The Ott-Antonsen equation for the complex order parameter is a particular case of such equations. The simplest nontrivial extension of the Ott-Antonsen equation corresponds to two-bunch states of the ensemble. Based on the equations obtained, we study the dynamics of multi-bunch chimera states in coupled Kuramoto-Sakaguchi ensembles. We show an increase in the dimensionality of the system dynamics for two-bunch chimeras in the case of identical phase elements and a transition to one-bunch "Abrams chimeras" for imperfect identity (in the latter case, the one-bunch chimeras become attractive).
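For orientation, the setting can be sketched in standard notation (a generic mean-field form; the authors' actual derivation may differ): an ensemble of phase oscillators driven by a common complex field H(t), with the Ott-Antonsen closure for a Lorentzian frequency distribution:

```latex
% Phase dynamics and the complex order parameter:
\dot\theta_k \;=\; \omega_k \;+\; \operatorname{Im}\!\bigl(H(t)\,e^{-i\theta_k}\bigr),
\qquad
Z \;=\; \frac{1}{N}\sum_{k=1}^{N} e^{i\theta_k}.

% Ott--Antonsen equation for a Lorentzian frequency distribution
% (centre \omega_0, half-width \gamma):
\dot Z \;=\; (i\omega_0-\gamma)\,Z \;+\; \tfrac{1}{2}\bigl(H-\bar H\,Z^{2}\bigr).
```

The extensions discussed in the abstract generalise this single-equation closure to systems of equations whose simplest nontrivial member describes two-bunch states.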
We provide explicit examples of positive and power-bounded operators on c₀ and ℓ∞ which are mean ergodic but not weakly almost periodic. As a consequence, we prove that a countably order complete Banach lattice on which every positive and power-bounded mean ergodic operator is weakly almost periodic is necessarily a KB-space. This answers several open questions from the literature. Finally, we prove that if T is a positive mean ergodic operator with zero fixed space on an arbitrary Banach lattice, then so is every power of T.
Composite actuators consisting of magnetic nanoparticles dispersed in a crystallizable multiphase polymer system can be remotely controlled by alternating magnetic fields (AMF). These actuators contain spatially segregated crystalline domains with chemically different compositions. Here, the crystalline domain associated with the low melting transition range is responsible for actuation, while the crystalline domain associated with the higher melting transition range determines the geometry of the shape change. This paper reports magnetomechanical actuators which are based on a single crystalline domain of oligo(omega-pentadecalactone) (OPDL) along with covalently integrated iron(III) oxide nanoparticles (ioNPs). Different geometrical modes of actuation such as a reversible change in length or twisting were implemented by a magneto-mechanical programming procedure. For an individual actuation mode, the degree of actuation could be tailored by varying the magnetic field strength. This material design can easily be extended to composites containing other magnetic nanoparticles, e.g. those with a high magnetic susceptibility.
By using synchrotron X-ray powder diffraction, the temperature-dependent phase diagram of the hybrid perovskite tri-halide compounds, methyl ammonium lead iodide (MAPbI3, MA+ = CH3NH3+) and methyl ammonium lead bromide (MAPbBr3), as well as of their solid solutions, has been established. The existence of a large miscibility gap between 0.29 ≤ x ≤ 0.92 (±0.02) for the MAPb(I1−xBrx)3 solid solution has been proven. A systematic study of the lattice parameters for the solid solution series at room temperature revealed distinct deviations from Vegard's law. Furthermore, temperature-dependent measurements showed a strong dependency of the lattice parameters on the composition for iodine-rich compositions. In contrast, the bromine-rich compositions show an unusually low dependency of the phase transition temperature on the degree of substitution.
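For reference, Vegard's law for the lattice parameter of the MAPb(I1−xBrx)3 solid solution, against which the reported deviations are measured (standard textbook form, not taken from this work):

```latex
a_{\mathrm{Vegard}}(x) \;=\; (1-x)\,a_{\mathrm{MAPbI_3}} \;+\; x\,a_{\mathrm{MAPbBr_3}},
\qquad
\Delta a(x) \;=\; a_{\mathrm{obs}}(x) \;-\; a_{\mathrm{Vegard}}(x).
```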
HexagDLy is a Python library extending the PyTorch deep learning framework with convolution and pooling operations on hexagonal grids. It aims to ease access to convolutional neural networks for applications that rely on hexagonally sampled data, such as those commonly found in ground-based astroparticle physics experiments.
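As an illustration of what makes hexagonal grids special for convolutions, here is a minimal sketch of hexagonal neighbour indexing in axial coordinates. This is not HexagDLy's API; the function names and coordinate convention are chosen for illustration:

```python
# Illustrative only: hexagonal neighbourhoods in axial coordinates (q, r),
# the kind of addressing a hexagonal convolution kernel must respect.
# (This is NOT HexagDLy's actual API.)

AXIAL_DIRECTIONS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def hex_neighbors(q, r):
    """Return the six axial-coordinate neighbours of hex cell (q, r)."""
    return [(q + dq, r + dr) for dq, dr in AXIAL_DIRECTIONS]

def hex_distance(a, b):
    """Grid distance between two cells in axial coordinates."""
    aq, ar = a
    bq, br = b
    return (abs(aq - bq) + abs(ar - br) + abs(aq + ar - bq - br)) // 2
```

The six-neighbour structure, in contrast to the eight-neighbour square-grid case, is exactly what hexagonal convolution and pooling kernels have to encode.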
Shape-memory polymer actuators often contain crystallizable polyester segments. Here, the influence of accelerated hydrolytic degradation on the actuation performance of copolymer networks based on oligo(epsilon-caprolactone) dimethacrylate (OCL) and n-butyl acrylate is studied. The semi-crystalline OCL was utilized as crosslinker with molecular weights of 2.3 and 15.2 kg·mol⁻¹ (ratio: 1:1 wt%), and n-butyl acrylate (25 wt% relative to OCL content) acted as softening agent, creating the polymer main-chain segments within the network architecture. The copolymer networks were programmed by 50% elongation and were degraded by means of alkaline hydrolysis using sodium hydroxide solution (pH = 13). Experiments were performed within the broad melting range of the actuators, at 40 degrees C. The degradation of the test specimens was monitored via the sample mass, which was reduced by 25 wt% within 105 d. As degradation products, fragments of OCL with molecular masses ranging from 400 to 50,000 g·mol⁻¹ could be detected by NMR spectroscopy and GPC measurements. The cleavage of ester groups in the OCL segments resulted in a decrease of the melting temperature (Tm) related to the actuator domains (amorphous at the degradation temperature), while simultaneously the Tm associated with the skeleton domains increased (semi-crystalline at the degradation temperature). The alkaline hydrolysis decreased the polymer chain orientation of the OCL domains until a random alignment of crystalline domains was obtained. This result was confirmed by cyclic thermomechanical actuation tests. The performance of directed movements decreased almost linearly as a function of degradation time, resulting in the loss of functionality once the orientation of polymer chains disappeared. Here, the actuators were able to provide reversible movements for up to 91 d when the accelerated bulk degradation procedure using alkaline hydrolysis (pH = 13) was applied.
Accordingly, a lifetime of more than one year can be expected under physiological conditions (pH = 7.4) when, e.g., artificial muscles for biomimetic robots are addressed as a potential application for this kind of shape-memory polymer actuator.
Electronic health (e-health) is one of the most popular applications of information and communication technologies, and it has contributed immensely to health delivery through the provision of quality health services and ubiquitous access at lower cost. Even though this mode of health service is increasingly becoming known and used in developing nations, these countries face a myriad of challenges when implementing and deploying e-health services on both small and large scales. It is estimated that the African population alone carries the highest percentage of the world's global disease burden despite a certain level of e-health adoption. This paper aims at analyzing the progress so far and the current state of e-health in developing countries, particularly Africa, and at proposing a framework for further improvement.
SiO2 is the main component of silicate melts and thus controls their network structure and physical properties. The compressibility and viscosities of melts at depth are governed by their short-range atomic and electronic structure. We measured the O K-edge and the Si L2,3-edge in silica up to 110 GPa using X-ray Raman scattering spectroscopy, and found a striking match to calculated spectra based on structures from molecular dynamics simulations. Between 20 and 27 GPa, Si[4] species are converted into a mixture of Si[5] and Si[6] species, and between 60 and 70 GPa, Si[6] becomes dominant at the expense of Si[5], with no further increase up to at least 110 GPa. Coordination higher than 6 is only reached beyond 140 GPa, corroborating results from Brillouin scattering. Network-modifying elements in silicate melts may shift this change in coordination to lower pressures, and thus magmas could be denser than residual solids at the depth of the core-mantle boundary.
A significant percentage of urban traffic is caused by the search for parking spots. One possible approach to improve this situation is to guide drivers along routes which are likely to have free parking spots. The task of finding such a route can be modeled as a probabilistic graph problem which is NP-complete. Thus, we propose heuristic approaches for solving this problem and evaluate them experimentally. For this, we use probabilities of finding a parking spot that are based on publicly available empirical data from TomTom International B.V. Additionally, we propose a heuristic that relies exclusively on conventional road attributes. Our experiments show that this algorithm comes within a factor of 1.3 of the baseline in our cost measure. Finally, we complement our experiments with results from a field study comparing the success rates of our algorithms against real human drivers.
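The flavour of such a heuristic can be sketched as follows: greedily extend a route with the edge offering the best success "rate" per unit length, within a length budget. The toy graph, probabilities, and scoring rule below are invented for illustration and are not the authors' algorithm:

```python
import math

# Toy road graph: edge -> (length, probability of finding a free spot on it).
# All numbers are made up for illustration.
GRAPH = {
    "A": {"B": (1.0, 0.2), "C": (1.5, 0.5)},
    "B": {"C": (1.0, 0.4), "A": (1.0, 0.2)},
    "C": {"A": (1.5, 0.5), "B": (1.0, 0.4)},
}

def greedy_parking_route(start, budget):
    """Greedily pick the next edge maximising -log(1 - p) per unit length,
    i.e. the biggest reduction in overall failure probability per metre."""
    route = [start]
    p_fail = 1.0
    node, remaining = start, budget
    while True:
        best = None
        for nxt, (length, p) in GRAPH[node].items():
            if length > remaining or not (0 < p < 1):
                continue
            gain = -math.log(1.0 - p) / length
            if best is None or gain > best[0]:
                best = (gain, nxt, length, p)
        if best is None:
            break
        _, nxt, length, p = best
        route.append(nxt)
        p_fail *= (1.0 - p)
        remaining -= length
        node = nxt
    return route, 1.0 - p_fail
```

For example, `greedy_parking_route("A", 2.5)` walks A→C→B and reports the cumulative probability of having found a spot along the way.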
Hot subdwarf B (sdB) stars are evolved core-helium-burning stars that have lost most of their hydrogen envelope due to binary interaction on the red giant branch (RGB). As sdB stars in wide binary systems can only be created by stable Roche lobe overflow, they are a great test sample for constraining the theoretical models of stable mass loss on the red giant branch. We present here the findings of a long-term monitoring program of wide sdB+MS binaries. We found two main features in the orbital parameters. The majority of the systems have eccentric orbits, with systems at longer orbital periods having higher eccentricities. As these systems have undergone mass loss near the tip of the RGB, tidal circularisation theory predicts them to be circularised. Our observations suggest that efficient eccentricity-pumping mechanisms are active during the mass loss phase. Secondly, we find a strong correlation between the mass ratio and the orbital period. Using binary evolution models, this relation is used to derive both an upper and a lower limit on the initial mass ratio at which RLOF will be stable. These limits depend on the core mass of the sdB progenitor.
This paper addresses the morpho-phonological, syntactic and pragmatic properties of postverbal subject constructions in Awing. Analogous to other inversion constructions in the Bantu literature (Marten & Van der Wal 2014), Awing has a construction in which the subject occurs immediately after the verb, resulting in a subject or sentence focus interpretation. Crucially, however, in Awing a VSX clause cannot host a subject marker, but must contain a certain le morpheme in sentence-initial position. Following Baker (2003) and Collins (2004), I argue that the subject marker triggers movement of the subject from Spec/vP, explaining why it is banned in VSX clauses. I further claim that although the subject is interpreted as focus, it is not in a lower focus phrase (Belletti 2004), but rather trapped in Spec/vP. Awing postverbal subject constructions also exhibit verb doubling: VSVO. I argue that verb doubling is due to Case requirements: in canonical SVO clauses the subject marker and the verb value the nominative and accusative Cases, respectively. In VSVO constructions, by contrast, the verb values both nominative and accusative Cases, thus forcing syntax to spell out two copies of the same verb.
Audit - and then what?
(2019)
Current trends such as digital transformation, the Internet of Things, or Industry 4.0 are challenging the majority of learning factories. Regardless of whether it is a conventional learning factory, a model factory, or a digital learning factory, traditional approaches such as the monotonous execution of specific instructions do not satisfy learners' needs, market requirements, or, especially, current technological developments. Contemporary teaching environments need a clear strategy, a road to follow to successfully cope with the changes and develop towards digitized learning factories. This demand-driven necessity of transformation leads to another obstacle: assessing the status quo and developing and implementing adequate action plans. Within this paper, details of a maturity-based audit of the hybrid learning factory in the Research and Application Centre Industry 4.0 and a roadmap derived from it for the digitization of a learning factory are presented.
Subject-oriented learning
(2019)
The transformation to a digitized company changes not only the work but also the social context for employees and requires, inter alia, new knowledge and skills from them. Additionally, individual action problems arise. This contribution proposes the subject-oriented learning theory, in which the employees' action problems are the starting point of training activities in learning factories. In this contribution, the subject-oriented learning theory is exemplified and the respective advantages for vocational training in learning factories are pointed out both theoretically and practically. Thereby, especially the individual action problems of learners and the infrastructure are emphasized as starting points for learning processes and competence development.
High-dimensional data is particularly useful for data analytics research. In the healthcare domain, for instance, high-dimensional data analytics has been used successfully for drug discovery. Yet, in order to adhere to privacy legislation, data analytics service providers must guarantee anonymity for data owners. In the context of high-dimensional data, ensuring privacy is challenging because increased data dimensionality must be matched by an exponential growth in the size of the data to avoid sparse datasets. Syntactically anonymising sparse datasets with methods that rely on statistical significance makes obtaining sound and reliable results a challenge. As such, strong privacy is only achievable at the cost of high information loss, rendering the data unusable for data analytics. In this paper, we make two contributions to addressing this problem from both the privacy and information loss perspectives. First, we show that by identifying dependencies between attribute subsets we can eliminate privacy-violating attributes from the anonymised dataset. Second, to minimise information loss, we employ a greedy search algorithm to determine and eliminate maximal partial unique attribute combinations. Thus, one only needs to find the minimal set of identifying attributes to prevent re-identification. Experiments on a health cloud based on the SAP HANA platform using a semi-synthetic medical history dataset comprising 109 attributes demonstrate the effectiveness of our approach.
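The idea of searching for unique attribute combinations (quasi-identifiers) can be sketched as follows; this brute-force miniature stands in for the paper's greedy search and uses invented toy data:

```python
from itertools import combinations

def unique_combinations(records, attrs, max_size=2):
    """Return attribute subsets (up to max_size) whose value tuple is
    unique for at least one record -- candidate re-identifying combinations.
    Brute-force sketch; a real system would search greedily and prune."""
    found = []
    for size in range(1, max_size + 1):
        for combo in combinations(attrs, size):
            counts = {}
            for rec in records:
                key = tuple(rec[a] for a in combo)
                counts[key] = counts.get(key, 0) + 1
            if any(c == 1 for c in counts.values()):
                found.append(combo)
    return found
```

Any subset flagged here singles out at least one individual, so an anonymiser must suppress or generalise it.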
Introduction to CTA Science
(2019)
Ground-based gamma-ray astronomy is a young field with enormous scientific potential. The possibility of astrophysical measurements at teraelectronvolt (TeV) energies was demonstrated in 1989 with the detection of a clear signal from the Crab nebula above 1 TeV with the Whipple 10 m imaging atmospheric Cherenkov telescope (IACT). Since then, the instrumentation for, and techniques of, astronomy with IACTs have evolved to the extent that a flourishing new scientific discipline has been established, with the detection of more than 150 sources and a major impact in astrophysics and more widely in physics. The current major arrays of IACTs, H.E.S.S., MAGIC, and VERITAS, have demonstrated the huge physics potential at these energies as well as the maturity of the detection technique. Many astrophysical source classes have been established, some with many well-studied individual objects, but there are indications that the known sources represent the tip of the iceberg in terms of both individual objects and source classes. The Cherenkov Telescope Array (CTA) will transform our understanding of the high-energy universe and will explore questions in physics of fundamental importance. As a key member of the suite of new and upcoming major astroparticle physics experiments and observatories, CTA will exploit synergies with gravitational wave and neutrino observatories as well as with classical photon observatories. CTA will address a wide range of major questions in and beyond astrophysics, which can be grouped into three broad themes…
Nowadays, structural health monitoring (SHM) of critical infrastructures is considered of primary importance, especially for managing transport infrastructure; however, most current SHM methodologies are based on point sensors that show various limitations relating to their spatial positioning capabilities, cost of development, and measurement range. This publication describes the progress in the SENSKIN EC co-funded research project, which is developing a dielectric-elastomer sensor, formed from a large, highly extensible capacitance-sensing membrane supported by advanced micro-electronic circuitry, for monitoring transport infrastructure bridges. The sensor under development provides spatial measurements of strain in excess of 10%, while the sensing system is being designed to be easy to install, require low power in operation, require simple signal processing, and have the ability to self-monitor and report. An appropriate wireless sensor network is also being designed and developed, supported by local gateways for the required data collection and exploitation. SENSKIN also develops a Decision-Support System (DSS) for proactive condition-based structural interventions under normal operating conditions and reactive emergency intervention following an extreme event. The latter is supported by a life-cycle-costing (LCC) and life-cycle-assessment (LCA) module responsible for the total internal and external costs of the identified bridge rehabilitation and the analysis of options, yielding figures for the assessment of the economic implications of the bridge rehabilitation work, the environmental impacts of the bridge rehabilitation options, and the associated secondary effects. The overall monitoring system will be evaluated and benchmarked on actual bridges of the Egnatia Highway (Greece) and the Bosporus Bridge (Turkey).
Recent years have seen a considerable broadening of the ambitions in urban sustainability policy-making. With its Sustainable Development Goal (SDG) 11, "Making cities and human settlements inclusive, safe, resilient and sustainable", the 2030 Agenda stresses the critical role of cities in achieving sustainable development. In the context of SDG 17 on partnerships, emphasis is also placed on the role of researchers and other scientific actors as change agents in the sustainability transformation. Against this backdrop, this article sheds light on different pathways through which science can contribute to urban sustainability. In particular, we discern four forms of science-policy-society interactions as key vectors: 1. sharing knowledge and providing scientific input to urban sustainability policy-making; 2. implementing transformative research projects; 3. contributing to local capacity building; and 4. self-governing towards sustainability. The pathways of influence are illustrated with empirical examples, and their interlinkages and limitations are discussed. We contend that there are numerous opportunities for actors from the field of sustainability science to engage with political and societal actors to enhance sustainable development at the local level.
The Author as Researcher
(2019)
This article proposes a new perspective on avant-garde travel writing through the lens of scientific field work, investigating these new writing techniques in Boris Pil’niak’s expedition prose. In the 1920s, the researching writer represents a hidden, but influential counterpart to the widely propagated figure of the working writer. While the author as producer combines word and deed in an operative act, the author as researcher investigates the production of knowledge. This entails revising the centrality of facts. Literature as artistic research subverts factography by going beyond the horizons of veristic data registration to include uncharted realms and vague possibilities. This exploration leads to specific genres: the author as researcher tries his hand at a kind of laboratory text, a prolific genre at the intersection of testing equipment, recording media, and hypothetical thought. Not confined to a sterile lab, avant-garde writer-researchers, as members of research expeditions, oscillate between their home writing desks and the remote depths of the emerging USSR. At the same time, they explore writing practices situated between data acquisition, sampling, fact-finding, observation and recording.
Signals for 2 degrees C
(2019)
The targets of the Paris Agreement make it necessary to redirect finance flows towards sustainable, low-carbon infrastructures and technologies. Currently, the potential of institutional investors to help finance this transition is widely discussed. Thus, this paper takes a closer look at influence factors for green investment decisions of large European insurance companies. With a mix of qualitative and quantitative methods, the importance of policy, market and civil society signals is evaluated. In summary, respondents favor measures that promote green investment, such as feed-in tariffs or adjustments of capital charges for green assets, over ones that make carbon-intensive investments less attractive, such as the phase-out of fossil fuel subsidies or a carbon price. While investors currently see a low impact of the carbon price, they rank a substantial reform as an important signal for the future. Respondents also emphasize that policy signals have to be coherent and credible to coordinate expectations.
A Fuzzy Rule-Based Model for Remote Monitoring of Preterm in the Intensive Care Unit of Hospitals
(2019)
The use of remote patient monitoring (RPM) systems to monitor critically ill patients in the Intensive Care Unit (ICU) has enabled quality, real-time healthcare management. Fuzzy logic as an approach to designing RPM systems provides a means for encapsulating the subjective decision-making process of medical experts in an algorithm suitable for computer implementation. In this paper, a remote monitoring system for preterm infants in neonatal ICU incubators is modeled and simulated. The model was designed with 4 input variables (body temperature, heart rate, respiratory rate, and oxygen saturation level) and 1 output variable (the action performed, represented as ACT). ACT decides whether an alert is generated and determines the message displayed when a notification is required. ACT classifies the clinical priority of the monitored preterm infant into 5 different fields: code blue, code red, code yellow, code green, and code black. The model was simulated using the fuzzy logic toolbox of MATLAB R2015A. About 216 IF-THEN rules were formulated to monitor the input data fed into the model. The performance of the model was evaluated using the confusion matrix to determine the model's accuracy, precision, sensitivity, specificity, and false alarm rate. The experimental results obtained show that the fuzzy-based system is capable of producing satisfactory results when used for monitoring and classifying the clinical statuses of neonates in ICU incubators.
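A crisp (non-fuzzy) miniature of the IF-THEN rule idea is sketched below; the thresholds and rule order are invented for illustration and are neither clinical guidance nor the authors' rule base, which uses fuzzy membership functions rather than hard cut-offs:

```python
# Crisp sketch of an IF-THEN monitoring rule set mapping vital signs to
# alert codes. Thresholds are illustrative only, NOT clinical values.
def classify_preterm(temp_c, heart_rate, resp_rate, spo2):
    """Map vital signs to one of the five alert codes."""
    if spo2 < 85 or heart_rate < 80:
        return "code blue"    # life-threatening
    if temp_c > 38.5 or heart_rate > 200:
        return "code red"     # urgent
    if resp_rate > 70 or spo2 < 90:
        return "code yellow"  # watch closely
    if 36.5 <= temp_c <= 37.5 and 100 <= heart_rate <= 180:
        return "code green"   # normal
    return "code black"       # unclassified / possible sensor fault
```

A fuzzy implementation would replace each hard threshold with overlapping membership functions and combine the roughly 216 rules by fuzzy inference before defuzzifying to a single ACT value.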
Network science is driven by the question which properties large real-world networks have and how we can exploit them algorithmically. In the past few years, hyperbolic graphs have emerged as a very promising model for scale-free networks. The connection between hyperbolic geometry and complex networks gives insights in both directions: (1) Hyperbolic geometry forms the basis of a natural and explanatory model for real-world networks. Hyperbolic random graphs are obtained by choosing random points in the hyperbolic plane and connecting pairs of points that are geometrically close. The resulting networks share many structural properties for example with online social networks like Facebook or Twitter. They are thus well suited for algorithmic analyses in a more realistic setting. (2) Starting with a real-world network, hyperbolic geometry is well-suited for metric embeddings. The vertices of a network can be mapped to points in this geometry, such that geometric distances are similar to graph distances. Such embeddings have a variety of algorithmic applications ranging from approximations based on efficient geometric algorithms to greedy routing solely using hyperbolic coordinates for navigation decisions.
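A minimal sketch of the hyperbolic random graph construction described above (native-representation distance formula; the parameter choices are illustrative):

```python
import math
import random

def hyperbolic_distance(r1, phi1, r2, phi2):
    """Distance between two points in the native hyperbolic-plane
    representation (radial coordinate r, angle phi)."""
    dphi = math.pi - abs(math.pi - abs(phi1 - phi2))
    arg = (math.cosh(r1) * math.cosh(r2)
           - math.sinh(r1) * math.sinh(r2) * math.cos(dphi))
    return math.acosh(max(1.0, arg))  # guard against rounding below 1

def hyperbolic_random_graph(n, radius, alpha=1.0, seed=0):
    """Sample n points in a hyperbolic disk of the given radius
    (radial density ~ sinh(alpha * r), via inverse-transform sampling)
    and connect pairs whose hyperbolic distance is below `radius`."""
    rng = random.Random(seed)
    points = []
    for _ in range(n):
        u = rng.random()
        r = math.acosh(1 + u * (math.cosh(alpha * radius) - 1)) / alpha
        points.append((r, rng.uniform(0, 2 * math.pi)))
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if hyperbolic_distance(*points[i], *points[j]) < radius]
    return points, edges
```

Because most sampled points lie near the disk boundary while a few sit near the centre, the central points collect many edges, which is the geometric origin of the scale-free degree distribution.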
JavaScript is the most popular programming language for web applications. Static analysis of JavaScript applications is highly challenging due to its dynamic language constructs and event-driven asynchronous executions, which also give rise to many security-related bugs. Several static analysis tools to detect such bugs exist; however, research has not yet reported much on the precision and scalability trade-off of these analyzers. As a further obstacle, JavaScript programs structured in Node.js modules need to be collected for analysis, but existing bundlers are either specific to their respective analysis tools or not particularly suitable for static analysis.
Mobile operating systems, such as Google's Android, have become a fixed part of our daily lives and are entrusted with a plethora of private information. Accordingly, their data protection mechanisms have been improved steadily over the last decade and, in particular for Android, the research community has explored various enhancements and extensions to the access control model. However, the vast majority of those solutions have been concerned with controlling access to data, but equally important is the question of how to control the flow of data once it is released. Ignoring control over the dissemination of data between applications, or between components of the same app, opens the door for attacks such as permission re-delegation or privacy-violating third-party libraries. Controlling information flows is a long-standing problem, and one of the most recent and practically oriented approaches to information flow control is secure multi-execution.
In this paper, we present Ariel, the design and implementation of an IFC architecture for Android based on the secure multi-execution of apps. Ariel demonstrably extends Android's system with support for executing multiple instances of apps, and it is equipped with a policy lattice derived from the protection levels of Android's permissions as well as an I/O scheduler to achieve control over data flows between application instances. We demonstrate how secure multi-execution with Ariel can help to mitigate two prominent attacks on Android, permission re-delegations and malicious advertisement libraries.
Internet connectivity of cloud services is of exceptional importance for both their providers and consumers. This article demonstrates the outlines of a method for measuring cloud-service connectivity at the internet protocol level from a client's perspective. For this, we actively collect connectivity data via traceroute measurements from PlanetLab to several major cloud services. Furthermore, we construct graph models from the collected data, and analyse the connectivity of the services based on important graph-based measures. Then, random and targeted node removal attacks are simulated, and the corresponding vulnerability of cloud services is evaluated. Our results indicate that cloud service hosts are, on average, much better connected than average hosts. However, when interconnecting nodes are removed in a targeted manner, cloud connectivity is dramatically reduced.
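The removal experiment can be sketched as follows: compare the surviving largest connected component after random versus degree-targeted node removal (toy code, not the study's measurement pipeline):

```python
import random
from collections import defaultdict

def largest_component(nodes, edges):
    """Size of the largest connected component of the induced subgraph."""
    adj = defaultdict(set)
    for u, v in edges:
        if u in nodes and v in nodes:
            adj[u].add(v)
            adj[v].add(u)
    seen, best = set(), 0
    for start in nodes:
        if start in seen:
            continue
        stack, comp = [start], 0
        seen.add(start)
        while stack:
            node = stack.pop()
            comp += 1
            for nb in adj[node] - seen:
                seen.add(nb)
                stack.append(nb)
        best = max(best, comp)
    return best

def attack(nodes, edges, k, targeted=True, seed=0):
    """Remove k nodes (highest-degree first if targeted, else random)
    and return the size of the surviving largest component."""
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    if targeted:
        victims = sorted(nodes, key=lambda n: -degree[n])[:k]
    else:
        victims = random.Random(seed).sample(sorted(nodes), k)
    return largest_component(set(nodes) - set(victims), edges)
```

On a hub-dominated topology (as the article finds for cloud connectivity), removing the few highest-degree interconnecting nodes fragments the graph far faster than random removal does.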
Detect me if you can
(2019)
Spam bots have become a threat to online social networks with their malicious behavior, posting misinformation messages and influencing online platforms to fulfill their motives. As spam bots have become more advanced over time, creating algorithms to identify bots remains an open challenge. Learning low-dimensional embeddings for nodes in graph-structured data has proven to be useful in various domains. In this paper, we propose a model based on graph convolutional neural networks (GCNNs) for spam bot detection. Our hypothesis is that to better detect spam bots, in addition to defining a feature set, the social graph must also be taken into consideration. GCNNs are able to leverage both the features of a node and the aggregated features of a node's neighborhood. We compare our approach with two methods, one that works solely on a feature set and one that works solely on the structure of the graph. To our knowledge, this work is the first attempt to use graph convolutional neural networks in spam bot detection.
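The core GCNN idea, combining a node's own features with those of its neighbourhood, can be sketched without any learned weights (illustrative only; a real layer multiplies by a trainable weight matrix and applies a nonlinearity):

```python
# Minimal sketch of one graph-convolution propagation step:
# each node averages its feature vector with its neighbours' (self-loop included).
def gcn_aggregate(features, edges):
    """features: list of feature vectors, one per node; edges: (u, v) pairs."""
    n = len(features)
    neighbors = {i: {i} for i in range(n)}  # self-loops
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    dim = len(features[0])
    out = []
    for i in range(n):
        agg = [0.0] * dim
        for j in neighbors[i]:
            for d in range(dim):
                agg[d] += features[j][d]
        out.append([x / len(neighbors[i]) for x in agg])
    return out
```

Stacking such steps lets each node's representation absorb information from progressively larger graph neighbourhoods, which is why the social graph helps beyond per-account features alone.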
Spontaneous and induced platelet aggregation in apparently healthy subjects in relation to age
(2019)
Thrombotic disorders remain the leading cause of mortality and morbidity, despite the fact that anti-platelet therapies and vascular implants are successfully used today. As life expectancy is increasing in western societies, specific knowledge about the processes leading to thrombosis in the elderly is essential for an adequate therapeutic management of platelet dysfunction and for tailoring blood-contacting implants. This study addresses the limited available data on platelet function in apparently healthy subjects in relation to age, particularly in view of subjects of old age (80-98 years). Apparently healthy subjects between 20 and 98 years were included in this study. Platelet function was assessed by light transmission aggregometry and comprised experiments on spontaneous as well as ristocetin-, ADP- and collagen-induced platelet aggregation. The data of this study revealed a non-linear increase with age in the maximum spontaneous platelet aggregation (from 3.3% +/- 3.3% to 10.9% +/- 5.9%). The maximum induced aggregation decreased with age for ristocetin (from 85.8% +/- 7.2% to 75.0% +/- 7.8%), ADP (from 88.5% +/- 4.6% to 64.8% +/- 7.3%) and collagen (from 89.5% +/- 3.0% to 64.0% +/- 4.0%) in a non-linear manner (linear regression analysis). These observations indicate that during aging, circulating platelets become increasingly activated but lose their full aggregatory potential, a phenomenon that was earlier termed "platelet exhaustion". In this study we extended the limited existing data on spontaneous and induced platelet aggregation of apparently healthy donors above the age of 75 years. The presented data indicate that the extrapolation of data from a middle-aged group does not necessarily predict platelet function in apparently healthy subjects of old age. This emphasizes the need for respective studies to improve our understanding of thrombotic processes in elderly humans.
Lipid-containing adipocytes can dedifferentiate into fibroblast-like cells under appropriate culture conditions, yielding what are known as dedifferentiated fat (DFAT) cells. However, the relatively low dedifferentiation efficiency of established protocols limits their widespread application. In this study, we found that adipocyte dedifferentiation could be promoted via periodic exposure to cold (10 °C) in vitro. The lipid droplets in mature adipocytes were reduced by culturing the cells in periodic cooling/heating cycles (10-37 °C) for one week. The periodic temperature change led to the down-regulation of the adipogenic genes (FABP4, Leptin) and up-regulation of the mitochondrial uncoupling related genes (UCP1, PGC-1 alpha, and PRDM16). In addition, enhanced expression of the cell proliferation marker Ki67 was observed in the dedifferentiated fibroblast-like cells after periodic exposure to cold, as compared to cells cultured at 37 °C. Our in vitro model provides a simple and effective approach to promote lipolysis and can be used to improve the dedifferentiation efficiency of adipocytes towards multipotent DFAT cells.
Above and underground hydrological processes depend on soil moisture (SM) variability, driven by different environmental factors that are seldom well monitored, leading to a misunderstanding of soil water temporal patterns. This study investigated the stability of SM temporal dynamics across different monitoring temporal resolutions around the border between two soil types in a tropical watershed. Four locations were instrumented in a small-scale watershed (5.84 km²) within the tropical coast of Northeast Brazil, encompassing different soil types (Espodossolo Humiluvico or Carbic Podzol, and Argissolo Vermelho-Amarelo or Haplic Acrisol), land covers (Atlantic Forest, bush vegetation, and grassland) and topographies (flat and moderate slope). The SM was monitored at a temporal resolution of one hour along the 2013-2014 hydrological year and then resampled at resolutions of 6 h, 12 h, 1 day, 2 days, 4 days, 7 days, and 15 days. Descriptive statistics, temporal variability, time-stability ranking, and hierarchical clustering revealed uneven associations among SM time components. The results show that the time-invariant component ruled SM temporal variability over the time-varying parcel, at both high and low temporal resolutions. Time-steps longer than 2 days affected the mean statistical metrics of the SM time-variant parcel. Additionally, SM at downstream and upstream sites behaved differently, suggesting that the temporal mean was regulated by steady soil properties (slope, restrictive layer, and soil texture), whereas their temporal anomalies were driven by climate (rainfall) and hydrogeological (groundwater level) factors. Therefore, it is concluded that around the border between tropical soil types, the distinct behaviour of time-variant and time-invariant components of SM time series reflects different combinations of their soil properties.
Kyub
(2019)
We present an interactive editing system for laser cutting called kyub. Kyub allows users to create models efficiently in 3D, which it then unfolds into the 2D plates laser cutters expect. Unlike earlier systems, such as FlatFitFab, kyub affords construction based on closed box structures, which allows users to turn very thin material, such as 4 mm plywood, into objects capable of withstanding large forces, such as chairs users can actually sit on. To afford such sturdy construction, every kyub project begins with a simple finger-joint "boxel"-a structure we found to be capable of withstanding over 500 kg of load. Users then extend their model by attaching additional boxels. Boxels merge automatically, resulting in larger, yet equally strong structures. While the concept of stacking boxels allows kyub to offer the strong affordance and ease of use of a voxel-based editor, boxels are not confined to a grid and readily combine with kyub's various geometry deformation tools. In our technical evaluation, objects built with kyub withstood hundreds of kilograms of load. In our user study, non-engineers rated the learnability of kyub 6.1/7.
In this paper, we establish the underlying foundations of mechanisms that are composed of cell structures-known as metamaterial mechanisms. Such metamaterial mechanisms were previously shown to implement complete mechanisms in the cell structure of a 3D printed material, without the need for assembly. However, their design is highly challenging. A mechanism consists of many cells that are interconnected and impose constraints on each other. This leads to non-obvious and non-linear behavior of the mechanism, which impedes user design. In this work, we investigate the underlying topological constraints of such cell structures and their influence on the resulting mechanism. Based on these findings, we contribute a computational design tool that automatically creates a metamaterial mechanism from user-defined motion paths. This tool is only feasible because our novel abstract representation of the global constraints greatly reduces the search space of possible cell arrangements.
When it comes to autobiographical narratives, the most spontaneous and natural manner of telling is preferable. But neither individually told narratives nor those grounded in the communicative repertoire of a social group are easily comparable: a clearly identifiable tertium comparationis is mandatory. We present the results of an experimental ‘Narrative Priming’ setting with French students. A potentially underlying model of narrating from personal experience was activated via a narrative prime, and in a second step the participants were asked to tell a narrative of their own. The analysis focuses on similarities and differences between the primes and the students’ narratives. The results give evidence for the possibility of eliciting a set of comparable narratives via a prime, and of activating an underlying narrative template. Meaningful differences are discussed as generational and age-related styles. The transcriptions from the participants who authorized their publication are available online.
The "Bachelor Project"
(2019)
One of the challenges of educating the next generation of computer scientists is to teach them to become team players who are able to communicate and interact not only with different IT systems, but also with coworkers and customers from a non-IT background. The “bachelor project” is a project based on teamwork and close collaboration with selected industry partners. The authors have hosted some of the teams since the spring term of 2014/15. In the paper at hand we explain and discuss this concept and evaluate its success based on students' evaluations and reports. Furthermore, the technology stack used by the teams is evaluated in order to understand how self-organized students work in IT-related projects. We will show that, and why, the bachelor project is the most successful educational format in the perception of the students, and how these positive results can be further improved by the mentors.
We can currently observe a transformation of our technical world into a networked one, in which embedded systems not only interact with the physical world but are also interconnected as nodes in the cyber world. In parallel, there is nowadays a strong trend to employ artificial intelligence techniques, in particular machine learning, to make software behave smartly. Often, cyber-physical systems must be self-adaptive at the level of the individual system in order to operate as elements in open, dynamic, and deviating overall structures and to adapt to open and dynamic contexts while being developed, operated, evolved, and governed independently.
In this presentation, we will first discuss the envisioned future scenarios for cyber-physical systems, with an emphasis on the synergies networking can offer, and then characterize the challenges that result for the design, production, and operation of these systems. We will then discuss to what extent our current capabilities, in particular concerning software engineering, match these challenges and where substantial improvements in software engineering are crucial. In today's software engineering for embedded systems, models are used to plan systems upfront in order to maximize envisioned properties on the one hand and minimize cost on the other. When applying the same ideas to software for smart cyber-physical systems, it soon turned out that for these systems there often exist somewhat subtler links between the involved models and the requirements, users, and environment. Self-adaptation and runtime models have been advocated as concepts to cover the demands that result from these subtler links. Lately, both trends have been brought together more thoroughly by the notion of self-aware computing systems. We will review the underlying causes, discuss some of our work in this direction, and outline related open challenges and potential for future approaches to software engineering for smart cyber-physical systems.
MOOCs in Secondary Education
(2019)
Computer science education in German schools is often less than optimal. It is only mandatory in a few of the federal states and there is a lack of qualified teachers. As a MOOC (Massive Open Online Course) provider with a German background, we developed the idea to implement a MOOC addressing pupils in secondary schools to fill this gap. The course targeted high school pupils and enabled them to learn the Python programming language. In 2014, we successfully conducted the first iteration of this MOOC with more than 7000 participants. However, the share of pupils in the course was not quite satisfactory. So we conducted several workshops with teachers to find out why they had not used the course to the extent that we had imagined. The paper at hand explores and discusses the steps we have taken in the following years as a result of these workshops.
Metamorphic geology
(2019)
From object to process
(2019)
One of the most difficult tasks today is trying to grasp the presence of computing. The almost ubiquitous and diverse forms of networked computers (in all their stationary, mobile, embedded, and autonomous modes) create a nearly overwhelming complexity. To speak of what is simultaneously evasive and present here, the paper proposes to reconsider the concept of interface, its historical roots, and its heuristic advantages for an analysis and critique of the current and especially everyday spread of computerization. The question of interfaces leads to isolable conditions and processes of conduction, as well as to the complexity of the cooperation formed by them. It opens both an investigative horizon and a mode of analysis, which always asks for further interface levels involved in the phenomenon currently under investigation. As an example, the paper turns to the displacement of the file with the launch of the iPhone in 2007 and its comeback in 2017 with the "Files" apps. Both developments are profoundly related to the establishment of computers as permanently networked machines, whereby their functionality, depresentations, and ideology come into focus.
This book is about the building of alliances and about joint activities between two groups of social movement actors ascribed increasing relevance for the functioning and the eventual amendment of democratic capitalism. The chapters provide a well-balanced mix of theoretical and empirical accounts on the political, social and economic catalysts behind the changing motives finding expression in a multitude of novel types of joint collective action and inter-organizational alliances. The contributors to this volume go beyond attempting to place unions, movements, crises, precariousness, protests and coalitions at the centre of the research. Instead, they focus on actors who themselves transcend clear-cut social camps. They look at the values and motives underlying collective action by both types of actors as much as at their structural and strategic properties, and inter-organizational relations and networks. This creates a fresh, genuine and historically valid account of the incompatibilities and the commonalities of movements and unions, and of prospects for inter-organizational learning.
Preface
(2019)
A Cloud Storage Broker (CSB) provides value-added cloud storage services for enterprise usage by leveraging a multi-cloud storage architecture. However, this raises several challenges for managing resources and their access control across multiple Cloud Service Providers (CSPs) for authorized CSB stakeholders. In this paper we propose a unified cloud access control model that provides an abstraction of the CSPs' services for centralized and automated cloud resource and access control management in multiple CSPs. Our proposal offers role-based access control for CSB stakeholders to access cloud resources by assigning the necessary privileges and access control lists for cloud resources and CSB stakeholders, respectively, following the privilege separation concept and the least privilege principle. We implement our unified model in a CSB system called CloudRAID for Business (CfB); the evaluation shows that it provides system- and cloud-level security services for CfB and centralized resource and access control management in multiple CSPs.
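The combination of role-based access control with per-resource access control lists described in the abstract can be illustrated with a small sketch. All class, role, and resource names below are illustrative assumptions, not the actual CloudRAID for Business API: a request succeeds only if one of the user's roles carries the privilege and the resource's ACL also lists it, reflecting the least-privilege principle.

```python
class AccessControl:
    """Minimal sketch: roles grant privileges, resources carry ACLs."""

    def __init__(self):
        self.role_privileges = {}  # role -> set of privileges
        self.user_roles = {}       # user -> set of roles
        self.resource_acl = {}     # resource -> set of permitted privileges

    def grant_privilege(self, role, privilege):
        self.role_privileges.setdefault(role, set()).add(privilege)

    def assign_role(self, user, role):
        self.user_roles.setdefault(user, set()).add(role)

    def set_acl(self, resource, privileges):
        self.resource_acl[resource] = set(privileges)

    def is_allowed(self, user, privilege, resource):
        # Least privilege: the privilege must come from a role AND be
        # explicitly permitted by the resource's ACL.
        user_privs = set()
        for role in self.user_roles.get(user, set()):
            user_privs |= self.role_privileges.get(role, set())
        return (privilege in user_privs
                and privilege in self.resource_acl.get(resource, set()))


# Hypothetical usage: an "auditor" may read a bucket but not write it.
ac = AccessControl()
ac.grant_privilege("auditor", "read")
ac.assign_role("alice", "auditor")
ac.set_acl("bucket1", ["read"])
```

The separation of the two mappings mirrors the privilege separation idea: role administration and resource administration can be managed independently.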
The idea for this book arose out of discontent with essentially three shortcomings in the recent literature on the present state of politics in Western democracies and on forms of collective action. The general message resulting from research in the political economy and in forms of democracy is disastrous. We are confronted with a mix of decline, fragmentation, individualization, diminishing trust in institutions hollowed out from the inside, the hoarding of power by small political and economic elites, and the increasing marginalization and pauperization of vast parts of the population. While the accuracy of these trends shall not be called into question, it is noteworthy, and this is the first shortcoming, to what extent that literature tends to neglect one crucial aspect, namely the capacity of those suffering most from the above malaise to come together and search for possibilities of collectively halting, reversing, or otherwise influencing decline in defence of their needs and interests. The second shortcoming concerns the literatures on precisely these actors, namely established trade union research and research on social movements. While both fields acknowledge the extent of the current crisis and have produced numerous books and articles on how their respective research targets are reacting to it, the situation continues to remain one of indifference. There hardly is cross-fertilization beyond the boundaries of established research traditions. At the same time, empirical reality seems to suggest that forms of joint activity by both types of actors may have become more advanced than theoretical reflection is so far prepared to admit. As observed by Fantasia and Stepan-Norris (2004: 561), students of each of the two forms of collective action "(…) mutually neglect each other". At best, trade union researchers and social movement research envisage their counterpart in purely instrumental
Increasing demand for analytical processing capabilities can be managed by replication approaches. However, to evenly balance the replicas' workload shares while at the same time minimizing the data replication factor is a highly challenging allocation problem. As optimal solutions are only applicable for small problem instances, effective heuristics are indispensable. In this paper, we test and compare state-of-the-art allocation algorithms for partial replication. By visualizing and exploring their (heuristic) solutions for different benchmark workloads, we are able to derive structural insights and to detect an algorithm's strengths as well as its potential for improvement. Further, our application enables end-to-end evaluations of different allocations to verify their theoretical performance.
Workload-Driven Fragment Allocation for Partially Replicated Databases Using Linear Programming
(2019)
In replication schemes, replica nodes can process read-only queries on snapshots of the master node without violating transactional consistency. By analyzing the workload, we can identify query access patterns and replicate data depending on its access frequency. In this paper, we define a linear programming (LP) model to calculate the set of partial replicas with the lowest overall memory capacity while evenly balancing the query load. Furthermore, we propose a scalable decomposition heuristic to calculate solutions for larger problem sizes. While guaranteeing the same performance as state-of-the-art heuristics, our decomposition approach calculates allocations with up to 23% lower memory footprint for the TPC-H benchmark.
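The trade-off the LP model optimizes, replicating as few fragments as possible while keeping query load balanced across replicas, can be illustrated with a toy greedy heuristic. This sketch is an illustrative assumption, not the authors' LP model or decomposition heuristic: it places each query on the replica node where it requires the fewest additional fragments, breaking ties by the lightest load.

```python
def allocate(queries, num_nodes):
    """Toy greedy fragment allocation.

    queries: list of (cost, fragment_set) tuples describing the workload.
    Returns a list of per-node dicts with the accumulated query load and
    the set of fragments that must be replicated to that node.
    """
    nodes = [{"load": 0.0, "fragments": set()} for _ in range(num_nodes)]
    # Place the heaviest queries first; prefer the node that needs the
    # fewest new fragments, then the node with the lightest load.
    for cost, frags in sorted(queries, key=lambda q: -q[0]):
        best = min(nodes,
                   key=lambda n: (len(frags - n["fragments"]), n["load"]))
        best["load"] += cost
        best["fragments"] |= frags
    return nodes


# Hypothetical workload: three queries over fragments a, b, c on two nodes.
nodes = allocate([(3, {"a", "b"}), (3, {"a"}), (2, {"c"})], 2)
```

Unlike the exact LP, such a greedy can minimize replication at the expense of load balance (or vice versa), which is precisely why the paper formulates the problem as an optimization model.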
Data analytics are moving beyond the limits of a single data processing platform. A cross-platform query optimizer is necessary to enable applications to run their tasks over multiple platforms efficiently and in a platform-agnostic manner. For the optimizer to be effective, it must consider data movement costs across different data processing platforms. In this paper, we present the graph-based data movement strategy used by RHEEM, our open-source cross-platform system. In particular, we (i) model the data movement problem as a new graph problem, which we prove to be NP-hard, and (ii) propose a novel graph exploration algorithm, which allows RHEEM to discover multiple hidden opportunities for cross-platform data processing.
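One ingredient of such graph-based data movement planning can be sketched as a shortest-path search: nodes are platform-specific data formats, weighted edges are conversion operators, and the planner looks for the cheapest conversion chain. RHEEM's actual problem is richer (and NP-hard in general, per the abstract); the toy conversion graph and format names below are illustrative assumptions covering only the single-source, single-target case.

```python
import heapq

def cheapest_conversion(graph, source, target):
    """Dijkstra over a conversion graph.

    graph: {format: [(neighbor_format, cost), ...]}
    Returns (total_cost, path) or (inf, []) if no conversion exists.
    """
    queue = [(0, source, [source])]
    seen = set()
    while queue:
        cost, fmt, path = heapq.heappop(queue)
        if fmt == target:
            return cost, path
        if fmt in seen:
            continue
        seen.add(fmt)
        for nxt, edge_cost in graph.get(fmt, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + edge_cost, nxt, path + [nxt]))
    return float("inf"), []


# Hypothetical conversion graph: moving data from a Spark RDD to an HDFS
# file directly costs 5, but going via a Java stream costs only 2 + 2.
graph = {
    "spark_rdd": [("hdfs_file", 5), ("java_stream", 2)],
    "java_stream": [("hdfs_file", 2)],
    "hdfs_file": [],
}
cost, path = cheapest_conversion(graph, "spark_rdd", "hdfs_file")
```

The hidden opportunities mentioned in the abstract correspond to such indirect routes: the direct edge is not always the cheapest way to move data between platforms.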
An efficient selection of indexes is indispensable for database performance. For large problem instances with hundreds of tables, existing approaches are not suitable: They either exhibit prohibitive runtimes or yield far from optimal index configurations by strongly limiting the set of index candidates or not handling index interaction explicitly. We introduce a novel recursive strategy that does not exclude index candidates in advance and effectively accounts for index interaction. Using large real-world workloads, we demonstrate the applicability of our approach. Further, we evaluate our solution end to end with a commercial database system using a reproducible setup. We show that our solutions are near-optimal for small index selection problems. For larger problems, our strategy outperforms state-of-the-art approaches in both scalability and solution quality.
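A common baseline against which such recursive strategies are measured is a greedy knapsack-style selection: repeatedly pick the candidate index with the best benefit per byte until the memory budget is exhausted. The sketch below is this simple baseline with illustrative numbers, not the paper's algorithm; notably, it ignores index interaction (one index changing another's benefit), which is exactly the limitation the recursive strategy addresses.

```python
def greedy_index_selection(candidates, budget):
    """Greedy baseline for index selection under a memory budget.

    candidates: list of (name, size, benefit) tuples; benefits are assumed
    independent, i.e., index interaction is ignored.
    Returns the names of the chosen indexes.
    """
    chosen, used = [], 0
    # Best benefit-per-byte first.
    for name, size, benefit in sorted(
            candidates, key=lambda c: c[2] / c[1], reverse=True):
        if used + size <= budget:
            chosen.append(name)
            used += size
    return chosen


# Hypothetical candidates as (name, size in GB, workload benefit):
picked = greedy_index_selection(
    [("idx_a", 4, 8), ("idx_b", 2, 6), ("idx_c", 5, 5)], budget=6)
```

With interacting indexes, the benefit of a candidate depends on which others are already chosen, so this ratio-based ordering can be arbitrarily far from optimal, motivating approaches that account for interaction explicitly.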
This article aims to sum up the main results of a research project carried out in 2016 and 2017 on the situation of 1190 Romanian migrants in Western Europe and to give an overview of the push and pull factors, transnational family structures, as well as the challenges and difficulties of the Romanian survey respondents living in Germany, France, the United Kingdom and Italy. It also considers the role of personal networks, which represent an important motor of migration and constitute the main motive for the choice of a certain destination region. These migration networks lead to the construction of transnational social spaces between Romania and the destination country and have a strong influence on the search for housing or jobs, but can also influence the integration process abroad.
For a singularly perturbed parabolic-ODE system we construct the asymptotic expansion in the small parameter for the case in which the degenerate equation has a double root. Such systems, which are called partly dissipative reaction-diffusion systems, are used to model various natural processes, including signal transmission along axons, solid combustion and the kinetics of some chemical reactions. It turns out that the algorithm for constructing the boundary layer functions and the behaviour of the solution in the boundary layers differ essentially from those in the case of a simple root. A multizonal structure of the initial and boundary layers is established.
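Partly dissipative systems of this kind are typically written in a form like the following; this is a representative sketch from the surrounding literature, and the exact system and scaling studied in the paper may differ.

```latex
% Representative partly dissipative system with small parameter \varepsilon:
% a singularly perturbed parabolic equation coupled to an ODE in time.
\begin{aligned}
  \varepsilon^{2}\left(\frac{\partial u}{\partial t}
      - \frac{\partial^{2} u}{\partial x^{2}}\right)
    &= f(u, v, x, t),\\
  \frac{\partial v}{\partial t} &= g(u, v, x, t).
\end{aligned}
% "Double root of the degenerate equation" means f(u,v,x,t) = 0 can be
% written with a squared factor:
f(u, v, x, t) = h(u, v, x, t)\,\bigl(u - \varphi(v, x, t)\bigr)^{2}.
```

The squared factor is what makes the root double: near $u = \varphi$ the right-hand side vanishes to second order, which changes the boundary-layer scaling and the construction of the layer functions relative to the simple-root case, as the abstract notes.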
New Public Governance (NPG) as a paradigm for collaborative forms of public service delivery and Blockchain governance are trending topics for researchers and practitioners alike. Thus far, each topic has, on the whole, been discussed separately. This paper presents the preliminary results of ongoing research which aims to shed light on the more concrete benefits of Blockchain for the purpose of NPG. For the first time, a conceptual analysis is conducted on process level to spot benefits and limitations of Blockchain-based governance. Per process element, Blockchain key characteristics are mapped to functional aspects of NPG from a governance perspective. The preliminary results show that Blockchain offers valuable support for governments seeking methods to effectively coordinate co-producing networks. However, the extent of benefits of Blockchain varies across the process elements. It becomes evident that there is a need for off-chain processes. It is, therefore, argued in favour of intensifying research on off-chain governance processes to better understand the implications for and influences on on-chain governance.
This study explores the theoretical and political potentials of Édouard Glissant’s philosophy of relation and its approach to the issues of borders, migration, and the setup of political communities as proposed by his pensée nouvelle de la frontière (new border thought), against the background of the German migration crisis of 2015. The main argument of this article is that Glissant’s work offers an alternative epistemological and normative framework through which the contemporary political issues arising around the phenomenon of repressive border regimes can be studied. To demonstrate this point, this article works with Glissant’s border thought as an analytical lens and proposes a pathway for studying the contemporary German border regime. Particular emphasis is placed on the identification of potential areas where a Glissantian politics of relation could intervene with the goal of transforming borders from impermeable walls into points of passage. By exploring the political implications of his border thought, as well as the larger philosophical context from which it emerges, while using a transdisciplinary approach that borrows from literary and political studies, this work contributes to ongoing debates in postcolonial studies on borders and borderlessness, as well as Glissant’s political legacy in the twenty-first century.