The German Enlightenment
(2017)
The term Enlightenment (or Aufklärung) remains heavily contested. Even when historians delimit the remit of the concept, assigning it to a particular historical period rather than to an intellectual or moral programme, the public resonance of the Enlightenment remains high and problematic—especially when equated in an essentialist manner with modernity or some core values of ‘the West’. This Forum has been convened to discuss recent research on the Enlightenment in Germany, different views of the term and its ideological use in public discourse outside academia (and sometimes within it).
The selection of initial points, the number of clusters, and the identification of proper cluster centers remain the main challenges in clustering. In this paper, we suggest a genetic algorithm-based method that searches several solution spaces simultaneously. The solution spaces are population groups consisting of elements with a similar structure. Elements in a group have the same size, while elements in different groups are of different sizes. The proposed algorithm processes the population in groups of chromosomes with one gene, two genes, up to k genes. These genes hold the corresponding information about the cluster centers. In the proposed method, the crossover and mutation operators can accept parents of different sizes; this can lead to versatility in the population and information transfer among sub-populations. We implemented the proposed method and evaluated its performance against several random datasets as well as the Ruspini dataset. The experimental results show that the proposed method can effectively determine the appropriate number of clusters and recognize their centers. Overall, this research implies that using a heterogeneous population in the genetic algorithm can lead to better results.
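As a rough illustration of the kind of variable-length operators described above, the following sketch (illustrative only; the function names, parameters, and the representation of cluster centers are assumptions, not the authors' implementation) shows a crossover and a mutation that accept chromosomes of different sizes:

```python
import random

def crossover(parent_a, parent_b):
    """One-point crossover that accepts parents of different lengths.
    Each chromosome is a list of candidate cluster centers, so children
    may end up with a different number of centers than either parent."""
    cut_a = random.randint(1, len(parent_a))
    cut_b = random.randint(1, len(parent_b))
    child_a = parent_a[:cut_a] + parent_b[cut_b:]
    child_b = parent_b[:cut_b] + parent_a[cut_a:]
    return child_a, child_b

def mutate(chromosome, data, rate=0.1):
    """With a small probability, replace one center by a random data point."""
    if random.random() < rate:
        chromosome[random.randrange(len(chromosome))] = random.choice(data)
    return chromosome
```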
Moving Forces
(2017)
Throughout a large part of the twentieth century, the body was interpreted as a field of signs, the meaning of which pointed to an unconscious dimension. At the height of the popularity of structuralism, Jacques Lacan deemed the unconscious to be “structured like a language.” Starting in the early 1990s, however, a deep shift occurred in the way the body was interpreted. A new movement cast tremendous doubt on the hegemony of language and instead advocated performative, pictorial, and affective approaches, all encompassed by the so-called material turn. In the words of Karen Barad, this turn inquired as to why meaning, history, and truth are assigned to language only, whereas the movements of materiality are given less prominence: “How did language come to be more trustworthy than matter? Why are language and culture granted their own agency and historicity while matter is figured as passive and immutable?” With this shift toward the material, bodies began to be seen in a different light, and their materiality came to be understood as something that follows its own laws and movements, which cannot be understood exclusively in terms of socio-cultural codes. Instead, these laws and movements call into question the very dichotomies of nature/culture and body/spirit.
HESS J1826-130
(2017)
HESS J1826-130 is an unidentified hard-spectrum source discovered by H.E.S.S. along the Galactic plane, with a spectral index of Γ = 1.6 and an exponential cut-off at about 12 TeV. While the source does not have a clear counterpart at longer wavelengths, the very hard spectrum of the emission at TeV energies implies that electrons or protons accelerated up to several hundreds of TeV are responsible for the emission. In the hadronic case, the VHE emission can be produced by runaway cosmic rays colliding with the dense molecular clouds spatially coincident with the H.E.S.S. source.
Tailed bacteriophages specific for Gram-negative bacteria encounter lipopolysaccharide (LPS) during the first infection steps. Yet, it is not well understood how the biochemistry of these initial interactions relates to subsequent events that orchestrate phage adsorption and tail rearrangements to initiate cell entry. For many phages, the long O-antigen chains found on the LPS of smooth bacterial strains serve as the essential receptor recognized by their tailspike proteins (TSP). Many TSP are depolymerases, and O-antigen cleavage has been described as a necessary step for subsequent orientation towards a secondary receptor. However, O-antigen-specific host attachment need not always be accompanied by O-antigen degradation. In this issue of Molecular Microbiology, Prokhorov et al. report that coliphage G7C carries a TSP that deacetylates the O-antigen but does not degrade it, whereas rough strains or strains lacking O-antigen acetylation remain unaffected. Bacteriophage G7C specifically functionalizes its tail by attaching the deacetylase TSP directly to a second TSP that is nonfunctional on the host's O-antigen. This challenges the view that bacteriophages use their TSP only to clear their way to a secondary receptor. Rather, O-antigen-specific phages may employ enzymatically active TSP as a tool for irreversible LPS membrane binding to initiate subsequent infection steps.
Since the Shallow Structure Hypothesis (SSH) was first put forward in 2006, it has inspired a growing body of research on grammatical processing in nonnative (L2) speakers. More than 10 years later, we think it is time for the SSH to be reconsidered in the light of new empirical findings and current theoretical assumptions about human language processing. The purpose of our critical commentary is twofold: to clarify some issues regarding the SSH and to sketch possible ways in which this hypothesis might be refined and improved to better account for L1 and L2 speakers’ performance patterns.
Direct visualization of APLP1 cell-cell adhesion platforms via fluorescence fluctuation spectroscopy
(2017)
Preclinical assessment of penetration not only in intact but also in barrier-disrupted skin is important to explore the surplus value of novel drug delivery systems, which can be specifically designed for diseased skin. Here, we characterized physical and chemical barrier disruption protocols for short-term ex vivo skin cultures with regard to structural integrity and physiological and biological parameters. Further, we compared the penetration of dexamethasone (Dex) in different nanoparticle-based formulations in stratum corneum, epidermis and dermis extracts of intact vs. barrier-disrupted skin, as well as by dermal microdialysis at 6, 12 and 24 hours after topical application. Dex was quantified by liquid chromatography-tandem mass spectrometry (LC-MS/MS). Simultaneously, we investigated the Dex efficacy by interleukin (IL) analysis. Tape stripping (TS) and a 4-hour exposure to 5% sodium lauryl sulfate (SLS) were identified as highly effective barrier disruption methods, as assessed by reproducible changes in transepidermal water loss (TEWL) and an IL-6/8 increase that was more pronounced in SLS-treated skin. The barrier state also has a significant impact on Dex penetration kinetics: for all formulations, TS markedly increased the dermal Dex concentration, even though nanocrystals quickly and effectively penetrated both intact and barrier-disrupted skin, reaching significantly higher dermal Dex concentrations after 6 hours compared to Dex cream. The added value of encapsulation in ethyl cellulose nanocarriers was mostly observed when applied on intact skin, generally showing a delayed Dex penetration. The estimation of cytokines was limited due to the trauma caused by probe insertion. In summary, ex vivo human skin is a highly interesting short-term preclinical model for the analysis of the penetration and efficacy of novel drug delivery systems.
The Internet can be considered the most important infrastructure for modern society and businesses. A loss of Internet connectivity has strong negative financial impacts for businesses and economies. Therefore, assessing Internet connectivity, in particular beyond an organization's own premises and area of direct control, is of growing importance in the face of potential failures, accidents, and malicious attacks. This paper presents CORIA, a software framework for the easy analysis of connectivity risks based on large network graphs. It provides researchers, risk analysts, network managers and security consultants with a tool to assess an organization's connectivity and path options through the Internet backbone, including a user-friendly and insightful visual representation of results. CORIA is flexibly extensible in terms of novel data sets, graph metrics, and risk scores that enable further use cases. The performance of CORIA is evaluated by several experiments on the Internet graph and further randomly generated networks.
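As a minimal sketch of graph-metric-based connectivity risk scoring in general (the toy topology and the choice of betweenness centrality are illustrative assumptions, not CORIA's actual data sets or risk scores):

```python
import networkx as nx

# Toy backbone topology; in practice this would be a large Internet graph.
graph = nx.Graph([("AS1", "AS2"), ("AS2", "AS3"), ("AS3", "AS4"),
                  ("AS2", "AS5"), ("AS5", "AS4"), ("AS1", "AS6")])

# Betweenness centrality as a simple connectivity-risk indicator:
# nodes that many shortest paths depend on are single points of failure.
risk_scores = nx.betweenness_centrality(graph, normalized=True)
for node, score in sorted(risk_scores.items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.2f}")
```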
The interview offers a reconstruction of the German reception of Durkheim since the mid-1970s. Hans Joas, who was one of its major protagonists, discusses the backdrop that finally permitted a scholarly examination of Durkheim’s sociology in Germany. Focussing on his personal reception, Joas then gives an account of the Durkheimian themes that inspire his work.
Just after the publication of the Theory of Communicative Action in 1981, a new generation of interpreters started a different reception of Durkheim in Germany. Hans-Peter Müller, sociologist and editor of the German translation of Leçons de sociologie, reconstructs the history of the German reception of Durkheim and illuminates the reasons for his interest in the French sociologist. He offers various insights into the background that permitted the post-Habermasian generation to reach a new understanding of Durkheim’s work, shedding light on the scientific and political conditions from which this new sensibility emerged.
We develop a simple two-zone interpretation of the broadband baseline Crab nebula spectrum between 10^-5 eV and ~100 TeV by using two distinct log-parabola energetic electron distributions. We determine analytically the very-high-energy photon spectrum as originating from inverse-Compton scattering of the far-infrared soft ambient photons within the nebula off a first population of electrons energized at the nebula termination shock. The broad and flat 200 GeV peak jointly observed by Fermi/LAT and MAGIC is naturally reproduced. The synchrotron radiation from a second energetic electron population explains the spectrum from the radio range up to ~10 keV. We infer from observations the energy dependence of the microscopic probability that the accelerating electrons remain in the proximity of the shock.
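For reference, the general log-parabola form of an electron energy distribution used in such modeling can be written as below; the normalizations and curvature parameters of the two populations are those inferred in the paper and are not reproduced here.

```latex
N(E)\,\mathrm{d}E \;\propto\; \left(\frac{E}{E_0}\right)^{-\left[\alpha \,+\, \beta \log_{10}(E/E_0)\right]} \mathrm{d}E
```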
In recent years, “transnationalism” has become a key concept for historians and other scholars in the humanities and social sciences. However, its overuse threatens to dilute what would otherwise be a distinct approach with promising heuristic potential. This danger seems especially pronounced when the notion of transnationalism is applied to Jewish history, which, paradoxically, most scholars would agree, is at its core transnational. Many studies have analyzed how Jewries in different times and places, from the biblical era to the present, have been shaped by people, ideas, texts, and institutions that migrated across state lines and between cultures. So what is new about transnationalism in Jewish Studies? What new insights does it offer?
American Jewry offers an obvious arena to test transnationalism’s significance as an approach to historical research within Jewish studies. As a “nation of nations,” the United States is made up of a distinct and unique society, built on ideas of diversity and pluralism, and transcending old European concepts of nation and state. The transformative incorporation in American life of cultural, political, and social traditions brought from abroad is one feature of this distinctiveness. American Jewish history and culture, in particular, are best understood in the context of interaction with Jews in other places, both because of American Jews’ roots in and continued entanglement with Europe, and because of their differences from other Jews.
These considerations guided the participants in a roundtable that formed a prologue to an international conference held July 20–22, 2016, at the School of Jewish Theology at the University of Potsdam and the Center for Jewish Studies Berlin-Brandenburg, Germany. The conference title, “Re-Framing American Jewish History and Thought: New Transnational Perspectives,” indicated the organizers’ conviction that the transnational approach does have the potential to shed fresh light on the American Jewish experience. The participants were asked to bring their experiences to the table, in an effort to clarify what transnationalism might mean for American Jewish Studies, and where it might yield new approaches and insights.
The conference brought together some thirty scholars of various disciplines from Europe, Israel, and the United States. In addition to exploring a relatively new approach (at least, in the field of American Jewish Studies), the conference also served a second purpose: to further the interest in American Jewry as a subject of scholarly attention in countries outside the U.S., where the topic has been curiously neglected. The assumption underlying the conference was that a transnational perspective on American Jewry would bring to bear the particular interests and skills of scholars working outside the American academy, and thereby complement, rather than replicate, the ways American Jewish Studies have been pursued in North America itself.
The identification of vulnerabilities relies on detailed information about the target infrastructure. Gathering the necessary information is a crucial step that requires intensive scanning or mature expertise and knowledge about the system, even when the information is already available in a different context. In this paper we propose a new method to detect vulnerabilities that reuses existing information and eliminates the need for a comprehensive scan of the target system. Since our approach is able to identify vulnerabilities without the additional effort of a scan, we are able to increase the overall performance of the detection. Because of the reuse and the removal of active testing procedures, our approach can be classified as passive vulnerability detection. We explain the approach and illustrate the additional possibility of increasing users' security awareness. To this end, we applied the approach to an experimental setup and extracted security-relevant information from web logs.
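As an illustrative sketch of this kind of passive information reuse (the log format, regular expressions, and version threshold are assumptions, not the authors' implementation), security-relevant client information can be read directly from existing web server logs:

```python
import re

# Apache "combined" log format ends with "referer" and "user agent" in quotes.
UA_PATTERN = re.compile(r'"[^"]*" "(?P<ua>[^"]*)"\s*$')
FIREFOX = re.compile(r"Firefox/(\d+)")

def outdated_clients(log_path, min_version=60):
    """Return User-Agent strings of clearly outdated Firefox clients found
    in an access log, without actively probing any system."""
    findings = []
    with open(log_path) as log:
        for line in log:
            match = UA_PATTERN.search(line)
            if not match:
                continue
            ff = FIREFOX.search(match.group("ua"))
            if ff and int(ff.group(1)) < min_version:
                findings.append(match.group("ua"))
    return findings
```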
Recently a multitude of empirically derived damage models have been applied to project future tropical cyclone (TC) losses for the United States. In their study, Geiger et al (2016 Environ. Res. Lett. 11 084012) compared two approaches that differ in the scaling of losses with socio-economic drivers: the commonly used approach resulting in a sub-linear scaling of historical TC losses with a nation's affected gross domestic product (GDP), and the disentangled approach that shows a sub-linear increase with affected population and a super-linear scaling of relative losses with per capita income. Statistics cannot determine which approach is preferable, but since process understanding demands that the loss depends on both GDP per capita and population, an approach that accounts for both separately is preferable to one that assumes a specific relation between the two dependencies. In the accompanying comment, Rybski et al argued that there is no rigorous evidence for the conclusion that high income does not protect against hurricane losses. Here we affirm that our conclusion is drawn correctly and reply to further remarks raised in the comment, highlighting the adequacy of our approach but also the potential for future extension of our research.
Editorial
(2017)
In this extended abstract, we analyze the current challenges for the envisioned Self-Adaptive CPS. In addition, we outline our results in approaching these challenges with SMARTSOS [10], a generic approach based on extensions of graph transformation systems that employs open and adaptive collaborations and models at runtime for trustworthy self-adaptation, self-organization, and evolution at the level of the individual systems and of the system-of-systems, taking the independent development, operation, management, and evolution of these systems into account.
Background: Infliximab (IFX), an anti-TNF monoclonal antibody approved for the treatment of inflammatory bowel disease, is dosed per kg body weight (BW). However, the rationale for body size adjustment has not been unequivocally demonstrated [1], and first attempts to improve IFX therapy have been undertaken [2]. The aim of our study was to assess the impact of different dosing strategies (i.e. body size-adjusted and fixed dosing) on drug exposure and pharmacokinetic (PK) target attainment. For this purpose, a comprehensive simulation study was performed, using patient characteristics (n=116) from an in-house clinical database.
Methods: IFX concentration-time profiles of 1000 virtual, clinically representative patients were generated using a previously published PK model for IFX in patients with Crohn's disease [3]. For each patient, 1000 profiles accounting for PK variability were considered. The IFX exposure during maintenance treatment under the following dosing strategies was compared: i) fixed dose, and dosing per ii) BW, iii) lean BW (LBW), iv) body surface area (BSA), v) height (HT), vi) body mass index (BMI) and vii) fat-free mass (FFM). For each dosing strategy, the variability in maximum concentration Cmax, minimum concentration Cmin (= C8weeks) and area under the concentration-time curve (AUC), as well as the percentage of patients achieving the PK target Cmin ≥ 3 μg/mL [4], were assessed.
Results: For all dosing strategies the variability of Cmin (CV ≈ 110%) was highest compared to Cmax and AUC, and was of similar extent regardless of dosing strategy. The proportion of patients reaching the PK target (≈ ⅓) was approximately equal for all dosing strategies.
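A highly simplified sketch of such a dosing-strategy comparison is given below. It uses a one-compartment model with assumed parameter values and variability, not the published population PK model from [3], and contrasts only a fixed dose with BW-based dosing; it illustrates the simulation logic rather than reproducing the study results.

```python
import numpy as np

rng = np.random.default_rng(42)
n_patients = 1000
weight = rng.normal(70, 15, n_patients).clip(40, 130)                          # kg

# Assumed one-compartment parameters with log-normal inter-patient variability.
clearance = 0.012 * (weight / 70) ** 0.75 * rng.lognormal(0, 0.3, n_patients)  # L/h
volume = 5.0 * (weight / 70) * rng.lognormal(0, 0.2, n_patients)               # L

def cmin_8_weeks(dose_mg):
    """Trough concentration (ug/mL) 8 weeks (1344 h) after an IV bolus dose."""
    k_el = clearance / volume
    return dose_mg / volume * np.exp(-k_el * 1344.0)

for label, dose in [("fixed 350 mg", np.full(n_patients, 350.0)),
                    ("5 mg/kg BW  ", 5.0 * weight)]:
    cmin = cmin_8_weeks(dose)
    print(f"{label}: CV {np.std(cmin) / np.mean(cmin):.0%}, "
          f"target attainment (Cmin >= 3 ug/mL): {np.mean(cmin >= 3.0):.0%}")
```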
Lately, first implementation approaches of Internet of Things (IoT) technologies have begun to penetrate industrial value-adding processes. In the course of this, the competence requirements for employees are changing. Employees' organization, process, and interaction competences are of crucial importance in this new IoT environment; however, they are not yet sufficiently considered in student and vocational training. On the other hand, conventional learning factories are evolving and transforming into digital learning factories. Nevertheless, the integration of IoT technology and its usage for training in digital learning factories has been largely neglected thus far. Existing learning factories do not explicitly and properly consider IoT technology, which leads to deficiencies regarding an appropriate development of employees' Industrial IoT competences. The goal of this contribution is to present a didactic concept that enables the development and training of these newly demanded competences by using an IoT laboratory. For this purpose, a design science approach is applied. The result of this contribution is a didactic concept for the development of Industrial IoT competences in an IoT laboratory.
Editorial
(2017)
Photonic sensing in highly concentrated biotechnical processes by photon density wave spectroscopy
(2017)
Photon Density Wave (PDW) spectroscopy is introduced as a new approach for photonic sensing in highly concentrated biotechnical processes. It independently quantifies the absorption and reduced scattering coefficients, calibration-free and as a function of time, thus describing the optical properties of the biomaterial in the vis/NIR range during processing. As examples of industrial relevance, enzymatic milk coagulation, beer mashing, and algae cultivation in photobioreactors are discussed.
New data from the LEADER trial show that the glucagon-like peptide 1 receptor agonist liraglutide protects against diabetic nephropathy in patients with type 2 diabetes mellitus. The renoprotective efficacy of liraglutide is not, however, as great as that reported for the sodium-glucose cotransporter 2 inhibitor empagliflozin in the EMPA-REG OUTCOME trial.
Preclinical studies in cell culture systems as well as in whole-animal chronic kidney disease (CKD) models showed that parathyroid hormone (PTH), when oxidized at its two methionine residues (positions 8 and 18), loses its function. This has so far not been considered in the development of the PTH assays used in current clinical practice. Patients with advanced CKD are subject to oxidative stress, and plasma proteins (including PTH) are targets for oxidants. In patients with CKD, a considerable but variable fraction (about 70 to 90%) of measured PTH appears to be oxidized. Oxidized PTH (oxPTH) does not interact with the PTH receptor, resulting in a loss of biological activity. Currently used intact PTH (iPTH) assays detect both oxPTH and non-oxidized PTH (n-oxPTH). Clinical studies demonstrated that bioactive n-oxPTH, but neither iPTH nor oxPTH, is associated with mortality in CKD patients.
Emergency Care in Germany Being Re-assessed: Hybrid Medical Care Model Seen as Potential Answer
(2017)
Editorial
(2017)
Preface
(2017)
Recently, Kocyan & Wiland-Szymańska (2016) published a thorough research article on one of the outstanding members of the family Hypoxidaceae on the Seychelles, which resulted in the erection of a new genus (Friedmannia Kocyan & Wiland-Szymańska 2016: 60) to accommodate the former Curculigo seychellensis Bojer ex Baker (1877: 368). However, it has turned out that the name Friedmannia Chantanachat & Bold (1962: 45) already exists in the literature for a green alga, which renders the new hypoxid genus illegitimate (Melbourne Code; McNeill et al. 2012). Therefore, we assign a new generic name to Curculigo seychellensis.
Eighteen scientists met at Jurata, Poland, to discuss various aspects of the transition from adolescence to adulthood. This transition is a delicate period involving complex interactions between adolescents and the social group they belong to. Social identity, group identification and identity signalling, but also stress affecting basal salivary cortisol rhythms, hypertension, and inappropriate nutrition causing latent and manifest obesity, as well as, in developing and under-developed countries, parasitosis causing anaemia and thereby impairing growth and development, are issues to be dealt with during this period of human development. In addition, some new aspects of the association between weight, height and head circumference in newborns were discussed, as well as intrauterine head growth and head circumference as health risk indicators.
It has been observationally established that the winds of hot massive stars have highly variable characteristics. The variability evident in the winds is believed to be caused by structures on a broad range of spatial scales. Small-scale structures (clumping) in the stellar winds of hot stars are a possible consequence of an instability arising in their radiation hydrodynamics. To understand how clumping may influence the calculation of theoretical spectra, different clumping properties and their 3D nature have to be taken into account. The properties of clumping have been examined using our 3D radiative transfer calculations. Effects of clumping for the case of the B[e] phenomenon are discussed.
During their evolution, massive stars are characterized by a significant loss of mass, either via spherically symmetric stellar winds or by aspherical mass-loss mechanisms, namely outflowing equatorial disks. However, the scenario that leads to the formation of a disk or rings of gas and dust around these objects is still under debate. Is it a viscous disk, an outflowing disk-forming wind, or some other mechanism? It is also unclear how the various physical mechanisms acting on the circumstellar environment of the stars affect its shape, density, kinematics, and thermal structure. We assume that the disk-forming mechanism is viscous transport within an equatorial outflowing disk of a rapidly or even critically rotating star. We study the hydrodynamic and thermal structure of the optically thick dense parts of outflowing circumstellar disks that may form around, e.g., Be stars, sgB[e] stars, or Pop III stars. We calculate self-consistent, time-dependent models of the inner dense region of the disk, which is strongly affected both by irradiation from the central star and by contributions of viscous heating. We also simulate the dynamical effects of collisions between the expanding ejecta of supernovae and the circumstellar disks that may form around sgB[e] stars and, e.g., LBVs or Pop III stars.
Dissolved CO2 storage in geological formations with low pressure, low risk and large capacities
(2017)
Geological CO2 storage is a mitigation technology to reduce CO2 emissions from fossil fuel combustion. However, major concerns are the pressure increase and saltwater displacement in the mainly targeted deep groundwater aquifers due to injection of supercritical CO2. The suggested solution is storage of CO2 exclusively in the dissolved state. In our exemplary regional case study of the North East German Basin based on a highly resolved temperature and pressure distribution model and a newly developed reactive transport coupling, we have quantified that 4.7 Gt of CO2 can be stored in solution compared to 1.5 Gt in the supercritical state.
Integration and development of the energy supply in China and worldwide is a challenge for the years to come. The innovative idea presented here is based on an extension of the “power-to-gas-to-power” technology by establishing a closed carbon cycle. It is an implementation of a low-carbon energy system based on carbon dioxide capture and storage (CCS) to store and reuse wind and solar energy. The Chenjiacun storage project in China compares well with the German case study for the towns Potsdam and Brandenburg/Havel in the Federal State of Brandenburg based on the Ketzin pilot site for CCS.
Water management tools are necessary to guarantee the preservation of natural resources while ensuring optimum utilization. Linear regression models are a simple and quick solution for creating prognostic capabilities. Multivariate models show higher precision than univariate models. In the case of Waiwera, using the individual production rates is more accurate than applying just the total production rate. A maximum of approximately 1,075 m3/day can be pumped to ensure a water level of at least 0.5 m a.s.l. in the monitoring well. The model should be updated annually to incorporate new data and current water level trends in order to maintain its quality.
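A minimal sketch of such a multivariate versus univariate regression setup is shown below; the data are synthetic and the well count, rates, and coefficients are assumptions for illustration, not the Waiwera monitoring records.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
rates = rng.uniform(0, 400, size=(365, 4))                 # m^3/day per well
level = (1.2 - rates @ np.array([8e-4, 6e-4, 9e-4, 5e-4])  # m a.s.l.
         + rng.normal(0, 0.02, 365))

# Multivariate model: individual production rates as predictors.
multivariate = LinearRegression().fit(rates, level)
# Univariate model: total production rate only.
total = rates.sum(axis=1, keepdims=True)
univariate = LinearRegression().fit(total, level)

print("R^2 multivariate:", multivariate.score(rates, level))
print("R^2 univariate:  ", univariate.score(total, level))
```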
The letter to the editor focuses largely on the peer review system, which is why we address only the substantive points relating to our work. We examined pain-psychological interventions, defined as described, i.e. psychological interventions whose primary aim was pain reduction.
The extracted outcome measures, such as quality of life or depressive symptoms, were derived from the main outcomes examined in the primary studies and not from the search strategy.
For the assessment of the methodological quality of the primary studies, one criterion of the score developed by Johannsen and colleagues [2] could not be taken into account, because the included primary studies did not permit a meta-analytic synthesis. Taking this into account, the comparability of the two values is preserved.
The evidence synthesis was carried out narratively in text and table form, i.e. as a structured summary and discussion of the studies [1].
In order to sharpen the focus of our work, we would of course have undertaken a more extensive comparison, as well as a review of quotations and translations, had we received this feedback before publication.
Hulleman & Olivers' (H&O's) model introduces variation of the functional visual field (FVF) for explaining visual search behavior. Our research shows how the FVF can be studied using gaze-contingent displays and how FVF variation can be implemented in models of gaze control. Contrary to H&O, we believe that fixation duration is an important factor when modeling visual search behavior.
Mixed-projection treemaps
(2017)
This paper presents a novel technique for combining 2D and 2.5D treemaps using multi-perspective views to leverage the advantages of both treemap types. It enables a new form of overview+detail visualization for tree-structured data and contributes new concepts for real-time rendering of and interaction with treemaps. The technique operates by tilting the graphical elements representing inner nodes using affine transformations and animated state transitions. We explain how to mix orthogonal and perspective projections within a single treemap. Finally, we show application examples that benefit from the reduced interaction overhead.
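To make the core geometric idea concrete, the following sketch (an assumed, simplified illustration using an orthographic projection only; the paper's actual rendering mixes orthogonal and perspective projections) tilts the rectangle of an inner node about its lower edge with an affine rotation:

```python
import numpy as np

def tilt_rect(corners_xy, angle_deg):
    """Rotate a flat treemap rectangle (4 corners, x/y in the treemap plane)
    about its lower edge by angle_deg, then project back to 2D by simply
    dropping the z coordinate (orthographic projection)."""
    theta = np.radians(angle_deg)
    rot_x = np.array([[1.0, 0.0, 0.0],
                      [0.0, np.cos(theta), -np.sin(theta)],
                      [0.0, np.sin(theta),  np.cos(theta)]])
    pivot_y = corners_xy[:, 1].min()                 # lower edge = rotation axis
    pts = np.column_stack([corners_xy[:, 0],
                           corners_xy[:, 1] - pivot_y,
                           np.zeros(len(corners_xy))])
    tilted = pts @ rot_x.T
    return np.column_stack([tilted[:, 0], tilted[:, 1] + pivot_y])

# Example: tilt a unit rectangle by 60 degrees.
rect = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
print(tilt_rect(rect, 60.0))
```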
As virtualization drives the automation of networking, the validation of security properties becomes more and more challenging, eventually ruling out manual inspection. While formal verification in Software Defined Networks is provided by comprehensive tools with high-speed reverification capabilities, such as NetPlumber, the presence of middlebox functionality such as firewalls is not considered. These tools also lack the ability to handle dynamic protocol elements like IPv6 extension header chains. In this work, we provide suitable modeling abstractions to enable both the inclusion of firewalls and of dynamic protocol elements. We exemplarily model the Linux ip6tables/netfilter packet filter and also provide abstractions for an application layer gateway. Finally, we present a prototype of our formal verification system FaVe.
Structural health monitoring (SHM) activities are of primary importance for managing transport infrastructure; however, most SHM methodologies are based on point-based sensors that have limitations in terms of their spatial positioning requirements, cost of development and measurement range. This paper describes the progress of the SENSKIN EC project, whose objective is to develop a dielectric-elastomer and micro-electronics-based sensor, formed from a large, highly extensible capacitance-sensing membrane supported by advanced microelectronic circuitry, for monitoring transport infrastructure bridges. Such a sensor could provide spatial measurements of strain in excess of 10%. The actual sensor along with the data acquisition module, the communication module and the power electronics are all integrated into a compact unit, the SENSKIN device, which is energy-efficient, requires simple signal processing and is easy to install over various surface types. In terms of communication, SENSKIN devices interact with each other to form the SENSKIN system: a fully distributed and autonomous wireless sensor network that is able to self-monitor. The SENSKIN system utilizes Delay-/Disruption-Tolerant Networking technologies to ensure that the strain measurements will be received by the base station even under extreme conditions where normal communications are disrupted. This paper describes the architecture of the SENSKIN system and the development and testing of the first SENSKIN prototype sensor, the data acquisition system, and the communication system.
We compare Visual Berrypicking, an interactive approach allowing users to explore large and highly faceted information spaces using similarity-based two-dimensional maps, with traditional browsing techniques. For large datasets, current projection methods used to generate map-like overviews suffer from increased computational costs and a loss of accuracy, resulting in inconsistent visualizations. We propose to interactively align inexpensive small maps, showing local neighborhoods only, which ideally creates the impression of panning a large map. For evaluation, we designed a web-based prototype for movie exploration and compared it to the web interface of The Movie Database (TMDb) in an online user study. Results suggest that users are able to effectively explore large movie collections by hopping from one neighborhood to the next. Additionally, due to the projection of movie similarities, interesting links between movies can be found more easily, and thus, compared to browsing, serendipitous discoveries are more likely.
Influenza virus vRNPs: quantitative investigations via fluorescence cross-correlation spectroscopy
(2017)
Embedded smart home
(2017)
The popularity of MOOCs has increased considerably in recent years. A typical MOOC consists of video content, self-tests after each video, and homework, which is normally in multiple-choice format. After solving these homework assignments for every week of a MOOC, the final exam certificate can be issued once the student has reached a sufficient score. There are also some attempts to include practical tasks, such as programming, in MOOCs for grading. Nevertheless, until now there has been no known way to teach embedded systems programming in a MOOC where the programming can be done in a remote lab and where grading of the tasks is additionally possible. This embedded programming includes communication over GPIO pins to control LEDs and measure sensor values. We started a MOOC called "Embedded Smart Home" as a pilot to prove the concept of teaching real hardware programming in a MOOC environment under real-life MOOC conditions with over 6000 students. Furthermore, students with their own real hardware also have the possibility to program on it and grade their results in the MOOC course. Finally, we evaluate our approach and analyze the students' acceptance of this way of offering a course on embedded programming. We also analyze the hardware usage and working time of students solving tasks to find out whether real hardware programming is an advantage and a motivating achievement that supports students' learning success.
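A minimal sketch of the kind of GPIO task such a course involves is shown below; the pin number, wiring, and use of the RPi.GPIO library are assumptions for illustration and not the course's remote-lab setup.

```python
import time
import RPi.GPIO as GPIO

LED_PIN = 18                      # hypothetical BCM pin with an LED attached

GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)
try:
    # Blink the LED ten times at 1 Hz.
    for _ in range(10):
        GPIO.output(LED_PIN, GPIO.HIGH)
        time.sleep(0.5)
        GPIO.output(LED_PIN, GPIO.LOW)
        time.sleep(0.5)
finally:
    GPIO.cleanup()                # release the pins in any case
```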
Gaussianity Fair
(2017)
This special issue is the result of several fruitful conference sessions on disturbance hydrology, which started at the 2013 AGU Fall Meeting in San Francisco and have continued every year since. The stimulating presentations and discussions surrounding those sessions have focused on understanding both the disruption of hydrologic functioning following discrete disturbances and the subsequent recovery or change within the affected watershed system. Whereas some hydrologic disturbances are directly linked to anthropogenic activities, such as resource extraction, the contributions to this special issue focus primarily on those with indirect or less pronounced human involvement, such as bark-beetle infestation, wildfire, and other natural hazards. However, human activities are enhancing the severity and frequency of these seemingly natural disturbances, thereby contributing to acute hydrologic problems and hazards. Major research challenges for our increasingly disturbed planet include the lack of continuous pre- and post-disturbance monitoring, hydrologic impacts that vary spatially and temporally based on environmental and hydroclimatic conditions, and the preponderance of overlapping or compounding disturbance sequences. In addition, a conceptual framework for characterizing commonalities and differences among hydrologic disturbances is still in its infancy. In this introduction to the special issue, we advance the fusion of concepts and terminology from ecology and hydrology to begin filling this gap. We briefly explore some preliminary approaches for comparing different disturbances and their hydrologic impacts, which provides a starting point for further dialogue and research progress.
This paper describes architectural extensions for a dynamically scheduled processor so that it can be used in three different operation modes, ranging from high performance to high reliability. With minor hardware extensions of the control path, the resources of the superscalar data path can be used either for high-performance execution, fail-safe operation, or fault-tolerant operation. This makes the processor architecture a very good candidate for applications with dynamically changing reliability requirements, e.g. automotive applications. The paper reports the hardware overhead of the extensions and investigates the performance penalties introduced by the fail-safe and fault-tolerant modes. Furthermore, a comprehensive fault simulation was carried out in order to investigate the fault coverage of the proposed approach.
Handling manufacturing and aging faults with software-based techniques in tiny embedded systems
(2017)
Non-volatile memory occupies a large portion of the chip area in an embedded system. Such memories are prone to manufacturing faults, retention faults, and aging faults. The paper presents a single software-based technique that allows all of these fault types to be handled in tiny embedded systems without the need for hardware support. This is beneficial for low-cost embedded systems with simple memory architectures. A software infrastructure and a flow are presented that demonstrate how the technique is used in general for fault handling right after manufacturing and in the field. Moreover, a full implementation is presented for an MSP430 microcontroller, along with a discussion of the performance, overhead, and reliability impacts.
Nanocarriers
(2017)
The design of embedded systems is becoming increasingly complex, such that efficient system-level design methods are becoming crucial. Recently, combined Answer Set Programming (ASP) and Quantifier-Free Integer Difference Logic (QF-IDL) solving has been shown to be a promising approach to system synthesis. However, this approach still has several restrictions limiting its applicability. In the paper at hand, we propose a novel ASP modulo Theories (ASPmT) system synthesis approach, which (i) supports more sophisticated system models, (ii) tightly integrates the QF-IDL solving into the ASP solving, and (iii) makes use of partial assignment checking. As a result, more realistic systems are considered, and an early exclusion of infeasible solutions improves the entire system synthesis.
Surface acoustic wave (SAW) devices are well known for gravimetric sensor applications. In biosensing applications, chemically and biochemically evoked adsorption processes at surfaces are detected in liquid environments using delay-line or resonator sensor configurations, preferably in combination with appropriate microfluidic devices. In this paper, a novel SAW-based impedance sensor type is introduced which uses only one interdigital electrode transducer (IDT) simultaneously as SAW generator and sensor element. It is shown that the amplitude of the reflected S11 signal directly depends on the input impedance of the SAW device. The input impedance is strongly influenced by mass adsorption, which causes a characteristic and measurable impedance mismatch.
Gamma-ray bursts (GRBs) are some of the Universe’s most enigmatic and exotic events. However, at energies above 10 GeV their behaviour remains largely unknown. Although space-based telescopes such as the Fermi-LAT have been able to detect GRBs in this energy range, their photon statistics are limited by the small detector size. Such limitations are not present in ground-based gamma-ray telescopes such as the H.E.S.S. experiment, which has now entered its second phase with the addition of a large 600 m2 telescope at the centre of the array. Such a large telescope allows H.E.S.S. to access the sub-100 GeV energy range while still maintaining a large effective collection area, helping to potentially probe the short-timescale emission of these events.
We present a description of the H.E.S.S. GRB observation programme, summarising the performance of the rapid GRB repointing system and the conditions under which GRB observations are initiated. Additionally we will report on the GRB follow-ups made during the 2014-15 observation campaigns.
Cost models play an important role in the efficient implementation of software systems. These models can be embedded in operating systems and execution environments to optimize execution at run time. Even though non-uniform memory access (NUMA) architectures dominate today's server landscape, there is still a lack of parallel cost models that represent NUMA systems sufficiently. Therefore, the existing NUMA models are analyzed, and a two-step performance assessment strategy is proposed that incorporates low-level hardware counters as performance indicators. To support the two-step strategy, multiple tools are developed, all accumulating and enriching specific hardware event counter information, to explore, measure, and visualize these low-overhead performance indicators. The tools are showcased and discussed alongside specific experiments in the realm of performance assessment.
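As a rough sketch of how such low-level hardware counters can be collected with little overhead (the event names, the use of `perf stat`, and the example workload are assumptions for illustration, not the tools developed in the paper):

```python
import subprocess

def numa_counters(command):
    """Run a command under `perf stat` and return the NUMA-related counter
    lines; perf prints CSV-style counter output to stderr when -x is used."""
    events = "node-loads,node-load-misses"
    result = subprocess.run(
        ["perf", "stat", "-e", events, "-x", ",", "--"] + command,
        capture_output=True, text=True, check=False)
    return [line.split(",") for line in result.stderr.splitlines() if line.strip()]

# Example: measure a trivial workload.
for fields in numa_counters(["sleep", "1"]):
    print(fields)
```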
DPP4 inhibition prevents AKI
(2017)
Web-based E-Learning uses Internet technologies and digital media to deliver educational content to learners. In recent years, many universities have applied their capacity to producing Massive Open Online Courses (MOOCs). They have been offering MOOCs with the expectation of rendering a comprehensive online apprenticeship. Typically, an online content delivery process requires an Internet connection. However, broadband access has never been a readily available resource in many regions. In Africa, poor or absent networks are still what Internet users predominantly experience, and users frequently go offline whenever a digital device disconnects from the network. As a result, learning processes in such regions are constantly disrupted, delayed, or terminated. This paper raises the concern of E-Learning under poor and low-bandwidth conditions; in particular, it highlights the need for an offline-enabled mode. The paper also explores technical approaches aimed at enhancing the user experience in Web-based E-Learning, particularly in Africa.
Root infinitives on Twitter
(2017)
In this paper, the applicability of deep downhole geoelectrical monitoring for detecting CO2-related signatures is evaluated after a nearly ten-year period of CO2 storage at the Ketzin pilot site. Deep downhole electrode arrays have so far been studied as part of a multi-physical monitoring concept at four CO2 pilot test sites worldwide. For these sites, it was considered important to incorporate the geoelectrical method into the measurement program for tracking the CO2 plume. Analyzing the example of the Ketzin site, it can be seen that during all phases of the CO2 storage reservoir development the resistivity measurements and their corresponding tomographic interpretation contribute in a beneficial manner to the measurement, monitoring and verification (MMV) protocol. The most important impact of a permanent electrode array is its potential as a tool for estimating reservoir saturations.
E-commerce marketplaces are highly dynamic, with constant competition. While this competition is challenging for many merchants, it also provides plenty of opportunities, e.g., by allowing them to automatically adjust prices in order to react to changing market situations. For practitioners, however, testing automated pricing strategies is time-consuming and potentially hazardous when done in production. Researchers, on the other hand, struggle to study how pricing strategies interact under heavy competition. As a consequence, we built Price Wars, an open continuous-time framework to simulate dynamic pricing competition. Its microservice-based architecture provides a scalable platform for large competitions with dozens of merchants and a large random stream of consumers. Our platform stores each event in a distributed log. This allows different performance measures to be provided, enabling users to compare the profit and revenue of various repricing strategies in real time. For researchers, price trajectories are shown, which eases evaluating the mutual price reactions of competing strategies. Furthermore, merchants can access historical marketplace data and apply machine learning. By providing a set of customizable, artificial merchants, users can easily simulate both simple rule-based strategies and sophisticated data-driven strategies using demand learning to optimize their pricing strategies.
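The kind of simple rule-based strategy such artificial merchants might implement is sketched below; the function name, parameters, and data structures are illustrative assumptions and not the Price Wars API.

```python
def reprice(own_cost, competitor_prices, undercut=0.05, min_margin=0.10):
    """Undercut the cheapest competitor by a fixed step, but never price
    below a minimum margin over our own cost."""
    floor = own_cost * (1.0 + min_margin)
    if not competitor_prices:
        return round(floor * 1.5, 2)      # no competition: apply a markup
    target = min(competitor_prices) - undercut
    return round(max(target, floor), 2)

# Example: competitors at 12.99, 11.49 and 13.20, own unit cost 10.00.
print(reprice(own_cost=10.0, competitor_prices=[12.99, 11.49, 13.20]))
```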