We image the spatio-temporal evolution of three recent tsunamigenic earthquakes (TsE): on 26/12/2004 (Mw9.3) off-coast N-Sumatra, on 28/03/2005 (Mw8.5) off-coast Nias, and on 17/07/2006 (Mw7.7) off-coast Java. Start time, duration, and propagation of the rupture are retrieved. All parameters can be obtained rapidly after recording of the first-arrival phases in near-real-time processing. We exploit semblance analysis, backpropagation, and broad-band seismograms within 30°-95° distance. Image enhancement is achieved by stacking the semblance of arrays in different directions. For the three events, the rupture extends over about 1150, 150, and 200 km, respectively. The events in 2004, 2005, and 2006 had source durations of at least 480 s, 120 s, and 180 s, respectively. We observe unilateral rupture propagation for all events except for the rupture onset and the Nias event, where there is evidence for a bilateral start of the rupture. Whereas the average rupture speed of the 2004 and 2005 events is on the order of the S-wave speed (≈2.5-3 km/s), unusually slow rupturing (≈1.5 km/s) is indicated for the July 2006 event. For the July 2006 event we find rupturing of a 200 x 100 km wide area in at least two phases, with propagation from NW to SE. The event has some characteristics of a circular rupture followed by unilateral faulting with a change in slip rate. Fault area and aftershock distribution coincide. Spatial and temporal resolution are frequency dependent. Studies of a Mw6.0 earthquake on 2006/09/21 and one synthetic source show a ≈1° limit in resolution. Retrieved source area, source duration, and peak values for semblance and beam power generally increase with the size of the earthquake, making an automatic detection and classification of large and small earthquakes possible.
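For reference, the semblance used in such array analyses is commonly defined as the ratio of the energy of the delay-and-sum beam to the summed energy of the individual traces (a standard definition; the exact variant used is not given in the abstract):

\[ S(t_0) = \frac{\sum_{t=t_0}^{t_0+T} \Bigl( \sum_{i=1}^{N} f_i(t+\tau_i) \Bigr)^{2}}{N \sum_{t=t_0}^{t_0+T} \sum_{i=1}^{N} f_i(t+\tau_i)^{2}} \]

where f_i denotes the seismogram at station i, \tau_i the travel-time shift for the tested source location, N the number of stations, and T the length of the analysis window; S ranges from 1/N for incoherent noise to 1 for perfectly coherent arrivals.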
We use seismic array methods (semblance analysis) to image areas of seismic energy release in the Sunda Arc region and world-wide. Broadband seismograms at teleseismic distances (30° ≤ Δ ≤ 100°) are compared at several subarrays. Semblance maps of different subarrays are multiplied. High semblance tracked over long times (tens of seconds to minutes) and long distances indicates the locations of earthquakes. The method allows resolution of rupture characteristics important for tsunami early warning: start and duration, velocity and direction, length and area. The method has been successfully applied to recent and historic events (M>6.5) and is now operational in real time. Results are obtained shortly after source time (see http://www.geo.uni-potsdam.de/Forschung/Geophysik/GITEWS/tsunami.htm). Manual and automatic processing are in good agreement. Computational effort is small. Automatic results may be obtained within 15-20 minutes after event occurrence.
We study the rupture propagation of the 2008/05/12 Ms8.0 Wenchuan earthquake. We apply array techniques such as semblance vespagram analysis to P waves recorded at seismic broadband stations within 30-100° epicentral distance. By combining multiple large-aperture station groups, spatial and temporal resolution is enhanced and problems due to source directivity and source mechanism are avoided. We find that seismic energy was released for at least 110 s. Propagating unilaterally at a sub-shear rupture velocity of about 2.5 km/s in NE direction, the earthquake reached a lateral extent of more than 300 km. Whereas high semblance within 70 s from rupture start indicates simple propagation, more complex source processes are indicated thereafter by decreased coherency in the seismograms. At this stage of the event, coherency is low but significantly above noise level. We emphasize that the first results of our computations were obtained within 30 minutes after source time by using an automated algorithm. This procedure has been routinely and globally applied to major earthquakes. Results are made public through the internet.
The most recent intense earthquake swarm in the Vogtland lasted from 6 October 2008 until January 2009. The largest magnitudes exceeded M3.5 several times in October, making it the strongest swarm since 1985/86. In contrast to the swarms in 1985 and 2000, seismic moment release was concentrated near the swarm onset. Focal area and temporal evolution are similar to the swarm in 2000. Working hypothesis: uprising upper-mantle fluids trigger swarm earthquakes at low stress levels. To monitor the seismicity, the University of Potsdam operated a small-aperture seismic array at 10 km epicentral distance between 18 October 2008 and 18 March 2009. Consisting of 12 seismic stations and 3 additional microphones, the array is capable of detecting earthquakes from large to very low magnitudes (M<-1) as well as associated air waves. We use array techniques to determine properties of the incoming wavefield: noise, direct P and S waves, and converted phases.
The electret state stability in nonpolar semicrystalline polymers is largely determined by the traps located at crystalline/amorphous phase interfaces. Thus, the thermal history of such polymers should considerably influence their electret properties. In the present work, we investigate how recrystallization influences charge stability in low-density polyethylene corona electrets. It has been found that electret charge stability in quenched samples is higher than in slowly crystallized ones. Phenomenologically, this can be explained by the increased number of deeper traps in samples with smaller crystallite size.
Precision fruticulture addresses site- or tree-adapted crop management. In the present study, soil and tree status, as well as fruit quality at harvest, were analysed in a commercial apple (Malus × domestica 'Gala Brookfield'/Pajam1) orchard in a temperate climate. Trees were irrigated in addition to precipitation. Three irrigation levels (0, 50 and 100%) were applied. Measurements included readings of apparent electrical conductivity of soil (ECa), stem water potential, canopy temperature obtained by infrared camera, and canopy volume estimated by LiDAR and RGB colour imaging. Laboratory analyses of 6 trees per treatment were done on fruit, considering pigment contents and quality parameters. Midday stem water potential (SWP), the normalized crop water stress index (CWSI) calculated from thermal data, and fruit yield and quality at harvest were analysed. Spatial patterns of the variability of tree water status were estimated by CWSI imaging supported by SWP readings. CWSI ranged from 0.1 to 0.7, indicating high variability due to irrigation and precipitation. Canopy volume data were less variable. Soil ECa appeared homogeneous in the range of 0 to 4 mS m⁻¹. Fruit harvested in a drought stress zone showed an enhanced portion of pheophytin in the chlorophyll pool. Irrigation affected soluble solids content and, hence, the quality of fruit. Overall, the results highlighted that spatial variation in orchards can be found even if marginal variability of soil properties can be assumed.
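A widely used formulation of the CWSI from canopy temperature (given here for context; the abstract does not state which exact variant was applied) is

\[ \mathrm{CWSI} = \frac{T_{\mathrm{canopy}} - T_{\mathrm{wet}}}{T_{\mathrm{dry}} - T_{\mathrm{wet}}}, \]

where T_wet and T_dry are the temperatures of a fully transpiring and a non-transpiring reference surface, respectively, so that values near 0 indicate well-watered trees and values near 1 indicate severe water stress.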
SpringFit
(2019)
Joints are crucial to laser cutting as they allow making three-dimensional objects; mounts are crucial because they allow embedding technical components, such as motors. Unfortunately, mounts and joints tend to fail when trying to fabricate a model on a different laser cutter or from a different material. The reason for this lies in the way mounts and joints hold objects in place, which is by forcing them into slightly smaller openings. Such "press fit" mechanisms unfortunately are susceptible to the small changes in diameter that occur when switching to a machine that removes more or less material ("kerf"), as well as to changes in stiffness, as they occur when switching to a different material. We present a software tool called springFit that resolves this problem by replacing the problematic press fit-based mounts and joints with what we call cantilever-based mounts and joints. A cantilever spring is simply a long thin piece of material that pushes against the object to be held. Unlike press fits, cantilever springs are robust against variations in kerf and material; they can even handle very high variations, simply by using longer springs. SpringFit converts models in the form of 2D cutting plans by replacing all contained mounts, notch joints, finger joints, and t-joints. In our technical evaluation, we used springFit to convert 14 models downloaded from the web.
From victims to activists
(2022)
The politics of fear
(2022)
Background:
Inflammatory bowel disease (IBD) represents a dysregulation of the mucosal immune system. The pathogenesis of Crohn’s disease (CD) and ulcerative colitis (UC) is linked to the loss of intestinal tolerance and barrier function. The healthy mucosal immune system has previously been shown to be inert against food antigens. Since the small intestine is the main contact surface for antigens and therefore for the immunological response, the present study analysed food-antigen-specific T cells in the peripheral blood of IBD patients.
Methods:
Peripheral blood mononuclear cells of CD, with an affected small intestine, and UC (colitis) patients, either active or in remission, were stimulated with the following food antigens: gluten, soybean, peanut and ovalbumin. Healthy controls and celiac disease patients were included as controls. Antigen-activated CD4+ T cells in the peripheral blood were analysed by a magnetic enrichment of CD154+ effector T cells and a cytometric antigen-reactive T-cell analysis (‘ARTE’ technology) followed by characterisation of the effector response.
Results:
The effector T-cell response of antigen-specific T cells was compared between CD with small intestinal inflammation and UC, where inflammation was restricted to the colon. Among all tested food antigens, the highest frequency of antigen-specific T cells (CD4+CD154+) was found for gluten. Celiac disease patients were included as a control, since gluten has been identified as the disease-causing antigen. The highest frequency of gluten antigen-specific T cells was revealed in active CD when compared with UC, celiac disease on a gluten-free diet (GFD) and healthy controls. Ovalbumin-specific T cells were almost undetectable, whereas the reaction to soybean and peanut was slightly higher. But again, the strongest reaction was observed in CD with small intestinal involvement compared with UC. Remarkably, in celiac disease on a GFD only antigen-specific cells for gluten were detected. These gluten-specific T cells were characterised by up-regulation of the pro-inflammatory cytokines IFN-γ, IL-17A and TNF-α. IFN-γ was exclusively elevated in CD patients with active disease. Gluten-specific T cells expressing IL-17A were increased in all IBD patients. Furthermore, T cells of CD patients, independent of disease activity, revealed a high expression of the pro-inflammatory cytokine TNF-α.
Conclusion:
The ‘ARTE’ technique allows the analysis and quantification of food-antigen-specific T cells in the peripheral blood of IBD patients, indicating a potential therapeutic insight. These data provide evidence that small intestinal inflammation in CD is key for the development of a systemic pro-inflammatory effector T-cell response driven by food antigens.
Comparative text mining extends from genre analysis and political bias detection to the revelation of cultural and geographic differences, through to the search for prior art across patents and scientific papers. These applications use cross-collection topic modeling for the exploration, clustering, and comparison of large sets of documents, such as digital libraries. However, topic modeling on documents from different collections is challenging because of domain-specific vocabulary. We present a cross-collection topic model combined with automatic domain term extraction and phrase segmentation. This model distinguishes collection-specific and collection-independent words based on information entropy and reveals commonalities and differences of multiple text collections. We evaluate our model on patents, scientific papers, newspaper articles, forum posts, and Wikipedia articles. In comparison to state-of-the-art cross-collection topic modeling, our model achieves up to 13% higher topic coherence, up to 4% lower perplexity, and up to 31% higher document classification accuracy. More importantly, our approach is the first topic model that ensures disjunct general and specific word distributions, resulting in clear-cut topic representations.
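As a rough illustration of how information entropy can separate collection-specific from collection-independent words (a minimal sketch; variable names and example counts are assumptions, not taken from the paper):

import math

def collection_entropy(word_counts):
    """Entropy of a word's distribution over collections.

    word_counts: occurrence counts of one word, one entry per collection.
    High entropy -> the word is spread evenly over collections (collection-independent);
    low entropy -> the word is concentrated in few collections (collection-specific).
    """
    total = sum(word_counts)
    if total == 0:
        return 0.0
    probs = [c / total for c in word_counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

# Example: a general term appears in all collections, a domain term mostly in one.
print(collection_entropy([120, 110, 130]))  # close to log2(3) ~ 1.58 -> general word
print(collection_entropy([200, 3, 1]))      # close to 0 -> collection-specific word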
Leveraging spatio-temporal soccer data to define a graphical query language for game recordings
(2019)
For professional soccer clubs, performance and video analysis are an integral part of the preparation and post-processing of games. Coaches, scouts, and video analysts extract information about strengths and weaknesses of their team as well as opponents by manually analyzing video recordings of past games. Since video recordings are an unstructured data source, it is a complex and time-intensive task to find specific game situations and identify similar patterns. In this paper, we present a novel approach to detect patterns and situations (e.g., playmaking and ball passing of midfielders) based on trajectory data. The application uses the metaphor of a tactic board to offer a graphical query language. With this interactive tactic board, the user can model a game situation or mark a specific situation in the video recording for which all matching occurrences in various games are immediately displayed, and the user can directly jump to the corresponding game scene. Through the additional visualization of key performance indicators (e.g., the physical load of the players), the user can get a better overall assessment of situations. With the capabilities to find specific game situations and complex patterns in video recordings, the interactive tactic board serves as a useful tool to improve the video analysis process of professional sports teams.
Rapid advances in location-acquisition technologies have led to large amounts of trajectory data. This data is the foundation for a broad spectrum of services driven and improved by trajectory data mining. However, for hybrid transactional and analytical workloads, the storing and processing of rapidly accumulated trajectory data is a non-trivial task. In this paper, we present a detailed survey of state-of-the-art trajectory data management systems. To determine the relevant aspects and requirements for such systems, we developed a trajectory data mining framework, which summarizes the different steps in the trajectory data mining process. Based on the derived requirements, we analyze different concepts to store, compress, index, and process spatio-temporal data. There are various trajectory management systems, which are optimized for scalability, data footprint reduction, elasticity, or query performance. To get a comprehensive overview, we describe and compare different existing systems. Additionally, the observed similarities in the general structure of different systems are consolidated in a general blueprint of trajectory management systems.
This study examined the relationships between the three phenotypic domains of the triarchic model of psychopathy (boldness, meanness, disinhibition) and electrophysiological indices of inhibitory control (NoGo-N2/NoGo-P3). EEG data from a 256-channel dense array were recorded while participants (135 undergraduates assessed via the Triarchic Psychopathy Measure) performed a Go/NoGo task with three types of stimuli (60% frequent-Go, 20% infrequent-Go, 20% infrequent-NoGo). N2 was defined as the mean amplitude between 240 ms and 340 ms after stimulus onset over fronto-central sensors on correct trials; P300 was defined as the mean amplitude between 350 ms and 550 ms after stimulus onset over centro-parietal sensors on correct trials. Multiple regression analyses using gender-corrected triarchic scores as predictors revealed that only Disinhibition scores significantly predicted reduced NoGo-N2 amplitudes (3.5% explained variance, beta weight = .23, p < .05) and reduced P3 amplitudes for NoGo and infrequent-Go trials (3.1 and 3.2% explained variance, respectively, beta weights = -.21, ps < .05). Our results indicate that high disinhibition entails deviations in early conflict monitoring processes (reduced NoGo-N2), as well as in later evaluative and updating processing stages of infrequent events (reduced NoGo-P3 and infrequent-Go-P3). The null contribution of the meanness and boldness domains in these results suggests that N2 and P3 amplitudes in Go/NoGo tasks could be considered as neurobiological indices of the externalizing tendencies comprised in this personality disorder.
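For illustration, the window-mean amplitude measures described above (N2: 240-340 ms over fronto-central sensors, P3: 350-550 ms over centro-parietal sensors) can be computed from epoched data roughly as follows (a sketch with assumed array shapes, sampling rate, and sensor indices, not the authors' pipeline):

import numpy as np

def mean_window_amplitude(epochs, times, t_start, t_end, channel_idx):
    """Mean amplitude in a time window, averaged over trials and selected sensors.

    epochs: array (n_trials, n_channels, n_samples), e.g. correct NoGo trials only
    times:  array (n_samples,) of sample times in seconds relative to stimulus onset
    """
    mask = (times >= t_start) & (times <= t_end)
    # average over trials, the selected channels, and time points within the window
    return epochs[:, channel_idx, :][:, :, mask].mean()

# assumed setup: 250 Hz sampling, epochs from -0.2 to 0.8 s, placeholder data
times = np.arange(-0.2, 0.8, 1 / 250)
epochs = np.random.randn(80, 256, times.size)
frontocentral = [5, 6, 11, 12]            # assumed sensor indices
centroparietal = [100, 101, 129]          # assumed sensor indices
n2 = mean_window_amplitude(epochs, times, 0.24, 0.34, frontocentral)
p3 = mean_window_amplitude(epochs, times, 0.35, 0.55, centroparietal)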
Point clouds provide high-resolution topographic data which is often classified into bare-earth, vegetation, and building points and then filtered and aggregated to gridded Digital Elevation Models (DEMs) or Digital Terrain Models (DTMs). Based on these equally spaced grids, flow-accumulation algorithms are applied to describe the hydrologic and geomorphologic mass transport on the surface. In this contribution, we propose a stochastic point-cloud filtering that, together with spatial bootstrap sampling, allows for flow accumulation directly on point clouds using Facet-Flow Networks (FFN). Additionally, this provides a framework for the quantification of uncertainties in point-cloud-derived metrics such as Specific Catchment Area (SCA), even though the flow accumulation itself is deterministic.
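A minimal sketch of the bootstrap idea for quantifying uncertainty of a point-cloud-derived metric (generic resampling with replacement; the metric function is a placeholder and this is not the FFN implementation itself):

import numpy as np

def bootstrap_metric(points, metric_fn, n_boot=200, rng=None):
    """Resample the point cloud with replacement and recompute the metric each time.

    points:    (n, 3) array of x, y, z coordinates
    metric_fn: callable mapping a point set to a scalar (placeholder, e.g. SCA at a location)
    Returns the bootstrap distribution of the metric, from which confidence
    intervals can be derived.
    """
    rng = rng or np.random.default_rng(42)
    n = len(points)
    samples = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample with replacement
        samples.append(metric_fn(points[idx]))
    return np.array(samples)

# usage: ci = np.percentile(bootstrap_metric(points, my_sca_estimate), [2.5, 97.5])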
Beacon in the Dark
(2018)
The large amount of heterogeneous data in such email corpora renders experts' investigations by hand infeasible. Auditors or journalists, for example, who are looking for irregular or inappropriate content or suspicious patterns, are in desperate need of computer-aided exploration tools to support their investigations.
We present our Beacon system for the exploration of such corpora at different levels of detail. A distributed processing pipeline combines text mining methods and social network analysis to augment the already semi-structured nature of emails. The user interface ties into the resulting cleaned and enriched dataset. For the interface design we identify three objectives expert users have: gain an initial overview of the data to identify leads to investigate, understand the context of the information at hand, and have meaningful filters to iteratively focus onto a subset of emails. To this end we make use of interactive visualisations based on rearranged and aggregated extracted information to reveal salient patterns.
Web-based E-Learning uses Internet technologies and digital media to deliver educational content to learners. In recent years, many universities have applied their capacity to producing Massive Open Online Courses (MOOCs). They have been offering MOOCs with the expectation of rendering a comprehensive online apprenticeship. Typically, an online content delivery process requires an Internet connection. However, access to broadband has never been a readily available resource in many regions. In Africa, poor or absent networks are still predominantly experienced by Internet users, frequently forcing devices offline the moment they disconnect from a network. As a result, learning processes in such regions are often disrupted, delayed, or terminated. This paper raises the concern of E-Learning over poor and low bandwidths and highlights the need for an Offline-Enabled mode. The paper also explores technical approaches aimed at enhancing the user experience in Web-based E-Learning, particularly in Africa.
The "Bachelor Project"
(2019)
One of the challenges of educating the next generation of computer scientists is to teach them to become team players who are able to communicate and interact not only with different IT systems, but also with coworkers and customers from a non-IT background. The “bachelor project” is a project format based on team work and close collaboration with selected industry partners. The authors have hosted some of the teams since the spring term 2014/15. In the paper at hand we explain and discuss this concept and evaluate its success based on students' evaluations and reports. Furthermore, the technology stack that has been used by the teams is evaluated to understand how self-organized students work in IT-related projects. We show that, and why, the bachelor project is the most successful educational format in the perception of the students, and how these positive results can be further improved by the mentors.
Triage und Diskriminierung
(2022)
BDS-Kampagne in der Kommune
(2022)
Parteienfinanzierung
(2023)
Schutzloser Fussball-Ultra
(2024)
E-Bikes und Scientology
(2023)
Mobile expressive rendering has gained increasing popularity among users seeking casual creativity through image stylization and supports the development of mobile artists as a new user group. In particular, neural style transfer has advanced as a core technology to emulate characteristics of manifold artistic styles. However, when it comes to creative expression, the technology still faces inherent limitations in providing low-level controls for localized image stylization. This work enhances state-of-the-art neural style transfer techniques by a generalized user interface with interactive tools to facilitate a creative and localized editing process. Thereby, we first propose a problem characterization representing trade-offs between visual quality, run-time performance, and user control. We then present MaeSTrO, a mobile app for orchestration of neural style transfer techniques using iterative, multi-style generative and adaptive neural networks that can be locally controlled by on-screen painting metaphors. Initial user tests indicate different levels of satisfaction for the implemented techniques and interaction design.
DPP4 inhibition prevents AKI
(2017)
Logical modeling has been widely used to understand and expand the knowledge about protein interactions among different pathways. Realizing this, the caspo-ts system has been proposed recently to learn logical models from time series data. It uses Answer Set Programming to enumerate Boolean Networks (BNs) given prior knowledge networks and phosphoproteomic time series data. In the resulting sequence of solutions, similar BNs are typically clustered together. This can be problematic for large scale problems where we cannot explore the whole solution space in reasonable time. Our approach extends the caspo-ts system to cope with the important use case of finding diverse solutions of a problem with a large number of solutions. We first present the algorithm for finding diverse solutions and then we demonstrate the results of the proposed approach on two different benchmark scenarios in systems biology: (1) an artificial dataset to model TCR signaling and (2) the HPN-DREAM challenge dataset to model breast cancer cell lines.
Tikhonov regularization with oversmoothing penalty for linear statistical inverse learning problems
(2019)
In this paper, we consider the linear ill-posed inverse problem with noisy data in the statistical learning setting. The Tikhonov regularization scheme in Hilbert scales is considered in the reproducing kernel Hilbert space framework to reconstruct the estimator from the random noisy data. We discuss the rates of convergence for the regularized solution under the prior assumptions and a link condition. For regression functions with smoothness given in terms of source conditions, the error bound can be established explicitly.
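In a standard formulation of this setting (stated here for orientation; the notation is assumed, not taken from the paper), the Tikhonov estimator in Hilbert scales minimizes an empirical risk with a smoothness penalty,

\[ f_\lambda = \operatorname*{arg\,min}_{f} \; \frac{1}{n} \sum_{i=1}^{n} \bigl( (A f)(x_i) - y_i \bigr)^{2} + \lambda \, \lVert B^{a} f \rVert^{2}, \]

where A is the linear forward operator, B generates the Hilbert scale, a > 0 fixes the smoothness imposed by the penalty, and \lambda > 0 is the regularization parameter; the oversmoothing case refers to penalties whose smoothness exceeds that of the true regression function.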
When local poverty is more important than your income: Mental health in minorities in inner cities
(2015)
The influence of chemical composition and crystallisation conditions on the ferroelectric and paraelectric phases and the resulting morphology in Poly(vinylidene fluoride-trifluoroethylene-chlorofluoroethylene) (P(VDF-TrFE-CFE)) terpolymer films with 55.4/37.2/7.3 mol% or with 62.2/29.4/8.4 mol% of VDF/TrFE/CFE was studied. Poly(vinylidene fluoride-trifluoroethylene) (P(VDF-TrFE)) with 75/25 mol% VDF/TrFE was employed as reference material. Fourier-Transform Infrared Spectroscopy (FTIR) was used to determine the fractions of the relevant terpolymer phases, and X-Ray Diffraction (XRD) was employed to assess the crystalline morphology. The FTIR results show an increase of the fraction of paraelectric phases after annealing. On the other hand, XRD results indicate a more stable paraelectric phase in the terpolymer with higher CFE content.
When I started my PhD, I wanted to do something related to systems but I wasn't sure exactly what. I didn't consider data management systems initially, because I was unaware of the richness of the systems work that data management systems were built on. I thought the field was mainly about SQL. Luckily, that view changed quickly.
Cardiovascular drift response over two different constant-load exercises in healthy non-athletes
(2019)
Cardiovascular drift (CV-d) is a steady increase in heart rate (HR) over time while performing constant load moderate intensity exercise (CME) > 20 min. CV-d presents problems for the prescription of exercise intensity by means of HR, because the work rate (WR) during exercise must be adjusted to maintain target HR, thus disturbing the intended effect of the exercise intervention. It has been shown that the increase in HR during CME is due to changes in WR and not to CV-d.
Business process simulation is an important means for the quantitative analysis of a business process and for comparing different process alternatives. With the Business Process Model and Notation (BPMN) being the state-of-the-art language for the graphical representation of business processes, many existing process simulators already support the simulation of BPMN diagrams. However, they do not provide well-defined interfaces to integrate new concepts into the simulation environment. In this work, we present the design and architecture of a proof-of-concept implementation of an open and extensible BPMN process simulator. It also supports the simulation of multiple BPMN processes at a time and relies on the building blocks of well-founded discrete event simulation. The extensibility is assured by a plug-in concept. Its feasibility is demonstrated by extensions supporting new BPMN concepts, such as the simulation of business rule activities referencing decision models and batch activities.
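To illustrate the discrete event simulation building blocks such a simulator relies on (a generic sketch, not the paper's implementation; class and function names are assumptions):

import heapq
import itertools

class DiscreteEventSimulator:
    """Minimal discrete event simulation core: a clock and a time-ordered event queue.
    Plug-ins could register additional event handlers, e.g. for new BPMN elements."""

    def __init__(self):
        self.clock = 0.0
        self._queue = []
        self._counter = itertools.count()   # tie-breaker for events at equal times

    def schedule(self, delay, handler, payload=None):
        heapq.heappush(self._queue, (self.clock + delay, next(self._counter), handler, payload))

    def run(self, until=float("inf")):
        while self._queue and self._queue[0][0] <= until:
            self.clock, _, handler, payload = heapq.heappop(self._queue)
            handler(self, payload)           # handlers may schedule follow-up events

# example handler pair: a task that takes 5 time units before it completes
def start_task(sim, name):
    print(f"{sim.clock:6.1f}: start {name}")
    sim.schedule(5.0, end_task, name)

def end_task(sim, name):
    print(f"{sim.clock:6.1f}: end   {name}")

sim = DiscreteEventSimulator()
sim.schedule(0.0, start_task, "Check order")
sim.run()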
The target article discusses the question of how educational makerspaces can become places supportive of knowledge construction. This question is too often neglected by people who run makerspaces, as they mostly explain how to use different tools and focus on the creation of a product. In makerspaces, pupils often also engage in physical computing activities and thus in the creation of interactive artifacts containing embedded systems, such as smart shoes or wristbands, plant monitoring systems or drink mixing machines. This offers the opportunity to reflect on teaching physical computing in computer science education, where the creation of the product is similarly often focused upon so strongly that the reflection of the learning process is pushed into the background.
Aspirin inhibits release of platelet-derived sphingosine-1-phosphate in
acute myocardial infarction
(2013)
Minimising Information Loss on Anonymised High Dimensional Data with Greedy In-Memory Processing
(2018)
Minimising information loss on anonymised high dimensional data is important for data utility. Syntactic data anonymisation algorithms address this issue by generating datasets that are neither use-case specific nor dependent on runtime specifications. This results in anonymised datasets that can be re-used in different scenarios which is performance efficient. However, syntactic data anonymisation algorithms incur high information loss on high dimensional data, making the data unusable for analytics. In this paper, we propose an optimised exact quasi-identifier identification scheme, based on the notion of k-anonymity, to generate anonymised high dimensional datasets efficiently, and with low information loss. The optimised exact quasi-identifier identification scheme works by identifying and eliminating maximal partial unique column combination (mpUCC) attributes that endanger anonymity. By using in-memory processing to handle the attribute selection procedure, we significantly reduce the processing time required. We evaluated the effectiveness of our proposed approach with an enriched dataset drawn from multiple real-world data sources, and augmented with synthetic values generated in close alignment with the real-world data distributions. Our results indicate that in-memory processing drops attribute selection time for the mpUCC candidates from 400s to 100s, while significantly reducing information loss. In addition, we achieve a time complexity speed-up of O(3^(n/3)) ≈ O(1.4422^n).
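For context, k-anonymity over a set of quasi-identifier attributes can be checked with a simple group-by (an illustrative sketch using pandas; the column names are invented and this is not the paper's mpUCC algorithm):

import pandas as pd

def is_k_anonymous(df, quasi_identifiers, k):
    """A dataset is k-anonymous w.r.t. the quasi-identifiers if every combination
    of their values occurs in at least k records."""
    group_sizes = df.groupby(quasi_identifiers).size()
    return bool((group_sizes >= k).all())

# toy example with invented columns
df = pd.DataFrame({
    "age_range":  ["30-40", "30-40", "30-40", "40-50", "40-50"],
    "zip_prefix": ["147",   "147",   "147",   "104",   "104"],
    "diagnosis":  ["A", "B", "A", "C", "C"],
})
print(is_k_anonymous(df, ["age_range", "zip_prefix"], k=2))  # True: every group has >= 2 rows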
High-dimensional data is particularly useful for data analytics research. In the healthcare domain, for instance, high-dimensional data analytics has been used successfully for drug discovery. Yet, in order to adhere to privacy legislation, data analytics service providers must guarantee anonymity for data owners. In the context of high-dimensional data, ensuring privacy is challenging because increased data dimensionality must be matched by an exponential growth in the size of the data to avoid sparse datasets. Anonymising sparse datasets syntactically with methods that rely on statistical significance makes obtaining sound and reliable results a challenge. As such, strong privacy is only achievable at the cost of high information loss, rendering the data unusable for data analytics. In this paper, we make two contributions to addressing this problem from both the privacy and information loss perspectives. First, we show that by identifying dependencies between attribute subsets we can eliminate privacy-violating attributes from the anonymised dataset. Second, to minimise information loss, we employ a greedy search algorithm to determine and eliminate maximal partial unique attribute combinations. Thus, one only needs to find the minimal set of identifying attributes to prevent re-identification. Experiments on a health cloud based on the SAP HANA platform using a semi-synthetic medical history dataset comprised of 109 attributes demonstrate the effectiveness of our approach.
Cost models play an important role for the efficient implementation of software systems. These models can be embedded in operating systems and execution environments to optimize execution at run time. Even though non-uniform memory access (NUMA) architectures are dominating today's server landscape, there is still a lack of parallel cost models that represent NUMA systems sufficiently. Therefore, the existing NUMA models are analyzed, and a two-step performance assessment strategy is proposed that incorporates low-level hardware counters as performance indicators. To support the two-step strategy, multiple tools are developed, all accumulating and enriching specific hardware event counter information, to explore, measure, and visualize these low-overhead performance indicators. The tools are showcased and discussed alongside specific experiments in the realm of performance assessment.
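As one way to collect such low-overhead hardware event counters on Linux (a sketch around the standard `perf stat` tool; the chosen events, the parsing, and the wrapper function are assumptions, not the paper's tooling):

import subprocess

def perf_counters(cmd, events=("cycles", "instructions", "cache-misses")):
    """Run a command under `perf stat` and return selected hardware event counters.
    Uses CSV-style output (-x ,); perf writes its report to stderr."""
    result = subprocess.run(
        ["perf", "stat", "-x", ",", "-e", ",".join(events)] + list(cmd),
        capture_output=True, text=True,
    )
    counters = {}
    for line in result.stderr.splitlines():
        fields = line.split(",")
        # CSV layout starts with: counter value, unit, event name, ...
        if len(fields) >= 3 and any(fields[2].startswith(e) for e in events):
            counters[fields[2]] = fields[0]
    return counters

# usage (assumed workload binary): perf_counters(["./memory_bound_benchmark"])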
The overhead of moving data is the major limiting factor in today's hardware, especially in heterogeneous systems where data needs to be transferred frequently between host and accelerator memory. With the increasing availability of hardware-based compression facilities in modern computer architectures, this paper investigates the potential of hardware-accelerated I/O link compression as a promising approach to reduce data volumes and transfer time, thus improving the overall efficiency of accelerators in heterogeneous systems. Our considerations are focused on on-the-fly compression in both single-node and scale-out deployments. Based on a theoretical analysis, this paper demonstrates the feasibility of hardware-accelerated on-the-fly I/O link compression for many workloads in a scale-out scenario, and for some even in a single-node scenario. These findings are confirmed in a preliminary evaluation using software- and hardware-based implementations of the 842 compression algorithm.
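A back-of-the-envelope model of when on-the-fly link compression pays off (purely illustrative numbers and a simplified streaming assumption; none of these figures are from the paper):

def transfer_time_s(volume_gb, link_gb_s, ratio=1.0, comp_gb_s=float("inf")):
    """Streaming model: compressed data crosses the link while the compressor works,
    so the slower of 'link carrying compressed data' and 'compressor' dominates.
    Effective throughput in terms of original data = min(ratio * link, compressor)."""
    effective = min(ratio * link_gb_s, comp_gb_s)
    return volume_gb / effective

# illustrative numbers: 64 GB payload, 12 GB/s host-accelerator link,
# an assumed compression ratio of 2, and two compressor throughputs
print(transfer_time_s(64, 12))                          # ~5.3 s uncompressed
print(transfer_time_s(64, 12, ratio=2, comp_gb_s=40))   # ~2.7 s, fast hardware compressor helps
print(transfer_time_s(64, 12, ratio=2, comp_gb_s=4))    # 16.0 s, slow compressor becomes the bottleneck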
In the High Middle Ages, narratives emerge that combine established literary forms and traditions in new ways: they are vernacular, allegorical, and use first-person narration, and in this combination, which solidifies into a narrative format crossing the boundaries of individual languages, they take up the most diverse subjects. This format, first realised in the Old French Roman de la Rose, would shape European literature well into the modern era with texts such as Dante's Divina Commedia, Guillaume de Deguileville's Pèlerinage de la Vie Humaine, William Langland's Piers Plowman and Christine de Pizan's Le Livre de la mutation de Fortune. The contribution introducing the volume asks whether this narrative format is used universally or whether it shows specific characteristics, for instance within love poetry.
This is a correction notice for ‘Post-adiabatic supernova remnants in an interstellar magnetic field: oblique shocks and non-uniform environment’ (DOI: https://doi.org/10.1093/mnras/sty1750), which was published in MNRAS 479, 4253–4270 (2018). The publisher regrets to inform that the colour was missing from the colour scales in Figs 8(a)–(d) and Figs 9(a) and (b). This has now been corrected online. The publisher apologizes for this error.
High-throughput RNA sequencing produces large gene expression datasets whose analysis leads to a better understanding of diseases like cancer. The nature of RNA-Seq data poses challenges to its analysis in terms of its high dimensionality, noise, and the complexity of the underlying biological processes. Researchers apply traditional machine learning approaches, e.g. hierarchical clustering, to analyze this data. Up to the validation of the results, the analysis is based on the provided data only and completely misses the biological context. However, gene expression data follows particular patterns - the underlying biological processes. In our research, we aim to integrate the available biological knowledge earlier in the analysis process. We want to adapt state-of-the-art data mining algorithms to consider the biological context in their computations and deliver meaningful results for researchers.
High-throughput RNA sequencing (RNAseq) produces large data sets containing expression levels of thousands of genes. The analysis of RNAseq data leads to a better understanding of gene functions and interactions, which eventually helps to study diseases like cancer and develop effective treatments. Large-scale RNAseq expression studies on cancer comprise samples from multiple cancer types and aim to identify their distinct molecular characteristics. Analyzing samples from different cancer types implies analyzing samples of different tissue origin. Such multi-tissue RNAseq data sets require a meaningful analysis that accounts for the inherent tissue-related bias: the identified characteristics must not originate from the differences in tissue types, but from the actual differences in cancer types. However, current analysis procedures do not incorporate that aspect. As a result, we propose to integrate tissue-awareness into the analysis of multi-tissue RNAseq data. We introduce an extension for gene selection that provides a tissue-wise context for every gene and can be flexibly combined with any existing gene selection approach. We suggest expanding conventional evaluation with additional metrics that are sensitive to the tissue-related bias. Evaluations show that especially low-complexity gene selection approaches profit from introducing tissue-awareness.