In an effort to explain the formation of a narrow third radiation belt at ultra-relativistic energies detected during a solar storm in September 2012 (ref. 1), Mann et al.2 present simulations from which they conclude it arises from a process of outward radial diffusion alone, without the need for additional loss processes from higher frequency waves. The comparison of observations with the model in Figs 2 and 3 of their Article clearly shows that even with strong radial diffusion rates, the model predicts a third belt near L* = 3 that is twice as wide as observed and approximately an order of magnitude more intense. We therefore disagree with their interpretation that “the agreement between the absolute fluxes from the model and those observed by REPT [the Relativistic Electron Proton Telescope] shown on Figs 2 and 3 is excellent.”
Previous studies3 have shown that outward radial diffusion plays a very important role in the dynamics of the outer belt and is capable of explaining rapid reductions in the electron flux. It has also been shown that it can produce remnant belts (Fig. 2 of a long-term simulation study4). However, radial diffusion alone cannot explain the formation of the narrow third belt at multi-MeV energies during September 2012. An additional loss mechanism is required.
Higher radial diffusion rates cannot improve the comparison of the model presented by Mann et al. with observations. A further increase in the radial diffusion rates (reported in Fig. 4 of the Supplementary Information of ref. 2) results in an overestimation of the outer-belt fluxes by up to three orders of magnitude at an energy of 3.4 MeV.
Observations at 2 MeV, where the belts show only a two-zone structure, were not presented by Mann et al. Moreover, simulations of electrons with energies below 2 MeV with the same diffusion rates and boundary conditions used by the authors would probably produce very strong depletions down to L = 3–3.5, where L is the radial distance from the centre of the Earth to the given field line in the equatorial plane. Observations do not show non-adiabatic loss below L ∼ 4.5 for 2 MeV. Such a difference in dynamics between 2 MeV and above 4 MeV at around L = 3.5 is another indication that particles are scattered by electromagnetic ion cyclotron (EMIC) waves, which affect only energies above a certain threshold.
Observations of the phase space density (PSD) provide additional evidence for the local loss of electrons. Around L* = 3.5–4, the PSD shows a significant decrease by an order of magnitude starting in the afternoon of 3 September (Fig. 1a), while the PSD above L* = 4 is increasing. The minimum in PSD between L* = 3.5 and 4 continues to deepen until 4 September. This evolution demonstrates that the loss is not produced by outward diffusion. Radial diffusion cannot produce deepening minima, as it works to smooth gradients. Just as growing peaks in PSD show the presence of localized acceleration5, deepening minima show the presence of localized loss.
Figure 1: Time evolution of radiation profiles in electron PSD at relativistic and ultra-relativistic energies.
a, Similar to Supplementary Fig. 3 of ref. 2, but using the TS07D model10 and for μ = 2,500 MeV G−1, K = 0.05 RE G0.5 (where RE is the radius of the Earth). b, Similar to Supplementary Fig. 3 of ref. 2, but using the TS07D model and for μ = 700 MeV G−1, corresponding to MeV energies in the heart of the belt. The minimum in PSD in the heart of the multi-MeV electron radiation belt between 3.5 and 4 RE, deepening between the afternoon of 3 September and 5 September, clearly shows that the narrow remnant belt at multi-MeV energies below 3.5 RE is produced by local loss.
The minimum in the outer boundary is reached on the evening of 2 September. After that, the outer boundary moves up, while the minimum decreases by approximately an order of magnitude, clearly showing that this main decrease cannot be explained by outward diffusion and requires additional loss processes. The analysis of PSD profiles is a standard tool, used for example in studies of electron acceleration5, and routinely used by the entire Van Allen Probes team. In the Supplementary Information, we show that this analysis holds when using different magnetic field models. The Supplementary Information also shows that the measurements are above background noise.
Deepening minima at multi-MeV energies during the times when the boundary flux increases are clearly seen in Fig. 1a. They show that there must be localized loss, as radial diffusion cannot produce a minimum that becomes lower with time. At lower energies of 1–2 MeV, which correspond to lower values of the first adiabatic invariant μ (Fig. 1b), the profiles are monotonic between L* = 3 and 3.5, consistent with the absence of scattering by EMIC waves, which affect only electrons above a certain energy threshold6,7,8,9.
In summary, the results of the modelling and observations presented by Mann et al. do not support the claim that the dynamics of the ultra-relativistic third Van Allen radiation belt can be explained by an outward radial diffusion process alone. While outward radial diffusion driven by loss to the magnetopause2 is certainly operating during this storm, there is compelling observational and modelling2,6 evidence that very efficient localized electron loss operates during this storm at multi-MeV energies, consistent with localized loss produced by EMIC waves.
The detection of all inclusion dependencies (INDs) in an unknown dataset is at the core of any data profiling effort. Apart from the discovery of foreign-key relationships, INDs can help with data integration, integrity checking, schema (re-)design, and query optimization. With the advent of Big Data, the demand for efficient IND discovery algorithms that scale with the input data size is increasing. To this end, we propose S-INDD++, a scalable system for detecting unary INDs in large datasets. S-INDD++ applies a new stepwise partitioning technique that helps discard a large number of attributes in early phases of the detection by processing the first, smaller partitions. S-INDD++ also extends the concept of attribute clustering to decide which attributes to discard based on the clustering result of each partition. Moreover, in contrast to the state of the art, S-INDD++ does not require a partition to fit into main memory, a highly desirable property in the face of ever-growing datasets. We conducted an exhaustive evaluation of S-INDD++ by applying it to large datasets with thousands of attributes and more than 266 million tuples. The results show the clear superiority of S-INDD++ over the state of the art: S-INDD++ reduced the runtime by up to 50 % compared with BINDER, and by up to 98 % compared with S-INDD.
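The core inclusion test behind unary IND discovery can be illustrated in a few lines (a naive baseline, not the S-INDD++ algorithm; the example attribute names and data are hypothetical):

```python
# Naive unary IND discovery (illustrative baseline, not S-INDD++):
# a unary inclusion dependency A ⊆ B holds when every distinct value
# of attribute A also occurs in attribute B.
def unary_inds(columns):
    """columns: dict mapping attribute name -> iterable of values.
    Returns all pairs (a, b), a != b, with values(a) ⊆ values(b)."""
    value_sets = {name: set(vals) for name, vals in columns.items()}
    return [
        (a, b)
        for a in value_sets
        for b in value_sets
        if a != b and value_sets[a] <= value_sets[b]
    ]

# Hypothetical example data
data = {
    "order.customer_id": [1, 2, 2, 3],
    "customer.id": [1, 2, 3, 4],
    "customer.name": ["Ann", "Bo", "Cy"],
}
print(unary_inds(data))  # [('order.customer_id', 'customer.id')]
```

The naive version materializes every value set in memory; S-INDD++'s stepwise partitioning exists precisely to avoid this, discarding attributes after the first, smaller partitions and never requiring a whole partition to fit into main memory.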
E-commerce marketplaces are highly dynamic, with constant competition. While this competition is challenging for many merchants, it also provides plenty of opportunities, e.g., by allowing them to automatically adjust prices in order to react to changing market situations. For practitioners, however, testing automated pricing strategies is time-consuming and potentially hazardous when done in production. Researchers, on the other hand, struggle to study how pricing strategies interact under heavy competition. As a consequence, we built Price Wars, an open continuous-time framework to simulate dynamic pricing competition. The microservice-based architecture provides a scalable platform for large competitions with dozens of merchants and a large random stream of consumers. Our platform stores each event in a distributed log, which makes it possible to provide different performance measures enabling users to compare the profit and revenue of various repricing strategies in real time. For researchers, price trajectories are shown, which eases the evaluation of mutual price reactions of competing strategies. Furthermore, merchants can access historical marketplace data and apply machine learning. By providing a set of customizable, artificial merchants, users can easily simulate both simple rule-based strategies and sophisticated data-driven strategies using demand learning to optimize their pricing strategies.
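For illustration, a simple rule-based repricing strategy of the kind the artificial merchants could implement might look as follows (function names, margins, and prices are hypothetical, not the Price Wars API):

```python
# Hypothetical rule-based repricing strategy: undercut the cheapest
# competitor, but never price below a cost-plus-margin floor.
def reprice(own_cost, competitor_prices, margin=0.1, undercut=0.01):
    """Return a new price given unit cost and observed competitor prices."""
    floor = own_cost * (1 + margin)          # minimum acceptable price
    if not competitor_prices:
        return round(floor * 2, 2)           # no competition: high margin
    target = min(competitor_prices) - undercut
    return round(max(target, floor), 2)      # undercut, but respect floor

print(reprice(10.0, [14.99, 13.49, 15.00]))  # 13.48
print(reprice(10.0, [10.50]))                # 11.0 (price floor wins)
```

A data-driven strategy would replace the fixed undercutting rule with a demand model learned from the platform's historical marketplace data.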
Devices on the Internet of Things (IoT) are usually battery-powered and have limited resources. Hence, energy-efficient and lightweight protocols were designed for IoT devices, such as the popular Constrained Application Protocol (CoAP). Yet, CoAP itself does not include any defenses against denial-of-sleep attacks, which are attacks that aim at depriving victim devices of entering low-power sleep modes. For example, a denial-of-sleep attack against an IoT device that runs a CoAP server is to send plenty of CoAP messages to it, thereby forcing the IoT device to expend energy for receiving and processing these CoAP messages. All current security solutions for CoAP, namely Datagram Transport Layer Security (DTLS), IPsec, and OSCORE, fail to prevent such attacks. To fill this gap, Seitz et al. proposed a method for filtering out inauthentic and replayed CoAP messages "en-route" on 6LoWPAN border routers. In this paper, we expand on Seitz et al.'s proposal in two ways. First, we revise Seitz et al.'s software architecture so that 6LoWPAN border routers can not only check the authenticity and freshness of CoAP messages, but can also perform a wide range of further checks. Second, we propose a couple of such further checks, which, as compared to Seitz et al.'s original checks, more reliably protect IoT devices that run CoAP servers from remote denial-of-sleep attacks, as well as from remote exploits. We prototyped our solution and successfully tested its compatibility with Contiki-NG's CoAP implementation.
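The general idea of en-route filtering can be sketched as follows (a simplified illustration with a hypothetical shared-key HMAC check and a replay window, not the authors' actual architecture or checks):

```python
# Illustrative en-route filter sketch: a border router drops messages
# that fail an authenticity check or repeat a recently seen message ID,
# so the constrained device behind it never wakes up to process them.
import hashlib
import hmac

SHARED_KEY = b"demo-key"  # hypothetical per-device key

def authentic(payload: bytes, tag: bytes) -> bool:
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

class EnRouteFilter:
    def __init__(self, window=60.0):
        self.window = window   # freshness window in seconds
        self.seen = {}         # message id -> arrival time

    def accept(self, msg_id, payload, tag, now):
        # forget entries older than the freshness window
        self.seen = {m: t for m, t in self.seen.items()
                     if now - t < self.window}
        # reject replays and forged messages
        if msg_id in self.seen or not authentic(payload, tag):
            return False
        self.seen[msg_id] = now
        return True

f = EnRouteFilter()
tag = hmac.new(SHARED_KEY, b"GET /temp", hashlib.sha256).digest()
print(f.accept(1, b"GET /temp", tag, now=0.0))   # True
print(f.accept(1, b"GET /temp", tag, now=1.0))   # False (replay)
```

In the paper's setting, such checks run on the 6LoWPAN border router, so inauthentic or replayed traffic is discarded before it can deprive the battery-powered CoAP server of sleep.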
Bottom-up saliency is often cited as a factor driving the choice of fixation locations of human observers, based on the (partial) success of saliency models to predict fixation densities in free viewing. However, these observations are only weak evidence for a causal role of bottom-up saliency in natural viewing behaviour. To test bottom-up saliency more directly, we analyse the performance of a number of saliency models---including our own saliency model based on our recently published model of early visual processing (Schütt & Wichmann, 2017, JoV)---as well as the theoretical limits for predictions over time. On free viewing data our model performs better than classical bottom-up saliency models, but worse than the current deep learning based saliency models incorporating higher-level information like knowledge about objects. However, on search data all saliency models perform worse than the optimal image independent prediction. We observe that the fixation density in free viewing is not stationary over time, but changes over the course of a trial. It starts with a pronounced central fixation bias on the first chosen fixation, which is nonetheless influenced by image content. Starting with the 2nd to 3rd fixation, the fixation density is already well predicted by later densities, but more concentrated. From there the fixation distribution broadens until it reaches a stationary distribution around the 10th fixation. Taken together these observations argue against bottom-up saliency as a mechanistic explanation for eye movement control after the initial orienting reaction in the first one to two saccades, although we confirm the predictive value of early visual representations for fixation locations. The fixation distribution is, first, not well described by any stationary density, second, is predicted better when including object information and, third, is badly predicted by any saliency model in a search task.
Manufacturing industries are undergoing a major paradigm shift towards more autonomy. Automated planning and scheduling thus becomes a necessity. The Planning and Execution Competition for Logistics Robots in Simulation held at ICAPS is based on this scenario and provides an interesting testbed. However, the posed problem is challenging, as demonstrated by the somewhat weak results in 2017. The domain requires temporal reasoning and dealing with uncertainty. We propose a novel planning system based on Answer Set Programming and the Clingo solver to tackle these problems and incentivize robot cooperation. Our results show a significant performance improvement, both in terms of lower computational requirements and in terms of better game metrics.
Eccentric exercises (ECC) induce reversible muscle damage, delayed-onset muscle soreness and an inflammatory reaction that is often followed by a systemic anti-inflammatory response. Thus, ECC might be beneficial for treatment of metabolic disorders which are frequently accompanied by a low-grade systemic inflammation. However, extent and time course of a systemic immune response after repeated ECC bouts are poorly characterized.
PURPOSE: To analyze the (anti-)inflammatory response after repeated ECC loading of the trunk.
METHODS: Ten healthy participants (33 ± 6 y; 173 ± 14 cm; 74 ± 16 kg) performed three isokinetic strength measurements of the trunk (concentric (CON), ECC1, ECC2, each 2 wks apart; flexion/extension, velocity 60°/s, 120 s MVC). Pre- and 4, 24, 48, 72 and 168 h post-exercise, muscle soreness (numeric rating scale, NRS) was assessed and blood samples were taken and analyzed [creatine kinase (CK), C-reactive protein (CRP), interleukin-6 (IL-6), IL-10, tumor necrosis factor-α (TNF-α)]. Statistics were done by Friedman's test with Dunn's post hoc test (α = .05).
RESULTS: Mean peak torque was higher during ECC1 (319 ± 142 Nm) than during CON (268 ± 108 Nm; p<.05) and not different between ECC1 and ECC2 (297 ± 126 Nm; p>.05). Markers of muscle damage (peaks post-ECC1: NRS 48h, 4.4±2.9; CK 72h, 14407 ± 19991 U/l) were higher after ECC1 than after CON and ECC2 (p<.05). The responses over 72h (stated as Area under the Curve, AUC) were abolished after ECC2 compared to ECC1 (p<.05) indicating the presence of the repeated bout effect. CRP levels were not changed. IL-6 levels increased 2-fold post-ECC1 (pre: 0.5 ± 0.4 vs. 72h: 1.0 ± 0.8 pg/ml). The IL-6 response was enhanced after ECC1 (AUC 61 ± 37 pg/ml*72h) compared to CON (AUC 33 ± 31 pg/ml*72h; p<.05). After ECC2, the IL-6 response (AUC 43 ± 25 pg/ml*72h) remained lower than post-ECC1, but the difference was not statistically significant. Serum levels of TNF-α and of the anti-inflammatory cytokine IL-10 were below detection limits. Overall, markers of muscle damage and immune response showed high inter-individual variability.
CONCLUSION: Despite maximal ECC loading of a large muscle group, no anti-inflammatory and just weak inflammatory responses were detected in healthy adults. Whether ECC elicits a different reaction in inflammatory clinical conditions is unclear.
A hybrid design approach of the hierarchical physical implementation design flow is presented and demonstrated on a fault-tolerant low-power multiprocessor system. The proposed flow allows to implement selected submodules in parallel with contrary requirements such as identical placement and individual block implementation. The overall system contains four Leon2 cores and communicates via the Waterbear framework and supports Adaptive Voltage Scaling (AVS) functionality. Three of the processor core variants are derived from the first baseline reference core but implemented individually at block level based on their clock tree specification. The chip is prepared for space applications and designed with triple modular redundancy (TMR) for control parts. The low-power performance is enabled by contemporary power and clock management control. An ASIC is fabricated in a low-power 0.13 mu m BiCMOS technology process node.
Eccentric (ECC) exercises might cause muscle damage, characterized by delayed-onset muscle soreness, elevated creatine kinase (CK) levels and local muscle oedema, shown by elevated T2 times in magnetic resonance imaging (MRI) scans. Previous research suggests high inter-individual differences regarding these systemic and local responses to eccentric workload. PURPOSE: To analyze ECC exercise-induced muscle damage in lumbar paraspinal muscles assessed via MRI. METHODS: Ten participants (3f/7m; 33±6y; 174±8cm; 71±12kg) were included in the study. Quantitative paraspinal muscle constitution of M. erector spinae and M. multifidus was assessed in supine position before and 72h after an intense eccentric trunk exercise bout in a mobile 1.5 tesla MRI device. MRI scans were recorded at spinal level L3 (T2-weighted TSE echo sequences, 11 slices, 2mm slice thickness, 3mm gap, echo times: 20, 40, 60, 80, 100ms, TR time: 2500ms). Muscle T2 times were calculated for manually traced regions of interest of the respective muscles with an imaging software. The exercise protocol was performed in an isokinetic device and consisted of 120sec alternating ECC trunk flexion-extension with maximal effort. Venous blood samples were taken before and 72h after the ECC exercise. Descriptive statistics (mean±SD) and t-testing for pre-post ECC exercises were performed. RESULTS: T2 times increased from pre- to post-ECC MRI measurements from 55±3ms to 79±28ms in M. erector spinae and from 62±5ms to 78±24ms in M. multifidus (p<0.001). CK increased from 126±97 U/L to 1447±20579 U/L. High SDs of T2 time and CK in post-ECC measures could be due to inter-individual reactions to ECC exercises. 3 participants showed high local and systemic reactions (HR) with T2 time increases of 120±24% (M. erector spinae) and 73±50% (M. multifidus). In comparison, the remaining 7 participants showed increases of 11±12% (M. erector spinae) and 7±9% (M. multifidus) in T2 time.
Mean CK increased 9.5-fold in the 3 HR subjects compared with the remaining 7 subjects. CONCLUSIONS: The 120sec maximal ECC trunk flexion-extension protocol induced high amounts of muscle damage in 3 participants. Moderate to low responses were found in the remaining 7 subjects, assuming that inter-individual predictors play a role regarding physiological responses to ECC workload.
DualPanto
(2018)
We present a new haptic device that enables blind users to continuously track the absolute position of moving objects in spatial virtual environments, as is the case in sports or shooter games. Users interact with DualPanto by operating the me handle with one hand and by holding on to the it handle with the other hand. Each handle is connected to a pantograph haptic input/output device. The key feature is that the two handles are spatially registered with respect to each other. When guiding their avatar through a virtual world using the me handle, spatial registration enables users to track moving objects by having the device guide the output hand. This allows blind players of a 1-on-1 soccer game to race for the ball or evade an opponent; it allows blind players of a shooter game to aim at an opponent and dodge shots. In our user study, blind participants reported very high enjoyment when using the device to play (6.5/7).
Female extra-pair mating, fitness and genetic diversity: Expression in socially monogamous Coal Tits
(2006)
This introductory essay to the HSR Special Issue “Economists, Politics, and Society” argues for a strong field-theoretical programme inspired by Pierre Bourdieu to research economic life as an integral part of different social forms. Its main aim is threefold. First, we spell out the very distinct Durkheimian legacy in Bourdieu’s thinking and the way he applies it in researching economic phenomena. Without this background, much of what is actually part of how Bourdieu analysed economic aspects of social life would be overlooked or reduced to mere economic sociology. Second, we sketch the main theoretical concepts and heuristics used to analyse economic life from a field perspective. Third, we focus on practical methodological issues of field-analytical research into economic phenomena. We conclude with a short summary of the basic characteristics of this approach and discuss the main insights provided by the contributions to this special issue.
In this paper, the applicability of deep downhole geoelectrical monitoring for detecting CO2-related signatures is evaluated after a nearly ten-year period of CO2 storage at the Ketzin pilot site. Deep downhole electrode arrays have so far been studied as part of a multi-physical monitoring concept at four CO2 pilot test sites worldwide. For these sites, it was considered important to implement the geoelectrical method into the measurement program for tracking the CO2 plume. Analyzing the example of the Ketzin site, it can be seen that during all phases of the CO2 storage reservoir development the resistivity measurements and their corresponding tomographic interpretation contribute beneficially to the measurement, monitoring and verification (MMV) protocol. The most important impact of a permanent electrode array is its potential as a tool for estimating reservoir saturations.
An efficient selection of indexes is indispensable for database performance. For large problem instances with hundreds of tables, existing approaches are not suitable: They either exhibit prohibitive runtimes or yield far from optimal index configurations by strongly limiting the set of index candidates or not handling index interaction explicitly. We introduce a novel recursive strategy that does not exclude index candidates in advance and effectively accounts for index interaction. Using large real-world workloads, we demonstrate the applicability of our approach. Further, we evaluate our solution end to end with a commercial database system using a reproducible setup. We show that our solutions are near-optimal for small index selection problems. For larger problems, our strategy outperforms state-of-the-art approaches in both scalability and solution quality.
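The role of index interaction can be illustrated with a minimal greedy selection loop (an illustrative baseline, not the paper's recursive strategy; the cost model is a toy stand-in for what-if estimates from a database optimizer):

```python
# Minimal greedy index selection under a storage budget. Re-evaluating
# cost(config) after every pick is what accounts for index interaction:
# an index's benefit depends on which indexes are already chosen.
def greedy_select(candidates, sizes, budget, cost):
    chosen = set()
    while True:
        base = cost(chosen)
        best, best_gain = None, 0.0
        for c in candidates - chosen:
            if sum(sizes[i] for i in chosen | {c}) > budget:
                continue                      # over the storage budget
            gain = base - cost(chosen | {c})  # benefit given current picks
            if gain > best_gain:
                best, best_gain = c, gain
        if best is None:
            return chosen
        chosen.add(best)

# Toy cost model with interaction: indexes "a" and "b" overlap, so their
# combined benefit is less than the sum of their individual benefits.
def toy_cost(cfg):
    saving = 0
    if "c" in cfg:
        saving += 50
    if "a" in cfg:
        saving += 30
    if "b" in cfg:
        saving += 20 if "a" not in cfg else 10
    return 100 - saving

print(greedy_select({"a", "b", "c"}, {"a": 1, "b": 1, "c": 2}, 2, toy_cost))
# {'c'}
```

A greedy loop like this can still miss configurations that are only good in combination; handling such cases without pruning candidates in advance is exactly what the recursive strategy above addresses.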
History without borders
(2020)
Root infinitives on Twitter
(2017)
Religious Mapping Erbil (RME) is a joint project of teams from the Catholic University in Erbil (CUE), Salahaddin University-Erbil (SUE) and Tishk International University (TIU) under the guidance of the University of Potsdam (UP). From 2018 to 2022, the project was financed by the German Academic Exchange Service (DAAD).
This project involves scholars of various disciplines including religious studies, Islamic studies, English language, applied computing, and computer engineering. The research is a cooperation of students, PhD candidates and advanced scholars.
The project attempts to display the religious diversity in Erbil, the fast-changing capital of Iraqi Kurdistan. Unlike a census or a survey, which focuses on individuals, RME presents the locations (mosques, churches, synagogues, temples and other venues) together with the history and social profiles of the congregations meeting there.
The data were obtained by visiting the locations, observing their services, interviewing community leaders (mostly imams and priests), evaluating information from the Ministry of Endowment and Religious Affairs, and by consulting websites. All investigations followed the same pattern, consisting of (I) spatiotemporal and (III) social dimensions, framed by (II) religious performance.
Philippine hornbills of the genera Aceros and Penelopides (Bucerotidae) are known to possess a large tandemly duplicated fragment in their mitochondrial genome, whose paralogous parts largely evolve in concert. In the present study, we surveyed the two distinguishable duplicated control regions in several individuals of the Luzon Tarictic Hornbill Penelopides manillae, compared their characteristics within and across individuals, and report on an intraspecific mitochondrial gene rearrangement found in a single specimen, i.e., an interchange between the two control regions. To our knowledge, this is the first observation of two distinct mitochondrial genome rearrangements within a bird species. We briefly discuss a possible evolutionary mechanism responsible for this pattern, and highlight potential implications for the application of control region sequences as a marker in population genetics and phylogeography.
Impact of self-assessment of return to work on employable discharge from multi-component cardiac rehabilitation. Retrospective unicentric analysis of routine data from cardiac rehabilitation in patients below 65 years of age. Presentation in the "Cardiovascular rehabilitation revisited" high impact abstract session during ESC Congress 2018.
An IoT network may consist of hundreds of heterogeneous devices. Some of them may be constrained in terms of memory, power, processing, and network capacity. Manual network and service management of IoT devices is challenging. We propose the use of an ontology for IoT device descriptions, enabling automatic network management as well as service discovery and aggregation. Our IoT architecture approach ensures interoperability using existing standards, i.e., the MQTT protocol and Semantic Web technologies. We introduce virtual IoT devices and their semantic framework deployed at the edge of the network. As a result, virtual devices are enabled to aggregate capabilities of IoT devices, derive new services by inference, delegate requests/responses, and generate events. Furthermore, they can collect and pre-process sensor data. Performing these tasks at the network edge overcomes the shortcomings of cloud usage regarding siloization, network bandwidth, latency, and speed. We validate our proposition by implementing a virtual device on a Raspberry Pi.
One particular challenge in the Internet of Things is the management of many heterogeneous things. The things are typically constrained devices with limited memory, power, network and processing capacity. Configuring every device manually is a tedious task. We propose an interoperable way to configure an IoT network automatically using existing standards. The proposed NETCONF-MQTT bridge mediates between the constrained devices (speaking MQTT) and the network management standard NETCONF. The NETCONF-MQTT bridge dynamically generates YANG data models from the semantic description of the device capabilities based on the oneM2M ontology. We evaluate the approach for two use cases, i.e., an actuator scenario and a sensor scenario.
The electret state stability in nonpolar semicrystalline polymers is largely determined by the traps located at crystalline/amorphous phase interfaces. Thus, the thermal history of such polymers should considerably influence their electret properties. In the present work, we investigate how recrystallization influences charge stability in low-density polyethylene corona electrets. It has been found that electret charge stability in quenched samples is higher than in slowly crystallized ones. Phenomenologically, this can be explained by the increased number of deeper traps in samples with smaller crystallite size.
Precision fruticulture addresses site- or tree-adapted crop management. In the present study, soil and tree status, as well as fruit quality at harvest, were analysed in a commercial apple (Malus × domestica 'Gala Brookfield'/Pajam1) orchard in a temperate climate. Trees were irrigated in addition to precipitation. Three irrigation levels (0, 50 and 100%) were applied. Measurements included readings of the apparent electrical conductivity of the soil (ECa), stem water potential, canopy temperature obtained by infrared camera, and canopy volume estimated by LiDAR and RGB colour imaging. Laboratory analyses of 6 trees per treatment were done on fruit, considering pigment contents and quality parameters. Midday stem water potential (SWP), the normalized crop water stress index (CWSI) calculated from thermal data, and fruit yield and quality at harvest were analysed. Spatial patterns of the variability of tree water status were estimated by CWSI imaging supported by SWP readings. CWSI ranged from 0.1 to 0.7, indicating high variability due to irrigation and precipitation. Canopy volume data were less variable. Soil ECa appeared homogeneous in the range of 0 to 4 mS m-1. Fruit harvested in a drought-stress zone showed an enhanced portion of pheophytin in the chlorophyll pool. Irrigation affected the soluble solids content and, hence, the quality of fruit. Overall, the results highlighted that spatial variation in orchards can be found even if only marginal variability of soil properties can be assumed.
SpringFit
(2019)
Joints are crucial to laser cutting as they allow making three-dimensional objects; mounts are crucial because they allow embedding technical components, such as motors. Unfortunately, mounts and joints tend to fail when trying to fabricate a model on a different laser cutter or from a different material. The reason for this lies in the way mounts and joints hold objects in place, which is by forcing them into slightly smaller openings. Such "press fit" mechanisms unfortunately are susceptible to the small changes in diameter that occur when switching to a machine that removes more or less material ("kerf"), as well as to changes in stiffness, as occur when switching to a different material. We present a software tool called springFit that resolves this problem by replacing the problematic press-fit-based mounts and joints with what we call cantilever-based mounts and joints. A cantilever spring is simply a long thin piece of material that pushes against the object to be held. Unlike press fits, cantilever springs are robust against variations in kerf and material; they can even handle very high variations, simply by using longer springs. springFit converts models in the form of 2D cutting plans by replacing all contained mounts, notch joints, finger joints, and t-joints. In our technical evaluation, we used springFit to convert 14 models downloaded from the web.
From victims to activists
(2022)
The politics of fear
(2022)
Background:
Inflammatory bowel disease (IBD) represents a dysregulation of the mucosal immune system. The pathogenesis of Crohn’s disease (CD) and ulcerative colitis (UC) is linked to the loss of intestinal tolerance and barrier function. The healthy mucosal immune system has previously been shown to be inert against food antigens. Since the small intestine is the main contact surface for antigens and therefore the immunological response, the present study served to analyse food-antigen-specific T cells in the peripheral blood of IBD patients.
Methods:
Peripheral blood mononuclear cells of CD patients with an affected small intestine and of UC (colitis) patients, either active or in remission, were stimulated with the following food antigens: gluten, soybean, peanut and ovalbumin. Healthy controls and celiac disease patients were included as controls. Antigen-activated CD4+ T cells in the peripheral blood were analysed by magnetic enrichment of CD154+ effector T cells and a cytometric antigen-reactive T-cell analysis (‘ARTE’ technology), followed by characterisation of the effector response.
Results:
The effector T-cell response of antigen-specific T cells was compared between CD with small intestinal inflammation and UC, where inflammation was restricted to the colon. Among all tested food antigens, the highest frequency of antigen-specific T cells (CD4+CD154+) was found for gluten. Celiac disease patients were included as a control, since gluten has been identified as the disease-causing antigen. The highest frequency of gluten antigen-specific T cells was revealed in active CD when compared with UC, celiac disease on a gluten-free diet (GFD) and healthy controls. Ovalbumin-specific T cells were almost undetectable, whereas the reaction to soybean and peanut was slightly higher. But again, the strongest reaction was observed in CD with small intestinal involvement compared with UC. Remarkably, in celiac disease on a GFD, only gluten antigen-specific cells were detected. These gluten-specific T cells were characterised by up-regulation of the pro-inflammatory cytokines IFN-γ, IL-17A and TNF-α. IFN-γ was exclusively elevated in CD patients with active disease. Gluten-specific T cells expressing IL-17A were increased in all IBD patients. Furthermore, T cells of CD patients, independent of disease activity, revealed a high expression of the pro-inflammatory cytokine TNF-α.
Conclusion:
The ‘ARTE’ technique allows the analysis and quantification of food-antigen-specific T cells in the peripheral blood of IBD patients, indicating potential therapeutic insight. These data provide evidence that small intestinal inflammation in CD is key for the development of a systemic pro-inflammatory effector T-cell response driven by food antigens.
Comparative text mining extends from genre analysis and political bias detection to the revelation of cultural and geographic differences, through to the search for prior art across patents and scientific papers. These applications use cross-collection topic modeling for the exploration, clustering, and comparison of large sets of documents, such as digital libraries. However, topic modeling on documents from different collections is challenging because of domain-specific vocabulary. We present a cross-collection topic model combined with automatic domain term extraction and phrase segmentation. This model distinguishes collection-specific and collection-independent words based on information entropy and reveals commonalities and differences of multiple text collections. We evaluate our model on patents, scientific papers, newspaper articles, forum posts, and Wikipedia articles. In comparison to state-of-the-art cross-collection topic modeling, our model achieves up to 13% higher topic coherence, up to 4% lower perplexity, and up to 31% higher document classification accuracy. More importantly, our approach is the first topic model that ensures disjoint general and specific word distributions, resulting in clear-cut topic representations.
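The entropy-based split between collection-specific and collection-independent words can be sketched roughly as follows; the threshold and the normalization are illustrative assumptions, not the paper's exact formulation:

```python
import math
from collections import Counter

def word_entropy(counts):
    """Normalized Shannon entropy of a word's frequency across collections.

    Values near 1 mean the word is spread evenly (collection-independent);
    values near 0 mean it is concentrated in one collection (collection-specific).
    """
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    h = -sum(p * math.log(p, 2) for p in probs)
    max_h = math.log(len(counts), 2)  # entropy of the uniform distribution
    return h / max_h if max_h > 0 else 0.0

def split_vocabulary(collections, threshold=0.8):
    """Partition the vocabulary into collection-independent and -specific words."""
    counters = [Counter(doc_words) for doc_words in collections]
    vocab = set().union(*counters)
    independent, specific = set(), set()
    for w in vocab:
        counts = [c[w] for c in counters]
        (independent if word_entropy(counts) >= threshold else specific).add(w)
    return independent, specific
```

For example, a word occurring equally often in a patent collection and a paper collection lands in the collection-independent set, while a word seen in only one collection is marked collection-specific.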
Leveraging spatio-temporal soccer data to define a graphical query language for game recordings
(2019)
For professional soccer clubs, performance and video analysis are an integral part of the preparation and post-processing of games. Coaches, scouts, and video analysts extract information about strengths and weaknesses of their team as well as opponents by manually analyzing video recordings of past games. Since video recordings are an unstructured data source, it is a complex and time-intensive task to find specific game situations and identify similar patterns. In this paper, we present a novel approach to detect patterns and situations (e.g., playmaking and ball passing of midfielders) based on trajectory data. The application uses the metaphor of a tactic board to offer a graphical query language. With this interactive tactic board, the user can model a game situation or mark a specific situation in the video recording for which all matching occurrences in various games are immediately displayed, and the user can directly jump to the corresponding game scene. Through the additional visualization of key performance indicators (e.g., the physical load of the players), the user can get a better overall assessment of situations. With the capabilities to find specific game situations and complex patterns in video recordings, the interactive tactic board serves as a useful tool to improve the video analysis process of professional sports teams.
Rapid advances in location-acquisition technologies have led to large amounts of trajectory data. This data is the foundation for a broad spectrum of services driven and improved by trajectory data mining. However, for hybrid transactional and analytical workloads, the storing and processing of rapidly accumulated trajectory data is a non-trivial task. In this paper, we present a detailed survey about state-of-the-art trajectory data management systems. To determine the relevant aspects and requirements for such systems, we developed a trajectory data mining framework, which summarizes the different steps in the trajectory data mining process. Based on the derived requirements, we analyze different concepts to store, compress, index, and process spatio-temporal data. There are various trajectory management systems, which are optimized for scalability, data footprint reduction, elasticity, or query performance. To get a comprehensive overview, we describe and compare different existing systems. Additionally, the observed similarities in the general structure of different systems are consolidated in a general blueprint of trajectory management systems.
This study examined the relationships between the three phenotypic domains of the triarchic model of psychopathy —boldness, meanness, disinhibition— and electrophysiological indices of inhibitory control (NoGo-N2/NoGo-P3). EEG data from a 256-channel dense array were recorded while participants (135 undergraduates assessed via the Triarchic Psychopathy Measure) performed a Go/NoGo task with three types of stimuli (60% frequent-Go, 20% infrequent-Go, 20% infrequent-NoGo). N2 was defined as the mean amplitude between 240 ms and 340 ms after stimulus onset over fronto-central sensors on correct trials; P300 was defined as the mean amplitude between 350 ms and 550 ms after stimulus onset over centro-parietal sensors on correct trials. Multiple regression analyses using gender-corrected triarchic scores as predictors revealed that only Disinhibition scores significantly predicted reduced NoGo-N2 amplitudes (3.5% explained variance, beta weight = .23, p < .05) and reduced P3 amplitudes for NoGo and infrequent-Go trials (3.1 and 3.2% explained variance, respectively, beta weights = -.21, ps < .05). Our results indicate that high disinhibition entails deviations in early conflict monitoring processes (reduced NoGo-N2), as well as in later evaluative and updating processing stages of infrequent events (reduced NoGo-P3 and infrequent-Go-P3). The null contribution of the meanness and boldness domains in these results suggests that N2 and P3 amplitudes in Go/NoGo tasks could be considered as neurobiological indices of the externalizing tendencies comprised in this personality disorder.
Point clouds provide high-resolution topographic data which is often classified into bare-earth, vegetation, and building points and then filtered and aggregated to gridded Digital Elevation Models (DEMs) or Digital Terrain Models (DTMs). Based on these equally spaced grids, flow-accumulation algorithms are applied to describe the hydrologic and geomorphologic mass transport on the surface. In this contribution, we propose a stochastic point-cloud filtering that, together with a spatial bootstrap sampling, allows for a flow accumulation directly on point clouds using Facet-Flow Networks (FFN). Additionally, this provides a framework for the quantification of uncertainties in point-cloud derived metrics such as Specific Catchment Area (SCA) even though the flow accumulation itself is deterministic.
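The bootstrap idea behind the uncertainty quantification can be illustrated with a generic resampling sketch; the function below is a hypothetical, simplified stand-in for the FFN-based workflow, applicable to any deterministic metric computed on a point set:

```python
import random
import statistics

def bootstrap_metric(points, metric, n_boot=200, seed=0):
    """Bootstrap sketch: resample the point cloud with replacement,
    recompute the (deterministic) metric on each replicate, and report
    the mean and spread of the replicates as an uncertainty estimate."""
    rng = random.Random(seed)
    replicates = []
    for _ in range(n_boot):
        sample = [rng.choice(points) for _ in points]
        replicates.append(metric(sample))
    return statistics.mean(replicates), statistics.stdev(replicates)
```

Even though each single evaluation of the metric is deterministic, the variation across resamples yields a confidence band for the derived quantity.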
Beacon in the Dark
(2018)
The large amount of heterogeneous data in these email corpora renders experts' investigations by hand infeasible. Auditors and journalists, for example, who are looking for irregular or inappropriate content or suspicious patterns, are in desperate need of computer-aided exploration tools to support their investigations.
We present our Beacon system for the exploration of such corpora at different levels of detail. A distributed processing pipeline combines text mining methods and social network analysis to augment the already semi-structured nature of emails. The user interface ties into the resulting cleaned and enriched dataset. For the interface design we identify three objectives expert users have: gain an initial overview of the data to identify leads to investigate, understand the context of the information at hand, and have meaningful filters to iteratively focus onto a subset of emails. To this end we make use of interactive visualisations based on rearranged and aggregated extracted information to reveal salient patterns.
Web-based E-Learning uses Internet technologies and digital media to deliver educational content to learners. In recent years, many universities have applied their capacities to producing Massive Open Online Courses (MOOCs), offering them with the expectation of rendering a comprehensive online apprenticeship. Typically, an online content delivery process requires an Internet connection. However, broadband access has never been a readily available resource in many regions. In Africa, poor or non-existent networks are still the predominant experience of Internet users, who go offline each time a digital device disconnects from the network. As a result, learning processes in such regions are frequently disrupted, delayed and terminated. This paper raises the concern of E-Learning over poor and low bandwidths and highlights the need for an Offline-Enabled mode. The paper also explores technical approaches aimed at enhancing the user experience in Web-based E-Learning, particularly in Africa.
The "Bachelor Project"
(2019)
One of the challenges of educating the next generation of computer scientists is to teach them to become team players who are able to communicate and interact not only with different IT systems, but also with coworkers and customers with a non-IT background. The “bachelor project” is a project based on teamwork and close collaboration with selected industry partners. The authors have hosted some of the teams since the spring term of 2014/15. In the paper at hand we explain and discuss this concept and evaluate its success based on students' evaluations and reports. Furthermore, the technology stack that has been used by the teams is evaluated to understand how self-organized students work in IT-related projects. We show that, and why, the bachelor project is the most successful educational format in the perception of the students, and how these positive results can be further improved by the mentors.
Mobile expressive rendering has gained increasing popularity among users seeking casual creativity through image stylization, and supports the development of mobile artists as a new user group. In particular, neural style transfer has advanced as a core technology to emulate characteristics of manifold artistic styles. However, when it comes to creative expression, the technology still faces inherent limitations in providing low-level controls for localized image stylization. This work enhances state-of-the-art neural style transfer techniques by a generalized user interface with interactive tools to facilitate a creative and localized editing process. Thereby, we first propose a problem characterization representing trade-offs between visual quality, run-time performance, and user control. We then present MaeSTrO, a mobile app for orchestration of neural style transfer techniques using iterative, multi-style generative and adaptive neural networks that can be locally controlled by on-screen painting metaphors. First user tests indicate different levels of satisfaction for the implemented techniques and interaction design.
DPP4 inhibition prevents AKI
(2017)
Logical modeling has been widely used to understand and expand the knowledge about protein interactions among different pathways. Realizing this, the caspo-ts system has been proposed recently to learn logical models from time series data. It uses Answer Set Programming to enumerate Boolean Networks (BNs) given prior knowledge networks and phosphoproteomic time series data. In the resulting sequence of solutions, similar BNs are typically clustered together. This can be problematic for large scale problems where we cannot explore the whole solution space in reasonable time. Our approach extends the caspo-ts system to cope with the important use case of finding diverse solutions of a problem with a large number of solutions. We first present the algorithm for finding diverse solutions and then we demonstrate the results of the proposed approach on two different benchmark scenarios in systems biology: (1) an artificial dataset to model TCR signaling and (2) the HPN-DREAM challenge dataset to model breast cancer cell lines.
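One common way to obtain diversity from a large, clustered solution space is greedy max-min selection over a solution distance. The sketch below is a hypothetical illustration of that general idea (Boolean networks encoded as bit strings), not the ASP-based encoding used by caspo-ts:

```python
def hamming(a, b):
    """Number of positions in which two equal-length bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

def select_diverse(solutions, k):
    """Greedy diverse-subset selection: start from the first solution,
    then repeatedly add the candidate that maximizes its minimum
    Hamming distance to the solutions already selected."""
    selected = [solutions[0]]
    while len(selected) < k and len(selected) < len(solutions):
        best = max((s for s in solutions if s not in selected),
                   key=lambda s: min(hamming(s, t) for t in selected))
        selected.append(best)
    return selected
```

Applied to an enumeration in which near-identical networks appear consecutively, this kind of selection spreads the returned subset across the solution space instead of sampling one cluster.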
Tikhonov regularization with oversmoothing penalty for linear statistical inverse learning problems
(2019)
In this paper, we consider the linear ill-posed inverse problem with noisy data in the statistical learning setting. The Tikhonov regularization scheme in Hilbert scales is considered in the reproducing kernel Hilbert space framework to reconstruct the estimator from the random noisy data. We discuss the rates of convergence for the regularized solution under the prior assumptions and link condition. For regression functions with smoothness given in terms of source conditions, the error bound can be established explicitly.
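A minimal statement of the scheme in question reads as follows; the notation is generic ($A$ for the forward operator, $B$ for the operator generating the Hilbert scale) and is an assumption, not fixed by the abstract:

```latex
% Tikhonov regularization in Hilbert scales for noisy data y = A f + \xi:
f_\lambda = \operatorname*{arg\,min}_{f \in \mathcal{H}}
            \; \lVert A f - y \rVert^2 + \lambda \, \lVert B^{s} f \rVert^2
% The penalty is "oversmoothing" when the true regression function lies
% outside the domain of B^{s}; convergence rates then follow from the
% source condition together with the link condition relating A and B.
```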
When local poverty is more important than your income: Mental health in minorities in inner cities
(2015)
The influence of chemical composition and crystallisation conditions on the ferroelectric and paraelectric phases and the resulting morphology in Poly(vinylidene fluoride-trifluoroethylene-chlorofluoroethylene) (P(VDF-TrFE-CFE)) terpolymer films with 55.4/37.2/7.3 mol% or with 62.2/29.4/8.4 mol% of VDF/TrFE/CFE was studied. Poly(vinylidene fluoride trifluoroethylene) (P(VDF-TrFE)) with 75/25 mol% VDF/TrFE was employed as reference material. Fourier-Transform Infrared Spectroscopy (FTIR) was used to determine the fractions of the relevant terpolymer phases, and X-Ray Diffraction (XRD) was employed to assess the crystalline morphology. The FTIR results show an increase of the fraction of paraelectric phases after annealing. On the other hand, XRD results indicate a more stable paraelectric phase in the terpolymer with higher CFE content.
Cardiovascular drift response over two different constant-load exercises in healthy non-athletes
(2019)
Cardiovascular drift (CV-d) is a steady increase in heart rate (HR) over time while performing constant load moderate intensity exercise (CME) > 20 min. CV-d presents problems for the prescription of exercise intensity by means of HR, because the work rate (WR) during exercise must be adjusted to maintain target HR, thus disturbing the intended effect of the exercise intervention. It has been shown that the increase in HR during CME is due to changes in WR and not to CV-d.
Business process simulation is an important means for the quantitative analysis of a business process and for comparing different process alternatives. With the Business Process Model and Notation (BPMN) being the state-of-the-art language for the graphical representation of business processes, many existing process simulators already support the simulation of BPMN diagrams. However, they do not provide well-defined interfaces to integrate new concepts into the simulation environment. In this work, we present the design and architecture of a proof-of-concept implementation of an open and extensible BPMN process simulator. It also supports the simulation of multiple BPMN processes at a time and relies on the building blocks of the well-founded discrete event simulation. The extensibility is assured by a plug-in concept. Its feasibility is demonstrated by extensions supporting new BPMN concepts, such as the simulation of business rule activities referencing decision models and batch activities.
The target article discusses the question of how educational makerspaces can become places supportive of knowledge construction. This question is too often neglected by people who run makerspaces, as they mostly explain how to use different tools and focus on the creation of a product. In makerspaces, pupils often also engage in physical computing activities and thus in the creation of interactive artifacts containing embedded systems, such as smart shoes or wristbands, plant monitoring systems or drink mixing machines. This offers the opportunity to reflect on teaching physical computing in computer science education, where, similarly, the creation of the product is often focused upon so strongly that reflection on the learning process is pushed into the background.
Aspirin inhibits release of platelet-derived sphingosine-1-phosphate in
acute myocardial infarction
(2013)
Minimising Information Loss on Anonymised High Dimensional Data with Greedy In-Memory Processing
(2018)
Minimising information loss on anonymised high dimensional data is important for data utility. Syntactic data anonymisation algorithms address this issue by generating datasets that are neither use-case specific nor dependent on runtime specifications. This results in anonymised datasets that can be re-used in different scenarios, which is performance-efficient. However, syntactic data anonymisation algorithms incur high information loss on high dimensional data, making the data unusable for analytics. In this paper, we propose an optimised exact quasi-identifier identification scheme, based on the notion of k-anonymity, to generate anonymised high dimensional datasets efficiently, and with low information loss. The optimised exact quasi-identifier identification scheme works by identifying and eliminating maximal partial unique column combination (mpUCC) attributes that endanger anonymity. By using in-memory processing to handle the attribute selection procedure, we significantly reduce the processing time required. We evaluated the effectiveness of our proposed approach with an enriched dataset drawn from multiple real-world data sources, and augmented with synthetic values generated in close alignment with the real-world data distributions. Our results indicate that in-memory processing drops attribute selection time for the mpUCC candidates from 400s to 100s, while significantly reducing information loss. In addition, we achieve a time complexity speed-up of O(3^(n/3)) ≈ O(1.4422^n).
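The notion of a unique column combination underlying quasi-identifier detection can be illustrated with a brute-force sketch. This is not the paper's mpUCC algorithm (which additionally handles partial uniqueness and prunes the search with in-memory processing); it only demonstrates the concept being optimised:

```python
from itertools import combinations

def is_unique(rows, cols):
    """A column combination is unique if no two rows share the same projection."""
    seen = set()
    for row in rows:
        key = tuple(row[c] for c in cols)
        if key in seen:
            return False
        seen.add(key)
    return True

def minimal_uccs(rows, n_cols):
    """Enumerate minimal unique column combinations (candidate quasi-identifiers).

    Exponential brute force for illustration only: checks combinations by
    increasing size and skips supersets of combinations already found unique."""
    found = []
    for size in range(1, n_cols + 1):
        for cols in combinations(range(n_cols), size):
            if any(set(f) <= set(cols) for f in found):
                continue  # a unique subset exists, so this one is not minimal
            if is_unique(rows, cols):
                found.append(cols)
    return found
```

Any row-identifying combination found this way is a re-identification risk, which is why such attributes are the targets for elimination during anonymisation.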
High-dimensional data is particularly useful for data analytics research. In the healthcare domain, for instance, high-dimensional data analytics has been used successfully for drug discovery. Yet, in order to adhere to privacy legislation, data analytics service providers must guarantee anonymity for data owners. In the context of high-dimensional data, ensuring privacy is challenging because increased data dimensionality must be matched by an exponential growth in the size of the data to avoid sparse datasets. Syntactically anonymising sparse datasets with methods that rely on statistical significance makes obtaining sound and reliable results a challenge. As such, strong privacy is only achievable at the cost of high information loss, rendering the data unusable for data analytics. In this paper, we make two contributions to addressing this problem from both the privacy and information loss perspectives. First, we show that by identifying dependencies between attribute subsets we can eliminate privacy-violating attributes from the anonymised dataset. Second, to minimise information loss, we employ a greedy search algorithm to determine and eliminate maximal partial unique attribute combinations. Thus, one only needs to find the minimal set of identifying attributes to prevent re-identification. Experiments on a health cloud based on the SAP HANA platform using a semi-synthetic medical history dataset comprised of 109 attributes demonstrate the effectiveness of our approach.
Cost models play an important role for the efficient implementation of software systems. These models can be embedded in operating systems and execution environments to optimize execution at run time. Even though non-uniform memory access (NUMA) architectures are dominating today's server landscape, there is still a lack of parallel cost models that represent NUMA systems sufficiently. Therefore, the existing NUMA models are analyzed, and a two-step performance assessment strategy is proposed that incorporates low-level hardware counters as performance indicators. To support the two-step strategy, multiple tools are developed, all accumulating and enriching specific hardware event counter information, to explore, measure, and visualize these low-overhead performance indicators. The tools are showcased and discussed alongside specific experiments in the realm of performance assessment.
The overhead of moving data is the major limiting factor in today's hardware, especially in heterogeneous systems where data needs to be transferred frequently between host and accelerator memory. With the increasing availability of hardware-based compression facilities in modern computer architectures, this paper investigates the potential of hardware-accelerated I/O Link Compression as a promising approach to reduce data volumes and transfer time, thus improving the overall efficiency of accelerators in heterogeneous systems. Our considerations are focused on On-the-Fly compression in both Single-Node and Scale-Out deployments. Based on a theoretical analysis, this paper demonstrates the feasibility of hardware-accelerated On-the-Fly I/O Link Compression for many workloads in a Scale-Out scenario, and for some even in a Single-Node scenario. These findings are confirmed in a preliminary evaluation using software- and hardware-based implementations of the 842 compression algorithm.
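The break-even reasoning behind On-the-Fly link compression can be captured in a toy pipelined model. This is an illustrative assumption, not the paper's exact analysis, and all numbers in the test are made up:

```python
def transfer_time(volume_bytes, link_bw, comp_ratio=1.0, comp_bw=float("inf")):
    """Time to move data over a link, optionally through an on-the-fly compressor.

    Pipelined model: the stage with the lowest effective throughput dominates.
    Compressing at ratio r makes the link effectively r times faster, so
    compression pays off whenever the compressor's throughput exceeds the
    raw link bandwidth (and hurts when the compressor is the bottleneck).
    """
    effective_link_bw = link_bw * comp_ratio  # fewer bytes cross the wire
    bottleneck_bw = min(comp_bw, effective_link_bw)
    return volume_bytes / bottleneck_bw
```

For instance, with a 12.5 GB/s link, a compression ratio of 2, and a 20 GB/s compressor, the compressor becomes the bottleneck, yet the transfer still finishes earlier than the uncompressed one; drop the compressor throughput below the link bandwidth and compression slows the transfer down instead.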
In the High Middle Ages, narratives emerge that combine established literary forms and traditions in new ways: they are vernacular, allegorical, and use first-person narration, taking up the most diverse themes in a combination that solidifies into a narrative format transcending the boundaries of individual languages. This format, first realized in the Old French Roman de la Rose, would shape European literature well into the modern era, with texts such as Dante's Divina Commedia, Guillaume de Deguileville's Pèlerinage de la Vie Humaine, William Langland's Piers Plowman and Christine de Pizan's Le Livre de la mutation de Fortune. The contribution introducing the volume examines whether this narrative format is used universally or exhibits specific particularities, e.g. within the context of love poetry.
This is a correction notice for ‘Post-adiabatic supernova remnants in an interstellar magnetic field: oblique shocks and non-uniform environment’ (DOI: https://doi.org/10.1093/mnras/sty1750), which was published in MNRAS 479, 4253–4270 (2018). The publisher regrets to inform that the colour was missing from the colour scales in Figs 8(a)–(d) and Figs 9(a) and (b). This has now been corrected online. The publisher apologizes for this error.