00 Computer Science, Knowledge, Systems
Hyperbolic random graphs (HRGs) and geometric inhomogeneous random graphs (GIRGs) are two similar generative network models that were designed to resemble complex real-world networks.
In particular, they have a power-law degree distribution with controllable exponent β and high clustering that can be controlled via the temperature T.
We present the first implementation of an efficient GIRG generator running in expected linear time.
Besides varying temperatures, it also supports underlying geometries of higher dimensions. It is capable of generating graphs with ten million edges in under a second on commodity hardware. The algorithm can be adapted to HRGs.
Our resulting implementation is the fastest sequential HRG generator, even though it supports non-zero temperatures; non-zero temperatures are crucial for many applications, yet most existing generators are restricted to T = 0.
We also support parallelization, although this is not the focus of this paper.
Moreover, we note that our generators draw from the correct probability distribution, that is, they involve no approximation.
Besides the generators themselves, we also provide an efficient algorithm to determine the non-trivial dependency between the average degree of the resulting graph and the input parameters of the GIRG model.
This makes it possible to specify the desired expected average degree as input. Moreover, we investigate the differences between HRGs and GIRGs, shedding new light on the nature of the relation between the two models. Although HRGs represent, in a certain sense, a special case of the GIRG model, we find that a straightforward inclusion does not hold in practice.
However, the difference is negligible for most use cases.
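To make the model concrete, here is a naive quadratic-time reference sampler for a one-dimensional GIRG with power-law weights, uniform positions on a torus, and the usual temperature-dependent connection probability. This is only an illustrative sketch with assumed constants and placeholder parameters, not the expected-linear-time generator described above.

```python
# Naive O(n^2) reference sampler for a 1-dimensional GIRG: weights w_i from
# a power law with exponent beta, positions x_i uniform on the unit torus,
# and edge probability min(1, (w_u*w_v / (W*dist))**(1/T)) for T > 0
# (constants omitted). Illustrative only; not the paper's linear-time code.
import numpy as np

def sample_girg(n, beta=2.5, T=0.5, w_min=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # Power-law weights via inverse transform sampling.
    w = w_min * (1.0 - rng.random(n)) ** (-1.0 / (beta - 1.0))
    x = rng.random(n)                     # positions on the unit torus
    W = w.sum()
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            dist = min(abs(x[u] - x[v]), 1.0 - abs(x[u] - x[v]))  # torus metric
            p = min(1.0, (w[u] * w[v] / (W * dist)) ** (1.0 / T))
            if rng.random() < p:
                edges.append((u, v))
    return edges

print(len(sample_girg(1000)))  # rough check of the resulting edge count
```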
Background
The aggregation of a series of N-of-1 trials presents an innovative and efficient study design and an alternative to traditional randomized clinical trials. Challenges for the statistical analysis arise when there are carry-over effects or complex dependencies of the treatment effect of interest.
Methods
In this study, we evaluate and compare methods for the analysis of aggregated N-of-1 trials in different scenarios with carry-over and complex dependencies of treatment effects on covariates. To this end, we simulate data for a series of N-of-1 trials for chronic nonspecific low back pain based on assumed causal relationships parameterized by directed acyclic graphs. In addition to existing statistical methods such as regression models, Bayesian networks, and G-estimation, we introduce a carry-over adjusted parametric model (COAPM).
Results
The results show that all evaluated existing models perform well when there is no carry-over and no treatment dependence. When there is carry-over, COAPM yields unbiased and more efficient estimates, while all other methods show some bias in the estimation. When there is known treatment dependence, all approaches that are capable of modeling it yield unbiased estimates. Finally, the efficiency of all methods decreases slightly when there are missing values, and the bias in the estimates can also increase.
Conclusions
This study presents a systematic evaluation of existing and novel approaches for the statistical analysis of a series of N-of-1 trials. We derive practical recommendations on which methods may be best suited to which scenarios.
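The following minimal simulation (with made-up effect sizes, and a lagged-treatment regression standing in for, but not identical to, the proposed COAPM) shows why carry-over biases a naive analysis of a blocked N-of-1 design and how an explicit carry-over term adjusts for it.

```python
# Toy N-of-1 trial with an ABAB block design and a first-order carry-over
# effect. All effect sizes are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
block, n_cycles = 7, 20                        # ABAB... design, 7-day periods
treat = np.tile(np.repeat([1.0, 0.0], block), n_cycles)
n_days = treat.size
lagged = np.concatenate(([0.0], treat[:-1]))   # yesterday's treatment
effect, carry = -2.0, -1.0                     # assumed direct and carry-over effects
pain = 5.0 + effect * treat + carry * lagged + rng.normal(0.0, 1.0, n_days)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n_days)
naive = ols(np.column_stack([ones, treat]), pain)
adjusted = ols(np.column_stack([ones, treat, lagged]), pain)
print("naive estimate:   ", round(naive[1], 2))     # biased by the carry-over
print("adjusted estimate:", round(adjusted[1], 2))  # close to the true -2.0
```

Because treatment periods come in blocks, today's and yesterday's treatment are strongly correlated, so omitting the lag biases the naive estimate; in a fully randomized daily design the bias would vanish.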
transferGWAS
(2022)
Motivation:
Medical images can provide rich information about diseases and their biology. However, investigating their association with genetic variation requires non-standard methods. We propose transferGWAS, a novel approach to perform genome-wide association studies directly on full medical images. First, we learn semantically meaningful representations of the images based on a transfer learning task, during which a deep neural network is trained on independent but similar data. Then, we perform genetic association tests with these representations.
Results:
We validate the type I error rates and power of transferGWAS in simulation studies of synthetic images. Then we apply transferGWAS in a genome-wide association study of retinal fundus images from the UK Biobank. This first-of-a-kind GWAS of full imaging data yielded 60 genomic regions associated with retinal fundus images, of which 7 are novel candidate loci for eye-related traits and diseases.
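As a sketch of the two-stage idea (not the authors' released code), the snippet below embeds a placeholder image batch with an off-the-shelf pretrained ResNet and runs a simple per-dimension linear association test against one simulated variant; the model choice, the random data, and the dimension-wise t-tests are all stand-in assumptions for the paper's actual procedure.

```python
# Stage 1: image representations from a network pretrained on unrelated
# data. Stage 2: per-variant association tests on those representations.
import numpy as np
import torch
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1")
model.fc = torch.nn.Identity()        # drop the classifier, keep 512-d features
model.eval()

images = torch.rand(32, 3, 224, 224)  # placeholder batch of fundus images
with torch.no_grad():
    emb = model(images).numpy()       # (32, 512) image representations

genotypes = np.random.default_rng(0).integers(0, 3, size=32)  # one SNP, coded 0/1/2

# Simple linear association of the SNP with each embedding dimension.
g = (genotypes - genotypes.mean()) / genotypes.std()
e = (emb - emb.mean(0)) / (emb.std(0) + 1e-8)
r = (g[:, None] * e).mean(0)                   # per-dimension correlation
t = r * np.sqrt((len(g) - 2) / (1 - r**2))     # t-statistics
print("max |t| across dimensions:", np.abs(t).max())
```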
Instruments for measuring the absorbed dose and dose rate under radiation exposure, known as radiation dosimeters, are indispensable in space missions. They are composed of radiation sensors that generate a current or voltage response when exposed to ionizing radiation, and of processing electronics for computing the absorbed dose and dose rate. Among the wide range of existing radiation sensors, Radiation Sensitive Field Effect Transistors (RADFETs) have unique advantages for absorbed dose measurement and a proven record of successful use in space missions. It has been shown that RADFETs may also be used for dose rate monitoring. In that regard, we propose a unique design concept that supports the simultaneous operation of a single RADFET as both an absorbed dose and a dose rate monitor. This reduces the implementation cost, since the need for other types of radiation sensors can be minimized or eliminated. For processing the RADFET's response, we propose a readout system composed of an analog signal conditioner (ASC) and a self-adaptive multiprocessing system-on-chip (MPSoC). The soft error rate of the MPSoC is monitored in real time with embedded sensors, allowing autonomous switching between three operating modes (high-performance, de-stress, and fault-tolerant) according to the application requirements and radiation conditions.
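The mode-selection logic can be sketched as a small state machine driven by the measured soft error rate; the thresholds and the hysteresis used below are hypothetical, since the abstract does not specify them.

```python
# Illustrative self-adaptive mode selection: pick one of the three operating
# modes from the soft error rate reported by the embedded sensors.
HIGH_PERFORMANCE, DE_STRESS, FAULT_TOLERANT = "high-performance", "de-stress", "fault-tolerant"

def select_mode(soft_error_rate, current_mode,
                low=1e-6, high=1e-4, hysteresis=0.5):
    """Pick an operating mode from the measured soft error rate (errors/s)."""
    # Scale thresholds down while in a stricter mode to avoid oscillation.
    lo = low * (hysteresis if current_mode != HIGH_PERFORMANCE else 1.0)
    hi = high * (hysteresis if current_mode == FAULT_TOLERANT else 1.0)
    if soft_error_rate >= hi:
        return FAULT_TOLERANT        # e.g. enable redundancy, lower the clock
    if soft_error_rate >= lo:
        return DE_STRESS             # reduce voltage/frequency stress
    return HIGH_PERFORMANCE

mode = HIGH_PERFORMANCE
for rate in [1e-7, 5e-6, 2e-4, 8e-5, 1e-7]:
    mode = select_mode(rate, mode)
    print(f"rate={rate:.0e} -> {mode}")
```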
Programs are often subjected to significant optimizing and parallelizing transformations based on extensive dependence analysis. Formal validation of such transformations needs modelling paradigms that can vividly capture both the control and the data dependences in a program. Being value-based and inherently capable of capturing parallelism, the untimed coloured Petri net (CPN) models reported in the literature fit the bill well; accordingly, they are likely to be more convenient as intermediate representations (IRs) of both the source and the transformed codes for translation validation than strictly sequential variable-based IRs like sequential control flow graphs (CFGs). In this work, an efficient path-based equivalence checking method for CPN models of programs on integers is presented. Extensive experimentation has been carried out on several sequential and parallel examples. Complexity and correctness issues are treated rigorously for the method.
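The abstract gives no implementation detail, but the value-based comparison at the heart of path-based equivalence checking can be sketched as follows: symbolically execute corresponding straight-line paths of the original and the transformed program and compare the values they compute. This toy uses plain symbolic expressions rather than the paper's CPN models.

```python
# Symbolically execute two corresponding paths and compare the value each
# computes for the output variable; only the comparison step is shown.
import sympy as sp

x, y = sp.symbols("x y", integer=True)

def run_path(stmts, state):
    """Interpret a straight-line path of (variable, expression) assignments."""
    for var, expr in stmts:
        state = {**state, var: expr.subs(state)}
    return state

# Original path: t = x + x; z = t * y
original = [("t", x + x), ("z", sp.Symbol("t") * y)]
# Transformed path after an optimizing rewrite: z = 2 * x * y
transformed = [("z", 2 * x * y)]

s1 = run_path(original, {"x": x, "y": y})
s2 = run_path(transformed, {"x": x, "y": y})
print("paths equivalent on z:", sp.simplify(s1["z"] - s2["z"]) == 0)
```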
CrashNet
(2021)
Destructive car crash tests are an elaborate, time-consuming, and expensive necessity of the automotive development process. Today, finite element method (FEM) simulations are used to reduce costs by simulating car crashes computationally. We propose CrashNet, an encoder-decoder deep neural network architecture that reduces costs further and models specific outcomes of car crashes very accurately. We achieve this by formulating car crash events as time series prediction enriched with a set of scalar features. Traditional sequence-to-sequence models are usually composed of convolutional neural network (CNN) and CNN transpose layers. We propose to concatenate those with an MLP capable of learning how to inject the given scalars into the output time series. In addition, we replace the CNN transpose with 2D CNN transpose layers in order to force the model to process the hidden state of the set of scalars as one time series. The proposed CrashNet model can be trained efficiently and is able to process scalars and time series as input in order to infer the results of crash tests. CrashNet produces results faster and at a lower cost compared to destructive tests and FEM simulations. Moreover, it represents a novel approach in the car safety management domain.
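A structural sketch of the described architecture is given below; layer counts, kernel sizes, and signal lengths are invented for illustration, and the real CrashNet is certainly configured differently. It shows the three described ingredients: a 1D CNN encoder for the input series, an MLP that injects the scalar crash parameters into the hidden state, and 2D transposed convolutions that decode the combined hidden state as one sequence.

```python
import torch
import torch.nn as nn

class CrashNetSketch(nn.Module):
    def __init__(self, in_channels=4, n_scalars=8, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(                       # (B,4,64) -> (B,32,16)
            nn.Conv1d(in_channels, hidden, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(hidden, hidden, 5, stride=2, padding=2), nn.ReLU())
        self.scalar_mlp = nn.Sequential(                    # (B,8) -> (B,32)
            nn.Linear(n_scalars, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.decoder = nn.Sequential(   # treat the (channel, time) plane as 2D
            nn.ConvTranspose2d(1, 4, (3, 4), stride=(1, 2), padding=(1, 1)), nn.ReLU(),
            nn.ConvTranspose2d(4, 1, (3, 4), stride=(1, 2), padding=(1, 1)))
        self.collapse = nn.Conv2d(1, 1, kernel_size=(2 * hidden, 1))

    def forward(self, series, scalars):
        h = self.encoder(series)                              # (B,32,16)
        s = self.scalar_mlp(scalars).unsqueeze(-1)            # (B,32,1)
        h = torch.cat([h, s.expand(-1, -1, h.shape[-1])], 1)  # (B,64,16)
        h = self.decoder(h.unsqueeze(1))                      # (B,1,64,64)
        return self.collapse(h).squeeze(1).squeeze(1)         # (B,64)

net = CrashNetSketch()
out = net(torch.rand(2, 4, 64), torch.rand(2, 8))
print(out.shape)  # torch.Size([2, 64]) - one predicted output series per sample
```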
Industry 4.0 is transforming how businesses innovate and, as a result, companies are spearheading the movement towards 'Digital Transformation'. While some scholars advocate the use of design thinking to identify new innovative behaviours, cognition experts emphasise the importance of top managers in supporting employees to develop these behaviours. However, there is a dearth of research in this domain, and companies are struggling to implement the required behaviours. To address this gap, this study aims to identify and prioritise behavioural strategies conducive to design thinking in order to inform the creation of a managerial mental model. We identify 20 behavioural strategies from 45 interviews with practitioners and educators and combine them with the concepts of 'paradigm-mindset-mental model' from cognition theory. The paper contributes to the body of knowledge by identifying and prioritising specific behavioural strategies that form a novel set of survival conditions aligned to the new industrial paradigm of Industry 4.0.
Answer Set Programming (ASP) is a prominent knowledge representation language with roots in logic programming and non-monotonic reasoning. Biennial ASP competitions are organized in order to furnish challenging benchmark collections and assess the advancement of the state of the art in ASP solving. In this paper, we report on the design and results of the Seventh ASP Competition, jointly organized by the University of Calabria (Italy), the University of Genova (Italy), and the University of Potsdam (Germany), in affiliation with the 14th International Conference on Logic Programming and Non-Monotonic Reasoning (LPNMR 2017).
We study the derivational complexity of context-free and context-sensitive grammars by counting the maximal number of non-regular and non-context-free rules used in a derivation, respectively. The degree of non-regularity/non-context-freeness of a language is the minimum degree of non-regularity/non-context-freeness of the context-free/context-sensitive grammars generating it. A language has a finite degree of non-regularity iff it is regular. We give a condition for deciding whether the degree of non-regularity of a given unambiguous context-free grammar is finite. The problem becomes undecidable for arbitrary linear context-free grammars. The degree of non-regularity of unambiguous context-free grammars generating non-regular languages, as well as that of grammars generating deterministic context-free languages that are not regular, is of order Ω(n). Context-free non-regular languages of sublinear degree of non-regularity are presented. A language has a finite degree of non-context-freeness if it is context-free. Context-sensitive grammars with a quadratic degree of non-context-freeness are more powerful than those of a linear degree.
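A concrete illustration of the linear lower bound (our own example, not taken from the abstract): in the standard unambiguous grammar for { a^n b^n : n ≥ 1 }, the only non-regular rule must fire Θ(n) times.

```latex
% Our illustrative example, not from the paper: the unambiguous grammar
%   S -> a S b | a b
% generates the non-regular language L = { a^n b^n : n >= 1 }. The rule
% S -> ab is right-linear, so S -> aSb is the only non-regular rule, and
% deriving a^n b^n applies it exactly n - 1 times:
\[
  S \Rightarrow aSb \Rightarrow a^{2}Sb^{2} \Rightarrow \cdots
    \Rightarrow a^{n-1}Sb^{n-1} \Rightarrow a^{n}b^{n},
\]
% so the degree of non-regularity of this grammar is in \Omega(n),
% matching the linear bound stated above.
```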
IMDfence
(2020)
Over the past decade, focus on the security and privacy aspects of implantable medical devices (IMDs) has intensified, driven by the multitude of cybersecurity vulnerabilities found in various existing devices. However, due to their strict computational, energy, and physical constraints, conventional security protocols are not directly applicable to IMDs. Custom-tailored schemes have been proposed instead which, however, fail to cover the full spectrum of security features that modern IMDs and their ecosystems so critically require. In this paper we propose IMDfence, a security protocol for IMD ecosystems that provides a comprehensive yet practical security portfolio, including availability, non-repudiation, access control, entity authentication, remote monitoring, and system scalability. The protocol also allows emergency access that results in a graceful degradation of offered services without compromising security and patient safety. The performance of the security protocol as well as its feasibility and impact on modern IMDs are extensively analyzed and evaluated. We find that IMDfence achieves the above security requirements at an increase of less than 7% in total IMD energy consumption, and increases of less than 14 ms and 9 kB in system delay and memory footprint, respectively.
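The abstract does not disclose IMDfence's message formats, so the snippet below only illustrates one generic ingredient, entity authentication via a symmetric challenge-response; it is explicitly not the IMDfence protocol, and a real IMD deployment additionally needs key distribution, replay protection, and the other listed features.

```python
# Generic symmetric challenge-response authentication between a reader
# (programmer) and an implant, using only the Python standard library.
import hmac, hashlib, secrets

KEY = secrets.token_bytes(32)          # pre-shared key (distribution not shown)

def challenge():
    return secrets.token_bytes(16)     # fresh nonce from the reader

def respond(key, nonce):
    # The implant proves possession of the key by MACing the nonce.
    return hmac.new(key, nonce, hashlib.sha256).digest()

def verify(key, nonce, tag):
    return hmac.compare_digest(respond(key, nonce), tag)

nonce = challenge()
tag = respond(KEY, nonce)
print("IMD authenticated:", verify(KEY, nonce, tag))
```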
Transform2Open
(2022)
As virtualization drives the automation of networking, the validation of security properties becomes more and more challenging, eventually ruling out manual inspection. While formal verification in Software-Defined Networks is provided by comprehensive tools with high-speed reverification capabilities, such as NetPlumber, they do not consider middlebox functionality like firewalls. They also lack the ability to handle dynamic protocol elements like IPv6 extension header chains. In this work, we provide suitable modeling abstractions that enable both: the inclusion of firewalls and of dynamic protocol elements. As an example, we model the Linux ip6tables/netfilter packet filter and also provide abstractions for an application layer gateway. Finally, we present a prototype of our formal verification system FaVe.
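To give a flavor of the kind of abstraction such verification systems build on, the toy below models packet filtering over ternary header patterns in the style of header space analysis; FaVe's actual abstractions for ip6tables state and extension header chains are considerably richer, and the 8-bit header layout is invented.

```python
# Headers as fixed-width bit strings, rules as ternary patterns
# ('0', '1', 'x' = wildcard) with first-match semantics.
def matches(pattern, header):
    """Ternary match of a concrete header against a rule pattern."""
    return all(p in ("x", h) for p, h in zip(pattern, header))

# Hypothetical 8-bit headers: 4 bits "source zone", 4 bits "next header".
rules = [
    ("0001xxxx", "drop"),    # drop everything from zone 0001
    ("xxxx0110", "accept"),  # accept packets whose next header is 0110
    ("xxxxxxxx", "drop"),    # default deny
]

def filter_packet(header):
    for pattern, action in rules:   # first matching rule wins
        if matches(pattern, header):
            return action

print(filter_packet("00010110"))  # drop: the first rule wins
print(filter_packet("00100110"))  # accept
```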
Handling manufacturing and aging faults with software-based techniques in tiny embedded systems
(2017)
Non-volatile memory occupies a large portion of the chip area in an embedded system. Such memories are prone to manufacturing faults, retention faults, and aging faults. This paper presents a single software-based technique that handles all of these fault types in tiny embedded systems without the need for hardware support, which is beneficial for low-cost embedded systems with simple memory architectures. A software infrastructure and a flow are presented that demonstrate how the technique is used for fault handling both right after manufacturing and in the field. Moreover, a full implementation for an MSP430 microcontroller is presented, along with a discussion of the performance, overhead, and reliability impacts.
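As a generic illustration of purely software-based fault detection (not the paper's specific MSP430 technique), the sketch below stores a CRC per memory block and rescans the image to locate blocks corrupted by manufacturing, retention, or aging faults; the block size and scan policy are made up.

```python
# Per-block CRCs over a non-volatile memory image: build the reference
# right after programming, rescan periodically to detect corrupted blocks.
import zlib

BLOCK = 64  # bytes per protected block (hypothetical)

def blocks(mem):
    return (mem[i:i + BLOCK] for i in range(0, len(mem), BLOCK))

def build_reference(mem):
    """Compute per-block CRCs right after programming the memory."""
    return [zlib.crc32(b) for b in blocks(mem)]

def scan(mem, reference):
    """Return indices of blocks whose contents no longer match their CRC."""
    return [i for i, b in enumerate(blocks(mem))
            if zlib.crc32(b) != reference[i]]

flash = bytearray(b"\xff" * 256)                 # freshly erased memory image
reference = build_reference(flash)
flash[70] ^= 0x04                                # inject a single-bit aging fault
print("faulty blocks:", scan(flash, reference))  # -> [1]
```

A real system would additionally repair or remap faulty blocks; only the detection step is shown here.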
Many markets are characterized by pricing competition. Typically, the competitors involved adjust their prices in response to one another with different frequencies. We analyze stochastic dynamic pricing models under competition for the sale of durable goods. Given a competitor's pricing strategy, we show how to derive optimal response strategies that take the anticipated competitor's price adjustments into account. We study the resulting price cycles and the associated expected long-term profits. We show that reaction frequencies have a major impact on a strategy's performance. To avoid acting predictably, our model also allows for randomized reaction times. Additionally, we study to what extent optimal response strategies of active competitors are affected by additional passive competitors that use constant prices. It turns out that optimized feedback strategies effectively avoid a decline in price. They help to gain profits, especially when aggressive competitors are involved.
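A toy simulation of the resulting price cycles, under assumptions far simpler than the paper's stochastic model: two firms undercut each other with different reaction frequencies and reset to a high price once margins are exhausted (an Edgeworth-style cycle). Costs, prices, and frequencies are invented.

```python
# Two firms undercutting each other at different reaction frequencies.
COST, RESET = 2.0, 10.0

def respond(other_price):
    """Undercut if still profitable, otherwise jump back to the reset price."""
    return other_price - 1.0 if other_price - 1.0 > COST else RESET

p_us, p_them = 9.0, 10.0
for t in range(12):
    p_us = respond(p_them)       # we react every period
    if t % 3 == 0:               # the competitor reacts only every third period
        p_them = respond(p_us)
    print(f"t={t:2d}  us={p_us:4.1f}  them={p_them:4.1f}")
```

Running this shows prices drifting down in steps until one firm resets to the high price, after which the cycle repeats; the faster-reacting firm spends most of the cycle slightly below its competitor.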
Comment sections of online news platforms are an essential space to express opinions and discuss political topics. In contrast to other online posts, news discussions are related to particular news articles, comments refer to each other, and individual conversations emerge. However, misuse by spammers, haters, and trolls makes costly content moderation necessary. Sentiment analysis can not only support moderation but also help to understand the dynamics of online discussions. A subtask of content moderation is the identification of toxic comments. To this end, we describe the concept of toxicity and characterize its subclasses. Further, we present various deep learning approaches, including datasets and architectures, tailored to sentiment analysis in online discussions. One way to make these approaches more comprehensible and trustworthy is fine-grained instead of binary comment classification; on the downside, more classes require more training data. Therefore, we propose to augment training data by using transfer learning, as sketched below. We discuss real-world applications, such as semi-automated comment moderation and troll detection. Finally, we outline future challenges and current limitations in light of recent research publications.
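A minimal sketch of the transfer-learning route: reuse a frozen pretrained sentence encoder so that only a small classifier must be learned from the scarce fine-grained toxicity labels. The model name and the toy examples are placeholder assumptions, not the chapter's actual setup.

```python
# Frozen pretrained encoder + small classifier head for fine-grained
# toxicity classes; only the head is fitted to the scarce labels.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

texts = [
    "Thanks for the thoughtful article!",      # none
    "You people are all idiots.",              # insult
    "I will find you and hurt you.",           # threat
    "Another useless rant by this outlet.",    # insult
]
labels = ["none", "insult", "threat", "insult"]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # frozen pretrained encoder
X = encoder.encode(texts)                          # (n, 384) embeddings

clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(encoder.encode(["Shut up, you moron."])))
```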
Continuous verification of network security compliance is an accepted need. In particular, the analysis of stateful packet filters plays a central role for network security in practice. However, the few existing tools that support the analysis of stateful packet filters are based on generally applicable formal methods like Satisfiability Modulo Theories (SMT) or theorem provers and show runtimes on the order of minutes to hours, making them unsuitable for continuous compliance verification. In this work, we address these challenges and present the concept of state shell interweaving to transform a stateful firewall rule set into a stateless rule set. This allows us to reuse any fast domain-specific engine from the field of data plane verification, leveraging smart, very fast, and domain-specialized data structures and algorithms, including Header Space Analysis (HSA). First, we introduce the formal language FPL, which enables a high-level, human-understandable specification of the desired state of network security. Second, we demonstrate the instantiation of a compliance process using a verification framework that analyzes the configuration of complex networks and devices, including stateful firewalls, for compliance with FPL policies. Our evaluation results show the scalability of the presented approach for the well-known Internet2 and Stanford benchmarks as well as for large firewall rule sets, where it outperforms state-of-the-art tools by a factor of over 41.
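What such a compliance check ultimately decides can be illustrated with a brute-force toy: given a rule set and a policy stating which packets must not pass, search the header space for forbidden packets that are nevertheless accepted. Real engines like HSA use symbolic set representations instead of the enumeration used here, and the FPL-like policy and 8-bit headers below are invented.

```python
# Brute-force compliance check over a tiny 8-bit header space.
WIDTH = 8

def matches(pattern, header):
    return all(p in ("x", h) for p, h in zip(pattern, header))

def accepted(rules, header):
    return next(a for p, a in rules if matches(p, header)) == "accept"

rules = [("0001xxxx", "accept"),   # misconfiguration: zone 0001 should be blocked
         ("xxxxxxxx", "drop")]
policy_forbids = "0001xxxx"        # policy intent: zone 0001 must not pass

violations = [f"{h:0{WIDTH}b}" for h in range(2 ** WIDTH)
              if matches(policy_forbids, f"{h:0{WIDTH}b}")
              and accepted(rules, f"{h:0{WIDTH}b}")]
print(f"{len(violations)} violating packets, e.g. {violations[0]}")
```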
The design of high-quality teaching requires collaboration between teachers and learners. Face-to-face teaching can draw on a long-standing tradition for this purpose, while digital teaching is comparatively still at the beginning of its dissemination. A major developmental step toward the digitization of teaching was taken in university teaching during the COVID-19 pandemic in spring 2020, when face-to-face teaching was suspended for months. During this time, important insights into the opportunities and limitations of digital teaching were gained. This paper presents selected results of a study conducted at four German universities with 875 responses in spring 2020. The study uncovers opportunities and limitations of digital teaching from the students' perspective, against the background of their experience in the fully digital semester. The results serve as a basis for deriving design guidelines for digital teaching and learning offerings. Based on a model for analyzing the design of teaching and learning formats, these guidelines are structured according to the elements learners, teachers, teaching content, environment, and teaching style.
This paper discusses a new approach for designing and deploying Security-as-a-Service (SecaaS) applications using cloud native design patterns. Current SecaaS approaches do not efficiently handle the increasing threats to computer systems and applications. For example, requests for security assessments increase drastically after a high-risk security vulnerability is disclosed. In such scenarios, SecaaS applications are unable to scale dynamically to serve the requests. A root cause of this challenge is the employment of architectures not specifically fitted to cloud environments. Cloud native design patterns resolve this challenge by enabling properties such as massive scalability and resiliency via the combination of microservice patterns and cloud-focused design patterns. However, adopting these patterns is a complex process during which several security issues can be introduced. In this work, we investigate these security issues, and we redesign and deploy a monolithic SecaaS application using cloud native design patterns while considering appropriate, layered security countermeasures, i.e., at the application and cloud networking layers. Our prototype implementation outperforms traditional, monolithic applications with an average scanner time of 6 minutes, without compromising security. Our approach can be employed for designing secure, scalable, and performant SecaaS applications that effectively handle unexpected increases in security assessment requests.