Institut für Informatik
Boolean networks (and more general logic models) are useful frameworks to study signal transduction across multiple pathways. Logic models can be learned from a prior knowledge network structure and multiplex phosphoproteomics data. However, most efficient and scalable training methods focus on the comparison of two time points and assume that the system has reached an early steady state. In this paper, we generalize such a learning procedure to take into account the time-series traces of phosphoproteomics data in order to discriminate Boolean networks according to their transient dynamics. To that end, we identify a necessary condition that the dynamics of a Boolean network must satisfy to be consistent with a discretized time-series trace. Based on this condition, we use Answer Set Programming to compute an over-approximation of the set of Boolean networks that fit best with the experimental data, and we provide the corresponding encodings. Combined with model-checking approaches, this yields a global learning algorithm. Our approach is able to learn logic models with a true positive rate higher than 78% in two case studies of mammalian signaling networks; for a larger case study, our method provides optimal answers after 7 min of computation. We quantified the gain in prediction precision of our method compared to learning approaches based on static data. Finally, as an application, our method identifies erroneous time points in the time-series data with respect to the optimal learned logic models.
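As a toy illustration of the kind of consistency check involved, the following Python sketch tests whether a small Boolean network reproduces a discretized trace under synchronous updates. This is a deliberately stronger requirement than the paper's necessary condition (which tolerates partial observations), and all names are hypothetical.

# Minimal sketch (hypothetical, not the paper's encoding): check whether a
# Boolean network, under synchronous updates, reproduces a discretized trace.

def synchronous_step(state, rules):
    """Apply every update function simultaneously to a state dict {node: 0/1}."""
    return {node: int(f(state)) for node, f in rules.items()}

def consistent_with_trace(rules, trace):
    """True if each observation equals the synchronous successor of the
    previous one -- much stronger than the paper's necessary condition."""
    return all(synchronous_step(s, rules) == t for s, t in zip(trace, trace[1:]))

# Toy two-node network: b inhibits a, a activates b.
rules = {
    "a": lambda s: 1 - s["b"],
    "b": lambda s: s["a"],
}
trace = [{"a": 1, "b": 0}, {"a": 1, "b": 1}, {"a": 0, "b": 1}]
print(consistent_with_trace(rules, trace))  # True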
We present a summary on the current status of two inversion algorithms that are used in EARLINET (European Aerosol Research Lidar Network) for the inversion of data collected with EARLINET multiwavelength Raman lidars. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. Development of these two algorithms started in 2000 when EARLINET was founded. The algorithms are based on a manually controlled inversion of optical data which allows for detailed sensitivity studies. The algorithms allow us to derive particle effective radius as well as volume and surface area concentration with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index still is a challenge in view of the accuracy required for these parameters in climate change studies in which light absorption needs to be known with high accuracy. It is an extreme challenge to retrieve the real part with an accuracy better than 0.05 and the imaginary part with accuracy better than 0.005-0.1 or +/- 50 %. Single-scattering albedo can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high-and low-absorbing aerosols. On the basis of a few exemplary simulations with synthetic optical data we discuss the current status of these manually operated algorithms, the potentially achievable accuracy of data products, and the goals for future work. One algorithm was used with the purpose of testing how well microphysical parameters can be derived if the real part of the complex refractive index is known to at least 0.05 or 0.1. The other algorithm was used to find out how well microphysical parameters can be derived if this constraint for the real part is not applied. The optical data used in our study cover a range of Angstrom exponents and extinction-to-backscatter (lidar) ratios that are found from lidar measurements of various aerosol types. We also tested aerosol scenarios that are considered highly unlikely, e.g. the lidar ratios fall outside the commonly accepted range of values measured with Raman lidar, even though the underlying microphysical particle properties are not uncommon. The goal of this part of the study is to test the robustness of the algorithms towards their ability to identify aerosol types that have not been measured so far, but cannot be ruled out based on our current knowledge of aerosol physics. We computed the optical data from monomodal logarithmic particle size distributions, i.e. we explicitly excluded the more complicated case of bimodal particle size distributions which is a topic of ongoing research work. Another constraint is that we only considered particles of spherical shape in our simulations. We considered particle radii as large as 7-10 mu m in our simulations where the Potsdam algorithm is limited to the lower value. We considered optical-data errors of 15% in the simulation studies. We target 50% uncertainty as a reasonable threshold for our data products, though we attempt to obtain data products with less uncertainty in future work.
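For readers unfamiliar with the optical quantities mentioned above, the following Python sketch computes two of them, the Angstrom exponent and the extinction-to-backscatter (lidar) ratio, from illustrative values. The numbers are made up and the helper functions are hypothetical; they are not part of the EARLINET algorithms.

import numpy as np

def angstrom_exponent(beta1, beta2, lam1, lam2):
    """Angstrom exponent from optical coefficients beta at wavelengths lam
    (wavelengths in the same units, e.g. nm)."""
    return -np.log(beta1 / beta2) / np.log(lam1 / lam2)

def lidar_ratio(alpha, beta):
    """Extinction-to-backscatter ratio (sr) at a single wavelength."""
    return alpha / beta

# Illustrative values: backscatter in Mm^-1 sr^-1, extinction in Mm^-1.
beta_355, beta_532 = 2.0, 1.2
alpha_355 = 100.0
print(angstrom_exponent(beta_355, beta_532, 355.0, 532.0))  # ~1.26
print(lidar_ratio(alpha_355, beta_355))                     # 50.0 sr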
Traditional probabilistic seismic-hazard analysis, as well as the estimation of ground-motion models (GMMs), is based on the ergodic assumption: the distribution of ground motions over time at a given site is taken to be the same as their spatial distribution over all sites for the same magnitude, distance, and site condition. With a large increase in the number of recorded ground-motion data, there are now repeated observations at given sites and from multiple earthquakes in small regions, so this assumption can be relaxed. We use a novel approach to develop a nonergodic GMM, cast as a varying-coefficient model (VCM). In this model, the coefficients are allowed to vary by geographical location, which makes it possible to incorporate effects of spatially varying source, path, and site conditions. Hence, a separate set of coefficients is estimated for each source and site coordinate in the data set. The coefficients are constrained to be similar for spatially nearby locations; this is achieved by placing a Gaussian process prior on the coefficients, with the amount of correlation determined by the data. The spatial correlation structure of the model allows one to extrapolate the varying coefficients to a new location and trace the corresponding uncertainties. The approach is illustrated with the Next Generation Attenuation-West2 data set, using only Californian records. The VCM outperforms a traditionally estimated GMM in terms of generalization error and leads to a reduction in the aleatory standard deviation by approximately 40%, which has important implications for seismic-hazard calculations. The scaling of the model with respect to its predictor variables, such as magnitude and distance, is physically plausible. The epistemic uncertainty associated with the predicted ground motions is small in places where events or stations are close and large where data are sparse.
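The following Python sketch illustrates the varying-coefficient idea in isolation: a site-specific coefficient drawn from a Gaussian-process prior whose squared-exponential kernel makes nearby sites similar. The kernel form, length scale, and variance are illustrative assumptions, not the paper's fitted model.

import numpy as np

def sq_exp_kernel(coords, length_scale=50.0, variance=0.1):
    """Squared-exponential covariance over 2-D site coordinates (km)."""
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale**2)

rng = np.random.default_rng(0)
coords = rng.uniform(0, 200, size=(30, 2))          # 30 hypothetical sites
K = sq_exp_kernel(coords) + 1e-9 * np.eye(30)       # jitter for stability
c_site = rng.multivariate_normal(np.zeros(30), K)   # one draw of the site term

# A nonergodic prediction would then add c_site[i] to the fixed-effect part,
# schematically: ln(Y) = c0 + c1*M + c2*ln(R) + c_site[i] + residual.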
Solving problems that combine task and motion planning requires searching across a symbolic search space and a geometric search space. Because of the semantic gap between symbolic and geometric representations, symbolic sequences of actions are not guaranteed to be geometrically feasible. This compels us to search in the combined search space, in which frequent backtracking between the symbolic and geometric levels makes the search inefficient. We address this problem by guiding the symbolic search with rich information extracted from the geometric level through culprit detection mechanisms.
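A schematic sketch of such culprit-guided interleaving is shown below. The planner objects and their methods are hypothetical placeholders, not the authors' implementation; the point is only the feedback loop from geometric failure to symbolic constraint.

# Schematic only: when a symbolic plan fails geometrically, the failing
# action ("culprit") becomes a constraint that prunes the symbolic search.

def plan(task_planner, motion_planner, constraints=None):
    constraints = set() if constraints is None else constraints
    while True:
        skeleton = task_planner.next_plan(forbidden=constraints)
        if skeleton is None:
            return None                  # symbolic space exhausted
        ok, culprit = motion_planner.refine(skeleton)
        if ok:
            return skeleton              # geometrically feasible plan found
        constraints.add(culprit)         # feed culprit back to the task level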
In this paper, two new methods for the design of fault-tolerant pipelined sequential and combinational circuits, called Error Detection and Partial Error Correction (EDPEC) and Full Error Detection and Correction (FEDC), are described. The proposed methods are based on an error detection circuit (EDC) in the combinational circuit part, combined with fault-tolerant memory elements implemented using fault-tolerant master-slave flip-flops. If a transient error, due to a transient fault in the combinational circuit part, is detected by the EDC, the error signal controls the latching stage of the flip-flops such that the previous correct state of the register stage is retained until the transient error disappears. The system can continue to work in its previous correct state, and no additional recovery procedure (with typically reduced clock frequency) is necessary. The target applications are dataflow processing blocks, for which software-based recovery methods cannot be easily applied. The presented architectures address both single events and timing faults of arbitrarily long duration. An example of this architecture, based on a carry look-ahead adder, is developed and described. The timing conditions are carefully investigated and simulated down to the layout level. The enhancement over the baseline architecture is demonstrated with respect to the achieved fault tolerance for single-event and timing faults. The number of uncorrected single events is reduced by the EDPEC architecture by a factor of 2.36 compared with a previous solution. The FEDC architecture further reduces the number of uncorrected events to zero and outperforms Triple Modular Redundancy (TMR) with respect to the correction of timing faults. The power overhead of both new architectures is about 26-28% lower than that of TMR.
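The following behavioral Python sketch captures only the core EDPEC idea, a register stage that holds its last correct state while an error is flagged. It abstracts away the gate-level, timing, and layout aspects the paper is actually about, and the class is a hypothetical stand-in.

# Behavioral sketch (not the paper's gate-level design): a register that
# refuses to latch a new value while the error detection circuit flags an
# error, so the last correct state is retained.

class ErrorGatedRegister:
    def __init__(self, width):
        self.state = [0] * width

    def clock(self, next_value, error_detected):
        """On each clock edge, latch next_value only if no error is flagged."""
        if not error_detected:
            self.state = list(next_value)
        return self.state                # otherwise keep previous correct state

reg = ErrorGatedRegister(4)
print(reg.clock([1, 0, 1, 1], error_detected=False))  # [1, 0, 1, 1]
print(reg.clock([0, 0, 0, 0], error_detected=True))   # [1, 0, 1, 1] retained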
Answer set programming is a declarative problem-solving paradigm that rests upon a work flow involving modeling, grounding, and solving. While the former is described by Gebser and Schaub (2016), we focus here on key issues in grounding (how to systematically replace object variables by ground terms in an effective way) and solving (how to compute the answer sets of the propositional logic program obtained by grounding).
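As a minimal illustration of this work flow, the sketch below grounds and solves a small reachability program through the Python API of the clingo system (this assumes the clingo package is installed; the program itself is illustrative and not from the article).

import clingo

program = """
node(1..3).
edge(1,2). edge(2,3).
% Grounding instantiates the rule below for all substitutions of X and Y.
reach(1).
reach(Y) :- reach(X), edge(X,Y).
"""

ctl = clingo.Control()
ctl.add("base", [], program)
ctl.ground([("base", [])])          # grounding: replace variables by ground terms
ctl.solve(on_model=lambda m: print("Answer set:", m))  # solving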
Answer set programming (ASP) has emerged as an approach to declarative problem solving based on the stable model semantics for logic programs. The basic idea is to represent a computational problem by a logic program, formulating constraints in terms of rules, such that its answer sets correspond to problem solutions. To this end, ASP combines an expressive language for high-level modeling with powerful low-level reasoning capacities, provided by off-the-shelf tools. Compact problem representations take advantage of genuine modeling features of ASP, including (first-order) variables, negation by default, and recursion. In this article, we demonstrate the ASP methodology on two example scenarios, illustrating basic as well as advanced modeling and solving concepts. We also discuss mechanisms to represent and implement extended kinds of preferences and optimization. An overview of further available extensions concludes the article.
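The small graph-coloring example below, written against clingo's Python API (assumed installed), illustrates the modeling features just mentioned: first-order variables, a choice rule, and an integrity constraint. It is a generic textbook-style example, not one taken from the article.

import clingo

encoding = """
color(red; green; blue).
node(1..4).
edge(1,2). edge(2,3). edge(3,4). edge(4,1).
% Each node gets exactly one color; adjacent nodes must differ.
1 { assign(N,C) : color(C) } 1 :- node(N).
:- edge(X,Y), assign(X,C), assign(Y,C).
"""

ctl = clingo.Control()
ctl.add("base", [], encoding)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print(m))   # prints one valid coloring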
Learning to control a structured-prediction decoder for detection of HTTP-layer DDoS attackers (2016)
We focus on the problem of detecting clients that attempt to exhaust server resources by flooding a service with protocol-compliant HTTP requests. Attacks are usually coordinated by an entity that controls many clients. Modeling the application as a structured-prediction problem allows the prediction model to jointly classify a multitude of clients based on their cohesion of otherwise inconspicuous features. Since the resulting output space is too vast to search exhaustively, we employ greedy search and techniques in which a parametric controller guides the search. We apply a known method that sequentially learns the controller and the structured-prediction model. We then derive an online policy-gradient method that finds the parameters of the controller and of the structured-prediction model in a joint optimization problem, and we obtain a convergence guarantee for the latter method. We evaluate and compare the various methods on a large collection of traffic data from a web-hosting service.
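The sketch below shows a generic greedy decoder of the kind described: clients are flagged one at a time as long as doing so improves a joint score over the flagged set. The scoring function and data are toy placeholders, not the paper's learned model or features.

# Schematic greedy search over the joint labeling of clients.

def greedy_decode(clients, joint_score):
    """Flag clients one by one while the joint score keeps improving."""
    flagged = set()
    improved = True
    while improved:
        improved = False
        best_gain, best_c = 0.0, None
        for c in clients - flagged:
            gain = joint_score(flagged | {c}) - joint_score(flagged)
            if gain > best_gain:
                best_gain, best_c = gain, c
        if best_c is not None:
            flagged.add(best_c)
            improved = True
    return flagged

# Toy score: reward flagging clients whose request rate exceeds a threshold.
rates = {"c1": 900, "c2": 30, "c3": 1200}
score = lambda flagged: sum(rates[c] - 500 for c in flagged)
print(greedy_decode(set(rates), score))  # {'c1', 'c3'}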
A holistic design and verification environment for investigating driving-assistance systems is presented, with an emphasis on system-on-chip architectures for video applications. Starting with an executable specification of a driving-assistance application, subsequent transformations are performed across different levels of abstraction until the final implementation is achieved. Hardware/software partitioning is facilitated through the integration of OpenCV and SystemC in the same design environment, as well as OpenCV and Linux in the run-time system. We built a rapid-prototyping, FPGA-based camera system, which allows designs to be explored and evaluated under realistic conditions. Using a lane-departure application and the corresponding performance speedup, we show that our platform reduces design time while improving verification effort.
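As a rough software-level reference point for such an application, the following OpenCV sketch detects lane-line segments in the lower half of a frame. Parameter values are illustrative, and the FPGA pipeline of the paper is not reproduced here.

import cv2
import numpy as np

def detect_lane_lines(frame):
    """Return line segments (x1, y1, x2, y2) found in the lower image half."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)          # edge map of the frame
    h = edges.shape[0]
    edges[: h // 2, :] = 0                    # keep only the road region
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=20)
    return [] if lines is None else [l[0] for l in lines]

# Usage (hypothetical input file): lines = detect_lane_lines(cv2.imread("road.png"))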
Answer Set Programming (ASP) is a powerful declarative programming paradigm that has been successfully applied to many different domains. Recently, ASP has also proved successful for hard optimization problems like course timetabling and travel allotment. In this paper, we approach another important task, namely, the shift design problem, aiming at an alignment of a minimum number of shifts in order to meet required numbers of employees (which typically vary for different time periods) in such a way that over- and understaffing is minimized. We provide an ASP encoding of the shift design problem, which, to the best of our knowledge, has not been addressed by ASP yet. Our experimental results demonstrate that ASP is capable of improving the best known solutions to some benchmark problems. Other instances remain challenging and make the shift design problem an interesting benchmark for ASP-based optimization methods.
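A drastically simplified toy version of the shift design problem can be encoded and solved with clingo's Python API (assumed installed), as sketched below. Shift lengths, demand facts, and the minimization statement are illustrative stand-ins, not the paper's encoding.

import clingo

toy = """
hour(0..5).
% Hypothetical: a shift starting at hour S covers hours S..S+2.
shift(S) :- hour(S), S <= 3.
{ use(S) : shift(S) }.
covered(H) :- use(S), hour(H), S <= H, H <= S+2.
:- hour(H), demand(H), not covered(H).
demand(0). demand(2). demand(5).
% Minimize the number of selected shifts.
#minimize { 1,S : use(S) }.
"""

ctl = clingo.Control()
ctl.add("base", [], toy)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print(m))   # final printed model is optimal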