One of the most important aspects of a randomized algorithm is bounding its expected run time on various problems. Formally speaking, this means bounding the expected first-hitting time of a random process. The two arguably most popular tools to do so are the fitness level method and drift theory. The fitness level method considers arbitrary transition probabilities but only allows the process to move toward the goal. On the other hand, drift theory allows the process to move in any direction as long as it moves closer to the goal in expectation; however, this tendency has to be monotone and, thus, the transition probabilities cannot be arbitrary. We provide a result that combines the benefits of these two approaches: our result gives a lower and an upper bound for the expected first-hitting time of a random process over {0,..., n} that is allowed to move forward and backward by 1 and can use arbitrary transition probabilities. If the transition probabilities are known, our bounds coincide and yield the exact value of the expected first-hitting time. Further, we also state the stationary distribution as well as the mixing time of a special case of our scenario.
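For a chain of this shape (moves of ±1 over {0,..., n} with otherwise arbitrary transition probabilities), the exact expected first-hitting time follows from the standard one-step recurrence for birth-death chains. The sketch below uses that textbook formulation, not the paper's bounds; function names and the example probabilities are illustrative.

```python
import random

def exact_hitting_time(p, q):
    # Birth-death chain on {0, ..., n} with target n.  p[i] is the
    # probability of moving i -> i+1, q[i] of moving i -> i-1 (q[0] = 0),
    # and the remaining mass 1 - p[i] - q[i] keeps the process at i.
    # t_i, the expected time to go from i to i+1, obeys the recurrence
    #   t_i = 1/p_i + (q_i/p_i) * t_{i-1},
    # and the expected first-hitting time from 0 is the sum of all t_i.
    t, total = 0.0, 0.0
    for i in range(len(p)):
        t = 1.0 / p[i] + (q[i] / p[i]) * t
        total += t
    return total

def simulate(p, q, runs=20000, seed=1):
    # Monte Carlo cross-check of the closed-form value above.
    rng = random.Random(seed)
    n, acc = len(p), 0
    for _ in range(runs):
        x = steps = 0
        while x < n:
            u = rng.random()
            if u < p[x]:
                x += 1
            elif u < p[x] + q[x]:
                x -= 1
            steps += 1
        acc += steps
    return acc / runs

p, q = [0.5, 0.4, 0.3], [0.0, 0.2, 0.2]
print(exact_hitting_time(p, q), simulate(p, q))
```

For the example chain the closed form gives 2 + 7/2 + 17/3 = 67/6 expected steps, which the simulation should approach.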
For theoretical analyses, there are two specifics distinguishing GP from many other areas of evolutionary computation. First, the variable-size representations, which in particular can lead to bloat (i.e., the growth of individuals with redundant parts). Second, the role and realization of crossover, which is particularly central in GP due to the tree-based representation. Whereas some theoretical work on GP has studied the effects of bloat, crossover has had a surprisingly small share in this work. We analyze a simple crossover operator in combination with local search, where a preference for small solutions minimizes bloat (lexicographic parsimony pressure); the resulting algorithm is denoted Concatenation Crossover GP. For this purpose, three variants of the well-studied Majority test function with large plateaus are considered. We show that Concatenation Crossover GP can efficiently optimize these test functions, whereas local search alone cannot be efficient for all three variants, regardless of whether bloat control is employed.
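As a toy illustration of concatenation crossover combined with lexicographic parsimony pressure, the sketch below evolves variable-length lists of literals on a simplified Majority-style fitness function. The operators, population handling, and fitness variant are illustrative assumptions, not the exact algorithm or test functions analyzed in the paper.

```python
import random

def majority_fitness(ind, n):
    # Simplified Majority-style fitness: variable i counts as expressed
    # iff its positive literal +i occurs at least as often as its
    # negation -i (and occurs at all).
    score = 0
    for i in range(1, n + 1):
        pos, neg = ind.count(i), ind.count(-i)
        if pos > 0 and pos >= neg:
            score += 1
    return score

def better(a, b, n):
    # Lexicographic parsimony pressure: compare fitness first and break
    # ties in favor of the smaller (less bloated) individual.
    return (majority_fitness(a, n), -len(a)) >= (majority_fitness(b, n), -len(b))

def concatenation_crossover_gp(n, steps=5000, pop_size=10, seed=1):
    rng = random.Random(seed)
    rand_lit = lambda: rng.choice([1, -1]) * rng.randint(1, n)
    pop = [[rand_lit()] for _ in range(pop_size)]
    for _ in range(steps):
        p1, p2 = rng.sample(pop, 2)
        child = p1 + p2                     # concatenation crossover
        if rng.random() < 0.5:              # insert or delete one literal
            child = child + [rand_lit()]
        else:
            child = child[:-1] or child
        idx = rng.randrange(pop_size)
        if better(child, pop[idx], n):      # parsimony-pressure survival
            pop[idx] = child
    return max(pop, key=lambda s: (majority_fitness(s, n), -len(s)))
```

The tie-breaking in `better` is what keeps concatenated offspring from growing without bound: among equally fit individuals, the shorter one always survives.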
High-throughput RNA sequencing (RNAseq) produces large data sets containing expression levels of thousands of genes. The analysis of RNAseq data leads to a better understanding of gene functions and interactions, which eventually helps to study diseases like cancer and develop effective treatments. Large-scale RNAseq expression studies on cancer comprise samples from multiple cancer types and aim to identify their distinct molecular characteristics. Analyzing samples from different cancer types implies analyzing samples of different tissue origin. Such multi-tissue RNAseq data sets require a meaningful analysis that accounts for the inherent tissue-related bias: the identified characteristics must not originate from the differences in tissue types, but from the actual differences in cancer types. However, current analysis procedures do not incorporate this aspect. We therefore propose to integrate tissue-awareness into the analysis of multi-tissue RNAseq data. We introduce an extension for gene selection that provides a tissue-wise context for every gene and can be flexibly combined with any existing gene selection approach. We suggest expanding conventional evaluation with additional metrics that are sensitive to the tissue-related bias. Evaluations show that low-complexity gene selection approaches in particular profit from introducing tissue-awareness.
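The tissue-wise context idea can be sketched as a wrapper around any per-gene scoring function: score each gene within every tissue separately and aggregate, instead of scoring across all samples at once. The within-tissue variance score and the averaging aggregation below are illustrative assumptions, not the paper's specific extension.

```python
from collections import defaultdict
from statistics import variance

def tissue_aware_scores(expr, tissues, score=variance):
    # expr: list of samples, each a list of per-gene expression values.
    # tissues: tissue-of-origin label per sample (assumed: every tissue
    # group contributes at least two samples).
    by_tissue = defaultdict(list)
    for sample, t in zip(expr, tissues):
        by_tissue[t].append(sample)
    n_genes = len(expr[0])
    result = []
    for g in range(n_genes):
        # Score the gene inside each tissue, then average, so tissue-type
        # differences alone cannot dominate the gene's score.
        per_tissue = [score([s[g] for s in samples])
                      for samples in by_tissue.values() if len(samples) > 1]
        result.append(sum(per_tissue) / len(per_tissue))
    return result

def select_top_genes(expr, tissues, k):
    scores = tissue_aware_scores(expr, tissues)
    return sorted(range(len(scores)), key=lambda g: -scores[g])[:k]
```

In this toy form, a gene that only separates tissues (constant within each tissue) scores zero, while a gene that varies within tissues is ranked highly.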
User-generated content on social media platforms is a rich source of latent information about individual variables. Crawling and analyzing this content provides a new approach for enterprises to personalize services and put forward product recommendations. In the past few years, brands have made a gradual appearance on social media platforms for advertisement, customer support, and public relations purposes, and by now this presence has become a necessity across all industries. This online identity can be represented as a brand personality that reflects how a brand is perceived by its customers. We exploited recent research in text analysis and personality detection to build an automatic brand personality prediction model on top of Five-Factor Model and Linguistic Inquiry and Word Count (LIWC) features extracted from publicly available benchmarks. The proposed model reported significant accuracy in predicting specific personality traits from brands. To evaluate our prediction results on actual brands, we crawled the Facebook API for 100k posts from the pages of the most valuable brands in the USA; we visualize exemplary comparison results and present suggestions for future directions.
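The feature-extraction step can be sketched as follows: LIWC-style features are relative frequencies of words drawn from curated category lexicons, which are then fed to a per-trait model. The tiny lexicons and linear weights below are illustrative placeholders, not the licensed LIWC dictionaries or the paper's trained model.

```python
import re

# Miniature stand-in lexicons (the real LIWC categories are far larger).
LEXICON = {
    "positive_emotion": {"love", "great", "happy", "best"},
    "negative_emotion": {"hate", "bad", "sad", "worst"},
    "social": {"we", "friend", "team", "together"},
}

def liwc_features(text):
    # Relative frequency of each category's words in the text.
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)
    return {cat: sum(w in vocab for w in words) / total
            for cat, vocab in LEXICON.items()}

# Placeholder linear scorer for a single Five-Factor trait; real weights
# would be learned from labeled benchmark data.
WEIGHTS = {"positive_emotion": 2.0, "negative_emotion": -2.0, "social": 1.0}

def trait_score(text, weights=WEIGHTS):
    feats = liwc_features(text)
    return sum(weights[c] * v for c, v in feats.items())
```

Applied to a brand's aggregated posts, such per-trait scores yield the brand-personality profile the abstract describes.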
This paper investigates the applicability of CMOS decoupling cells for mitigating Single Event Transient (SET) effects in standard combinational gates. The concept is based on the insertion of two decoupling cells between the gate's output and the power/ground terminals. To verify the proposed hardening approach, extensive SPICE simulations have been performed with standard combinational cells designed in IHP's 130 nm bulk CMOS technology. The obtained simulation results show that the insertion of decoupling cells increases the gate's critical charge, thus reducing the gate's soft error rate (SER). Moreover, the decoupling cells facilitate the suppression of SET pulses propagating through the gate. It has been shown that decoupling cells may be a competitive alternative to gate upsizing and gate duplication for hardening gates with lower critical charge and multiple (3 or 4) inputs, as well as for filtering the short SET pulses induced by low-LET particles.
Studies indicate that reliable access to power is an important enabler for economic growth. To this end, modern energy management systems have seen a shift from reliance on time-consuming manual procedures to highly automated management, with current energy provisioning systems being run as cyber-physical systems. Operating energy grids as cyber-physical systems offers the advantage of increased reliability and dependability, but also raises issues of security and privacy. In this chapter, we provide an overview of the contents of this book, showing the interrelation between the topics of the chapters in terms of smart energy provisioning. We begin by discussing the concept of smart grids in general, then narrow our focus to smart micro-grids in particular. Lossy networks also provide an interesting framework for enabling the implementation of smart micro-grids in remote/rural areas, where deploying standard smart grids is economically and structurally infeasible. To this end, we consider an architectural design for a smart micro-grid suited to devices with low processing capabilities. We model malicious behaviour and propose mitigation measures based on properties that distinguish normal from malicious behaviour.
The desiccation of the Aral Sea and the related changes in hydroclimatic conditions at the regional level have been a hot topic for the past decades. The key problem of scientific research projects devoted to investigating the modern hydrological regime of the Aral Sea basin is its discontinuous nature: only a limited number of papers take the complex runoff-formation system into account in its entirety. To address this challenge, we have developed a continuous prediction system for assessing freshwater inflow into the Small Aral Sea, based on a coupled stack of hydrological and data-driven models. Results show good prediction skill and confirm the possibility of developing a valuable water-assessment tool that utilizes the power of both classical physically based and modern machine-learning models, even for territories with a complex water management system and strong water-related data scarcity. The source code and data of the proposed system are available on GitHub (https://github.com/SMASHIproject/IWRM2018).
In cloud computing, users are able to use their own operating system (OS) image to run a virtual machine (VM) on a remote host. In a public or private cloud, the VM's OS is started by the user via interfaces provided by the cloud provider; in a peer-to-peer cloud, the VM is started by the host admin. Once the VM is running, the user can obtain remote access to it to install, configure, and run services. For security reasons, the user needs to verify the integrity of the running VM, because a malicious host admin could modify the image, or even replace it with a similar one, in order to extract sensitive data from the VM. We propose an approach to verify the integrity of a running VM on a remote host without using any specific hardware such as a Trusted Platform Module (TPM). Our approach is implemented on a Linux platform, where the kernel files (vmlinuz and initrd) can be replaced with new files while the VM is running. kexec is used to reboot the VM with the new kernel files. The new kernel contains secret codes that are used to verify whether the VM was started with the new kernel files. The new kernel is then used to further measure the integrity of the running VM.
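The verification step that the secret codes enable can be sketched as a nonce-based challenge-response: the freshly booted kernel proves possession of the embedded secret without ever sending it over the wire. This HMAC-based exchange is a minimal sketch of one plausible realization, not the paper's exact protocol.

```python
import hmac, hashlib, os

def vm_response(kernel_secret, nonce):
    # Runs inside the rebooted VM: prove possession of the secret baked
    # into the new kernel files (vmlinuz/initrd) without revealing it.
    return hmac.new(kernel_secret, nonce, hashlib.sha256).hexdigest()

def verifier_check(expected_secret, nonce, response):
    # Runs on the verifier: recompute the HMAC and compare in constant time.
    expected = hmac.new(expected_secret, nonce, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

# A fresh nonce per challenge prevents replaying an old response.
nonce = os.urandom(16)
secret = b"secret-embedded-in-new-kernel"   # placeholder value
print(verifier_check(secret, nonce, vm_response(secret, nonce)))
```

A host admin who booted the original (unmodified-by-the-user) kernel lacks the secret and cannot produce a valid response for a fresh nonce.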
We consider chimera states in a one-dimensional medium of nonlinear, nonlocally coupled phase oscillators. Stationary inhomogeneous solutions of the Ott-Antonsen equation for a complex order parameter that correspond to fundamental chimeras are constructed. Stability calculations reveal that only some of these states are stable. Direct numerical simulation has shown that, under certain conditions, these structures transform into breathing chimera regimes due to the development of instability. Further development of the instability leads to turbulent chimeras.
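The underlying model can be sketched as a ring of phase oscillators with a nonlocal coupling kernel, dθ_k/dt = ω − Σ_j G(k−j) sin(θ_k − θ_j + α), whose local order parameter separates coherent from incoherent regions. The Euler integration and parameter values below are an illustrative toy, not the paper's numerical setup.

```python
import math, cmath, random

def simulate_ring(n=64, alpha=1.46, kappa=4.0, steps=500, dt=0.025, seed=0):
    # Ring of n identical phase oscillators (common frequency removed by
    # moving to a rotating frame) with an exponential nonlocal kernel.
    rng = random.Random(seed)
    theta = [rng.uniform(-math.pi, math.pi) for _ in range(n)]
    g = [math.exp(-kappa * min(d, n - d) / n) for d in range(n)]
    norm = sum(g)
    g = [x / norm for x in g]               # normalized coupling kernel
    for _ in range(steps):
        new = []
        for k in range(n):
            coupling = sum(g[(k - j) % n] * math.sin(theta[k] - theta[j] + alpha)
                           for j in range(n))
            new.append(theta[k] - dt * coupling)   # explicit Euler step
        theta = new
    return theta

def local_order_parameter(theta, k, width=8):
    # |z| close to 1 marks a locally coherent region, smaller |z| an
    # incoherent one -- the coexistence of both is the chimera signature.
    n = len(theta)
    z = sum(cmath.exp(1j * theta[(k + d) % n]) for d in range(-width, width + 1))
    return abs(z) / (2 * width + 1)
```

Long integrations at suitable (α, κ) develop a region with the local order parameter near 1 coexisting with a drifting, incoherent region.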
Monitoring is a key prerequisite for self-adaptive software and many other forms of operating software. Monitoring relevant lower-level phenomena, such as the occurrences of exceptions and diagnosis data, requires carefully examining which detailed information is really necessary and feasible to monitor. Adaptive monitoring permits observing a greater variety of details with less overhead, provided that most of the time the MAPE-K loop can operate using only a small subset of all those details. However, engineering such adaptive monitoring is a major effort on its own that further complicates the development of self-adaptive software. The proposed approach overcomes the outlined problems by providing generic adaptive monitoring via runtime models. It reduces the effort needed to introduce and apply adaptive monitoring by avoiding additional development effort for controlling the monitoring adaptation. Although the generic approach is independent of the monitoring purpose, it still allows for substantial savings in monitoring resource consumption, as demonstrated by an example.
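The core idea can be sketched as a runtime model that records which details are currently observed by some analysis, so that costly probes run only on demand and no separate adaptation logic has to be written. Class and method names are illustrative assumptions, not the approach's actual API.

```python
class RuntimeModel:
    # Reflects the running system; also tracks which details the MAPE-K
    # loop (or any other client) is currently reading.
    def __init__(self):
        self.observed = set()
        self.values = {}

    def observe(self, detail):
        self.observed.add(detail)

    def unobserve(self, detail):
        self.observed.discard(detail)

class AdaptiveMonitor:
    def __init__(self, model, probes):
        self.model = model
        self.probes = probes        # detail name -> costly probe function

    def tick(self):
        # Only probes for currently observed details execute, so the
        # monitoring adapts automatically to what clients actually need,
        # with no hand-written adaptation controller.
        for detail, probe in self.probes.items():
            if detail in self.model.observed:
                self.model.values[detail] = probe()

calls = []
model = RuntimeModel()
mon = AdaptiveMonitor(model, {"exceptions": lambda: calls.append("x") or 42})
mon.tick()                          # nothing observed -> probe not run
model.observe("exceptions")
mon.tick()                          # now the probe executes
```

When the analysis stops reading a detail (`unobserve`), the associated probe overhead disappears on the next tick without any further configuration.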