Based on the notion that time, space, and number are part of a generalized magnitude system, we assume that the dual-systems approach to temporal cognition also applies to numerical cognition. Referring to theoretical models of the development of numerical concepts, we propose that children's early skills in processing numbers can be described analogously to temporal updating and temporal reasoning.
Bridging the Gap
(2019)
The recent restructuring of the electricity grid (i.e., smart grid) introduces a number of challenges for today's large-scale computing systems. To operate reliably and efficiently, computing systems must not only adhere to technical limits (i.e., thermal constraints) but also reduce operating costs, for example, by increasing their energy efficiency. Efforts to improve energy efficiency, however, are often hampered by inflexible software components that hardly adapt to underlying hardware characteristics. In this paper, we propose an approach to bridge the gap between inflexible software and heterogeneous hardware architectures. Our proposal introduces adaptive software components that dynamically adapt to heterogeneous processing units (i.e., accelerators) at runtime to improve the energy efficiency of computing systems.
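As a rough illustration of the idea of adaptive components (a hypothetical sketch, not the paper's implementation; the unit names and energy figures are invented):

```python
# Hypothetical sketch: dispatch work to the processing unit with the
# lowest estimated energy per operation, re-evaluated at runtime as
# units become available or busy. All names and numbers are invented.

units = [
    {"name": "cpu",        "joules_per_op": 1.00, "available": True},
    {"name": "gpu",        "joules_per_op": 0.25, "available": True},
    {"name": "fpga_accel", "joules_per_op": 0.10, "available": False},
]

def pick_unit(units):
    """Choose the most energy-efficient processing unit currently available."""
    candidates = [u for u in units if u["available"]]
    return min(candidates, key=lambda u: u["joules_per_op"])

print(pick_unit(units)["name"])  # -> "gpu" while the FPGA is unavailable
```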
Mise-Unseen
(2019)
Creating or arranging objects at runtime is needed in many virtual reality applications, but such changes are noticed when they occur inside the user's field of view. We present Mise-Unseen, a software system that applies such scene changes covertly inside the user's field of view. Mise-Unseen leverages gaze tracking to create models of user attention, intention, and spatial memory to determine if and when to inject a change. We present seven applications of Mise-Unseen to unnoticeably modify the scene within view (i) to hide that task difficulty is adapted to the user, (ii) to adapt the experience to the user's preferences, (iii) to time the use of low-fidelity effects, (iv) to detect user choice for passive haptics even when lacking physical props, (v) to sustain physical locomotion despite a lack of physical space, (vi) to reduce motion sickness during virtual locomotion, and (vii) to verify user understanding during story progression. We evaluated Mise-Unseen and our applications in a user study with 15 participants and found that while gaze data indeed supports obfuscating changes inside the field of view, changes are rendered unnoticeable by using gaze in combination with common masking techniques.
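A minimal sketch of the gating logic such a system might use (our simplification with invented names and numbers; Mise-Unseen's actual attention models are far richer): commit a staged scene change only when modeled attention on the change region is low.

```python
# Hypothetical sketch: inject a staged scene change only when a toy
# gaze-based attention score for the change region drops below a
# threshold. Coordinates are normalized screen positions.
import math

def attention_on(gaze_xy, region_xy, sigma=0.15) -> float:
    """Toy attention score: decays with gaze distance to the region."""
    d = math.dist(gaze_xy, region_xy)
    return math.exp(-(d * d) / (2 * sigma * sigma))

def maybe_apply_change(gaze_xy, region_xy, apply_fn, threshold=0.1) -> bool:
    if attention_on(gaze_xy, region_xy) < threshold:
        apply_fn()  # the change happens while attention is elsewhere
        return True
    return False

# Gaze rests far from the staged change, so the swap goes through.
maybe_apply_change((0.9, 0.9), (0.2, 0.3), lambda: print("swap!"))
```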
Diffusion of cosmic rays (CRs) is the key process for understanding their propagation and acceleration. We employ the description of the spatial separation of magnetic field lines in magnetohydrodynamic turbulence by Lazarian & Vishniac to quantify the divergence of the magnetic field on scales smaller than the injection scale of turbulence, and we show that this divergence induces superdiffusion of CRs in the direction perpendicular to the mean magnetic field. The perpendicular displacement squared increases not linearly with the distance x along the magnetic field, as would be the case for regular diffusion, but as x^3 for freely streaming CRs. The dependence changes to x^(3/2) for CRs propagating diffusively along the magnetic field. In the latter case, we show that it is important to distinguish the perpendicular displacement with respect to the mean field from that with respect to the local magnetic field. We consider how superdiffusion changes the acceleration of CRs in shocks and show how it decreases the efficiency of CR acceleration in perpendicular shocks. We also demonstrate that when a small-scale magnetic field is generated in the pre-shock region, efficient acceleration can take place for CRs streaming without collisions along the magnetic loops.
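For reference, the three scaling regimes described above can be summarized compactly (a restatement in our notation):

```latex
\langle \delta y^{2} \rangle \;\propto\;
\begin{cases}
x, & \text{regular diffusion,}\\[2pt]
x^{3}, & \text{freely streaming CRs (superdiffusion),}\\[2pt]
x^{3/2}, & \text{CRs diffusing along the magnetic field,}
\end{cases}
```

where $\langle \delta y^{2} \rangle$ is the mean squared perpendicular displacement and $x$ the distance along the mean magnetic field.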
Editorial
(2019)
The new year starts and many of us have right away been burdened with conference deadlines, grant proposal deadlines, teaching obligations, paper revisions and many other things. While being more or less successful in fulfilling to-do lists and ticking off urgent (and sometimes even important) things, we often feel that our ability to be truly creative or innovative is rather restrained by this external pressure. With this, we are not alone. Many studies have shown that stress does influence overall work performance and satisfaction. Furthermore, more and more students and entry-level employees look for work-life balance and search for employers that offer an environment and organization that considers these needs. High-tech and start-up companies praise themselves for their "feel-good managers" or yoga programs. But is this really helpful? Is there indeed a relationship between stress, adverse work environments and creativity or innovation? What are the supporting factors in a work environment that let employees be more creative? What kind of leadership do we need for innovative behaviour, and to what extent can an organization create support structures that reduce the stress we feel? The first issue of Creativity and Innovation Management in 2019 gives some first answers to these questions and hopefully some food for thought.
The first paper, written by Dirk De Clercq and Imanol Belausteguigoitia, starts with the question of what impact work overload has on creative behaviour. The authors look at how employees' perceptions of work overload reduce their creative behaviour. While they find empirical proof for this relationship, they can also show that the effect is weaker at higher levels of passion for work, emotion sharing, and organizational commitment. The buffering effects of emotion sharing and organizational commitment are particularly strong when they are combined with high levels of passion for work. Their findings give first empirical proof that organizations can and should take an active role in helping their employees reduce the effects of adverse work conditions in order to become or stay creative. However, not only work overload harms creative behaviour; the fear of losing one's job also has detrimental effects on innovative work behaviour. Anahi van Hootegem, Wendy Niesen and Hans de Witte verify that stress and adverse environmental conditions shape our perception of work. Using threat rigidity theory and an empirical study of 394 employees, they show that the threat of job loss impairs employees' innovativeness through increased irritation and decreased concentration. Organizations can help their employees cope better with this insecurity by communicating more openly and providing different support structures. Support often comes from leadership, and the support of a supervisor can clearly shape an employee's motivation to show creative behaviour. Wenjing Cai, Evgenia Lysova, Bart A. G. Bossink, Svetlana N. Khapova and Weidong Wang report empirical findings from a large-scale survey in China, where they find that supervisor support for creativity and job characteristics effectively activate the individual psychological capital associated with employee creativity.
On a slightly different note, Gisela Bäcklander looks at agile practices in a very well-known high-tech firm. In "Doing Complexity Leadership Theory: How agile coaches at Spotify practice enabling leadership", she researches the role of agile coaches and how they practice enabling leadership, a key balancing force in complexity leadership. She finds that the active involvement of coaches in observing group dynamics, surfacing conflict, and facilitating and encouraging constructive dialogue leads to a positive working environment and the well-being of employees. Quotes from the interviews suggest that the flexible structure provided by the coaches may prove a fruitful way to navigate and balance autonomy and alignment in organizations.
The fifth paper, by Frederik Anseel, Michael Vandamme, Wouter Duyck and Eric Rietzchel, goes a little further down this road and investigates how groups can be better motivated to select truly creative ideas. We know from earlier studies that groups often perform rather poorly when it comes to selecting creative ideas for implementation. The authors find in an extensive field experiment that under conditions of high epistemic motivation, proself-motivated groups select significantly more creative and original ideas than prosocial groups. They conclude, however, that more research is needed to better understand why these differences occur. The prosocial behaviour of groups is also the theme of Karin Moser, Jeremy F. Dawson and Michael A. West's paper on "Antecedents of team innovation in health care teams". They look at team-level motivation and how a prosocial team environment, indicated by the level of helping behaviour and information sharing, may foster innovation. Their results support the hypothesized effects of both information sharing and helping behaviour on team innovation. They suggest that both factors may actually act as a buffer against constraints in teamwork, such as large team size or high occupational diversity in cross-functional health care teams, and potentially turn these into resources supporting team innovation rather than barriers.
Moving away from teams and toward designing favourable work environments, the seventh paper, by Ferney Osorio, Laurent Dupont, Mauricio Camargo, Pedro Palominos, Jose Ismael Pena and Miguel Alfaro, looks into innovation laboratories. Although several studies have tackled the problem of the design, development and sustainability of these spaces for innovation, there is still a gap in understanding how the capabilities and performance of these environments are affected by the strategic intentions at the early stages of their design and functioning. The authors analyse and compare eight existing frameworks from the literature and propose a new framework for researchers and practitioners aiming to assess or adapt innovation laboratories. They test their framework in an exploratory study with fifteen laboratories from five different countries and give recommendations for the future design of such laboratories. From design to design thinking goes our last paper, by Rama Krishna Reddy Kummitha, on "Design Thinking in Social Organisations: Understanding the role of user engagement", in which she studies how users persuade social organisations to adopt design thinking. Looking at four social organisations in India from 2008 to 2013, she finds that designer roles become blurred when social organisations adopt design thinking, while users, in the form of interconnecting agencies, reduce the gap between designers and communities.
The last two articles were developed from papers presented at the 17th International CINet conference, organized in Turin in 2016 by Paolo Neirotti and his colleagues. In the first article, Fábio Gama, Johan Frishammar and Vinit Parida focus on ideation and open innovation in small and medium-sized enterprises. They investigate the relationship between systematic idea generation and performance, and the moderating role of market-based partnerships. Based on a survey among manufacturing SMEs, they conclude that higher levels of performance are reached, and that collaboration with customers and suppliers pays off most, when idea generation is done in a highly systematic way. The second article, by Anna Holmquist, Mats Magnusson and Mona Livholts, resonates with the theme of the CINet conference, 'Innovation and Tradition: combining the old and the new'. They explore how tradition is used in craft-based design practices to create new meaning. Applying a narrative 'research through design' approach, they uncover important design elements and the tensions between them.
Please enjoy this first issue of CIM in 2019 and we wish you creativity and innovation without too much stress in the months to come.
An efficient selection of indexes is indispensable for database performance. For large problem instances with hundreds of tables, existing approaches are not suitable: They either exhibit prohibitive runtimes or yield far from optimal index configurations by strongly limiting the set of index candidates or not handling index interaction explicitly. We introduce a novel recursive strategy that does not exclude index candidates in advance and effectively accounts for index interaction. Using large real-world workloads, we demonstrate the applicability of our approach. Further, we evaluate our solution end to end with a commercial database system using a reproducible setup. We show that our solutions are near-optimal for small index selection problems. For larger problems, our strategy outperforms state-of-the-art approaches in both scalability and solution quality.
Short-period double degenerate white dwarf (WD) binaries with periods of less than ~1 day are considered to be one of the likely progenitors of type Ia supernovae. These binaries have undergone a period of common envelope evolution. If the core ignites helium before the envelope is ejected, a hot subdwarf remains prior to contracting into a WD. Here we present a comparison of two very rare systems that each contain two hot subdwarfs in a short-period orbit. We provide a quantitative spectroscopic analysis of the systems using synthetic spectra from state-of-the-art non-LTE models to constrain the atmospheric parameters of the stars. We also use these models to determine the radial velocities, and thus calculate dynamical masses for the stars in each system.
Interactive Close-Up Rendering for Detail+Overview Visualization of 3D Digital Terrain Models
(2019)
This paper presents an interactive rendering technique for detail+overview visualization of 3D digital terrain models using interactive close-ups. A close-up is an alternative presentation of input data, varying with respect to geometric scale, mapping, appearance, as well as the Level-of-Detail (LOD) and Level-of-Abstraction (LOA) used. The presented 3D close-up approach enables in-situ comparison of multiple Regions-of-Interest (ROIs) simultaneously. We describe a GPU-based rendering technique for the image synthesis of multiple close-ups in real time.
The availability of detailed virtual 3D building models, including representations of indoor elements, allows for a wide range of applications requiring effective exploration and navigation functionality. Depending on the application context, users should be enabled to focus on specific Objects-of-Interest (OOIs) or important building elements. This requires approaches to filtering building parts as well as techniques to visualize important building objects and their relations. To this end, this paper explores the application and combination of interactive rendering techniques as well as their semantically driven configuration in the context of 3D indoor models.
A fundamental task in 3D geovisualization and GIS applications is the visualization of vector data that can represent features such as transportation networks or land-use coverage. Mapping or draping vector data represented by geometric primitives (e.g., polylines or polygons) onto 3D digital elevation or terrain models is a challenging task. We present an interactive GPU-based approach that performs geometry-based draping of vector data on a per-frame basis, using only an image-based representation of a 3D digital elevation or terrain model.
Stress and bone health
(2019)
An essential, respected, and critical aspect of the modern practice of science and scientific publishing is peer review. The process of peer review facilitates best practices in scientific conduct and communication, ensuring that published manuscripts are as accurate, valuable, and clearly communicated as possible. The more than 216 papers published in Tectonics in 2018 benefited from the time, effort, and expertise of our reviewers, who provided thoughtfully considered advice on each manuscript. This role is critical to advancing our understanding of the evolution of the continents and their margins, as these reviews lead to even clearer and higher-quality papers. In 2018, the more than 443 papers submitted to Tectonics were the beneficiaries of more than 1,010 reviews provided by 668 members of the tectonics community and related disciplines. To everyone who has volunteered their time and intellect to peer reviewing: thank you for helping Tectonics and all other AGU Publications provide the best science possible.
We review the evidence for a putative early 21st-century divergence between global mean surface temperature (GMST) and Coupled Model Intercomparison Project Phase 5 (CMIP5) projections. We provide a systematic comparison between temperatures and projections using historical versions of GMST products and historical versions of model projections that existed at the times when claims about a divergence were made. The comparisons are conducted with a variety of statistical techniques that correct for problems in previous work, including using continuous trends and a Monte Carlo approach to simulate internal variability. The results show that there is no robust statistical evidence for a divergence between models and observations. The impression of a divergence early in the 21st century was caused by various biases in model interpretation and in the observations, and was unsupported by robust statistics.
Cardiovascular drift response over two different constant-load exercises in healthy non-athletes
(2019)
Cardiovascular drift (CV-d) is a steady increase in heart rate (HR) over time while performing constant-load moderate-intensity exercise (CME) for more than 20 min. CV-d presents problems for the prescription of exercise intensity by means of HR, because the work rate (WR) during exercise must be adjusted to maintain the target HR, thus disturbing the intended effect of the exercise intervention. It has been shown that the increase in HR during CME is due to changes in WR and not to CV-d.
Domain-specific physical activity patterns and cardiorespiratory fitness among adults in Germany
(2019)
Background: Studies show that occupational physical activity (OPA) has weaker health-enhancing effects than leisure-time physical activity (LTPA). The sparse data available suggest, as a possible explanation, that OPA rarely includes aerobic PAs and thus has little or no enhancing effect on cardiorespiratory fitness (CRF). This study aims to investigate the associations between patterns of OPA and LTPA and CRF among adults in Germany. Methods: 1,204 men and 1,303 women (18-64 years), who participated in the German Health Interview and Examination Survey 2008-2011, completed a standardized sub-maximal cycle ergometer test to estimate maximal oxygen consumption (VO2max). Job positions were coded according to the level of physical effort to construct an occupational PA index, categorized as low vs. high OPA. LTPA was assessed via questionnaires and dichotomized into no vs. any LTPA participation. A combined OPA/LTPA variable was used (high OPA/LTPA, low OPA/LTPA, high OPA/no LTPA, low OPA/no LTPA). Information on potential confounders was obtained via questionnaires (e.g., smoking and education) or physical measurements (e.g., waist circumference). Multivariable logistic regression was used to analyze associations between OPA/LTPA patterns and VO2max. Results: Preliminary analyses showed that less-active men were more likely to have a low VO2max, with odds ratios (ORs) of 0.80 for low OPA/LTPA, 1.84 for high OPA/no LTPA and 3.46 for low OPA/no LTPA compared to high OPA/LTPA. The corresponding ORs for women were 1.11 for low OPA/LTPA, 3.99 for high OPA/no LTPA and 2.44 for low OPA/no LTPA, indicating the highest likelihood of low fitness for women working in physically demanding jobs and not engaging in LTPA. Conclusions: Findings confirm a strong association between LTPA and CRF and suggest an interaction between OPA and LTPA patterns on CRF within the workforce in Germany. Women without LTPA are at high risk of having a low CRF, especially if they work in physically demanding jobs. Key messages: Women not practicing leisure-time physical activity are at risk of having a low cardiorespiratory fitness, especially if they work in physically demanding jobs. The different impacts of the domains of physical activity should be considered when planning interventions to enhance fitness among the adult population.
Editorial
(2019)
Words as social tools
(2019)
The target article discusses the question of how educational makerspaces can become places supportive of knowledge construction. This question is too often neglected by people who run makerspaces, as they mostly explain how to use different tools and focus on the creation of a product. In makerspaces, pupils often also engage in physical computing activities, and thus in the creation of interactive artifacts containing embedded systems, such as smart shoes or wristbands, plant monitoring systems, or drink-mixing machines. This offers the opportunity to reflect on the teaching of physical computing in computer science education, where, similarly, the creation of the product is often focused on so strongly that reflection on the learning process is pushed into the background.
Foreword
(2019)
BIOMEX (BIOlogy and Mars EXperiment) is an ESA/Roscosmos space exposure experiment housed within the exposure facility EXPOSE-R2 outside the Zvezda module on the International Space Station (ISS). The design of the multiuser facility supports, among others, the BIOMEX investigations into the stability and level of degradation of space-exposed biosignatures such as pigments, secondary metabolites, and cell surfaces in contact with a terrestrial and Mars analog mineral environment. In parallel, analysis of the viability of the investigated organisms has provided relevant data for evaluating the habitability of Mars, the limits of life, and the likelihood of an interplanetary transfer of life (theory of lithopanspermia). In this project, lichens, archaea, bacteria, cyanobacteria, snow/permafrost algae, meristematic black fungi, and bryophytes from alpine and polar habitats were embedded, grown, and cultured on a mixture of martian and lunar regolith analogs or other terrestrial minerals. The organisms and regolith analogs and terrestrial mineral mixtures were then exposed to space and to simulated Mars-like conditions by way of the EXPOSE-R2 facility. In this special issue, we present the first set of data obtained in reference to our investigation into the habitability of Mars and the limits of life. This project was initiated and implemented by the BIOMEX group, an international and interdisciplinary consortium of 30 institutes in 12 countries on 3 continents. Preflight tests for sample selection, results from ground-based simulation experiments, and the space experiments themselves are presented and include a complete overview of the scientific processes required for this space experiment and postflight analysis. The presented BIOMEX concept could be scaled up to future exposure experiments on the Moon and will serve as a pretest in low Earth orbit.
Cold-regulated protein 15A (COR15A) is a nuclear-encoded, intrinsically disordered protein found in Arabidopsis thaliana. It belongs to the Late Embryogenesis Abundant (LEA) family of proteins and is responsible for increased freezing tolerance in plants. COR15A is intrinsically disordered in dilute solutions and adopts a helical structure upon dehydration or in the presence of co-solutes such as TFE and ethylene glycol. This helical structure is thought to be important for protecting plants from dehydration induced by freezing. Multiple protein sequence alignments revealed the presence of several conserved glycine residues that we hypothesize keep COR15A from becoming helical in dilute solutions. Using AGADIR, the change in helical content of COR15A when these conserved glycine residues are mutated to alanine was predicted. Based on the predictions, glycine-to-alanine mutants were made at position 68 and at positions 54, 68, 81, and 84. Labeled samples of wild-type COR15A and mutant proteins were purified, and NMR experiments were performed to examine any structural changes induced by the mutations. To test the effects of dehydration on the structure of COR15A, trifluoroethanol (TFE), an alcohol-based co-solvent proposed to induce or stabilize helical structure in peptides, was added to the NMR samples; the results showed an increase in helical content compared to the samples without TFE. To test the functional differences between the wild type and the mutants, liposome leakage assays were performed. The results from these assays suggest that the more helical mutants may augment membrane stability.
A distinguishing feature of Answer Set Programming is that all atoms belonging to a stable model must be founded. That is, an atom must not only be true but provably true. This can be made precise by means of the constructive logic of Here-and-There, whose equilibrium models correspond to stable models. One way of looking at foundedness is to regard Boolean truth values as ordered by letting true be greater than false. Then, each Boolean variable takes the smallest truth value that can be proven for it. This idea was generalized by Aziz to ordered domains and applied to constraint satisfaction problems. As before, the idea is that a variable, over the integers say, only gets assigned the smallest integer that can be justified. In this paper, we present a logical reconstruction of Aziz's idea in the setting of the logic of Here-and-There. More precisely, we start by defining the logic of Here-and-There with lower-bound-founded variables along with its equilibrium models and elaborate upon its formal properties. Finally, we compare our approach with related ones and sketch future work.
Predictive coding and its generalization to active inference offer a unified theory of brain function. The underlying predictive processing paradigm has gained significant attention in artificial intelligence research for its representation learning and predictive capacity. Here, we suggest that it is possible to integrate human and artificial generative models with a predictive coding network that processes sensations simultaneously with the signature of predictive coding found in human neuroimaging data. We propose a recurrent hierarchical predictive coding model that predicts low-dimensional representations of stimuli, electroencephalogram, and physiological signals with variational inference. We suggest that, in a shared environment, such hybrid predictive coding networks learn to incorporate the human predictive model in order to reduce prediction error. We evaluate the model on a publicly available EEG dataset of subjects watching one-minute-long video excerpts. Our initial results indicate that the model can be trained to predict visual properties such as the number, distance, and motion of human subjects in videos.
alt'ai is an agent-based simulation inspired by the aesthetics, culture, and environmental conditions of the Altai mountain region on the borders between Russia, Kazakhstan, China, and Mongolia. It is set in a scenario of a remote automated landscape populated by sentient machines, where biological species, machines, and environments autonomously interact to produce unforeseeable visual outputs. It poses the question of how to design future machine-to-machine authentication protocols based on the use of images encoding agent behavior, and the simulation provides a rich visual perspective on this challenge. The project pleads for a heavily aestheticized approach to design practice and highlights the importance of productively inefficient and information-redundant systems.
Working in iterations and repeatedly improving team workflows based on collected feedback is fundamental to agile software development processes. Scrum, the most popular agile method, provides dedicated retrospective meetings to reflect on the last development iteration and to decide on process improvement actions. However, agile methods do not prescribe how these improvement actions should be identified, managed or tracked in detail. The approaches to detect and remove problems in software development processes are therefore often based on intuition and prior experiences and perceptions of team members. Previous research in this area has focused on approaches to elicit a team's improvement opportunities as well as measurements regarding the work performed in an iteration, e.g. Scrum burn-down charts. Little research deals with the quality and nature of identified problems or how progress towards removing issues is measured. In this research, we investigate how agile development teams in the professional software industry organize their feedback and process improvement approaches. In particular, we focus on the structure and content of improvement and reflection meetings, i.e. retrospectives, and their outcomes. Researching how the vital mechanism of process improvement is implemented in practice in modern software development leads to a more complete picture of agile process improvement.
Feedback in Scrum
(2019)
Improving the way that teams work together by reflecting and improving the executed process is at the heart of agile processes. The idea of iterative process improvement takes various forms in different agile development methodologies, e.g. Scrum Retrospectives. However, these methods do not prescribe how improvement steps should be conducted in detail. In this research we investigate how agile software teams can use their development data, such as commits or tickets, created during regular development activities, to drive and track process improvement steps. Our previous research focused on data-informed process improvement in the context of student teams, where controlled circumstances and deep domain knowledge allowed creation and usage of specific process measures. Encouraged by positive results in this area, we investigate the process improvement approaches employed in industry teams. Researching how the vital mechanism of process improvement is implemented and how development data is already being used in practice in modern software development leads to a more complete picture of agile process improvement. It is the first step in enabling a data-informed feedback and improvement process, tailored to a team's context and based on the development data of individual teams.
Monitoring is a key functionality for automated decision making, as performed by self-adaptive systems. Effective monitoring provides the relevant information on time. This can be achieved with exhaustive monitoring, which, however, causes a high overhead consumption of economical and ecological resources. In contrast, our generic adaptive monitoring approach supports effectiveness with increased efficiency. It also adapts to changes regarding the information demand and the monitored system without additional configuration and software implementation effort. The approach observes the executions of runtime model queries and processes change events to determine the currently required monitoring configuration. In this paper we explicate different possibilities to use the approach and evaluate their characteristics regarding phenomenon detection time and monitoring effort. Our approach allows balancing between these two characteristics, which makes it an interesting option for the monitoring function of self-adaptive systems, for which very short-lived phenomena are usually not relevant.
Introduction
(2019)
This book started as a conversation about successful societies and human development. It was originally based on a simple idea: it would be unusual if, in a society that might reasonably be deemed successful, its citizens were deeply unhappy. This combination of successful societies and happy citizens raised immediate and obvious problems. How might one define "success" when dealing, for example, with a society as large and as complex as the United States? We ran into equally major problems when trying to understand "happiness." Yet one constantly hears political analysts talking about the success or failure of various democratic institutions. In ordinary conversations one constantly hears people talking about being happy or unhappy. In the everyday world, conversations about living in a successful society or about being happy do not appear to cause bewilderment or confusion. "Ordinary people" do not appear to find questions like "Is your school successful?" or "Are you happily married?" meaningless or absurd. Yet, in the social sciences, both "successful societies" and "happy lives" are seen to be troublesome.
As our research into happiness and success unfolded, the conundrums we discussed were threefold: societal conditions, measurements, and concepts. What are the key social factors that are indispensable for the social and political stability of any given society? Is it possible to develop precise measures of social success that would give us reliable data? There is a range of economic indicators that might be associated with success, such as labor productivity, economic growth rates, low inflation, and a robust GDP. Are there equally reliable political and social measures of a successful society and human happiness? For example, the rule of law and the absence of large-scale corruption might be relevant to the assessment of societal happiness. These questions about success led us inexorably to what seems to be a futile notion: happiness. Economic variables such as income, or psychological measures of well-being in terms of mental health, could be easily analyzed; however, happiness is a dimension that has been elusive to the social sciences.
In our unfolding conversation, there was also another stream of thought, namely that the social sciences appeared to be more open to the study of human unhappiness rather than happiness.
Introduction
(2019)
We investigate how the technology acceptance and learning experience of the digital education platform HPI Schul-Cloud (HPI School Cloud) for German secondary school teachers can be improved by proposing a user-centered research and development framework. We highlight the importance of developing digital learning technologies in a user-centered way to take differences in the requirements of educators and students into account. We suggest applying qualitative and quantitative methods to build a solid understanding of a learning platform's users, their needs, requirements, and their context of use. After concept development and idea generation of features and areas of opportunity based on the user research, we emphasize the application of a multi-attribute utility analysis decision-making framework to prioritize ideas rationally, taking the results of user research into account. Afterward, we recommend applying the build-learn-iterate principle to build prototypes in different resolutions while learning from user tests and improving the selected opportunities. Last but not least, we propose an approach for continuous short- and long-term user experience controlling and monitoring, extending existing web analytics and learning analytics metrics.
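To make the multi-attribute utility analysis step concrete, here is a minimal sketch of the standard weighted-scoring form of such an analysis (the ideas, criteria, weights, and scores below are invented for illustration, not taken from the study):

```python
# Minimal sketch of a multi-attribute utility analysis (MAUA) for
# prioritizing feature ideas. All names and numbers are invented.

ideas = {
    "offline mode":     {"user value": 8, "feasibility": 3, "strategic fit": 6},
    "peer review tool": {"user value": 6, "feasibility": 7, "strategic fit": 9},
    "dark theme":       {"user value": 4, "feasibility": 9, "strategic fit": 3},
}

# Relative importance of each criterion; weights sum to 1.
weights = {"user value": 0.5, "feasibility": 0.2, "strategic fit": 0.3}

def utility(scores: dict) -> float:
    """Aggregate per-criterion scores (0-10) into one utility value."""
    return sum(weights[c] * s for c, s in scores.items())

# Rank ideas by descending utility.
for name, scores in sorted(ideas.items(), key=lambda kv: -utility(kv[1])):
    print(f"{name}: {utility(scores):.2f}")
```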
Network Creation Games are a well-known approach for explaining and analyzing the structure, quality, and dynamics of real-world networks like the Internet and other infrastructure networks which evolved via the interaction of selfish agents without a central authority. In these games, selfish agents corresponding to nodes in a network strategically buy incident edges to improve their centrality. However, past research on these games has only considered the creation of networks with unit-weight edges. In practice, e.g. when constructing a fiber-optic network, the choice of which nodes to connect and also the induced price for a link crucially depend on the distance between the involved nodes, and such settings can be modeled via edge-weighted graphs. We incorporate arbitrary edge weights by generalizing the well-known model by Fabrikant et al. [PODC'03] to edge-weighted host graphs and focus on the geometric setting where the weights are induced by the distances in some metric space. In stark contrast to the state of the art for the unit-weight version, where the Price of Anarchy is conjectured to be constant and where resolving this is a major open problem, we prove a tight non-constant bound on the Price of Anarchy for the metric version and a slightly weaker upper bound for the non-metric case. Moreover, we analyze the existence of equilibria, the computational hardness, and the game dynamics for several natural metrics. The model we propose can be seen as the game-theoretic analogue of a variant of the classical Network Design Problem. Thus, low-cost equilibria of our game correspond to decentralized and stable approximations of the optimum network design.
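As an illustration of the kind of cost function such games use (a sketch under our assumptions, not the paper's exact model): an agent pays the price of the edges it buys plus the sum of its shortest-path distances to all other nodes.

```python
# Illustrative sketch: an agent's cost in an edge-weighted network
# creation game, assumed here to be bought-edge prices plus the sum of
# shortest-path distances to all other nodes. Toy instance below.
import networkx as nx

def agent_cost(G: nx.Graph, v, bought_edges) -> float:
    """Cost of agent v: edge prices paid plus total distance to others."""
    edge_price = sum(G[u][w]["weight"] for u, w in bought_edges)
    dist = nx.single_source_dijkstra_path_length(G, v, weight="weight")
    return edge_price + sum(dist.values())

# Toy metric instance: three points on a line, link price = distance.
G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 1.0), (1, 2, 1.0)])
print(agent_cost(G, 0, bought_edges=[(0, 1)]))  # 1.0 + (0 + 1 + 2) = 4.0
```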
Tikhonov regularization with oversmoothing penalty for linear statistical inverse learning problems
(2019)
In this paper, we consider the linear ill-posed inverse problem with noisy data in the statistical learning setting. The Tikhonov regularization scheme in Hilbert scales is considered in the reproducing kernel Hilbert space framework to reconstruct the estimator from the random noisy data. We discuss the rates of convergence for the regularized solution under the prior assumptions and a link condition. For regression functions with smoothness given in terms of source conditions, the error bound can be established explicitly.
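For orientation, Tikhonov regularization in Hilbert scales is commonly written in the following standard form (our notation, not necessarily the paper's):

```latex
f_{\lambda} \;=\; \operatorname*{arg\,min}_{f}\;
\|T f - y\|^{2} \;+\; \lambda\, \|L^{s} f\|^{2},
```

where $T$ is the linear forward operator, $y$ the noisy data, $L$ the unbounded operator generating the Hilbert scale, and $s > 0$ the smoothness index of the penalty; "oversmoothing" refers to the case where the true solution does not lie in the domain of $L^{s}$.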
A Landscape for Case Models
(2019)
Case Management is a paradigm to support knowledge-intensive processes. The different approaches developed for modeling these types of processes tend to result in scattered models due to the low abstraction level at which the inherently complex processes are represented. Thus, readability and understandability are more challenging than for traditional process models. By reviewing existing proposals in the fields of process overviews and case models, this paper extends a case modeling language - the fragment-based Case Management (fCM) language - with the goal of modeling knowledge-intensive processes at a higher abstraction level, generating a so-called fCM landscape. This proposal is empirically evaluated via an online experiment. Results indicate that interpreting an fCM landscape might be more effective and efficient than interpreting an informationally equivalent case model.
The supercritical Hopf bifurcation is one of the simplest ways in which a stationary state of a nonlinear system can undergo a transition to stable self-sustained oscillations. At the bifurcation point, a small-amplitude limit cycle is born, which already at onset displays a finite frequency. If we consider a reaction-diffusion system that undergoes a supercritical Hopf bifurcation, its dynamics is described by the complex Ginzburg-Landau equation (CGLE). Here, we study such a system in the parameter regime where the CGLE shows spatio-temporal chaos. We review a class of time-delay feedback methods suitable for suppressing chaos and replacing it by other spatio-temporal solutions such as uniform oscillations, plane waves, standing waves, and the stationary state.
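For reference, the CGLE is usually written in the following standard form (our notation; $b$ and $c$ are the linear and nonlinear dispersion parameters):

```latex
\partial_t A \;=\; A \;+\; (1 + i b)\,\nabla^{2} A \;-\; (1 + i c)\,|A|^{2} A ,
```

where $A(\mathbf{x}, t)$ is the complex oscillation amplitude. Time-delay feedback schemes of the kind reviewed here typically add a control term such as the Pyragas-type $\mu e^{i\xi}\,[A(t-\tau) - A(t)]$ or a variant based on the spatial average of $A$; the precise scheme varies across the methods reviewed.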
Introduction
(2019)
Over the past decades, it has become more and more obvious that ongoing globalisation processes have substantial impacts on the natural environment. Studies reveal that intensified global economic relations have caused or accelerated dramatic changes in the Earth system, defined as the sum of our planet's interacting physical, chemical, biological, and human processes (Schellnhuber et al. 2004). Climate change, biodiversity loss, disrupted biogeochemical cycles, and land degradation are often cited as emblematic problems of global environmental change (Rockström et al. 2009; Steffen et al. 2015). In this context, the term Anthropocene has lately received widespread attention and gained some prominence in the academic literature.
Editorial
(2019)
In this paper, we consider counting and projected model counting of extensions in abstract argumentation for various semantics. When asking for projected counts, we are interested in counting the number of extensions of a given argumentation framework, where multiple extensions that are identical when restricted to the projected arguments count as only one projected extension. We establish classical complexity results and parameterized complexity results when the problems are parameterized by the treewidth of the undirected argumentation graph. To obtain upper bounds for counting projected extensions, we introduce novel algorithms that exploit small treewidth of the undirected argumentation graph of the input instance by dynamic programming (DP). Our algorithms run in time double or triple exponential in the treewidth, depending on the considered semantics. Finally, we take the exponential time hypothesis (ETH) into account and establish lower bounds for bounded-treewidth algorithms for counting extensions and projected extensions.
Mobile sensing technology allows us to investigate human behaviour on a daily basis. In this study, we examined temporal orientation, which refers to the capacity to think or talk about personal events in the past and future. We utilise the mksense platform, which allows us to use the experience-sampling method. Individuals' thoughts and their relationship with smartphone Bluetooth data are analysed to understand in which contexts people are influenced by social environments, such as the people they spend the most time with. As an exploratory study, we analyse the influence of social conditions through a collection of Bluetooth data and survey information from participants' smartphones. Preliminary results show that people are likely to focus on past events when interacting with closely related people, and to focus on future planning when interacting with strangers. Similarly, people experience a present temporal orientation when accompanied by known people. We believe that these findings are linked to emotions since, in its most basic state, emotion is a state of physiological arousal combined with an appropriate cognition. In this contribution, we envision a smartphone application for automatically inferring human emotions based on the user's temporal orientation by using Bluetooth sensors, we briefly elaborate on influential factors of temporal orientation episodes, and we conclude with a discussion and lessons learned.
Workload-Driven Fragment Allocation for Partially Replicated Databases Using Linear Programming
(2019)
In replication schemes, replica nodes can process read-only queries on snapshots of the master node without violating transactional consistency. By analyzing the workload, we can identify query access patterns and replicate data depending on its access frequency. In this paper, we define a linear programming (LP) model to calculate the set of partial replicas with the lowest overall memory capacity while evenly balancing the query load. Furthermore, we propose a scalable decomposition heuristic to calculate solutions for larger problem sizes. While guaranteeing the same performance as state-of-the-art heuristics, our decomposition approach calculates allocations with up to 23% lower memory footprint for the TPC-H benchmark.
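The following is a minimal sketch of the flavor of such an LP/ILP model (our simplification, not the paper's exact formulation): assign query classes to replicas so that every query's accessed fragments are present on its replica, total memory is minimized, and no replica exceeds its fair load share. Fragment sizes, query loads, and the access map are invented toy data.

```python
# Hypothetical sketch of a fragment-allocation ILP using PuLP.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

queries = {"q1": 30.0, "q2": 50.0, "q3": 20.0}      # load share per query
fragments = {"f1": 4.0, "f2": 2.0, "f3": 6.0}        # memory size per fragment
accesses = {"q1": {"f1"}, "q2": {"f2", "f3"}, "q3": {"f1", "f3"}}
replicas = ["r1", "r2"]
fair_share = sum(queries.values()) / len(replicas)

prob = LpProblem("fragment_allocation", LpMinimize)
assign = LpVariable.dicts("assign", (queries, replicas), cat=LpBinary)
holds = LpVariable.dicts("holds", (fragments, replicas), cat=LpBinary)

# Objective: minimize total memory across all replicas.
prob += lpSum(fragments[f] * holds[f][r] for f in fragments for r in replicas)

for q in queries:  # every query runs on exactly one replica
    prob += lpSum(assign[q][r] for r in replicas) == 1
for q in queries:  # a replica running q must hold all fragments q accesses
    for r in replicas:
        for f in accesses[q]:
            prob += assign[q][r] <= holds[f][r]
for r in replicas:  # (rough) load balancing: no replica above its share
    prob += lpSum(queries[q] * assign[q][r] for q in queries) <= fair_share

prob.solve()
print("total memory:", value(prob.objective))
```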
Increasing demand for analytical processing capabilities can be managed by replication approaches. However, to evenly balance the replicas' workload shares while at the same time minimizing the data replication factor is a highly challenging allocation problem. As optimal solutions are only applicable for small problem instances, effective heuristics are indispensable. In this paper, we test and compare state-of-the-art allocation algorithms for partial replication. By visualizing and exploring their (heuristic) solutions for different benchmark workloads, we are able to derive structural insights and to detect an algorithm's strengths as well as its potential for improvement. Further, our application enables end-to-end evaluations of different allocations to verify their theoretical performance.
Preface
(2019)
Currently, we can observe a transformation of our technical world into a networked one in which, besides embedded systems and their interaction with the physical world, the interconnection of these nodes in the cyber world becomes a reality. In parallel, there is nowadays a strong trend to employ artificial intelligence techniques, and in particular machine learning, to make software behave smart. Cyber-physical systems must often be self-adaptive at the level of the individual system in order to operate as elements in open, dynamic, and deviating overall structures and to adapt to open and dynamic contexts while being developed, operated, evolved, and governed independently.
In this presentation, we will first discuss the envisioned future scenarios for cyber-physical systems, with an emphasis on the synergies networking can offer, and then characterize the challenges for the design, production, and operation of these systems that result. We will then discuss to what extent our current capabilities, in particular concerning software engineering, match these challenges, and where substantial improvements to software engineering are crucial. In today's software engineering for embedded systems, models are used to plan systems upfront, to maximize envisioned properties on the one hand and minimize cost on the other. When applying the same ideas to software for smart cyber-physical systems, it soon turned out that these systems often exhibit somewhat more subtle links between the involved models and the requirements, users, and environment. Self-adaptation and runtime models have been advocated as concepts to cover the demands that result from these subtler links. Lately, both trends have been brought together more thoroughly by the notion of self-aware computing systems. We will review the underlying causes, discuss some of our work in this direction, and outline related open challenges and potential for future approaches to software engineering for smart cyber-physical systems.
Mobile operating systems, such as Google's Android, have become a fixed part of our daily lives and are entrusted with a plethora of private information. Accordingly, their data protection mechanisms have been improved steadily over the last decade, and, in particular for Android, the research community has explored various enhancements and extensions to the access control model. However, the vast majority of those solutions have been concerned with controlling access to data; equally important is the question of how to control the flow of data once released. Ignoring control over the dissemination of data between applications or between components of the same app opens the door for attacks such as permission re-delegation or privacy-violating third-party libraries. Controlling information flows is a long-standing problem, and one of the most recent and practice-oriented approaches to information flow control is secure multi-execution.
In this paper, we present Ariel, the design and implementation of an IFC architecture for Android based on the secure multi-execution of apps. Ariel demonstrably extends Android's system with support for executing multiple instances of apps, and it is equipped with a policy lattice derived from the protection levels of Android's permissions as well as an I/O scheduler to achieve control over data flows between application instances. We demonstrate how secure multi-execution with Ariel can help to mitigate two prominent attacks on Android: permission re-delegation and malicious advertisement libraries.
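To make the policy-lattice idea concrete, here is a minimal sketch under our own assumption of a simple linear ordering over Android permission protection levels (Ariel's actual lattice may be richer):

```python
# Hypothetical sketch: a linearly ordered lattice of security levels
# derived from Android permission protection levels, with a can-flow
# check that forbids information from leaking downward.

LEVELS = {"normal": 0, "dangerous": 1, "signature": 2}

def can_flow(src: str, dst: str) -> bool:
    """Information may only flow to an equal or higher level (no leaks down)."""
    return LEVELS[src] <= LEVELS[dst]

# An instance at "dangerous" may receive "normal"-level data...
assert can_flow("normal", "dangerous")
# ...but "dangerous" data must not reach a "normal"-level instance.
assert not can_flow("dangerous", "normal")
```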
JavaScript is the most popular programming language for web applications. Static analysis of JavaScript applications is highly challenging due to the language's dynamic constructs and event-driven asynchronous executions, which also give rise to many security-related bugs. Several static analysis tools to detect such bugs exist; however, research has not yet reported much on the precision and scalability trade-offs of these analyzers. As a further obstacle, JavaScript programs structured in Node.js modules need to be collected for analysis, but existing bundlers are either specific to their respective analysis tools or not particularly suitable for static analysis.
Network science is driven by the question which properties large real-world networks have and how we can exploit them algorithmically. In the past few years, hyperbolic graphs have emerged as a very promising model for scale-free networks. The connection between hyperbolic geometry and complex networks gives insights in both directions: (1) Hyperbolic geometry forms the basis of a natural and explanatory model for real-world networks. Hyperbolic random graphs are obtained by choosing random points in the hyperbolic plane and connecting pairs of points that are geometrically close. The resulting networks share many structural properties for example with online social networks like Facebook or Twitter. They are thus well suited for algorithmic analyses in a more realistic setting. (2) Starting with a real-world network, hyperbolic geometry is well-suited for metric embeddings. The vertices of a network can be mapped to points in this geometry, such that geometric distances are similar to graph distances. Such embeddings have a variety of algorithmic applications ranging from approximations based on efficient geometric algorithms to greedy routing solely using hyperbolic coordinates for navigation decisions.
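A minimal sketch of the generative side of this model (standard construction; parameter choices are illustrative): sample points in a hyperbolic disk of radius R and connect pairs whose hyperbolic distance is below R.

```python
# Minimal sketch of sampling a hyperbolic random graph.
import math
import random

def sample_point(R: float, alpha: float = 1.0):
    """Radius with density ~ sinh(alpha*r), angle uniform in [0, 2*pi)."""
    u = random.random()
    r = math.acosh(1 + u * (math.cosh(alpha * R) - 1)) / alpha
    theta = random.uniform(0, 2 * math.pi)
    return r, theta

def hyp_dist(p, q) -> float:
    """Hyperbolic distance between two points in polar coordinates."""
    r1, t1 = p
    r2, t2 = q
    dtheta = math.pi - abs(math.pi - abs(t1 - t2))
    arg = (math.cosh(r1) * math.cosh(r2)
           - math.sinh(r1) * math.sinh(r2) * math.cos(dtheta))
    return math.acosh(max(arg, 1.0))  # clamp guards floating-point error

n, R = 200, 6.0
pts = [sample_point(R) for _ in range(n)]
edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if hyp_dist(pts[i], pts[j]) <= R]
print(f"{n} vertices, {len(edges)} edges")
```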
Monitoring is a key prerequisite for self-adaptive software and many other forms of operating software. Monitoring relevant lower-level phenomena, like the occurrence of exceptions and diagnosis data, requires carefully examining which detailed information is really necessary and feasible to monitor. Adaptive monitoring permits observing a greater variety of details with less overhead, if most of the time the MAPE-K loop can operate using only a small subset of all those details. However, engineering such adaptive monitoring is a major engineering effort on its own that further complicates the development of self-adaptive software. The proposed approach overcomes the outlined problems by providing generic adaptive monitoring via runtime models. It reduces the effort to introduce and apply adaptive monitoring by avoiding additional development effort for controlling the monitoring adaptation. Although the generic approach is independent of the monitoring purpose, it still allows for substantial savings in monitoring resource consumption, as demonstrated by an example.
Rapid advances in location-acquisition technologies have led to large amounts of trajectory data. This data is the foundation for a broad spectrum of services driven and improved by trajectory data mining. However, for hybrid transactional and analytical workloads, the storing and processing of rapidly accumulating trajectory data is a non-trivial task. In this paper, we present a detailed survey of state-of-the-art trajectory data management systems. To determine the relevant aspects of and requirements for such systems, we developed a trajectory data mining framework, which summarizes the different steps in the trajectory data mining process. Based on the derived requirements, we analyze different concepts to store, compress, index, and process spatio-temporal data. There are various trajectory management systems, which are optimized for scalability, data footprint reduction, elasticity, or query performance. To get a comprehensive overview, we describe and compare different existing systems. Additionally, the observed similarities in the general structure of different systems are consolidated in a general blueprint of trajectory management systems.
Leveraging spatio-temporal soccer data to define a graphical query language for game recordings
(2019)
For professional soccer clubs, performance and video analysis are an integral part of the preparation and post-processing of games. Coaches, scouts, and video analysts extract information about the strengths and weaknesses of their own team as well as opponents by manually analyzing video recordings of past games. Since video recordings are an unstructured data source, it is a complex and time-intensive task to find specific game situations and identify similar patterns. In this paper, we present a novel approach to detect patterns and situations (e.g., playmaking and ball passing of midfielders) based on trajectory data. The application uses the metaphor of a tactic board to offer a graphical query language. With this interactive tactic board, the user can model a game situation or mark a specific situation in a video recording, for which all matching occurrences in various games are immediately displayed, and the user can directly jump to the corresponding game scene. Through the additional visualization of key performance indicators (e.g., the physical load of the players), the user can get a better overall assessment of situations. With the capability to find specific game situations and complex patterns in video recordings, the interactive tactic board serves as a useful tool to improve the video analysis process of professional sports teams.
New Public Governance (NPG) as a paradigm for collaborative forms of public service delivery and Blockchain governance are trending topics for researchers and practitioners alike. Thus far, each topic has, on the whole, been discussed separately. This paper presents the preliminary results of ongoing research which aims to shed light on the more concrete benefits of Blockchain for the purpose of NPG. For the first time, a conceptual analysis is conducted on process level to spot benefits and limitations of Blockchain-based governance. Per process element, Blockchain key characteristics are mapped to functional aspects of NPG from a governance perspective. The preliminary results show that Blockchain offers valuable support for governments seeking methods to effectively coordinate co-producing networks. However, the extent of benefits of Blockchain varies across the process elements. It becomes evident that there is a need for off-chain processes. It is, therefore, argued in favour of intensifying research on off-chain governance processes to better understand the implications for and influences on on-chain governance.
Leben in der ehemaligen DDR (Life in the former GDR)
(2019)
Monte-Carlo calculations are carried out to simulate light transport in dense materials. The focus lies on the calculation of diffuse light transmission through films of scattering and absorbing media, additionally considering the effect of dependent scattering. Different influences, such as the type of interaction between particles, particle size, and composition, can be studied with this program. The simulations in this study reveal major influences on the diffuse transmission. Further simulations are carried out to model a sunscreen film and to study the best compositions of such a film; these results will also be presented.
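As a minimal sketch of this kind of calculation (independent scattering only, isotropic phase function; the optical parameters are illustrative, not fitted to any sunscreen formulation):

```python
# Monte-Carlo sketch: diffuse transmission through a slab of a
# scattering and absorbing medium.
import math
import random

def transmittance(mu_s: float, mu_a: float, thickness: float,
                  n_photons: int = 100_000) -> float:
    mu_t = mu_s + mu_a                 # total interaction coefficient
    albedo = mu_s / mu_t               # scatter (vs. absorb) probability
    transmitted = 0
    for _ in range(n_photons):
        z, uz = 0.0, 1.0               # start at the surface, heading inward
        while True:
            # Free path length sampled from an exponential distribution.
            step = -math.log(1.0 - random.random()) / mu_t
            z += uz * step
            if z >= thickness:
                transmitted += 1       # photon leaves through the back
                break
            if z < 0.0:
                break                  # photon leaves through the front
            if random.random() > albedo:
                break                  # absorbed
            uz = random.uniform(-1.0, 1.0)  # isotropic: uniform direction cosine
    return transmitted / n_photons

print(transmittance(mu_s=10.0, mu_a=0.5, thickness=0.1))
```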
User-generated content on social media platforms is a rich source of latent information about individual variables. Crawling and analyzing this content provides a new approach for enterprises to personalize services and put forward product recommendations. In the past few years, brands have made a gradual appearance on social media platforms for advertisement, customer support, and public relations purposes, and by now this has become a necessity across all branches. This online identity can be represented as a brand personality that reflects how a brand is perceived by its customers. We exploited recent research in text analysis and personality detection to build an automatic brand personality prediction model on top of Five-Factor Model and Linguistic Inquiry and Word Count (LIWC) features extracted from publicly available benchmarks. The proposed model reported significant accuracy in predicting specific personality traits from brands. To evaluate our prediction results on actual brands, we crawled the Facebook API for 100k posts from the most valuable brands' pages in the USA; we visualize exemplary comparison results and present suggestions for future directions.
The Schwarzenberg mining district in the western Erzgebirge hosts numerous skarn-hosted tin-polymetallic deposits, such as Breitenbrunn. The St. Christoph mine is located in the Breitenbrunn deposit and is the locus typicus of christophite, an iron-rich sphalerite variety that can be associated with indium enrichment. This study presents a revision of the paragenetic scheme, a contribution to understanding indium behavior and potential, and a discussion of the origin of the sulfur. This was achieved through reflected-light microscopy, SEM-based MLA, EPMA, and bulk mineral sulfur isotope analysis of 37 sulfide-rich skarn samples from a mineral collection. The paragenetic scheme includes: a pre-mineralization stage of anhydrous calc-silicates and hydrous minerals; an oxide stage dominated by magnetite; and a sulfide stage of predominantly sphalerite with minor pyrite, chalcopyrite, arsenopyrite, and galena. Some sphalerite samples show elevated indium contents of up to 0.44 wt%. Elevated iron contents (4-10 wt%) in sphalerite can tentatively be linked to increased indium incorporation, but further analyses are required. The analyzed sulfides exhibit homogeneous δ34S values (-1 to +2 ‰ VCDT), assumed to be post-magmatic. They correlate with other Fe-Sn-Zn-Cu-In skarn deposits in the western Erzgebirge and with Permian vein-hosted associations throughout the Erzgebirge region.
Recent research indicates that non-invasive stimulation of the afferent auricular vagal nerve (tVNS) may modulate various cognitive and affective functions, likely via activation of the locus coeruleus-norepinephrine (LC-NE) system. In a series of ERP studies we found that the attention-related P300 component is enhanced during continuous vagal stimulation compared to sham, which is also related to increased salivary alpha amylase levels (a putative indirect marker of central NE activation). In another study, we investigated the effect of continuous tVNS on the late positive potential (LPP), an electrophysiological index of motivated attention toward emotionally evocative cues, and the effects of tVNS on later recognition memory (1-week delay). Here, vagal stimulation prompted earlier LPP differences (300-500 ms) between unpleasant and neutral scenes. During retrieval, vagal stimulation significantly improved memory performance for unpleasant, but not neutral, pictures compared to sham stimulation, which was also related to enhanced salivary alpha amylase levels. In line with this, unpleasant images encoded under tVNS, compared to sham stimulation, also produced enhanced ERP old/new differences (500-800 ms) during retrieval, indicating better recollection. Taken together, our studies suggest that tVNS facilitates attention, learning, and episodic memory, likely via afferent projections to the arousal-modulating LC-NE system. We will, however, also show data that point to critical stimulation parameters (likely duration and frequency) that need to be considered when applying tVNS.
Zero-shot learning in Language & Vision is the task of correctly labelling (or naming) objects of novel categories. Another strand of work in L&V aims at pragmatically informative rather than "correct" object descriptions, e.g., in reference games. We combine these lines of research and model zero-shot reference games, where a speaker needs to successfully refer to a novel object in an image. Inspired by models of "rational speech acts", we extend a neural generator to become a pragmatic speaker that reasons about uncertain object categories. As a result of this reasoning, the generator produces fewer nouns and names of distractor categories compared to a literal speaker. We show that this conversational strategy for dealing with novel objects often improves communicative success, in terms of the resolution accuracy of an automatic listener.
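For readers unfamiliar with rational-speech-acts (RSA) reasoning, the following toy sketch shows the core computation: a literal listener normalizes a (soft) lexicon over objects, and a pragmatic speaker prefers utterances that make the target easy to resolve. The lexicon values and the rationality parameter are illustrative assumptions, and the paper's actual model is a neural generator, not this tabular version.

    import numpy as np

    # Rows: utterances; columns: objects. Entries are soft literal applicability.
    # The novel object (column 2) has an uncertain category, so noun truth
    # values are graded rather than 0/1.
    utterances = ["dog", "ball", "the round one"]
    lexicon = np.array([
        [1.0, 0.0, 0.1],   # the noun "dog" barely applies to the novel object
        [0.0, 1.0, 0.3],
        [0.1, 0.9, 0.8],   # an attribute-based description is safer
    ])

    def literal_listener(lex):
        # P_L0(object | utterance): normalize each utterance row over objects.
        return lex / lex.sum(axis=1, keepdims=True)

    def pragmatic_speaker(lex, target, alpha=4.0):
        # P_S1(utterance | target): softmax of alpha * log L0 for the target.
        scores = alpha * np.log(literal_listener(lex)[:, target] + 1e-9)
        probs = np.exp(scores - scores.max())
        return probs / probs.sum()

    for u, p in zip(utterances, pragmatic_speaker(lexicon, target=2)):
        print(f"{u:15s} {p:.2f}")

Run on the novel object, the speaker shifts probability mass away from category nouns toward the attribute description, mirroring the "fewer nouns" behaviour reported in the abstract.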
Bottom-up saliency is often cited as a factor driving the choice of fixation locations of human observers, based on the (partial) success of saliency models in predicting fixation densities in free viewing. However, these observations are only weak evidence for a causal role of bottom-up saliency in natural viewing behaviour. To test bottom-up saliency more directly, we analyse the performance of a number of saliency models, including our own model based on our recently published model of early visual processing (Schütt & Wichmann, 2017, JoV), as well as the theoretical limits for predictions over time. On free-viewing data, our model performs better than classical bottom-up saliency models, but worse than current deep-learning-based saliency models that incorporate higher-level information such as knowledge about objects. On search data, however, all saliency models perform worse than the optimal image-independent prediction. We observe that the fixation density in free viewing is not stationary over time, but changes over the course of a trial. It starts with a pronounced central fixation bias on the first chosen fixation, which is nonetheless influenced by image content. From the 2nd to 3rd fixation onward, the fixation density is already well predicted by later densities, although it is more concentrated. From there the fixation distribution broadens until it reaches a stationary distribution around the 10th fixation. Taken together, these observations argue against bottom-up saliency as a mechanistic explanation for eye movement control after the initial orienting reaction in the first one to two saccades, although we confirm the predictive value of early visual representations for fixation locations. The fixation distribution is, first, not well described by any stationary density; second, predicted better when object information is included; and third, poorly predicted by any saliency model in a search task.
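Saliency models of the kind compared here are commonly scored by the likelihood they assign to observed fixations. The sketch below computes such a log-likelihood and the information gain of a model over an image-independent baseline such as a centre bias; the maps and fixations are toy data, and this is a generic evaluation scheme, not necessarily the exact metric used in the study.

    import numpy as np

    def log_likelihood(density, fixations):
        # Average log-likelihood (bits/fixation) of (row, col) fixation indices
        # under a saliency map normalized to a probability density.
        p = density / density.sum()
        return np.mean([np.log2(p[r, c] + 1e-12) for r, c in fixations])

    def information_gain(model_map, baseline_map, fixations):
        # Bits/fixation the model explains beyond an image-independent baseline.
        return (log_likelihood(model_map, fixations)
                - log_likelihood(baseline_map, fixations))

    h, w = 64, 64
    ys, xs = np.mgrid[0:h, 0:w]
    centre_bias = np.exp(-((ys - h / 2) ** 2 + (xs - w / 2) ** 2) / 300.0)
    model = centre_bias * (1.0 + 0.5 * np.random.rand(h, w))  # toy saliency map
    fixations = [(32, 30), (30, 34), (40, 20)]
    print(information_gain(model, centre_bias, fixations))

Evaluating this quantity separately per fixation index (1st, 2nd, ..., 10th) is one way to expose the non-stationarity over the course of a trial described above.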
Modern production infrastructures of globally operating companies usually consist of multiple distributed production sites. While the organization of an individual site consisting of Industry 4.0 components is itself demanding, new questions regarding the organization and allocation of resources emerge when the total production network is considered. To face the challenge of efficient distribution and processing both within and across sites, we propose a hybrid simulation approach as a first step towards optimization. Using hybrid simulation allows us to include both real and simulated components and thereby benchmark different approaches with reasonable effort. A simulation concept is developed and demonstrated qualitatively using a global multi-site example.
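As a rough illustration of the purely software-side part of such a setup, the following sketch uses the simpy discrete-event library: jobs arrive and are allocated to the site with the shortest queue, a placeholder policy one might benchmark. All capacities, times, and the allocation rule are assumptions, and the paper's hybrid approach additionally couples in real components, which this sketch omits.

    import random
    import simpy

    def job(env, name, sites):
        # Allocate the job to the site with the shortest queue (a simple
        # network-level policy one might benchmark) and occupy a machine there.
        site = min(sites, key=lambda s: len(s.queue))
        with site.request() as slot:
            yield slot
            yield env.timeout(random.uniform(1, 3))  # processing time
            print(f"{env.now:5.2f}  {name} finished at site {sites.index(site)}")

    def source(env, sites):
        # Jobs arrive at the production network with exponential interarrival times.
        for i in range(10):
            env.process(job(env, f"job-{i}", sites))
            yield env.timeout(random.expovariate(1.0))

    env = simpy.Environment()
    sites = [simpy.Resource(env, capacity=2) for _ in range(3)]  # three sites
    env.process(source(env, sites))
    env.run()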
Audit - and then what?
(2019)
Current trends such as digital transformation, the Internet of Things, and Industry 4.0 are challenging the majority of learning factories. Regardless of whether it is a conventional learning factory, a model factory, or a digital learning factory, traditional approaches such as the monotonous execution of specific instructions no longer satisfy learners' needs, market requirements, or, in particular, current technological developments. Contemporary teaching environments need a clear strategy, a road to follow, to successfully cope with these changes and develop towards digitized learning factories. This demand-driven necessity of transformation leads to another obstacle: assessing the status quo and developing and implementing adequate action plans. Within this paper, we present the details of a maturity-based audit of the hybrid learning factory in the Research and Application Centre Industry 4.0 and a roadmap derived from it for the digitization of a learning factory.
Subject-oriented learning
(2019)
The transformation to a digitized company changes not only the work context but also the social context for employees and requires, inter alia, new knowledge and skills from them. Additionally, individual action problems arise. This contribution proposes the subject-oriented learning theory, in which employees' action problems are the starting point of training activities in learning factories. In this contribution, the subject-oriented learning theory is exemplified and its advantages for vocational training in learning factories are pointed out both theoretically and practically. In particular, the individual action problems of learners and the infrastructure are emphasized as the starting point for learning processes and competence development.
High-throughput RNA sequencing produces large gene expression datasets whose analysis leads to a better understanding of diseases like cancer. The nature of RNA-Seq data poses challenges to its analysis in terms of high dimensionality, noise, and the complexity of the underlying biological processes. Researchers apply traditional machine learning approaches, e.g., hierarchical clustering, to analyze this data. Until the validation of results, however, the analysis is based on the provided data alone and completely misses the biological context. Yet gene expression data follows particular patterns: the underlying biological processes. In our research, we aim to integrate the available biological knowledge earlier in the analysis process. We want to adapt state-of-the-art data mining algorithms to consider the biological context in their computations and deliver meaningful results for researchers.
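One simple way to make the idea of integrating knowledge earlier concrete is to bias the distance measure used by hierarchical clustering with pathway annotations, as in the sketch below. The shrinkage factor, the toy data, and the pathway labels are illustrative assumptions, not the authors' algorithm.

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(0)
    expression = rng.normal(size=(50, 20))   # 50 genes x 20 samples (toy data)

    # Standard, knowledge-free analysis: cluster genes by expression alone.
    clusters = fcluster(linkage(pdist(expression, metric="correlation"),
                                method="average"), t=4, criterion="maxclust")

    # One way to inject prior knowledge: shrink distances between genes
    # annotated to the same pathway so they are more likely to co-cluster.
    pathway = rng.integers(0, 5, size=50)                   # toy pathway labels
    dist = pdist(expression, metric="correlation")
    same = pdist(pathway[:, None], metric="hamming") == 0   # pairs sharing a pathway
    informed = np.where(same, dist * 0.5, dist)
    informed_clusters = fcluster(linkage(informed, method="average"),
                                 t=4, criterion="maxclust")
    print(clusters[:10], informed_clusters[:10])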