It has been known for decades that the winds of massive stars are inhomogeneous (i.e. clumped). To properly model observed spectra of massive star winds it is necessary to incorporate the 3-D nature of clumping into radiative transfer calculations. In this paper we present our full 3-D Monte Carlo radiative transfer code for inhomogeneous expanding stellar winds. We use a set of parameters to describe both the dense and the rarefied wind components, and at the same time account for non-monotonic velocity fields. We show how 3-D density and velocity wind inhomogeneities strongly affect resonance line formation, and how wind clumping can resolve the discrepancy between P v and H alpha mass-loss rate diagnostics.
Increasing demand for analytical processing capabilities can be managed by replication approaches. However, evenly balancing the replicas' workload shares while minimizing the data replication factor is a highly challenging allocation problem. As optimal solutions are only applicable to small problem instances, effective heuristics are indispensable. In this paper, we test and compare state-of-the-art allocation algorithms for partial replication. By visualizing and exploring their (heuristic) solutions for different benchmark workloads, we are able to derive structural insights and to detect an algorithm's strengths as well as its potential for improvement. Further, our application enables end-to-end evaluations of different allocations to verify their theoretical performance.
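To make the allocation problem concrete, the following sketch implements a simple greedy heuristic: heavy query classes go to the least-loaded replica, and a replica must store every fragment its queries touch. This is an illustrative stand-in, not one of the benchmarked algorithms from the paper; all query classes, fragments, and numbers are invented.

```python
def allocate(query_classes, n_replicas):
    """query_classes: list of (load_share, frozenset_of_fragments).
    Returns per-replica loads and the resulting replication factor."""
    loads = [0.0] * n_replicas
    stored = [set() for _ in range(n_replicas)]
    # Greedy: heaviest query classes first, each to the least-loaded replica.
    for share, fragments in sorted(query_classes, reverse=True):
        target = min(range(n_replicas), key=lambda i: loads[i])
        loads[target] += share
        stored[target] |= fragments
    all_fragments = set().union(*(f for _, f in query_classes))
    # Replication factor: average number of copies per data fragment.
    replication_factor = sum(len(s) for s in stored) / len(all_fragments)
    return loads, replication_factor

loads, rf = allocate(
    [(0.4, frozenset({"A", "B"})),
     (0.3, frozenset({"B", "C"})),
     (0.2, frozenset({"A"})),
     (0.1, frozenset({"C"}))],
    n_replicas=2,
)
```

The example shows the tension the abstract describes: the loads end up perfectly balanced, but only because both replicas store all three fragments, i.e. at the cost of full replication.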
A Landscape for Case Models
(2019)
Case Management is a paradigm to support knowledge-intensive processes. The different approaches developed for modeling these types of processes tend to result in scattered models due to the low abstraction level at which the inherently complex processes are represented. Thus, readability and understandability are more challenging than for traditional process models. By reviewing existing proposals in the field of process overviews and case models, this paper extends a case modeling language, the fragment-based Case Management (fCM) language, with the goal of modeling knowledge-intensive processes at a higher abstraction level, generating a so-called fCM landscape. This proposal is empirically evaluated via an online experiment. Results indicate that interpreting an fCM landscape might be more effective and efficient than interpreting an informationally equivalent case model.
General intelligence has a substantial genetic background in children, adolescents, and adults, but environmental factors also strongly correlate with cognitive performance, as evidenced by a strong (up to one SD) increase in average intelligence test results in the second half of the previous century. This change occurred over a period apparently too short to accommodate radical genetic changes. This strongly suggests that environmental factors interact with the genotype, possibly by modifying epigenetic factors that regulate gene expression and thus contribute to individual malleability. Such modification may also be reflected in recent observations of an association between dopamine-dependent encoding of reward prediction errors and cognitive capacity, which was modulated by adverse life events.
Rapid advances in location-acquisition technologies have led to large amounts of trajectory data. This data is the foundation for a broad spectrum of services driven and improved by trajectory data mining. However, for hybrid transactional and analytical workloads, the storing and processing of rapidly accumulated trajectory data is a non-trivial task. In this paper, we present a detailed survey of state-of-the-art trajectory data management systems. To determine the relevant aspects and requirements for such systems, we developed a trajectory data mining framework, which summarizes the different steps in the trajectory data mining process. Based on the derived requirements, we analyze different concepts to store, compress, index, and process spatio-temporal data. There are various trajectory management systems, which are optimized for scalability, data footprint reduction, elasticity, or query performance. To get a comprehensive overview, we describe and compare different existing systems. Additionally, the observed similarities in the general structure of different systems are consolidated in a general blueprint of trajectory management systems.
Industry 4.0 and the Internet of Things are recent developments that have led to the creation of new kinds of manufacturing data. Linking this new kind of sensor data to traditional business information is crucial for enterprises to take advantage of the data’s full potential. In this paper, we present a demo that allows users to experience this data integration, both vertically between technical and business contexts and horizontally along the value chain. The tool simulates a manufacturing company, continuously producing both business and sensor data, and supports issuing ad-hoc queries that answer specific questions related to the business. In order to adapt to different environments, users can configure sensor characteristics to their needs.
Working in iterations and repeatedly improving team workflows based on collected feedback is fundamental to agile software development processes. Scrum, the most popular agile method, provides dedicated retrospective meetings to reflect on the last development iteration and to decide on process improvement actions. However, agile methods do not prescribe how these improvement actions should be identified, managed or tracked in detail. The approaches to detect and remove problems in software development processes are therefore often based on intuition and prior experiences and perceptions of team members. Previous research in this area has focused on approaches to elicit a team's improvement opportunities as well as measurements regarding the work performed in an iteration, e.g. Scrum burn-down charts. Little research deals with the quality and nature of identified problems or how progress towards removing issues is measured. In this research, we investigate how agile development teams in the professional software industry organize their feedback and process improvement approaches. In particular, we focus on the structure and content of improvement and reflection meetings, i.e. retrospectives, and their outcomes. Researching how the vital mechanism of process improvement is implemented in practice in modern software development leads to a more complete picture of agile process improvement.
Drawing on the whole of Humboldt's work, from its beginnings to the Cosmos, this dossier seeks to highlight the cosmopolitan orientation of the Prussian scholar and, above all, the American foundation of his approaches. For Humboldt, the American continent represents the diversity of the thinkable and the multi-relationality of the imaginable: the key to understanding his worldview.
alt'ai is an agent-based simulation inspired by the aesthetics, culture and environmental conditions of the Altai mountain region on the borders between Russia, Kazakhstan, China and Mongolia. It is set in a scenario of a remote automated landscape populated by sentient machines, where biological species, machines and environments autonomously interact to produce unforeseeable visual outputs. It poses the question of designing future machine-to-machine authentication protocols based on the use of images encoding agent behavior, and the simulation provides a rich visual perspective on this challenge. The project pleads for a heavily aestheticized approach to design practice and highlights the importance of productively inefficient and information-redundant systems.
Mobile sensing technology allows us to investigate human behaviour on a daily basis. In this study, we examined temporal orientation, which refers to the capacity to think or talk about personal events in the past and future. We utilise the mksense platform, which allows us to use the experience-sampling method. Individuals' thoughts and their relationship with smartphone Bluetooth data are analysed to understand in which contexts people are influenced by social environments, such as the people they spend the most time with. As an exploratory study, we analyse the influence of social conditions through a collection of Bluetooth data and survey information from participants' smartphones. Preliminary results show that people are likely to focus on past events when interacting with closely related people, and to focus on future planning when interacting with strangers. Similarly, people experience present temporal orientation when accompanied by known people. We believe that these findings are linked to emotions since, in its most basic state, emotion is a state of physiological arousal combined with an appropriate cognition. In this contribution, we envision a smartphone application for automatically inferring human emotions based on users' temporal orientation using Bluetooth sensors, briefly elaborate on the influential factors of temporal orientation episodes, and conclude with a discussion and lessons learned.
High-dimensional data is particularly useful for data analytics research. In the healthcare domain, for instance, high-dimensional data analytics has been used successfully for drug discovery. Yet, in order to adhere to privacy legislation, data analytics service providers must guarantee anonymity for data owners. In the context of high-dimensional data, ensuring privacy is challenging because increased data dimensionality must be matched by an exponential growth in the size of the data to avoid sparse datasets. Syntactic anonymisation of sparse datasets with methods that rely on statistical significance makes obtaining sound and reliable results a challenge. As such, strong privacy is only achievable at the cost of high information loss, rendering the data unusable for data analytics. In this paper, we make two contributions to addressing this problem from both the privacy and information loss perspectives. First, we show that by identifying dependencies between attribute subsets we can eliminate privacy-violating attributes from the anonymised dataset. Second, to minimise information loss, we employ a greedy search algorithm to determine and eliminate maximal partial unique attribute combinations. Thus, one only needs to find the minimal set of identifying attributes to prevent re-identification. Experiments on a health cloud based on the SAP HANA platform, using a semi-synthetic medical history dataset comprising 109 attributes, demonstrate the effectiveness of our approach.
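The notion of a minimal set of identifying attributes can be illustrated with a brute-force sketch: an attribute combination is identifying if projecting the records onto it leaves every record distinct. This is only a toy stand-in for the paper's greedy search over maximal partial unique attribute combinations, and the records below are invented.

```python
from itertools import combinations

def is_unique(records, attrs):
    """True if the projection onto `attrs` re-identifies every record."""
    proj = [tuple(r[a] for a in attrs) for r in records]
    return len(set(proj)) == len(proj)

def minimal_identifying_sets(records, attributes):
    """Smallest attribute combinations whose projection is unique per record
    (brute force; the paper uses a scalable greedy search instead)."""
    for k in range(1, len(attributes) + 1):
        found = [c for c in combinations(attributes, k)
                 if is_unique(records, c)]
        if found:
            return found
    return []

records = [
    {"age": 34, "zip": "14482", "sex": "F"},
    {"age": 34, "zip": "14469", "sex": "F"},
    {"age": 41, "zip": "14482", "sex": "M"},
    {"age": 41, "zip": "14469", "sex": "F"},
]
sets_ = minimal_identifying_sets(records, ["age", "zip", "sex"])
```

Here no single attribute re-identifies anyone, but the pair (age, zip) does, so an anonymiser would need to generalise or suppress within that combination.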
Audit - and then what?
(2019)
Current trends such as digital transformation, the Internet of Things, or Industry 4.0 are challenging the majority of learning factories. Regardless of whether it is a conventional learning factory, a model factory, or a digital learning factory, traditional approaches such as the monotonous execution of specific instructions no longer satisfy learners' needs, market requirements, and especially current technological developments. Contemporary teaching environments need a clear strategy, a road to follow, to successfully cope with these changes and develop towards digitized learning factories. This demand-driven necessity of transformation leads to another obstacle: assessing the status quo and developing and implementing adequate action plans. Within this paper, details of a maturity-based audit of the hybrid learning factory in the Research and Application Centre Industry 4.0, and a roadmap derived from it for the digitization of a learning factory, are presented.
Bridging the Gap
(2019)
The recent restructuring of the electricity grid (i.e., the smart grid) introduces a number of challenges for today's large-scale computing systems. To operate reliably and efficiently, computing systems must not only adhere to technical limits (e.g., thermal constraints) but also reduce operating costs, for example, by increasing their energy efficiency. Efforts to improve energy efficiency, however, are often hampered by inflexible software components that hardly adapt to underlying hardware characteristics. In this paper, we propose an approach to bridge the gap between inflexible software and heterogeneous hardware architectures. Our proposal introduces adaptive software components that dynamically adapt to heterogeneous processing units (i.e., accelerators) at runtime to improve the energy efficiency of computing systems.
New Public Governance (NPG) as a paradigm for collaborative forms of public service delivery and Blockchain governance are trending topics for researchers and practitioners alike. Thus far, each topic has, on the whole, been discussed separately. This paper presents the preliminary results of ongoing research which aims to shed light on the more concrete benefits of Blockchain for the purpose of NPG. For the first time, a conceptual analysis is conducted on process level to spot benefits and limitations of Blockchain-based governance. Per process element, Blockchain key characteristics are mapped to functional aspects of NPG from a governance perspective. The preliminary results show that Blockchain offers valuable support for governments seeking methods to effectively coordinate co-producing networks. However, the extent of benefits of Blockchain varies across the process elements. It becomes evident that there is a need for off-chain processes. It is, therefore, argued in favour of intensifying research on off-chain governance processes to better understand the implications for and influences on on-chain governance.
Cardiovascular drift response over two different constant-load exercises in healthy non-athletes
(2019)
Cardiovascular drift (CV-d) is a steady increase in heart rate (HR) over time while performing constant load moderate intensity exercise (CME) > 20 min. CV-d presents problems for the prescription of exercise intensity by means of HR, because the work rate (WR) during exercise must be adjusted to maintain target HR, thus disturbing the intended effect of the exercise intervention. It has been shown that the increase in HR during CME is due to changes in WR and not to CV-d.
Catholicism
(2019)
The Schwarzenberg mining district in the western Erzgebirge hosts numerous skarn-hosted tin-polymetallic deposits, such as Breitenbrunn. The St. Christoph mine is located in the Breitenbrunn deposit and is the locus typicus of christophite, an iron-rich sphalerite variety, which can be associated with indium enrichment. This study presents a revision of the paragenetic scheme, a contribution to the indium behavior and potential, and discussion on the origin of the sulfur. This was achieved through reflected light microscopy, SEM-based MLA, EPMA, and bulk mineral sulfur isotope analysis on 37 sulfide-rich skarn samples from a mineral collection. The paragenetic scheme includes: a pre-mineralization stage of anhydrous calc-silicates and hydrous minerals; an oxide stage, dominated by magnetite; a sulfide stage of predominantly sphalerite, minor pyrite, chalcopyrite, arsenopyrite, and galena. Some sphalerite samples present elevated indium contents of up to 0.44 wt%. Elevated iron contents (4-10 wt%) in sphalerite can be tentatively linked to increased indium incorporation, but further analyses are required. Analyzed sulfides exhibit homogeneous delta S-34 values (-1 to +2 parts per thousand VCDT), assumed to be post-magmatic. They correlate with other Fe-Sn-Zn-Cu-In skarn deposits in the western Erzgebirge, and Permian vein-hosted associations throughout the Erzgebirge region.
In this paper, we consider counting and projected model counting of extensions in abstract argumentation for various semantics. When asking for projected counts, we are interested in counting the number of extensions of a given argumentation framework, where multiple extensions that are identical when restricted to the projected arguments count as only one projected extension. We establish classical complexity results and parameterized complexity results when the problems are parameterized by the treewidth of the undirected argumentation graph. To obtain upper bounds for counting projected extensions, we introduce novel algorithms that exploit small treewidth of the undirected argumentation graph of the input instance by dynamic programming (DP). Our algorithms run in time double or triple exponential in the treewidth, depending on the considered semantics. Finally, we take the exponential time hypothesis (ETH) into account and establish lower bounds for bounded-treewidth algorithms for counting extensions and projected extensions.
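The semantics of projected counting can be illustrated independently of the treewidth-based algorithms: extensions that coincide on the projected arguments collapse to one projected extension. A minimal sketch, with an invented set of extensions rather than ones computed from an actual argumentation framework:

```python
def projected_count(extensions, projected_args):
    """Count extensions modulo projection: extensions that agree on
    `projected_args` count as a single projected extension."""
    return len({frozenset(e & projected_args) for e in extensions})

# Toy data: suppose some semantics yields these three extensions.
extensions = [frozenset({"a", "b"}), frozenset({"a", "c"}), frozenset({"b"})]
count = projected_count(extensions, frozenset({"a"}))
```

Projecting onto {a} maps the three extensions to {a}, {a}, and the empty set, so the projected count is 2 while the plain extension count is 3, which is exactly the distinction the paper's counting problems formalize.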
We investigate how the technology acceptance and learning experience of the digital education platform HPI Schul-Cloud (HPI School Cloud) for German secondary school teachers can be improved by proposing a user-centered research and development framework. We highlight the importance of developing digital learning technologies in a user-centered way to take differences in the requirements of educators and students into account. We suggest applying qualitative and quantitative methods to build a solid understanding of a learning platform's users, their needs, requirements, and their context of use. After concept development and idea generation of features and areas of opportunity based on the user research, we emphasize the application of a multi-attribute utility analysis decision-making framework to prioritize ideas rationally, taking the results of user research into account. Afterward, we recommend applying the build-learn-iterate principle to build prototypes at different resolutions while learning from user tests and improving the selected opportunities. Finally, we propose an approach for continuous short- and long-term user experience controlling and monitoring, extending existing web and learning analytics metrics.
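The multi-attribute utility analysis step can be sketched as weighted additive scoring of candidate ideas. The criteria, weights, and feature ideas below are purely illustrative assumptions, not taken from the study:

```python
def utility(scores, weights):
    """Weighted additive utility: sum of weight * score, normalized
    by the total weight so different weight scales stay comparable."""
    total_w = sum(weights.values())
    return sum(weights[a] * scores[a] for a in weights) / total_w

# Hypothetical criteria weights and feature ideas (all invented).
weights = {"user_value": 0.5, "effort": 0.2, "strategic_fit": 0.3}
ideas = {
    "offline_mode": {"user_value": 9, "effort": 3, "strategic_fit": 7},
    "dark_theme":   {"user_value": 4, "effort": 8, "strategic_fit": 3},
}
ranked = sorted(ideas, key=lambda i: utility(ideas[i], weights), reverse=True)
```

Ranking by utility gives a rational, reproducible prioritization, which is the point of using such a framework instead of ad-hoc idea selection.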
Devices on the Internet of Things (IoT) are usually battery-powered and have limited resources. Hence, energy-efficient and lightweight protocols were designed for IoT devices, such as the popular Constrained Application Protocol (CoAP). Yet, CoAP itself does not include any defenses against denial-of-sleep attacks, which aim at preventing victim devices from entering low-power sleep modes. For example, a denial-of-sleep attack against an IoT device that runs a CoAP server is to send plenty of CoAP messages to it, thereby forcing the IoT device to expend energy on receiving and processing these messages. All current security solutions for CoAP, namely Datagram Transport Layer Security (DTLS), IPsec, and OSCORE, fail to prevent such attacks. To fill this gap, Seitz et al. proposed a method for filtering out inauthentic and replayed CoAP messages "en route" on 6LoWPAN border routers. In this paper, we expand on Seitz et al.'s proposal in two ways. First, we revise Seitz et al.'s software architecture so that 6LoWPAN border routers can not only check the authenticity and freshness of CoAP messages, but also perform a wide range of further checks. Second, we propose a couple of such further checks, which, compared to Seitz et al.'s original checks, more reliably protect IoT devices that run CoAP servers from remote denial-of-sleep attacks, as well as from remote exploits. We prototyped our solution and successfully tested its compatibility with Contiki-NG's CoAP implementation.
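One kind of en-route freshness check a border router could apply is a sliding-window replay filter on message sequence numbers. The sketch below is a hypothetical simplification for illustration, not the authors' or Seitz et al.'s actual mechanism:

```python
class ReplayFilter:
    """Sliding-window replay check: drop messages that are replays or
    that fall too far behind the highest sequence number seen."""

    def __init__(self, window=32):
        self.window = window
        self.highest = -1
        self.seen = set()

    def accept(self, seq):
        if seq <= self.highest - self.window:
            return False          # too old to judge: drop
        if seq in self.seen:
            return False          # replayed message: drop
        self.seen.add(seq)
        self.highest = max(self.highest, seq)
        # Prune entries that fell out of the window to bound memory.
        self.seen = {s for s in self.seen if s > self.highest - self.window}
        return True
```

Dropping such messages at the border router, before they reach the constrained device, is what keeps the victim's radio and CPU asleep and thus counters the denial-of-sleep pattern described above.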
In the present study, the charge distribution and the charge transport across the thickness of 2- and 3-dimensional polymer nanodielectrics were investigated. Chemically surface-treated polypropylene (PP) films and low-density polyethylene nanocomposite films with 3 wt % of magnesium oxide (LDPE/MgO) served as examples of 2-D and 3-D nanodielectrics, respectively. Surface charges were deposited onto the non-metallized surfaces of the one-side metallized polymer films and found to broaden and thus enter the bulk of the films upon thermal stimulation at suitably elevated temperatures. The resulting space-charge profiles in the thickness direction were probed by means of Piezoelectrically-generated Pressure Steps (PPSs). It was observed that the chemical surface treatment of PP, which led to the formation of nano-structures, or the use of bulk nanoparticles in LDPE/MgO nanocomposites enhances charge trapping on or in the respective polymer films and also reduces charge transport inside the respective samples.
Modern production infrastructures of globally operating companies usually consist of multiple distributed production sites. While the organization of individual sites consisting of Industry 4.0 components is itself demanding, new questions regarding the organization and allocation of resources emerge when considering the production network as a whole. In an attempt to face the challenge of efficient distribution and processing both within and across sites, we aim to provide a hybrid simulation approach as a first step towards optimization. Using hybrid simulation allows us to include real and simulated components and thereby benchmark different approaches with reasonable effort. A simulation concept is developed and demonstrated qualitatively using a global multi-site example.
Detect me if you can
(2019)
Spam bots have become a threat to online social networks with their malicious behavior, posting misinformation and influencing online platforms to fulfill their motives. As spam bots have become more advanced over time, creating algorithms to identify them remains an open challenge. Learning low-dimensional embeddings for nodes in graph-structured data has proven useful in various domains. In this paper, we propose a model based on graph convolutional neural networks (GCNNs) for spam bot detection. Our hypothesis is that to better detect spam bots, the social graph must be taken into consideration in addition to a feature set. GCNNs are able to leverage both the features of a node and the aggregated features of a node's neighborhood. We compare our approach with two methods that work solely on a feature set or solely on the structure of the graph. To our knowledge, this work is the first attempt to use graph convolutional neural networks for spam bot detection.
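The neighborhood aggregation a GCNN performs can be sketched as a single propagation layer with symmetric normalization, the standard graph-convolution form. The toy social graph and untrained weights below are illustrative only, not the paper's model:

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One graph-convolution layer: add self-loops (A + I), symmetrically
    normalize with D^-1/2 (A + I) D^-1/2, project, and apply ReLU."""
    a_hat = adj + np.eye(adj.shape[0])
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(deg ** -0.5)
    norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
    return np.maximum(norm @ features @ weights, 0.0)

# Toy 3-account graph: accounts 0 and 1 interact; account 2 is isolated.
adj = np.array([[0., 1., 0.],
                [1., 0., 0.],
                [0., 0., 0.]])
features = np.array([[1., 0.], [0., 1.], [1., 1.]])  # per-account features
weights = np.ones((2, 2))                            # untrained projection
out = gcn_layer(adj, features, weights)
```

Each output row mixes an account's own features with its neighbors', which is exactly why a GCNN can exploit the social graph in addition to a hand-crafted feature set.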
May the discoverer of the manuscript whose diplomatic edition you have before your eyes be permitted to set out, by way of preface, the main steps that led in 2012 to the determination of its language and the identification of its author.
Registered for the conference on missionary linguistics at the University of Bremen in March 2012, and aware that many libraries and archives hold unidentified or poorly catalogued treasures, I followed the advice of a colleague at the Trier university library, Hans-Ulrich Seiffert, to inspect manuscript 1136 / 2048 4° of the Trier municipal library. That library holds more than 2,500 manuscripts and incunabula, most of them from the French revolutionary seizures of the libraries of the former convents of Trier. Among these treasures was a Portuguese mission manuscript that had entered the Trier library's holdings as early as 1799 and had been exhibited in 1991 as part of a presentation of the Jesuits' global missionary activities. Michael Embach, then director of the library of the episcopal seminary of Trier, had briefly described it in a notice claiming that it was written in Spanish and Portuguese and had perhaps been composed by an author "Meirin" as part of his preparations for departing on mission… In my copy of the 1991 catalogue is written "the language remains to be determined".
I was to solve the mystery on that memorable Thursday, 23 February 2012, when I resolved to examine the manuscript in the now historic reading room of the Trier municipal library, recently placed under the direction of my colleague Embach. It became evident that the manuscript contained not a single word of Spanish, but that in the first part the Portuguese entries, strictly numbered and arranged in alphabetical order, were followed by their equivalents in an unknown language which, for a linguist, had a strong scent of Amazonian America. The latter part of the manuscript presented the reverse arrangement: the entries in the unknown language were followed by their Portuguese equivalents. Only later did we understand the mysterious order of the lemmata in this part, based as it is on the final rhyme of the words.
Two factors greatly facilitated the identification of the unknown language: 1. the permission of director Embach to take working photographs of all the folios of the manuscript, and 2. the entry in the first catalogue of the Trier library of 1802: "Codex maxime memorabilis est, cum nondum grammatica praeter Lusitani Anchieta nota sit, et nullum vocabularium huius linguae existet. Sine dubio scriptum est a quodam Missionario Jesuita".
Returning home from Trier, I made a brief stop at my then office to consult the Internet. The entry "Anchieta" in the electronic encyclopedia Wikipedia informed me that this Portuguese Jesuit had composed a grammar of Old Tupi, published in 1595. Under the heading "Old Tupi" I then learned that the self-designation of this language is ñeengatú (the good language, correct speech). From there, finding on folio 25 of my photographs the translation of the Portuguese linguagem by nheénga took only a few minutes. From that very first evening, then, the mystery of the unknown language was solved!
An international, indeed transcontinental, research project gradually took shape thereafter. At the Bremen conference in early March 2012 I made the acquaintance of the linguist and Romance scholar Professor Wolf Dietrich of the University of Münster in Westphalia, one of the best experts on Amazonian languages. He went home with a copy of my working photographs and soon confirmed the importance of the Trier manuscript for our knowledge of the Língua Geral, the speech that had developed from the moribund Old Tupi.
I also communicated my discovery to Brother Karl-Heinz Arenz of the congregation of the Fathers of Steyl, a native of the German Eifel and a teacher of history at the Brazilian universities of Belém and Santarém in the state of Pará. Arenz is the author of a study on the Luxembourgish Jesuit Jean-Philippe Bettendorf, active in Maranhão in the second half of the 17th century. We had collaborated in 2007 on a didactic exhibition devoted to this figure. Arenz promptly passed the information about the Trier manuscript to the two Brazilian linguists Cândida Barros and Ruth Monserrat, who in turn submitted a research project that was accepted by the Brazilian authorities. The reais of this grant were indeed well invested: the doctoral student Gabriel Prudente produced a complete transcription of the Trier manuscript, which brought to light dialectal and sociolectal layers in the text, and even a few German words. The Trier municipal library, for its part, contributed "in kind" the digital images that can be consulted on the left-hand pages of this electronic edition.
In early April 2014 we all met in person at a colloquium at the University of Belém, where the present digital edition was decided upon and where most of the scholarly contributions you can read by way of introduction were delivered. There remained the riddle of the manuscript's true author. Cândida Barros had early on proposed the names of three German Jesuits, active in the 1750s in the region of the Xingu river and expelled by the anti-Jesuit measures of the Portuguese crown in 1756. The palaeographic study of the Quattuor vota of the three candidates, in the central Jesuit archives in Rome, allowed me in early September 2015 to eliminate Fathers Eckart and Kaulen and to retain only Antonius Meisterburg, a native of Bernkastel on the Moselle, as the scribe of the Trier manuscript. May I be permitted to express my GRATITUDE to all those who contributed to this fine adventure of intellectual and scientific discovery and to returning to Brazil a small stone of its history.
Dielectric materials for electro-active (electret) and/or electro-passive (insulation) applications
(2019)
Dielectric materials for electret applications usually have to contain a quasi-permanent space charge or dipole polarization that is stable over large temperature ranges and time periods. For electrical-insulation applications, on the other hand, a quasi-permanent space charge or dipole polarization is usually considered detrimental. In recent years, however, with the advent of high-voltage direct-current (HVDC) transmission and high-voltage capacitors for energy storage, new possibilities are being explored in the area of high-voltage dielectrics. Stable charge trapping (as e.g. found in nano-dielectrics) or large dipole polarizations (as e.g. found in relaxor ferroelectrics and high-permittivity dielectrics) are no longer considered to be necessarily detrimental in electrical-insulation materials. On the other hand, recent developments in electro-electrets (dielectric elastomers), i.e. very soft dielectrics with large actuation strains and high breakdown fields, and in ferroelectrets, i.e. polymers with electrically charged cavities, have resulted in new electret materials that may also be useful for HVDC insulation systems. Furthermore, 2-dimensional (nano-particles on surfaces or interfaces) and 3-dimensional (nano-particles in the bulk) nano-dielectrics have been found to provide very good charge-trapping properties that may not only be used for more stable electrets and ferroelectrets, but also for better HVDC electrical-insulation materials with the possibility to optimize charge-transport and field-gradient behavior. In view of these and other recent developments, a first attempt will be made to review a small selection of electro-active (i.e. electret) and electro-passive (i.e. insulation) dielectrics in direct comparison. Such a comparative approach may lead to synergies in materials concepts and research methods that will benefit both areas. 
Furthermore, electrets may be very useful for sensing and monitoring applications in electrical-insulation systems, while high-voltage technology is essential for more efficient charging and poling of electret materials.
Domain-specific physical activity patterns and cardiorespiratory fitness among adults in Germany
(2019)
Background: Studies show that occupational physical activity (OPA) has smaller health-enhancing effects than leisure-time physical activity (LTPA). The sparse data available suggest, as a possible explanation, that OPA rarely includes aerobic activity that enhances cardiorespiratory fitness (CRF). This study aims to investigate the associations between patterns of OPA and LTPA and CRF among adults in Germany. Methods: 1,204 men and 1,303 women (18-64 years), who participated in the German Health Interview and Examination Survey 2008-2011, completed a standardized sub-maximal cycle ergometer test to estimate maximal oxygen consumption (VO2max). Job positions were coded according to the level of physical effort to construct an occupational PA index, categorized as low vs. high OPA. LTPA was assessed via questionnaires and dichotomized into no vs. any LTPA participation. A combined LTPA/OPA variable was used (high OPA/LTPA, low OPA/LTPA, high OPA/no LTPA, low OPA/no LTPA). Information on potential confounders was obtained via questionnaires (e.g., smoking and education) or physical measurements (e.g., waist circumference). Multivariable logistic regression was used to analyze associations between OPA/LTPA patterns and VO2max. Results: Preliminary analyses showed that less-active men were more likely to have a low VO2max, with odds ratios (ORs) of 0.80 for low OPA/LTPA, 1.84 for high OPA/no LTPA and 3.46 for low OPA/no LTPA compared to high OPA/LTPA. The corresponding ORs for women were 1.11 for low OPA/LTPA, 3.99 for high OPA/no LTPA and 2.44 for low OPA/no LTPA, indicating the highest likelihood of low fitness for women working in physically demanding jobs and not engaging in LTPA. Conclusions: Findings confirm a strong association between LTPA and CRF and suggest an interaction between OPA and LTPA patterns on CRF within the workforce in Germany.
Women without LTPA are at high risk of having low CRF, especially if they work in physically demanding jobs. Key messages: Women not practicing leisure-time physical activity are at risk of low cardiorespiratory fitness, especially if they work in physically demanding jobs. The differing impact of the various physical activity domains should be considered when planning interventions to enhance fitness in the adult population.
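The odds ratios above come from multivariable (confounder-adjusted) logistic regression; as a minimal illustration of the underlying quantity, the sketch below computes an unadjusted odds ratio from a 2x2 contingency table. The counts are hypothetical and not taken from the survey data.

```python
def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio from a 2x2 table:
    a: exposed with outcome,   b: exposed without outcome,
    c: unexposed with outcome, d: unexposed without outcome."""
    return (a * d) / (b * c)

# Hypothetical counts: among 100 adults with low OPA/no LTPA,
# 30 have low VO2max; among 100 in the high OPA/LTPA reference
# group, 10 have low VO2max.
or_low_fit = odds_ratio(30, 70, 10, 90)
print(round(or_low_fit, 2))  # 3.86
```

An adjusted OR, as reported in the study, would instead come from exponentiating the coefficient of the exposure indicator in a logistic regression that also includes the confounders (smoking, education, waist circumference).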
Editorial
(2019)
Editorial
(2019)
The new year starts and many of us have right away been burdened with conference deadlines, grant proposal deadlines, teaching obligations, paper revisions and many other things. While being more or less successful in fulfilling to-do lists and ticking off urgent (and sometimes even important) things, we often feel that our ability to be truly creative or innovative is rather restrained by this external pressure. With this, we are not alone. Many studies have shown that stress influences overall work performance and satisfaction. Furthermore, more and more students and entry-level employees look for work-life balance and search for employers that offer an environment and organization considering these needs. High-tech and start-up companies pride themselves on their “Feel-Good managers” or yoga programs. But is this really helpful? Is there indeed a relationship between stress, an adverse work environment, and creativity or innovation? What are the supporting factors in a work environment that let employees be more creative? What kind of leadership do we need for innovative behaviour, and to what extent can an organization create support structures that reduce the stress we feel? The first issue of Creativity and Innovation Management in 2019 gives some first answers to these questions and, hopefully, some food for thought.
The first paper, by Dirk De Clercq and Imanol Belausteguigoitia, starts with the question of what impact work overload has on creative behaviour. The authors look at how employees' perceptions of work overload reduce their creative behaviour. While they find empirical proof for this relationship, they can also show that the effect is weaker with higher levels of passion for work, emotion sharing, and organizational commitment. The buffering effects of emotion sharing and organizational commitment are particularly strong when they are combined with high levels of passion for work. Their findings give first empirical proof that organizations can and should take an active role in helping their employees reduce the effects of adverse work conditions in order to become or stay creative. However, work overload is not the only thing that harms creative behaviour; the fear of losing one's job also has detrimental effects on innovative work behaviour. Anahi van Hootegem, Wendy Niesen and Hans de Witte verify that stress and adverse environmental conditions shape our perception of work. Using threat rigidity theory and an empirical study of 394 employees, they show that the threat of job loss impairs employees' innovativeness through increased irritation and decreased concentration. Organizations can help their employees cope better with this insecurity by communicating more openly and providing different support structures. Such support often comes from leadership, and the support of the supervisor can clearly shape an employee's motivation to show creative behaviour. Wenjing Cai, Evgenia Lysova, Bart A. G. Bossink, Svetlana N. Khapova and Weidong Wang report empirical findings from a large-scale survey in China, where they find that supervisor support for creativity and job characteristics effectively activate the individual psychological capital associated with employee creativity.
On a slightly different note, Gisela Bäcklander looks at agile practices in a well-known high-tech firm. In “Doing Complexity Leadership Theory: How agile coaches at Spotify practice enabling leadership”, she researches the role of agile coaches and how they practice enabling leadership, a key balancing force in complexity leadership. She finds that the active involvement of coaches in observing group dynamics, surfacing conflict, and facilitating and encouraging constructive dialogue contributes to a positive working environment and the well-being of employees. Quotes from the interviews suggest that the flexible structure provided by the coaches may prove a fruitful way to navigate and balance autonomy and alignment in organizations.
The fifth paper, by Frederik Anseel, Michael Vandamme, Wouter Duyck and Eric Rietzchel, goes a little further down this road and researches how groups can be better motivated to select truly creative ideas. We know from previous studies that groups often perform rather poorly when it comes to selecting creative ideas for implementation. In an extensive field experiment, the authors find that under conditions of high epistemic motivation, proself-motivated groups select significantly more creative and original ideas than prosocial groups. They conclude, however, that more research is needed to better understand why these differences occur. The prosocial behaviour of groups is also the theme of Karin Moser, Jeremy F. Dawson and Michael A. West's paper on “Antecedents of team innovation in health care teams”. They look at team-level motivation and how a prosocial team environment, indicated by the level of helping behaviour and information-sharing, may foster innovation. Their results support the hypothesized effects of both information-sharing and helping behaviour on team innovation. They suggest that both factors may actually act as a buffer against constraints in teamwork, such as large team size or high occupational diversity in cross-functional health care teams, potentially turning these into resources that support team innovation rather than barriers.
Moving away from teams and on to the design of favourable work environments, the seventh paper, by Ferney Osorio, Laurent Dupont, Mauricio Camargo, Pedro Palominos, Jose Ismael Pena and Miguel Alfaro, looks into innovation laboratories. Although several studies have tackled the problem of the design, development and sustainability of these spaces for innovation, there is still a gap in understanding how the capabilities and performance of these environments are affected by the strategic intentions at the early stages of their design and functioning. The authors analyse and compare eight existing frameworks from the literature and propose a new framework for researchers and practitioners aiming to assess or adapt innovation laboratories. They test their framework in an exploratory study with fifteen laboratories from five different countries and give recommendations for the future design of such laboratories. From design we move to design thinking in our last paper, by Rama Krishna Reddy Kummitha, on “Design Thinking in Social Organisations: Understanding the role of user engagement”, in which she studies how users persuade social organisations to adopt design thinking. Looking at four social organisations in India from 2008 to 2013, she finds that designer roles become blurred when social organisations adopt design thinking, while users, in the form of interconnecting agencies, reduce the gap between designers and communities.
The last two articles were developed from papers presented at the 17th International CINet conference, organized in Turin in 2016 by Paolo Neirotti and his colleagues. In the first article, Fábio Gama, Johan Frishammar and Vinit Parida focus on ideation and open innovation in small- and medium-sized enterprises. They investigate the relationship between systematic idea generation and performance, and the moderating role of market-based partnerships. Based on a survey among manufacturing SMEs, they conclude that higher levels of performance are reached, and that collaboration with customers and suppliers pays off most, when idea generation is done in a highly systematic way. The second article, by Anna Holmquist, Mats Magnusson and Mona Livholts, resonates with the theme of the CINet conference, ‘Innovation and Tradition; combining the old and the new’. They explore how tradition is used in craft-based design practices to create new meaning. Applying a narrative ‘research through design’ approach, they uncover important design elements and the tensions between them.
Please enjoy this first issue of CIM in 2019 and we wish you creativity and innovation without too much stress in the months to come.
Editorial
(2019)
Editorial
(2019)
Editorial
(2019)