Income inequality and taxes
(2023)
Economic literature offers several distinct explanations for the rising income inequality observed in many countries. In the debate about the causes of inequality, a growing strand of research focuses on the effects of taxation on income inequality. We contribute to this literature by providing a systematic empirical account of the relationship between income inequality and personal income taxation (PIT) for a set of countries over the period 1981–2005. In order to take alternative explanations into account and to isolate the effects of tax progressivity, we include a wide range of control variables. We address potential reverse causality between inequality and PIT by using the variation in the tax schedules of neighbouring countries. Our results confirm a statistically significant negative association between the progressivity of PIT and income inequality. Overall, we find that especially the average and the marginal tax rate have the potential to reduce income inequality. This finding is qualitatively robust across various empirical specifications.
The synthesis and the crystal structure of the double cluster compound [Nb₆Cl₁₄(MeCN)₄][Nb₆Cl₁₄(pyz)₄]·6CH₃CN are described. The synthesis is based on a partial ligand exchange reaction, which proceeds upon dissolving [Nb₆Cl₁₄(pyz)₄]·2CH₂Cl₂ in acetonitrile. The compound is built up of two discrete neutral cluster units, which consist of octahedra of Nb₆ atoms coordinated by 12 edge-bridging and two terminal chlorido ligands, with four acetonitrile ligands on one cluster unit and four pyrazine ligands on the other. Co-crystallized acetonitrile molecules are also present. The single-crystal structure determination has revealed a cluster arrangement in which the [Nb₆Cl₁₄(pyz)₄] units are connected by (halogen) lone-pair-(pyrazine) π interactions. These lead to chains of [Nb₆Cl₁₄(pyz)₄] clusters. These chains are further connected into cluster layers by (nitrile-halogen) dipole-dipole interactions, in which the [Nb₆Cl₁₄(MeCN)₄] units and the co-crystallized MeCN molecules are also involved. These cluster layers are arranged parallel to the crystallographic {011} plane.
Early sensitivity to prosodic phrase boundary cues: Behavioral evidence from German-learning infants
(2023)
This dissertation seeks to shed light on the relation between phrasal prosody and developmental speech perception in German-learning infants. Three independent empirical studies explore the role of the acoustic correlates of major prosodic boundaries, specifically pitch change, final lengthening, and pause, in infant boundary perception. Moreover, it was examined whether the sensitivity to prosodic phrase boundary markings changes during the first year of life as a result of perceptual attunement to the ambient language (Aslin & Pisoni, 1980).
Using the headturn preference procedure, six- and eight-month-old monolingual German-learning infants were tested on their discrimination of two different prosodic groupings of the same list of coordinated names, either with or without an internal intonational phrase boundary (IPB) after the second name, that is, [Moni und Lilli] [und Manu] or [Moni und Lilli und Manu]. The boundary marking was systematically varied with respect to single prosodic cues or specific cue combinations.
Results revealed that six- and eight-month-old German-learning infants successfully detect the internal prosodic boundary when it is signaled by all three main boundary cues: pitch change, final lengthening, and pause. For eight-, but not for six-month-olds, the combination of pitch change and final lengthening, without the occurrence of a pause, is sufficient. This mirrors adult-like perception by eight months (Holzgrefe-Lang et al., 2016). Six-month-olds detect a prosodic phrase boundary signaled by final lengthening and pause. The findings suggest a developmental change in German prosodic boundary cue perception from a strong reliance on the pause cue at six months to a differentiated sensitivity to the more subtle cues pitch change and final lengthening at eight months. For neither six- nor eight-month-olds is the occurrence of pitch change or final lengthening as a single cue sufficient, similar to what has been observed for adult speakers of German (Holzgrefe-Lang et al., 2016).
The present dissertation provides new scientific knowledge on infants’ sensitivity to individual prosodic phrase boundary cues in the first year of life. Methodologically, the studies are pathbreaking since they used exactly the same stimulus materials – phonologically thoroughly controlled lists of names – that have also been used with adults (Holzgrefe-Lang et al., 2016) and with infants in a neurophysiological paradigm (Holzgrefe-Lang, Wellmann, Höhle, & Wartenburger, 2018), allowing for comparisons across age (six/eight months and adults) and method (behavioral vs. neurophysiological). Moreover, the materials are suited to being transferred to other languages, allowing for crosslinguistic comparison. Taken together with a study using similar French materials (van Ommen et al., 2020), the observed change in sensitivity in German-learning infants can be interpreted as language-specific: from an initial language-general processing mechanism that primarily focuses on the presence of pauses to a language-specific processing that takes into account the prosodic properties available in the ambient language. The developmental pattern is discussed as an interplay of acoustic salience, prosodic typology (prosodic regularity) and cue reliability.
Decubitus (pressure ulcer) is one of the most relevant conditions in nursing and among the most expensive to treat. It is caused by sustained pressure on tissue, so it particularly affects bed-bound patients. This work lays a foundation for pressure-mattress-based decubitus prophylaxis by implementing a solution to the single-frame 2D Human Pose Estimation problem.
For this, deep learning methods are employed. Two approaches are examined: a coarse-to-fine Convolutional Neural Network for direct regression of joint coordinates, and a U-Net for the derivation of probability distribution heatmaps.
We conclude that training our models on a combined dataset of the publicly available Bodies at Rest and SLP data yields the best results. Furthermore, various preprocessing techniques are investigated, and a hyperparameter optimization is performed to discover an improved model architecture.
Another finding indicates that the heatmap-based approach outperforms direct regression.
This model achieves a mean per-joint position error of 9.11 cm for the Bodies at Rest data and 7.43 cm for the SLP data.
We find that it generalizes well on data from mattresses other than those seen during training but has difficulties detecting the arms correctly.
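The heatmap-based pipeline and the reported error metric can be illustrated with a minimal sketch: decode each per-joint heatmap to the coordinates of its peak, then compute the mean per-joint position error (MPJPE) against ground truth. The array shapes, grid resolution, and centimeter scaling below are illustrative assumptions, not the thesis code:

```python
import math

def decode_heatmap(heatmap):
    """Return the (row, col) of the highest-probability cell in a 2D heatmap."""
    best, pos = -1.0, (0, 0)
    for r, row in enumerate(heatmap):
        for c, v in enumerate(row):
            if v > best:
                best, pos = v, (r, c)
    return pos

def mpjpe(pred, target, cell_cm=1.0):
    """Mean per-joint position error in cm, given per-joint grid coordinates."""
    dists = [math.dist(p, t) * cell_cm for p, t in zip(pred, target)]
    return sum(dists) / len(dists)

# Toy example: two joints on a 4x4 pressure grid, assuming 1 cm per cell.
heatmaps = [
    [[0.0, 0.1, 0.0, 0.0],
     [0.0, 0.9, 0.1, 0.0],
     [0.0, 0.1, 0.0, 0.0],
     [0.0, 0.0, 0.0, 0.0]],
    [[0.0, 0.0, 0.0, 0.0],
     [0.0, 0.0, 0.0, 0.0],
     [0.0, 0.0, 0.8, 0.1],
     [0.0, 0.0, 0.1, 0.0]],
]
pred = [decode_heatmap(h) for h in heatmaps]
target = [(1, 1), (2, 3)]
print(pred)                  # [(1, 1), (2, 2)]
print(mpjpe(pred, target))   # 0.5
```

In practice, sub-pixel refinement of the argmax is common, but the simple peak decode above already captures the idea behind a heatmap-based estimator.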
Additionally, we give a brief overview of the medical data annotation tool annoto, which we developed in the bachelor project, and conclude that the Scrum framework and agile practices enhanced our development workflow.
Divergent thinking is the ability to produce numerous and diverse responses to questions or tasks, and it is used as a predictor of creative achievement. It plays a significant role in business organizations’ innovation processes and in the recognition of new business opportunities. Drawing upon the cumulative process model of creativity in entrepreneurship, we hypothesize that divergent thinking has a lasting effect on post-launch entrepreneurial outcomes related to innovation and growth, but that this relation might not always be linear. Additionally, we hypothesize that domain-specific experience has a moderating role in this relation. We test our hypotheses on a representative longitudinal sample of 457 German business founders, whom we observe up until 40 months after start-up. We find strong relative effects for innovation and growth outcomes. For survival, we find conclusive evidence of non-linearities in the effects of divergent thinking. Additionally, we show that such effects are moderated by the type of domain-specific experience that entrepreneurs gathered pre-launch, as it shapes the individual’s ideational abilities to fit into more sophisticated strategies regarding entrepreneurial creative achievement. Our findings have relevant policy implications in characterizing and identifying business start-ups with growth and innovation potential, allowing a more efficient allocation of public and private funds.
We analyze the impact of women’s managerial representation on the gender pay gap among employees at the establishment level, using German Linked-Employer-Employee Data from the years 2004 to 2018. For identification of a causal effect, we employ a panel model with establishment fixed effects and industry-specific time dummies. Our results show that a higher share of women in management significantly reduces the gender pay gap within the firm. An increase in the share of women in first-level management, e.g. from zero to above 33 percent, decreases the adjusted gender pay gap from a baseline of 15 percent by 1.2 percentage points, i.e. to roughly 14 percent. The effect is stronger for women in second-level than in first-level management, indicating that women managers with closer interactions with their subordinates have a higher impact on the gender pay gap than women at higher management levels. The results are similar for East and West Germany, despite the lower gender pay gap and more gender-egalitarian social norms in East Germany. From a policy perspective, we conclude that increasing the number of women in management positions has the potential to reduce the gender pay gap to a limited extent. However, further policy measures will be needed in order to fully close the gender gap in pay.
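A panel model with establishment fixed effects can be estimated by the within transformation: demean the outcome and the regressor within each establishment and run OLS on the demeaned data, so that time-invariant establishment effects drop out. A minimal one-regressor sketch on synthetic data (variable names and numbers are illustrative, not from the study):

```python
from collections import defaultdict

def within_ols(ids, x, y):
    """One-regressor fixed-effects estimator via the within transformation."""
    # Accumulate per-establishment sums to compute group means.
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for i, xi, yi in zip(ids, x, y):
        s = sums[i]
        s[0] += xi; s[1] += yi; s[2] += 1
    means = {i: (s[0] / s[2], s[1] / s[2]) for i, s in sums.items()}
    # Demean within establishment and run OLS without intercept.
    num = den = 0.0
    for i, xi, yi in zip(ids, x, y):
        mx, my = means[i]
        num += (xi - mx) * (yi - my)
        den += (xi - mx) ** 2
    return num / den

# Synthetic panel: two establishments with different fixed effects,
# both generated with a true slope of 2.0 (y = alpha_i + 2 * x).
ids = ["a", "a", "a", "b", "b", "b"]
x   = [0.0, 1.0, 2.0, 0.0, 1.0, 2.0]
y   = [5.0, 7.0, 9.0, 1.0, 3.0, 5.0]
print(within_ols(ids, x, y))  # 2.0
```

A pooled OLS on these data would be biased by the establishment-level intercepts; the within transformation removes them before estimating the slope.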
Layered structures are ubiquitous in nature and in industrial products. Their individual layers can have different mechanical/thermal properties and functions, each independently contributing to the performance of the whole layered structure in its relevant application. Tuning each layer affects the performance of the whole layered system.
Pores are utilized in various disciplines where low density but large surface area is demanded. Besides, open and interconnected pores can act as transfer channels for guest chemical molecules. The shape of the pores influences the compression behavior of the material. Moreover, introducing pores decreases the density and consequently the mechanical strength. To maintain a defined mechanical strength under various stresses, a porous structure can be reinforced by adding a reinforcing agent, such as a fiber, a filler, or a layered structure, to bear the mechanical stress in the intended application.
In this context, this thesis aimed to generate new functions in bilayer systems by combining layers having different moduli and/or porosity, and to develop suitable processing techniques to access these structures.
Manufacturing processes for layered structures often employ organic solvents, which typically cause environmental pollution. In this regard, the bilayer structures studied here were manufactured by processes free of organic solvents.
In this thesis, three bilayer systems were studied, each addressing an individual research question.
First, while various methods of introducing pores in the melt phase have been reported for one-layer constructs with simple geometries, can such methods be applied to a bilayer structure, giving two porous layers?
This was addressed with Bilayer System 1. Two porous layers were obtained by melt-blending two different polyurethanes (PU) with polyvinyl alcohol (PVA) in a co-continuous phase, followed by sequential injection molding and leaching of the PVA phase in deionized water. A porosity of 50 ± 5% with high interconnectivity was obtained, in which the pore sizes in both layers ranged from 1 µm to 100 µm with an average of 22 µm. The obtained pores were tailored by applying an annealing treatment at the relevant high temperatures of 110 °C and 130 °C, which allowed the porosity to be kept constant. The disadvantage of this system is that a maximum of 50% porosity could be reached and the removal of the leaching material in the weld line section of both layers is not guaranteed. Such a construct serves as a model of a bilayer porous structure for determining structure-property relationships with respect to the pore size, porosity and mechanical properties of each layer. This fabrication method is also applicable to complex geometries by designing a suitable mold for injection molding.
Secondly, the scCO2 foaming process at elevated temperature and pressure is considered a green manufacturing process. Employing this method as a post-treatment can alter the orientation history of polymer chains created by previous fabrication methods. Can a bilayer structure be fabricated by a combination of sequential injection molding and the scCO2 foaming process, in which a porous layer is supported by a compact layer?
Such a construct (Bilayer System 2) was generated by sequential injection molding of a PCL (Tm ≈ 58 °C) layer and a PLLA (Tg ≈ 58 °C) layer. Soaking this structure in an autoclave with scCO2 at T = 45 °C and P = 100 bar led to the selective foaming of the PCL with a porosity of 80%, while the PLLA layer was kept compact. The scCO2 treatment led to the formation of a porous core and a skin layer in the PCL; however, the degree of crystallinity of the PLLA layer increased from 0 to 50% at the defined temperature and pressure. The microcellular structure of the PCL as well as the degree of crystallinity of the PLLA were controlled by increasing the soaking time.
Thirdly, wrinkles on surfaces at the micro/nano scale alter surface-related properties. Wrinkles form on the surface of a bilayer structure consisting of a compliant substrate and a stiff thin film. However, the reported wrinkles were not reversible. Moreover, dynamic wrinkles at the nano and micro scale have numerous examples in nature, such as gecko foot hairs offering reversible adhesion and the self-cleaning ability of lotus leaves, which alters the hydrophobicity of the surface. It was envisioned to imitate this biomimetic function in the bilayer structure, where self-assembled on/off patterns would be realized on the surface of this construct.
In summary, developing layered constructs having different properties/functions in the individual layers, or exhibiting a new function as a consequence of the layered structure, can provide novel insights for designing layered constructs in various disciplines such as the packaging and transport industry, the aerospace industry and health technology.
This dissertation focuses on the understanding of the optical manipulation of microgels dispersed in an aqueous solution of an azobenzene-containing surfactant. The work consists of three parts, where each part is a systematic investigation of (1) the photo-isomerization kinetics of the surfactant in complex with the microgel polymer matrix, (2) light-driven diffusioosmosis (LDDO) in microgels, and (3) the photo-responsivity of microgels upon complexation with spiropyran.
The first part comprises three publications. The first one [P1] investigates the photo-isomerization kinetics and the corresponding isomer composition at the photo-stationary state of the photo-sensitive surfactant conjugated with charged polymers or micro-sized polymer networks, to understand the structural response of such photo-sensitive complexes. We report that the photo-isomerization of the azobenzene-containing cationic surfactant is slower in a polymer complex compared to being purely dissolved in an aqueous solution. The surfactant aggregates near the polyelectrolyte chains at concentrations much lower than the bulk critical micelle concentration. This, along with the inhibition of the photo-isomerization kinetics due to steric hindrance within the densely packed aggregates, pushes the isomer ratio to a higher trans-isomer concentration for all irradiation wavelengths.
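The slower approach to the photostationary state in the polymer complex can be sketched with a minimal first-order kinetics model, in which the trans fraction relaxes mono-exponentially towards its photostationary value under continuous irradiation. All rate constants and fractions below are illustrative assumptions, not measured values from the thesis:

```python
import math

def trans_fraction(t, k, f_pss, f0=1.0):
    """Trans-isomer fraction under continuous irradiation, assuming a
    mono-exponential first-order approach to the photostationary state:
    f(t) = f_pss + (f0 - f_pss) * exp(-k * t)."""
    return f_pss + (f0 - f_pss) * math.exp(-k * t)

# Illustrative comparison: the free surfactant relaxes faster (larger k),
# while the polymer-complexed one is slower and settles at a higher
# trans fraction, mirroring the qualitative trend described above.
for label, k, f_pss in [("free", 0.50, 0.10), ("complexed", 0.05, 0.40)]:
    f = trans_fraction(t=10.0, k=k, f_pss=f_pss)
    print(label, round(f, 3))  # free 0.106, complexed 0.764
```

Fitting such a model to absorbance-vs-time traces at each irradiation wavelength is one standard way to extract effective isomerization rate constants and photostationary compositions.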
The second publication [P2] combines non-adiabatic dynamics simulations of the same surfactant molecules embedded in micelles with absorption spectroscopy measurements of micellar solutions to uncover the reasons for the slowdown of the photo-induced trans → cis azobenzene isomerization at concentrations higher than the critical micelle concentration (CMC). The simulations reveal a decrease of the isomerization quantum yields for molecules inside the micelles, and a reduction of the extinction coefficients upon micellization is observed. These findings explain the deceleration of the trans → cis switching in micelles of the azobenzene-containing surfactants.
Finally, the third publication [P3] focuses on the kinetics of adsorption and desorption of the same surfactant within anionic microgels in the dark and under continuous irradiation. Experimental data demonstrate that microgels can serve as a selective absorber of the trans isomers. The interaction of the isomers with the gel matrix induces a remotely controllable collapse or swelling at appropriate irradiation wavelengths. By measuring the kinetics of the microgel size response and knowing the exact isomer composition under light exposure, we calculate the adsorption rate of the trans-isomers.
The second part comprises two publications. The first publication [P4] reports on the phenomenon of light-driven diffusioosmotic (DO) long-range attractive and repulsive interactions between micro-sized objects, whose range extends to several times the size of the microparticles and which can be adjusted to point towards or away from the particle by varying irradiation parameters such as the intensity or wavelength of light. The phenomenon is fueled by the aforementioned photosensitive surfactant. The complex interplay of the dynamic exchange of isomers and the photo-isomerization rate yields relative concentration gradients of the isomers in the vicinity of a micro-sized object, inducing a local diffusioosmotic (DO) flow and thereby making the surface act as a micropump.
The second publication [P5] aims exclusively at the visualization and investigation of the DO flows generated by microgels, using small tracer particles. As with micro-sized objects, the flow is able to push adjacent tracers over distances several times larger than the microgel size. Here we report that the direction and the strength of the local LDDO depend on the intensity, the irradiation wavelength and the amount of surfactant adsorbed by the microgel. For example, the flow pattern around a microgel is directed radially outward and can be maintained quasi-indefinitely under exposure at 455 nm when the trans:cis ratio is 2:1, whereas irradiation at 365 nm generates a radially transient flow pattern, which inverts at lower intensities.
Lastly, the third part consists of one publication [P6] which, unlike the previous works, reports on the kinetics of photo- and thermo-switching of a new surfactant, namely spiropyran, upon exposure to light of different wavelengths, and on its interaction with p(NIPAM-AA) microgels. The surfactant, being an amphiphile, switches between its ring-closed spiropyran (SP) form and its ring-open merocyanine (MC) form, which results in a change in the hydrophilic-hydrophobic balance of the surfactant: since MC is zwitterionic, it carries, together with the charged head group, three charges on the molecule. Therefore, the MC form of the surfactant is more hydrophilic than the neutral SP form. Here, we investigate the initial shrinkage of the gel particles via charge compensation on first exposure to SP molecules, which results from the complex formation of the molecules with the gel matrix, triggering them to become photo-responsive. The size and VPTT of the microgels during irradiation are shown to result from a combination of the heating of the solution through light absorption by the surfactant (more pronounced in the case of UV irradiation) and the change in the hydrophobicity of the surfactant.
Recent research suggests that design thinking practices may foster the development of needed capabilities in new digitalised landscapes. However, existing publications represent individual contributions, and we lack a holistic understanding of the value of design thinking in a digital world. No review, to date, has offered a holistic retrospection of this research. In response, in this bibliometric review, we aim to shed light on the intellectual structure of multidisciplinary design thinking literature related to capabilities relevant to the digital world in higher education and business settings, highlight current trends and suggest further studies to advance theoretical and empirical underpinnings. Our study addresses this aim using bibliometric methods (bibliographic coupling and co-word analysis), as they are particularly suitable for identifying current trends and future research priorities at the forefront of the research. Overall, bibliometric analyses of the publications dealing with the related topics published in the last 10 years (extracted from the Web of Science database) expose six trends and two possible future research developments, highlighting the expanding scope of the design thinking scientific field related to capabilities required for the (more sustainable and human-centric) digital world. Relatedly, design thinking becomes a relevant approach to be included in higher education curricula and human resources training to prepare students and workers for the changing work demands. This paper is well-suited for education and business practitioners seeking to embed design thinking capabilities in their curricula and for design thinking and other scholars wanting to understand the field and possible directions for future research.
Long COVID patients show symptoms such as fatigue, muscle weakness and pain. Adequate diagnostics are still lacking. Investigating muscle function might be a beneficial approach. The holding capacity (maximal isometric Adaptive Force; AFisomax) was previously suggested to be especially sensitive to impairments. This longitudinal, non-clinical study aimed to investigate the AF in long COVID patients and their recovery process. AF parameters of the elbow and hip flexors were assessed in 17 patients at three time points (pre: long COVID state, post: immediately after first treatment, end: recovery) by an objectified manual muscle test. The tester applied an increasing force on the limb of the patient, who had to resist isometrically for as long as possible. The intensities of 13 common symptoms were queried. At pre, patients started to lengthen their muscles at ~50% of the maximal AF (AFmax), which was then reached during eccentric motion, indicating unstable adaptation. At post and end, AFisomax increased significantly to ~99% and 100% of AFmax, respectively, reflecting stable adaptation. AFmax was statistically similar across all three time points. Symptom intensity decreased significantly from pre to end. The findings revealed a substantially impaired maximal holding capacity in long COVID patients, which returned to normal function with substantial health improvement. AFisomax might be a suitable, sensitive functional parameter to assess long COVID patients and to support the therapy process.
Digital technology offers significant political, economic, and societal opportunities. At the same time, the notion of digital sovereignty has become a leitmotif in German discourse: the state’s capacity to assume its responsibilities and safeguard society’s – and individuals’ – ability to shape the digital transformation in a self-determined way. The education sector exemplifies the challenge faced by Germany, and indeed Europe, of harnessing the benefits of digital technology while navigating concerns around sovereignty. It encompasses education as a core public good, a rapidly growing field of business, and growing pools of highly sensitive personal data. The report describes pathways to mitigating the tension between digitalization and sovereignty at three different levels – state, economy, and individual – through the lens of concrete technical projects in the education sector: the HPI Schul-Cloud (state sovereignty), the MERLOT data spaces (economic sovereignty), and the openHPI platform (individual sovereignty).
Personal data privacy is considered a fundamental right. It forms part of our highest ethical standards and is anchored in legislation and, from the technical perspective, in various best practices. Yet, protecting against personal data exposure is a challenging problem when generating privacy-preserving datasets to support machine learning and data mining operations. The issue is further compounded by the fact that devices such as consumer wearables and sensors track user behaviours at a fine-grained level, thereby accelerating the formation of multi-attribute, large-scale, high-dimensional datasets.
In recent years, increasing news coverage of de-anonymisation incidents, including but not limited to the telecommunication, transportation, financial transaction, and healthcare sectors, has exposed sensitive private information. These incidents indicate that releasing privacy-preserving datasets requires serious consideration from the pre-processing perspective. A critical problem that arises in this regard is the time complexity of applying syntactic anonymisation methods, such as k-anonymity, l-diversity, or t-closeness, to generate privacy-preserving data. Previous studies have shown that this problem is NP-hard.
This thesis focuses on large high-dimensional datasets as an example of a special case of data that is characteristically challenging to anonymise using syntactic methods. In essence, large high-dimensional data contains a large number of attributes relative to the population of attribute values. Applying standard syntactic data anonymisation approaches to such data results either in high information loss, rendering the data useless for analytics operations, or in low privacy due to inferences drawn from the data when information loss is minimised.
We postulate that this problem can be resolved effectively by searching for and eliminating all the quasi-identifiers (QIDs) present in a high-dimensional dataset. Essentially, we formalise the privacy-preserving data sharing problem as the Find-QID problem.
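The Find-QID idea can be illustrated with a naive exhaustive search: an attribute set is a quasi-identifier if some combination of its values occurs in fewer than k rows, and discovering all such sets means testing attribute subsets. This brute-force sketch is my illustration of the problem, not the thesis algorithm; its exponential cost in the number of attributes is exactly why the optimisations discussed in this work are needed:

```python
from itertools import combinations

def is_qid(rows, attrs, k=2):
    """An attribute set is a quasi-identifier if some value combination
    occurs in fewer than k rows (i.e. it violates k-anonymity)."""
    counts = {}
    for row in rows:
        key = tuple(row[a] for a in attrs)
        counts[key] = counts.get(key, 0) + 1
    return any(c < k for c in counts.values())

def find_qids(rows, k=2, max_size=3):
    """Exhaustively search attribute subsets for quasi-identifiers.
    Exponential in the number of attributes - impractical for the
    high-dimensional datasets considered in this thesis."""
    attrs = sorted(rows[0].keys())
    found = []
    for size in range(1, max_size + 1):
        for combo in combinations(attrs, size):
            if is_qid(rows, combo, k):
                found.append(combo)
    return found

# Toy table with hypothetical attribute names.
rows = [
    {"zip": "14469", "age": "30s", "sex": "f"},
    {"zip": "14469", "age": "30s", "sex": "m"},
    {"zip": "14469", "age": "40s", "sex": "m"},
    {"zip": "10115", "age": "30s", "sex": "f"},
]
print(find_qids(rows, k=2))
# [('age',), ('zip',), ('age', 'sex'), ('age', 'zip'), ('sex', 'zip'), ('age', 'sex', 'zip')]
```

Here `sex` alone is not a QID (each value covers two rows), but `age` and `zip` each single out a row, as do all larger combinations containing them.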
Further, we show that, despite the complex nature of absolute privacy, the discovery of QIDs can be achieved reliably for large datasets and the risk of private data exposure through inferences can be circumvented; both are practicably achievable without the need for high-performance computers.
For this purpose, we present, implement, and empirically assess both mathematical and engineering optimisation methods for a deterministic discovery of privacy-violating inferences. This includes a greedy search scheme that efficiently queues QID candidates based on their tuple characteristics, projecting QIDs onto Bayesian inferences, and countering the Bayesian network’s state-space explosion with an aggregation strategy taken from the multigrid context and vectorised GPU acceleration. Part of this work showcases orders of magnitude of processing acceleration, particularly in high dimensions. We even achieve near real-time runtime for previously impractical applications. At the same time, we demonstrate how such contributions could be abused to de-anonymise Kristine A. and Cameron R. in a public Twitter dataset addressing the US Presidential Election 2020.
Finally, this work contributes, implements, and evaluates an extended and generalised version of the novel syntactic anonymisation methodology, attribute compartmentation. Attribute compartmentation promises sanitised datasets without remaining quasi-identifiers while minimising information loss. To prove its functionality in the real world, we partner with digital health experts to conduct a medical use case study. As part of the experiments, we illustrate that attribute compartmentation is suitable for everyday use and, as a positive side effect, even circumvents a common domain issue of base rate neglect.
In an effort to describe and produce different formats for video instruction, the research community in technology-enhanced learning, and MOOC scholars in particular, have focused on the general style of video production: whether it is a digitally scripted “talk-and-chalk” or a “talking head” version of a learning unit. Since these production styles include various sub-elements, this paper deconstructs the inherited elements of video production in the context of educational live-streams. Using over 700 videos from both the synchronous and asynchronous modalities of large video-based platforms (YouTube and Twitch), 92 features were found across eight categories of video production. These include commonly analyzed features such as the use of a green screen and a visible instructor, but also less studied features such as social media connections and changing the camera perspective depending on the topic being covered. Overall, the research results enable an analysis of common video production styles and provide a toolbox for categorizing new formats – independent of their final (a)synchronous use in MOOCs. Keywords: video production, MOOC video styles, live-streaming.
Invention
(2023)
This entry addresses invention from five different perspectives: (i) definition of the term, (ii) mechanisms underlying invention processes, (iii) (pre-)history of human inventions, (iv) intellectual property protection vs open innovation, and (v) case studies of great inventors. Regarding the definition, an invention is the outcome of a creative process taking place within a technological milieu, which is recognized as successful in terms of its effectiveness as an original technology. In the process of invention, a technological possibility becomes realized. Inventions are distinct from either discovery or innovation. In human creative processes, seven mechanisms of invention can be observed, yielding characteristic outcomes: (1) basic inventions, (2) invention branches, (3) invention combinations, (4) invention toolkits, (5) invention exaptations, (6) invention values, and (7) game-changing inventions. The development of humanity has been strongly shaped by inventions ever since early stone tools and the conception of agriculture. An “explosion of creativity” has been associated with Homo sapiens, and inventions in all fields of human endeavor have followed suit, engendering an exponential growth of cumulative culture. This culture development emerges essentially through a reuse of previous inventions, their revision, amendment and rededication. In sociocultural terms, humans have increasingly regulated processes of invention and invention-reuse through concepts such as intellectual property, patents, open innovation and licensing methods. Finally, three case studies of great inventors are considered: Edison, Marconi, and Montessori, next to a discussion of human invention processes as collaborative endeavors.
We conduct a laboratory experiment to study how locus of control operates through people's preferences and beliefs to influence their decisions. Using the principal-agent setting of the delegation game, we test four key channels that conceptually link locus of control to decision-making: (i) preference for agency; (ii) optimism and (iii) confidence regarding the return to effort; and (iv) illusion of control. Knowing the return and cost of stated effort, principals either retain or delegate the right to make an investment decision that generates payoffs for themselves and their agents. Extending the game to the context in which the return to stated effort is unknown allows us to explicitly study the relationship between locus of control and beliefs about the return to effort. We find that internal locus of control is linked to the preference for agency, an effect that is driven by women. We find no evidence that locus of control influences optimism and confidence about the return to stated effort, or that it operates through an illusion of control.
Design Thinking is a human-centered approach to innovation that has become increasingly popular globally over the last decade. While the spread of Design Thinking is well understood and documented in the Western cultural contexts, particularly in Europe and the US due to the popularity of the Stanford-Potsdam Design Thinking education model, this is not the case when it comes to non-Western cultural contexts. This thesis fills a gap identified in the literature regarding how Design Thinking emerged, was perceived, adopted, and practiced in the Arab world. The culture in that part of the world differs from that of the Western context, which impacts the mindset of people and how they interact with Design Thinking tools and methods.
A mixed-methods research approach was followed in which both quantitative and qualitative methods were employed. First, two methods were used in the quantitative phase: a social media analysis using Twitter as a source of data, and an online questionnaire. The results and analysis of the quantitative data informed the design of the qualitative phase in which two methods were employed: ten semi-structured interviews, and participant observation of seven Design Thinking training events.
According to the analyzed data, the Arab world appears to have had an early, though relatively weak and slow, adoption of Design Thinking since 2006. Increasing adoption, however, has been witnessed over the last decade, especially in Saudi Arabia, the United Arab Emirates, and Egypt. The results also show that, despite its limited spread, Design Thinking has been practiced most in education, information technology and communication, administrative services, and the non-profit sector. The way it is being practiced, though, is not fully aligned with how it is practiced and taught in the US and Europe, as most people in the region do not necessarily believe in all the mindset attributes introduced by the Stanford-Potsdam tradition.
Practitioners in the Arab world also seem to shy away from the 'wild side' of Design Thinking in particular, and do not fully appreciate the connection between art and design on the one hand and science and engineering on the other. This calls into question the role of the educational institutions in the region, since, according to the findings, they appear to be leading the movement in promoting and developing Design Thinking in the Arab world. Nonetheless, it is notable that people seem to be aware of the positive impact of applying Design Thinking in the region and of its potential to bring meaningful transformation. At the same time, they are concerned that current cultural, social, political, and economic conditions may challenge this transformation. They therefore call for more awareness and demand Arabic, culturally appropriate programs that respond to local needs. Furthermore, the lack of Arabic content and of local case studies on Design Thinking was identified by several interviewees, and confirmed by the participant observation, as a major challenge that slows the spread of Design Thinking or sometimes hampers capacity building in the region. Other challenges revealed by the study are changing people's mindsets, the lack of dedicated Design Thinking spaces, and the need for clear instructions on how to apply Design Thinking methods and activities. The concept of time and how Arabs deal with it, gender management during trainings, and hierarchy and power dynamics among training participants are also among the identified challenges. Another key finding is the confirmation of التفكير التصميمي as the Arabic term most widely adopted in the region to refer to Design Thinking, although four other Arabic terms were also found to be associated with it.
Based on the findings of the study, the thesis concludes by presenting a list of recommendations on how to overcome the mentioned challenges and what factors should be considered when designing and implementing culturally-customized Design Thinking training in the Arab region.
The MOOChub is a joint web-based catalog of all relevant German and Austrian MOOC platforms, listing well over 750 Massive Open Online Courses (MOOCs). Automatically building such a catalog requires that all partners describe and publicly offer the metadata of their courses in the same way. The paper at hand presents the genesis of the idea to establish a common metadata standard and the story of its subsequent development. The result of this effort is, first, an open-licensed de-facto standard based on existing, commonly used standards and, second, a first prototypical platform that uses this standard: the MOOChub, which lists all courses of the involved partners. This catalog is searchable and provides a comprehensive overview of essentially all MOOCs offered by German and Austrian MOOC platforms. Finally, the upcoming developments to further optimize the catalog and the metadata standard are reported.
At the beginning of 2020, with COVID-19, courts of justice worldwide had to move online to continue providing judicial service. Digital technologies materialized court practices in ways unthinkable shortly before the pandemic, creating resonances with judicial and legal regulation, as well as frictions. A better understanding of the dynamics at play in the digitalization of courts is paramount for designing justice systems that serve their users better, ensure fair and timely dispute resolution, and foster access to justice. Building on three major bodies of literature (e-justice; digitalization and organization studies; and design research), Designing for Digital Justice takes a nuanced approach to account for human and more-than-human agencies.
Using a qualitative approach, I have studied in depth the digitalization of Chilean courts during the pandemic, specifically between April 2020 and September 2022. Leveraging a comprehensive source of primary and secondary data, I traced back the genealogy of the novel materializations of courts' practices structured by the possibilities offered by digital technologies. In five case studies, I show in detail how the courts came to 1) work remotely, 2) host hearings via videoconference, 3) engage with users via social media (i.e., Facebook and Chat Messenger), 4) broadcast a show with judges answering questions from users via Facebook Live, and 5) record, stream, and upload judicial hearings to YouTube to fulfil the publicity requirement of criminal hearings. The digitalization of courts during the pandemic is characterized by a suspended normativity, which makes innovation possible yet presents risks. While digital technologies enabled the judiciary to provide services continuously, they also created the risk of displacing traditional judicial and legal regulation.
Contributing to liminal innovation and digitalization research, Designing for Digital Justice theorizes four phases: 1) the pre-digitalization phase, resulting in the development of regulation; 2) the hotspot of digitalization, resulting in the extension of regulation; 3) digital innovation, redeveloping regulation (moving to a new, preliminary phase); and 4) the permanence of temporary practices, displacing regulation. Contributing to design research, Designing for Digital Justice provides new possibilities for innovation in the courts, focusing on different levels to better address tensions generated by digitalization. Fellow researchers will find in these pages a sound theoretical advancement at the intersection of digitalization and justice, with novel methodological references. Practitioners will benefit from the actionable governance framework, the Designing for Digital Justice Model, which provides three fields of possibilities for action to design better justice systems. Only by taking into account digital, legal, and social factors can we design better systems that promote access to justice, the rule of law, and, ultimately, social peace.
The Security Operations Center (SOC) is a specialized unit responsible for managing security within enterprises. To aid in its responsibilities, the SOC relies heavily on a Security Information and Event Management (SIEM) system that functions as a centralized repository for all security-related data, providing a comprehensive view of the organization's security posture. Because they offer such insights, SIEMs are considered indispensable tools facilitating SOC functions such as monitoring, threat detection, and incident response.
Despite advancements in big data architectures and analytics, most SIEMs fall short of keeping pace. Architecturally, they function merely as log search engines, lacking the support for distributed large-scale analytics. Analytically, they rely on rule-based correlation, neglecting the adoption of more advanced data science and machine learning techniques.
This thesis first proposes a blueprint for next-generation SIEM systems that emphasize distributed processing and multi-layered storage to enable data mining at a big data scale. Building on this architectural support, it then introduces two data mining approaches for advanced threat detection as part of SOC operations.
The first is a novel graph mining technique that formulates threat detection within the SIEM system as a large-scale graph mining and inference problem, built on the principles of guilt-by-association and exempt-by-reputation. The approach entails the construction of a Heterogeneous Information Network (HIN) that models shared characteristics and associations among entities extracted from SIEM-related events and logs. A novel graph-based inference algorithm then infers a node's maliciousness score from its associations with other entities in the HIN. The second is an innovative outlier detection technique that imitates a SOC analyst's reasoning process to find anomalies. The approach emphasizes explainability and simplicity, achieved by combining the output of simple context-aware univariate submodels that each calculate an outlier score per entry.
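The combination of simple univariate submodels into one explainable score can be sketched as follows. The robust z-score submodel and the plain averaging rule are assumptions for illustration; the thesis's actual context-aware submodels are not specified in this abstract:

```python
import statistics

def robust_zscores(values):
    """One univariate submodel (an assumed example): median/MAD-based
    z-scores, so a single extreme entry cannot mask itself."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values) or 1e-9
    return [abs(v - med) / mad for v in values]

def combined_outlier_scores(events, features):
    """Average the per-feature univariate scores into one outlier score
    per event; each feature's contribution stays inspectable, which is
    what makes the combined score explainable."""
    per_feature = {f: robust_zscores([e[f] for e in events]) for f in features}
    return [
        sum(per_feature[f][i] for f in features) / len(features)
        for i in range(len(events))
    ]
```

Because the final score is just a mean of per-feature scores, an analyst can read off which feature drove an alert, mirroring the simplicity and explainability goals stated above.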
Both approaches were tested in academic and real-world settings, demonstrating high performance when compared to other algorithms as well as practicality alongside a large enterprise's SIEM system.
This thesis establishes the foundation for next-generation SIEM systems that can enhance today's SOCs and facilitate the transition from human-centric to data-driven security operations.
Throughout the last ~3 million years, the Earth's climate system was characterised by cycles of glacial and interglacial periods. The current warm period, the Holocene, is comparatively stable and stands out from this long-term cyclicality. However, since the industrial revolution, the climate has been increasingly affected by a human-induced increase in greenhouse gas concentrations. While instrumental observations are used to describe changes over the past ~200 years, indirect observations via proxy data are the main source of information beyond this instrumental era. These data are indicators of past climatic conditions, stored in palaeoclimate archives around the Earth. The proxy signal is, however, also affected by processes independent of the prevailing climatic conditions. In particular, for sedimentary archives such as marine sediments and polar ice sheets, material may be redistributed during or after the initial deposition and subsequent formation of the archive. This leads to noise in the records, challenging reliable reconstructions on local or short time scales. This dissertation characterises the initial deposition of the climatic signal and quantifies the resulting archive-internal heterogeneity and its influence on the observed proxy signal, in order to improve the representativity and interpretation of climate reconstructions from marine sediments and ice cores.
To this end, the horizontal and vertical variation in radiocarbon content of a box core from the South China Sea is investigated. The three-dimensional resolution is used to quantify the true uncertainty in radiocarbon age estimates from planktonic foraminifera with an extensive sampling scheme, including different sample volumes and replicated measurements of batches of small and large numbers of specimens. An assessment of the variability stemming from sediment mixing by benthic organisms reveals strong internal heterogeneity. Hence, sediment mixing leads to substantial time uncertainty in proxy-based reconstructions, with error terms two to five times larger than previously assumed.
A second three-dimensional analysis, of the upper snowpack, provides insights into the heterogeneous signal deposition and imprint in snow and firn. A new study design combining a structure-from-motion photogrammetry approach with two-dimensional isotopic data is applied at a study site in the accumulation zone of the Greenland Ice Sheet. The photogrammetry method reveals the intermittent character of snowfall and a layer-wise snow deposition, with substantial contributions by wind-driven erosion and redistribution to the final, spatially variable accumulation, and illustrates the evolution of stratigraphic noise at the surface. The isotopic data show the preservation of stratigraphic noise within the upper firn column, leading to a spatially variable climate signal imprint and heterogeneous layer thicknesses. Additional post-depositional modifications due to snow-air exchange are also investigated, but without a conclusive quantification of their contribution to the final isotopic signature.
Finally, this characterisation and quantification of the complex signal formation in marine sediments and polar ice contributes to a better understanding of the signal content in proxy data which is needed to assess the natural climate variability during the Holocene.
In late summer, migratory bats of the temperate zone face the challenge of accomplishing two energy-demanding tasks almost at the same time: migration and mating. Both require information and involve search efforts, such as localizing prey or finding potential mates. In non-migrating bat species, playback studies showed that listening to vocalizations of other bats, both con- and heterospecifics, may help a recipient bat to find foraging patches and mating sites. However, we are still unaware of the degree to which migrating bats depend on con- or heterospecific vocalizations for identifying potential feeding or mating opportunities during nightly transit flights. Here, we investigated the vocal responses of Nathusius' pipistrelle bats, Pipistrellus nathusii, to simulated feeding and courtship aggregations at a coastal migration corridor. We presented migrating bats with either feeding buzzes or courtship calls of their own or of a heterospecific migratory species, the common noctule, Nyctalus noctula. We expected that during migratory transit flights, simulated feeding opportunities would be particularly attractive to bats, as would simulated mating opportunities, which may indicate suitable roosts for a stopover. However, we found that, compared to the natural silence of both pre- and post-playback phases, bats called indifferently during the playback of conspecific feeding sounds, whereas P. nathusii echolocation call activity increased during simulated feeding of N. noctula. In contrast, the call activity of P. nathusii decreased during the playback of conspecific courtship calls, while no response could be detected when heterospecific call types were broadcast. Our results suggest that, while on migratory transits, P. nathusii circumnavigate conspecific mating aggregations, possibly to save time or to reduce the risks associated with social interactions where aggression due to territoriality might be expected.
This avoidance behavior could be a result of optimization strategies by P. nathusii when performing long-distance migratory flights, and it could also explain the lack of a response to simulated conspecific feeding. However, the observed increase in activity in response to simulated feeding of N. noctula suggests that P. nathusii individuals may eavesdrop on other aerial hawking insectivorous species during migration, especially if these occupy a slightly different foraging niche.
Sulfur is an important element that is incorporated into many biomolecules in humans. The incorporation and transfer of sulfur into biomolecules is facilitated by a series of different sulfurtransferases. Among these sulfurtransferases is the human mercaptopyruvate sulfurtransferase (MPST), also designated tRNA thiouridine modification protein (TUM1). The human TUM1 protein has been implicated in a wide range of physiological processes in the cell, including, but not limited to, Molybdenum cofactor (Moco) biosynthesis, cytosolic tRNA thiolation, and the generation of H2S as a signaling molecule in both mitochondria and the cytosol. Previous interaction studies showed that TUM1 interacts with the L-cysteine desulfurase NFS1 and the Molybdenum cofactor biosynthesis protein 3 (MOCS3). Here, we investigate the roles of TUM1 in human cells using CRISPR/Cas9 genetically modified Human Embryonic Kidney cells. We show that TUM1 is involved in the sulfur transfer for Molybdenum cofactor synthesis and tRNA thiomodification, by spectrophotometric measurement of sulfite oxidase activity and liquid chromatography quantification of the level of sulfur-modified tRNA. Further, we show that TUM1 has a role in hydrogen sulfide production and cellular bioenergetics.
Digitalization, as well as sustainability, is gaining increased relevance and has attracted significant attention in research and practice. However, the research published to date on digitalization in the retail sector considers neither the acceptance of related innovations nor their impact on sustainability. Therefore, this article critically analyzes the acceptance by customers of digital technologies in fashion stores, as well as their impact on sustainability in the textile industry. A comprehensive analysis of the literature and the current state of research provides the basis of this paper. Theoretical models, such as the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology 2 (UTAUT 2), enable the evaluation of expectations and acceptance, as well as the assessment of possible inhibitory factors, for the subsequent descriptive and statistical examination of the acceptance of digital technologies in fashion stores. The subject was examined quantitatively. The key findings show that customers do accept digital technologies in fashion stores. The final part of this contribution describes the innovative Digitalization 4 Sustainability Framework, which shows that digital technologies at the point of sale (PoS) in fashion stores could have a positive impact on sustainability. Overall, this paper shows that it is particularly important for fashion stores to concentrate on their individual strengths and customer needs, and to chart a more sustainable course by using digital technologies, in order to achieve added value for customers and to set themselves apart from the competition while designing a more sustainable future. Moreover, fashion stores should make it a point of honor to harness the power of digitalization for the sake of sustainability and economic value creation.
Due to anthropogenic greenhouse gas emissions, Earth’s average surface temperature is steadily increasing. As a consequence, many weather extremes are likely to become more frequent and intense. This poses a threat to natural and human systems, with local impacts capable of destroying exposed assets and infrastructure, and disrupting economic and societal activity. Yet, these effects are not locally confined to the directly affected regions, as they can trigger indirect economic repercussions through loss propagation along supply chains. As a result, local extremes yield a potentially global economic response. To build economic resilience and design effective adaptation measures that mitigate adverse socio-economic impacts of ongoing climate change, it is crucial to gain a comprehensive understanding of indirect impacts and the underlying economic mechanisms.
Presenting six articles in this thesis, I contribute towards this understanding. To this end, I expand on local impacts under current and future climate, the resulting global economic response, as well as the methods and tools to analyze this response.
Starting with a traditional assessment of weather extremes under climate change, the first article investigates extreme snowfall in the Northern Hemisphere until the end of the century. Analyzing an ensemble of global climate model projections reveals an increase of the most extreme snowfall, while mean snowfall decreases.
Assessing repercussions beyond local impacts, I employ numerical simulations to compute indirect economic effects from weather extremes with the numerical agent-based shock propagation model Acclimate. This model is used in conjunction with the recently emerged storyline framework, which involves analyzing the impacts of a particular reference extreme event and comparing them to impacts in plausible counterfactual scenarios under various climate or socio-economic conditions. Using this approach, I introduce three primary storylines that shed light on the complex mechanisms underlying economic loss propagation.
In the second and third articles of this thesis, I analyze storylines for the historical Hurricanes Sandy (2012) and Harvey (2017) in the USA. For this, I first estimate local economic output losses and then simulate the resulting global economic response with Acclimate. The storyline for Hurricane Sandy thereby focuses on global consumption price anomalies and the resulting changes in consumption. I find that the local economic disruption leads to a global wave-like economic price ripple, with upstream effects propagating in the supplier direction and downstream effects in the buyer direction. Initially, an upstream demand reduction causes consumption price decreases, followed by a downstream supply shortage and increasing prices, before the anomalies decay in a normalization phase. A dominant upstream or downstream effect leads to net consumption gains or losses of a region, respectively. Moreover, I demonstrate that a longer direct economic shock intensifies the downstream effect for many regions, leading to an overall consumption loss.
The third article of my thesis builds upon the developed loss estimation method by incorporating projections to future global warming levels. I use these projections to explore how the global production response to Hurricane Harvey would change under further increased global warming. The results show that, while the USA is able to nationally offset direct losses in the reference configuration, other countries have to compensate for increasing shares of counterfactual future losses. This compensation is mainly achieved by large exporting countries, but gradually shifts towards smaller regions. These findings not only highlight the economy’s ability to flexibly mitigate disaster losses to a certain extent, but also reveal the vulnerability and economic disadvantage of regions that are exposed to extreme weather events.
The storyline in the fourth article of my thesis investigates the interaction between global economic stress and the propagation of losses from weather extremes. I examine indirect impacts of weather extremes — tropical cyclones, heat stress, and river floods — worldwide under two different economic conditions: an unstressed economy and a globally stressed economy, as seen during the Covid-19 pandemic. I demonstrate that the adverse effects of weather extremes on global consumption are strongly amplified when the economy is under stress. Specifically, consumption losses in the USA and China double and triple, respectively, due to the global economy’s decreased capacity for disaster loss compensation. An aggravated scarcity intensifies the price response, causing consumption losses to increase.
Advancing the methods and tools used here, the final two articles of my thesis extend the agent-based model Acclimate and formalize the storyline approach. With the model extension described in the fifth article, regional consumers make rational choices about the goods they buy, maximizing their utility under a constrained budget. In an out-of-equilibrium economy, these rational consumers are shown to temporarily increase consumption of certain goods in spite of rising prices.
The sixth article of my thesis proposes a formalization of the storyline framework, drawing on multiple studies including storylines presented in this thesis. The proposed guideline defines eight central elements that can be used to construct a storyline.
Overall, this thesis contributes towards a better understanding of the economic repercussions of weather extremes. It achieves this by providing assessments of local direct impacts, highlighting the mechanisms and impacts of loss propagation, and advancing the methods and tools used.
The CH2Cl2/MeOH (1:1) extract of Zanthoxylum holstzianum stem bark showed good antiplasmodial activity (IC50 2.5 ± 0.3 and 2.6 ± 0.3 µg/mL against the W2 and D6 strains of Plasmodium falciparum, respectively). From the extract, five benzophenanthridine alkaloids [8-acetonyldihydrochelerythrine (1), nitidine (2), dihydrochelerythine (3), norchelerythrine (5), arnottianamide (8)], a 2-quinolone alkaloid [N-methylflindersine (4)], a lignan [4,4′-dihydroxy-3,3′-dimethoxylignan-9,9′-diyl diacetate (7)], and a dimer of a benzophenanthridine and a 2-quinoline [holstzianoquinoline (6)] were isolated. The CH2Cl2/MeOH (1:1) extract of the root bark afforded 1, 3-6, 8, chelerythridimerine (9), and 9-demethyloxychelerythrine (10). Holstzianoquinoline (6) is new, and is only the second dimer of a benzophenanthridine and a 2-quinoline linked by a C-C bond reported thus far. The compounds were identified based on spectroscopic evidence. Among the five compounds (1-5) tested against the two strains of P. falciparum, nitidine (IC50 0.11 ± 0.01 µg/mL against the W2 and D6 strains) and norchelerythrine (IC50 0.15 ± 0.01 µg/mL against the D6 strain) were the most active.
Droughts in São Paulo
(2023)
The literature has suggested that droughts and societies are mutually shaped; a better understanding of their coevolution is therefore required for risk reduction and water adaptation. Although the São Paulo Metropolitan Region drew attention because of the 2013-2015 drought, this was not the first such event. This paper revisits this event and the 1985-1986 drought to compare the evolution of drought risk management. Documents and hydrological records are analyzed to evaluate the hazard intensity, preparedness, exposure, vulnerability, responses, and mitigation aspects of both events. Although the hazard intensity and exposure of the latter event were larger than those of the former, the delayed policy implementation and the dependency of service areas on a single reservoir exposed the region to higher vulnerability. In addition to the structural and non-structural tools implemented just after the events, this work raises the possibility of rainwater reuse for reducing the stress on reservoirs.
Its properties make copper one of the world’s most important functional metals. Numerous megatrends are increasing the demand for copper. This requires the prospection and exploration of new deposits, as well as the monitoring of copper quality in the various production steps. A promising technique to perform these tasks is Laser Induced Breakdown Spectroscopy (LIBS). Its unique feature, among others, is the ability to measure on site without sample collection and preparation. In this work, copper-bearing minerals from two different deposits are studied. The first set of field samples come from a volcanogenic massive sulfide (VMS) deposit, the second part from a stratiform sedimentary copper (SSC) deposit. Different approaches are used to analyze the data. First, univariate regression (UVR) is used. However, due to the strong influence of matrix effects, this is not suitable for the quantitative analysis of copper grades. Second, the multivariate method of partial least squares regression (PLSR) is used, which is more suitable for quantification. In addition, the effects of the surrounding matrices on the LIBS data are characterized by principal component analysis (PCA), alternative regression methods to PLSR are tested and the PLSR calibration is validated using field samples.
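The univariate regression (UVR) baseline tested first can be sketched as an ordinary least-squares calibration of copper concentration against the intensity of a single emission line. The line choice and data below are illustrative assumptions; as the abstract notes, matrix effects make such single-line calibrations unreliable, which is why the multi-channel PLSR is preferred for quantification:

```python
def fit_univariate_calibration(intensities, concentrations):
    """Least-squares fit c = a*I + b of concentration against one
    emission-line intensity: the minimal UVR calibration model."""
    n = len(intensities)
    mx = sum(intensities) / n
    my = sum(concentrations) / n
    sxx = sum((x - mx) ** 2 for x in intensities)
    sxy = sum((x - mx) * (y - my) for x, y in zip(intensities, concentrations))
    a = sxy / sxx
    b = my - a * mx
    return a, b

def predict_concentration(a, b, intensity):
    """Apply the fitted calibration to a new line intensity."""
    return a * intensity + b
```

PLSR generalizes this by regressing on latent components built from many spectral channels at once (e.g. `PLSRegression` in scikit-learn), which is what allows it to partially absorb matrix effects that break a single-line fit.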
Keep on scrolling?
(2023)
Smartphones are an integral part of daily life for many people worldwide. However, concerns have been raised that long usage times and the fragmentation of daily life through smartphone usage are detrimental to well-being. This preregistered study assesses (1) whether differences in smartphone usage behaviors between individuals predict differences in a variety of well-being measures (between-person effects) and (2) whether differences in smartphone usage behaviors between situations predict whether an individual is feeling better or worse (within-person effects). In addition to total usage time, several indicators capturing the fragmentation of usage/nonusage time were developed. The study combines objectively measured smartphone usage with self-reports of well-being in surveys (N = 236) and an experience sampling period (N = 378, n = 5775 datapoints). To ensure the robustness of the results, we replicated our analyses in a second measurement period (surveys: N = 305; experience sampling: N = 534, n = 7287 datapoints) and considered the pattern of effects across different operational definitions and constructs. Results show that individuals who use their smartphone more report slightly lower well-being (between-person effect) but no evidence for within-person effects of total usage time emerged. With respect to fragmentation, we found no robust association with well-being.
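One simple way to derive fragmentation indicators from logged usage is to segment timestamped usage events into sessions separated by a maximum gap. The 60-second gap and the sessions-per-minute indicator below are illustrative assumptions, not the study's preregistered operational definitions:

```python
def usage_sessions(timestamps, max_gap=60):
    """Group usage timestamps (in seconds) into sessions: a new session
    starts whenever the gap to the previous event exceeds `max_gap`."""
    sessions = []
    for t in sorted(timestamps):
        if sessions and t - sessions[-1][-1] <= max_gap:
            sessions[-1].append(t)
        else:
            sessions.append([t])
    return sessions

def fragmentation_index(timestamps, max_gap=60):
    """Sessions per minute of observed span: more, shorter sessions in
    the same window indicate more fragmented usage."""
    sessions = usage_sessions(timestamps, max_gap)
    span_minutes = (max(timestamps) - min(timestamps)) / 60 or 1
    return len(sessions) / span_minutes
```

Indicators of this kind complement total usage time: two users with identical totals can differ sharply in how often usage interrupts the day.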
Extreme flooding displaces an average of 12 million people every year. Marginalized populations in low-income countries are at particularly high risk, but industrialized countries are also susceptible to displacement and its inherent societal impacts. The risk of being displaced results from a complex interaction of flood hazard, population exposed in the floodplains, and socio-economic vulnerability. Ongoing global warming changes the intensity, frequency, and duration of flood hazards, undermining existing protection measures. Meanwhile, settlements in attractive yet hazardous flood-prone areas have led to a higher degree of population exposure. Finally, the vulnerability to displacement is altered by demographic and social change, shifting economic power, urbanization, and technological development. These risk components have been investigated intensively in the context of loss of life and economic damage; however, little is known about the risk of displacement under global change.
This thesis aims to improve our understanding of flood-induced displacement risk under global climate change and socio-economic change. This objective is tackled by addressing the following three research questions. First, by focusing on the choice of input data, how well can a global flood modeling chain reproduce flood hazards of historic events that led to displacement? Second, what are the socio-economic characteristics that shape the vulnerability to displacement? Finally, to what degree has climate change potentially contributed to recent flood-induced displacement events?
To answer the first question, a global flood modeling chain is evaluated by comparing simulated flood extent with satellite-derived inundation information for eight major flood events. A focus is set on the sensitivity to different combinations of the underlying climate reanalysis datasets and global hydrological models which serve as input for the global hydraulic model. An evaluation scheme of performance scores shows that simulated flood extent is mostly overestimated when flood protection is not considered, and that only for a few events do the results depend on the choice of global hydrological model. Results are more sensitive to the underlying climate forcing, with two datasets differing substantially from a third one. In contrast, the incorporation of flood protection standards results in an underestimation of flood extent, pointing to potential deficiencies in the protection level estimates or the flood frequency distribution within the modeling chain.
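Comparing a simulated flood mask against a satellite-derived one is typically done with categorical skill scores such as the hit rate, false-alarm ratio, and critical success index. The snippet below is an illustrative sketch with randomly generated binary masks, not the thesis's evaluation scheme; it only shows how such scores are computed from two co-registered flood maps.

```python
# Sketch of flood-extent skill scores from two binary inundation masks.
import numpy as np

rng = np.random.default_rng(2)
observed = rng.random((100, 100)) < 0.2                    # satellite-derived mask
simulated = observed ^ (rng.random((100, 100)) < 0.05)     # model mask with some error

hits = np.sum(simulated & observed)          # flooded in both
misses = np.sum(~simulated & observed)       # observed but not simulated
false_alarms = np.sum(simulated & ~observed) # simulated but not observed

hit_rate = hits / (hits + misses)
far = false_alarms / (hits + false_alarms)
csi = hits / (hits + misses + false_alarms)  # critical success index
print(f"hit rate {hit_rate:.2f}, FAR {far:.2f}, CSI {csi:.2f}")
```

A systematic overestimation of flood extent, as described above for runs without flood protection, would show up here as a high hit rate combined with a high false-alarm ratio and a reduced CSI.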
Following the analysis of a physical flood hazard model, the socio-economic drivers of vulnerability to displacement are investigated in the next step. For this purpose, a satellite-based, global collection of flood footprints is linked with two disaster inventories to match societal impacts with the corresponding flood hazard. For each event, the number of affected people, assets, and critical infrastructure, as well as socio-economic indicators, is computed. The resulting datasets are made publicly available and contain 335 displacement events and 695 mortality/damage events. Based on this new data product, event-specific displacement vulnerabilities are determined and multiple (national) dependencies with the socio-economic predictors are derived. The results suggest that economic prosperity only partially shapes vulnerability to displacement; urbanization, infant mortality rate, the share of elderly, population density and critical infrastructure exhibit a stronger functional relationship, suggesting that higher levels of development are generally associated with lower vulnerability.
Besides examining the contextual drivers of vulnerability, the role of climate change in the context of human displacement is also explored. An impact attribution approach is applied to Cyclone Idai and the associated extreme coastal flooding in Mozambique. A combination of coastal flood modeling and satellite imagery is used to construct factual and counterfactual flood events. This storyline-type attribution method allows investigating the isolated and combined effects of sea level rise and the intensification of cyclone wind speeds on coastal flooding. The results suggest that displacement risk has increased by 3.1 to 3.5% due to the total effects of climate change on coastal flooding, with the effect of increasing wind speed being the dominant factor.
In conclusion, this thesis highlights the potentials and challenges of modeling flood-induced displacement risk. While this work explores the sensitivity of global flood modeling to the choice of input data, new questions arise on how to effectively improve the reproduction of flood return periods and the representation of protection levels. It is also demonstrated that disentangling displacement vulnerabilities is feasible, with the results providing useful information for risk assessments, effective humanitarian aid, and disaster relief. The impact attribution study is a first step in assessing the effects of global warming on displacement risk, leading to new research challenges, e.g., coupling fluvial and coastal flood models or the attribution of other hazard types and displacement events. This thesis is one of the first to address flood-induced displacement risk from a global perspective. The findings motivate the further development of the global flood modeling chain to improve our understanding of displacement vulnerability and the effects of global warming.
The objective of the present paper is to explore the potentials and challenges inherent in conceptualizations of global citizenship education (GCE) in the context of foreign language education. Specifically, we argue for a critical approach to GCE that emphasizes the significance of language as symbolic power by drawing on the concepts of critical literacy (e.g., Freire 1983; Janks 2014) and symbolic competence (Kramsch 2006; 2011; 2021). To illustrate the necessity of such a critical approach to GCE, we critically analyze teaching materials designed for the English language classroom as provided by the curriculum framework (KMK/BMZ 2016). The analysis reveals how reliance on dominant Western liberal and neoliberal epistemologies, norms, and discourses might inadvertently reinforce the very inequalities that GCE actually seeks to address. By foregrounding the relationship between language, symbolic power, and GCE, we further redesign these teaching materials and incorporate pedagogical and methodological principles in line with critical literacy and symbolic competence.
When moral authority speaks
(2023)
The kinetics of water transfer between the lower critical solution temperature (LCST) and upper critical solution temperature (UCST) thermoresponsive blocks in about 10 nm thin films of a diblock copolymer is monitored by in situ neutron reflectivity. The UCST-exhibiting block in the copolymer consists of the zwitterionic poly(4((3-methacrylamidopropyl)dimethylammonio)butane-1-sulfonate), abbreviated as PSBP. The LCST-exhibiting block consists of the nonionic poly(N-isopropylacrylamide), abbreviated as PNIPAM. The as-prepared PSBP80-b-PNIPAM400 films feature a three-layer structure, i.e., PNIPAM, mixed PNIPAM and PSBP, and PSBP. Both blocks have similar transition temperatures (TTs), namely around 32 degrees C for PNIPAM and around 35 degrees C for PSBP, and with a two-step heating protocol (20 degrees C to 40 degrees C and 40 degrees C to 80 degrees C), both TTs are passed. The response to such a thermal stimulus turns out to be complex. Besides a three-step process (shrinkage, rearrangement, and reswelling), a continuous transfer of D2O from the PNIPAM to the PSBP block is observed. Because both LCST and UCST blocks coexist in the PSBP80-b-PNIPAM400 film, water is transferred from the contracting PNIPAM and mixed layers to the expanding PSBP layer. Thus, the hydration kinetics and thermal response differ markedly from a thermoresponsive polymer film with a single LCST transition.
New Relations in the Making?
(2023)
Against the pain
(2023)
Although prior research has shown that reward provision might sometimes increase creativity, little is known about how leadership that clarifies effort-reward contingencies (i.e., contingent reward leadership) is related to team creativity. Drawing on the theory of learned industriousness, we argue that contingent reward leadership can enhance team knowledge exchange and, in turn, team creative performance. However, we propose that this relationship is moderated by leader unpredictability, which can create uncertainty about resource allocation, thereby undermining the otherwise positive effect of contingent reward leadership. In a two-source, lagged design (three-wave) field study with data from 60 organizational teams, we found a conditional indirect (moderated mediation) effect of contingent reward leadership on team creative performance through team knowledge exchange. This conditional indirect effect was positive when leader unpredictability was low, and negative when leader unpredictability was high. Our research provides leaders with clear and actionable advice by showing that contingent reward leadership promotes team creative performance only when leaders act in predictable and consistent ways.
Enterprise Resource Planning (ERP) systems are critical to the success of enterprises, facilitating business operations through standardized digital processes. However, existing ERP systems are unsuitable for startups and small and medium-sized enterprises that grow quickly and require adaptable solutions with low barriers to entry. Drawing upon 15 explorative interviews with industry experts, we examine the challenges of current ERP systems using the task technology fit theory across companies of varying sizes. We describe high entry barriers, high costs of implementing implicit processes, and insufficient interoperability of already employed tools. We present a vision of a future business process platform based on three enablers: Business processes as first-class entities, semantic data and processes, and cloud-native elasticity and high availability. We discuss how these enablers address current ERP systems' challenges and how they may be used for research on the next generation of business software for tomorrow's enterprises.
Scholars have argued that visionary leadership is an effective tool to motivate followers because it provides them with meaning and purpose. However, previous research tells us little about which leaders and under which circumstances leaders engage in visionary leadership. We draw on theories of human and social capital to argue that leader work centrality is an important antecedent of visionary leadership, and especially so for leaders with low organizational tenure. Moreover, we propose that visionary leadership then provides followers with meaningfulness and thereby decreases their turnover intentions. Our predictions were confirmed by data from a two-wave, lagged-design field study with 101 leader-follower dyads. Overall, our research identifies an important antecedent of visionary leadership, a specific situation in which this antecedent is particularly important, and provides empirical evidence for why visionary leadership can bind followers to an organization.
While Information Systems (IS) Research on the individual and workgroup level of analysis is omnipresent, research on the enterprise-level IS is less frequent. Even though research on Enterprise Systems and their management is established in academic associations and conference programs, enterprise-level phenomena are underrepresented. This minitrack provides a forum to integrate existing research streams that traditionally needed to be attached to other topics (such as IS management or IS governance). The minitrack received broad attention. The three selected papers address different facets of the future role of enterprise-wide IS including aspects such as carbonization, ecosystem integration, and technology-organization fit.
Scaling up CSP
(2023)
Concentrating solar power (CSP) is one of the few scalable technologies capable of delivering dispatchable renewable power. Therefore, many expect it to shoulder a significant share of system balancing in a renewable electricity future powered by cheap, intermittent PV and wind power: the IEA, for example, projects 73 GW CSP by 2030 and several hundred GW by 2050 in its Net-Zero by 2050 pathway. In this paper, we assess how fast CSP can be expected to scale up and how long it would take to get new, high-efficiency CSP technologies to market, based on observed trends and historical patterns. We find that to meaningfully contribute to net-zero pathways the CSP sector needs to reach and exceed the maximum historical annual growth rate of 30%/year last seen between 2010 and 2014 and maintain it for at least two decades. Any CSP deployment in the 2020s will rely mostly on mature existing technologies, namely parabolic trough and molten-salt towers, but likely with adapted business models such as hybrid CSP-PV stations, combining the advantages of higher-cost dispatchable and low-cost intermittent power. New third-generation CSP designs are unlikely to play a role in markets during the 2020s, as they are still at or before the pilot stage and, judging from past pilot-to-market cycles for CSP, they will likely not be ready for market deployment before 2030. CSP can contribute to low-cost zero-emission energy systems by 2050, but to make that happen, at the scale foreseen in current energy models, ambitious technology-specific policy support is necessary, as soon as possible and in several countries.
Virtual reality can have advantages for education and learning. However, it must be adequately designed so that the learner benefits from the technological possibilities. Understanding the underlying effects of the virtual learning environment and the learner’s prior experience with virtual reality or prior knowledge of the content is necessary to design a proper virtual learning environment. This article presents a pre-study testing the design of a virtual learning environment for engineering vocational training courses. In the pre-study, 12 employees of two companies joined the training course in one of two degrees of immersion (desktop VR and VR HMD). Quantitative results on learning success, cognitive load, usability, and motivation, as well as qualitative learning-process data, are presented. The qualitative data assessment shows that overall, the employees were satisfied with the learning environment regardless of the level of immersion and that the participants asked for more guidance and structure accompanying the learning process. Further research is needed to test for robust group differences.
The rise of open source models for software and hardware development has catalyzed the debate regarding sustainable business models. Open Source Software has already become a dominant part of the software industry, whereas Open Source Hardware is still a little-researched phenomenon but has the potential to do the same to manufacturing in a wide range of products. This article addresses this potential by introducing a research design to analyze the prototyping phase of six different Open Source Hardware projects tackling ecological, social, and economic challenges. Using a design science research methodology, a process model is developed to concretise the prototype development steps. The prototype phase is important because it is where fundamental decisions are made that affect the openness of the final product. This paper aims to advance the discourse on open production as a concept that enables companies to apply the aspect of openness towards collaboration-oriented and sustainable business models.
The persistence of food preferences, which are crucial for diet-related decisions, is a significant obstacle to changing unhealthy eating behavior. To overcome this obstacle, the current study investigates whether posthypnotic suggestions (PHSs) can enhance food-related decisions by measuring food choices and subjective ratings. After assessing hypnotic susceptibility in Session 1, at the beginning of Session 2, a PHS was delivered aiming to increase the desirability of healthy food items (e.g., vegetables and fruit). After the termination of hypnosis, a set of two tasks was administered twice, once with the PHS activated and once deactivated, in counterbalanced order. The task set consisted of rating 170 pictures of food items, followed by an online supermarket where participants were instructed to select enough food from the same item pool for a fictitious week of quarantine. After 1 week, Session 3 mimicked Session 2 without renewed hypnosis induction to assess the persistence of the PHS effects. The Bayesian hierarchical modeling results indicate that the PHS increased preferences and choices of healthy food items without altering the influence of preferences on choices. In contrast, for unhealthy food items, not only were both preferences and choices decreased by the PHS, but their relationship was also modified. That is, although choices became negatively biased against unhealthy items, preferences played a more dominant role in unhealthy choices when the PHS was activated. Importantly, all effects persisted over 1 week, qualitatively and quantitatively. Our results indicate that although the PHS affected healthy choices through resolve, i.e., they were preferred more and chosen more, unhealthy items were probably chosen less impulsively, through effortful suppression.
Taken together, besides their translational importance for addressing the obesity epidemic in modern societies, our results contribute theoretically to the understanding of hypnosis and food choices.
Purpose
Because steadily growing consumption is not beneficial for nature and climate and is not the same as increasing well-being, an anti-consumerism movement has formed worldwide. The renouncement of dispensable consumption will, however, only establish itself as a significant lifestyle if consumers do not perceive reduced consumption as a personal sacrifice. Since prior research has not yielded a consistent understanding of the relationship between anti-consumption and personal well-being, this paper aims to examine three factors about which theory implies that they may moderate this relationship: decision-control empowerment, market-control empowerment and the value of materialism.
Design/methodology/approach
The analysis is based on data from a large-scale, representative online survey (N = 1,398). Structural equation modelling with latent interaction effects is used to test how three moderators (decision-control empowerment, market-control empowerment and materialism) affect the relationship amongst four types of anti-consumption (e.g. voluntary simplicity) and three different well-being states (e.g. subjective well-being).
Findings
While both dimensions of empowerment almost always directly promote consumer well-being, significant moderation effects are present in only a few but meaningful cases. Although the value of materialism tends to reduce consumers’ well-being, it improves the well-being effect of two anti-consumption styles.
Research limitations/implications
Using only one sample from a wealthy country is a limitation of the study. Researchers should replicate the findings in different nations and cultures.
Practical implications
Consumer affairs practitioners and commercial marketing for sustainably produced, high-quality and long-lasting goods can benefit greatly from these findings.
Social implications
This paper shows that sustainable marketing campaigns can more easily motivate consumers to voluntarily reduce their consumption for the benefit of society and the environment if a high level of market-control empowerment can be communicated to them.
Originality/value
This study provides differentiated new insights into the roles of consumer empowerment, i.e. both decision-control empowerment and market-control empowerment, and the value of materialism to frame specific relationships between different anti-consumption types and various well-being states.
Zimzum
(2023)
The Hebrew word zimzum originally means “contraction,” “withdrawal,” “retreat,” “limitation,” and “concentration.” In Kabbalah, zimzum is a term for God’s self-limitation, undertaken before the creation of the world in order to make that creation possible. Jewish mystic Isaac Luria coined this term in Galilee in the sixteenth century, positing that the God who was “Ein-Sof,” unlimited and omnipresent before creation, must concentrate himself in the zimzum and withdraw in order to make room for the creation of the world in God’s own center. At the same time, God also limits his infinite omnipotence to allow the finite world to arise. Without the zimzum there is no creation, making zimzum one of the basic concepts of Judaism.
The Lurianic doctrine of the zimzum has been considered an intellectual showpiece of the Kabbalah and of Jewish philosophy. The teaching of the zimzum has appeared in the Kabbalistic literature across Central and Eastern Europe, perhaps most famously in Hasidic literature up to the present day and in philosopher and historian Gershom Scholem’s epoch-making research on Jewish mysticism. The zimzum has fascinated Jewish and Christian theologians, philosophers, and writers like no other Kabbalistic teaching. This can be seen across the philosophy and cultural history of the twentieth century as it gained prominence among such diverse authors and artists as Franz Rosenzweig, Hans Jonas, Isaac Bashevis Singer, Harold Bloom, Barnett Newman, and Anselm Kiefer.
This book follows the traces of the zimzum across the Jewish and Christian intellectual history of Europe and North America over more than four centuries, where Judaism and Christianity, theosophy and philosophy, divine and human, mysticism and literature, Kabbalah and the arts encounter, mix, and cross-fertilize the interpretations and appropriations of this doctrine of God’s self-entanglement and limitation.
Seasonal forecasts are of great interest in many areas. Knowing the amount of precipitation for the upcoming season in regions of water scarcity would facilitate better water management. If farmers knew the weather conditions of the upcoming summer at sowing time, they could select those cereal species that are best adapted to these conditions. This would allow farmers to improve the harvest and potentially even reduce the amount of pesticides used. However, the undoubted advantages of seasonal forecasts are often offset by their high degree of uncertainty. The great challenge of generating seasonal forecasts with lead times of several months mainly originates from the chaotic nature of the earth system. In a chaotic system, even tiny differences in the initial conditions can lead to strong deviations in the system’s state in the long run.
In this dissertation we propose an emergent machine learning approach for seasonal forecasting, called the AnlgModel. The AnlgModel combines the analogue method with myopic feature selection and bootstrapping. To benchmark the abilities of the AnlgModel we apply it to seasonal cyclone activity forecasts in the North Atlantic and Northwest Pacific. The AnlgModel demonstrates competitive hindcast skills with two operational forecasts and even outperforms these for long lead times.
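The three ingredients named above — the analogue method, myopic (greedy) feature selection, and bootstrapping — can be sketched as follows. This is a minimal toy illustration, not the AnlgModel itself: the data, the leave-one-out selection criterion, and all parameter choices are hypothetical.

```python
# Toy analogue forecast with myopic feature selection and bootstrapping.
import numpy as np

rng = np.random.default_rng(1)
n_years, n_features = 40, 10
X = rng.normal(size=(n_years, n_features))                     # e.g. seasonal climate indices
y = 2.0 * X[:, 0] - X[:, 3] + 0.3 * rng.normal(size=n_years)   # e.g. cyclone activity index

def analogue_forecast(X_train, y_train, x_new, features, k=5):
    """Average the outcome of the k most similar past seasons (the analogues)."""
    d = np.linalg.norm(X_train[:, features] - x_new[features], axis=1)
    return y_train[np.argsort(d)[:k]].mean()

def myopic_select(X_train, y_train, n_keep=3):
    """Greedily add the feature that most reduces leave-one-out forecast error."""
    selected = []
    for _ in range(n_keep):
        best_f, best_err = None, np.inf
        for f in range(X_train.shape[1]):
            if f in selected:
                continue
            feats = selected + [f]
            err = np.mean([(analogue_forecast(np.delete(X_train, i, 0),
                                              np.delete(y_train, i),
                                              X_train[i], feats) - y_train[i]) ** 2
                           for i in range(len(y_train))])
            if err < best_err:
                best_f, best_err = f, err
        selected.append(best_f)
    return selected

feats = myopic_select(X, y)

# Bootstrap the training years to express forecast uncertainty.
x_new = rng.normal(size=n_features)
boot = [analogue_forecast(X[idx], y[idx], x_new, feats)
        for idx in (rng.integers(0, n_years, n_years) for _ in range(200))]
print("selected features:", feats)
print("forecast: %.2f +/- %.2f" % (np.mean(boot), np.std(boot)))
```

On this synthetic data the greedy selection recovers the truly predictive index (feature 0), and the spread of the bootstrap replicates serves as a simple uncertainty estimate for the seasonal forecast.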
In the second chapter we examine the forecasting strategy of the AnlgModel. We thereby analyse the analogue selection process for the 2017 North Atlantic and the 2018 Northwest Pacific seasonal cyclone activity. The analysis shows that those climate indices which are known to influence the seasonal cyclone activity, such as the Niño 3.4 SST, are correctly represented among the selected analogues. Furthermore, the selected analogues reflect large-scale climate patterns that were identified by expert reports as being determinative for these particular seasons.
In the third chapter we analyse the features that are used by the AnlgModel for its predictions. To this end, we inspect the feature relevance (FR). The FR patterns learned by the AnlgModel show a high congruence with the predictor regions used by the operational forecasts. However, the AnlgModel also discovered new features, such as the SST anomaly in the Gulf of Guinea during November. This SST pattern exhibits a remarkably high predictive potential for the upcoming Atlantic hurricane activity.
In the final chapter we investigate potential mechanisms that link two of these high-feature-relevance regions to the Atlantic hurricane activity. We mainly focus on ocean surface transport. The ocean surface flow paths are calculated using Lagrangian particle analysis. We demonstrate that the FR patterns in the region of the Canary Islands do not correspond to ocean surface transport. It is instead likely that these FR patterns fingerprint a wind transport of latent heat. The second region to be studied is situated in the Gulf of Guinea. Our analysis shows that the FR patterns seen there do fingerprint ocean surface transport. However, our simulations also show that at least one other mechanism is involved in linking the Gulf of Guinea SST anomaly in November to the hurricane activity of the upcoming season.
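The idea behind Lagrangian particle analysis can be illustrated with a toy example: seed virtual particles at release points and integrate their positions forward through a surface-current field. The sketch below uses a simple analytic flow and forward Euler stepping purely for illustration; real studies of this kind drive the integration with reanalysis current fields and dedicated tools (e.g., OceanParcels), and the release region and velocities here are hypothetical.

```python
# Toy Lagrangian particle advection in a prescribed surface-current field.
import numpy as np

def velocity(x, y):
    """Hypothetical steady surface current (degrees/day): zonal drift plus a weak gyre."""
    u = -0.1 - 0.05 * np.sin(np.radians(y))   # westward drift, latitude-dependent
    v = 0.05 * np.sin(np.radians(x))
    return u, v

def advect(x0, y0, days=90, dt=1.0):
    """Integrate particle positions with forward Euler time stepping."""
    x, y = np.array(x0, float), np.array(y0, float)
    for _ in range(int(days / dt)):
        u, v = velocity(x, y)
        x, y = x + u * dt, y + v * dt
    return x, y

# Release particles near the Canary Islands region and track them for 90 days.
lon0 = np.full(5, -16.0) + np.arange(5) * 0.5
lat0 = np.full(5, 28.0)
lon, lat = advect(lon0, lat0)
print("final longitudes:", np.round(lon, 1))
```

Comparing the endpoints of many such trajectories with the feature-relevance patterns is, in essence, how one tests whether an SST anomaly region is connected to the hurricane genesis region by surface transport.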
In this work, the AnlgModel not only demonstrates its outstanding forecast skill but also shows its capabilities as a research tool for detecting oceanic and atmospheric mechanisms.
Nowadays, innovative and entrepreneurial activities and their actors are embedded in interdependent systems to drive joint value creation. Innovation ecosystems and entrepreneurial ecosystems have become established system-level concepts in management research to explain how value transpires between different actors and institutions in distinct contexts. Despite the popularity of the concepts, researchers have critiqued their theoretical depth, conceptual distinctiveness, as well as operationalization and measurement (Autio & Thomas, 2022; Klimas & Czakon, 2022). Furthermore, in light of current-day challenges, research has yet to address how context impacts innovation and entrepreneurial ecosystems and their actors and elements (Wurth et al., 2022).
The aim of this cumulative thesis is to provide a deeper understanding of the conceptualization, operationalization, and measurement of innovation and entrepreneurial ecosystems and investigate how contextual factors can influence the overall ecosystem and its key actors. To this end, bibliometric and empirical-qualitative methods, as well as narrative and systematic literature reviews, are employed. After introducing the research scope and key concepts in Chapter 1, a systematic literature review to operationalize and measure the concept of innovation ecosystems is conducted, and an integrative framework of its composition is introduced in Chapter 2. In Chapter 3, the innovation journal network is outlined by means of science mapping to determine current and emerging research areas characterizing innovation studies. In Chapters 4 and 5, the interplay between the temporal context of the Covid-19 pandemic and the spatial context of entrepreneurial ecosystems is assessed by focusing on the role of organizational resilience and affordances. The findings shed new light on the dynamics and boundaries of entrepreneurial ecosystems as they move between the spatial and digital realm. Building on this, an integrative framework of digital entrepreneurial ecosystems is presented in Chapter 6. The concluding Chapter 7 summarizes my thesis’s conceptual, theoretical, and empirical insights, highlighting implications, limitations, and promising future research avenues.
The findings of this cumulative thesis contribute to the theoretical and conceptual advancement of ecosystems in innovation and entrepreneurship by providing insights into the measurement and operationalization of its elements. Furthermore, the results show that contextual factors, such as crisis events or institutional circumstances, influence innovation and entrepreneurial ecosystems and their actors, calling for a more nuanced consideration of ecosystem configurations and dynamics. By drawing from the theory of affordances, the elements that actually afford value to the actors and how they shift between the physical and digital realm are portrayed. Based on these findings, this thesis introduces novel frameworks and conceptual advancements of the configurations and boundaries of innovation and (digital) entrepreneurial ecosystems, laying the foundation for a renewed understanding of how to design, orchestrate, and evaluate ecosystems today and in the future.
Self-efficacy reflects the self-belief that one can persistently perform difficult and novel tasks while coping with adversity. As such beliefs reflect how individuals behave, think, and act, they are key for successful entrepreneurial activities. While existing literature mainly analyzes the influence of the task-related construct of entrepreneurial self-efficacy, we take a different perspective and investigate, based on a representative sample of 1,405 German business founders, how the personality characteristic of generalized self-efficacy influences start-up performance as measured by a broad set of business outcomes up to 19 months after business creation. Outcomes include start-up survival and entrepreneurial income, as well as growth-oriented outcomes such as job creation and innovation. We find statistically significant and economically important positive effects of high scores of self-efficacy on start-up survival and entrepreneurial income, which become even stronger when focusing on the growth-oriented outcome of innovation. Furthermore, we observe that generalized self-efficacy is similarly distributed between female and male business founders, with effects being partly stronger for female entrepreneurs. Our findings are important for policy instruments that are meant to support firm growth by facilitating the design of more target-oriented offers for training, coaching, and entrepreneurial incubators.
Business processes are regularly modified, either to capture requirements from the organization’s environment or due to internal optimization and restructuring. Implementing the changes into the individual work routines is aided by change management tools. These tools aim at acceptance of the process by, and empowerment of, the process executor. They cover a wide range of general factors and seldom accurately address the changes in task execution and sequence. Furthermore, change is only framed as a learning activity, while most obstacles to change arise from the inability to unlearn or forget behavioural patterns one is acquainted with. Therefore, this paper aims to develop and demonstrate a notation to capture changes in business processes and identify elements that are likely to present obstacles during change. It connects existing research on changes in work routines and psychological insights on unlearning and intentional forgetting to the BPM domain. The results contribute to more transparency in business process models regarding knowledge changes. They provide better means to understand the dynamics and barriers of change processes.