Personal data privacy is considered a fundamental right. It forms part of our highest ethical standards and is anchored in legislation as well as in technical best practices. Yet protecting against personal data exposure remains a challenging problem when generating privacy-preserving datasets to support machine learning and data mining. The issue is further compounded by the fact that devices such as consumer wearables and sensors track user behaviour at a fine-grained level, accelerating the formation of multi-attribute, large-scale, high-dimensional datasets.
In recent years, growing news coverage of de-anonymisation incidents, in sectors including telecommunications, transportation, financial transactions, and healthcare, has documented the exposure of sensitive private information. These incidents indicate that releasing privacy-preserving datasets requires serious consideration at the pre-processing stage. A critical problem in this regard is the time complexity of applying syntactic anonymisation methods, such as k-anonymity, l-diversity, or t-closeness, to generate privacy-preserving data. Previous studies have shown that this problem is NP-hard.
This thesis focuses on large high-dimensional datasets as a special case of data that is characteristically challenging to anonymise with syntactic methods. In essence, large high-dimensional data contains a large number of attributes relative to the population of attribute values. Applying standard syntactic anonymisation approaches to such data results either in high information loss, rendering the data useless for analytics, or in low privacy, due to inferences that remain possible when information loss is minimised.
We postulate that this problem can be resolved effectively by searching for and eliminating all quasi-identifiers (QIDs) present in a high-dimensional dataset. Essentially, we formalise the privacy-preserving data sharing problem as the Find-QID problem.
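The Find-QID task can be illustrated with a naive brute-force sketch (a deliberately simple stand-in for the optimised search developed in this thesis; the toy records and the k-anonymity threshold are invented for illustration):

```python
from itertools import combinations

def find_qids(rows, attributes, k=2):
    """Brute-force Find-QID: return the minimal attribute subsets whose value
    combinations single out fewer than k records (k-anonymity violations)."""
    qids = []
    for size in range(1, len(attributes) + 1):
        for combo in combinations(attributes, size):
            # Any superset of a known QID is violating too; skip it.
            if any(set(q) <= set(combo) for q in qids):
                continue
            counts = {}
            for row in rows:
                key = tuple(row[a] for a in combo)
                counts[key] = counts.get(key, 0) + 1
            if min(counts.values()) < k:
                qids.append(combo)
    return qids

# Toy dataset: 'sex' alone singles out one record, so it is a QID here.
records = [
    {"zip": "14482", "age": 34, "sex": "F"},
    {"zip": "14482", "age": 34, "sex": "M"},
    {"zip": "14469", "age": 51, "sex": "F"},
    {"zip": "14469", "age": 51, "sex": "F"},
]
minimal_qids = find_qids(records, ["zip", "age", "sex"])
```

Each candidate subset costs a full pass over the data, and the number of subsets grows exponentially with dimensionality, which is exactly why the optimisations discussed below, such as candidate queuing and GPU vectorisation, become necessary at scale.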
Further, we show that despite the complex nature of absolute privacy, QIDs can be discovered reliably for large datasets and the risk of private data exposure through inferences can be circumvented, both practicably and without the need for high-performance computing.
For this purpose, we present, implement, and empirically assess both mathematical and engineering optimisation methods for the deterministic discovery of privacy-violating inferences. These include a greedy search scheme that efficiently queues QID candidates based on their tuple characteristics, a projection of QIDs onto Bayesian inferences, and a counter to the Bayesian network's state-space explosion using an aggregation strategy adopted from the multigrid context together with vectorised GPU acceleration. Part of this work showcases orders-of-magnitude processing acceleration, particularly in high dimensions, achieving near real-time runtimes for previously impractical applications. At the same time, we demonstrate how such contributions could be abused to de-anonymise Kristine A. and Cameron R. in a public Twitter dataset on the 2020 US presidential election.
Finally, this work contributes, implements, and evaluates an extended and generalised version of the novel syntactic anonymisation methodology, attribute compartmentation. Attribute compartmentation promises sanitised datasets without remaining quasi-identifiers while minimising information loss. To prove its functionality in the real world, we partner with digital health experts to conduct a medical use case study. As part of the experiments, we illustrate that attribute compartmentation is suitable for everyday use and, as a positive side effect, even circumvents a common domain issue of base rate neglect.
In an effort to describe and produce different formats of video instruction, the research community in technology-enhanced learning, and MOOC scholars in particular, has focused on the general style of video production: whether a learning unit is delivered as a digitally scripted "talk-and-chalk" or as a "talking head". Since these production styles comprise various sub-elements, this paper deconstructs the inherent elements of video production in the context of educational live-streams. In an analysis of over 700 videos from both synchronous and asynchronous modalities of large video-based platforms (YouTube and Twitch), 92 features were identified in eight categories of video production. These include commonly analyzed features such as the use of a green screen and a visible instructor, but also less studied features such as social media connections and camera perspectives that change depending on the topic being covered. Overall, the results enable an analysis of common video production styles and provide a toolbox for categorizing new formats, independent of their final (a)synchronous use in MOOCs. Keywords: video production, MOOC video styles, live-streaming.
Invention
(2023)
This entry addresses invention from five different perspectives: (i) definition of the term, (ii) mechanisms underlying invention processes, (iii) (pre-)history of human inventions, (iv) intellectual property protection vs open innovation, and (v) case studies of great inventors. Regarding the definition, an invention is the outcome of a creative process taking place within a technological milieu, which is recognized as successful in terms of its effectiveness as an original technology. In the process of invention, a technological possibility becomes realized. Inventions are distinct from both discovery and innovation. In human creative processes, seven mechanisms of invention can be observed, yielding characteristic outcomes: (1) basic inventions, (2) invention branches, (3) invention combinations, (4) invention toolkits, (5) invention exaptations, (6) invention values, and (7) game-changing inventions. The development of humanity has been strongly shaped by inventions ever since early stone tools and the conception of agriculture. An "explosion of creativity" has been associated with Homo sapiens, and inventions in all fields of human endeavor have followed suit, engendering an exponential growth of cumulative culture. This cultural development emerges essentially through the reuse of previous inventions and their revision, amendment, and rededication. In sociocultural terms, humans have increasingly regulated processes of invention and invention-reuse through concepts such as intellectual property, patents, open innovation, and licensing methods. Finally, three case studies of great inventors, Edison, Marconi, and Montessori, are considered, alongside a discussion of human invention processes as collaborative endeavors.
We conduct a laboratory experiment to study how locus of control operates through people's preferences and beliefs to influence their decisions. Using the principal-agent setting of the delegation game, we test four key channels that conceptually link locus of control to decision-making: (i) preference for agency; (ii) optimism and (iii) confidence regarding the return to effort; and (iv) illusion of control. Knowing the return and cost of stated effort, principals either retain or delegate the right to make an investment decision that generates payoffs for themselves and their agents. Extending the game to the context in which the return to stated effort is unknown allows us to explicitly study the relationship between locus of control and beliefs about the return to effort. We find that internal locus of control is linked to the preference for agency, an effect that is driven by women. We find no evidence that locus of control influences optimism and confidence about the return to stated effort, or that it operates through an illusion of control.
Design Thinking is a human-centered approach to innovation that has become increasingly popular globally over the last decade. While the spread of Design Thinking is well understood and documented in Western cultural contexts, particularly in Europe and the US owing to the popularity of the Stanford-Potsdam Design Thinking education model, this is not the case for non-Western cultural contexts. This thesis fills a gap identified in the literature regarding how Design Thinking emerged, was perceived, adopted, and practiced in the Arab world. The culture in that part of the world differs from the Western context, which impacts people's mindsets and how they interact with Design Thinking tools and methods.
A mixed-methods research approach was followed in which both quantitative and qualitative methods were employed. First, two methods were used in the quantitative phase: a social media analysis using Twitter as a source of data, and an online questionnaire. The results and analysis of the quantitative data informed the design of the qualitative phase in which two methods were employed: ten semi-structured interviews, and participant observation of seven Design Thinking training events.
According to the analyzed data, the Arab world appears to have had an early, though relatively weak and slow, adoption of Design Thinking, beginning in 2006. Increasing adoption, however, has been witnessed over the last decade, especially in Saudi Arabia, the United Arab Emirates, and Egypt. The results also show that despite its limited spread, Design Thinking has been practiced most in education, information technology and communication, administrative services, and the non-profit sectors. The way it is practiced, though, is not fully aligned with how it is practiced and taught in the US and Europe, as most people in the region do not necessarily believe in all the mindset attributes introduced by the Stanford-Potsdam tradition.
Practitioners in the Arab world also seem to shy away from the 'wild side' of Design Thinking in particular, and do not fully appreciate the connection between art and design on the one hand and science and engineering on the other. This calls into question the role of educational institutions in the region, since, according to the findings, they appear to be leading the movement in promoting and developing Design Thinking in the Arab world. Nonetheless, people seem to be aware of the positive impact of applying Design Thinking in the region and its potential to bring meaningful transformation. However, they also seem concerned about current cultural, social, political, and economic challenges that may hinder this transformation. They therefore call for more awareness and for Arabic, culturally appropriate programs that respond to local needs. In addition, the lack of Arabic content and local case studies on Design Thinking was identified by several interviewees, and confirmed by the participant observation, as a major challenge slowing the spread of Design Thinking or sometimes hampering capacity building in the region. Other challenges revealed by the study are changing people's mindsets, the lack of dedicated Design Thinking spaces, and the need for clear instructions on how to apply Design Thinking methods and activities. The concept of time and how Arabs deal with it, gender management during trainings, and hierarchy and power dynamics among training participants are also among the identified challenges. A further key finding is the confirmation of التفكير التصميمي as the Arabic term most widely adopted in the region to refer to Design Thinking, given that four other Arabic terms were also found to be associated with it.
Based on the findings of the study, the thesis concludes by presenting a list of recommendations on how to overcome the identified challenges and which factors should be considered when designing and implementing culturally customized Design Thinking training in the Arab region.
The MOOChub is a joint web-based catalog of all relevant German and Austrian MOOC platforms, listing well over 750 Massive Open Online Courses (MOOCs). Automatically building such a catalog requires that all partners describe and publicly offer the metadata of their courses in the same way. The paper at hand presents the genesis of the idea to establish a common metadata standard and the story of its subsequent development. The result of this effort is, first, an open-licensed de facto standard based on existing, commonly used standards and, second, a first prototypical platform using this standard: the MOOChub, which lists all courses of the involved partners. This catalog is searchable and provides a comprehensive overview of essentially all MOOCs offered by German and Austrian MOOC platforms. Finally, upcoming developments to further optimize the catalog and the metadata standard are reported.
At the beginning of 2020, with COVID-19, courts of justice worldwide had to move online to continue providing judicial service. Digital technologies materialized court practices in ways unthinkable shortly before the pandemic, creating both resonances and frictions with judicial and legal regulation. A better understanding of the dynamics at play in the digitalization of courts is paramount for designing justice systems that serve their users better, ensure fair and timely dispute resolution, and foster access to justice. Building on three major bodies of literature (e-justice; digitalization and organization studies; and design research), Designing for Digital Justice takes a nuanced approach to account for human and more-than-human agencies.
Using a qualitative approach, I have studied in depth the digitalization of Chilean courts during the pandemic, specifically between April 2020 and September 2022. Leveraging a comprehensive source of primary and secondary data, I traced the genealogy of the novel materializations of court practices structured by the possibilities offered by digital technologies. In five case studies, I show in detail how the courts came to 1) work remotely, 2) host hearings via videoconference, 3) engage with users via social media (i.e., Facebook and Chat Messenger), 4) broadcast a show with judges answering questions from users via Facebook Live, and 5) record, stream, and upload judicial hearings to YouTube to fulfil the publicity requirement of criminal hearings. The digitalization of courts during the pandemic is characterized by a suspended normativity, which makes innovation possible yet presents risks. While digital technologies enabled the judiciary to provide services continuously, they also created the risk of displacing traditional judicial and legal regulation.
Contributing to liminal innovation and digitalization research, Designing for Digital Justice theorizes four phases: 1) the pre-digitalization phase resulting in the development of regulation, 2) the hotspot of digitalization resulting in the extension of regulation, 3) the digital innovation redeveloping regulation (moving to a new, preliminary phase), and 4) the permanence of temporal practices displacing regulation. Contributing to design research, Designing for Digital Justice provides new possibilities for innovation in the courts, focusing on different levels to better address tensions generated by digitalization. Fellow researchers will find in these pages a sound theoretical advancement at the intersection of digitalization and justice, with novel methodological references. Practitioners will benefit from the actionable governance framework, the Designing for Digital Justice Model, which provides three fields of possibilities for action to design better justice systems. Only by taking digital, legal, and social factors into account can we design better systems that promote access to justice, the rule of law, and, ultimately, social peace.
The Security Operations Center (SOC) is a specialized unit responsible for managing security within enterprises. To aid in its responsibilities, the SOC relies heavily on a Security Information and Event Management (SIEM) system that functions as a centralized repository for all security-related data, providing a comprehensive view of the organization's security posture. Due to their ability to offer such insights, SIEMs are considered indispensable tools facilitating SOC functions such as monitoring, threat detection, and incident response.
Despite advancements in big data architectures and analytics, most SIEMs fall short of keeping pace. Architecturally, they function merely as log search engines, lacking the support for distributed large-scale analytics. Analytically, they rely on rule-based correlation, neglecting the adoption of more advanced data science and machine learning techniques.
This thesis first proposes a blueprint for next-generation SIEM systems that emphasize distributed processing and multi-layered storage to enable data mining at a big data scale. Next, building on this architectural support, it introduces two data mining approaches for advanced threat detection as part of SOC operations.
The first is a novel graph mining technique that formulates threat detection within the SIEM system as a large-scale graph mining and inference problem, built on the principles of guilt-by-association and exempt-by-reputation. The approach entails the construction of a Heterogeneous Information Network (HIN) that models shared characteristics and associations among entities extracted from SIEM-related events and logs. A novel graph-based inference algorithm then infers a node's maliciousness score from its associations with other entities in the HIN. The second is an innovative outlier detection technique that imitates a SOC analyst's reasoning process to find anomalies. The approach emphasizes explainability and simplicity, achieved by combining the output of simple context-aware univariate submodels that each calculate an outlier score per entry.
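A minimal sketch of the guilt-by-association idea (my own illustration, not the thesis's algorithm or its HIN schema): each node's maliciousness score is repeatedly blended with the mean score of its neighbours, so suspicion flows along shared associations. The entities and priors below are invented.

```python
def propagate_maliciousness(edges, priors, rounds=20, damping=0.5):
    """Blend each node's prior maliciousness with the mean score of its
    neighbours; nodes without a prior start at the neutral score 0.5."""
    nodes = set(priors)
    for a, b in edges:
        nodes.update((a, b))
    neigh = {n: [] for n in nodes}
    for a, b in edges:          # undirected association graph
        neigh[a].append(b)
        neigh[b].append(a)
    scores = {n: priors.get(n, 0.5) for n in nodes}
    for _ in range(rounds):
        new = {}
        for n in nodes:
            nb = neigh[n]
            nb_mean = sum(scores[m] for m in nb) / len(nb) if nb else scores[n]
            new[n] = damping * priors.get(n, 0.5) + (1 - damping) * nb_mean
        scores = new
    return scores

# Host H shares an IP with a known-bad domain D and an unlabelled mailserver M:
# H's score rises above the neutral 0.5 through its association with D.
scores = propagate_maliciousness([("H", "D"), ("H", "M")], {"D": 1.0})
```

The damping term keeps labelled nodes anchored to their priors while still letting evidence spread, which is the essence of guilt-by-association inference on an association graph.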
Both approaches were tested in academic and real-world settings, demonstrating high performance compared to other algorithms as well as practicality when deployed alongside a large enterprise's SIEM system.
This thesis establishes the foundation for next-generation SIEM systems that can enhance today's SOCs and facilitate the transition from human-centric to data-driven security operations.
Throughout the last ~3 million years, the Earth's climate system was characterised by cycles of glacial and interglacial periods. The current warm period, the Holocene, is comparably stable and stands out from this long-term cyclicality. Since the industrial revolution, however, the climate has been increasingly affected by a human-induced increase in greenhouse gas concentrations. While instrumental observations are used to describe changes over the past ~200 years, indirect observations via proxy data are the main source of information beyond this instrumental era. These data are indicators of past climatic conditions, stored in palaeoclimate archives around the Earth. The proxy signal is, however, also affected by processes independent of the prevailing climatic conditions. In particular, for sedimentary archives such as marine sediments and polar ice sheets, material may be redistributed during or after the initial deposition and subsequent formation of the archive. This leads to noise in the records, challenging reliable reconstructions on local or short time scales. This dissertation characterises the initial deposition of the climatic signal and quantifies the resulting archive-internal heterogeneity and its influence on the observed proxy signal, in order to improve the representativity and interpretation of climate reconstructions from marine sediments and ice cores.
To this end, the horizontal and vertical variation in the radiocarbon content of a box core from the South China Sea is investigated. The three-dimensional resolution is used to quantify the true uncertainty in radiocarbon age estimates from planktonic foraminifera with an extensive sampling scheme, including different sample volumes and replicated measurements of batches of small and large numbers of specimens. An assessment of the variability stemming from sediment mixing by benthic organisms reveals strong internal heterogeneity. Hence, sediment mixing leads to substantial time uncertainty in proxy-based reconstructions, with error terms two to five times larger than previously assumed.
A second three-dimensional analysis, of the upper snowpack, provides insights into the heterogeneous signal deposition and imprint in snow and firn. A new study design combining a structure-from-motion photogrammetry approach with two-dimensional isotopic data is applied at a study site in the accumulation zone of the Greenland Ice Sheet. The photogrammetry method reveals the intermittent character of snowfall and a layer-wise snow deposition, with substantial contributions of wind-driven erosion and redistribution to the final, spatially variable accumulation, and illustrates the evolution of stratigraphic noise at the surface. The isotopic data show the preservation of stratigraphic noise within the upper firn column, leading to a spatially variable climate signal imprint and heterogeneous layer thicknesses. Additional post-depositional modifications due to snow-air exchange are also investigated, but without a conclusive quantification of their contribution to the final isotopic signature.
Finally, this characterisation and quantification of the complex signal formation in marine sediments and polar ice contributes to a better understanding of the signal content of proxy data, which is needed to assess the natural climate variability during the Holocene.
In late summer, migratory bats of the temperate zone face the challenge of accomplishing two energy-demanding tasks almost at the same time: migration and mating. Both require information and involve search efforts, such as localizing prey or finding potential mates. In non-migrating bat species, playback studies have shown that listening to vocalizations of other bats, both con- and heterospecifics, may help a recipient bat find foraging patches and mating sites. However, we are still unaware of the degree to which migrating bats depend on con- or heterospecific vocalizations for identifying potential feeding or mating opportunities during nightly transit flights. Here, we investigated the vocal responses of Nathusius' pipistrelle bats, Pipistrellus nathusii, to simulated feeding and courtship aggregations at a coastal migration corridor. We presented migrating bats with either feeding buzzes or courtship calls of their own or of a heterospecific migratory species, the common noctule, Nyctalus noctula. We expected that during migratory transit flights, simulated feeding opportunities would be particularly attractive to bats, as would simulated mating opportunities, which may indicate suitable roosts for a stopover. However, we found that, compared to the natural silence of both pre- and post-playback phases, call activity did not change during the playback of conspecific feeding sounds, whereas P. nathusii echolocation call activity increased during simulated feeding of N. noctula. In contrast, the call activity of P. nathusii decreased during the playback of conspecific courtship calls, while no response was detected when heterospecific call types were broadcast. Our results suggest that while on migratory transits, P. nathusii circumnavigate conspecific mating aggregations, possibly to save time or to reduce the risks associated with social interactions where aggression due to territoriality might be expected.
This avoidance behavior could be a result of optimization strategies by P. nathusii when performing long-distance migratory flights, and it could also explain the lack of a response to simulated conspecific feeding. However, the observed increase in activity in response to simulated feeding of N. noctula suggests that P. nathusii individuals may eavesdrop on other aerial-hawking insectivorous species during migration, especially if these occupy a slightly different foraging niche.
Sulfur is an important element that is incorporated into many biomolecules in humans. The incorporation and transfer of sulfur into biomolecules is facilitated by a series of different sulfurtransferases. Among these is the human mercaptopyruvate sulfurtransferase (MPST), also designated tRNA thiouridine modification protein (TUM1). The human TUM1 protein has been implicated in a wide range of physiological processes in the cell, including but not limited to molybdenum cofactor (Moco) biosynthesis, cytosolic tRNA thiolation, and the generation of H2S as a signaling molecule in both mitochondria and the cytosol. Previous interaction studies showed that TUM1 interacts with the L-cysteine desulfurase NFS1 and the Molybdenum cofactor biosynthesis protein 3 (MOCS3). Here, we examine the roles of TUM1 in human cells using CRISPR/Cas9 genetically modified human embryonic kidney cells. We show that TUM1 is involved in sulfur transfer for molybdenum cofactor synthesis and tRNA thiomodification, as demonstrated by spectrophotometric measurement of sulfite oxidase activity and liquid chromatography quantification of sulfur-modified tRNA levels. Further, we show that TUM1 plays a role in hydrogen sulfide production and cellular bioenergetics.
Digitalization, as well as sustainability, has gained increased relevance and attracted significant attention in research and practice. However, published research examining digitalization in the retail sector considers neither the acceptance of related innovations nor their impact on sustainability. Therefore, this article critically analyzes customers' acceptance of digital technologies in fashion stores as well as their impact on sustainability in the textile industry. A comprehensive analysis of the literature and the current state of research provides the basis of this paper. Theoretical models such as the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology 2 (UTAUT 2) enable the evaluation of expectations and acceptance, as well as the assessment of possible inhibitory factors, for the subsequent descriptive and statistical examination of the acceptance of digital technologies in fashion stores. The subject was examined quantitatively. The key findings show that customers do accept digital technologies in fashion stores. The final part of this contribution describes the innovative Digitalization 4 Sustainability Framework, which shows that digital technologies at the point of sale (PoS) in fashion stores could have a positive impact on sustainability. Overall, this paper shows that it is particularly important for fashion stores to concentrate on their individual strengths and customer needs and to point toward a more sustainable path by using digital technologies, in order to create added value for customers and to set themselves apart from the competition while designing a more sustainable future. Moreover, fashion stores should make it a point of honor to harness the power of digitalization for the sake of sustainability and economic value creation.
Due to anthropogenic greenhouse gas emissions, Earth's average surface temperature is steadily increasing. As a consequence, many weather extremes are likely to become more frequent and intense. This poses a threat to natural and human systems, with local impacts capable of destroying exposed assets and infrastructure and disrupting economic and societal activity. Yet these effects are not confined to the directly affected regions, as they can trigger indirect economic repercussions through loss propagation along supply chains. As a result, local extremes yield a potentially global economic response. To build economic resilience and design effective adaptation measures that mitigate the adverse socio-economic impacts of ongoing climate change, it is crucial to gain a comprehensive understanding of these indirect impacts and the underlying economic mechanisms.
Presenting six articles in this thesis, I contribute towards this understanding. To this end, I expand on local impacts under current and future climate, the resulting global economic response, as well as the methods and tools to analyze this response.
Starting with a traditional assessment of weather extremes under climate change, the first article investigates extreme snowfall in the Northern Hemisphere until the end of the century. Analyzing an ensemble of global climate model projections reveals an increase in the most extreme snowfall, while mean snowfall decreases.
Assessing repercussions beyond local impacts, I employ numerical simulations with the agent-based shock propagation model Acclimate to compute the indirect economic effects of weather extremes. This model is used in conjunction with the recently emerged storyline framework, which analyzes the impacts of a particular reference extreme event and compares them to impacts in plausible counterfactual scenarios under various climate or socio-economic conditions. Using this approach, I introduce three primary storylines that shed light on the complex mechanisms underlying economic loss propagation.
In the second and third articles of this thesis, I analyze storylines for the historical Hurricanes Sandy (2012) and Harvey (2017) in the USA. For this, I first estimate local economic output losses and then simulate the resulting global economic response with Acclimate. The storyline for Hurricane Sandy focuses on global consumption price anomalies and the resulting changes in consumption. I find that the local economic disruption leads to a global wave-like economic price ripple, with upstream effects propagating in the supplier direction and downstream effects in the buyer direction. Initially, an upstream demand reduction causes consumption price decreases, followed by a downstream supply shortage and increasing prices, before the anomalies decay in a normalization phase. A dominant upstream or downstream effect leads to net consumption gains or losses for a region, respectively. Moreover, I demonstrate that a longer direct economic shock intensifies the downstream effect for many regions, leading to an overall consumption loss.
The third article of my thesis builds upon the developed loss estimation method by incorporating projections to future global warming levels. I use these projections to explore how the global production response to Hurricane Harvey would change under further increased global warming. The results show that, while the USA is able to nationally offset direct losses in the reference configuration, other countries have to compensate for increasing shares of counterfactual future losses. This compensation is mainly achieved by large exporting countries, but gradually shifts towards smaller regions. These findings not only highlight the economy’s ability to flexibly mitigate disaster losses to a certain extent, but also reveal the vulnerability and economic disadvantage of regions that are exposed to extreme weather events.
The storyline in the fourth article of my thesis investigates the interaction between global economic stress and the propagation of losses from weather extremes. I examine indirect impacts of weather extremes — tropical cyclones, heat stress, and river floods — worldwide under two different economic conditions: an unstressed economy and a globally stressed economy, as seen during the Covid-19 pandemic. I demonstrate that the adverse effects of weather extremes on global consumption are strongly amplified when the economy is under stress. Specifically, consumption losses in the USA and China double and triple, respectively, due to the global economy’s decreased capacity for disaster loss compensation. An aggravated scarcity intensifies the price response, causing consumption losses to increase.
Advancing the methods and tools used here, the final two articles in my thesis extend the agent-based model Acclimate and formalize the storyline approach. With the model extension described in the fifth article, regional consumers make rational choices about the goods they buy, maximizing their utility under a budget constraint. In an out-of-equilibrium economy, these rational consumers are shown to temporarily increase consumption of certain goods in spite of rising prices.
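The abstract does not specify Acclimate's actual consumer formulation; as a minimal sketch of rational consumption under a budget constraint, the following assumes Cobb-Douglas preferences (a common textbook choice, not necessarily the model's utility function; all names and numbers are illustrative):

```python
import numpy as np

def optimal_basket(prices, prefs, budget):
    """Maximize sum(a_i * ln(x_i)) subject to sum(p_i * x_i) = budget.
    With Cobb-Douglas preferences the closed-form solution is to spend
    the budget share a_i / sum(a) on good i."""
    prices = np.asarray(prices, dtype=float)
    prefs = np.asarray(prefs, dtype=float)
    shares = prefs / prefs.sum()
    return shares * budget / prices

# A price shock to good 0: the quantity bought falls, while the
# expenditure share on each good stays fixed.
baseline = optimal_basket([1.0, 2.0], [0.5, 0.5], 100.0)  # [50.0, 25.0]
shocked = optimal_basket([1.5, 2.0], [0.5, 0.5], 100.0)
```

Under this stylized utility function a consumer never increases consumption of a good whose price rises; the out-of-equilibrium behavior described in the fifth article therefore requires the richer dynamics of the full model.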
The sixth article of my thesis proposes a formalization of the storyline framework, drawing on multiple studies including storylines presented in this thesis. The proposed guideline defines eight central elements that can be used to construct a storyline.
Overall, this thesis contributes towards a better understanding of the economic repercussions of weather extremes. It achieves this by providing assessments of local direct impacts, highlighting mechanisms and impacts of loss propagation, and advancing the methods and tools used.
The CH2Cl2/MeOH (1:1) extract of Zanthoxylum holstzianum stem bark showed good antiplasmodial activity (IC50 2.5 ± 0.3 and 2.6 ± 0.3 µg/mL against the W2 and D6 strains of Plasmodium falciparum, respectively). From the extract, five benzophenanthridine alkaloids [8-acetonyldihydrochelerythrine (1), nitidine (2), dihydrochelerythrine (3), norchelerythrine (5), arnottianamide (8)], a 2-quinolone alkaloid [N-methylflindersine (4)], a lignan [4,4′-dihydroxy-3,3′-dimethoxylignan-9,9′-diyl diacetate (7)], and a dimer of a benzophenanthridine and a 2-quinoline [holstzianoquinoline (6)] were isolated. The CH2Cl2/MeOH (1:1) extract of the root bark afforded 1, 3-6, 8, chelerythridimerine (9), and 9-demethyloxychelerythrine (10). Holstzianoquinoline (6) is new and is only the second reported dimer of a benzophenanthridine and a 2-quinoline linked by a C-C bond. The compounds were identified on the basis of spectroscopic evidence. Among the five compounds (1-5) tested against two strains of P. falciparum, nitidine (IC50 0.11 ± 0.01 µg/mL against the W2 and D6 strains) and norchelerythrine (IC50 0.15 ± 0.01 µg/mL against the D6 strain) were the most active.
Droughts in São Paulo
(2023)
The literature suggests that droughts and societies shape each other, so risk reduction and water adaptation require a better understanding of their coevolution. Although the São Paulo Metropolitan Region drew attention because of the 2013-2015 drought, this was not its first such event. This paper revisits that event and the 1985-1986 drought to compare the evolution of drought risk management. Documents and hydrological records are analyzed to evaluate the hazard intensity, preparedness, exposure, vulnerability, responses, and mitigation aspects of both events. Although the hazard intensity and exposure of the latter event were larger than those of the former, the delay in policy implementation and the dependency of service areas on a single reservoir exposed the region to higher vulnerability. In addition to the structural and non-structural tools implemented just after the events, this work raises the possibility of rainwater reuse to reduce stress on the reservoirs.
Its properties make copper one of the world’s most important functional metals. Numerous megatrends are increasing the demand for copper. This requires the prospection and exploration of new deposits, as well as the monitoring of copper quality in the various production steps. A promising technique for performing these tasks is laser-induced breakdown spectroscopy (LIBS). Among its distinctive features is the ability to measure on site without sample collection or preparation. In this work, copper-bearing minerals from two different deposits are studied. The first set of field samples comes from a volcanogenic massive sulfide (VMS) deposit, the second from a stratiform sedimentary copper (SSC) deposit. Different approaches are used to analyze the data. First, univariate regression (UVR) is used. However, due to the strong influence of matrix effects, this is not suitable for the quantitative analysis of copper grades. Second, the multivariate method of partial least squares regression (PLSR) is used, which is more suitable for quantification. In addition, the effects of the surrounding matrices on the LIBS data are characterized by principal component analysis (PCA), alternative regression methods to PLSR are tested, and the PLSR calibration is validated using field samples.
Keep on scrolling?
(2023)
Smartphones are an integral part of daily life for many people worldwide. However, concerns have been raised that long usage times and the fragmentation of daily life through smartphone usage are detrimental to well-being. This preregistered study assesses (1) whether differences in smartphone usage behaviors between individuals predict differences in a variety of well-being measures (between-person effects) and (2) whether differences in smartphone usage behaviors between situations predict whether an individual is feeling better or worse (within-person effects). In addition to total usage time, several indicators capturing the fragmentation of usage/nonusage time were developed. The study combines objectively measured smartphone usage with self-reports of well-being in surveys (N = 236) and an experience sampling period (N = 378, n = 5775 datapoints). To ensure the robustness of the results, we replicated our analyses in a second measurement period (surveys: N = 305; experience sampling: N = 534, n = 7287 datapoints) and considered the pattern of effects across different operational definitions and constructs. Results show that individuals who use their smartphones more report slightly lower well-being (a between-person effect), but no evidence emerged for within-person effects of total usage time. With respect to fragmentation, we found no robust association with well-being.
This study utilizes cross-country survey data to analyze differences in attitudes toward cryptocurrency as an alternative to traditional money issued by a central bank. In particular, we investigate women’s general attitude toward cryptocurrency systems. Results suggest that women invest less in cryptocurrency, show less interest in future cryptocurrency investment, and see less economic potential in these systems than men do. Further evidence shows that these attitudes are directly connected with lower cryptocurrency literacy. These findings support theories of gender differences in investment behavior. We contribute to the existing literature by conducting a cross-country survey on cryptocurrency attitudes in Europe and Asia, showing that this gender effect is robust across cultures.
Extreme flooding displaces an average of 12 million people every year. Marginalized populations in low-income countries are at particularly high risk, but industrialized countries are also susceptible to displacement and its inherent societal impacts. The risk of being displaced results from a complex interaction of flood hazard, population exposed in the floodplains, and socio-economic vulnerability. Ongoing global warming changes the intensity, frequency, and duration of flood hazards, undermining existing protection measures. Meanwhile, settlements in attractive yet hazardous flood-prone areas have led to a higher degree of population exposure. Finally, the vulnerability to displacement is altered by demographic and social change, shifting economic power, urbanization, and technological development. These risk components have been investigated intensively in the context of loss of life and economic damage; however, little is known about the risk of displacement under global change.
This thesis aims to improve our understanding of flood-induced displacement risk under global climate change and socio-economic change. This objective is tackled by addressing the following three research questions. First, by focusing on the choice of input data, how well can a global flood modeling chain reproduce flood hazards of historic events that lead to displacement? Second, what are the socio-economic characteristics that shape the vulnerability to displacement? Finally, to what degree has climate change potentially contributed to recent flood-induced displacement events?
To answer the first question, a global flood modeling chain is evaluated by comparing simulated flood extent with satellite-derived inundation information for eight major flood events. A focus is set on the sensitivity to different combinations of the underlying climate reanalysis datasets and global hydrological models, which serve as inputs to the global hydraulic model. An evaluation based on performance scores shows that simulated flood extent is mostly overestimated when flood protection is not considered, and that it depends on the choice of global hydrological model only for a few events. Results are more sensitive to the underlying climate forcing, with two datasets differing substantially from a third one. In contrast, the incorporation of flood protection standards results in an underestimation of flood extent, pointing to potential deficiencies in the protection level estimates or the flood frequency distribution within the modeling chain.
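The abstract does not name the specific performance scores used; contingency-table metrics such as the hit rate, false-alarm ratio, and critical success index are standard choices for comparing a simulated inundation mask against a satellite-derived one, and can be sketched as follows (the example masks are invented):

```python
import numpy as np

def flood_scores(simulated, observed):
    """Contingency-based skill scores for two boolean inundation masks.
    An overestimated flood extent shows up as a high false-alarm ratio;
    the critical success index (CSI) penalizes misses and false alarms."""
    sim = np.asarray(simulated, dtype=bool)
    obs = np.asarray(observed, dtype=bool)
    hits = np.sum(sim & obs)          # flooded in both
    misses = np.sum(~sim & obs)       # observed but not simulated
    false_alarms = np.sum(sim & ~obs) # simulated but not observed
    return {
        "hit_rate": hits / (hits + misses),
        "false_alarm_ratio": false_alarms / (hits + false_alarms),
        "csi": hits / (hits + misses + false_alarms),
    }

# Toy 2x3 masks: 2 hits, 1 miss, 1 false alarm.
sim = np.array([[1, 1, 0], [1, 0, 0]])
obs = np.array([[1, 0, 0], [1, 1, 0]])
scores = flood_scores(sim, obs)
```

In practice, such scores are computed per event over the satellite footprint, which is how the sensitivity to the climate forcing and hydrological model choices can be ranked.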
Following the analysis of a physical flood hazard model, the socio-economic drivers of vulnerability to displacement are investigated in the next step. For this purpose, a satellite-based, global collection of flood footprints is linked with two disaster inventories to match societal impacts with the corresponding flood hazard. For each event, the number of people affected, assets, and critical infrastructure, as well as socio-economic indicators, are computed. The resulting datasets are made publicly available and contain 335 displacement events and 695 mortality/damage events. Based on this new data product, event-specific displacement vulnerabilities are determined and multiple (national) dependencies on the socio-economic predictors are derived. The results suggest that economic prosperity only partially shapes vulnerability to displacement; urbanization, infant mortality rate, the share of elderly, population density, and critical infrastructure exhibit a stronger functional relationship, suggesting that higher levels of development are generally associated with lower vulnerability.
Besides examining the contextual drivers of vulnerability, the role of climate change in human displacement is also explored. An impact attribution approach is applied to the case of Cyclone Idai and the associated extreme coastal flooding in Mozambique. A combination of coastal flood modeling and satellite imagery is used to construct factual and counterfactual flood events. This storyline-type attribution method allows investigating the isolated and combined effects of sea level rise and the intensification of cyclone wind speeds on coastal flooding. The results suggest that displacement risk increased by 3.1 to 3.5% due to the total effect of climate change on coastal flooding, with increasing wind speeds being the dominant factor.
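The attributable change reported above reduces to a relative comparison between the factual event and its counterfactual; a minimal sketch with hypothetical population numbers (the thesis's actual figures are not given here):

```python
def attribution_change(factual_risk, counterfactual_risk):
    """Relative change (%) in displacement risk attributable to climate
    change: the factual event is compared against a counterfactual
    without sea level rise and/or wind intensification."""
    return (factual_risk / counterfactual_risk - 1.0) * 100.0

# Hypothetical: 103,300 people at risk in the factual flood versus
# 100,000 in the counterfactual gives a +3.3% attributable increase.
delta = attribution_change(103_300, 100_000)
```

The same comparison can be run separately for the sea-level-rise-only and wind-only counterfactuals to isolate each driver's contribution.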
In conclusion, this thesis highlights the potential and challenges of modeling flood-induced displacement risk. While this work explores the sensitivity of global flood modeling to the choice of input data, new questions arise on how to effectively improve the reproduction of flood return periods and the representation of protection levels. It is also demonstrated that disentangling displacement vulnerabilities is feasible, with the results providing useful information for risk assessments, effective humanitarian aid, and disaster relief. The impact attribution study is a first step in assessing the effects of global warming on displacement risk, leading to new research challenges, e.g., coupling fluvial and coastal flood models or attributing other hazard types and displacement events. This thesis is one of the first to address flood-induced displacement risk from a global perspective. The findings motivate further development of the global flood modeling chain to improve our understanding of displacement vulnerability and the effects of global warming.