Genome-scale metabolic models are mathematical representations of all known reactions occurring in a cell. Combined with constraints based on physiological measurements, these models have been used to accurately predict metabolic fluxes and the effects of perturbations (e.g. knock-outs) and to inform metabolic engineering strategies. Recently, protein-constrained models have been shown to increase predictive potential (especially in overflow metabolism) while alleviating the need to measure nutrient uptake rates. The resulting modelling frameworks quantify the upkeep cost of a given metabolic flux as the minimum amount of enzyme required for catalysis. These improvements are based on the use of in vitro turnover numbers or in vivo apparent catalytic rates of enzymes for model parameterization. In this thesis, several tools for the estimation and refinement of these parameters based on in vivo proteomics data of Escherichia coli, Saccharomyces cerevisiae, and Chlamydomonas reinhardtii were developed and applied. The differences between in vitro and in vivo catalytic rate measures for the three microorganisms were systematically analyzed. The results for the facultatively heterotrophic microalga C. reinhardtii considerably expanded the apparent catalytic rate estimates available for photosynthetic organisms. Our general finding pointed to a global reduction of enzyme efficiency in heterotrophy compared to other growth scenarios. Independent of the modelled organism, in vivo estimates were shown to improve the accuracy of protein abundance predictions compared to in vitro turnover numbers. To further improve the protein abundance predictions, machine learning models were trained that integrate features derived from protein-constrained modelling and codon usage. Combining the two types of features outperformed single-feature models and yielded good prediction results without relying on experimental transcriptomic data.
The presented work reports valuable advances in the prediction of enzyme allocation in unseen scenarios using protein-constrained metabolic models. It marks the first successful application of this modelling framework in the biotechnologically important taxon of green microalgae, substantially increasing our knowledge of the enzyme catalytic landscape of phototrophic microorganisms.
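The enzyme-cost idea behind protein-constrained models can be illustrated with a minimal sketch. In a linear pathway at steady state, each unit of flux through an enzyme demands enzyme mass proportional to MW/kcat, and the summed demand cannot exceed a fixed enzyme pool; the pathway, kcat, molecular-weight, and pool values below are hypothetical and not taken from the thesis.

```python
# Toy protein-constrained flux bound (hypothetical numbers): in a linear
# pathway, flux v through enzyme i requires v * MW_i / kcat_i grams of
# enzyme per gDW, and the total demand may not exceed the enzyme pool.

def max_flux(kcats, mws, pool, uptake_max):
    """Maximum steady-state pathway flux under an enzyme-pool
    constraint: sum_i v * MW_i / kcat_i <= pool."""
    # enzyme mass (g per gDW) needed to carry one unit of flux
    cost = sum(mw / kcat for kcat, mw in zip(kcats, mws))
    # flux is capped by substrate uptake or by the enzyme pool,
    # whichever is more restrictive
    return min(uptake_max, pool / cost)

# kcat in 1/h, MW in g/mmol, pool in g enzyme per gDW, uptake in mmol/gDW/h
v = max_flux(kcats=[7200.0, 3600.0], mws=[50.0, 90.0],
             pool=0.2, uptake_max=10.0)
```

With these illustrative numbers the enzyme pool, not the uptake bound, limits the flux, which is the mechanism protein-constrained models exploit to predict phenomena such as overflow metabolism without measured uptake rates.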
Floods continue to be the leading cause of economic damages and fatalities among natural disasters worldwide. As future climate and exposure changes are projected to intensify these damages, the need for more accurate and scalable flood risk models is rising. Over the past decade, macro-scale flood risk models have evolved from initial proof-of-concepts to indispensable tools for decision-making at the global, national and, increasingly, local level. This progress has been propelled by the advent of high-performance computing and the availability of global, space-based datasets. However, despite such advancements, these models are rarely validated and consistently fall short of the accuracy achieved by high-resolution local models. While capabilities have improved, significant gaps persist in understanding the behaviours of such macro-scale models, particularly their tendency to overestimate risk. This dissertation aims to address such gaps by examining the scale transfers inherent in the construction and application of coarse macro-scale models. To achieve this, four studies are presented that, collectively, address exposure, hazard, and vulnerability components of risk affected by upscaling or downscaling.
The first study focuses on a type of downscaling where coarse flood hazard inundation grids are enhanced to a finer resolution. While such inundation downscaling has been employed in numerous global model chains, ours is the first study to focus specifically on this component, providing an evaluation of the state of the art and a novel algorithm. Findings demonstrate that our novel algorithm is eight times faster than existing methods, offers a slight improvement in accuracy, and generates more physically coherent flood maps in hydraulically challenging regions. When applied to a case study, the algorithm generated a 4m resolution inundation map from 30m hydrodynamic model outputs in 33 s, a 60-fold improvement in runtime with a 25% increase in RMSE compared with direct hydrodynamic modelling. All evaluated downscaling algorithms yielded better accuracy than the coarse hydrodynamic model when compared to observations, demonstrating limits of coarse hydrodynamic models similar to those reported by others. The substitution of downscaling into flood risk model chains, in place of high-resolution modelling, can drastically improve the lead time of impact-based forecasts and the efficiency of hazard map production. With downscaling, local regions could obtain high-resolution inundation maps by post-processing a global model without the need for expensive modelling or expertise.
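One common inundation-downscaling idea can be sketched in a few lines: project the coarse water surface elevation (WSE) onto fine-resolution terrain and keep cells where the water surface sits above the ground. This is a generic sketch of that idea, not the algorithm developed in the study, and the grids below are hypothetical.

```python
import numpy as np

def downscale_wse(coarse_wse, fine_dem, factor):
    """Nearest-neighbour downscaling sketch: replicate each coarse water
    surface elevation cell over a factor x factor block, then subtract
    the fine terrain to recover depths (assumes WSE values are finite)."""
    wse_fine = np.kron(coarse_wse, np.ones((factor, factor)))
    depth = wse_fine - fine_dem
    # dry wherever the terrain rises above the projected water surface
    return np.where(depth > 0.0, depth, 0.0)

coarse = np.array([[2.0]])                 # one coarse cell, WSE = 2 m
dem = np.array([[1.0, 3.0], [0.5, 2.5]])   # four fine terrain cells
depths = downscale_wse(coarse, dem, factor=2)
```

Even this naive version shows why downscaling is attractive: the high cells in the fine DEM are correctly excluded from the flood extent that the coarse grid alone would smear over the whole block.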
The second study focuses on hazard aggregation and its implications for exposure, investigating implicit aggregations commonly used to intersect hazard grids with coarse exposure models. This research introduces a novel spatial classification framework to understand the effects of rescaling flood hazard grids to a coarser resolution. The study derives closed-form analytical solutions for the location and direction of bias from flood grid aggregation, showing that bias will always be present in regions near the edge of inundation. For example, inundation area will be positively biased when water depth grids are aggregated, while volume will be negatively biased when water elevation grids are aggregated. Extending the analysis to effects of hazard aggregation on building exposure, this study shows that exposure in regions at the edge of inundation is an order of magnitude more sensitive to aggregation errors than hazard alone. Among the two aggregation routines considered, averaging water surface elevation grids better preserved flood depths at buildings than averaging water depth grids. The study provides the first mathematical proof and generalizable treatment of flood hazard grid aggregation, demonstrating important mechanisms to help flood risk modellers understand and control model behaviour.
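The positive area bias from aggregating water depth grids is easy to reproduce at the edge of inundation. The toy block below (hypothetical values, not data from the study) covers four fine cells of which half are wet; averaging the depths makes the entire coarse cell count as wet.

```python
import numpy as np

# One coarse block covering four fine cells: half wet at 1 m depth.
fine_depth = np.array([0.0, 0.0, 1.0, 1.0])
cell_area = 1.0  # arbitrary fine-cell area unit

fine_area = np.count_nonzero(fine_depth > 0) * cell_area        # 2.0
coarse_depth = fine_depth.mean()                                # 0.5 m
# After averaging, the whole block is (shallowly) wet:
coarse_area = float(coarse_depth > 0) * fine_depth.size * cell_area
```

The inundated area doubles under aggregation while the water volume is conserved, which is exactly the edge-of-inundation bias the study formalizes; fully wet or fully dry blocks are unaffected.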
The final two studies focus on the aggregation of vulnerability models or flood damage functions, investigating the practice of applying per-asset functions to aggregate exposure models. Both studies extend Jensen’s inequality, a well-known 1906 mathematical proof, to demonstrate how the aggregation of flood damage functions leads to bias. Applying Jensen’s proof in this new context, results show that typically concave flood damage functions will introduce a positive bias (overestimation) when aggregated. This behaviour was further investigated with a simulation experiment including 2 million buildings in Germany, four global flood hazard simulations and three aggregation scenarios. The results show that positive aggregation bias is not distributed evenly in space, meaning some regions identified as “hot spots of risk” in assessments may in fact just be hot spots of aggregation bias. This study provides the first application of Jensen’s inequality to explain the overestimates reported elsewhere and offers advice for modellers on minimizing such artifacts.
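The Jensen's-inequality argument can be demonstrated numerically: for a concave damage function f, f(mean depth) >= mean of f(depth), so evaluating the function at an aggregated depth overestimates mean damage. The square-root curve and building depths below are illustrative assumptions, not the functions or data used in the studies.

```python
import math

def damage(depth):
    # A concave stage-damage curve (illustrative only): damage fraction
    # grows with the square root of depth, capped at total loss.
    return min(1.0, math.sqrt(max(depth, 0.0)) / 2.0)

depths = [0.25, 2.25]  # water depths (m) at two hypothetical buildings

# Per-asset application: evaluate the curve at each building, then average.
per_asset = sum(damage(d) for d in depths) / len(depths)   # 0.5
# Aggregated application: average the depths, then evaluate the curve once.
aggregated = damage(sum(depths) / len(depths))             # ~0.559
```

Because the curve is concave, the aggregated estimate exceeds the per-asset mean, the positive bias that Jensen's inequality guarantees; the gap vanishes only where all assets in a cell share the same depth or the curve is locally linear.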
In total, this dissertation investigates the complex ways aggregation and disaggregation influence the behaviour of risk models, focusing on the scale transfers underpinning macro-scale flood risk assessments. Extending a key finding of the flood hazard literature to the broader context of flood risk, this dissertation concludes that, all else equal, coarse models overestimate risk. This dissertation goes beyond previous studies by providing mathematical proofs for how and where such bias emerges in aggregation routines, offering a mechanistic explanation for coarse model overestimates. It shows that this bias is spatially heterogeneous, necessitating a deep understanding of how rescaling may bias models in order to effectively reduce or communicate uncertainties. Further, the dissertation offers specific recommendations to help modellers minimize scale transfers in problematic regions. In conclusion, I argue that such aggregation errors are epistemic, stemming from choices in model structure, and therefore hold greater potential and impetus for study and mitigation. This deeper understanding of uncertainties is essential for improving macro-scale flood risk models and their effectiveness in equitable, holistic, and sustainable flood management.
Jews and Muslims have lived in the territory of modern-day Austria for centuries untold, yet often continue to be construed as the essential “other.” This essay explores a selection of sometimes divergent, sometimes convergent historical experiences amongst these two broad population groups, focusing specifically on demographic diversity, community-building, discrimination and persecution, and the post-war situation. The ultimate aim is to illuminate paradigmatically through the Austrian case study the complex multicultural mosaic of historical Central Europe, the understanding of which, we contend, sheds a critical light on the often divisive present-day debates concerning immigration and diversity in Austria and Central Europe more broadly. It furthermore opens up a hitherto understudied field of historical research, namely the entangled history of Jews and Muslims in modern Europe.
The Jewish museums established in the fin-de-siècle Habsburg Empire postulated the unity of “the Jewish people,” with custodians constructing an “us” (Jews) in distinction to the “other” (non-Jews). In the difference-oriented frenzy of the time, Jewish identity was predominantly presented as Central European, enlightened, not overly religious, and middle-class. When the Viennese Jewish Museum opened its doors in 1895, the painters Isidor Kaufmann and David Kohn created an installation called “Die Gute Stube” (The Parlor). This exhibit housed books, furniture, as well as decorative and ritual objects of the kind that were thought to be found in typical Eastern European Jewish households. However, as this article argues, this attempted visualization of the essence of Judaism and the range of Jewish life worlds promoted a paradigmatic stereotype with which Jewish museums would have to struggle for decades to come.
Even though Salonican Jews are not typically associated with the Habsburg Empire, some of them, nonetheless, lived there. This paper aims to examine the formation of these Salonican Jews’ (self-)identification by studying their social interactions with the local Viennese population, such as the Viennese Sephardi or the Greek-Orthodox communities. The change of the milieu within which they found themselves subsequently impacted their self-perception. Thus, the issue of the surrounding environment and their relations with other groups became central to their self-understanding, as will be demonstrated. By examining different aspects, like migration patterns, financial decisions and family ties, one can understand how their intersection influenced Salonican Jews’ self-identification, which, at the same time, shaped and was shaped by the surrounding milieu. Within this framework, these people perceived themselves and were perceived as Salonican, Sephardi, Jewish, and as subjects of the Emperor.
“Domestic Foreigners”
(2024)
This paper examines the relationship between the Sephardic Jewish community of Vienna and the Ottoman and Habsburg Empires in the latter half of the 19th century. The community’s legal status was transformed following the emancipation of Austrian Jews, but very few first-hand accounts of these changes exist today. The primary sources analyzed in this paper are Judezmo-language newspapers published in Vienna at that time. The paper emphasizes the historical and political contexts surrounding these sources, particularly the community’s close ties to the Ottoman and Habsburg regimes.
Shared Spaces
(2024)
Galicia was home to the largest Jewish population of the Cisleithanian part of the Habsburg Empire. After the Josephinian “German-Jewish schools” had already closed in 1806, educational patterns differed from those in Moravia and Bohemia, where Jewish children received a secular education in a more consistent “Jewish” space. In Galicia in the constitutional era (post-1867), however, with mandatory education enforced, public schools became a shared space in which Jews and (Catholic) Christians functioned together. In Galicia, most Jewish children received public education but usually constituted a religious minority in the student body. The article analyzes how the school space, calendar, and routines were adjusted to accommodate the multi-religious character of the student body.
The article analyzes the interdependences between the history of the Habsburg Empire and the names of its Jewish inhabitants. To this day, these names tell stories about this close relationship, and they remain an enduring symbol of this era. By focusing on names, this paper shows how state policies towards Jews shifted over time, and how the perspective on names and name regulations can be a tool to connect and investigate both Habsburg and Jewish studies.
This article aims to demonstrate the exceptional potential of Habsburg military records for the study of Jewish history during Europe’s Age of Revolution. We begin with the random discovery of six Jewish veterans of Freikorps Grün Loudon – a unit of mercenary freebooters – which fought for the Habsburgs during the first war against the French Republic (1792–97). A careful re-reading of the available archival evidence reveals that these men were the survivors of a much larger group numbering at least two dozen Jewish soldiers. While Jewish conscripts had been drafted into the Habsburg army since 1788, the fact that Jews could also serve – even volunteer – as professional soldiers in that period is completely new to us. When considered together, the personal circumstances and service experiences of the Jewish soldiers of Freikorps Grün Loudon enable us to make several observations about their motivation as well as their position vis-à-vis their non-Jewish comrades.
This article brings two seemingly disconnected historiographic models of periodization into conversation: Habsburg studies and Habsburg Jewish studies. It argues for an expansion of the temporal frameworks of both fields to highlight historical continuities connecting the Holy Roman and Habsburg Empires, at least from a structural perspective. These historical continuums are a useful analytical lens when applied to marginalized groups, like early modern Jews, in tandem with a central group of contemporary powerholders, such as the Habsburg nobility. Using Bohemia as a case study, this essay juxtaposes questions of transregional transfer of cultural, economic, and social capital with the challenges of Jewish marginalization and discrimination to highlight the changing yet interconnected imperial landscapes.
During the last decades, therapeutic proteins have risen to great significance in the pharmaceutical industry. As non-human proteins introduced into the human body cause a distinct immune system reaction that triggers their rapid clearance, most newly approved protein pharmaceuticals are shielded by modification with synthetic polymers to significantly improve their blood circulation time. All such clinically approved protein-polymer conjugates contain polyethylene glycol (PEG), and this conjugation is denoted as PEGylation. However, many patients develop anti-PEG antibodies, which cause a rapid clearance of PEGylated molecules upon repeated administration. Therefore, the search for alternative polymers that can replace PEG in therapeutic applications has become important. In addition, although the blood circulation time is significantly prolonged, the therapeutic activity of some conjugates is decreased compared to the unmodified protein. The reason is that these conjugates are formed by the traditional conjugation method that addresses the protein's lysine side chains. As proteins have many solvent-exposed lysines, this results in a somewhat uncontrolled attachment of polymer chains, leading to a mixture of regioisomers, with some of them eventually affecting the therapeutic performance.
This thesis investigates a novel method for ligating macromolecules in a site-specific manner using enzymatic catalysis. Sortase A, a well-studied transpeptidase able to catalyze the intermolecular ligation of two peptides, is used as the enzyme. This process is commonly referred to as sortase-mediated ligation (SML). SML constitutes an equilibrium reaction, which limits product yield. Two previously reported methods to overcome this major limitation were tested with polymers without using an excess of one reactant.
Specific C- or N-terminal peptide sequences (recognition sequence and nucleophile) as part of the protein are required for SML. The complementary peptide was located at the polymer chain end. Grafting-to was used to avoid damaging the protein during polymerization. To be able to investigate all possible combinations (protein-recognition sequence and nucleophile-protein, as well as polymer-recognition sequence and nucleophile-polymer), all necessary building blocks were synthesized. Polymerization via reversible deactivation radical polymerization (RDRP) was used to achieve a narrow molecular weight distribution of the polymers, which is required for therapeutic use.
The synthesis of the polymeric building blocks was started by synthesizing the peptide via automated solid-phase peptide synthesis (SPPS) to avoid post-polymerization attachment and to enable easy adaptation of changes in the peptide sequence. To account for the different functionalities (free N- or C-terminus) required for SML, different linker molecules between resin and peptide were used.
To facilitate purification, the chain transfer agent (CTA) for reversible addition-fragmentation chain-transfer (RAFT) polymerization was coupled to the resin-immobilized recognition sequence peptide. The acrylamide and acrylate-based monomers used in this thesis were chosen for their potential to replace PEG.
Following that, surface-initiated (SI) ATRP and RAFT polymerization were attempted, but failed. As a result, the newly developed method of xanthate-supported photo-iniferter (XPI) RAFT polymerization in solution was used successfully to obtain a library of various peptide-polymer conjugates with different chain lengths and narrow molar mass distributions.
After peptide side chain deprotection, these constructs were used first to ligate two polymers via SML, which was successful but revealed a limit in polymer chain length (max. 100 repeat units). When utilizing equimolar amounts of reactants, the use of Ni2+ ions in combination with a histidine after the recognition sequence to remove the cleaved peptide from the equilibrium maximized product formation with conversions of up to 70 %.
Finally, a model protein and a nanobody with promising properties for therapeutic use were biotechnologically modified to contain the peptide sequences required for SML. Using the model protein for C- or N-terminal SML with various polymers did not result in protein-polymer conjugates. The reason is most likely the lack of accessibility of the protein termini to the enzyme. Using the nanobody for C-terminal SML, on the other hand, was successful. However, a similar polymer chain length limit was observed as in polymer-polymer SML. Furthermore, in the case of the synthesis of protein-polymer conjugates, it was more effective to shift the SML equilibrium by using an excess of polymer than by employing the Ni2+ ion strategy.
Overall, the experimental data from this work provides a good foundation for future research in this promising field; however, more research is required to fully understand the potential and limitations of using SML for protein-polymer synthesis. In the future, the method explored in this dissertation could prove to be a very versatile pathway to obtain therapeutic protein-polymer conjugates that exhibit high activities and long blood circulation times.