Securing needs-based care for the elderly is one of the decisive tasks of our time. Germany's shortage of skilled workers and its demographic change strain the care system in several ways: in an ageing society, ever more people depend on long-term support, while low birth rates and the resulting decline in the working-age share of the population have already produced a tangible shortage of professional caregivers.
To guarantee humane care in the long term, existing resources must be deployed in a more targeted manner and additional reserves must be unlocked. Many hopes rest on technological innovation: digitalization is expected to make healthcare more efficient, for example by using artificial intelligence to simplify or even automate time-consuming processes. In the context of care, the use of robotic assistance systems is under discussion.
For this reason, the Potsdam citizens' conference "Robotik in der Altenpflege?" ("Robotics in Elderly Care?") was initiated. To shape the future of care together, 3,500 Potsdam citizens were contacted and twenty-five participants were ultimately selected. They convened in spring 2024 to discuss the responsible use of robotics in care.
The declaration presented here is the outcome of the citizens' conference and contains the participants' central positions.
The citizens' conference is part of the project E-cARE ("Ethics Guidelines for Socially Assistive Robots in Elderly Care: An Empirical-Participatory Approach"), conducted by the Junior Professorship for Medical Ethics with a Focus on Digitalization at the Faculty of Health Sciences Brandenburg, University of Potsdam.
Genome-scale metabolic models are mathematical representations of all known reactions occurring in a cell. Combined with constraints based on physiological measurements, these models have been used to accurately predict metabolic fluxes and the effects of perturbations (e.g. knock-outs) and to inform metabolic engineering strategies. Recently, protein-constrained models have been shown to increase predictive potential (especially for overflow metabolism) while alleviating the need to measure nutrient uptake rates. The resulting modelling frameworks quantify the upkeep cost of a given metabolic flux as the minimum amount of enzyme required for catalysis. These improvements are based on the use of in vitro turnover numbers or in vivo apparent catalytic rates of enzymes for model parameterization. In this thesis, several tools for the estimation and refinement of these parameters based on in vivo proteomics data of Escherichia coli, Saccharomyces cerevisiae, and Chlamydomonas reinhardtii were developed and applied. The difference between in vitro and in vivo catalytic rate measures for the three microorganisms was systematically analyzed. The results for the facultatively heterotrophic microalga C. reinhardtii considerably expanded the apparent catalytic rate estimates for photosynthetic organisms. Our general finding pointed to a global reduction of enzyme efficiency in heterotrophy compared to other growth scenarios. Independent of the modelled organism, in vivo estimates were shown to improve the accuracy of protein abundance predictions compared to in vitro turnover numbers. To further improve these predictions, machine learning models were trained that integrate features derived from protein-constrained modelling and codon usage. Combining the two types of features outperformed single-feature models and yielded good prediction results without relying on experimental transcriptomic data.
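The enzyme-cost logic that protein-constrained models add to classical flux analysis can be sketched in a few lines. This is a minimal illustration of the constraint v ≤ kcat · E, not the thesis's actual framework; all numbers are hypothetical.

```python
# Minimal sketch of the enzyme cost in protein-constrained metabolic
# models: the flux v through a reaction is capped by the catalytic rate
# (kcat) times the abundance of its enzyme, v <= kcat * E, so the
# minimum enzyme mass needed to sustain a flux is v / kcat.
# All numbers below are illustrative, not measured values.

def min_enzyme_demand(flux, kcat, mw):
    """Minimum enzyme mass (g/gDW) to carry `flux` (mmol/gDW/h),
    given `kcat` (1/h) and molar mass `mw` (g/mmol)."""
    return flux / kcat * mw

# Two hypothetical reactions with identical flux but different kcat:
fast = min_enzyme_demand(flux=10.0, kcat=360000.0, mw=50.0)  # efficient enzyme
slow = min_enzyme_demand(flux=10.0, kcat=36000.0, mw=50.0)   # 10x slower enzyme

# A lower in vivo apparent kcat implies a higher protein cost for the
# same flux -- the effect quantified here with proteomics data.
print(fast, slow)
```

This is why replacing in vitro turnover numbers with in vivo apparent rates changes predicted protein abundances: the parameter directly scales the enzyme demand attributed to each flux.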
The presented work reports valuable advances in the prediction of enzyme allocation in unseen scenarios using protein-constrained metabolic models. It marks the first successful application of this modelling framework in the biotechnologically important taxon of green microalgae, substantially increasing our knowledge of the enzyme catalytic landscape of phototrophic microorganisms.
Floods continue to be the leading cause of economic damages and fatalities among natural disasters worldwide. As future climate and exposure changes are projected to intensify these damages, the need for more accurate and scalable flood risk models is rising. Over the past decade, macro-scale flood risk models have evolved from initial proof-of-concepts to indispensable tools for decision-making at the global, national and, increasingly, the local level. This progress has been propelled by the advent of high-performance computing and the availability of global, space-based datasets. However, despite such advancements, these models are rarely validated and consistently fall short of the accuracy achieved by high-resolution local models. While capabilities have improved, significant gaps persist in understanding the behaviours of such macro-scale models, particularly their tendency to overestimate risk. This dissertation aims to address these gaps by examining the scale transfers inherent in the construction and application of coarse macro-scale models. To achieve this, four studies are presented that, collectively, address the exposure, hazard, and vulnerability components of risk affected by upscaling or downscaling.
The first study focuses on a type of downscaling where coarse flood hazard inundation grids are enhanced to a finer resolution. While such inundation downscaling has been employed in numerous global model chains, ours is the first study to focus specifically on this component, providing an evaluation of the state of the art and a novel algorithm. Findings demonstrate that our novel algorithm is eight times faster than existing methods, offers a slight improvement in accuracy, and generates more physically coherent flood maps in hydraulically challenging regions. When applied to a case study, the algorithm generated a 4 m resolution inundation map from 30 m hydrodynamic model outputs in 33 s, a 60-fold improvement in runtime with a 25% increase in RMSE compared with direct hydrodynamic modelling. All evaluated downscaling algorithms yielded better accuracy than the coarse hydrodynamic model when compared to observations, demonstrating limits of coarse hydrodynamic models similar to those reported by others. Substituting downscaling into flood risk model chains, in place of high-resolution modelling, can drastically improve the lead time of impact-based forecasts and the efficiency of hazard map production. With downscaling, local regions could obtain high-resolution local inundation maps by post-processing a global model, without the need for expensive modelling or expertise.
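The basic idea behind inundation downscaling can be illustrated with a toy example. This is a hedged sketch of the generic terrain-subtraction approach used by this family of algorithms, not the dissertation's method; the grids and the nearest-neighbour resampling are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of the generic idea behind inundation downscaling:
# resample the coarse water surface elevation (WSE) onto the fine grid,
# subtract the fine-resolution terrain, and keep positive depths.
# Toy values only -- not the dissertation's algorithm.

coarse_wse = np.array([[10.0, 9.0]])         # coarse WSE (m), 2 cells
fine_dem = np.array([[9.2, 9.8, 8.5, 9.4]])  # fine terrain, 2x finer

# Nearest-neighbour resampling of WSE to the fine grid (factor 2)
fine_wse = np.repeat(coarse_wse, 2, axis=1)

# Depth = water surface minus terrain, clipped at zero (dry cells)
fine_depth = np.clip(fine_wse - fine_dem, 0.0, None)
print(fine_depth)  # high ground within a wet coarse cell stays dry
```

Even this naive version shows why downscaling recovers detail cheaply: fine-scale terrain carves dry islands and varying depths out of a uniformly wet coarse cell, at a fraction of the cost of rerunning a hydrodynamic model.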
The second study focuses on hazard aggregation and its implications for exposure, investigating implicit aggregations commonly used to intersect hazard grids with coarse exposure models. This research introduces a novel spatial classification framework to understand the effects of rescaling flood hazard grids to a coarser resolution. The study derives closed-form analytical solutions for the location and direction of bias from flood grid aggregation, showing that bias will always be present in regions near the edge of inundation. For example, inundation area will be positively biased when water depth grids are aggregated, while volume will be negatively biased when water elevation grids are aggregated. Extending the analysis to the effects of hazard aggregation on building exposure, this study shows that exposure in regions at the edge of inundation is an order of magnitude more sensitive to aggregation errors than hazard alone. Among the two aggregation routines considered, averaging water surface elevation grids better preserved flood depths at buildings than averaging water depth grids. The study provides the first mathematical proof and generalizable treatment of flood hazard grid aggregation, demonstrating important mechanisms to help flood risk modellers understand and control model behaviour.
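The edge-of-inundation area bias described above can be demonstrated numerically. The grids below are illustrative, not from the dissertation's experiments: block-averaging a water depth grid turns partially wet coarse cells into fully wet ones, so inundated area can only grow at the flood edge.

```python
import numpy as np

# Illustrative demo of the positive area bias from aggregating water
# depth grids: any coarse block containing at least one wet fine cell
# acquires a positive mean depth and is counted as entirely wet.

depth = np.array([
    [2.0, 1.0, 0.5, 0.0],
    [1.5, 0.5, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
])  # fine grid: 5 wet cells out of 16

# Aggregate 2x2 blocks by averaging depths (coarsening factor 2)
coarse = depth.reshape(2, 2, 2, 2).mean(axis=(1, 3))

fine_wet_area = np.count_nonzero(depth > 0)          # in fine-cell units
coarse_wet_area = np.count_nonzero(coarse > 0) * 4   # back in fine-cell units

print(fine_wet_area, coarse_wet_area)  # coarse area exceeds fine area
```

Away from the flood edge (fully wet or fully dry blocks) the averaging is harmless; the bias is concentrated exactly where partially wet blocks occur, which is the study's spatial classification argument in miniature.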
The final two studies focus on the aggregation of vulnerability models or flood damage functions, investigating the practice of applying per-asset functions to aggregate exposure models. Both studies extend Jensen’s inequality, a well-known 1906 mathematical proof, to demonstrate how the aggregation of flood damage functions leads to bias. Applying Jensen’s proof in this new context, results show that the typically concave flood damage functions will introduce a positive bias (overestimation) when aggregated. This behaviour was further investigated with a simulation experiment including 2 million buildings in Germany, four global flood hazard simulations and three aggregation scenarios. The results show that positive aggregation bias is not distributed evenly in space, meaning some regions identified as “hot spots of risk” in assessments may in fact just be hot spots of aggregation bias. This study provides the first application of Jensen’s inequality to explain the overestimates reported elsewhere, and offers advice for modellers to minimize such artifacts.
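The Jensen's-inequality argument is easy to verify directly: for a concave function f, f(mean(x)) ≥ mean(f(x)), so applying a per-building damage function to block-averaged depths overestimates the true mean damage. The square-root depth-damage curve below is a hypothetical stand-in for a typical concave function, not one used in the dissertation.

```python
import numpy as np

# Jensen's inequality demo: for a concave damage function f,
#   f(mean(depth)) >= mean(f(depth)),
# i.e. "aggregate depths, then apply the function" is biased high
# relative to "apply the function per building, then average".

def damage(depth_m):
    """Hypothetical concave depth-damage curve: fraction of value lost."""
    return np.minimum(1.0, np.sqrt(np.maximum(depth_m, 0.0) / 6.0))

depths = np.array([0.0, 0.2, 0.5, 1.5, 4.0])  # depths at five buildings (m)

true_mean_damage = damage(depths).mean()   # per-building, then average
aggregated_damage = damage(depths.mean())  # average depth, then function

# The aggregated estimate is biased high, as Jensen's inequality predicts.
print(aggregated_damage >= true_mean_damage)
```

The size of the gap depends on the curvature of the damage function and the spread of depths within each aggregate, which is why the bias is spatially uneven and can masquerade as a "hot spot of risk".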
In total, this dissertation investigates the complex ways aggregation and disaggregation influence the behaviour of risk models, focusing on the scale-transfers underpinning macro-scale flood risk assessments. Extending a key finding of the flood hazard literature to the broader context of flood risk, this dissertation concludes that all else equal, coarse models overestimate risk. This dissertation goes beyond previous studies by providing mathematical proofs for how and where such bias emerges in aggregation routines, offering a mechanistic explanation for coarse model overestimates. It shows that this bias is spatially heterogeneous, necessitating a deep understanding of how rescaling may bias models to effectively reduce or communicate uncertainties. Further, the dissertation offers specific recommendations to help modellers minimize scale transfers in problematic regions. In conclusion, I argue that such aggregation errors are epistemic, stemming from choices in model structure, and therefore hold greater potential and impetus for study and mitigation. This deeper understanding of uncertainties is essential for improving macro-scale flood risk models and their effectiveness in equitable, holistic, and sustainable flood management.
During the last decades, therapeutic proteins have risen to great significance in the pharmaceutical industry. As non-human proteins introduced into the human body cause a distinct immune reaction that triggers their rapid clearance, most newly approved protein pharmaceuticals are shielded by modification with synthetic polymers to significantly improve their blood circulation time. All such clinically approved protein-polymer conjugates contain polyethylene glycol (PEG), and its conjugation is denoted as PEGylation. However, many patients develop anti-PEG antibodies, which cause a rapid clearance of PEGylated molecules upon repeated administration. Therefore, the search for alternative polymers that can replace PEG in therapeutic applications has become important. In addition, although the blood circulation time is significantly prolonged, the therapeutic activity of some conjugates is decreased compared to the unmodified protein. The reason is that these conjugates are formed by the traditional conjugation method, which addresses the protein's lysine side chains. As proteins have many solvent-exposed lysines, this results in a somewhat uncontrolled attachment of polymer chains, leading to a mixture of regioisomers, some of which may affect the therapeutic performance.
This thesis investigates a novel method for ligating macromolecules in a site-specific manner, using enzymatic catalysis. Sortase A is used as the enzyme: It is a well-studied transpeptidase which is able to catalyze the intermolecular ligation of two peptides. This process is commonly referred to as sortase-mediated ligation (SML). SML constitutes an equilibrium reaction, which limits product yield. Two previously reported methods to overcome this major limitation were tested with polymers without using an excessive amount of one reactant.
Specific C- or N-terminal peptide sequences (recognition sequence and nucleophile) as part of the protein are required for SML. The complementary peptide was located at the polymer chain end. Grafting-to was used to avoid damaging the protein during polymerization. To be able to investigate all possible combinations (protein-recognition sequence and nucleophile-protein as well as polymer-recognition sequence and nucleophile-polymer) all necessary building blocks were synthesized. Polymerization via reversible deactivation radical polymerization (RDRP) was used to achieve a narrow molecular weight distribution of the polymers, which is required for therapeutic use.
The synthesis of the polymeric building blocks was started by synthesizing the peptide via automated solid-phase peptide synthesis (SPPS) to avoid post-polymerization attachment and to enable easy adaptation of changes in the peptide sequence. To account for the different functionalities (free N- or C-terminus) required for SML, different linker molecules between resin and peptide were used.
To facilitate purification, the chain transfer agent (CTA) for reversible addition-fragmentation chain-transfer (RAFT) polymerization was coupled to the resin-immobilized recognition sequence peptide. The acrylamide and acrylate-based monomers used in this thesis were chosen for their potential to replace PEG.
Following that, surface-initiated (SI) ATRP and RAFT polymerization were attempted, but failed. As a result, the newly developed method of xanthate-supported photo-iniferter (XPI) RAFT polymerization in solution was used successfully to obtain a library of various peptide-polymer conjugates with different chain lengths and narrow molar mass distributions.
After peptide side chain deprotection, these constructs were used first to ligate two polymers via SML, which was successful but revealed a limit in polymer chain length (max. 100 repeat units). When utilizing equimolar amounts of reactants, the use of Ni2+ ions in combination with a histidine after the recognition sequence to remove the cleaved peptide from the equilibrium maximized product formation with conversions of up to 70 %.
Finally, a model protein and a nanobody with promising properties for therapeutic use were biotechnologically modified to contain the peptide sequences required for SML. Using the model protein for C- or N-terminal SML with various polymers did not result in protein-polymer conjugates, most likely because the protein termini were not accessible to the enzyme. Using the nanobody for C-terminal SML, on the other hand, was successful. However, a polymer chain length limit similar to that in polymer-polymer SML was observed. Furthermore, in the synthesis of protein-polymer conjugates, it was more effective to shift the SML equilibrium by using an excess of polymer than by employing the Ni2+ ion strategy.
Overall, the experimental data from this work provides a good foundation for future research in this promising field; however, more research is required to fully understand the potential and limitations of using SML for protein-polymer synthesis. In future, the method explored in this dissertation could prove to be a very versatile pathway to obtain therapeutic protein-polymer conjugates that exhibit high activities and long blood circulation times.
To mark the thirtieth anniversary of the Kommunalwissenschaftliches Institut (KWI) at the University of Potsdam, this commemorative volume brings together short essays by former and current board members, honorary members of the board, long-serving research staff of the institute, and current academic cooperation partners. The twelve contributions address the field of local government studies and the history of the institute, current research questions in the field, and the KWI's academic collaborations. Edited by the KWI board, the volume offers a broad view of 30 years of local government studies in Brandenburg and at the University of Potsdam, as well as an outlook on future research in the field.
Arctic climate change is marked by intensified warming compared to global trends and a significant reduction in Arctic sea ice which can intricately influence mid-latitude atmospheric circulation through tropo- and stratospheric pathways. Achieving accurate simulations of current and future climate demands a realistic representation of Arctic climate processes in numerical climate models, which remains challenging.
Model deficiencies in replicating observed Arctic climate processes often arise from inadequate representation of the turbulent boundary layer processes that govern the interactions between the atmosphere, sea ice, and ocean. Many current climate models rely on parameterizations developed for mid-latitude conditions to handle Arctic turbulent boundary layer processes.
This thesis focuses on an improved representation of Arctic atmospheric processes and on understanding the resulting impact on large-scale mid-latitude atmospheric circulation within climate models. Improved turbulence parameterizations, recently developed based on Arctic measurements, were implemented in the global atmospheric circulation model ECHAM6. This involved modifying the stability functions over sea ice and ocean for stable stratification and changing the roughness length over sea ice for all stratification conditions. Comprehensive analyses are conducted to assess the impacts of these modifications on ECHAM6's simulations of the Arctic boundary layer, overall atmospheric circulation, and the dynamical pathways between the Arctic and mid-latitudes.
Through a step-wise implementation of the mentioned parameterizations into ECHAM6, a series of sensitivity experiments revealed that the combined impacts of the reduced roughness length and the modified stability functions are non-linear. Nevertheless, it is evident that both modifications consistently lead to a general decrease in the heat transfer coefficient, being in close agreement with the observations.
Additionally, compared to the reference observations, the ECHAM6 model falls short in accurately representing unstable and strongly stable conditions.
The less frequent occurrence of strong stability restricts the influence of the modified stability functions by reducing the affected sample size. However, when focusing solely on the specific instances of a strongly stable atmosphere, the sensible heat flux approaches near-zero values, which is in line with the observations. Models employing commonly used surface turbulence parameterizations were shown to have difficulties replicating the near-zero sensible heat flux in strongly stable stratification.
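The role of the stability functions can be seen in the standard bulk formula for the sensible heat flux, SH = ρ·cp·C_H·U·(T_s − T_a), where the transfer coefficient C_H is reduced under stable stratification. The sketch below is illustrative: the coefficient values are hypothetical, not ECHAM6's actual parameterization.

```python
# Sketch of the bulk aerodynamic sensible heat flux that surface-layer
# parameterizations feed into:
#   SH = rho * cp * C_H * U * (T_s - T_a).
# Stability functions shrink the transfer coefficient C_H under stable
# stratification; all numbers here are illustrative assumptions.

RHO_AIR = 1.3    # kg m^-3, cold near-surface air
CP_AIR = 1005.0  # J kg^-1 K^-1

def sensible_heat_flux(c_h, wind, t_surf, t_air):
    """Bulk sensible heat flux (W m^-2), positive upward."""
    return RHO_AIR * CP_AIR * c_h * wind * (t_surf - t_air)

# Modified stability functions drive C_H toward zero in strongly stable
# stratification, collapsing the flux toward the near-zero values seen
# in Arctic observations:
neutral = sensible_heat_flux(c_h=1.5e-3, wind=5.0, t_surf=-30.0, t_air=-25.0)
strongly_stable = sensible_heat_flux(c_h=1.0e-5, wind=5.0, t_surf=-30.0, t_air=-25.0)
print(neutral, strongly_stable)
```

This is the mechanism behind the near-zero sensible heat flux result above: commonly used functions keep C_H too large under strong stability, sustaining a spurious downward flux that the modified functions suppress.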
I also found that these limited changes in surface layer turbulence parameterizations have a statistically significant impact on the temperature and wind patterns across multiple pressure levels, including the stratosphere, in both the Arctic and mid-latitudes. These significant signals vary in strength, extent, and direction depending on the specific month or year, indicating a strong reliance on the background state.
Furthermore, this research investigates how the modified surface turbulence parameterizations may influence the response of both stratospheric and tropospheric circulation to Arctic sea ice loss.
The most suitable parameterizations for accurately representing Arctic boundary layer turbulence were identified from the sensitivity experiments. Subsequently, the model's response to sea ice loss is evaluated through extended ECHAM6 simulations with different prescribed sea ice conditions.
The simulation with adjusted surface turbulence parameterizations better reproduced the vertical extent of the observed Arctic tropospheric warming, demonstrating improved alignment with the reanalysis data. Additionally, unlike the control experiments, this simulation successfully reproduced specific circulation patterns linked to the stratospheric pathway for Arctic-mid-latitude linkages. Specifically, an increased occurrence of the Scandinavian-Ural blocking regime (negative phase of the North Atlantic Oscillation) in early (late) winter is observed. Overall, it can be inferred that improving turbulence parameterizations at the surface layer can improve ECHAM6's response to sea ice loss.
We analyze how conventional emissions trading schemes (ETS) can be modified by introducing “clean-up certificates” to allow for a phase of net-negative emissions. Clean-up certificates bundle the permission to emit CO2 with the obligation for its removal. We show that demand for such certificates is determined by cost-saving technological progress, the discount rate and the length of the compliance period. Introducing extra clean-up certificates into an existing ETS reduces near-term carbon prices and mitigation efforts. In contrast, substituting ETS allowances with clean-up certificates reduces cumulative emissions without depressing carbon prices or mitigation in the near term. We calibrate our model to the EU ETS and identify reforms where simultaneously (i) ambition levels rise, (ii) climate damages fall, (iii) revenues from carbon prices rise and (iv) carbon prices and aggregate mitigation cost fall. For reducing climate damages, roughly half of the issued clean-up certificates should replace conventional ETS allowances. In the context of the EU ETS, a European Carbon Central Bank could manage the implementation of clean-up certificates and could serve as an enforcement mechanism.
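The stated demand drivers (technological progress, discount rate, compliance period length) can be made concrete with a back-of-envelope valuation. This is a hypothetical sketch, not the paper's calibrated model: a certificate bundles emitting one tonne today with the obligation to remove it at the end of the compliance period, so its value relative to a plain allowance hinges on the discounted future removal cost.

```python
# Back-of-envelope sketch of clean-up certificate demand drivers.
# All prices, rates and horizons are hypothetical illustration values,
# not the paper's EU ETS calibration.

def cleanup_certificate_value(allowance_price, removal_cost_now,
                              cost_decline_rate, discount_rate, years):
    """Willingness to pay: emit today, then pay the (cheaper,
    discounted) removal cost after `years` (the compliance period)."""
    future_removal_cost = removal_cost_now * (1 - cost_decline_rate) ** years
    discounted_obligation = future_removal_cost / (1 + discount_rate) ** years
    return allowance_price - discounted_obligation

# Faster cost-saving technological progress in carbon removal raises
# the certificate's value relative to a plain allowance:
slow_progress = cleanup_certificate_value(80.0, 200.0, 0.02, 0.05, 30)
fast_progress = cleanup_certificate_value(80.0, 200.0, 0.06, 0.05, 30)
print(slow_progress, fast_progress)
```

The same structure shows the other two drivers: a higher discount rate or a longer compliance period shrinks the present value of the removal obligation and thus raises demand for the certificates.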
Improving permafrost dynamics in land surface models: insights from dual sensitivity experiments
(2024)
The thawing of permafrost and the subsequent release of greenhouse gases constitute one of the most significant and uncertain positive feedback loops in the context of climate change, making predictions regarding changes in permafrost coverage of paramount importance. To address these critical questions, climate scientists have developed Land Surface Models (LSMs) that encompass a multitude of physical soil processes. This thesis is committed to advancing our understanding and refining precise representations of permafrost dynamics within LSMs, with a specific focus on the accurate modeling of heat fluxes, an essential component for simulating permafrost physics.
The first research question reviews the fundamental model prerequisites for representing permafrost soils in land surface modeling. It includes a first-of-its-kind comparison of LSMs in CMIP6 to reveal their differences and shortcomings in key permafrost physics parameters. Overall, each of these LSMs represents a unique approach to simulating soil processes and their interactions with the climate system. Choosing the most appropriate model for a particular application depends on factors such as the spatial and temporal scale of the simulation, the specific research question, and available computational resources.
The second research question evaluates the performance of the state-of-the-art Community Land Model (CLM5) in simulating Arctic permafrost regions. Our approach overcomes traditional evaluation limitations by individually addressing depth, seasonality, and regional variations, providing a comprehensive assessment of permafrost and soil temperature dynamics. I compare CLM5's results with three extensive datasets: (1) soil temperatures from 295 borehole stations, (2) active layer thickness (ALT) data from the Circumpolar Active Layer Monitoring Network (CALM), and (3) soil temperatures, ALT, and permafrost extent from the ESA Climate Change Initiative (ESA-CCI). The results show that CLM5 aligns well with ESA-CCI and CALM for permafrost extent and ALT but reveals a significant global cold temperature bias, notably over Siberia. These results echo a persistent challenge identified in numerous studies: the existence of a systematic 'cold bias' in soil temperature over permafrost regions. To address this challenge, the following research questions propose dual sensitivity experiments.
The third research question represents the first study to apply a Plant Functional Type (PFT)-based approach to derive soil texture and soil organic matter (SOM), departing from the conventional use of coarse-resolution global data in LSMs. This novel method results in a more uniform distribution of soil organic matter density (OMD) across the domain, characterized by reduced OMD values in most regions. However, changes in soil texture exhibit a more intricate spatial pattern. Comparing the results to observations reveals a significant reduction in the cold bias observed in the control run. This method shows noticeable improvements in permafrost extent, but at the cost of an overestimation in ALT. These findings emphasize the model's high sensitivity to variations in soil texture and SOM content, highlighting the crucial role of soil composition in governing heat transfer processes and shaping the seasonal variation of soil temperatures in permafrost regions.
Expanding upon a site experiment conducted in Trail Valley Creek by Dutch et al. (2022), the fourth research question extends the application of the snow scheme proposed by Sturm et al. (1997) to cover the entire Arctic domain. By employing a snow scheme better suited to the snow density profile observed over permafrost regions, this thesis seeks to assess its influence on simulated soil temperatures. Comparing this method to observational datasets reveals a significant reduction in the cold bias that was present in the control run. In most regions, the Sturm run exhibits a substantial decrease in the cold bias. However, there is a distinctive overshoot with a warm bias observed in mountainous areas. The Sturm experiment effectively addressed the overestimation of permafrost extent in the control run, albeit resulting in a substantial reduction in permafrost extent over mountainous areas. ALT results remain relatively consistent compared to the control run. These outcomes align with our initial hypothesis, which anticipated that the reduced snow insulation in the Sturm run would lead to higher winter soil temperatures and a more accurate representation of permafrost physics.
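The Sturm et al. (1997) scheme ties snow thermal conductivity to density. The sketch below uses the regression as it is commonly quoted (density in g/cm³, conductivity in W m⁻¹ K⁻¹); the coefficients should be checked against the original publication before reuse, and the example densities are illustrative.

```python
# Sketch of the Sturm et al. (1997) snow thermal conductivity
# regression as commonly quoted (rho in g/cm^3, k in W m^-1 K^-1);
# verify coefficients against the original paper before reuse.

def sturm_conductivity(rho):
    """Effective snow thermal conductivity from snow density."""
    if rho < 0.156:
        return 0.023 + 0.234 * rho
    return 0.138 - 1.01 * rho + 3.233 * rho ** 2

# Dense, wind-packed Arctic snow conducts heat far better than fresh
# low-density snow, i.e. it insulates the underlying soil less -- the
# mechanism behind the reduced winter cold bias in the Sturm run.
fresh = sturm_conductivity(0.10)   # roughly fresh snow (illustrative)
packed = sturm_conductivity(0.40)  # roughly a wind slab (illustrative)
print(fresh, packed)
```

A density profile more representative of Arctic snowpacks therefore raises the effective conductivity, lets more winter heat escape from the soil, and warms simulated soil temperatures relative to the control scheme.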
In summary, this thesis demonstrates significant advancements in understanding permafrost dynamics and its integration into LSMs. It has meticulously unraveled the intricacies involved in the interplay between heat transfer, soil properties, and snow dynamics in permafrost regions. These insights offer novel perspectives on model representation and performance.
The research project "Workflow-Management-Systeme für Open-Access-Hochschulverlage (OA-WFMS)" is a cooperation between HTWK Leipzig and the University of Potsdam. Its goal is to analyze the needs of university presses and their requirements for a workflow management system (WFMS) in order to derive a generic requirements specification. The WFMS is intended to simplify and accelerate the publication process in open-access presses and to promote the spread of open access and sustainable, digital scholarly publishing.
The project builds on the results of the projects "Open-Access-Hochschulverlag (OA-HVerlag)" and "Open-Access-Strukturierte-Kommunikation (OA-STRUKTKOMM)". The kick-off workshop underlying this report took place in Leipzig in 2024 with representatives of ten institutions. It served to identify challenges and requirements for a WFMS and to discuss existing approaches and tools.
The workshop addressed the following questions:
a. How can a WFMS efficiently organize and monitor publication processes in scholarly presses?
b. Which requirements must a WFMS meet in order to support publication processes optimally?
c. Which interfaces must be considered to guarantee the interoperability of the systems involved?
d. Which existing approaches and tools are already in use, and what are their advantages and disadvantages?
The workshop was divided into two parts: Part 1 covered challenges and requirements (questions a to c), Part 2 existing solutions and tools (question d). The results of the workshop feed into the research project's needs analysis.
The results documented in this report show the many challenges that existing approaches to open-access publication management face, particularly system heterogeneity, individual customization needs, and the necessity of systematic documentation. The support systems and tools currently in use, such as file repositories and project management and communication tools, cannot meet the requirements as a whole, although they remain usable for partial solutions. The integration of existing systems into an OA-WFMS to be developed must therefore be considered, and the interoperability of the interacting systems must be ensured. The workshop participants agreed that the OA-WFMS should be built in a flexible and modular fashion; preference was given to consortial software development and joint operation within a network.
The workshop provided valuable insights into the work of university presses and thus forms a solid basis for the subsequent, more detailed needs analysis and the drafting of the generic requirements specification.