Actin is one of the most highly conserved proteins in eukaryotes, and distinct actin-related proteins with filament-forming properties are even found in prokaryotes. Owing to these commonalities, actin-modulating proteins of many species share similar structural properties and proposed functions. The polymerization and depolymerization of actin are critical processes for a cell, as they contribute to shape changes that adapt the cell to its environment and to the movement and distribution of nutrients and cellular components within the cell. However, the extent to which the functions of actin-binding proteins are conserved between distantly related species has only been addressed in a few cases. In this work, the functions of Coronin-A (CorA) and Actin-interacting protein 1 (Aip1), two proteins involved in actin dynamics, were characterized. In addition, the interchangeability and function of Aip1 were investigated in two phylogenetically distant model organisms. The flowering plant Arabidopsis thaliana (encoding two homologs, AIP1-1 and AIP1-2) and the amoeba Dictyostelium discoideum (encoding one homolog, DdAip1) were chosen because the functions of their actin cytoskeletons may differ in many aspects. Cross-species functional analyses were conducted for the AIP1 homologs, as flowering plants do not harbor a CorA gene.
In the first part of the study, the effects of four different mutation methods on the function of the Coronin-A protein and the resulting phenotypes in D. discoideum were examined in two genetic knockouts, one RNAi knockdown, and a sudden loss-of-function mutant created by chemical-induced dislocation (CID). The advantages and disadvantages of the different mutation methods with respect to the motility, appearance, and development of the amoebae were investigated, and the results showed that not all observed properties were affected with the same intensity. Remarkably, a new combination of Selection-Linked Integration and CID could be established.
In the second and third parts of the thesis, the exchange of Aip1 between plant and amoeba was carried out. The two A. thaliana homologs (AIP1-1 and AIP1-2) were analyzed for functionality both in the plant and in D. discoideum. In the Aip1-deficient amoeba, rescue with AIP1-1 was more effective than with AIP1-2. The main results in the plant showed that, in the aip1-2 mutant background, reintroduced AIP1-2 displayed the most efficient rescue, and A. thaliana AIP1-1 rescued better than DdAip1. The choice of the tagging site was important for the function of Aip1, as steric hindrance is a problem: DdAip1 was less effective when tagged at the C-terminus, while the plant AIP1s showed mixed results depending on the tag position. In conclusion, the foreign proteins partially rescued the phenotypes of mutant plants and mutant amoebae, despite the organisms being only very distantly related in evolutionary terms.
Access to digital finance
(2024)
Financing entrepreneurship spurs innovation and economic growth. Digital financial platforms that crowdfund equity for entrepreneurs have emerged globally, yet they remain poorly understood. We model equity crowdfunding in terms of the relationship between the number of investors and the amount of money raised per pitch. We examine heterogeneity in the average amount raised per pitch associated with differences across three countries and seven platforms. Using a novel dataset of successful fundraising on the most prominent platforms in the UK, Germany, and the USA, we find that the underlying relationship between the number of investors and the amount of money raised for entrepreneurs is log-linear, with a coefficient less than one, concave to the origin. We identify significant variation in the average amount invested in each pitch across countries and platforms. Our findings have implications for market actors as well as for regulators who set competitive frameworks.
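The reported log-linear relationship can be illustrated with a short sketch on synthetic data (all numbers here are hypothetical, not taken from the study): fitting log(amount) = α + β·log(investors) by least squares recovers an elasticity β below one, i.e. a relationship concave to the origin.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration (not the paper's data): amount raised per pitch
# grows log-linearly with the number of investors, with elasticity < 1.
true_elasticity = 0.8
investors = rng.integers(10, 2000, size=500)
log_amount = 8.0 + true_elasticity * np.log(investors) + rng.normal(0, 0.3, size=500)

# Fit log(amount) = alpha + beta * log(investors) by ordinary least squares.
beta, alpha = np.polyfit(np.log(investors), log_amount, deg=1)
```

A β below one means that doubling the number of investors less than doubles the amount raised per pitch.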
Climate change fundamentally transforms glaciated high-alpine regions, with well-known cryospheric and hydrological implications, such as accelerating glacier retreat, transiently increased runoff, longer snow-free periods and more frequent and intense summer rainstorms. These changes affect the availability and transport of sediments in high alpine areas by altering the interaction and intensity of different erosion processes and catchment properties.
Gaining insight into future alterations in suspended sediment transport by high alpine streams is crucial, given its wide-ranging implications, e.g. for flood damage potential, flood hazard in downstream river reaches, hydropower production, riverine ecology and water quality. However, the current understanding of how climate change will impact suspended sediment dynamics in these high alpine regions is limited. On the one hand, this is due to the scarcity of measurement time series long enough to infer trends. On the other hand, it is difficult – if not impossible – to develop process-based models, due to the complexity and multitude of processes involved in high alpine sediment dynamics. Therefore, knowledge has so far been confined to conceptual models (which do not facilitate deriving concrete timings or magnitudes for individual catchments) or qualitative estimates ('higher export in warmer years') that may not be able to capture decreases in sediment export. Recently, machine-learning approaches have gained popularity for modeling sediment dynamics, since their black-box nature suits the problem at hand: relatively well-understood input and output data, linked by very complex processes.
Therefore, the overarching aim of this thesis is to estimate sediment export from the high alpine Ötztal valley in Tyrol, Austria, over decadal timescales in the past and future – i.e. timescales relevant to anthropogenic climate change. This is achieved by informing, extending, evaluating and applying a quantile regression forest (QRF) approach, i.e. a nonparametric, multivariate machine-learning technique based on random forest.
The first study included in this thesis aimed to understand present sediment dynamics, i.e. in the period with available measurements (up to 15 years). To inform the modeling setup for the two subsequent studies, this study identified the most important predictors, areas within the catchments, and time periods. To that end, water and sediment yields from three nested gauges in the upper Ötztal, Vent, Sölden and Tumpen (98 to almost 800 km² catchment area, 930 to 3772 m a.s.l.), were analyzed for their distribution in space, their seasonality and spatial differences therein, and the relative importance of short-term events. The findings suggest that the areas situated above 2500 m a.s.l., containing glacier tongues and recently deglaciated areas, play a pivotal role in sediment generation across all sub-catchments. In contrast, precipitation events were relatively unimportant (on average, 21 % of annual sediment yield was associated with precipitation events). Thus, the second and third studies focused on the Vent catchment and its sub-catchment above gauge Vernagt (98 and 11.4 km², 1891 to 3772 m a.s.l.), due to their higher share of areas above 2500 m. Additionally, they included discharge, precipitation and air temperature (as well as their antecedent conditions) as predictors.
The second study aimed to estimate sediment export since the 1960s/70s at the gauges Vent and Vernagt. This was facilitated by the availability of long records of the predictors (discharge, precipitation and air temperature) and shorter records (four and 15 years) of turbidity-derived sediment concentrations at the two gauges. The third study aimed to estimate future sediment export until 2100 by applying the QRF models developed in the second study to pre-existing precipitation and temperature projections (EURO-CORDEX) and discharge projections (from the physically based hydroclimatological and snow model AMUNDSEN) for the three representative concentration pathways RCP2.6, RCP4.5 and RCP8.5.
The combined results of the second and third studies show overall increasing sediment export in the past and decreasing export in the future. This suggests that peak sediment is underway or has already passed – unless precipitation changes unfold differently than represented in the projections, or changes in catchment erodibility prevail and override these trends. Despite the overall future decrease, very high sediment export remains possible in response to precipitation events. This two-fold development has important implications for managing sediment, flood hazard and riverine ecology.
This thesis shows that QRF can be a very useful tool to model sediment export in high alpine areas. Several validations in the second study showed good performance of QRF and its superiority to traditional sediment rating curves – especially in periods containing high sediment export events, which points to its ability to deal with threshold effects. A technical limitation of QRF is its inability to extrapolate beyond the range of values represented in the training data. We assessed the number and severity of such out-of-observation-range (OOOR) days in both studies, which showed that there were few OOOR days in the second study and that the uncertainties associated with OOOR days were small before 2070 in the third study. As the pre-processed data and model code have been made publicly available, future studies can easily test further approaches or apply QRF to further catchments.
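For contrast with QRF, the traditional sediment rating curve mentioned above is a power law SSC = a·Q^b fitted in log-log space. A minimal sketch on synthetic data (parameter values hypothetical) shows the fit; note that such a curve yields only a point estimate and cannot represent threshold effects or prediction quantiles, which is where QRF has the advantage:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic discharge Q (m^3/s) and suspended sediment concentration SSC,
# following a power-law rating curve SSC = a * Q**b with multiplicative noise.
a_true, b_true = 5.0, 1.5
Q = rng.uniform(1, 50, size=400)
SSC = a_true * Q**b_true * np.exp(rng.normal(0, 0.2, size=400))

# Classical rating curve: linear least squares in log-log space.
b_fit, log_a_fit = np.polyfit(np.log(Q), np.log(SSC), deg=1)
a_fit = np.exp(log_a_fit)

# Point prediction for a given discharge (QRF would give full quantiles).
ssc_pred = a_fit * 30.0**b_fit
```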
The mobile-immobile model (MIM) has been established in geoscience in the context of contaminant transport in groundwater. Here, tracer particles effectively immobilise, e.g. due to diffusion into dead-end pores or sorption. The main idea of the MIM is to split the total particle density into a mobile and an immobile density. Individual tracers switch between the mobile and immobile state following a two-state telegraph process, i.e. the residence times in each state are distributed exponentially. In geoscience the focus lies on the breakthrough curve (BTC), which is the concentration at a fixed location over time. We apply the MIM to biological experiments with a special focus on anomalous scaling regimes of the mean squared displacement (MSD) and non-Gaussian displacement distributions. As an exemplary system, we have analysed the motion of tau proteins that diffuse freely inside the axons of neurons. Their free diffusion corresponds to the mobile state of the MIM. Tau proteins stochastically bind to microtubules, which effectively immobilises them until they unbind and continue diffusing. Long immobilisation durations compared to the mobile durations give rise to distinct non-Gaussian, Laplace-shaped distributions, accompanied by a plateau in the MSD for initially mobile tracer particles at relevant intermediate timescales. An equilibrium fraction of initially mobile tracers gives rise to non-Gaussian displacements at intermediate timescales, while the MSD remains linear at all times. In another setting, biomolecules diffuse in a biosensor and transiently bind to specific receptors; here, advection becomes relevant in the mobile state. The plateau in the MSD observed for the advection-free setting and long immobilisation durations persists in the case with advection. We find a new, clear regime of anomalous diffusion with non-Gaussian distributions and a cubic scaling of the MSD.
This regime emerges both for initially mobile and for initially immobile tracers. For an equilibrium fraction of initially mobile tracers we observe an intermittent ballistic scaling of the MSD. The long-time effective diffusion coefficient is enhanced by advection, which we explain physically via the variance of the mobile durations. Finally, we generalise the MIM to incorporate arbitrary immobilisation time distributions and focus on a Mittag-Leffler immobilisation time distribution with power-law tail ~ t^(-1-mu), 0 < mu < 1, and diverging mean immobilisation durations. A fit of our model to the BTC of experimental data from tracer particles in aquifers matches the BTC, including the power-law tail. We use the fit parameters to plot the displacement distributions and the MSD. We find Gaussian normal diffusion at short times, and at long times a power-law decay of the mobile mass accompanied by anomalous diffusion. The long-time diffusion is subdiffusive in the advection-free setting, while with advection it is subdiffusive for 0 < mu < 1/2 and superdiffusive for 1/2 < mu < 1. In the long-time limit we show equivalence of our model to a bi-fractional diffusion equation.
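The basic MIM behavior described above (a suppressed MSD for initially mobile tracers when immobilisation durations are long) can be sketched in a minimal discrete-time simulation; the diffusion coefficient and rate constants below are assumed for illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal discrete-time sketch of the mobile-immobile model (MIM):
# a two-state telegraph process with (geometric ~ exponential) residence times.
n, steps, D, dt = 5000, 500, 1.0, 0.01
k_off, k_on = 0.5, 0.02          # immobilisation / remobilisation rates (assumed)
x = np.zeros(n)
mobile = np.ones(n, dtype=bool)  # all tracers start in the mobile state

for _ in range(steps):
    # mobile tracers diffuse; immobile tracers stay put
    x[mobile] += rng.normal(0, np.sqrt(2 * D * dt), mobile.sum())
    # state switching, exponential waiting times discretised per step
    to_immobile = mobile & (rng.random(n) < k_off * dt)
    to_mobile = ~mobile & (rng.random(n) < k_on * dt)
    mobile[to_immobile] = False
    mobile[to_mobile] = True

msd_mim = np.mean(x**2)
msd_free = 2 * D * steps * dt    # reference: free diffusion MSD = 2*D*t
```

With long immobilisation durations (small k_on), the ensemble MSD falls well below the free-diffusion reference, the onset of the intermediate-time plateau described above.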
The research project "Workflow-Management-Systeme für Open-Access-Hochschulverlage (OA-WFMS)" ("Workflow Management Systems for Open Access University Presses") is a cooperation between HTWK Leipzig and the University of Potsdam. Its goal is to analyze the needs of university presses and their requirements for a workflow management system (WFMS) in order to derive a generic requirements specification. The WFMS is intended to simplify and accelerate the publication process in OA presses and to promote the dissemination of Open Access and sustainable, digital scholarly publishing.
The project builds on the results of the projects "Open-Access-Hochschulverlag (OA-HVerlag)" and "Open-Access-Strukturierte-Kommunikation (OA-STRUKTKOMM)". The kickoff workshop underlying this report took place in Leipzig in 2024 with representatives of ten institutions. The workshop served to identify challenges and requirements for a WFMS and to discuss existing approaches and tools.
The workshop addressed the following questions:
a. How can a WFMS make the organization and monitoring of publication processes in scholarly presses efficient?
b. Which requirements must a WFMS fulfill to optimally support publication processes?
c. Which interfaces must be considered to guarantee the interoperability of the systems?
d. Which existing approaches and tools are already in use, and what are their advantages and disadvantages?
The workshop was divided into two parts: Part 1 dealt with challenges and requirements (questions a. to c.), Part 2 with existing solutions and tools (question d.). The results of the workshop feed into the needs analysis of the research project.
The results documented in this report show the multitude of challenges that existing approaches to OA publication management face. The challenges are particularly evident in system heterogeneity, individual customization needs, and the necessity of systematic documentation. The support systems and tools currently in use, such as file repositories and project management and communication tools, cannot meet the requirements as a whole, but they are usable for partial solutions. Therefore, the integration of existing systems into an OA-WFMS to be developed must be considered, and the interoperability of the interacting systems must be ensured. The workshop participants agreed that the OA-WFMS should be designed in a flexible and modular way. Preference was given to consortial software development and joint operation in a consortium.
The workshop provided valuable insights into the work of university presses and thus forms a solid basis for the subsequent, more detailed needs analysis and the creation of the generic requirements specification.
Bildung:digital
(2024)
Already swiped, liked, or posted in bed this morning? Joined a video conference at work, used a database, or written some code? Quickly paid by smartphone in the store on the way home, listened to podcasts, and renewed your library books? And in the evening, on the couch with the tablet, filled out your tax return on ELSTER.de, shopped online, or paid bills before the streaming platform lured you in with a series?
Our lives are digitalized through and through. These changes make many things faster, easier, more efficient. But keeping pace demands a lot of us, and by no means everyone succeeds. There are people who prefer to go to the bank for a transfer, leave programming to the experts, send their tax return by post, and use their smartphone only for making calls. They do not want to; perhaps they also cannot. They never learned it. Other, younger people grow up as "digital natives" amid digital devices, tools, and processes. But does that really mean they can handle them? Or do they, too, need digital education?
But what does successful digital education actually look like? Do we learn to use a tablet, to google properly, and to build Excel spreadsheets? Perhaps it is about more: about understanding the comprehensive transformation that has gripped our world ever since it began to be broken down into ones and zeros and rebuilt virtually. And how do we learn to live in a world of digitality, with everything that entails, and to our benefit? For the current issue of "Portal Wissen" we looked around the University of Potsdam to see what role the connection between digitalization and learning plays in research across the disciplines: We spoke with Katharina Scheiter, Professor of Digital Education, about the future of German schools, and had several experts show us examples of how digital tools can improve learning in schools as well as continuing education in professional life. Researchers from computer science and agricultural research also demonstrated how even seasoned farmers can learn a great deal about their land and their work thanks to digital tools. We talked to education researchers who use big data to analyze how boys and girls learn and where possible causes of differences may lie. The education and political scientist Nina Kolleck, in turn, looks at education against the backdrop of globalization, drawing on the analysis of large volumes of social media data.
In doing so, we naturally do not lose sight of the diversity of research at the University of Potsdam: We ask the criminal law scholar Anna Albrecht 33 questions, accompany a group of geoscientists to the Himalayas, and learn what alternatives to antibiotics may soon be available. This magazine also covers stress and how it makes us ill, research on sustainable ore extraction, and new approaches in school development.
Also new is a whole series of shorter contributions that invite browsing: from research news and staff updates to photographic glimpses into laboratories, simple explanations of complex phenomena, and outlooks into the wider world of research, all the way to a little scientific utopia, a personal thank-you to research, and a science comic. All in the name of education, of course. Enjoy reading!
Knowledge about causal structures is crucial for decision support in various domains. For example, in discrete manufacturing, identifying the root causes of failures and quality deviations that interrupt the highly automated production process requires causal structural knowledge. However, in practice, root cause analysis is usually built upon individual expert knowledge about associative relationships. But, "correlation does not imply causation", and misinterpreting associations often leads to incorrect conclusions. Recent developments in methods for causal discovery from observational data have opened the opportunity for a data-driven examination. Despite its potential for data-driven decision support, omnipresent challenges impede causal discovery in real-world scenarios. In this thesis, we make a threefold contribution to improving causal discovery in practice.
(1) The growing interest in causal discovery has led to a broad spectrum of methods with specific assumptions on the data and various implementations. Hence, application in practice requires careful consideration of existing methods, which becomes laborious when dealing with various parameters, assumptions, and implementations in different programming languages. Additionally, evaluation is challenging due to the lack of ground truth in practice and limited benchmark data that reflect real-world data characteristics.
To address these issues, we present a platform-independent modular pipeline for causal discovery and a ground truth framework for synthetic data generation that provides comprehensive evaluation opportunities, e.g., to examine the accuracy of causal discovery methods in case of inappropriate assumptions.
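A ground-truth framework of this kind can be sketched as follows: data are sampled from a linear structural equation model (SEM) over a known DAG, and a recovered structure is scored against the true edge set. The three-variable DAG and coefficients below are hypothetical, chosen only to illustrate the idea:

```python
import numpy as np

rng = np.random.default_rng(3)

# Ground-truth DAG (hypothetical example): X -> Y -> Z and X -> Z.
# A linear SEM generates observational data with known structure,
# so causal discovery output can be scored against the true edges.
n = 2000
X = rng.normal(size=n)
Y = 0.8 * X + rng.normal(scale=0.5, size=n)
Z = 0.5 * X + 0.7 * Y + rng.normal(scale=0.5, size=n)
data = np.column_stack([X, Y, Z])

truth = {("X", "Y"), ("Y", "Z"), ("X", "Z")}  # edges a method should recover

def shd(estimated: set, truth: set) -> int:
    """Structural Hamming distance: edges present in exactly one graph."""
    return len(estimated ^ truth)

print(shd({("X", "Y"), ("Y", "Z")}, truth))  # one missing edge -> 1
```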
Applying constraint-based methods for causal discovery requires selecting a conditional independence (CI) test, which is particularly challenging in the mixed discrete-continuous data omnipresent in many real-world scenarios. In this context, inappropriate assumptions on the data, or the commonly applied discretization of continuous variables, reduce the accuracy of CI decisions, leading to incorrect causal structures.
Therefore, we contribute a non-parametric CI test leveraging k-nearest-neighbor methods and prove its statistical validity and power in mixed discrete-continuous data, as well as its asymptotic consistency when used in constraint-based causal discovery. An extensive evaluation on synthetic and real-world data shows that the proposed CI test outperforms state-of-the-art approaches in the accuracy of CI testing and causal discovery, particularly in settings with low sample sizes.
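For context, the classical parametric baseline that such a non-parametric test replaces is the Fisher-z partial-correlation CI test, which is valid only under linear-Gaussian assumptions and therefore degrades in mixed discrete-continuous data. A minimal sketch (the chain example below is illustrative, not from the thesis):

```python
import numpy as np
from math import atanh, sqrt, erfc

rng = np.random.default_rng(4)

def fisher_z_pvalue(x, y, z=None):
    """p-value of a classical (partial) correlation CI test of x _||_ y | z.
    Valid under linear-Gaussian assumptions only; a kNN-based test targets
    mixed discrete-continuous data where these assumptions fail."""
    k = 0
    if z is not None:
        Z = np.column_stack([np.ones(len(x)), z])
        k = Z.shape[1] - 1
        # regress out z and test the residual correlation
        x = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
        y = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    r = np.corrcoef(x, y)[0, 1]
    stat = sqrt(len(x) - k - 3) * abs(atanh(r))  # Fisher z-transform
    return erfc(stat / sqrt(2))                  # two-sided normal p-value

# Chain X -> Y -> Z: X and Z are dependent, but independent given Y.
n = 3000
X = rng.normal(size=n)
Y = 0.9 * X + rng.normal(scale=0.5, size=n)
Zv = 0.9 * Y + rng.normal(scale=0.5, size=n)

p_marginal = fisher_z_pvalue(X, Zv)        # very small: dependence detected
p_conditional = fisher_z_pvalue(X, Zv, Y)  # not small: CI given Y accepted
```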
(3) To show the applicability and opportunities of causal discovery in practice, we examine our contributions in real-world discrete manufacturing use cases. For example, we showcase how causal structural knowledge helps to understand unforeseen production downtimes or adds decision support in case of failures and quality deviations in automotive body shop assembly lines.
Advancing digitalization is changing society and has far-reaching effects on people and companies. Fundamental to these changes are the new technological possibilities for processing data on an ever-increasing scale and for various purposes. The availability of large and high-quality data sets, especially those based on personal data, is crucial. They are used either to improve the productivity, quality, and individuality of products and services or to develop new types of services. Today, user behavior is tracked more actively and comprehensively than ever, despite increasing legal requirements for protecting personal data worldwide. This increasingly raises ethical, moral, and social questions, which have moved to the forefront of the political debate, not least due to prominent cases of data misuse. Given this discourse and the legal requirements, today's data management must fulfill three conditions: first, legality or legal conformity of use; second, ethical legitimacy; and third, adding value from a business perspective. Within the framework of these conditions, this cumulative dissertation pursues four research objectives, focused on gaining a better understanding of
(1) the challenges of implementing privacy laws,
(2) the factors that influence customers' willingness to share personal data,
(3) the role of data protection for digital entrepreneurship, and
(4) the interdisciplinary scientific significance, its development, and its interrelationships.
Classification, prediction and evaluation of graph neural networks on online social media platforms
(2024)
The vast amount of data generated on social media platforms has made them a valuable source of information for businesses, governments and researchers. Social media data can provide insights into user behavior, preferences, and opinions. In this work, we address two important challenges in social media analytics. Predicting user engagement with online content has become a critical task for content creators seeking to increase engagement and reach larger audiences. Traditional user engagement prediction approaches rely solely on features derived from the user and content. However, a new class of deep learning methods based on graphs captures not only the content features but also the graph structure of social media networks.
This thesis proposes a novel Graph Neural Network (GNN) approach to predict user interaction with tweets. The proposed approach combines the features of users, tweets and their engagement graphs. The tweet text features are extracted using pre-trained embeddings from language models, and a GNN layer is used to embed the user in a vector space. The GNN model then combines the features and graph structure to predict user engagement. The proposed approach achieves an accuracy value of 94.22% in classifying user interactions, including likes, retweets, replies, and quotes.
Another major challenge in social media analysis is detecting and classifying social bot accounts. Social bots are automated accounts used to manipulate public opinion by spreading misinformation or generating fake interactions. Detecting social bots is critical to prevent their negative impact on public opinion and trust in social media. In this thesis, we classify social bots on Twitter by applying Graph Neural Networks. The proposed approach combines a node's own features with an aggregation of the features of the node's neighborhood to classify social bot accounts. Our final results indicate a 6% improvement in the area-under-the-curve score of the final predictions through the utilization of GNNs.
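The combination of a node's own features with an aggregate of its neighborhood, as used in many GNN architectures (e.g. GraphSAGE-style mean aggregation), can be sketched in plain NumPy; the graph, features, and weights below are toy values, not trained parameters:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy sketch of one message-passing step: each node combines its own
# features with the mean of its neighbours' features, then applies a
# (here random, in practice learned) linear map and a ReLU nonlinearity.
A = np.array([[0, 1, 1, 0],   # adjacency matrix of a 4-node example graph
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = rng.normal(size=(4, 8))   # 8-dim node features (e.g. account statistics)

deg = A.sum(axis=1, keepdims=True)
neigh = (A @ H) / deg         # mean of neighbour features per node

W_self = rng.normal(size=(8, 4))
W_neigh = rng.normal(size=(8, 4))
H_next = np.maximum(0, H @ W_self + neigh @ W_neigh)  # updated node embeddings
```

Stacking such layers lets each node's embedding incorporate information from increasingly distant parts of the graph before a final classifier labels it (e.g. bot vs. human).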
Overall, our work highlights the importance of social media data and the potential of new methods such as GNNs to predict user engagement and detect social bots. These methods have important implications for improving the quality and reliability of information on social media platforms and mitigating the negative impact of social bots on public opinion and discourse.
The present dissertation investigates changes in lingual coarticulation across childhood in German-speaking children from three to nine years of age and in adults. Coarticulation refers to the mismatch between abstract phonological units and their seemingly commingled realization in continuous speech. As a process at the intersection of phonology and phonetics, its changes across childhood offer insights into both speech motor and phonological development. Because specific predictions for changes in coarticulation across childhood can be derived from existing speech production models, investigating children's coarticulatory patterns can help us model human speech production.
While coarticulatory changes may shed light on some of the central questions of speech production development, previous studies on the topic were sparse and presented a puzzling picture of conflicting findings. One reason for this scarcity is the difficulty of acquiring articulatory data from a young population. Within the research program this dissertation is embedded in, we accepted this challenge and successfully set up the largest corpus to date of articulatory data from children, using ultrasound tongue imaging. In contrast to earlier studies, a high number of participants in tight age cohorts across a wide age range, together with a thoroughly controlled set of pseudowords, allowed for statistically powerful investigations of a process known to be variable and complicated to track.
The specific focus of my studies is on lingual vocalic coarticulation as measured in the horizontal position of the highest point of the tongue dorsum. Based on three studies on a) anticipatory coarticulation towards the left side of the utterance, b) carryover coarticulation towards the right side of the utterance, and c) anticipatory coarticulatory extent in repeated versus read-aloud speech, I deduce the following main theses:
1. Maturing speech motor control is responsible for some developmental changes in coarticulation.
2. Coarticulation can be modeled as the coproduction of articulatory gestures.
3. The developmental change in coarticulation results from a decrease of vocalic activation width.