Objective and guiding research interest: As a working basis, the following definition of the stage society (Bühnengesellschaft) serves as a point of departure: the stage society is formed by the totality of the acting characters of a farce (Posse), who stand in social, psychological, and mental relationships to one another. Its size depends on genre-specific as well as practical theatrical factors. The stage society forms the interface between the fictional world of the farce and the lived reality of the audience. It connects the world of the play with reality; through this bridging function, the two are related to each other. An analysis of the stage society of Nestroy's farces framed in this way must address the following, differently weighted and accentuated, focal points:
1. Character conception: The conception of the characters of Nestroy's stage societies is analyzed both in the context of historical reality and the prevailing cultural concepts and in relation to the genre tradition, guided by the question of how contemporary reality ("the conflict-ridden theatre of the world") becomes tangible. The analysis of the "comic central figure" is of particular importance in this context.
2. Relationships within the stage society: An analysis of the character constellation, of the relationship between main and minor characters, and of the formation of groupings illuminates their dramaturgical function in the context of the satirical intention.
3. Stage society and theatre: The theatre-studies perspective is accentuated by examining theatre organization, the stage, and the male and female composition of the ensemble, in order to show how the size and composition of the stage society depended on the real conditions of the popular theatre business.
On the basis of these focal points, the guiding research interest of the study is formulated as follows: the stage society functions as an interface between the fictional world of the farce and the lived reality of the audience. Analyzing the personnel and structure of the stage society illuminates its bridging function for the 'effectiveness' of the satirical and parodistic intention.
Research programmes bring together numerous actors with different backgrounds and domain expertise in individual or collaborative projects, which are, however, largely carried out independently of one another. Given that societal challenges such as global warming increasingly demand cross-disciplinary solutions, networking and transfer processes within research programmes should receive more attention. Implementing accompanying research (Begleitforschung) is one way to address this demand. Accompanying research differs in its approach and objectives from the 'usual' projects and can take different theoretical pure forms. Put briefly, it acts either (1) as a complement to the individual research projects in terms of content, (2) on a meta level with a focus on the processes within the research programme, or (3) as an integrating, synthesizing instance for which the networking of the projects in the research programme and knowledge transfer are central. Although these forms can be separated analytically into theoretical pure forms, in practice a mix of all three usually emerges.
In this context, the present dissertation ties in with previous work on the methodological toolkit of accompanying research as a complementary study and focuses on the following questions: On what basis can the actors in a research programme be networked so that they are brought together effectively? Which further methodological elements should build on this in order to generate added value that exceeds the sum of the programme's individual results? What form can such added value take, and what role does accompanying research play in it?
The first methodological element is the collection and preparation of an initial database. Keyword extraction based on semantic analysis of project-related texts yields a comprehensive database of the research projects' contents. The keywords are structured in a keyword catalogue using a controlled vocabulary and, in parallel, assigned back to the individual projects, which thereby acquire thematic attributes. To make thematic overlaps between research projects visible and interpretable, the second element comprises visualization approaches. The information is transferred into a network graph that relates all projects involved in the research programme as well as the identified keywords to one another. This makes it possible, for example, to see which research projects are thematically 'closer' to each other than others. Exactly this information is used in the third methodological element as a planning basis for different event formats such as working conferences or transfer workshops. The fourth methodological element is synthesis building. This is a process that spans the entire period of collaboration between the accompanying research and the other research projects, since the synthesis incorporates, among other things, the projects' interim, partial, and final results as well as content from the various events. Ultimately, this fourth element is also the means to derive recommendations for future programmes from the integrated and synthesized information.
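The networking step sketched above can be illustrated in a few lines: given a hypothetical assignment of controlled-vocabulary keywords to projects, pairwise thematic closeness can be scored by keyword overlap. The project names, keywords, and the Jaccard measure below are illustrative assumptions, not the programme's actual catalogue:

```python
# Sketch of keyword-based project similarity (hypothetical data).
from itertools import combinations

projects = {
    "P1": {"soil", "nitrogen", "emissions"},
    "P2": {"soil", "carbon", "emissions"},
    "P3": {"irrigation", "drought"},
}

def jaccard(a, b):
    """Overlap of two keyword sets: |A & B| / |A | B|."""
    return len(a & b) / len(a | b)

# Pairwise thematic similarity -> edge weights of a project network graph
edges = {
    (p, q): jaccard(projects[p], projects[q])
    for p, q in combinations(sorted(projects), 2)
}

for (p, q), w in sorted(edges.items(), key=lambda kv: -kv[1]):
    print(p, q, round(w, 2))   # P1 and P2 share two of four keywords -> 0.5
```

In the actual workflow, such similarity scores would feed the network graph and the planning of event formats; here they simply rank project pairs by shared keywords.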
The methodological elements were developed during the ongoing accompanying-research project KlimAgrar, which serves as the case study of this dissertation and whose background in climate mitigation and climate adaptation in agriculture is explained in detail in the text.
This cumulative dissertation consists of three empirical investigations based on three separate data collections dealing with the phenomenon of negotiations in audit processes, which are combined in two research articles. In the first study, I examine internal auditors’ views on negotiation interactions with auditees. My research is based on 23 semi-structured interviews with internal auditors (14 in-house and 9 external service providers) to gain insight into when and about what (RQ1), why (RQ2), and how (RQ3) they negotiate with auditees. By adapting the Gibbins et al. (2001) negotiation framework to the context of internal auditing, I obtain specific process elements (negotiation issue, auditor-auditee process, and outcome) and context elements that form the basis of my analyses. Through the additional use of inductive procedures, I conclude that internal auditors negotiate when they face professional and non-professional resistance from auditees during the audit process (RQ1). This resistance occurs across a variety of audit types and audit issues. Internal auditors choose negotiations to overcome this resistance primarily out of functional interest, as they cannot simply instruct auditees to acknowledge the findings and implement the required actions (RQ2). I find that the implementation of the required actions is the main goal of the respondents, which is also an important quality factor for internal auditing. Although few respondents interpret these interactions with auditees as negotiations, all respondents use a variety of negotiation strategies to create value (e.g., cost cutting, logrolling, and bridging) and claim value (e.g., positional commitment and threats) (RQ3). Finally, I contribute to empirical research on internal audit negotiations and internal audit quality by shedding light on the black box of internal auditor-auditee interactions.
The second study consists of two experiments that examine the effects of tax auditors’ emotion expressions during tax audit negotiations. In the first experiment, we demonstrate that auditors expressing anger obtain more concessions from taxpayers than auditors expressing happiness. This reveals that taxpayers interpret auditors’ emotions strategically and do not respond affectively. In the second experiment, we show that the experience with an auditor who expressed either happiness or anger reduces taxpayers’ post-audit compliance compared to the experience with an emotionally neutral auditor. Apparently, taxpayers use their experience with an emotional auditor to rationalize later noncompliance. Taken together, both experiments show the potentially detrimental effects of positive and negative emotion expressions by the auditor and point to the benefits of avoiding emotion expressions. We find that when auditors avoid emotion expressions this does not result in fewer concessions from taxpayers than when auditors express anger. However, when auditors avoid emotion expressions this leads to a significantly better evaluation of the taxpayer-auditor relationship and significantly reduces taxpayers’ post-audit noncompliance.
Natural gas hydrates are ice-like crystalline compounds containing water cavities that trap natural gas molecules like methane (CH4), which is a potent greenhouse gas with high energy density. The Mallik site at the Mackenzie Delta in the Canadian Arctic contains a large volume of technically recoverable CH4 hydrate beneath the base of the permafrost. Understanding how the sub-permafrost hydrate is distributed can aid in searching for the ideal locations for deploying CH4 production wells to develop the hydrate as a cleaner alternative to crude oil or coal. Globally, atmospheric warming driving permafrost thaw results in sub-permafrost hydrate dissociation, releasing CH4 into the atmosphere to intensify global warming. It is therefore crucial to evaluate the potential risk of hydrate dissociation due to permafrost degradation. To quantitatively predict hydrate distribution and volume in complex sub-permafrost environments, a numerical framework was developed to simulate sub-permafrost hydrate formation by coupling the equilibrium CH4-hydrate formation approach with a fluid flow and transport simulator (TRANSPORTSE). In addition, integrating the equations of state describing ice melting and forming with TRANSPORTSE enabled this framework to simulate the permafrost evolution during the sub-permafrost hydrate formation. A modified sub-permafrost hydrate formation mechanism for the Mallik site is presented in this study. According to this mechanism, the CH4-rich fluids have been vertically transported since the Late Pleistocene from deep overpressurized zones via geologic fault networks to form the observed hydrate deposits in the Kugmallit–Mackenzie Bay Sequences. The established numerical framework was verified by a benchmark of hydrate formation via dissolved methane. Model calibration was performed based on laboratory data measured during a multi-stage hydrate formation experiment undertaken in the LArge scale Reservoir Simulator (LARS). 
The temporal and spatial evolution of the simulated and observed hydrate saturation matched well, validating the LARS model. This laboratory-scale model was then upscaled to a field-scale 2D model generated from a seismic transect across the Mallik site. The simulation confirmed the feasibility of the introduced sub-permafrost hydrate formation mechanism by demonstrating consistency with field observations. The 2D model was extended to the first 3D model of the Mallik site by using well-logs and seismic profiles, to investigate the geologic controls on the spatial hydrate distribution. An assessment of this simulation revealed the hydraulic contribution of each geological element, including relevant fault networks and sedimentary sequences. Based on the simulation results, the observed heterogeneous distribution of sub-permafrost hydrate resulted from the combined factors of the source-gas generation rate, subsurface temperature, and the permeability of geologic elements. Analysis of the results revealed that the Mallik permafrost was heated by 0.8–1.3 °C, induced by the global temperature increase of 0.44 °C and accelerated by Arctic amplification from the early 1970s to the mid-2000s. This study presents a numerical framework that can be applied to study the formation of the permafrost-hydrate system from laboratory to field scales, across timescales ranging from hours to millions of years. Overall, these simulations deepen the knowledge about the dominant factors controlling the spatial hydrate distribution in sub-permafrost environments with heterogeneous geologic elements. The framework can support improving the design of hydrate formation experiments and provide valuable contributions to future industrial hydrate exploration and exploitation activities.
The East African Rift System (EARS) is a prime example of active tectonics, providing opportunities to examine the stages of continental faulting and landscape evolution. Despite the fundamental importance of neotectonics, seismotectonic research on the southwest extension of the EARS has been scarce. Our first study area is located between the Northern Province of Zambia and the southeastern Katanga Province of the Democratic Republic of Congo. Lakes Mweru and Mweru Wantipa are part of the southwest extension of the EARS. Fault analysis reveals that, since the Miocene, movements along the active Mweru-Mweru Wantipa Fault System (MMFS) have been largely responsible for the reorganization of the landscape and the drainage patterns across the southwestern branch of the EARS. To investigate the spatial and temporal patterns of fluvial-lacustrine landscape development, we determined in-situ cosmogenic 10Be and 26Al in a total of twenty-six quartzitic bedrock samples collected from knickpoints across the Mporokoso Plateau (south of Lake Mweru) and the eastern part of the Kundelungu Plateau (north of Lake Mweru). Samples from the Mporokoso Plateau and close to the MMFS provide evidence of temporary burial. By contrast, surfaces located far from the MMFS appear to have remained uncovered since their initial exposure, as they show consistent 10Be and 26Al exposure ages ranging up to ~830 ka. Reconciling the observed burial patterns with morphotectonic and stratigraphic analysis reveals the existence of an extensive paleo-lake during the Pleistocene. Through hypsometric analyses of the dated knickpoints, the potential maximum water level of the paleo-lake is constrained to ~1200 m asl (present lake level: 917 m asl).
High denudation rates (up to ~40 mm ka-1) along the eastern Kundelungu Plateau suggest that footwall uplift, resulting from normal faulting, caused river incision, possibly controlling paleo-lake drainage. The lake level then fell gradually, reaching its current level at ~350 ka.
Parallel to the MMFS in the north, the Upemba Fault System (UFS) extends across the southeastern Katanga Province of the Democratic Republic of Congo. This part of our research focuses on the geomorphological behavior of the Kiubo Waterfalls, the currently active knickpoint of the Lufira River, which flows into the Upemba Depression. Eleven bedrock samples were collected along the Lufira River and its tributary, the Luvilombo River. In-situ cosmogenic 10Be and 26Al were used to constrain the K constant of the Stream Power Law equation, which allowed us to calculate the knickpoint retreat rate of the Kiubo Waterfalls at ~0.096 m a-1. Combining the calculated retreat rate with DNA sequencing of fish populations, we present extrapolation models and estimate the location of the onset of the Kiubo Waterfalls, revealing its connection to the seismicity of the UFS.
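As a rough illustration of the stream power framework mentioned above: for the common choice n = 1, the horizontal knickpoint celerity reduces to dx/dt = K·A^m. The numbers below are hypothetical stand-ins chosen only to reproduce the order of magnitude of the reported ~0.096 m a-1; they are not the study's calibrated values:

```python
# Toy stream-power knickpoint retreat calculation (all values assumed).
K = 1.0e-5   # erodibility constant, hypothetical, from cosmogenic calibration
m = 0.5      # drainage-area exponent, a common choice
A = 9.2e7    # upstream drainage area [m^2], hypothetical

retreat_rate = K * A**m   # celerity dx/dt for n = 1, in m per year
print(f"knickpoint retreat rate ~ {retreat_rate:.3f} m/a")
```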
Vereine als Gefahr
(2023)
The danger posed by criminal or extremist groups increases with their degree of organization. A stocktaking analysis of association law shows that bans on associations and their insignia remain powerful preemptive measures against new decentralized or multi-level organizations. In developing these instruments further, Sandra Lukosek focuses on the interpretation and mutual attribution of ban-relevant conduct of individual members to the association, and on its extension to equal-ranking sister associations. Using weapons law, she conversely identifies association membership as a suitable defining characteristic of the members. She examines different types of associations whose scopes of protection under the freedom of association must be differentiated: a ban on a religious Islamist-extremist association differs from a ban on an outlaw motorcycle club or a 'Reichsbürger' association. The author engages with the challenges faced by security authorities and identifies practicable approaches for reforming association law.
Air pollution has been a persistent global problem for the past several hundred years. While some industrialized nations have improved their air quality through stricter regulation, others have experienced declines as they rapidly industrialize. The WHO’s 2021 update of its recommended air pollution limit values reflects the substantial impacts of pollutants such as NO2 and O3 on human health, as recent epidemiological evidence suggests long-term health effects even at low concentrations. Alongside these developments in our understanding of air pollution's health impacts, low-cost sensors (LCS) have been taken up by both academia and industry as a new method for measuring air pollution. Due primarily to their lower cost and smaller size, they can be used in a variety of applications, including higher-resolution measurement networks, source identification, and measurements of air pollution exposure. While significant efforts have been made to accurately calibrate LCS against reference instrumentation using various statistical models, accuracy and precision remain limited by variable sensor sensitivity. Furthermore, standard calibration procedures still do not exist, and most proprietary calibration algorithms are black boxes, inaccessible to the public. This work seeks to expand the knowledge base on LCS in three ways: 1) by developing an open-source calibration methodology; 2) by deploying LCS at high spatial resolution in urban environments to test their capability to measure microscale changes in urban air pollution; and 3) by connecting LCS deployments with the implementation of local mobility policies to provide policy advice on the resulting changes in air quality.
In a first step, it was found that LCS can be consistently calibrated with good performance against reference instrumentation using seven general steps: 1) assessing raw data distribution, 2) cleaning data, 3) flagging data, 4) model selection and tuning, 5) model validation, 6) exporting final predictions, and 7) calculating associated uncertainty. By emphasizing the need for consistent reporting of details at each step, most crucially on model selection, validation, and performance, this work advances the effort toward standardizing calibration methodologies. In addition, the open-source publication of code and data for the seven-step methodology is a step toward reforming the largely black-box nature of LCS calibrations.
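A minimal sketch of the cleaning, fitting, and validation steps above, using a synthetic sensor and a plain least-squares model; real calibrations in this line of work use richer statistical models and co-located reference measurements:

```python
# Toy LCS calibration: synthetic raw signal vs. reference NO2 (assumed data).
import numpy as np

rng = np.random.default_rng(0)

reference = rng.uniform(5, 60, 200)                  # reference instrument
raw = 0.8 * reference + 4.0 + rng.normal(0, 2, 200)  # biased, noisy sensor

# Steps 2-3: clean and flag (here: drop physically impossible negatives)
mask = raw > 0
raw, reference = raw[mask], reference[mask]

# Step 4: fit a calibration model on a training split
train = np.arange(len(raw)) < 150
slope, intercept = np.polyfit(raw[train], reference[train], 1)

# Step 5: validate on held-out data
pred = slope * raw[~train] + intercept
rmse = float(np.sqrt(np.mean((pred - reference[~train]) ** 2)))
print(f"hold-out RMSE: {rmse:.2f}")
```

The hold-out RMSE reported at the end is the kind of validation metric whose consistent reporting the seven-step methodology calls for.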
With a transparent and reliable calibration methodology established, LCS were then deployed in various street canyons between 2017 and 2020. Two types of LCS, metal oxide (MOS) and electrochemical (EC), were evaluated for their performance in capturing expected patterns of urban NO2 and O3 pollution. Calibrated concentrations from both MOS and EC sensors matched the general diurnal patterns of NO2 and O3 pollution measured with reference instruments. While MOS sensors proved unreliable for discerning differences among measurement locations within the urban environment, concentrations measured with calibrated EC sensors matched expectations from modelling studies of NO2 and O3 distribution in street canyons. It was therefore concluded that LCS are appropriate for measuring urban air quality, including for assisting urban-scale air pollution model development, and can reveal new insights into air pollution in urban environments.
To achieve the last goal of this work, two measurement campaigns were conducted in connection with the implementation of three mobility policies in Berlin. The first involved the construction of a pop-up bike lane on Kottbusser Damm in response to the COVID-19 pandemic, the second surrounded the temporary implementation of a community space on Böckhstrasse, and the last was focused on the closure of a portion of Friedrichstrasse to all motorized traffic. In all cases, measurements of NO2 were collected before and after the measure was implemented to assess changes in air quality resultant from these policies. Results from the Kottbusser Damm experiment showed that the bike-lane reduced NO2 concentrations that cyclists were exposed to by 22 ± 19%. On Friedrichstrasse, the street closure reduced NO2 concentrations to the level of the urban background without worsening the air quality on side streets. These valuable results were communicated swiftly to partners in the city administration responsible for evaluating the policies’ success and future, highlighting the ability of LCS to provide policy-relevant results.
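The before/after comparison reported above can be illustrated with synthetic numbers (these are assumed values, not the campaign data): the relative NO2 reduction is 1 - after/before, with a simple first-order error propagation for the ratio:

```python
# Back-of-envelope before/after NO2 comparison (all values hypothetical).
import math

before_mean, before_sd = 38.0, 5.0   # µg/m³ before the measure, assumed
after_mean, after_sd = 29.6, 5.0     # µg/m³ after the measure, assumed

reduction = 1 - after_mean / before_mean

# First-order error propagation for the ratio r = after/before
r = after_mean / before_mean
r_sd = r * math.sqrt((after_sd / after_mean) ** 2 + (before_sd / before_mean) ** 2)

print(f"reduction: {100 * reduction:.0f}% ± {100 * r_sd:.0f}%")
```

With these assumed inputs the sketch yields a reduction near 22%, echoing the order of magnitude of the Kottbusser Damm result, though the study's uncertainty estimate rests on its actual measurement distributions.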
As a new technology, much is still to be learned about LCS and their value to academic research in the atmospheric sciences. Nevertheless, this work has advanced the state of the art in several ways. First, it contributed a novel open-source calibration methodology that can be used by LCS end-users for various air pollutants. Second, it strengthened the evidence base on the reliability of LCS for measuring urban air quality, finding through novel deployments in street canyons that LCS can be used at high spatial resolution to understand microscale air pollution dynamics. Last, it is the first of its kind to connect LCS measurements directly with mobility policies to understand their influence on local air quality, yielding policy-relevant findings valuable for decision-makers. It serves as an example of the potential for LCS to expand our understanding of air pollution at various scales, as well as their ability to serve as valuable tools in transdisciplinary research.
Rainfall-triggered landslides are a globally occurring hazard that cause several thousand fatalities per year on average and lead to economic damages by destroying buildings and infrastructure and blocking transportation networks. For people living and governing in susceptible areas, knowing not only where, but also when landslides are most probable is key to inform strategies to reduce risk, requiring reliable assessments of weather-related landslide hazard and adequate warning. Taking proper action during high hazard periods, such as moving to higher levels of houses, closing roads and rail networks, and evacuating neighborhoods, can save lives. Nevertheless, many regions of the world with high landslide risk currently lack dedicated, operational landslide early warning systems.
The mounting availability of temporal landslide inventory data in some regions has increasingly enabled data-driven approaches to estimate landslide hazard on the basis of rainfall conditions. In other areas, however, such data remains scarce, calling for appropriate statistical methods to estimate hazard with limited data. The overarching motivation for this dissertation is to further our ability to predict rainfall-triggered landslides in time in order to expand and improve warning. To this end, I applied Bayesian inference to probabilistically quantify and predict landslide activity as a function of rainfall conditions at spatial scales ranging from a small coastal town, to metropolitan areas worldwide, to a multi-state region, and temporal scales from hourly to seasonal. This thesis is composed of three studies.
In the first study, I contributed to developing and validating statistical models for an online landslide warning dashboard for the small town of Sitka, Alaska, USA. We used logistic and Poisson regressions to estimate daily landslide probability and counts from an inventory of only five reported landslide events and 18 years of hourly precipitation measurements at the Sitka airport. Drawing on community input, we established two warning thresholds for implementation in the dashboard, which uses observed rainfall and US National Weather Service forecasts to provide real-time estimates of landslide hazard.
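The modelling idea, estimating daily landslide probability from a rainfall predictor via logistic regression, can be sketched on synthetic data. The rainfall distribution, coefficients, and the 120 mm forecast below are illustrative assumptions; the dashboard's actual models, priors, and thresholds are richer than this toy fit:

```python
# Toy logistic regression: landslide probability vs. 3-day rainfall (synthetic).
import numpy as np

rng = np.random.default_rng(1)
rain = rng.gamma(2.0, 15.0, 500)                        # 3-day precipitation [mm]
y = rng.random(500) < 1 / (1 + np.exp(-(0.08 * rain - 6)))  # simulated landslide days

# Standardize the predictor, then fit w, b by gradient descent on the log-loss
x = (rain - rain.mean()) / rain.std()
w = b = 0.0
for _ in range(3000):
    p = 1 / (1 + np.exp(-(w * x + b)))
    w -= 0.1 * np.mean((p - y) * x)
    b -= 0.1 * np.mean(p - y)

# Daily landslide probability for a hypothetical 120 mm forecast
x120 = (120 - rain.mean()) / rain.std()
p120 = 1 / (1 + np.exp(-(w * x120 + b)))
print(f"P(landslide | 120 mm in 3 days) ~ {p120:.2f}")
```

In an operational dashboard, such fitted probabilities would be compared against warning thresholds fed by observed rainfall and forecasts.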
In the second study, I estimated rainfall intensity-duration thresholds for shallow landsliding for 26 cities worldwide and a global threshold for urban landslides. I found that landslides in urban areas occurred at rainfall intensities that were lower than previously reported global thresholds, and that 31% of urban landslides were triggered during moderate rainfall events. However, landslides in cities with widely varying climates and topographies were triggered above similar critical rainfall intensities: thresholds for 77% of cities were indistinguishable from the global threshold, suggesting that urbanization may harmonize thresholds between cities, overprinting natural variability. I provide a baseline threshold that could be considered for warning in cities with limited landslide inventory data.
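An intensity-duration threshold of the form I = a·D^b is typically fit in log-log space. The sketch below fits the central trend of synthetic triggering events; real thresholds are usually drawn as lower envelopes or low percentiles, and the exponents here are illustrative only:

```python
# Toy intensity-duration fit, I = a * D**b, on synthetic triggering rainfall.
import numpy as np

rng = np.random.default_rng(2)
duration = rng.uniform(1, 72, 60)                                # event duration [h]
intensity = 8.0 * duration**-0.6 * rng.lognormal(0.3, 0.2, 60)   # mean intensity [mm/h]

# Ordinary least squares on log-transformed data gives b and log(a)
b_fit, log_a = np.polyfit(np.log(duration), np.log(intensity), 1)
a_fit = np.exp(log_a)
print(f"I = {a_fit:.1f} * D^{b_fit:.2f}")
```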
In the third study, I investigated seasonal landslide response to annual precipitation patterns in the Pacific Northwest region, USA by using Bayesian multi-level models to combine data from five heterogeneous landslide inventories that cover different areas and time periods. I quantitatively confirmed a distinctly seasonal pattern of landsliding and found that peak landslide activity lags the annual precipitation peak. In February, at the height of the landslide season, landslide intensity for a given amount of monthly rainfall is up to ten times higher than at the season onset in November, underlining the importance of antecedent seasonal hillslope conditions.
Together, these studies contributed actionable, objective information for landslide early warning and examples for the application of Bayesian methods to probabilistically quantify landslide hazard from inventory and rainfall data.
Residential segregation is a widespread phenomenon that can be observed in almost every major city. In these urban areas, residents with different ethnic or socioeconomic backgrounds tend to form homogeneous clusters. In Schelling’s classical segregation model, two types of agents are placed on a grid. An agent is content with its location if the fraction of its neighbors that have the same type as the agent is at least 𝜏, for some 0 < 𝜏 ≤ 1. Discontent agents simply swap their location with a randomly chosen other discontent agent or jump to a random empty location. The model gives a coherent explanation of how clusters can form even if all agents are tolerant, i.e., if they agree to live in mixed neighborhoods. For segregation to occur, all it needs is a slight bias towards agents preferring similar neighbors.
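The jump variant of the classical model described above can be sketched in a few lines (a 1D residential area for brevity; grid size, type counts, and 𝜏 = 0.5 are illustrative choices):

```python
# Minimal Schelling jump dynamics on a 1D array with vacancies.
import random

random.seed(0)
TAU = 0.5
cells = ["A"] * 25 + ["B"] * 25 + [None] * 10   # residential area with empty cells
random.shuffle(cells)

def content(i):
    """An agent is content if >= TAU of its occupied neighbors share its type."""
    nbrs = [cells[j] for j in (i - 1, i + 1) if 0 <= j < len(cells) and cells[j]]
    return True if not nbrs else sum(n == cells[i] for n in nbrs) / len(nbrs) >= TAU

for _ in range(5000):
    unhappy = [i for i, c in enumerate(cells) if c and not content(i)]
    if not unhappy:
        break                                   # everyone content: a stable state
    i = random.choice(unhappy)
    j = random.choice([k for k, c in enumerate(cells) if c is None])
    cells[j], cells[i] = cells[i], None         # discontent agent jumps to a vacancy

segregated = sum(content(i) for i, c in enumerate(cells) if c)
print(f"content agents: {segregated}/50")
```

Even with this tolerant 𝜏, the random jump dynamics tend to produce homogeneous runs of same-type agents, which is precisely the phenomenon the thesis then revisits game-theoretically.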
Although the model is well studied, previous research focused on a random process point of view. However, it is more realistic to assume instead that the agents strategically choose where to live. We close this gap by introducing and analyzing game-theoretic models of Schelling segregation, where rational agents strategically choose their locations.
As the first step, we introduce and analyze a generalized game-theoretic model that allows more than two agent types and more general underlying graphs modeling the residential area. We introduce different versions of Swap and Jump Schelling Games. Swap Schelling Games assume that every vertex of the underlying graph serving as a residential area is occupied by an agent and pairs of discontent agents can swap their locations, i.e., their occupied vertices, to increase their utility. In contrast, for the Jump Schelling Game, we assume that there exist empty vertices in the graph and agents can jump to these vacant vertices if this increases their utility. We show that the number of agent types as well as the structure of the underlying graph heavily influence the dynamic properties and the tractability of finding an optimal strategy profile.
As a second step, we significantly deepen these investigations for the swap version with 𝜏 = 1 by studying the influence of the underlying topology modeling the residential area on the existence of equilibria, the Price of Anarchy, and the dynamic properties. Moreover, we restrict the movement of agents locally. As a main takeaway, we find that both aspects influence the existence and the quality of stable states.
Furthermore, again for the swap model and motivated by sociological surveys, we ask the same core game-theoretic questions for non-monotone single-peaked utility functions instead of monotone ones, i.e., utility functions that are not monotone in the fraction of same-type neighbors. Our results clearly show that moving from monotone to non-monotone utilities yields novel structural properties and different results in terms of the existence and quality of stable states.
In the last part, we introduce an agent-based saturated open-city variant, the Flip Schelling Process, in which agents, based on the predominant type in their neighborhood, decide whether to change their types. We provide a general framework for analyzing the influence of the underlying topology on residential segregation and investigate the probability that an edge is monochrome, i.e., that both incident vertices have the same type, on random geometric and Erdős–Rényi graphs. For random geometric graphs, we prove the existence of a constant c > 0 such that the expected fraction of monochrome edges after the Flip Schelling Process is at least 1/2 + c. For Erdős–Rényi graphs, we show the expected fraction of monochrome edges after the Flip Schelling Process is at most 1/2 + o(1).
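A one-round sketch of the Flip Schelling Process on an Erdős–Rényi graph follows; note that the 1/2 + o(1) bound above is asymptotic, while this small instance only illustrates the mechanics (uniform random types, then a simultaneous majority flip, ties keeping the own type):

```python
# One synchronous round of the Flip Schelling Process on G(n, p).
import random

random.seed(3)
n, p = 400, 0.05
types = [random.choice((0, 1)) for _ in range(n)]
adj = [[] for _ in range(n)]
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < p:
            adj[u].append(v)
            adj[v].append(u)

def flip(types):
    """All vertices simultaneously adopt the majority type of their neighborhood."""
    new = list(types)
    for u in range(n):
        same = sum(types[v] == types[u] for v in adj[u])
        if adj[u] and 2 * same < len(adj[u]):   # strict minority -> flip
            new[u] = 1 - types[u]
    return new

def monochrome_fraction(types):
    """Fraction of edges whose two endpoints have the same type."""
    mono = total = 0
    for u in range(n):
        for v in adj[u]:
            if v > u:
                total += 1
                mono += types[u] == types[v]
    return mono / total

before = monochrome_fraction(types)
after = monochrome_fraction(flip(types))
print(f"monochrome edges: {before:.3f} -> {after:.3f}")
```

Before the flip, roughly half of the edges are monochrome, as expected for independent uniform types; the quantities of interest in the thesis are the expected monochrome fractions after the process on random geometric versus Erdős–Rényi graphs.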