Plants located adjacent to agricultural fields are important for maintaining biodiversity in semi-natural landscapes. To avoid undesired impacts on these plants from herbicide application on arable fields, regulatory risk assessments are conducted prior to registration to ensure that proposed uses of plant protection products do not present an unacceptable risk. The current risk assessment approach for these non-target terrestrial plants (NTTPs) examines impacts at the individual level as a surrogate for protecting the plant community, owing to the inherent difficulties of directly assessing population- or community-level impacts. However, modelling approaches are suitable higher-tier tools for upscaling individual-level effects to the community level. IBC-grass is a sophisticated plant community model that has already been applied in several studies. However, as console application software, it was not deemed sufficiently user-friendly for risk managers and assessors to operate conveniently without prior expertise in ecological models. Here, we present a user-friendly, open-source graphical user interface (GUI) for the application of IBC-grass in regulatory herbicide risk assessment. It facilitates the use of the plant community model for predicting long-term impacts of herbicide applications on NTTP communities. The GUI offers two options for integrating herbicide impacts: (1) dose responses based on current standard experiments (according to testing guidelines) and (2) specific effect intensities. Both options represent suitable higher-tier approaches for future risk assessments of NTTPs as well as for research on the ecological relevance of effects.
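Option (1) above refers to dose responses from standard guideline tests. As a rough illustration only (not part of the paper; the function and parameter names are hypothetical, and the actual inputs of the IBC-grass GUI are described in the article), such test data are commonly summarized by a four-parameter log-logistic curve that maps an application rate onto an expected endpoint:

```python
def log_logistic(dose, lower, upper, ec50, slope):
    """Four-parameter log-logistic dose-response curve, the standard
    model for guideline herbicide tests: returns the expected endpoint
    (e.g. biomass relative to control) at a given dose."""
    if dose <= 0:
        return upper  # untreated control: no effect
    return lower + (upper - lower) / (1.0 + (dose / ec50) ** slope)

# Illustrative biomass response (1.0 = control level; EC50 set to 1.0
# in arbitrary dose units, slope of 2 -- all values are made up)
for d in (0.0, 0.5, 1.0, 2.0, 8.0):
    print(d, round(log_logistic(d, lower=0.0, upper=1.0, ec50=1.0, slope=2.0), 3))
```

A curve of this kind (or directly specified effect intensities, option 2) would translate an application rate into a per-species effect strength that a community model can apply; how the GUI itself does this is defined in the article, not here.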
Radical reactions have found many applications in carbohydrate chemistry, especially in the construction of carbon–carbon bonds. The formation of carbon–heteroatom bonds has been less intensively studied. This mini-review summarizes the efforts to add heteroatom radicals to unsaturated carbohydrates such as endo-glycals. Starting from early examples developed more than 50 years ago, the importance of such reactions for carbohydrate chemistry and recent applications are discussed. After a short introduction, the mini-review is divided into sub-chapters according to the heteroatoms halogen, nitrogen, phosphorus, and sulfur. The mechanisms of radical generation by chemical or photochemical processes, and the subsequent reactions of the radicals at the 1-position, are discussed. This mini-review cannot cover all aspects of heteroatom-centered radicals in carbohydrate chemistry, but it should provide an overview of the various strategies and future perspectives.
In present-day German, the motion verb gehen ('to go') occurs in two variants: alongside the full-verb use, there is a semi-auxiliary use of gehen with aspectual meaning. This assumption is contested in the literature on present-day German. The present contribution argues, on the basis of data from present-day German, that gehen in combination with an infinitive must indeed be treated as a semi-auxiliary verb with aspectual meaning. The analysis of data from the history of German, from Old High German to present-day German, provides the basis for identifying individual stages in the history of the semi-auxiliary verb gehen that indicate different degrees of its auxiliarization. The alternation of the infinite complement between present participle and infinitive, still observable in Middle High German, can be traced back to the phonologically conditioned ambiguity of infinite complements, which ultimately led to the loss of the participle as a verbal category in German.
The use of monoclonal antibodies is ubiquitous in science and biomedicine, but the generation and validation of antibodies is nevertheless complicated and time-consuming. To address these issues, we developed a novel selective technology based on an artificial cell surface construct by which secreted antibodies are connected to the corresponding hybridoma cell when they possess the desired antigen specificity. Furthermore, the system enables the selection of desired isotypes and the screening for potential cross-reactivities in the same context. For the design of the construct, we combined the transmembrane domain of the EGF receptor with a hemagglutinin epitope and a biotin acceptor peptide and performed a transposon-mediated transfection of myeloma cell lines. The stably transfected myeloma cell line was used for the generation of hybridoma cells, and an antigen- and isotype-specific screening method was established. The system has been validated for globular protein antigens as well as for haptens and enables fast, early-stage selection and validation of monoclonal antibodies in one step.
The increasing application of intersectionality to the psychological study of identity development raises questions regarding how we as researchers construct and operationalize social identity categories, as well as how we best capture and address systems of oppression and privilege within our work. In the continental European context, the use of the intersectionality paradigm raises additional issues, since “race” was officially removed from the vernacular following the atrocities of WWII, yet racialized oppression continues to occur at every level of society. Within psychological research, participants are often divided into those with and without “migration background,” which can reiterate inequitable norms of national belonging while washing over salient lived experiences in relation to generation status, citizenship, religion, gender, and the intersection between these and other social locations. Although discrimination is increasingly examined in identity development research, rarely are the history and impact of colonialism and related socio-historical elements acknowledged. In the current paper, we aim to address these issues by reviewing previous research and discussing theoretical and practical possibilities for the future. In doing so, we delve into the problems of trading in one static social identity category (e.g., “race”) for another (e.g., “migration background/migrant”) without examining the power structures inherent in the creation of these top-down categories, or the lived experiences of those navigating what it means to be marked as a racialized Other. Focusing primarily on contextualized ethno-cultural identity development, we discuss relevant examples from the continental European context, highlighting research gaps, points for improvement, and best practices.
The purpose of the present study was to investigate the role of gender and gender stereotype traits (masculinity, femininity) in cyber victimization behaviors (cyber relational victimization, cyber verbal victimization, hacking) through different technologies (mobile phones, gaming consoles, social networking sites). A total of 456 8th graders (226 females; M age = 13.66, SD = 0.41) from two midwestern middle schools in the United States were included in this study. They completed questionnaires on their endorsement of masculine and feminine traits and self-reported cyber victimization through different technologies. The findings revealed main effects of type of cyber victimization for boys and of technology for girls. In particular, boys with feminine traits experienced the most victimization by cyber verbal aggression, cyber relational aggression, and hacking compared to the other groups of boys. Girls with feminine traits experienced the most cyber victimization through social networking sites, gaming consoles, and mobile phones in comparison to the other groups of girls; they also reported more cyber relational and cyber verbal victimization through mobile phones and social networking sites, as well as more hacking via social networking sites. Such findings underscore the importance of considering gender stereotype traits, types of victimization, and technologies when examining cyber victimization.
Many institutions struggle to tap into the potential of their large archives of radar reflectivity: these data are often affected by miscalibration, yet the bias is typically unknown and temporally volatile. Still, relative calibration techniques can be used to correct the measurements a posteriori. For that purpose, the usage of spaceborne reflectivity observations from the Tropical Rainfall Measuring Mission (TRMM) and Global Precipitation Measurement (GPM) platforms has become increasingly popular: the calibration bias of a ground radar (GR) is estimated from its average reflectivity difference to the spaceborne radar (SR). Recently, Crisologo et al. (2018) introduced a formal procedure to enhance the reliability of such estimates: each match between SR and GR observations is assigned a quality index, and the calibration bias is inferred as a quality-weighted average of the differences between SR and GR. The relevance of quality was exemplified for the Subic S-band radar in the Philippines, which is greatly affected by partial beam blockage.
The present study extends the concept of quality-weighted averaging by accounting for path-integrated attenuation (PIA) in addition to beam blockage. This extension becomes vital for radars that operate at C or X band. Correspondingly, the study setup includes a C-band radar that substantially overlaps with the S-band radar. Based on the extended quality-weighting approach, we retrieve, for each of the two ground radars, a time series of calibration bias estimates from suitable SR overpasses. As a result of applying these estimates to correct the ground radar observations, the consistency between the ground radars in the region of overlap increased substantially. Furthermore, we investigated whether the bias estimates can be interpolated in time, so that ground radar observations can be corrected even in the absence of prompt SR overpasses. We found that a moving-average approach was most suitable for that purpose, although limited by the absence of explicit records of radar maintenance operations.
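The core of the quality-weighting idea described above can be sketched in a few lines. This is a simplified illustration with hypothetical variable names, not the authors' code (which rests on a considerably more involved SR–GR volume-matching procedure): each matched SR–GR sample pair carries a quality index in [0, 1], low where beam blockage or PIA degrades the GR measurement, and the calibration bias is the quality-weighted mean of the reflectivity differences.

```python
def weighted_calibration_bias(gr_dbz, sr_dbz, quality):
    """Quality-weighted mean of GR - SR reflectivity differences (dB).

    gr_dbz, sr_dbz : matched ground-radar / spaceborne-radar reflectivities
    quality        : per-match quality indices in [0, 1]
    """
    num = sum(q * (gr - sr) for gr, sr, q in zip(gr_dbz, sr_dbz, quality))
    den = sum(quality)
    if den == 0:
        raise ValueError("no usable matches (all quality weights are zero)")
    return num / den

# Matches affected by beam blockage or attenuation get low weights;
# here the third match is fully down-weighted (illustrative values)
gr = [28.0, 30.0, 25.0]
sr = [30.0, 31.0, 31.0]
q  = [1.0, 1.0, 0.0]
print(weighted_calibration_bias(gr, sr, q))  # prints -1.5
```

A negative estimate means the ground radar reads low relative to the SR reference; adding its absolute value to the GR observations would correct the bias.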
As an essential trace element, copper plays a pivotal role in physiological body functions. In fact, dysregulated copper homeostasis has been clearly linked to neurological disorders including Wilson and Alzheimer’s disease. Such neurodegenerative diseases are associated with progressive loss of neurons and thus impaired brain functions. However, the underlying mechanisms are not fully understood. Characterization of the element species and their subcellular localization is of great importance for uncovering cellular mechanisms. Recent research activities focus on the question of how copper contributes to the pathological findings. Cellular bioimaging of copper is essential to accomplish this objective. Besides information on the spatial distribution and chemical properties of copper, other essential trace elements can be localized in parallel. Highly sensitive, high-spatial-resolution techniques such as LA-ICP-MS, TEM-EDS, S-XRF and NanoSIMS are required for elemental mapping at the subcellular level. This review summarizes state-of-the-art techniques in the field of bioimaging. Their strengths and limitations are discussed with particular focus on potential applications for the elucidation of copper-related diseases. Based on such investigations, further information on cellular processes and mechanisms can be derived under physiological and pathological conditions. Bioimaging studies might enable the clarification of the role of copper in the context of neurodegenerative diseases and provide an important basis for developing therapeutic strategies for the reduction or even prevention of copper-related disorders and their pathological consequences.
Cleft exhaustivity
(2020)
In this dissertation a series of experimental studies are presented which demonstrate that the exhaustive inference of focus-background it-clefts in English and their cross-linguistic counterparts in Akan, French, and German is neither robust nor systematic. The inter-speaker and cross-linguistic variability is accounted for with a discourse-pragmatic approach to cleft exhaustivity, in which -- following Pollard & Yasavul 2016 -- the exhaustive inference is derived from an interaction with another layer of meaning, namely, the existence presupposition encoded in clefts.
Geomechanical and petrological characterisation of exposed slip zones, Alpine Fault, New Zealand
(2020)
The Alpine Fault is a large, plate-bounding, strike-slip fault extending along the north-western edge of the Southern Alps, South Island, New Zealand. It regularly accommodates large (MW > 8) earthquakes and has a high statistical probability of failure in the near future, i.e., is late in its seismic cycle. This pending earthquake and associated co-seismic landslides are expected to cause severe infrastructural damage that would affect thousands of people, so it presents a substantial geohazard. The interdisciplinary study presented here aims to characterise the fault zone’s 4D (space and time) architecture, because this provides information about its rheological properties that will enable better assessment of the hazard the fault poses.
The studies undertaken include field investigations of principal slip zone fault gouges exposed along strike of the fault, and subsequent laboratory analyses of these outcrop and additional borehole samples. These observations have provided new information on (I) characteristic microstructures down to the nanoscale that indicate which deformation mechanisms operated within the rocks, (II) mineralogical information that constrains the fault’s geomechanical behaviour and (III) geochemical compositional information that allows the influence of fluid-related alteration processes on material properties to be unraveled.
Results show that along-strike variations of fault rock properties such as microstructures and mineralogical composition are minor and/or do not substantially influence fault zone architecture. They furthermore provide evidence that the architecture of the fault zone, particularly its fault core, is more complex than previously considered, and also more complex than expected for this sort of mature fault cutting quartzofeldspathic rocks. In particular, our results strongly suggest that the fault has more than one principal slip zone, and that these form an anastomosing network extending into the basement below the cover of Quaternary sediments.
The observations detailed in this thesis highlight that two major processes, (I) cataclasis and (II) authigenic mineral formation, are the major controls on the rheology of the Alpine Fault. The velocity-weakening behaviour of its fault gouge is favoured by abundant nanoparticles promoting powder lubrication and grain rolling rather than frictional sliding. Wall-rock fragmentation is accompanied by co-seismic, fluid-assisted dilatancy that is recorded by calcite cementation. This mineralisation, along with authigenic formation of phyllosilicates, quickly alters the petrophysical fault zone properties after each rupture, restoring fault competency. Dense networks of anastomosing and mutually cross-cutting calcite veins and an intensively reworked gouge matrix demonstrate that strain repeatedly localised within the narrow fault gouge. Abundant undeformed euhedral chlorite crystallites, and calcite veins cross-cutting both fault gouge and the gravels that overlie basement on the fault’s footwall, provide evidence that authigenic phyllosilicate growth, fluid-assisted dilatancy and associated fault healing are active particularly close to the Earth’s surface in this fault zone.
Exposed Alpine Fault rocks are subject to intense weathering as a direct consequence of the abundant orogenic rainfall associated with the fault’s location at the base of the Southern Alps. Furthermore, fault rock rheology is substantially affected by shallow-depth conditions such as the juxtaposition of competent hanging wall fault rocks against poorly consolidated footwall sediments. This means the microstructural, mineralogical and geochemical properties of the exposed fault rocks may differ substantially from those at deeper levels, and thus are not characteristic of the majority of the fault rocks’ history. Examples are (I) frictionally weak smectites found within the fault gouges being artefacts formed under near-surface temperature conditions, imparting petrophysical properties that are not typical of most Alpine Fault rocks, (II) grain-scale dissolution resulting from subaerial weathering rather than from deformation by pressure-solution processes, and (III) fault gouge geometries being more complex than expected for their deeper counterparts.
The methodological approaches deployed in analyses of this and other fault zones, and the major results of this study, are finally discussed in order to contextualize slip zone investigations of fault zones and landslides. Like faults, landslides are major geohazards, which highlights the importance of characterising their geomechanical properties. Similarities between faults, especially those exposed to subaerial processes, and landslides include mineralogical composition and geomechanical behaviour. Together, this ensures failure occurs predominantly by cataclastic processes, although aseismic creep promoted by weak phyllosilicates is not uncommon. Consequently, the multidisciplinary approach commonly used to investigate fault zones may help to improve the understanding of landslide faulting processes and the assessment of their hazard potential.
Hugo Greßmann (1877-1927), as one of the leading representatives of the History of Religions School (Religionsgeschichtliche Schule), brought the history-of-religions method to prominence. This biographical study in the history of scholarship presents Greßmann’s history-of-religions programme, situates it in its scholarly-historical context, and demonstrates its significance for the subsequent history of the discipline.
The 12th Autumn Meeting on Patholinguistics (Herbsttreffen Patholinguistik), with the focus topic »Weg(e) mit dem Stottern: Therapie und Selbsthilfe für Kinder und Erwachsene« (Ways out of stuttering: therapy and self-help for children and adults), took place in Potsdam on 24 November 2018. The Autumn Meeting has been held annually since 2007 by the Verband für Patholinguistik e.V. (vpl). These proceedings contain the talks on the focus topic as well as contributions from the poster presentations on further topics from speech and language therapy research and practice.
Interactions involving biological interfaces such as lipid-based membranes are of paramount importance for all life processes. The same also applies to artificial interfaces to which biological matter is exposed, for example the surfaces of drug delivery systems or implants. This thesis deals with the two main types of interface interactions, namely (i) interactions between a single interface and the molecular components of the surrounding aqueous medium and (ii) interactions between two interfaces. Each type is investigated with regard to an important scientific problem in the fields of biotechnology and biology:
1.) The adsorption of proteins to surfaces functionalized with hydrophilic polymer brushes; a process of great biomedical relevance in the context of harmful foreign-body responses to implants and drug delivery systems.
2.) The influence of glycolipids on the interaction between lipid membranes; a hitherto largely unexplored phenomenon with potentially great biological relevance.
Both problems are addressed with the help of (quasi-)planar, lipid-based model surfaces in combination with X-ray and neutron scattering techniques, which yield detailed structural insights into the interaction processes. Regarding the adsorption of proteins to brush-functionalized surfaces, the first scenario considered is the exposure of the surfaces to human blood serum containing a multitude of protein species. Significant blood protein adsorption was observed despite the functionalization, which is commonly believed to act as a protein repellent. The adsorption consists of two distinct modes, namely strong adsorption to the brush grafting surface and weak adsorption to the brush itself. The second aspect investigated was the fate of brush-functionalized surfaces exposed to aqueous media containing immune proteins (antibodies) against the brush polymer, an emerging problem in current biomedical applications. Here, it was found that antibody binding cannot be prevented by varying the brush grafting density or the polymer length. This result motivates the search for alternative, strictly non-antigenic brush chemistries. With respect to the influence of glycolipids on the interaction between lipid membranes, this thesis focused on the glycolipids’ ability to crosslink and thereby tightly attract adjacent membranes. This adherence is due to preferential saccharide-saccharide interactions among the glycolipid headgroups, a phenomenon previously described only for lipids with special oligosaccharide motifs. Here, it was investigated how common this phenomenon is among glycolipids with a variety of more abundant saccharide headgroups. Glycolipid-induced membrane crosslinking was equally observed for some of these abundant glycolipid types, strongly suggesting that this under-explored phenomenon is potentially of great biological relevance.
In this dissertation, I describe the mechanisms involved in the establishment and evolution of magmatic plumbing systems. Magmatic plumbing systems play a key role in determining the style of volcanic activity, and recognizing their complexities can help in forecasting eruptions, especially within hazardous volcanic systems such as calderas. I explore the mechanisms of dike emplacement and the intrusion geometries that shape magmatic plumbing systems beneath caldera-like topographies, and how their characteristics relate to the precursory activity of a volcanic eruption. For this purpose, I use scaled laboratory models to study the effect of stress field reorientation, induced by caldera topography, on a propagating dike. I construct these models using solid gelatin to mimic the elastic properties of the Earth's crust, with a caldera on the surface. I inject water as the magma analog and track the evolution of the experiments through qualitative (geometry and stress evolution) and quantitative (displacement and strain computation) descriptions. The results show that a vertical dike deviates towards and outside of the caldera-like margin due to stress field reorientation beneath the caldera-like topography. The propagating intrusion forms a circumferential eruptive dike when the caldera-like structure is small, whereas a cone sheet develops beneath a large caldera-like topography.
To corroborate the results obtained from the experimental models, this thesis also presents a case study utilizing seismic monitoring data from the unrest period of the 2015 phreatic eruption of Lascar volcano. Lascar has a crater with small-scale caldera-like topography and exhibited a long-lasting anomalous evolution of the number of long-period (LP) events preceding the 2015 eruption. I apply seismic techniques to constrain the hypocentral locations of LP events and characterize their spatial distribution, obtaining an image of Lascar's plumbing system. The shallow hypocentral locations obtained through four different seismic techniques agree with one another, with the cross-correlation technique providing the best results. These results depict a plumbing system with a narrow sub-vertical deep conduit and a shallow hydrothermal system, where most LP events are located. These two regions are connected through an intermediate region of path divergence, whose geometry and orientation are likely influenced by stress reorientation due to topographic effects of the caldera-like crater.
Finally, to further strengthen the interpretations of the previous case study, the seismic data were analyzed in tandem with a complementary multiparametric monitoring dataset. This complementary study confirms that the anomalous LP activity was a sign of unrest in the preparatory phase of the phreatic eruption. In addition, I show how changes observed in other monitored parameters enabled the detection of further signs of unrest in the shallow hydrothermal system. Overall, this study demonstrates that detecting complex geometric regions within plumbing systems beneath volcanoes is fundamental for effectively forecasting eruptions that at first sight seem to occur without any precursory activity.
Furthermore, through this research I show that combining observational and modelling methods allows a more precise interpretation of volcanic processes.
Using the example of Pope Gregory the Great's cautiously negative reply to the Byzantine empress Constantina's request that the head of Paul, a bodily relic of great symbolic capital, be sent to the imperial court in Constantinople, this essay examines a process, already tangible before the 6th century, of the genesis of a Western, late Roman identity that increasingly found its expression in religious-moral arguments. It also considers this process against the background of the discursive usability of apparent differences as an argument in the communication between East and West, between secular and religious power.
Geomorphology seeks to characterize the forms, rates, and magnitudes of sediment and water transport that sculpt landscapes. This is generally referred to as earth surface processes, which incorporates the influence of biologic (e.g., vegetation), climatic (e.g., rainfall), and tectonic (e.g., mountain uplift) factors in dictating the transport of water and eroded material. In mountains, high relief and steep slopes combine with strong gradients in rainfall and vegetation to create dynamic expressions of earth surface processes. This same rugged topography presents challenges in data collection and process measurement, where traditional techniques involving detailed observations or physical sampling are difficult to apply at the scale of entire catchments. Herein lies the utility of remote sensing. Remote sensing is defined as any measurement that does not disturb the natural environment, typically via acquisition of images in the visible- to radio-wavelength range of the electromagnetic spectrum. Remote sensing is an especially attractive option for measuring earth surface processes, because large areal measurements can be acquired at much lower cost and effort than traditional methods. These measurements cover not only topographic form, but also climatic and environmental metrics, which are all intertwined in the study of earth surface processes. This dissertation uses remote sensing data ranging from handheld camera-based photo surveying to spaceborne satellite observations to measure the expressions, rates, and magnitudes of earth surface processes in high-mountain catchments of the Eastern Central Andes in Northwest Argentina. This work probes the limits and caveats of remote sensing data and techniques applied to geomorphic research questions, and presents important progress at this disciplinary intersection.
Organizing immigration
(2020)
Immigration constitutes a policy field with often quite unpredictable dynamics. This is because immigration constitutes a ‘wicked problem’, meaning that it is characterized by uncertainty, ambiguity and complexity. Due to the dynamics in the policy field, expectations towards public administrations often change. Following neo-institutionalist theory, public administrations depend on meeting the expectations in the organizational field in order to maintain legitimacy as the basis for, e.g., resources and the compliance of stakeholders. When expectations in the policy field change, public administrations consequently need to adapt in order to maintain, or repair, their then-threatened legitimacy. If their organizational legitimacy is threatened by a perception that structures and processes are inadequate for changed expectations, an ‘institutional crisis’ unfolds. However, we know little about ministerial bureaucracies’ structural reactions to such crucial moments and how these affect the quest for coordination within policy-making. The dissertation thus links both policy analysis and public administration research and consists of five publications. It asks: How do structures in ministerial bureaucracies change in the context of institutional crises? And what effect do these changes have on ministerial coordination? The dissertation focuses on the dynamic policy field of immigration in Germany in the period from 2005 to 2017 and pursues three objectives: 1) to identify the context and impulse for changes in the structures of ministerial bureaucracies, 2) to describe the respective changes in their organizational structures, and 3) to identify their effect on coordination. It compares and contrasts institutional crises induced by incremental change and by shock, as well as changes and effects at the federal and Länder levels, which allows a comprehensive answer to both research questions.
Theoretically, the dissertation follows neo-institutionalist theory with a particular focus on changes in organizational structures, coordination and crisis management. Methodologically, it follows a comparative design. Each article (except for the literature review) focuses on ministerial bureaucracies at one governmental level (federal or Länder) and on an institutional crisis induced by either an incremental process or a shock. Responses and effects can thus be compared and contrasted across impulses for institutional crises and governmental levels. Overall, the dissertation follows a mixed-methods approach, with a majority of qualitative single and small-n case studies based on document analysis and semi-structured interviews. Additionally, two articles use quantitative methods, as these best suited the respective research questions; the rather explorative nature of these two articles nevertheless fits the overall interpretivist approach of the dissertation. The dissertation’s core argument is as follows: Within the investigation period, varying dynamics, and thus impulses for institutional crises, occurred in the German policy field of immigration. Accordingly, stakeholders’ expectations of how the politico-administrative system should address the policy problem changed. Ministerial administrations at both the federal and Länder levels adapted to these expectations in order to maintain, or regain, organizational legitimacy. In doing so, the administrations drew on well-known recipes of structural change; institutional crises do not constitute fields of experimentation. The new structures had an immediate effect on ministerial coordination, in both the horizontal and the vertical dimension, yet they did not amount to a comprehensive change of the system in place.
The dissertation thus challenges the idea that crises topple existing structures and instead shows that the adaptability and the persistence of public administrations constitute two sides of the same coin.
Die Rechtsformen und Instrumente der staatlichen Subventionierung von innovativen Tätigkeiten (The legal forms and instruments of state subsidization of innovative activities)
(2020)
The European Union has classified support for innovation as a priority. Since this priority must be implemented, the EU institutions issued, in the form of non-binding legal acts, general policy guidelines on how the Member States should conduct their policies in support of innovative activity. Within the scope of their autonomy, the Member States may determine the measures, forms of action and procedures for supporting innovative activities. The rules introduced in this regard should, however, take into account the guidelines arising from the EU’s policy acts. This study sought to establish whether the Polish legislature has fulfilled the obligation, derivable from EU policy, to promote innovative activities, in particular by introducing means, legal forms and procedures for granting aid.
Chapter one (I) discusses the basic concepts of the research, in particular the concept of innovative activity and questions relating to the role of state aid as a measure to support innovative activity. Accordingly, a comprehensive catalogue of measures by which the administration implements the obligations arising from EU growth policy is discussed. The second chapter (II) deals with the classification of the legal forms of economic administration. Chapter three (III) presents the legal forms and procedures for granting state aid. Two legal forms are examined in particular, the administrative act and the contract, since state aid is most frequently granted on their basis. The analysis focuses especially on questions relating to the legal institution of the administrative contract and the German two-stage theory (Zweistufentheorie).
Against the background described above, the subsequent chapters four (IV) and five (V) analyse the measures and legal forms for granting state aid specifically for innovative activities. In addition, chapter five (V) also analyses the promotion of innovative activities through state aid in the form of an administrative act. Specific legal institutions (forms of action), including legal measures, connected with the granting of state aid in special economic zones are presented, as is the support of innovative activities within the framework of support for research and development.
The economic intensification of professional sport over recent decades has led to an adaptation of the organizational structures of sport. In this context, the question is repeatedly raised as to what extent an identity-forming feedback between sport and its grassroots can be ensured. In legal practice, the grassroots-democratic orientation of the registered association (eingetragener Verein), the legal form prevailing in sport, stands in tension with the outsourcing of economic activity to corporations.
In this context, the author examines to what extent the registered cooperative (eingetragene Genossenschaft) is suitable as a legal form for sport. He addresses its legal permissibility, its organizational and financial constitution, the cooperative auditing system associated with this legal form, and the tax implications, and develops concrete possibilities for deploying the registered cooperative within the organizational pyramid of sport.
Wege zur Gesangskarriere (Paths to a singing career)
(2020)
Hybrid mismatch arrangements are among the most important building blocks of cross-border tax planning schemes and one of the most complex topics in international tax law. With the Anti Tax Avoidance Directive (ATAD), the EU legislator has imposed on the Member States the legal obligation to neutralize the taxation mismatches arising from hybrid arrangements through linking (correspondence) rules.
The work provides a thorough analysis of the relevant directive provisions, taking up the OECD’s recommendations on BEPS Action 2. Significant divergences are identified and questions of doubt are discussed. This groundwork serves as the frame of reference for the subsequent comparison with the German legal framework, on the basis of which the remaining need for legislative implementation is identified. Finally, the draft of the ATAD implementation act is critically appraised.
This volume of the series „Potsdamer Geographische Praxis“ contains three contributions dealing with the action plan „Tolerantes Brandenburg“ of the Brandenburg state government. All contributions analyse, on the basis of empirical surveys, how this concept for dealing with right-wing extremism and right-wing populism and for promoting democracy has been implemented in recent years. The first two contributions concern the so-called future dialogues (Zukunftsdialoge) „Tolerantes Brandenburg“, which were conducted in all independent cities and districts of Brandenburg between 2015 and 2017. The first contribution, by Schubarth, Kohlstruck and Rolfes, presents the results of the scholarly observation of the future dialogues; the findings are based mainly on participant observation of the dialogues and qualitative interviews with participants. The second contribution, by Bode and Rolfes, is based on a quantitative methodology and contains the evaluation of a standardized survey of participants in the future dialogues. The results of both studies provide important insights and good starting points regarding, on the one hand, how the institutions of the advisory network „Tolerantes Brandenburg“ and the action plan could become better known at the local level and, on the other hand, which steps would help to anchor the action plan (even) more firmly in the regions. The third contribution, by Schubarth, Kohlstruck and Rolfes, is an expert report from 2019. It offers a multidimensional view of the action plan from different internal and external perspectives, focusing above all on the social and political changes that have taken place since 2014 in the field of „strengthening democracy and confronting right-wing extremism“. The expert report is based on guideline-centred interviews.
The Feasibility and Effectiveness of a New Practical Multidisciplinary Treatment for Low-Back Pain
(2020)
Low-back pain is a major health problem, exacerbated by the fact that most treatments are not suitable for self-management in everyday life. In particular, interdisciplinary programs consist of intensive therapy lasting several weeks. Additionally, therapy components are rarely coordinated with regard to reinforcing effects, which would improve complaints in persons with higher pain. This study assesses the effectiveness of a self-management program, firstly for persons suffering from higher pain and secondly compared to regular routines. The study objectives were addressed in a single-blind multicenter controlled trial. A total of n = 439 volunteers (age 18–65 years) were randomly assigned to a twelve-week multidisciplinary sensorimotor training (3 weeks center-based and 9 weeks home-based) or a control group. The primary outcome, pain (Chronic Pain Grade), as well as mental health, was assessed by questionnaires at baseline and follow-up (3/6/12/24 weeks, M2-M5). Multiple linear regression models were used for the statistical analysis. N = 291 participants (age 39.7 ± 12.7 years, female = 61.1%, 77% CPG = 1) completed the training (M1/M4/M5), showing a significantly stronger reduction of mental health complaints (anxiety, vital exhaustion) in people with higher pain than in those with lower pain in the multidisciplinary treatment. Compared to regular routines, the self-management multidisciplinary treatment led to a clinically relevant reduction of pain-related disability and significant mental health improvements. Low-cost exercise programs may provide enormous relief for therapeutic processes and rehabilitation aftercare, and thus cost savings for the health system.
Spiked gold nanotriangles
(2020)
We show the formation of metallic spikes on the surface of gold nanotriangles (AuNTs) by using the same reduction process which has been used for the synthesis of gold nanostars. We confirm that silver nitrate operates as a shape-directing agent in combination with ascorbic acid as the reducing agent and investigate the mechanism by dissecting the contribution of each component, i.e., anionic surfactant dioctyl sodium sulfosuccinate (AOT), ascorbic acid (AA), and AgNO3. Molecular dynamics (MD) simulations show that AA attaches to the AOT bilayer of nanotriangles, and covers the surface of gold clusters, which is of special relevance for the spike formation process at the AuNT surface. The surface modification goes hand in hand with a change of the optical properties. The increased thickness of the triangles and a sizeable fraction of silver atoms covering the spikes lead to a blue-shift of the intense near infrared absorption of the AuNTs. The sponge-like spiky surface increases both the surface enhanced Raman scattering (SERS) cross section of the particles and the photo-catalytic activity in comparison with the unmodified triangles, which is exemplified by the plasmon-driven dimerization of 4-nitrothiophenol (4-NTP) to 4,4'-dimercaptoazobenzene (DMAB).
Magnetite-containing aerogels were synthesized by freeze-drying olive oil/silicone oil-based Janus emulsion gels containing gelatin and sodium carboxymethylcellulose (NaCMC). The magnetite nanoparticles dispersed in the olive oil are processed into the gel and remain in the macroporous aerogel after removal of the oil components. The coexistence of macropores from the Janus droplets and mesopores from freeze-drying of the hydrogels, in combination with the magnetic properties, offers a special hierarchical pore structure, which is of relevance for smart supercapacitors, biosensors, and the sorption and separation of spilled oil. The morphology of the final structure was investigated as a function of the initial composition. More hydrophobic aerogels with magnetic responsiveness were synthesized by bisacrylamide crosslinking of the hydrogel. The crosslinked aerogels can be successfully used in magnetically responsive clean-up experiments with the cationic dye methylene blue.
Bayesian Data Assimilation to Support Informed Decision Making in Individualized Chemotherapy
(2020)
An essential component of therapeutic drug/biomarker monitoring (TDM) is to combine patient data with prior knowledge for model-based predictions of therapy outcomes. Current Bayesian forecasting tools typically rely only on the most probable model parameters (the maximum a posteriori (MAP) estimate). This MAP-based approach, however, neither necessarily predicts the most probable outcome nor quantifies the risks of treatment inefficacy or toxicity. Bayesian data assimilation (DA) methods overcome these limitations by providing a comprehensive uncertainty quantification. We compare DA methods with MAP-based approaches and show how probabilistic statements about key markers related to chemotherapy-induced neutropenia can be leveraged for more informative decision support in individualized chemotherapy. Sequential Bayesian DA proved to be the most computationally efficient approach for handling interoccasion variability and integrating TDM data. For new digital monitoring devices enabling more frequent data collection, these features will be of critical importance for improving patient care decisions in various therapeutic areas.
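The contrast drawn in this abstract, between a MAP point forecast and the risk quantification that a full posterior provides, can be illustrated with a minimal sketch. The one-parameter dose-response model, the prior, the observation, and the toxicity threshold below are hypothetical stand-ins for illustration, not the models used in the study:

```python
import numpy as np

# Hypothetical one-parameter model: neutrophil nadir fraction = exp(-k * dose).
# Gaussian prior on k, plus one noisy TDM observation of the nadir.
k_grid = np.linspace(0.01, 2.0, 2000)
prior = np.exp(-0.5 * ((k_grid - 0.8) / 0.3) ** 2)   # k ~ N(0.8, 0.3), assumed
obs, sigma = 0.35, 0.1                               # observed nadir, noise sd (assumed)
pred = np.exp(-k_grid * 1.0)                         # model prediction at unit dose
likelihood = np.exp(-0.5 * ((obs - pred) / sigma) ** 2)
posterior = prior * likelihood
posterior /= posterior.sum()                         # normalize over the grid

# MAP-based forecast: a single point prediction from the most probable parameter.
k_map = k_grid[np.argmax(posterior)]
map_forecast = np.exp(-k_map * 1.0)

# Full Bayesian DA: the posterior also quantifies the RISK that the nadir falls
# below a toxicity threshold, which a MAP point estimate cannot express.
threshold = 0.3
risk = posterior[pred < threshold].sum()

print(f"MAP forecast: {map_forecast:.3f}, P(nadir < {threshold}) = {risk:.2f}")
```

The point of the sketch is that `map_forecast` and `risk` answer different clinical questions; only the latter supports probabilistic statements about treatment toxicity.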
We applied Social Cognitive Theory to investigate whether parent–child relationships, bullying victimization, and teacher–student relationships are associated, both directly and indirectly via self-efficacy in social conflicts, with adolescents’ willingness to intervene in a bullying incident. A total of 2071 adolescents (51.3% male) between the ages of 12 and 17 from 24 schools in Germany participated in this study. A mediation test using structural equation modeling revealed that parent–child relationships, bullying victimization, and teacher–student relationships were directly related to adolescents’ self-efficacy in social conflicts. Further, teacher–student relationships and bullying victimization were directly associated with adolescents’ willingness to intervene in bullying. Finally, relationships with parents, peers and teachers were indirectly related, via self-efficacy in social conflicts, to higher levels of students’ willingness to intervene in bullying situations. Thus, our analysis confirms the general assumptions of Social Cognitive Theory and the usefulness of applying its approach to social conflicts such as bullying situations.
The pathophysiology of Parkinson’s disease (PD) is still not understood. Previous investigations have shown altered oscillatory behaviour of brain circuits or changes in the variability of, e.g., gait parameters in PD. The aim of this study was to investigate whether the motor output differs between PD patients and healthy controls. Patients without tremor were investigated in the medication-off state while performing a special bilateral isometric motor task. Force and accelerations (ACC) were recorded, as well as the mechanomyography (MMG) of the biceps brachii, the brachioradialis, and the pectoralis major muscles, using piezoelectric sensors during the bilateral motor task at 60% of the maximal isometric contraction. The frequency, a specific power ratio, the amplitude variation, and the slope of amplitudes were analysed. The results indicate that the oscillatory behaviour of motor output in PD patients without tremor deviates from controls: the 95% confidence intervals of the power ratio and of the amplitude variation of all signals are disjoint between PD and controls and show significant differences in group comparisons (power ratio: p = 0.000–0.004, r = 0.441–0.579; amplitude variation: p = 0.000–0.001, r = 0.37–0.67). The mean frequency shows a significant difference for ACC (p = 0.009, r = 0.43), but not for MMG. It remains open whether this muscular output reflects changes in brain circuits and whether the results are reproducible and specific for PD.
The Reformation in Brandenburg is considered a prime example of a princely Reformation initiated and steered from above in a northeast German territorial state. For a long time, therefore, research on the history of the Reformation in Brandenburg concentrated mainly on the actions and motives of the territorial rulers. Felix Engel broadens this rather narrow horizon by incorporating the perspective “from below” and systematically exploring which ways and means were open to urban actors, at different phases of the Reformation process, to actively influence the transformation of their communal church institutions. He therefore first considers both the structural and ideological preconditions and the course of the Reformation. He then analyses continuities and ruptures, as well as the scope of action available to the protagonists involved, in central areas of the urban church institutions.
Recent research indicates that affective responses during exercise are an important determinant of future exercise and physical activity. Thus far, these responses have been measured with standardized self-report scales, but this study used biometric software for automated facial action analysis to analyze the changes that occur during physical exercise. A sample of 132 young, healthy individuals performed an incremental test on a cycle ergometer. During that test the participants’ faces were video-recorded and the changes were algorithmically analyzed at frame rate (30 fps). Perceived exertion and affective valence were measured every two minutes with established psychometric scales. Taking into account anticipated inter-individual variability, multilevel regression analysis was used to model how affective valence and ratings of perceived exertion (RPE) covaried with movement in 20 facial action areas. We found the expected quadratic decline in self-reported affective valence (more negative) as exercise intensity increased. Repeated-measures correlation showed that the facial action mouth open was linked to changes in (highly intercorrelated) affective valence and RPE. Multilevel trend analyses were calculated to investigate whether facial actions were typically linked to either affective valence or RPE. These analyses showed that mouth open and jaw drop predicted RPE, whereas (additional) nose wrinkle was indicative of the decline in affective valence. Our results contribute to the view that negative affect, escalating with increasing exercise intensity, may be the body’s essential warning signal that physiological overload is imminent. We conclude that automated facial action analysis provides new options for researchers investigating feelings during exercise. In addition, our findings offer physical educators and coaches a new way of monitoring the affective state of exercisers without interrupting and asking them.
Remembering German-Australian Colonial Entanglements emphatically promotes a critical and nuanced understanding of the complex entanglement of German colonial actors and activities within Australian colonial institutions and different imperial ideologies. Case studies ranging from the German reception of James Cook’s voyages through to the legacies of 19th- and 20th-century settler colonialism foreground the highly ambiguous roles played by explorers, missionaries, intellectuals and other individuals, as well as by objects and things that travelled between worlds – ancestral human remains, rare animal skins, songs, and even military tanks. The chapters foreground the complex relationship between science, religion, art and exploitation, displacement and annihilation.
Abiotic stresses cause oxidative damage in plants. Here, we demonstrate that foliar application of an extract from the seaweed Ascophyllum nodosum, SuperFifty (SF), largely prevents paraquat (PQ)-induced oxidative stress in Arabidopsis thaliana. While PQ-stressed plants develop necrotic lesions, plants pre-treated with SF (i.e., primed plants) were unaffected by PQ. Transcriptome analysis revealed induction of reactive oxygen species (ROS) marker genes, genes involved in ROS-induced programmed cell death, and autophagy-related genes after PQ treatment. These changes did not occur in PQ-stressed plants primed with SF. In contrast, upregulation of several genes related to carbohydrate metabolism, growth, hormone signaling, and antioxidant defense was specific to SF-primed plants. Metabolomic analyses revealed accumulation of the stress-protective metabolite maltose and the tricarboxylic acid cycle intermediates fumarate and malate in SF-primed plants. Lipidome analysis indicated that lipids associated with oxidative stress-induced cell death and chloroplast degradation, such as triacylglycerols (TAGs), declined upon SF priming. Our study demonstrated that SF confers tolerance to PQ-induced oxidative stress in A. thaliana, an effect achieved by modulating a range of processes at the transcriptomic, metabolic, and lipid levels.
Using the earth and environmental sciences (including the landscape- and site-related subfields of the agricultural sciences) as an example, this article shows that personal research data also occur in seemingly “unsuspicious” disciplines. A review of the literature shows that general guidelines on data protection in research offer little support for the cases particularly relevant to these disciplines. For the spatial data that are especially relevant in the earth and environmental sciences, there is the additional problem that even legal experts disagree on how such data should be assessed under data protection law. The results of an empirical pilot study reveal a whole range of types of personal research data that play a role in the research practice of the earth and environmental sciences. They also suggest that, owing to a lack of familiarity with data protection, the handling of personal data in the research practice of the earth and environmental sciences does not always meet legal requirements. Support from learned societies and infrastructure institutions, for instance in the form of discipline-specific guidelines, qualified advice, or institutionalized options for archiving data securely and, where necessary, publishing them with restricted access, is also scarce. This situation poses challenges for the further development of the disciplinary data culture and data infrastructure, for example within the process of building a National Research Data Infrastructure (NFDI). This article outlines options for infrastructure institutions to support this development.
Some of the most frequent questions surrounding business negotiations address not only the nature of such negotiations, but also how they should be conducted. The answers given by business people from different cultural backgrounds to these questions are likely to differ from the standard answers found in business manuals.
In her book, Milene Mendes de Oliveira investigates how Brazilian and German business people conceptualize and act out business negotiations using English as a Lingua Franca. The frameworks of Cultural Linguistics, English as a Lingua Franca, World Englishes, and Business Discourse offer the theoretical and methodological grounding for the analysis of interviews with high-ranking Brazilian and German business people. Moreover, a side study on e-mail exchanges between Brazilian and German employees of a healthcare company serves as a test case for the results arising from the interviews and helps to understand other facets of authentic intercultural business communication.
Offering new insights on English as a Lingua Franca in international business contexts, Business Negotiations in ELF from a Cultural Linguistic Perspective simultaneously provides a detailed cultural-conceptual account of business negotiations from the viewpoint of Brazilian and German business people and a secondary analysis of their pragmatic aspects.
Cells and tissues are sensitive to mechanical forces applied to them. In particular, bone forming cells and connective tissues, composed of cells embedded in fibrous extracellular matrix (ECM), are continuously remodeled in response to the loads they bear. The mechanoresponses of cells embedded in tissue include proliferation, differentiation, apoptosis, internal signaling between cells, and formation and resorption of tissue.
Experimental in-vitro systems of various designs have demonstrated that forces affect tissue growth, maturation and mineralization. However, the results depended on different parameters such as the type and magnitude of the force applied in each study. Some experiments demonstrated that applied forces increase cell proliferation and inhibit cell maturation rate, while other studies found the opposite effect. When the effect of different magnitudes of forces was compared, some studies showed that higher forces resulted in a cell proliferation increase or differentiation decrease, while other studies observed the opposite trend or no trend at all.
In this study, MC3T3-E1 cells, a pre-osteoblast (bone-forming) cell line, were used. In this cell line, cell differentiation is known to accelerate after cells stop proliferating, typically at confluency. This makes the cell line an interesting subject for studying the influence of forces on the switch between the proliferation stage of the precursor cells and their differentiation into mature osteoblasts.
A new experimental system was designed to perform systematic investigations of the influence of the type and magnitude of forces on tissue growth. A single well plate contained an array of 80 rectangular pores. Each pore was seeded with MC3T3-E1 cells. The culture medium contained magnetic beads (MBs) of 4.5 μm in diameter that were incorporated into the pre-osteoblast cells. Using an N52 neodymium magnet, forces ranging over three orders of magnitude were applied to MBs incorporated in cells at 10 different distances from the magnet. The amount of formed tissue was assessed after 24 days of culture. The experimental design made it possible to obtain data concerning (i) the influence of the type of force (static, oscillating, no force) on tissue growth; (ii) the influence of the magnitude of force (pN–nN range); and (iii) the effect of functionalizing the magnetic beads with the tripeptide Arg-Gly-Asp (RGD). To assess the cell differentiation state, at the final stage of the tissue growth experiments the expression of alkaline phosphatase (ALP), a well-known marker of osteoblast differentiation, was analysed.
The experiments showed that the application of static magnetic forces increased tissue growth compared to the control, while oscillating forces resulted in a reduction of tissue growth. A statistically significant positive correlation was found between the amount of tissue grown and the magnitude of the oscillating magnetic force. A positive but non-significant correlation of the amount of tissue with the magnitude of forces was obtained when static forces were applied. Functionalizing the MBs with RGD peptides and applying oscillating forces resulted in an increase of tissue growth relative to tissues incubated with “plain” epoxy MBs. ALP expression decreased as a function of the magnitude of force both when static and when oscillating forces were applied. ALP stain intensity was reduced relative to the control when oscillating forces were applied and was not significantly different from the control for static forces.
The suggested interpretation of the experimental findings is that larger mechanical forces delay cell maturation and keep the pre-osteoblasts in a more proliferative stage characterized by more tissue formed and lower expression of ALP. While the influence of the force magnitude can be well explained by an effect of the force on the switch between proliferation and differentiation, the influence of force type (static or oscillating) is less clear. In particular, it is challenging to reconcile the reduction of tissue formed under oscillating forces as compared to controls with the simultaneous reduction of ALP expression. To better understand this, it may be necessary to refine the staining protocol of the scaffolds and to include the amount and structure of ECM as well as other factors that were not monitored in the experiment and which may influence tissue growth and maturation.
The developed experimental system proved well suited for a systematic and efficient study of the mechanoresponsiveness of tissue growth: it allowed the dependence of tissue growth on force magnitudes ranging over three orders of magnitude to be studied, and it enabled a comparison between the effects of static and oscillating forces. Future experiments can explore the multiple parameters that affect tissue growth as a function of the magnitude of the force: by applying different time-dependent forces; by extending the force range studied; or by using different cell lines and manipulating the mechanotransduction in the cells biochemically.
Introduction
(2020)
Why is Cleopatra, a descendant of Alexander the Great, a Ptolemy from a Greek–Macedonian family, in popular imagination an Oriental woman? True, she assumed some aspects of pharaonic imagery in order to rule Egypt, but her Orientalism mostly derives from ancient (Roman) and modern stereotypes: both the Orient and the idea of a woman in power are signs, in the Western tradition, of ‘otherness’ – and in this sense they can easily overlap and interchange.
This volume investigates how ancient women, and particularly powerful women, such as queens and empresses, have been re-imagined in Western (and not only Western) arts; highlights how this re-imagination and re-visualization is, more often than not, the product of Orientalist stereotypes – even when dealing with women who had nothing to do with Eastern regions; and compares these images with examples of Eastern gaze on the same women. Through the chapters in this volume, readers will discover the similarities and differences in the ways in which women in power were and still are described and decried by their opponents.
New media, communication tools, and art forms are establishing themselves within our societies at an ever-increasing pace. It is often young people who, as “digital natives”, can engage with these developments without reservation. Calls for a critical approach to media and for systematic media education have been voiced for quite some time. So far, however, there have been few efforts to equip prospective educators with appropriate methods, ideas, and materials.
This volume of the DIGAREC Series contains contributions from the interdisciplinary lecture series “Videospiele als didaktische Herausforderung” (Video Games as a Didactic Challenge), held in the summer semester of 2017 at the Faculty of Arts of the University of Potsdam. The contributions propose ways of using computer and video games in school teaching and for activities in extracurricular youth institutions. The authors discuss, from the perspectives of their respective disciplines, concrete methods and possible applications based on selected computer games. The focus is on video games that are used primarily for entertainment, since the use of “serious games”/“educational games” has already received broader attention in recent times.
Halb Europa in Brandenburg
(2020)
The Margraviate of Brandenburg was among the regions most heavily devastated by the Thirty Years’ War. These regions stretched like a broad band from northeastern Germany across central Germany to the Middle and Upper Rhine. When the young Elector Friedrich Wilhelm of Brandenburg returned to his ancestral lands in March 1643 after long years of wartime exile, he found them thoroughly ruined. There, the war had raged particularly fiercely, above all in the 1630s, after the invasion of Pomerania by the Swedish king Gustavus II Adolphus.
Even more than in the towns, the people in the villages and unfortified hamlets had suffered under the mercenary armies. The constant passage of troops, their quartering, and their depredations led to famines, waves of plague, and the mass flight of the civilian population. The reconstruction of the Margraviate of Brandenburg, even though it had already begun in the final years of the war, would prove extremely protracted owing to the economic collapse, the massive destruction, and the enormous loss of life.
The traumas people suffered during the turmoil of the war shaped the culture of remembrance of subsequent generations, and not only in Brandenburg, until the First World War.
Social comparison processes and the social position within a school class already play a major role in performance evaluation as early as in elementary school. The influence of contrast and assimilation effects on self-evaluation of performance as well as task interest has been widely researched in observational studies under the labels big-fish-little-pond and basking-in-reflected-glory effect. This study examined the influence of similar contrast and assimilation effects in an experimental paradigm. Fifth and sixth grade students (n = 230) completed a computer-based learning task during which they received social comparative feedback based on 2 × 2 experimentally manipulated feedback conditions: social position (high vs. low) and peer performance (high vs. low). Results show a more positive development of task interest and self-evaluation of performance in both the high social position and the high peer performance condition. When applied to the school setting, results of this study suggest that students who already perform well in comparison to their peer group are also the ones who profit most from social comparative feedback, given that they are the ones who usually receive the corresponding positive performance feedback.
Instructions given prior to extinction training facilitate the extinction of conditioned skin conductance responses (SCRs) and fear-potentiated startle responses (FPSs) and serve as laboratory models for cognitive interventions implemented in exposure-based treatments of pathological anxiety. Here, we investigated how instructions given prior to extinction training, with or without the additional removal of the electrode used to deliver the unconditioned stimulus (US), affect the return of fear assessed 24 hours later. We replicated previous instruction effects on extinction and found, in addition, that the removal of the US electrode slightly enhanced the facilitating effects on the extinction of conditioned FPSs. In contrast, extinction instructions hardly affected the return of conditioned fear responses. These findings suggest that instruction effects observed during extinction training do not extend to tests of the return of fear 24 hours later, which serve as laboratory models of relapse and of the stability of improvement in exposure-based treatments.
Renormalisation and locality
(2020)
The “HPI Future SOC Lab” is a cooperation of the Hasso Plattner Institute (HPI) and industry partners. Its mission is to enable and promote exchange and interaction between the research community and the industry partners.
The HPI Future SOC Lab provides researchers with free-of-charge access to a complete infrastructure of state-of-the-art hardware and software. This infrastructure includes components which might be too expensive for an ordinary research environment, such as servers with up to 64 cores and 2 TB of main memory. The offerings address researchers particularly from, but not limited to, the areas of computer science and business information systems. Main areas of research include cloud computing, parallelization, and in-memory technologies.
This technical report presents the results of research projects executed in 2017. Selected projects presented their results on April 25 and November 15, 2017, at the Future SOC Lab Day events.
Geleitwort
(2020)
Envy is an unpleasant emotion. If individuals anticipate that comparing their payoff with the (potentially higher) payoff of others will make them envious, they may want to actively avoid information about other people’s payoffs. Given the opportunity to reduce another person’s payoff, an individual’s envy may trigger behavior that is detrimental to welfare. In this case, if individuals anticipate that they will react in a welfare-reducing way, they may also avoid information about other people’s payoffs from the outset. We investigated these two hypotheses in three experiments. We found that 13% of our potentially envious subjects avoided information when they did not have the opportunity to reduce another participant’s payoff. Psychological scales do not explain this behavior. We also found that voluntarily uninformed subjects neither deducted less of the payoff nor deducted less frequently than subjects who could not avoid the information.
The ability of a company to innovate and to launch innovations is a critical edge for remaining competitive in the 21st century. Large organizations therefore increasingly recognize employees as a significant factor and critical source of innovation. Several studies show that every employee has certain skills and knowledge to offer and can contribute to innovation. Hence, every employee has a certain ‘entrepreneurial potential’. This potential can be expressed in the form of entrepreneurial behaviour and can occur in many ways, from monopersonal innovation championing to several small-scale contributions where several individuals team up for innovation. To support the entrepreneurial behaviour of their employees, large organizations increasingly rely on Corporate Entrepreneurship: they set up organizational structures and venturing units and offer vehicles and tools to their employees to be more entrepreneurial. The emergence of new tools and technologies allows for new ways of employee involvement, also enabling more radical innovations to be developed collaboratively. Yet many such offerings fail to achieve the desired outcome. While some employees immediately opt in for innovation, others do not, and their entrepreneurial potential remains untapped. This research explores how large organizations can better support their employees to express their entrepreneurial potential, thus moving from non-entrepreneurial behaviour, or not wanting to be involved, to actually expressing entrepreneurial behaviour. The underlying research is therefore two-fold: while focusing on the individual level and the entrepreneurial behaviour of employees, it also takes the organizational perspective into account in order to identify how non-entrepreneurial behaviour can be stimulated towards entrepreneurial behaviour.
Using an empirical qualitative research design based on pragmatism and abduction, data is collected by means of qualitative interviews as well as a longitudinal use case setting. Grounded theory is then applied for analysis and sensemaking. The main outcome is a theoretical model of why employees express or do not express their entrepreneurial potential and how non-expression can potentially be triggered towards entrepreneurial behaviour. The results indicate that there is no one-size-fits-all model of Corporate Entrepreneurship. This research therefore argues that organizations can achieve higher levels of entrepreneurial behaviour when addressing employees differently. By developing a theoretical model as well as suggestions for how this model can be applied in practice, this research contributes to theory and practice alike. The document closes by suggesting future research areas around supporting employees to express their entrepreneurial potential.
SandBlocks
(2020)
Visual programming languages are nowadays hardly used in favour of textual programming languages, although visual programming languages offer several advantages. These range from the avoidance of syntax errors, through the use of concrete domain-specific notation, to better readability and maintainability of the program. Nevertheless, professional software developers rely almost exclusively on textual programming languages.
To let developers benefit from these advantages of visual programming languages without having to give up the textual programming languages they are familiar with, the idea is to make textual and visual program elements usable together within one programming language. It is then up to the developer when and how to use visual elements in their program code.
This thesis presents the SandBlocks framework, which enables this combined use of visual and textual program elements. In addition to a survey of visual programming languages, it describes the technical integration of visual program elements into the Squeak/Smalltalk system, gives insights into their implementation and use in live programming systems, and discusses their use in different domains.
This thesis deals with the effects of European Union law on double taxation treaties. When double taxation treaties are concluded between states that are also members of the European Union, these states must additionally observe Union law, in particular the fundamental freedoms and directives. The thesis examines how Union law, which in principle takes precedence, affects the provisions of double taxation treaties, when these provisions nevertheless remain applicable, and how the rules of national tax law and of double taxation treaties may be designed.
Based on various decisions of the European Court of Justice, the thesis establishes a foundation for the reliable assessment of present and future cases, strikes a balance between Union law and the respective national tax law, and identifies options for states to prevent tax avoidance schemes.
This paper evaluates the construction of the rights of human rights defenders within international law and its shortcomings in protecting women. Human rights defenders have historically been defined on the basis of their actions as defenders. However, as Marxist-feminist scholar Silvia Federici contends, women are inherently politicised and, moreover, face obstacles to political action which are invisible to and untouchable by the law. Labour rights set an example of handling such a disadvantaged political position by placing vital importance on workers’ right to association and collective action. The paper closes with the suggestion that transposing this construction of rights to women would better protect women as human rights defenders while emphasising their capacity for self-determination in their political actions.
Early numeracy is one of the strongest predictors for later success in school mathematics (e.g., Duncan et al., 2007). The main goal of first grade mathematics teachers should therefore be to provide learning opportunities that enable all students to develop sound early numeracy skills. Developmental models, or learning progressions, can describe how early numerical understanding typically develops. Assessments that are aligned to empirically validated learning progressions can support teachers to understand their students’ learning better and target instruction accordingly. To date, there have been no progression-based instruments made available for German teachers to monitor their students’ progress in the domain of early numeracy. This dissertation contributes to the design of such an instrument. The first study analysed the suitability of early numeracy assessments currently used in German primary schools at school entry to identify students’ individual starting points for subsequent progress monitoring. The second study described the development of progression-based items and investigated the items with regard to main test quality criteria, such as reliability, validity, and test fairness, to find a suitable item pool to build targeted tests. The third study described the construction of the progress monitoring measure, referred to as the learning progress assessment (LPA). The study investigated the extent to which the LPA was able to monitor students’ individual learning progress in early numeracy over time. The results of the first study indicated that current school entry assessments were not able to provide meaningful information about the students’ initial learning status. Thus, the MARKO-D test (Ricken, Fritz, & Balzer, 2013) was used to determine the students’ initial numerical understanding in the other two studies, because it has been shown to be an effective measure of conceptual numerical understanding (Fritz, Ehlert, & Leutner, 2018).
Both studies provided promising evidence for the quality of the LPA and its ability to detect changes in numerical understanding over the course of first grade. The studies of this dissertation can be considered an important step in the process of designing an empirically validated instrument that supports teachers to monitor their students’ early numeracy development and to adjust their teaching accordingly to enhance school achievement.
This work presents a new design for programming environments that promote the exploration of domain-specific software artifacts and the construction of graphical tools for such program comprehension tasks. In complex software projects, tool building is essential because domain- or task-specific tools can support decision making by representing concerns concisely with low cognitive effort. In contrast, generic tools can only support anticipated scenarios, which usually align with programming language concepts or well-known project domains.
However, the creation and modification of interactive tools is expensive because the glue that connects data to graphics is hard to find, change, and test. Even if valuable data is available in a common format and even if promising visualizations could be populated, programmers have to invest many resources to make changes in the programming environment. Consequently, only ideas of predictably high value will be implemented. In the non-graphical, command-line world, the situation looks different and inspiring: programmers can easily build their own tools as shell scripts by configuring and combining filter programs to process data.
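The shell-pipeline analogy can be made concrete with a small, purely illustrative sketch (not VIVIDE's actual scripting language): single-purpose filters are configured and composed into an ad-hoc tool, much like `cmd1 | cmd2 | cmd3` on the command line.

```python
from functools import reduce

# Hypothetical illustration of the shell-filter idea: each filter is a small,
# single-purpose function over a list of lines; a tool is just a composition.
def grep(pattern):
    return lambda lines: [l for l in lines if pattern in l]

def sort_lines():
    return lambda lines: sorted(lines)

def head(n):
    return lambda lines: lines[:n]

def pipeline(*filters):
    # Compose filters left to right, like `cmd1 | cmd2 | cmd3`.
    return lambda data: reduce(lambda acc, f: f(acc), filters, data)

log = ["error: disk full", "info: started", "error: timeout", "info: done"]
tool = pipeline(grep("error"), sort_lines(), head(1))
print(tool(log))  # -> ['error: disk full']
```

The point of the analogy is that each building block is cheap to write and test in isolation, which is exactly what the thesis argues is missing for graphical tools.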
We propose a new perspective on graphical tools and provide a concept to build and modify such tools with a focus on high quality, low effort, and continuous adaptability. That is, (1) we propose an object-oriented, data-driven, declarative scripting language that reduces the amount of and governs the effects of glue code for view-model specifications, and (2) we propose a scalable UI-design language that promotes short feedback loops in an interactive, graphical environment such as Morphic known from Self or Squeak/Smalltalk systems.
We implemented our concept as a tool building environment, which we call VIVIDE, on top of Squeak/Smalltalk and Morphic. We replaced existing code browsing and debugging tools to iterate within our solution more quickly. In several case studies with undergraduate and graduate students, we observed that VIVIDE can be applied to many domains such as live language development, source-code versioning, modular code browsing, and multi-language debugging. Then, we designed a controlled experiment to measure the effect on the time to build tools. Several pilot runs showed that training is crucial and, presumably, takes days or weeks, which implies a need for further research.
As a result, programmers as users can directly work with tangible representations of their software artifacts in the VIVIDE environment. Tool builders can write domain-specific scripts to populate views to approach comprehension tasks from different angles. Our novel perspective on graphical tools can inspire the creation of new trade-offs in modularity for both data providers and view designers.
Precision agriculture (PA) strongly relies on spatially differentiated sensor information. Handheld instruments based on laser-induced breakdown spectroscopy (LIBS) are a promising sensor technique for the in-field determination of various soil parameters. In this work, the potential of handheld LIBS for the determination of the total mass fractions of the major nutrients Ca, K, Mg, N, P and the trace nutrients Mn, Fe was evaluated. Additionally, other soil parameters, such as humus content, soil pH value and plant available P content, were determined. Since the quantification of nutrients by LIBS depends strongly on the soil matrix, various multivariate regression methods were used for calibration and prediction. These include partial least squares regression (PLSR), least absolute shrinkage and selection operator regression (Lasso), and Gaussian process regression (GPR). The best prediction results were obtained for Ca, K, Mg and Fe. The coefficients of determination obtained for other nutrients were smaller. This is due to much lower concentrations in the case of Mn, while the low number of lines and very weak intensities are the reason for the deviation of N and P. Soil parameters that are not directly related to one element, such as pH, could also be predicted. Lasso and GPR yielded slightly better results than PLSR. Additionally, several methods of data pretreatment were investigated.
The DNA in living cells can be effectively damaged by high-energy radiation, which can lead to cell death. Through the ionization of water molecules, highly reactive secondary species such as low-energy electrons (LEEs) with the most probable energy around 10 eV are generated, which are able to induce DNA strand breaks via dissociative electron attachment. Absolute DNA strand break cross sections of specific DNA sequences can be efficiently determined using DNA origami nanostructures as platforms exposing the target sequences towards LEEs. In this paper, we systematically study the effect of the oligonucleotide length on the strand break cross section at various irradiation energies. The present work focuses on poly-adenine sequences (d(A₄), d(A₈), d(A₁₂), d(A₁₆), and d(A₂₀)) irradiated with 5.0, 7.0, 8.4, and 10 eV electrons. Independent of the DNA length, the strand break cross section shows a maximum around 7.0 eV electron energy for all investigated oligonucleotides, confirming that strand breakage occurs through the initial formation of negative ion resonances. When going from d(A₄) to d(A₁₆), the strand break cross section increases with oligonucleotide length, but only at 7.0 and 8.4 eV, i.e., close to the maximum of the negative ion resonance, is the increase in the strand break cross section with length similar to the increase of an estimated geometrical cross section. For d(A₂₀), a markedly lower DNA strand break cross section is observed for all electron energies, which is tentatively ascribed to a conformational change of the dA₂₀ sequence. The results indicate that, although there is a general length dependence of strand break cross sections, individual nucleotides do not contribute independently to the absolute strand break cross section of the whole DNA strand.
The absolute quantification of sequence-specific strand breaks will help develop a more accurate molecular-level understanding of radiation-induced DNA damage, which can then be used for optimized risk estimates in cancer radiation therapy.
The notion of ‘epiphenomenon’ is usually used to exclude certain aspects of a scientific object because they are considered derivable from others. In linguistics, restrictions of the research object have been made by invoking the notion of ‘epiphenomenon’; this was partly done with a polemical attitude, and was always responded to polemically. The best-known definition of languages as an epiphenomenon is the one proposed by Chomsky, who declared that the specific realisations of language do not warrant scientific attention; but there were earlier relegations of properties of individual languages to the domain of an epiphenomenon of grammar, to the domain of an art rather than a science. These relegations from a certain point of abstraction did advance theories of language, even though they adopted a point of abstraction that did not correspond to the complexity of language.
With the rising complexity of today's software and hardware systems and the hypothesized increase in autonomous, intelligent, and self-* systems, developing correct systems remains an important challenge. Testing, although an important part of the development and maintenance process, cannot usually establish the definite correctness of a software or hardware system, especially when systems have arbitrarily large or infinite state spaces or an infinite number of initial states. This is where formal verification comes in: given a representation of the system in question in a formal framework, verification approaches and tools can be used to establish the system's adherence to its similarly formalized specification, and to complement testing.
One such formal framework is the field of graphs and graph transformation systems. Both are powerful formalisms with well-established foundations and ongoing research that can be used to describe complex hardware or software systems with varying degrees of abstraction. Since their inception in the 1970s, graph transformation systems have continuously evolved; related research spans extensions of expressive power, graph algorithms, and their implementation, application scenarios, or verification approaches, to name just a few topics.
This thesis focuses on a verification approach for graph transformation systems called k-inductive invariant checking, which is an extension of previous work on 1-inductive invariant checking. Instead of exhaustively computing a system's state space, which is a common approach in model checking, 1-inductive invariant checking symbolically analyzes graph transformation rules - i.e. system behavior - in order to draw conclusions with respect to the validity of graph constraints in the system's state space. The approach is based on an inductive argument: if a system's initial state satisfies a graph constraint and if all rules preserve that constraint's validity, we can conclude the constraint's validity in the system's entire state space - without having to compute it.
However, inductive invariant checking also comes with a specific drawback: the locality of graph transformation rules leads to a lack of context information during the symbolic analysis of potential rule applications. This thesis argues that this lack of context can be partly addressed by using k-induction instead of 1-induction. A k-inductive invariant is a graph constraint whose validity in a path of k-1 rule applications implies its validity after any subsequent rule application - as opposed to a 1-inductive invariant where only one rule application is taken into account. Considering a path of transformations then accumulates more context of the graph rules' applications.
As such, this thesis extends existing research and implementation on 1-inductive invariant checking for graph transformation systems to k-induction. In addition, it proposes a technique to perform the base case of the inductive argument in a symbolic fashion, which allows verification of systems with an infinite set of initial states. Both k-inductive invariant checking and its base case are described in formal terms. Based on that, this thesis formulates theorems and constructions to apply this general verification approach for typed graph transformation systems and nested graph constraints - and to formally prove the approach's correctness.
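The inductive argument can be illustrated on a toy finite transition system. This is an explicit-state sketch with an invented system; the thesis itself works symbolically on graph transformation rules and nested graph constraints. The candidate invariant below is not 1-inductive, because an unreachable state satisfying it can step outside it, but it is 2-inductive: no path of two invariant-satisfying states can reach the bad state.

```python
# Toy transition system: states 0..3, initial state 0.
# Reachable states are {0, 1}; state 2 is unreachable but steps to 3.
def successors(s):
    return {0: {1}, 1: {0}, 2: {3}, 3: {3}}[s]

STATES = {0, 1, 2, 3}
INIT = {0}

def prop(s):
    return s != 3  # candidate invariant: "state 3 is never reached"

def paths(length):
    """All transition paths consisting of `length` states."""
    result = [(s,) for s in STATES]
    for _ in range(length - 1):
        result = [p + (t,) for p in result for t in successors(p[-1])]
    return result

def is_k_inductive(k):
    # Base case: every path of up to k states starting in INIT satisfies prop.
    for n in range(1, k + 1):
        for p in paths(n):
            if p[0] in INIT and not all(prop(s) for s in p):
                return False
    # Inductive step: a path of k prop-satisfying states cannot leave prop.
    for p in paths(k):
        if all(prop(s) for s in p):
            if any(not prop(t) for t in successors(p[-1])):
                return False
    return True

print(is_k_inductive(1))  # False: state 2 satisfies prop but steps to 3
print(is_k_inductive(2))  # True: no prop-path of length 2 ends in state 2
```

The extra step of context in the k=2 check rules out the spurious counterexample starting in the unreachable state 2, which mirrors how longer paths accumulate context in the symbolic setting.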
Since unrestricted graph constraints may lead to non-termination or impracticably high execution times given a hypothetical implementation, this thesis also presents a restricted verification approach, which limits the form of graph transformation systems and graph constraints. It is formalized, proven correct, and its procedures terminate by construction. This restricted approach has been implemented in an automated tool and has been evaluated with respect to its applicability to test cases, its performance, and its degree of completeness.
Large-scale patterns of global land use change are very frequently accompanied by natural habitat loss. To assess the consequences of habitat loss for the remaining natural and semi-natural biotopes, inclusion of cumulative effects at the landscape level is required. The interdisciplinary concept of vulnerability constitutes an appropriate assessment framework at the landscape level, though with few examples of its application for ecological assessments. A comprehensive biotope vulnerability analysis allows identification of areas most affected by landscape change and at the same time with the lowest chances of regeneration.
To this end, a series of ecological indicators were reviewed and developed. They measured spatial attributes of individual biotopes as well as some ecological and conservation characteristics of the respective resident species community. The final vulnerability index combined seven largely independent indicators, which covered exposure, sensitivity and adaptive capacity of biotopes to landscape changes. Results for biotope vulnerability were provided at the regional level. This seems to be an appropriate extent with relevance for spatial planning and designing the distribution of nature reserves.
Using the vulnerability scores calculated for the German federal state of Brandenburg, hot spots and clusters within and across the distinguished types of biotopes were analysed. Biotope types with high dependence on water availability, as well as biotopes of the open landscape containing woody plants (e.g., orchard meadows) are particularly vulnerable to landscape changes. In contrast, the majority of forest biotopes appear to be less vulnerable. Despite the appeal of such generalised statements for some biotope types, the distribution of values suggests that conservation measures for the majority of biotopes should be designed specifically for individual sites. Taken together, size, shape and spatial context of individual biotopes often had a dominant influence on the vulnerability score.
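A minimal sketch of how several largely independent indicators might be combined into a single per-biotope index; the actual indicators, scaling, and weighting used in the thesis are not reproduced here.

```python
import numpy as np

# Hypothetical indicator matrix: rows are biotopes, columns are the seven
# indicators (covering exposure, sensitivity, and adaptive capacity).
rng = np.random.default_rng(1)
indicators = rng.random((5, 7))

def vulnerability_index(x):
    # Min-max normalize each indicator to [0, 1], then take the row mean,
    # so every indicator contributes on a comparable scale.
    lo, hi = x.min(axis=0), x.max(axis=0)
    scaled = (x - lo) / (hi - lo)
    return scaled.mean(axis=1)

scores = vulnerability_index(indicators)
print(np.round(scores, 2))  # one score per biotope, higher = more vulnerable
```

An unweighted mean is only one aggregation choice; weighted sums or rank-based combinations would fit the same framework.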
The implementation of biotope vulnerability analysis at the regional level indicated that large biotope datasets can be evaluated with a high level of detail using geoinformatics. Drawing on previous work in landscape spatial analysis, the reproducible approach relies on transparent calculations of quantitative and qualitative indicators. At the same time, it provides a synoptic overview and information on the individual biotopes. It is expected to be most useful for nature conservation in combination with an understanding of population, species, and community attributes known for specific sites. The biotope vulnerability analysis facilitates a foresighted assessment of different land uses, aiding in identifying options to slow habitat loss to sustainable levels. It can also be incorporated into planning of restoration measures, guiding efforts to remedy ecological damage. Restoration of any specific site could yield synergies with the conservation objectives of other sites, through enhancing the habitat network or buffering against future landscape change.
Biotope vulnerability analysis could be developed in line with other important ecological concepts, such as resilience and adaptability, further extending the broad thematic scope of the vulnerability concept. Vulnerability can increasingly serve as a common framework for the interdisciplinary research necessary to solve major societal challenges.
The Bavarian-League war commissariat controlled the largely autonomous mercenary army during the Thirty Years' War. It is therefore considered an exemplary research object for the study of princely power in this period. While previous research has been limited to the normative level, this study undertakes a multi-perspective approach to the topic by evaluating field records and private correspondence and by applying the methods of prosopography and network analysis. It brings to light the entanglements of the various competencies and functions of the social network relevant to the exercise of the war commissariat's office. The study thereby contributes to capturing the diversity of rule in the early modern period.