We present an overview of four deep phase-constrained Chandra HETGS X-ray observations of delta Ori A. Delta Ori A is actually a triple system that includes the nearest massive eclipsing spectroscopic binary, delta Ori Aa, the only such object that can be observed with little phase-smearing with the Chandra gratings. Since the fainter star, delta Ori Aa2, has a much lower X-ray luminosity than the brighter primary (delta Ori Aa1), delta Ori Aa provides a unique system with which to test the spatial distribution of the X-ray emitting gas around delta Ori Aa1 via occultation by the photosphere of, and wind cavity around, the X-ray dark secondary. Here we discuss the X-ray spectrum and X-ray line profiles for the combined observation, having an exposure time of nearly 500 ks and covering nearly the entire binary orbit. The companion papers discuss the X-ray variability seen in the Chandra spectra, present new space-based photometry and ground-based radial velocities obtained simultaneously with the X-ray data to better constrain the system parameters, and model the effects of X-rays on the optical and UV spectra. We find that the X-ray emission is dominated by embedded wind shock emission from star Aa1, with little contribution from the tertiary star Ab or the shocked gas produced by the collision of the wind of Aa1 against the surface of Aa2. We find a similar temperature distribution to previous X-ray spectrum analyses. We also show that the line half-widths are about 0.3-0.5 times the terminal velocity of the wind of star Aa1. We find a strong anti-correlation between line widths and the line excitation energy, which suggests that longer-wavelength, lower-temperature lines form farther out in the wind. Our analysis also indicates that the ratio of the intensities of the strong and weak lines of Fe XVII and Ne X are inconsistent with model predictions, which may be an effect of resonance scattering.
Transition path theory (TPT) for diffusion processes is a framework for analyzing the transitions of multiscale ergodic diffusion processes between disjoint metastable subsets of state space. Most methods for applying TPT involve the construction of a Markov state model on a discretization of state space that approximates the underlying diffusion process. However, the assumption of Markovianity is difficult to verify in practice, and there are to date no known error bounds or convergence results for these methods. We propose a Monte Carlo method for approximating the forward committor, probability current, and streamlines from TPT for diffusion processes. Our method uses only sample trajectory data and partitions of state space based on Voronoi tessellations. It does not require the construction of a Markovian approximating process. We rigorously prove error bounds for the approximate TPT objects and use these bounds to show convergence to their exact counterparts in the limit of arbitrarily fine discretization. We illustrate some features of our method by application to a process that solves the Smoluchowski equation on a triple-well potential.
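The forward committor q(x) is the probability that the process started at x reaches the product set B before the reactant set A. A minimal sketch of the trajectory-based estimate described in the abstract (a 1D double-well stand-in, with bin centers playing the role of Voronoi sites; the potential, noise level, and all parameters below are illustrative, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(x0, dt=1e-3, beta=1.0, n_steps=200_000):
    """Euler-Maruyama for overdamped Langevin dynamics in V(x) = (x^2 - 1)^2."""
    x = np.empty(n_steps)
    x[0] = x0
    sig = np.sqrt(2.0 * dt / beta)
    for i in range(1, n_steps):
        grad = 4.0 * x[i - 1] * (x[i - 1] ** 2 - 1.0)  # V'(x)
        x[i] = x[i - 1] - grad * dt + sig * rng.standard_normal()
    return x

def committor_estimate(traj, centers, a=-1.0, b=1.0, tol=0.1):
    """Estimate q(x) = P(hit B before A | X_0 = x) on Voronoi cells,
    using only sample trajectory data."""
    in_A = np.abs(traj - a) < tol
    in_B = np.abs(traj - b) < tol
    # label[i] = 1 if the trajectory hits B before A after time i, 0 if A first
    label = np.full(traj.size, -1, dtype=int)
    nxt = -1
    for i in range(traj.size - 1, -1, -1):
        if in_B[i]:
            nxt = 1
        elif in_A[i]:
            nxt = 0
        label[i] = nxt
    # Voronoi assignment: nearest cell center
    cell = np.argmin(np.abs(traj[:, None] - centers[None, :]), axis=1)
    q = np.full(centers.size, np.nan)
    for k in range(centers.size):
        mask = (cell == k) & (label >= 0) & ~in_A & ~in_B
        if mask.any():
            q[k] = label[mask].mean()
    return q

traj = simulate(x0=-1.0)
centers = np.linspace(-1.5, 1.5, 13)
q = committor_estimate(traj, centers)
```

In this sketch the committor rises from about 0 near the left well to about 1 near the right well, with q ≈ 0.5 at the barrier top, as symmetry requires.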
A conundrum of trends
(2022)
This comment is meant to reiterate two warnings: One applies to the uncritical use of ready-made (openly available) program packages, and one to the estimation of trends in serially correlated time series. Both warnings apply to the recent publication of Lischeid et al. about lake-level trends in Germany.
Design flood estimation is an essential part of flood risk assessment. Commonly applied are flood frequency analyses and design storm approaches, while derived flood frequency analysis using continuous simulation has recently received growing attention. In this study, a continuous hydrological modelling approach on an hourly time scale is applied, driven by a multi-site weather generator in combination with a k-nearest neighbour resampling procedure based on the method of fragments. The derived 100-year flood estimates in 16 catchments in Vorarlberg (Austria) are compared to (a) the flood frequency analysis based on observed discharges, and (b) a design storm approach. Besides the peak flows, the corresponding runoff volumes are analysed. The spatial dependence structure of the synthetically generated flood peaks is validated against observations. It can be demonstrated that the continuous modelling approach achieves plausible results and shows a large variability in runoff volume across the flood events.
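A hedged sketch of the resampling idea named above: the method of fragments disaggregates a daily total into hourly values by borrowing the normalized hourly pattern ("fragments") of one of the k most similar historical days. The data, the similarity measure, and the 1/rank sampling kernel below are all hypothetical simplifications:

```python
import numpy as np

rng = np.random.default_rng(1)

def disaggregate(daily_total, hist_daily, hist_hourly, k=5):
    """Disaggregate one daily rainfall total to 24 hourly values using the
    method of fragments with k-nearest-neighbour resampling."""
    # k most similar historical days, judged here simply by the daily total
    idx = np.argsort(np.abs(hist_daily - daily_total))[:k]
    # sample one neighbour with a decreasing 1/rank kernel (closer = more likely)
    w = 1.0 / np.arange(1, k + 1)
    choice = rng.choice(idx, p=w / w.sum())
    # fragments = that day's hourly fractions, rescaled to the target total
    fragments = hist_hourly[choice] / hist_hourly[choice].sum()
    return daily_total * fragments

# hypothetical historical record: 500 days with matching hourly patterns
hist_daily = rng.gamma(2.0, 5.0, size=500)
hist_hourly = rng.dirichlet(np.full(24, 0.3), size=500) * hist_daily[:, None]
hourly = disaggregate(20.0, hist_daily, hist_hourly)
```

By construction the 24 hourly values are non-negative and sum exactly to the daily total, while inheriting a realistic sub-daily pattern from the observed record.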
Model-informed precision dosing (MIPD) is a quantitative dosing framework that combines prior knowledge on the drug-disease-patient system with patient data from therapeutic drug/biomarker monitoring (TDM) to support individualized dosing in ongoing treatment. Structural models and prior parameter distributions used in MIPD approaches typically build on prior clinical trials that involve only a limited number of patients selected according to some exclusion/inclusion criteria. Compared to the prior clinical trial population, the patient population in clinical practice can be expected to also include altered behavior and/or increased interindividual variability, the extent of which, however, is typically unknown. Here, we address the question of how to adapt and refine models on the level of the model parameters to better reflect this real-world diversity. We propose an approach for continued learning across patients during MIPD using a sequential hierarchical Bayesian framework. The approach builds on two stages to separate the update of the individual patient parameters from updating the population parameters. Consequently, it enables continued learning across hospitals or study centers, because only summary patient data (on the level of model parameters) need to be shared, but no individual TDM data. We illustrate this continued learning approach with neutrophil-guided dosing of paclitaxel. The present study constitutes an important step toward building confidence in MIPD and eventually establishing MIPD increasingly in everyday therapeutic use.
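The two-stage separation described above can be illustrated with a deliberately simplified conjugate (normal-normal) sketch: stage 1 updates each patient's parameter from TDM observations, and stage 2 updates the population distribution using only the per-patient summaries. This illustrates the data flow, not the paper's actual hierarchical model; all variances and values are made up:

```python
import numpy as np

def patient_update(prior_mu, prior_var, obs, obs_var):
    """Stage 1: posterior for one patient's parameter given TDM observations
    (normal prior, normal likelihood, known variances)."""
    obs = np.asarray(obs, dtype=float)
    post_var = 1.0 / (1.0 / prior_var + obs.size / obs_var)
    post_mu = post_var * (prior_mu / prior_var + obs.sum() / obs_var)
    return post_mu, post_var

def population_update(prior_mu, prior_var, patient_mus, between_var):
    """Stage 2: update the population-level mean from per-patient parameter
    summaries only -- no individual TDM data needs to be shared."""
    patient_mus = np.asarray(patient_mus, dtype=float)
    post_var = 1.0 / (1.0 / prior_var + patient_mus.size / between_var)
    post_mu = post_var * (prior_mu / prior_var + patient_mus.sum() / between_var)
    return post_mu, post_var

ind_mu, ind_var = patient_update(0.0, 1.0, obs=[1.0, 1.0], obs_var=1.0)
pop_mu, pop_var = population_update(0.0, 1.0, [0.5, 0.7, 0.6], between_var=0.5)
```

Because stage 2 consumes only the patient-level parameter estimates, the same update could be run across hospitals without exchanging raw monitoring data, which is the point the abstract makes.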
We consider a class of ergodic Hamilton-Jacobi-Bellman (HJB) equations, related to large-time asymptotics of non-smooth multiplicative functionals of diffusion processes. Under suitable ergodicity assumptions on the underlying diffusion, we show existence of these asymptotics, and that they solve the related HJB equation in the viscosity sense.
We analyse whether a stellar atmosphere model computed with the code CMFGEN provides an optimal description of the stellar observations of WR 136 and simultaneously reproduces the nebular observations of NGC 6888, such as the ionization degree, which is modelled with the pyCloudy code. All the observational material available (far- and near-UV and optical spectra) was used to constrain such models. We found that the stellar temperature T∗, at τ = 20, can lie in a range between 70 000 and 110 000 K, but when using the nebula as an additional restriction, we found that the stellar models with T∗ ∼ 70 000 K represent the best solution for both the star and the nebula.
Exendin-4 is a pharmaceutical peptide used in the control of insulin secretion. Structural information on exendin-4 and related peptides especially on the level of quaternary structure is scarce. We present the first published association equilibria of exendin-4 directly measured by static and dynamic light scattering. We show that exendin-4 oligomerization is pH dependent and that these oligomers are of low compactness. We relate our experimental results to a structural hypothesis to describe molecular details of exendin-4 oligomers. Discussion of the validity of this hypothesis is based on NMR, circular dichroism and fluorescence spectroscopy, and light scattering data on exendin-4 and a set of exendin-4 derived peptides. The essential forces driving oligomerization of exendin-4 are helix–helix interactions and interactions of a conserved hydrophobic moiety. Our structural hypothesis suggests that key interactions of exendin-4 monomers in the experimentally supported trimer take place between a defined helical segment and a hydrophobic triangle constituted by the Phe22 residues of the three monomeric subunits. Our data rationalize that Val19 might function as an anchor in the N-terminus of the interacting helix-region and that Trp25 is partially shielded in the oligomer by C-terminal amino acids of the same monomer. Our structural hypothesis suggests that the Trp25 residues do not interact with each other, but with C-terminal Pro residues of their own monomers.
Apoptotic death of cells damaged by genotoxic stress requires regulatory input from surrounding tissues. The C. elegans scaffold protein KRI-1, ortholog of mammalian KRIT1/CCM1, permits DNA damage-induced apoptosis of cells in the germline by an unknown cell non-autonomous mechanism. We reveal that KRI-1 exists in a complex with CCM-2 in the intestine to negatively regulate the ERK-5/MAPK pathway. This allows the KLF-3 transcription factor to facilitate expression of the SLC39 zinc transporter gene zipt-2.3, which functions to sequester zinc in the intestine. Ablation of KRI-1 results in reduced zinc sequestration in the intestine, inhibition of IR-induced MPK-1/ERK1 activation, and apoptosis in the germline. Zinc localization is also perturbed in the vasculature of krit1(-/-) zebrafish, and SLC39 zinc transporters are mis-expressed in Cerebral Cavernous Malformations (CCM) patient tissues. This study provides new insights into the regulation of apoptosis by cross-tissue communication, and suggests a link between zinc localization and CCM disease.
A Conjunction of Mysteries
(2016)
A conformational study of N-acetyl glucosamine derivatives utilizing residual dipolar couplings
(2013)
A conformational study of N-acetyl glucosamine derivatives utilizing residual dipolar couplings
(2011)
The conformational analyses of six non-rigid N-acetyl glucosamine (NAG) derivatives employing residual dipolar couplings (RDCs) and NOEs together with molecular dynamics (MD) simulations are presented. Due to internal dynamics, we had to consider different conformer ratios existing in solution. The good quality of the correlation between theoretically and experimentally obtained RDCs shows the correctness of the calculated conformers, even if the ratios derived from the MD simulations do not exactly match the experimental data. Where possible, the results were compared with previously published data and discussed.
The minima on the potential energy surface of 1,2-bis(o-carboxyphenoxy)ethane (CPE) molecule in its electronic ground state were searched by a molecular dynamics simulation performed with MM2 force field. For each of the found minimum-energy conformers, the corresponding equilibrium geometry, charge distribution, HOMO-LUMO energy gap, force field, vibrational normal modes and associated IR and Raman spectral data were determined by means of the density functional theory (DFT) based electronic structure calculations carried out by using B3LYP method and various Pople-style basis sets. The obtained theoretical data confirmed the significant effects of the intra- and inter-molecular hydrogen bonding interactions on the conformational structure, force field, and group vibrations of the molecule. The same data have also revealed that two of the determined stable conformers, both of which exhibit pseudo-crown structure, are considerably more favorable in energy to the others and accordingly provide the major contribution to the experimental spectra of CPE. In the light of the improved vibrational spectral data obtained within the "SQM FF" methodology and "Dual Scale Factors" approach for the monomer and dimer forms of these two conformers, a reliable assignment of the fundamental bands observed in the experimental room-temperature IR and Raman spectra of the molecule was given, and the sensitivities of its group vibrations to conformation, substitution and dimerization were discussed.
A confocal set-up is presented that improves micro-XRF and XAFS experiments with high-pressure diamond-anvil cells (DACs). In this set-up a probing volume is defined by the focus of the incoming synchrotron radiation beam and that of a polycapillary X-ray half-lens with a very long working distance, which is placed in front of the fluorescence detector. This arrangement enhances the quality of the fluorescence and XAFS spectra, and thus the sensitivity for detecting elements at low concentrations. It efficiently suppresses signal from outside the sample chamber, which stems from elastic and inelastic scattering of the incoming beam by the diamond anvils as well as from excitation of fluorescence from the body of the DAC.
The Net Reclassification Improvement (NRI) has become a popular metric for evaluating improvement in disease prediction models through the past years. The concept is relatively straightforward but usage and interpretation has been different across studies. While no thresholds exist for evaluating the degree of improvement, many studies have relied solely on the significance of the NRI estimate. However, recent studies recommend that statistical testing with the NRI should be avoided. We propose using confidence ellipses around the estimated values of event and non-event NRIs which might provide the best measure of variability around the point estimates. Our developments are illustrated using practical examples from EPIC-Potsdam study.
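As a minimal illustration of the quantities the abstract refers to, the following sketch (with toy data, not from the EPIC-Potsdam study) computes the event and non-event components of the categorical NRI whose joint uncertainty the proposed confidence ellipses would visualize:

```python
import numpy as np

def category_nri(old_cat, new_cat, event):
    """Event and non-event components of the categorical Net Reclassification
    Improvement: proportion reclassified upward minus downward among events,
    and downward minus upward among non-events."""
    old_cat, new_cat, event = (np.asarray(a) for a in (old_cat, new_cat, event))
    up = new_cat > old_cat
    down = new_cat < old_cat
    nri_event = up[event].mean() - down[event].mean()
    nri_nonevent = down[~event].mean() - up[~event].mean()
    return nri_event, nri_nonevent

# toy data: the new model moves one event up and one non-event down a category
old_cat = [0, 0, 1, 1, 2, 0]
new_cat = [1, 0, 1, 0, 2, 0]
event = [True, True, False, False, True, False]
nri_e, nri_ne = category_nri(old_cat, new_cat, event)  # both 1/3 here
```

A confidence ellipse would then be drawn around the point (nri_e, nri_ne), for instance from a bootstrap estimate of their joint covariance, rather than significance-testing each component separately.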
Time-delayed collection field (TDCF) and bias-amplified charge extraction (BACE) are applied to as-prepared and annealed poly(3-hexylthiophene):[6,6]-phenyl C-71 butyric acid methyl ester (P3HT:PCBM) blends coated from chloroform. Despite large differences in fill factor, short-circuit current, and power conversion efficiency, both blends exhibit a negligible dependence of photogeneration on the electric field and strictly bimolecular recombination (BMR) with a weak dependence of the BMR coefficient on charge density. Drift-diffusion simulations are performed using the measured coefficients and mobilities, taking into account bimolecular recombination and the possible effects of surface recombination. The excellent agreement between the simulation and the experimental data for an intensity range covering two orders of magnitude indicates that a field-independent generation rate and a density-independent recombination coefficient describe the current-voltage characteristics of the annealed P3HT:PCBM devices, while the performance of the as-prepared blend is shown to be limited by space charge effects due to a low hole mobility. Finally, even though the bimolecular recombination coefficient is small, surface recombination is found to be a negligible loss mechanism in these solar cells.
A conclusione degli eccetera
(2011)
Technological advancements are giving rise to the fourth industrial revolution - Industry 4.0 - characterized by the mass employment of smart objects in highly reconfigurable and thoroughly connected industrial product-service systems. The purpose of this paper is to propose a theory-based knowledge dynamics model in the smart grid scenario that would provide a holistic view on the knowledge-based interactions among smart objects, humans, and other actors as an underlying mechanism of value co-creation in Industry 4.0. A multi-loop and three-layer - physical, virtual, and interface - model of knowledge dynamics is developed by building on the concept of ba - an enabling space for interactions and the emergence of knowledge. The model depicts how big data analytics are just one component in unlocking the value of big data, whereas the tacit engagement of humans-in-the-loop - their sense-making and decision-making - is needed for insights to be evoked from analytics reports and customer needs to be met.
Environmental heterogeneity is a major determinant of plant population dynamics. In semi-arid Kalahari savannas, heterogeneity is created by savanna structure, i.e. by the spatial arrangement and temporal dynamics of woody plant and open grassland microsites. We formulate a conceptual model describing the effects of savanna dynamics on the population dynamics of the animal-dispersed shrub Grewia flava. From empirical results we derive model rules describing effects of savanna structure on several processes in Grewia's life cycle. By formulating the model, we summarise existing information on Grewia demography and identify gaps in this knowledge. Despite a number of such gaps, the model can be used to make certain quantitative predictions. As an example, we apply the model to investigate the role of seed dispersal in Grewia encroachment on rangelands. Model results show that cattle promote encroachment by depositing substantial numbers of seeds in open areas, where Grewia is otherwise dispersal-limited. Finally, we draw some general conclusions about Grewia's life history and population dynamics. Under natural conditions, concentrated seed deposition under woody plants appears to be a key process causing the observed association between Grewia and other woody plants. Furthermore, low rates of recruitment and high adult survival result in slow-motion dynamics of Grewia populations. As a consequence, Grewia populations interact with savanna dynamics on long temporal and short to intermediate spatial scales.
Land-use concepts provide decision support for the most efficient usage options according to sustainable development and multifunctionality requirements. However, developments in landscape-related, agricultural production schemes are primarily driven by economic benefits. Therefore, most agricultural land-use concepts tackle particular problems or interests and lack a systemic perspective. As a result, we discuss a conceptual model for future site-specific agricultural land-use with an inbuilt requirement for adequate experimental sites to enable monitoring systems for a new generation of ecosystem models and for new approaches to address science-stakeholder interactions.
BACKGROUND: Work capacity demands are a concept to describe which psychological capacities are required in a job. Assessing psychological work capacity demands is of specific importance when mental health problems at work endanger work ability. Exploring psychological work capacity demands is the basis for mental hazard analysis or rehabilitative action, e.g. in terms of work adjustment. OBJECTIVE: This is the first study investigating psychological work capacity demands in rehabilitation patients with and without mental disorders. METHODS: A structured interview on psychological work capacity demands (Mini-ICF-Work; Muschalla, 2015; Linden et al., 2015) was done with 166 rehabilitation patients of working age. All interviews were done by a state-licensed socio-medically trained psychotherapist. Inter-rater-reliability was assessed by determining agreement in independent co-rating in 65 interviews. For discriminant validity purposes, participants filled in the Short Questionnaire for Work Analysis (KFZA, Prumper et al., 1994). RESULTS: In different professional fields, different psychological work capacity demands were of importance. The Mini-ICF-Work capacity dimensions reflect different aspects than the KFZA. Patients with mental disorders were longer on sick leave and had worse work ability prognosis than patients without mental disorders, although both groups reported similar work capacity demands. CONCLUSIONS: Psychological work demands - which are highly relevant for work ability prognosis and work adjustment processes - can be explored and differentiated in terms of psychological capacity demands.
As AI technology is increasingly used in production systems, different approaches have emerged, from highly decentralized small-scale AI at the edge level to centralized, cloud-based services used for higher-order optimizations. Each direction has disadvantages, ranging from the lack of computational power at the edge level to the reliance on stable network connections with the centralized approach. Thus, a hybrid approach with centralized and decentralized components that possess specific abilities and interact is preferred. However, the distribution of AI capabilities leads to problems in self-adapting learning systems, as knowledge bases can diverge when no central coordination is present. Edge components will specialize in distinctive patterns (overlearn), which hampers their adaptability for different cases. Therefore, this paper aims to present a concept for a distributed interchangeable knowledge base in cyber-physical production systems (CPPS). The approach is based on various AI components and concepts for each participating node. A service-oriented infrastructure allows a decentralized, loosely coupled architecture of the CPPS. By exchanging knowledge bases between nodes, the overall system should become more adaptive, as each node can “forget” its present specialization.
The past three decades of policy process studies have seen the emergence of a clear intellectual lineage with regard to complexity. Implicitly or explicitly, scholars have employed complexity theory to examine the intricate dynamics of collective action in political contexts. However, the methodological counterparts to complexity theory, such as computational methods, are rarely used and, even if they are, they are often detached from established policy process theory. Building on a critical review of the application of complexity theory to policy process studies, we present and implement a baseline model of policy processes using the logic of coevolving networks. Our model suggests that an actor's influence depends on their environment and on exogenous events facilitating dialogue and consensus-building. Our results validate previous opinion dynamics models and generate novel patterns. Our discussion provides ground for further research and outlines the path for the field to achieve a computational turn.
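As a baseline for the kind of opinion-dynamics model the abstract validates against, here is a minimal Deffuant-style bounded-confidence sketch (a classic textbook model, not the coevolving-network model proposed in the paper; all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_opinions(n=50, steps=200, mu=0.5, eps=0.4):
    """Deffuant-style bounded-confidence dynamics: two randomly chosen actors
    compromise toward each other only if their opinions differ by less than eps."""
    x = rng.uniform(0.0, 1.0, size=n)
    for _ in range(steps * n):
        i, j = rng.choice(n, size=2, replace=False)
        if abs(x[i] - x[j]) < eps:
            shift = mu * (x[j] - x[i])
            x[i] += shift
            x[j] -= shift
    return x

opinions = simulate_opinions()
```

With a small eps the population fragments into several opinion clusters, while a large eps drives convergence toward consensus, a simple analogue of the dialogue- and consensus-facilitating exogenous events discussed above.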
We present a computational evaluation of three hypotheses about sources of deficit in sentence comprehension in aphasia: slowed processing, intermittent deficiency, and resource reduction. The ACT-R based Lewis and Vasishth (2005) model is used to implement these three proposals. Slowed processing is implemented as slowed execution time of parse steps; intermittent deficiency as increased random noise in activation of elements in memory; and resource reduction as reduced spreading activation. As data, we considered subject vs. object relative sentences, presented in a self-paced listening modality to 56 individuals with aphasia (IWA) and 46 matched controls. The participants heard the sentences and carried out a picture verification task to decide on an interpretation of the sentence. These response accuracies are used to identify the best parameters (for each participant) that correspond to the three hypotheses mentioned above. We show that controls have more tightly clustered (less variable) parameter values than IWA; specifically, compared to controls, among IWA there are more individuals with slow parsing times, high noise, and low spreading activation. We find that (a) individual IWA show differential amounts of deficit along the three dimensions of slowed processing, intermittent deficiency, and resource reduction, (b) overall, there is evidence for all three sources of deficit playing a role, and (c) IWA have a more variable range of parameter values than controls. An important implication is that it may be meaningless to talk about sources of deficit with respect to an abstract average IWA; the focus should be on the individual's differential degrees of deficit along different dimensions, and on understanding the causes of variability in deficit between participants.
To provide physically based wind modelling for wind erosion research at the regional scale, a 3D computational fluid dynamics (CFD) wind model was developed. The model was programmed in C based on the Navier-Stokes equations, and it is freely available as open source. Integrated with the spatial analysis and modelling tool (SAMT), the wind model has convenient input preparation and powerful output visualization. To validate the wind model, a series of experiments was conducted in a wind tunnel. A blocking inflow experiment was designed to test the performance of the model in simulating basic fluid processes. A round obstacle experiment was designed to check whether the model could simulate the influences of the obstacle on the wind field. Results show that the measured and simulated wind fields are highly correlated, and that the wind model can simulate both the basic processes of the wind and the influences of the obstacle on the wind field. These results demonstrate the high reliability of the wind model. A digital elevation model (DEM) of an area (3800 m long and 1700 m wide) in the Xilingele grassland in Inner Mongolia (autonomous region, China) was applied to the model, and a 3D wind field was successfully generated. The clear implementation of the model and the adequate validation by wind tunnel experiments laid a solid foundation for the prediction and assessment of wind erosion at the regional scale.
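The simplest building block of a grid-based solver of this kind can be illustrated with one explicit diffusion step on a 2D field; this is a deliberately minimal stand-in for the full Navier-Stokes model, with an illustrative grid size and diffusion coefficient:

```python
def diffuse_step(u, nu=0.1):
    """One explicit time step of 2D diffusion (five-point Laplacian stencil)."""
    n, m = len(u), len(u[0])
    new = [row[:] for row in u]
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            lap = u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1] - 4 * u[i][j]
            new[i][j] = u[i][j] + nu * lap
    return new

# a single wind-speed perturbation in the middle of a 5x5 grid
u = [[0.0] * 5 for _ in range(5)]
u[2][2] = 1.0
u = diffuse_step(u)
assert abs(u[2][2] - 0.6) < 1e-9  # the centre loses 4*nu of its value
assert abs(u[1][2] - 0.1) < 1e-9  # each neighbour gains nu
```

The stencil conserves the total field value, a basic sanity property that wind-tunnel-style validation of a real CFD model also probes.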
Individuals with agrammatic Broca's aphasia experience difficulty when processing reversible non-canonical sentences. Different accounts have been proposed to explain this phenomenon. The Trace Deletion account (Grodzinsky, 1995, 2000, 2006) attributes this deficit to an impairment in syntactic representations, whereas others (e.g., Caplan, Waters, Dede, Michaud, & Reddy, 2007; Haarmann, Just, & Carpenter, 1997) propose that the underlying structural representations are unimpaired, but sentence comprehension is affected by processing deficits, such as slow lexical activation, reduction in memory resources, slowed processing and/or intermittent deficiency, among others. We test the claims of two processing accounts, slowed processing and intermittent deficiency, and two versions of the Trace Deletion Hypothesis (TDH), in a computational framework for sentence processing (Lewis & Vasishth, 2005) implemented in ACT-R (Anderson, Byrne, Douglass, Lebiere, & Qin, 2004). The assumption of slowed processing is operationalized as slow procedural memory, so that each processing action is performed slower than normal, and intermittent deficiency as extra noise in the procedural memory, so that the parsing steps are more noisy than normal. We operationalize the TDH as an absence of trace information in the parse tree. To test the predictions of the models implementing these theories, we use the data from a German sentence-picture matching study reported in Hanne, Sekerina, Vasishth, Burchert, and De Bleser (2011). The data consist of offline (sentence-picture matching accuracies and response times) and online (eye fixation proportions) measures. From among the models considered, the model assuming that both slowed processing and intermittent deficiency are present emerges as the best model of sentence processing difficulty in aphasia.
The modeling of individual differences suggests that, if we assume that patients have both slowed processing and intermittent deficiency, they have them in differing degrees.
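The mapping from the three deficit hypotheses to model parameters can be sketched with a toy retrieval-latency model in the spirit of ACT-R; the parameter names and the simplified latency rule below are illustrative assumptions, not the Lewis and Vasishth (2005) implementation:

```python
import math
import random

def parse_step_latency(default_action_time=0.05, activation_noise=0.15,
                       spreading=1.0, base_activation=0.5, rng=random):
    # slowed processing       -> larger default_action_time
    # intermittent deficiency -> larger activation_noise
    # resource reduction      -> smaller spreading activation
    activation = base_activation + spreading + rng.gauss(0.0, activation_noise)
    retrieval_time = 0.2 * math.exp(-activation)  # ACT-R-style latency rule
    return default_action_time + retrieval_time

random.seed(0)
control = sum(parse_step_latency() for _ in range(2000)) / 2000
slowed  = sum(parse_step_latency(default_action_time=0.15)
              for _ in range(2000)) / 2000
reduced = sum(parse_step_latency(spreading=0.4) for _ in range(2000)) / 2000
```

Fitting such parameters per participant, as the abstract describes, then amounts to finding the combination of slowdown, noise, and spreading activation that best reproduces each individual's response accuracies.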
Successful sentence comprehension requires the comprehender to correctly figure out who did what to whom. For example, in the sentence John kicked the ball, the comprehender has to figure out who did the kicking and what was kicked. This process of identifying and connecting the syntactically related words in a sentence is called dependency completion. What are the cognitive constraints that determine dependency completion? A widely accepted theory is cue-based retrieval. The theory maintains that dependency completion is driven by a content-addressable search for the co-dependents in memory. Cue-based retrieval explains a wide range of empirical data from several constructions, including subject-verb agreement, subject-verb non-agreement, plausibility mismatch configurations, and negative polarity items.
However, there are two major empirical challenges to the theory: (i) Grammatical sentences’ data from subject-verb number agreement dependencies, where the theory predicts a slowdown at the verb in sentences like the key to the cabinet was rusty compared to the key to the cabinets was rusty, but the data are inconsistent with this prediction; and, (ii) Data from antecedent-reflexive dependencies, where a facilitation in reading times is predicted at the reflexive in the bodybuilder who worked with the trainers injured themselves vs. the bodybuilder who worked with the trainer injured themselves, but the data do not show a facilitatory effect.
The work presented in this dissertation is dedicated to building a more general theory of dependency completion that can account for the above two datasets without losing the original empirical coverage of the cue-based retrieval assumption. In two journal articles, I present computational modeling work that addresses the above two empirical challenges.
To explain the grammatical sentences’ data from subject-verb number agreement dependencies, I propose a new model that assumes that the cue-based retrieval operates on a probabilistically distorted representation of nouns in memory (Article I). This hybrid distortion-plus-retrieval model was compared against the existing candidate models using data from 17 studies on subject-verb number agreement in 4 languages. I find that the hybrid model outperforms the existing models of number agreement processing suggesting that the cue-based retrieval theory must incorporate a feature distortion assumption.
To account for the absence of a facilitatory effect in antecedent-reflexive dependencies, I propose an individual-difference model, built within the cue-based retrieval framework (Article II). The model assumes that individuals may differ in how strongly they weigh a syntactic cue over a number cue. The model was fitted to data from two studies on antecedent-reflexive dependencies, and the participant-level cue weighting was estimated. We find that one-fourth of the participants, in both studies, weigh the syntactic cue higher than the number cue in processing reflexive dependencies, while the remaining participants weigh the two cues equally. The result indicates that the absence of the predicted facilitatory effect at the level of grouped data is driven by some, not all, participants who weigh syntactic cues higher than the number cue. More generally, the result demonstrates that the assumption of differential cue weighting is important for a theory of dependency completion processes. This differential cue-weighting idea was independently supported by a modeling study on subject-verb non-agreement dependencies (Article III).
Overall, the cue-based retrieval, which is a general theory of dependency completion, needs to incorporate two new assumptions: (i) the nouns stored in memory can undergo probabilistic feature distortion, and (ii) the linguistic cues used for retrieval can be weighted differentially. This is the cumulative result of the modeling work presented in this dissertation.
The dissertation makes an important theoretical contribution: Sentence comprehension in humans is driven by a mechanism that assumes cue-based retrieval, probabilistic feature distortion, and differential cue weighting. This insight is theoretically important because there is some independent support for these three assumptions in sentence processing and the broader memory literature. The modeling work presented here is also methodologically important because for the first time, it demonstrates (i) how the complex models of sentence processing can be evaluated using data from multiple studies simultaneously, without oversimplifying the models, and (ii) how the inferences drawn from the individual-level behavior can be used in theory development.
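The differential cue-weighting idea can be sketched as follows; the items, features, and weights are illustrative toy values, not the fitted participant-level estimates from the dissertation:

```python
def retrieval_scores(items, cues, weights):
    """Activation of each memory item = sum of weighted cue matches.
    The highest-scoring item is retrieved as the dependent."""
    scores = {}
    for name, features in items.items():
        scores[name] = sum(weights[c] for c, v in cues.items()
                           if features.get(c) == v)
    return max(scores, key=scores.get), scores

# candidate antecedents for 'themselves' in
# 'the bodybuilder who worked with the trainers injured themselves'
items = {
    "bodybuilder": {"subject": True,  "number": "sg"},
    "trainers":    {"subject": False, "number": "pl"},
}
cues = {"subject": True, "number": "pl"}  # cues set at the reflexive

# equal weighting: subject match and number match tie
_, equal = retrieval_scores(items, cues, {"subject": 1.0, "number": 1.0})
# syntactic cue weighted higher: the grammatical subject wins outright
winner, _ = retrieval_scores(items, cues, {"subject": 2.0, "number": 1.0})
```

Participants who weigh the syntactic cue higher behave like the second configuration and show no facilitation from the number-matching distractor, which is how differential weighting can mask a group-level facilitatory effect.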
A comprehensive workflow to analyze ensembles of globally inverted 2D electrical resistivity models
(2022)
Electrical resistivity tomography (ERT) aims at imaging the subsurface resistivity distribution and provides valuable information for different geological, engineering, and hydrological applications. To obtain a subsurface resistivity model from measured apparent resistivities, stochastic or deterministic inversion procedures may be employed. Typically, the inversion of ERT data results in non-unique solutions; i.e., an ensemble of different models explains the measured data equally well. In this study, we perform inference analysis of model ensembles generated using a well-established global inversion approach to assess uncertainties related to the nonuniqueness of the inverse problem. Our interpretation strategy starts by establishing model selection criteria based on different statistical descriptors calculated from the data residuals. Then, we perform cluster analysis considering the inverted resistivity models and the corresponding data residuals. Finally, we evaluate model uncertainties and residual distributions for each cluster. To illustrate the potential of our approach, we use a particle swarm optimization (PSO) algorithm to obtain an ensemble of 2D layer-based resistivity models from a synthetic data example and a field data set collected in Loon-Plage, France. Our strategy performs well for both synthetic and field data and allows us to extract different plausible model scenarios with their associated uncertainties and data residual distributions. Although we demonstrate our workflow using 2D ERT data and a PSO-based inversion approach, the proposed strategy is general and can be adapted to analyze model ensembles generated from other kinds of geophysical data and using different global inversion approaches.
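The core of the workflow, scoring ensemble members by their data residuals and clustering them, can be sketched as follows; the synthetic ensemble and the simple two-cluster split are illustrative assumptions standing in for the full PSO ensemble and cluster analysis:

```python
import random
import statistics

def rms(residuals):
    """RMS data residual, one possible selection descriptor."""
    return (sum(r * r for r in residuals) / len(residuals)) ** 0.5

def two_means(values, iters=20):
    """Crude 1D two-means clustering of residual scores."""
    lo, hi = min(values), max(values)
    for _ in range(iters):
        groups = ([], [])
        for v in values:
            groups[abs(v - lo) > abs(v - hi)].append(v)
        lo = statistics.mean(groups[0]) if groups[0] else lo
        hi = statistics.mean(groups[1]) if groups[1] else hi
    return groups

random.seed(3)
# toy ensemble: (resistivity estimate, data residuals); 30 good fits, 10 poor
ensemble = [(100 + random.gauss(0, 5), [random.gauss(0, s) for _ in range(50)])
            for s in [0.1] * 30 + [0.5] * 10]
scores = [rms(res) for _, res in ensemble]
good, poor = two_means(scores)
```

Per-cluster statistics of the associated models (e.g., spread of layer resistivities within `good`) then give the cluster-wise uncertainties the abstract describes.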
Narcissists are assumed to lack the motivation and ability to share and understand the mental states of others. Prior empirical research, however, has yielded inconclusive findings and has differed with respect to the specific aspects of narcissism and socioemotional cognition that have been examined. Here, we propose a differentiated facet approach that can be applied across research traditions and that distinguishes between facets of narcissism (agentic vs. antagonistic) on the one hand, and facets of socioemotional cognition ability (SECA; self-perceived vs. actual) on the other. Using five nonclinical samples in two studies (total N = 602), we investigated the effect of facets of grandiose narcissism on aspects of socioemotional cognition across measures of affective and cognitive empathy, Theory of Mind, and emotional intelligence, while also controlling for general reasoning ability. Across both studies, agentic facets of narcissism were found to be positively related to perceived SECA, whereas antagonistic facets of narcissism were found to be negatively related to perceived SECA. However, both narcissism facets were negatively related to actual SECA. Exploratory condition-based regression analyses further showed that agentic narcissists had a higher directed discrepancy between perceived and actual SECA: they self-enhanced their socioemotional capacities. Implications of these results for the multifaceted theoretical understanding of the narcissism-SECA link are discussed.
The improvement of process representations in hydrological models is often driven only by the modelers' knowledge and data availability. We present a comprehensive comparison between two hydrological models of different complexity, developed to support (1) the understanding of the differences between model structures and (2) the identification of the observations needed for model assessment and improvement. The comparison is conducted in both space and time by aggregating the outputs at different spatiotemporal scales. In the present study, mHM, a process-based hydrological model, and ParFlow-CLM, an integrated subsurface-surface hydrological model, are used. The models are applied in a mesoscale catchment in Germany. Both models agree in the simulated river discharge at the outlet and the surface soil moisture dynamics, lending support to some model applications (e.g., drought monitoring). Different model sensitivities are, however, found when comparing evapotranspiration and soil moisture at different soil depths. The analysis supports the need for observations within the catchment for model assessment, but it indicates that different strategies should be considered for the different variables. Evapotranspiration measurements are needed at daily resolution across several locations, while highly resolved spatially distributed observations with lower temporal frequency are required for soil moisture. Finally, the results show the impact of the shallow groundwater system simulated by ParFlow-CLM and the need to account for the related soil moisture redistribution. Our comparison strategy can be applied to other model types and environmental conditions to strengthen the dialog between modelers and experimentalists for improving process representations in Earth system models.
Over the past decades, natural hazards, many of which are aggravated by climate change and show an increasing trend in frequency and intensity, have caused significant human and economic losses and pose a considerable obstacle to sustainable development. Hence, dedicated action toward disaster risk reduction is needed to understand the underlying drivers and create efficient risk mitigation plans. Such action is requested by the Sendai Framework for Disaster Risk Reduction 2015-2030 (SFDRR), a global agreement launched in 2015 that establishes priorities for action, e.g., an improved understanding of disaster risk. Turkey is one of the SFDRR contracting countries and has been severely affected by many natural hazards, in particular earthquakes and floods. However, disproportionately little is known about flood hazards and risks in Turkey. Therefore, this thesis aims to carry out the first comprehensive analysis of flood hazards in Turkey, from triggering drivers to impacts. It is intended to contribute to a better understanding of flood risks, improvements in flood risk mitigation and the facilitated monitoring of progress and achievements while implementing the SFDRR.
In order to investigate the occurrence and severity of flooding in comparison to other natural hazards in Turkey and provide an overview of the temporal and spatial distribution of flood losses, the Turkey Disaster Database (TABB) was examined for the years 1960-2014. The TABB database was reviewed through comparison with the Emergency Events Database (EM-DAT), the Dartmouth Flood Observatory database, the scientific literature and news archives. In addition, data on the most severe flood events between 1960 and 2014 were retrieved. These served as a basis for analyzing triggering mechanisms (i.e. atmospheric circulation and precipitation amounts) and aggravating pathways (i.e. topographic features, catchment size, land use types and soil properties). For this, a new approach was developed and the events were classified using hierarchical cluster analyses to identify the main influencing factor per event and provide additional information about the dominant flood pathways for severe floods. The main idea of the study was to start from the event impacts in a bottom-up approach and identify the causes that created damaging events, instead of applying a model chain with long-term series as input and searching for potentially impacting events as model outcomes. However, within the frequency analysis of the flood-triggering circulation pattern types, it was discovered that some heavy-precipitation events were not included in the list of most severe floods, i.e. their impacts were not recorded in national and international loss databases but were mentioned in news archives and reported by the Turkish State Meteorological Service. This finding challenges bottom-up modelling approaches and underlines the urgent need for consistent event and loss documentation.
Therefore, as a next step, the aim was to enhance the flood loss documentation by calibrating, validating and applying the United Nations Office for Disaster Risk Reduction (UNDRR) loss estimation method for the recent severe flood events (2015-2020). This provided a consistent flood loss estimation model for Turkey, allowing governments to estimate losses as quickly as possible after events, e.g. to better coordinate financial aid.
This thesis reveals that, after earthquakes, floods have the second most destructive effects in Turkey in terms of human and economic impacts, with over 800 fatalities and US$ 885.7 million in economic losses between 1960 and 2020, and that floods therefore deserve more attention at the national scale. The clustering results of the dominant flood-producing mechanisms (e.g. circulation pattern types, extreme rainfall, sudden snowmelt) provide crucial information regarding source and pathway identification, which can be used as base information for hazard identification in the preliminary risk assessment process. The implementation of the UNDRR loss estimation model shows that the model, with country-specific parameters, calibrated damage ratios and sufficient event documentation (i.e. physically damaged units), can be recommended for providing first estimates of the magnitude of direct economic losses, even shortly after events have occurred, since it performed well when estimates were compared to documented losses.
The presented results can contribute to improving the national disaster loss database in Turkey and thus enable a better monitoring of the national progress and achievements with regard to the targets stated by the SFDRR. In addition, the outcomes can be used to better characterize and classify flood events. Information on the main underlying factors and aggravating flood pathways further supports the selection of suitable risk reduction policies.
All input variables used in this thesis were obtained from publicly available data. The results are openly accessible and can be used for further research.
As an overall conclusion, it can be stated that consistent loss data collection and better event documentation should receive more attention to enable reliable monitoring of the implementation of the SFDRR. Better event documentation should be established in Turkey according to a globally accepted standard for disaster classification and loss estimation. Ultimately, this will enable stakeholders to create better risk mitigation actions based on clear hazard definitions, flood event classification and consistent loss estimations.
Home range estimation is routine practice in ecological research. While advances in animal tracking technology have increased our capacity to collect data to support home range analysis, these same advances have also resulted in increasingly autocorrelated data. Consequently, the question of which home range estimator to use on modern, highly autocorrelated tracking data remains open. This question is particularly relevant given that most estimators assume independently sampled data. Here, we provide a comprehensive evaluation of the effects of autocorrelation on home range estimation. We base our study on an extensive data set of GPS locations from 369 individuals representing 27 species distributed across five continents. We first assemble a broad array of home range estimators, including Kernel Density Estimation (KDE) with four bandwidth optimizers (Gaussian reference function, autocorrelated-Gaussian reference function [AKDE], Silverman's rule of thumb, and least squares cross-validation), Minimum Convex Polygon, and Local Convex Hull methods. Notably, all of these estimators except AKDE assume independent and identically distributed (IID) data. We then employ half-sample cross-validation to objectively quantify estimator performance, and the recently introduced effective sample size for home range area estimation (N̂_area) to quantify the information content of each data set. We found that AKDE 95% area estimates were larger than conventional IID-based estimates by a mean factor of 2. The median number of cross-validated locations included in the hold-out sets by AKDE 95% (or 50%) estimates was 95.3% (or 50.1%), confirming that the larger AKDE ranges were appropriately selective at the specified quantile. Conversely, conventional estimates exhibited negative bias that increased with decreasing N̂_area. To contextualize our empirical results, we performed a detailed simulation study to tease apart how sampling frequency, sampling duration, and the focal animal's movement conspire to affect range estimates. Paralleling our empirical results, the simulation study demonstrated that AKDE was generally more accurate than conventional methods, particularly for small N̂_area. While 72% of the 369 empirical data sets had >1,000 total observations, only 4% had an N̂_area >1,000, while 30% had an N̂_area <30. In this frequently encountered scenario of small N̂_area, AKDE was the only estimator capable of producing an accurate home range estimate on autocorrelated data.
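Why autocorrelation shrinks the information content of a track can be sketched with a toy simulation; the autoregressive track generator and the lag-1 formula for the effective sample size are crude illustrative stand-ins for the N̂_area statistic used in the paper:

```python
import math
import random

def simulate_track(n, dt=1.0, tau=10.0, sigma=1.0, rng=random):
    """1D positions from an Ornstein-Uhlenbeck-like process (toy animal track)."""
    phi = math.exp(-dt / tau)  # lag-1 autocorrelation of the sampled process
    x = [0.0]
    for _ in range(n - 1):
        x.append(phi * x[-1] + sigma * math.sqrt(1 - phi ** 2) * rng.gauss(0, 1))
    return x

def effective_n(x):
    """Approximate effective sample size from the lag-1 autocorrelation."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    cov = sum((x[i] - mean) * (x[i + 1] - mean)
              for i in range(len(x) - 1)) / (len(x) - 1)
    rho = min(max(cov / var if var > 0 else 0.0, 0.0), 0.999)
    return len(x) * (1 - rho) / (1 + rho)

random.seed(7)
track = simulate_track(1000)
n_eff = effective_n(track)
```

With a correlation timescale of 10 sampling intervals, 1,000 recorded locations carry far fewer effectively independent samples, mirroring the paper's finding that most data sets with >1,000 observations have a much smaller N̂_area.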
Companies develop process models to explicitly describe their business operations. At the same time, these business operations, i.e., business processes, must adhere to various types of compliance requirements. Regulations, e.g., the Sarbanes-Oxley Act of 2002, internal policies, and best practices are just a few sources of compliance requirements. In some cases, non-adherence to compliance requirements makes the organization subject to legal punishment. In other cases, non-adherence leads to loss of competitive advantage and thus loss of market share. Unlike the classical domain-independent behavioral correctness of business processes, compliance requirements are domain-specific. Moreover, compliance requirements change over time: new requirements might appear due to changes in laws and the adoption of new policies. Compliance requirements are imposed or enforced by different entities that have different objectives behind these requirements. Finally, compliance requirements might affect different aspects of business processes, e.g., control flow and data flow. As a result, it is infeasible to hard-code compliance checks into tools. Rather, a repeatable process of modeling compliance rules and checking them against business processes automatically is needed. This thesis provides a formal approach to support design-time compliance checking of processes. Using visual patterns, it is possible to model compliance requirements concerning control flow, data flow and conditional flow rules. Each pattern is mapped into a temporal logic formula. The thesis addresses the problem of consistency checking among various compliance requirements, as they might stem from divergent sources. The thesis also contributes an approach to automatically check compliance requirements against process models using model checking. We show that extra domain knowledge, beyond what is expressed in compliance rules, is needed to reach correct decisions. In case of violations, we are able to provide useful feedback to the user.
The feedback is in the form of parts of the process model whose execution causes the violation. In some cases, our approach is capable of providing automated remedy of the violation.
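The pattern-to-formula idea can be sketched for one classic compliance pattern, "response" (G(trigger -> F obligation)); the check below runs over finite execution traces as a simplified stand-in for full model checking of a process model, and the activity names are invented for illustration:

```python
def response_holds(trace, trigger, obligation):
    """Check G(trigger -> F obligation) over one finite execution trace."""
    for i, act in enumerate(trace):
        if act == trigger and obligation not in trace[i + 1:]:
            return False
    return True

def check_model(traces, trigger, obligation):
    """Return the violating traces, usable as feedback to the user."""
    return [t for t in traces if not response_holds(t, trigger, obligation)]

traces = [
    ["receive_order", "request_payment", "approve_payment", "ship"],
    ["receive_order", "request_payment", "ship"],  # payment never approved
]
violations = check_model(traces, "request_payment", "approve_payment")
```

Returning the offending traces, rather than a bare yes/no verdict, mirrors the feedback style described above: the user sees which parts of the process execution cause the violation.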
The quantification of the spatial propagation of extreme precipitation events is vital for water resources planning and disaster mitigation. However, quantifying these extreme events has always been challenging, as many traditional methods are insufficient to capture the nonlinear interrelationships between extreme event time series. Therefore, it is crucial to develop suitable methods for analyzing the dynamics of extreme events over a river basin with a diverse climate and complicated topography. Over the last decade, complex network analysis emerged as a powerful tool to study the intricate spatiotemporal relationships between many variables in a compact way. In this study, we employ two nonlinear concepts, event synchronization and edit distance, to investigate the extreme precipitation pattern in the Ganga river basin. We use the network degree to understand the spatial synchronization pattern of extreme rainfall and identify essential sites in the river basin with respect to potential prediction skills. The study also attempts to quantify the influence of precipitation seasonality and topography on extreme events. The findings of the study reveal that (1) the network degree decreases from southwest to northwest, (2) the timing of the 50th percentile of precipitation within a year influences the spatial distribution of degree, (3) this timing is inversely related to elevation, and (4) lower elevations greatly influence the connectivity of the sites. The study highlights that edit distance could be a promising alternative for analyzing event-like data by incorporating event timing and amplitude when constructing complex networks of climate extremes.
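The core idea of event synchronization can be sketched as follows; for simplicity the coincidence window tau is fixed here, whereas the full method adapts it to local inter-event times, and the event dates are invented:

```python
def event_sync(times_a, times_b, tau=2):
    """Fraction of events at site A with a partner event at site B
    within a fixed coincidence window tau (simplified event synchronization)."""
    count = 0
    for ta in times_a:
        if any(abs(ta - tb) <= tau for tb in times_b):
            count += 1
    return count / max(len(times_a), 1)

site_a = [3, 10, 25, 40]   # days with extreme rainfall at site A
site_b = [4, 11, 30, 41]   # days with extreme rainfall at site B
q = event_sync(site_a, site_b)  # 3 of 4 events at A have a close partner at B
```

Thresholding such pairwise synchronization values yields network links between sites, and a site's network degree, the quantity analyzed in the study, is simply its number of links.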
A long-standing and profound problem in astronomy is the difficulty in obtaining deep near-infrared observations due to the extreme brightness and variability of the night sky at these wavelengths. A solution to this problem is crucial if we are to obtain the deepest possible observations of the early Universe, as redshifted starlight from distant galaxies appears at these wavelengths. The atmospheric emission between 1,000 and 1,800 nm arises almost entirely from a forest of extremely bright, very narrow hydroxyl emission lines that varies on timescales of minutes. The astronomical community has long envisaged the prospect of selectively removing these lines, while retaining high throughput between them. Here we demonstrate such a filter for the first time, presenting results from the first on-sky tests. Its use on current 8 m telescopes and future 30 m telescopes will open up many new research avenues in the years to come.