This study investigates whether number dissimilarities on subject and object DPs facilitate the comprehension of subject- and object-extracted centre-embedded relative clauses in children with Grammatical Specific Language Impairment (G-SLI). We compared the performance of a group of English-speaking children with G-SLI (mean age: 12;11) with that of two groups of younger typically developing (TD) children, matched on grammar and receptive vocabulary, respectively. All groups were more accurate on subject-extracted relative clauses than object-extracted ones and, crucially, they all showed greater accuracy for sentences with dissimilar number features (i.e., one singular, one plural) on the head noun and the embedded DP. These findings are interpreted in the light of current psycholinguistic models of sentence comprehension in TD children and provide further insight into the linguistic nature of G-SLI.
The predictions of two contrasting approaches to the acquisition of transitive relative clauses were tested within the same groups of German-speaking participants aged from 3 to 5 years old. The input frequency approach predicts that object relative clauses with inanimate heads (e.g., the pullover that the man is scratching) are comprehended earlier and more accurately than those with an animate head (e.g., the man that the boy is scratching). In contrast, the structural intervention approach predicts that object relative clauses with two full NP arguments mismatching in number (e.g., the man that the boys are scratching) are comprehended earlier and more accurately than those with number-matching NPs (e.g., the man that the boy is scratching). These approaches were tested in two steps. First, we ran a corpus analysis to ensure that object relative clauses with number-mismatching NPs are not more frequent than object relative clauses with number-matching NPs in child-directed speech. Next, the comprehension of these structures was tested experimentally in 3-, 4-, and 5-year-olds by means of a color naming task. By comparing the predictions of the two approaches within the same participant groups, we were able to uncover that the effects predicted by the input frequency and the structural intervention approaches co-exist and that both influence the performance of children on transitive relative clauses, in a manner that is modulated by age. These results reveal that sensitivity to animacy mismatch is already demonstrated by 3-year-olds and show that animacy is initially deployed more reliably than number to interpret relative clauses correctly. In all age groups, the animacy mismatch appears to explain the performance of children, showing that the comprehension of frequent object relative clauses is enhanced compared to the other conditions.
Starting with 4-year-olds, but especially in 5-year-olds, the number mismatch supported comprehension, a facilitation that is unlikely to be driven by input frequency. Once children fine-tune their sensitivity to verb agreement information around the age of four, they are also able to deploy number marking to overcome the intervention effects. This study highlights the importance of experimentally testing contrasting theoretical approaches in order to characterize the multifaceted, developmental nature of language acquisition.
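The corpus step above, checking that number-mismatching object relatives are not over-represented in child-directed speech, amounts to a one-sided frequency test. A minimal sketch with purely hypothetical counts (the abstract does not report the actual corpus figures):

```python
from math import comb

def binomial_test_greater(successes, n, p0=0.5):
    """Exact one-sided binomial test: P(X >= successes) for X ~ Bin(n, p0)."""
    return sum(comb(n, k) * p0**k * (1 - p0)**(n - k)
               for k in range(successes, n + 1))

# Hypothetical counts (NOT the study's corpus figures): among object relative
# clauses (ORCs) in child-directed speech whose two NPs could differ in
# number, how many actually mismatch?
n_orc = 240        # ORCs with two full NP arguments
n_mismatch = 110   # of those, number-mismatching

p = binomial_test_greater(n_mismatch, n_orc)
# A large p means no evidence that mismatching ORCs are MORE frequent than
# matching ones, the precondition the corpus analysis needed to establish.
print(f"p = {p:.3f}")
```

Any standard one-sided proportion test would serve here; the exact binomial form is used only because it needs no external libraries.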
We elicited the production of various types of relative clauses in a group of German-speaking children with specific language impairment (SLI) and typically developing controls in order to test the movement optionality account of grammatical difficulty in SLI. The results show that German-speaking children with SLI are impaired in relative clause production compared to typically developing children. The alternative structures that they produce consist of simple main clauses, as well as nominal and prepositional phrases produced in isolation, sometimes contextually appropriate, and sometimes not. Crucially for evaluating the movement optionality account, children with SLI produce very few instances of embedded clauses where the relative clause head noun is pronounced in situ; in fact, such responses are more common among the typically developing child controls. These results underscore the difficulty German-speaking children with SLI have with structures involving movement, but provide no specific support for the movement optionality account.
The Relativized Minimality approach to A'-dependencies (Friedmann et al., 2009) predicts that headed object relative clauses (RCs) and which questions are the most difficult, due to the presence of a lexical restriction on both the subject and the object DP which creates intervention. We investigated comprehension of center-embedded headed object RCs with Italian children, where Number and Gender feature values on subject and object DPs were manipulated. We found that Number conditions are always more accurate than Gender ones, showing that intervention is sensitive to DP-internal structure. We propose a finer definition of the lexical restriction whereby external and syntactically active features (such as Number) reduce intervention, whereas internal and (possibly) lexicalized features (such as Gender) do so to a lesser extent. Our results are also compatible with a memory interference approach in which the human parser is sensitive to highly specific properties of the linguistic input, such as the cue-based model (Van Dyke, 2007).
Models of union behavior
(1995)
The effect of worker representation on employment behaviour in Germany: another case of -2.5%
(2004)
Quality attributes of fruit determine its acceptability by the retailer and consumer. The objective of this work was to investigate the potential of absorption (μa) and reduced scattering (μs’) coefficients of European pear to analyze its fruit flesh firmness and soluble solids content (SSC). The absolute reference values, μa* (cm−1) and μs’* (cm−1), of pear were invasively measured, employing multi-spectral photon density wave (PDW) spectroscopy at preselected wavelengths of 515, 690, and 940 nm considering two batches of unripe and overripe fruit. On eight measuring dates during fruit development, μa and μs’ were analyzed non-destructively by means of laser light backscattering imaging (LLBI) at similar wavelengths of 532, 660, and 830 nm by fitting according to Farrell’s diffusion theory, using fixed reference values of either μa* or μs’*. Both μa* and μa, as well as μs’* and μs’, showed similar trends. Considering the non-destructively measured data during fruit development, μa at 660 nm decreased between 91 and 141 days after full bloom (dafb) from 1.49 cm−1 to 0.74 cm−1 due to chlorophyll degradation. At 830 nm, μa only slightly decreased from 0.41 cm−1 to 0.35 cm−1. The μs’ at all wavelengths revealed a decreasing trend as the fruit developed. The difference measured at 532 nm was most pronounced, decreasing from 24 cm−1 to 10 cm−1, while at 660 nm and 830 nm values decreased from 15 cm−1 to 13 cm−1 and from 10 cm−1 to 8 cm−1, respectively. When building calibration models with partial least-squares regression analysis on the optical properties for non-destructive analysis of the fruit SSC, μa at 532 nm and 830 nm resulted in a correlation coefficient of R = 0.66, albeit with high measurement uncertainty. The combination of all three wavelengths gave an enhanced, encouraging R = 0.89 for firmness analysis using μs’ in the freshly picked fruit.
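The reported R values are correlation coefficients between optical properties and reference quality measurements. As a toy illustration of a single-wavelength calibration: the abstract gives the μs’ endpoints at 532 nm (24 to 10 cm−1) over the eight dates, but the intermediate points and all firmness values below are invented, and the study itself used partial least-squares regression over all three wavelengths rather than a simple correlation:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Endpoints (24 -> 10 cm^-1) are from the abstract; intermediate points and
# all firmness values (N) are hypothetical, for illustration only.
mus_532 = [24, 21, 19, 17, 15, 13, 11, 10]   # reduced scattering, 8 dates
firmness = [60, 55, 48, 44, 38, 33, 29, 27]  # hypothetical flesh firmness

r = pearson_r(mus_532, firmness)
print(f"R = {r:.2f}")
```

Since both series decline monotonically through ripening, the toy correlation comes out strongly positive, which is the qualitative pattern the calibration exploits.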
The central rift of the Red Sea has 25 brine pools with different physical and geochemical characteristics. Atlantis II (ATIID), Discovery Deep (DD) and Chain Deep (CD) are characterized by high salinity, temperature and metal content. Several studies reported microbial communities in these brine pools, but few studies addressed the brine pool sediments. Therefore, sediment cores were collected from the ATIID, DD and CD brine pools and an adjacent brine-influenced site. Sixteen different lithologic sediment sections were subjected to shotgun DNA pyrosequencing to generate 1.47 billion base pairs (1.47 × 10⁹ bp). We generated sediment-specific reads and attempted to annotate all reads. We report the phylogenetic and biochemical uniqueness of the deepest ATIID sulfur-rich brine pool sediments. In contrast to all other sediment sections, bacteria dominate the deepest ATIID sulfur-rich brine pool sediments. The decrease in virus-to-bacteria ratio in selected sections and depths coincided with an overrepresentation of mobile genetic elements. Skewing in the composition of viruses to mobile genetic elements may uniquely contribute to the distinct microbial consortium in sediments in proximity to hydrothermally active vents of the Red Sea, and possibly in their surroundings, through differential horizontal gene transfer.
The aim of this work was the generation of carbon materials with high surface area, exhibiting a hierarchical pore system in the macro- and mesorange. Such a pore system facilitates transport through the material and enhances the interaction with the carbon matrix (macropores have diameters > 50 nm, mesopores 2–50 nm). To this end, new strategies for the synthesis of novel carbon materials with designed porosity were developed that are in particular useful for the storage of energy. Besides the porosity, it is the graphene structure itself that determines the properties of a carbon material. Non-graphitic carbon materials usually exhibit a rather large degree of disorder with many defects in the graphene structure, and thus exhibit inherent microporosity (d < 2 nm). These pores act as traps and oppose reversible interaction with the carbon matrix. Furthermore, they reduce the stability and conductivity of the carbon material, which is undesirable for the proposed applications. As one part of this work, the graphene structures of different non-graphitic carbon materials were studied in detail using a novel wide-angle X-ray scattering model that yielded precise information about the nature of the carbon building units (graphene stacks). Different carbon precursors were evaluated regarding their potential use for the syntheses shown in this work; mesophase pitch proved to be advantageous when a less disordered carbon microstructure is desired. By using mesophase pitch as carbon precursor, two templating strategies were developed using the nanocasting approach. The synthesized (monolithic) materials combined for the first time the advantages of a hierarchical interconnected pore system in the macro- and mesorange with the advantages of mesophase pitch as carbon precursor. In the first case, hierarchical macro-/mesoporous carbon monoliths were synthesized by replication of hard (silica) templates.
Thus, a suitable synthesis procedure was developed that allowed the infiltration of the template with the hardly soluble carbon precursor. In the second case, hierarchical macro- / mesoporous carbon materials were synthesized by a novel soft-templating technique, taking advantage of the phase separation (spinodal decomposition) between mesophase pitch and polystyrene. The synthesis also allowed the generation of monolithic samples and incorporation of functional nanoparticles into the material. The synthesized materials showed excellent properties as an anode material in lithium batteries and support material for supercapacitors.
A concentrated solution of a symmetric triblock copolymer with a thermoresponsive poly(methoxy diethylene glycol acrylate) (PMDEGA) middle block and short hydrophobic, fully deuterated polystyrene end blocks is investigated in D2O, where it undergoes a lower critical solution temperature-type phase transition at ca. 36 °C. Small-angle neutron scattering (SANS) in a wide temperature range (15–50 °C) is used to characterize the size and inner structure of the micelles as well as the correlation between the micelles and the formation of aggregates by the micelles above the cloud point (CP). A model featuring spherical core-shell micelles, which are correlated by a hard-sphere potential or a sticky hard-sphere potential together with a Guinier form factor describing aggregates formed by the micelles above the CP, fits the SANS curves well in the entire temperature range. The thickness of the thermoresponsive micellar PMDEGA shell as well as the hard-sphere radius increase slightly already below the cloud point. Whereas the thickness of the thermoresponsive micellar shell hardly shrinks when heating through the CP and up to 50 °C, the hard-sphere radius decreases within 3.5 K at the CP. The volume fraction already decreases significantly below the CP, which may be at the origin of the previously observed gel-sol transition far below the CP (Miasnikova et al., Langmuir 28: 4479-4490, 2012). Above the CP, small aggregates are formed by the micelles, and at higher temperatures, large ones.
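The Guinier form factor used above to describe the aggregates relates the low-q scattered intensity to the aggregate's radius of gyration, I(q) ≈ I0 exp(-q²Rg²/3), so Rg can be read off the slope of ln I versus q². A self-contained sketch on noise-free synthetic data (arbitrary units, not values from the study):

```python
from math import exp, log, sqrt

def guinier_rg(q, intensity):
    """Estimate the radius of gyration from the slope of ln I versus q^2,
    valid in the Guinier regime (q * Rg well below ~1.3)."""
    x = [qi ** 2 for qi in q]
    y = [log(i) for i in intensity]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return sqrt(-3 * slope)  # ln I = ln I0 - q^2 Rg^2 / 3

# Noise-free synthetic Guinier curve for an aggregate with Rg = 30
# (arbitrary units; illustrative only)
rg_true = 30.0
q = [0.001 * k for k in range(1, 20)]            # q * Rg stays below 0.6
I = [exp(-(qi * rg_true) ** 2 / 3) for qi in q]  # forward scattering I0 = 1
print(guinier_rg(q, I))  # recovers ~30.0
```

In practice, fitting software combines this term with the micellar form and structure factors, as in the model described above, rather than fitting the Guinier slope in isolation.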
In aqueous solution, symmetric triblock copolymers with a thermoresponsive middle block and hydrophobic end blocks form flower-like core-shell micelles which collapse and aggregate upon heating through the cloud point (CP). The collapse of the micellar shell and the intermicellar aggregation are followed in situ and in real-time using time-resolved small-angle neutron scattering (SANS), while heating micellar solutions of a poly((styrene-d(8))-b-(N-isopropyl acrylamide)-b-(styrene-d(8))) triblock copolymer in D2O rapidly through their CP. The influence of polymer concentration as well as of the start and target temperatures is addressed. In all cases, the micellar collapse is very fast. The collapsed micelles immediately form small clusters which contain voids. They densify, which slows down or even stops their growth. For low concentrations and target temperatures just above the CP, i.e. shallow temperature jumps, the subsequent growth of the clusters is described by diffusion-limited aggregation. In contrast, for higher concentrations and/or higher target temperatures, i.e. deep temperature jumps, intermicellar bridges dominate the growth. Eventually, in all cases, the clusters coagulate, which results in macroscopic phase separation. For shallow temperature jumps, the cluster surfaces stay rough, whereas for deep temperature jumps, a concentration gradient develops at late stages. These results are important for the development of conditions for thermal switching in applications, e.g. for the use of thermoresponsive micellar systems for transport and delivery purposes.
We have studied the thermal behavior of amphiphilic, symmetric triblock copolymers having short, deuterated polystyrene (PS) end blocks and a large poly(N-isopropylacrylamide) (PNIPAM) middle block exhibiting a lower critical solution temperature (LCST) in aqueous solution. A wide range of concentrations (0.1-300 mg/mL) is investigated using a number of analytical methods such as fluorescence correlation spectroscopy (FCS), turbidimetry, dynamic light scattering (DLS), small-angle neutron scattering (SANS), and neutron spin-echo spectroscopy (NSE). The critical micelle concentration is determined using FCS to be 1 μM or less. The collapse of the micelles at the LCST is investigated using turbidimetry and DLS and shows a weak dependence on the degree of polymerization of the PNIPAM block. SANS with contrast matching allows us to reveal the core-shell structure of the micelles as well as their correlation as a function of temperature. The segmental dynamics of the PNIPAM shell are studied as a function of temperature. The mode detected has a linear dispersion in q² and is found to be faster in the collapsed state as compared to the swollen state. We attribute this result to the averaging over mobile and immobilized segments.
We investigate concentrated solutions of poly(styrene-b-N-isopropyl acrylamide) (P(S-b-NIPAM)) diblock copolymers in deuterated water (D2O). Both structural changes and the changes of the segmental dynamics occurring upon heating through the lower critical solution temperature (LCST) of PNIPAM are studied using small-angle neutron scattering and neutron spin-echo spectroscopy. The collapse of the micellar shell and the cluster formation of collapsed micelles at the LCST as well as an increase of the segmental diffusion coefficient after crossing the LCST are detected. Compared with our recent results on the triblock copolymer P(S-b-NIPAM-b-S) [25], we observe that the collapse transition of P(S-b-NIPAM) is more complex and that the PNIPAM segmental dynamics are faster than in P(S-b-NIPAM-b-S).
Structural changes at the intra- as well as intermicellar level were induced by the LCST-type collapse transition of poly(N-isopropyl acrylamide) in ABA triblock copolymer micelles in water. The distinct process kinetics was followed in situ and in real-time using time-resolved small-angle neutron scattering (SANS), while a micellar solution of a triblock copolymer, consisting of two short deuterated polystyrene endblocks and a long thermoresponsive poly(N-isopropyl acrylamide) middle block, was heated rapidly above its cloud point. A very fast collapse together with a multistep aggregation behavior is observed. The findings of the transition occurring at several size and time levels may have implications for the design and application of such thermoresponsive self-assembled systems.
It is a well-attested finding in head-initial languages that individuals with aphasia (IWA) have greater difficulties in comprehending object-extracted relative clauses (ORCs) as compared to subject-extracted relative clauses (SRCs). Adopting the linguistically based approach of Relativized Minimality (RM; Rizzi, 1990, 2004), the subject-object asymmetry is attributed to the occurrence of a Minimality effect in ORCs due to reduced processing capacities in IWA (Garraffa & Grillo, 2008; Grillo, 2008, 2009). For ORCs, it is claimed that the embedded subject intervenes in the syntactic dependency between the moved object and its trace, resulting in greater processing demands. In contrast, no such intervener is present in SRCs. Based on the theoretical framework of RM and findings from language acquisition (Belletti et al., 2012; Friedmann et al., 2009), it is assumed that Minimality effects are alleviated when the moved object and the intervening subject differ in terms of relevant syntactic features. For German, the language under investigation, the RM approach predicts that number (i.e., singular vs. plural) and the lexical restriction [+NP] feature (i.e., lexically restricted determiner phrases vs. lexically unrestricted pronouns) are considered relevant in the computation of Minimality. Greater degrees of featural distinctiveness are predicted to result in more facilitated processing of ORCs, because IWA can more easily distinguish between the moved object and the intervener.
This cumulative dissertation aims to provide empirical evidence on the validity of the RM approach in accounting for comprehension patterns during relative clause (RC) processing in German-speaking IWA. For that purpose, I conducted two studies including visual-world eye-tracking experiments embedded within an auditory referent-identification task to study the offline and online processing of German RCs. More specifically, target sentences were created to evaluate (a) whether IWA demonstrate a subject-object asymmetry, (b) whether dissimilarity in the number and/or the [+NP] features facilitates ORC processing, and (c) whether sentence processing in IWA benefits from greater degrees of featural distinctiveness. Furthermore, by comparing RCs disambiguated through case marking (at the relative pronoun or the following noun phrase) and number marking (inflection of the sentence-final verb), it was possible to consider the role of the relative position of the disambiguation point. The RM approach predicts that dissimilarity in case should not affect the occurrence of Minimality effects. However, the case cue to sentence interpretation appears earlier within RCs than the number cue, which may result in lower processing costs in case-disambiguated RCs compared to number-disambiguated RCs.
In study I, target sentences varied with respect to word order (SRC vs. ORC) and dissimilarity in the [+NP] feature (lexically restricted determiner phrase vs. pronouns as embedded element). Moreover, by comparing the impact of these manipulations in case- and number-disambiguated RCs, the effect of dissimilarity in the number feature was explored. IWA demonstrated a subject-object asymmetry, indicating the occurrence of a Minimality effect in ORCs. However, dissimilarity neither in the number feature nor in the [+NP] feature alone facilitated ORC processing. Instead, only ORCs involving distinct specifications of both the number and the [+NP] features were well comprehended by IWA. In study II, only temporarily ambiguous ORCs disambiguated through case or number marking were investigated, while controlling for varying points of disambiguation. There was a slight processing advantage of case marking as cue to sentence interpretation as compared to number marking.
Taken together, these findings suggest that the RM approach can only partially capture the empirical data from German IWA. In processing complex syntactic structures, IWA are susceptible to the occurrence of the intervening subject in ORCs. The new findings reported in this thesis show that structural dissimilarity can modulate sentence comprehension in aphasia. Interestingly, IWA can override Minimality effects in ORCs and derive the correct sentence meaning if the featural specifications of the constituents are maximally different: given their reduced processing capacities, they can then more easily distinguish the moved object from the intervening subject. This dissertation highlights how the syntactic theory of RM helps to uncover selective effects of morpho-syntactic features on sentence comprehension in aphasia, emphasizing the close link between assumptions from theoretical syntax and empirical research.
Exploring generalisation following treatment of language deficits in aphasia can provide insights into the functional relation of the cognitive processing systems involved. In the present study, we first review treatment outcomes of interventions targeting sentence processing deficits and, second, report a treatment study examining the occurrence of practice effects and generalisation in sentence comprehension and production. In order to explore the potential linkage between the processing systems involved in comprehending and producing sentences, we investigated whether improvements generalise within modalities (i.e., uni-modal generalisation in comprehension or in production) and/or across modalities (i.e., cross-modal generalisation from comprehension to production or vice versa). Two individuals with aphasia displaying co-occurring deficits in sentence comprehension and production were trained on complex, non-canonical sentences in both modalities. Two evidence-based treatment protocols were applied in a crossover intervention study with the sequence of treatment phases randomly allocated. Both participants benefited significantly from treatment, leading to uni-modal generalisation in both comprehension and production. However, cross-modal generalisation did not occur. The magnitude of uni-modal generalisation in sentence production was related to participants' sentence comprehension performance prior to treatment. These findings support the assumption of modality-specific sub-systems for sentence comprehension and production, linked uni-directionally from comprehension to production.
The cross-linguistic finding of greater demands in processing object relatives as compared to subject relatives in individuals with aphasia and non-brain-damaged speakers has been explained within the Relativized Minimality approach. On this account, the asymmetry is attributed to an element intervening between the moved element and its extraction site in object relatives, but not in subject relatives. Moreover, it has been proposed that processing of object relatives is facilitated if the intervening and the moved elements differ in their internal feature structure. The present study investigates these predictions in German-speaking individuals with aphasia and a group of control participants by combining the visual-world eye-tracking methodology with an auditory referent identification task. Our results provide support for the Relativized Minimality approach. In particular, the degree of featural distinctness was shown to modulate the occurrence of the effects in aphasia. We claim that, due to reduced processing capacities, individuals with aphasia need a higher degree of featural dissimilarity to distinguish the moved from the intervening element in object relatives and to overcome their syntactic deficit.
A new isoflavone, 4′-prenyloxyvigvexin A (1), and a new pterocarpan, (6aR,11aR)-3,8-dimethoxybitucarpin B (2), were isolated from the leaves of Lonchocarpus bussei and the stem bark of Lonchocarpus eriocalyx, respectively. The extract of L. bussei also gave four known isoflavones, maximaisoflavone H, 7,2′-dimethoxy-3′,4′-methylenedioxyisoflavone, 6,7,3′-trimethoxy-4′,5′-methylenedioxyisoflavone and durmillone; a chalcone, 4-hydroxylonchocarpin; a geranylated phenylpropanol, colenemol; and two known pterocarpans, (6aR,11aR)-maackiain and (6aR,11aR)-edunol. (6aR,11aR)-Edunol was also isolated from the stem bark of L. eriocalyx. The structures of the isolated compounds were elucidated by spectroscopy. The cytotoxicity of the compounds was tested by the resazurin assay using drug-sensitive and multidrug-resistant cancer cell lines. Significant antiproliferative effects with IC50 values below 10 μM were observed for the isoflavones 6,7,3′-trimethoxy-4′,5′-methylenedioxyisoflavone and durmillone against leukemia CCRF-CEM cells; for the chalcone 4-hydroxylonchocarpin and durmillone against the resistant counterpart CEM/ADR5000 cells; as well as for durmillone against the resistant breast adenocarcinoma MDA-MB231/BCRP cells and resistant glioblastoma U87MG.ΔEGFR cells.
Chromatographic separation of the extract of the roots of Dorstenia kameruniana (family Moraceae) led to the isolation of three new benzylbenzofuran derivatives, 2-(p-hydroxybenzyl)benzofuran-6-ol (1), 2-(p-hydroxybenzyl)-7-methoxybenzofuran-6-ol (2) and 2-(p-hydroxy-3-(3-methylbut-2-en-1-yl)benzyl)benzofuran-6-ol (3) (named dorsmerunin A, B and C, respectively), along with the known furanocoumarin bergapten (4). The twigs of Dorstenia kameruniana also produced compounds 1-4 as well as the known chalcone licoagrochalcone A (5). The structures were elucidated by NMR spectroscopy and mass spectrometry. The isolated compounds displayed cytotoxicity against the sensitive CCRF-CEM and multidrug-resistant CEM/ADR5000 leukemia cells, with compounds 4 and 5 showing the highest activities (IC50 values of 7.17 μM and 5.16 μM, respectively) against CCRF-CEM leukemia cells. Compound 5 also showed cytotoxicity against 7 sensitive or drug-resistant solid tumor cell lines (breast carcinoma, colon carcinoma, glioblastoma), with IC50 values below 50 μM, whilst 4 showed selective activity.
Background: While incidences of cancer are continuously increasing, drug resistance of malignant cells is observed towards almost all pharmaceuticals. Several isoflavonoids and flavonoids are known for their cytotoxicity towards various cancer cells. Methods: The cytotoxicity of the compounds was determined using the resazurin reduction assay. Caspase activation was evaluated using the caspase-Glo assay. Flow cytometry was used to analyze the cell cycle (propidium iodide (PI) staining), apoptosis (annexin V/PI staining), mitochondrial membrane potential (MMP) (JC-1) and reactive oxygen species (ROS) (H2DCFH-DA). CCRF-CEM leukemia cells were used as model cells for mechanistic studies. Results: Compounds 1, 2 and 4 displayed IC50 values below 20 μM towards CCRF-CEM and CEM/ADR5000 leukemia cells and were further tested towards a panel of 7 carcinoma cell lines. The IC50 values of the compounds against carcinoma cells varied from 16.90 μM (in resistant U87MG.ΔEGFR glioblastoma cells) to 48.67 μM (against HepG2 hepatocarcinoma cells) for 1, from 7.85 μM (in U87MG.ΔEGFR cells) to 14.44 μM (in resistant MDA-MB231/BCRP breast adenocarcinoma cells) for 2, from 4.96 μM (towards U87MG.ΔEGFR cells) to 7.76 μM (against MDA-MB231/BCRP cells) for 4, and from 0.07 μM (against MDA-MB231 cells) to 2.15 μM (against HepG2 cells) for doxorubicin. Compounds 2 and 4 induced apoptosis in CCRF-CEM cells mediated by MMP alteration and increased ROS production. Conclusion: The present report indicates that isoflavones and biflavonoids from Ormocarpum kirkii are cytotoxic compounds with the potential of being exploited in cancer chemotherapy. Compounds 2 and 4 deserve further study for the development of new anticancer drugs against sensitive and resistant cancer cell lines.
Efficient Removal of Tetracycline and Bisphenol A from Water with a New Hybrid Clay/TiO2 Composite
(2023)
New TiO2 hybrid composites were prepared from kaolin clay, predried and carbonized biomass, and titanium tetraisopropoxide, and explored for tetracycline (TET) and bisphenol A (BPA) removal from water. Overall, the removal rate is 84% for TET and 51% for BPA. The maximum adsorption capacities (q_m) are 30 and 23 mg/g for TET and BPA, respectively. These capacities are far greater than those obtained for unmodified TiO2. Increasing the ionic strength of the solution does not change the adsorption capacity of the adsorbent. pH changes only slightly affect BPA adsorption, while a pH > 7 significantly reduces the adsorption of TET on the material. The Brouers-Sotolongo fractal model best describes the kinetic data for both TET and BPA adsorption, predicting that the adsorption process occurs via a complex mechanism involving various forces of attraction. The Temkin and Freundlich isotherms, which best fit the equilibrium adsorption data for TET and BPA, respectively, suggest that the adsorption sites are heterogeneous in nature. Overall, the composite materials are much more effective for TET removal from aqueous solution than for BPA. This difference is assigned to a difference in the TET/adsorbent vs. the BPA/adsorbent interactions: the decisive factor appears to be favorable electrostatic interactions for TET, yielding more effective TET removal.
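The Freundlich isotherm named above has the form q_e = K_F · C_e^(1/n) and can be fitted to equilibrium data with a short script. The data points and starting values below are invented for illustration only and are not taken from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

# Freundlich isotherm for heterogeneous adsorption sites: q_e = K_F * C_e^(1/n)
def freundlich(c_e, k_f, n):
    return k_f * c_e ** (1.0 / n)

# Hypothetical equilibrium data (C_e in mg/L, q_e in mg/g) -- illustrative only
c_e = np.array([1.0, 5.0, 10.0, 20.0, 40.0])
q_e = np.array([4.0, 9.5, 13.0, 18.0, 24.0])

# Least-squares fit of the two isotherm parameters
(k_f, n), _ = curve_fit(freundlich, c_e, q_e, p0=(1.0, 1.0))
print(f"K_F = {k_f:.2f}, n = {n:.2f}")
```

A fitted n > 1 is conventionally read as favorable adsorption on a heterogeneous surface, in line with the interpretation given in the abstract.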
Metabolically active microbial communities are present in a wide range of subsurface environments. Techniques like the enumeration of microbial cells, activity measurements with radiotracer assays and the analysis of porewater constituents are currently being used to explore the subsurface biosphere, alongside molecular biological analyses. However, many of these techniques reach their detection limits due to low microbial activity and abundance. Direct measurements of microbial turnover not only face issues of insufficient sensitivity; they also provide information about just a single specific process, whereas in sediments many different processes can occur simultaneously. Therefore, the development of a new technique to measure total microbial activity would be a major improvement. A new tritium-based hydrogenase-enzyme assay appeared to be a promising tool to quantify total living biomass, even in low-activity subsurface environments. In this PhD project, total microbial biomass and microbial activity were quantified in different subsurface sediments using established techniques (cell enumeration and porewater geochemistry) as well as the new tritium-based hydrogenase enzyme assay. Using a large database of our own cell enumeration data from equatorial Pacific and north Pacific sediments together with published data, it was shown that the global geographic distribution of subseafloor sedimentary microbes varies between sites by 5 to 6 orders of magnitude and correlates with the sedimentation rate and distance from land. Based on these correlations, global subseafloor biomass was estimated to be 4.1 petagram-C and ~0.6% of Earth's total living biomass, which is significantly lower than previous estimates. Despite this massive reduction in estimated biomass, the subseafloor biosphere is still an important player in global biogeochemical cycles.
To understand the relationship between microbial activity, abundance and organic matter flux into the sediment, an expedition to the equatorial Pacific upwelling area and the north Pacific Gyre was carried out. Oxygen respiration rates in subseafloor sediments from the north Pacific Gyre, which are deposited at sedimentation rates of 1 mm per 1000 years, showed that microbial communities can survive for millions of years without a fresh supply of organic carbon. In contrast to the north Pacific Gyre, oxygen was completely depleted within the upper few millimeters to centimeters in sediments of the equatorial upwelling region due to a higher supply of organic matter and higher metabolic activity. The occurrence and variability of electron acceptors across depths and sites thus make the subsurface a complex environment for the quantification of total microbial activity. Recent studies showed that electron acceptor processes which were previously thought to thermodynamically exclude each other can occur simultaneously. In many cases, a simple measure of total microbial activity would therefore be a better and more robust solution than assays for several specific processes, for example sulfate reduction rates or methanogenesis. Enzyme or molecular assays provide a more general approach as they target key metabolic compounds. Since hydrogenase enzymes are ubiquitous in microbes, the recently developed tritium-based hydrogenase radiotracer assay is applied to quantify hydrogenase enzyme activity as a parameter of total living cell activity. Hydrogenase enzyme activity was measured in sediments from different locations (Lake Van, Barents Sea, Equatorial Pacific and Gulf of Mexico). In sediment samples that contained nitrate, we found the lowest cell-specific enzyme activity, around 10^-5 nmol H2 cell^-1 d^-1.
With decreasing energy yield of the electron acceptor used, cell-specific hydrogenase activity increased, and maximum values of up to 1 nmol H2 cell^-1 d^-1 were found in samples with methane concentrations of >10 ppm. Although hydrogenase activity cannot be converted directly into a turnover rate of a specific process, cell-specific activity factors can be used to identify specific metabolisms and to quantify the metabolically active microbial population. In another study, on sediments from the Nankai Trough, microbial abundance and hydrogenase activity data showed that both the habitat and the activity of subseafloor sedimentary microbial communities have been impacted by seismic activity. An increase in hydrogenase activity near the fault zone revealed that the microbial community was supplied with hydrogen as an energy source and that the microbes were specialized for hydrogen metabolism.
Subsurface microbial communities undertake many terminal electron-accepting processes, often simultaneously. Using a tritium-based assay, we measured the potential hydrogen oxidation catalyzed by hydrogenase enzymes in several subsurface sedimentary environments (Lake Van, Barents Sea, Equatorial Pacific, and Gulf of Mexico) with different predominant electron acceptors. Hydrogenases constitute a diverse family of enzymes expressed by microorganisms that utilize molecular hydrogen as a metabolic substrate, product, or intermediate. The assay reveals the potential for utilizing molecular hydrogen and allows qualitative detection of microbial activity irrespective of the predominant electron-accepting process. Because the method only requires samples frozen immediately after recovery, the assay can be used for identifying microbial activity in subsurface ecosystems without the need to preserve live material. We measured potential hydrogen oxidation rates in all samples from multiple depths at several sites that collectively span a wide range of environmental conditions and biogeochemical zones. Potential activity normalized to total cell abundance ranges over five orders of magnitude and varies with the predominant terminal electron acceptor. The lowest per-cell potential rates characterize the zone of nitrate reduction, and the highest per-cell potential rates occur in the methanogenic zone. Possible reasons for this relationship to the predominant electron acceptor include (i) the increasing importance of fermentation in successively deeper biogeochemical zones and (ii) adaptation of hydrogenases to successively higher concentrations of H2 in successively deeper zones.
The subsurface harbors a large fraction of Earth's living biomass, forming complex microbial ecosystems. Without a profound knowledge of the ongoing biologically mediated processes and their reaction to anthropogenic changes it is difficult to assess the long-term stability and feasibility of any type of geotechnical utilization, as these influence subsurface ecosystems. Despite recent advances in many areas of subsurface microbiology, the direct quantification of turnover processes is still in its infancy, mainly due to the extremely low cell abundances. We provide an overview of the currently available techniques for the quantification of microbial turnover processes and discuss their specific strengths and limitations. Most techniques employed so far have focused on specific processes, e.g. sulfate reduction or methanogenesis. Recent studies show that processes that were previously thought to exclude each other can occur simultaneously, albeit at very low rates. Without the identification of the respective processes it is impossible to quantify total microbial activity. Even in cases where all simultaneously occurring processes can be identified, the typically very low rates prevent quantification. In many cases a simple measure of total microbial activity would be a better and more robust measure than assays for several specific processes. Enzyme or molecular assays provide a more general approach as they target key metabolic compounds. Depending on the compound targeted a broader spectrum of microbial processes can be quantified. The two most promising compounds are ATP and hydrogenase, as both are ubiquitous in microbes. Technical constraints limit the applicability of currently available ATP-assays for subsurface samples. A recently developed hydrogenase radiotracer assay has the potential to become a key tool for the quantification of subsurface microbial activity.
In clinical settings, significant resources are spent on data collection and monitoring patients' health parameters to improve decision-making and provide better care. With increased digitization, the healthcare sector is shifting towards implementing digital technologies for data management and administration. New technologies offer better treatment opportunities and streamline the clinical workflow, but their complexity can cause ineffectiveness, frustration, and errors. To address this, we believe digital solutions alone are not sufficient. Therefore, we take a human-centred design approach to AI development and apply systems engineering methods to identify system leverage points. We demonstrate how automation enables the monitoring of clinical parameters using existing non-intrusive sensor technology, freeing up resources for patient care. Furthermore, we provide a framework for the digitization of clinical data for integration with data management.
Background:
Childhood and adolescence are critical stages of life for mental health and well-being. Schools are a key setting for mental health promotion and illness prevention. One in five children and adolescents have a mental disorder, and about half of all mental disorders begin before the age of 14. Beneficial and explainable artificial intelligence can replace current paper-based and online approaches to school mental health surveys. This can enhance data acquisition, interoperability, data-driven analysis, trust and compliance. This paper presents a model for using chatbots for non-obtrusive data collection and supervised machine learning models for data analysis, and discusses ethical considerations pertaining to the use of these models.
Methods:
For data acquisition, the proposed model uses chatbots which interact with students. The conversation log acts as the source of raw data for the machine learning. Pre-processing of the data is automated by filtering for keywords and phrases.
Existing survey results, obtained through current paper-based data collection methods, are evaluated by domain experts (health professionals). These can be used to create a test dataset to validate the machine learning models. Supervised learning can then be deployed to classify specific behaviour and mental health patterns.
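A minimal sketch of the keyword-based pre-processing step described above; the keyword set, speaker labels, and log format are illustrative assumptions rather than the project's actual configuration:

```python
# Hypothetical keyword set; a real deployment would use expert-curated terms.
KEYWORDS = {"sleep", "worried", "sad", "stress"}

def filter_log(conversation_log):
    """Keep student utterances that contain at least one keyword."""
    kept = []
    for speaker, utterance in conversation_log:
        if speaker != "student":
            continue
        # Lowercase and strip trailing punctuation before matching
        tokens = {t.strip(".,!?").lower() for t in utterance.split()}
        if tokens & KEYWORDS:
            kept.append(utterance)
    return kept

log = [
    ("bot", "How have you been sleeping lately?"),
    ("student", "I barely sleep and feel worried all the time."),
    ("student", "School is fine otherwise."),
]
print(filter_log(log))  # only the first student utterance matches a keyword
```

The filtered utterances would then serve as input features for the supervised classifiers mentioned above.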
Results:
We present a model that can be used to improve upon current paper-based data collection and manual data analysis methods. An open-source GitHub repository contains the necessary tools and components of this model. Privacy is respected through rigorous observance of confidentiality and data protection requirements. Critical reflection on these ethical and legal aspects is included in the project.
Conclusions:
This model strengthens mental health surveillance in schools. The same tools and components could be applied to other public health data. Future extensions of this model could also incorporate unsupervised learning to find clusters and patterns of unknown effects.
Accurately predicting total electron content (TEC) during geomagnetic storms is still a challenging task for ionospheric models. In this work, a neural-network (NN)-based model is proposed which predicts relative TEC with respect to the preceding 27-day median TEC during storm time for the European region (longitudes 30°W-50°E and latitudes 32.5°N-70°N). The 27-day median TEC (referred to as the median TEC), latitude, longitude, universal time, storm time, solar radio flux index F10.7, global storm index SYM-H and geomagnetic activity index Hp30 are used as inputs, and the output of the network is the relative TEC. The relative TEC can be converted to the actual TEC knowing the median TEC. The median TEC is calculated at each grid point over the European region considering data from the last 27 days before the storm, using global ionosphere maps (GIMs) from International GNSS Service (IGS) sources. A storm event is defined when the storm-time disturbance index Dst drops below -50 nanotesla. The model was trained with storm-time relative TEC data from the period 1998 until 2019 (excluding 2015), comprising 365 storms. Unseen storm data from 33 storm events during 2015 and 2020 were used to test the model. The UQRG GIMs were used because of their high temporal resolution (15 min) compared to other products from different analysis centers. The NN-based model predictions show the seasonal behavior of the storms, including positive and negative storm phases during winter and summer, respectively, and a mixture of both phases during the equinoxes. The model's performance was also compared with the Neustrelitz TEC model (NTCM) and the NN-based quiet-time TEC model, both developed at the German Aerospace Center (DLR). The storm model has a root mean squared error (RMSE) of 3.38 TEC units (TECU), an improvement of 1.87 TECU over the NTCM, for which an RMSE of 5.25 TECU was found.
This improvement corresponds to a performance increase of 35.6%. The storm-time model outperforms the quiet-time model by 1.34 TECU, which corresponds to a performance increase of 28.4%, from 4.72 to 3.38 TECU. The quiet-time model was trained with Carrington-averaged TEC and is therefore well suited to be used as an input instead of the GIM-derived 27-day median. We found an improvement of 0.8 TECU, corresponding to a performance increase of 17%, from 4.72 to 3.92 TECU, for the storm-time model using the quiet-time-model-predicted TEC as an input compared to solely using the quiet-time model.
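To make the model's interface concrete, the toy forward pass below wires the eight listed inputs to a single relative-TEC output. The layer sizes, the random (untrained) weights, and the ratio convention for recovering absolute TEC are assumptions for illustration, not the architecture or convention used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)   # 8 inputs -> 16 hidden units
W2, b2 = rng.normal(size=(1, 16)), np.zeros(1)    # 16 hidden units -> 1 output

def predict_relative_tec(x):
    """Toy MLP forward pass: returns relative TEC for one input vector."""
    h = np.tanh(W1 @ x + b1)
    return float(W2 @ h + b2)

# Input order assumed here: median TEC, latitude, longitude, universal time,
# storm time (hours), F10.7, SYM-H, Hp30 -- values are made up.
x = np.array([18.0, 52.0, 13.0, 12.0, 6.0, 150.0, -80.0, 5.0])
median_tec = x[0]
rel_tec = predict_relative_tec(x)
# If relative TEC is a ratio to the 27-day median (one possible convention),
# absolute TEC is recovered by multiplication:
actual_tec = rel_tec * median_tec
```

A trained model would, of course, learn W1, W2 from the 365 training storms rather than use random weights; the sketch only fixes the input/output shapes implied by the abstract.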
Technical report
(2019)
The design and implementation of service-oriented architectures raises a wide range of research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Component orientation and web services are two approaches for the design and realization of complex web-based systems. Both approaches allow for dynamic application adaptation as well as the integration of enterprise applications.
Commonly used technologies, such as J2EE and .NET, form de facto standards for the realization of complex distributed systems. The evolution of component systems has led to web services and service-based architectures. This has been manifested in a multitude of industry standards and initiatives such as XML, WSDL, UDDI, SOAP, etc. All these achievements lead to a new and promising paradigm in IT systems engineering which proposes to design complex software solutions as the collaboration of contractually defined software services.
Service-Oriented Systems Engineering represents a symbiosis of best practices in object-orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns.
The annual Ph.D. Retreat of the Research School provides each member the opportunity to present the current state of their research and to give an outline of a prospective Ph.D. thesis. Due to the interdisciplinary structure of the research school, this technical report covers a wide range of topics. These include but are not limited to: Human Computer Interaction and Computer Vision as Service; Service-oriented Geovisualization Systems; Algorithm Engineering for Service-oriented Systems; Modeling and Verification of Self-adaptive Service-oriented Systems; Tools and Methods for Software Engineering in Service-oriented Systems; Security Engineering of Service-based IT Systems; Service-oriented Information Systems; Evolutionary Transition of Enterprise Applications to Service Orientation; Operating System Abstractions for Service-oriented Computing; and Services Specification, Composition, and Enactment.
The behaviour of individuals, businesses, and government entities before, during, and immediately after a disaster can dramatically affect the impact and recovery time. However, existing risk-assessment methods rarely include this critical factor. In this Perspective, we show why this is a concern, and demonstrate that although initial efforts have inevitably represented human behaviour in limited terms, innovations in flood-risk assessment that integrate societal behaviour and behavioural adaptation dynamics into such quantifications may lead to more accurate characterization of risks and improved assessment of the effectiveness of risk-management strategies and investments. Such multidisciplinary approaches can inform flood-risk management policy development.
Parsing of argumentative structures has become a very active line of research in recent years. Like discourse parsing or any other natural language task that requires prediction of linguistic structures, most approaches choose to learn a local model and then perform global decoding over the local probability distributions, often imposing constraints that are specific to the task at hand. Specifically for argumentation parsing, two decoding approaches have been recently proposed: Minimum Spanning Trees (MST) and Integer Linear Programming (ILP), following similar trends in discourse parsing. In contrast to discourse parsing though, where trees are not always used as underlying annotation schemes, argumentation structures so far have always been represented with trees. Using the 'argumentative microtext corpus' [in: Argumentation and Reasoned Action: Proceedings of the 1st European Conference on Argumentation, Lisbon 2015 / Vol. 2, College Publications, London, 2016, pp. 801-815] as underlying data and replicating three different decoding mechanisms, in this paper we propose a novel ILP decoder and an extension to our earlier MST work, and then thoroughly compare the approaches. The result is that our new decoder outperforms related work in important respects, and that in general, ILP and MST yield very similar performance.
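To illustrate the MST decoding step on a toy instance, the brute-force search below finds the highest-scoring spanning arborescence over invented edge scores. Real decoders use the Chu-Liu/Edmonds algorithm; exhaustive enumeration as shown here only scales to a handful of nodes:

```python
from itertools import product

# score[(head, dep)] = the local model's edge score; values invented for illustration
score = {
    (0, 1): 0.9, (0, 2): 0.3, (0, 3): 0.2,
    (1, 2): 0.8, (1, 3): 0.1,
    (2, 1): 0.4, (2, 3): 0.7,
    (3, 2): 0.2,
}
nodes = [1, 2, 3]  # node 0 acts as the root (central claim)

def is_tree(parents):
    """Check that every node reaches the root without cycles."""
    for n in nodes:
        seen, cur = set(), n
        while cur != 0:
            if cur in seen:
                return False
            seen.add(cur)
            cur = parents[cur]
    return True

best, best_parents = float("-inf"), None
for heads in product([0] + nodes, repeat=len(nodes)):
    parents = dict(zip(nodes, heads))
    if any(p == n for n, p in parents.items()):
        continue  # no self-attachment
    if not all((p, n) in score for n, p in parents.items()):
        continue  # skip head choices with no scored edge
    if not is_tree(parents):
        continue
    total = sum(score[(p, n)] for n, p in parents.items())
    if total > best:
        best, best_parents = total, parents

print(best_parents, round(best, 2))
```

An ILP decoder would instead encode the same tree constraints (one head per node, no cycles) as linear constraints and maximize the summed edge scores with a solver.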
With the recent proliferation of sensors, cloud computing handles the data processing of many applications. Processing some of these data in the cloud, however, raises concerns regarding, e.g., privacy, latency, and single points of failure. Alternatively, thanks to the development of embedded systems, smart wireless devices can share their computation capacity, creating a local wireless cloud for in-network processing. In this context, the processing of an application is divided into smaller jobs so that a device can run one or more jobs.
The contribution of this thesis to this scenario is divided into three parts. In part one, I focus on wireless aspects, such as power control and interference management, for deciding which jobs to run on which node and how to route data between nodes. Hence, I formulate optimization problems and develop heuristic and meta-heuristic algorithms to allocate wireless and computation resources. Additionally, to deal with multiple applications competing for these resources, I develop a reinforcement learning (RL) admission controller to decide which application should be admitted. Next, I look into acoustic applications and improve wireless throughput by reusing microphone clock synchronization to synchronize wireless transmissions.
In the second part, I jointly work with colleagues from the acoustic processing field to optimize both network and application (i.e., acoustic) qualities. My contribution focuses on the network part, where I study the relation between acoustic and network qualities when selecting a subset of microphones for collecting audio data or selecting a subset of optional jobs for processing these data; too many microphones or too many jobs can degrade quality through unnecessary delays. Hence, I develop RL solutions that select the subset of microphones under network constraints while the speaker is moving, still providing good acoustic quality. Furthermore, I show that autonomous vehicles carrying microphones improve the acoustic qualities of different applications. Accordingly, I develop RL solutions (single- and multi-agent ones) for controlling these vehicles.
In the third part, I close the gap between theory and practice. I describe the features of my open-source framework used as a proof of concept for wireless in-network processing. Next, I demonstrate how to run some algorithms developed by colleagues from acoustic processing using my framework. I also use the framework for studying in-network delays (wireless and processing) using different distributions of jobs and network topologies.
Information on structural features of a fracture network at early stages of Enhanced Geothermal System development is mostly restricted to borehole images and, if available, outcrop data. However, using this information to image discontinuities in deep reservoirs is difficult. Wellbore failure data provides only some information on components of the in situ stress state and its heterogeneity. Our working hypothesis is that slip on natural fractures primarily controls these stress heterogeneities. Based on this, we introduce stress-based tomography in a Bayesian framework to characterize the fracture network and its heterogeneity in potential Enhanced Geothermal System reservoirs. In this procedure, first a random initial discrete fracture network (DFN) realization is generated based on prior information about the network. The observations needed to calibrate the DFN are based on local variations of the orientation and magnitude of at least one principal stress component along boreholes. A Markov Chain Monte Carlo sequence is employed to update the DFN iteratively by a fracture translation within the domain. The Markov sequence compares the simulated stress profile with the observed stress profiles in the borehole, evaluates each iteration with Metropolis-Hastings acceptance criteria, and stores acceptable DFN realizations in an ensemble. Finally, this obtained ensemble is used to visualize the potential occurrence of fractures in a probability map, indicating possible fracture locations and lengths. We test this methodology to reconstruct simple synthetic and more complex outcrop-based fracture networks and successfully image the significant fractures in the domain.
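One iteration of the Markov Chain Monte Carlo update described above can be sketched as follows. The forward model and misfit function are placeholders for the geomechanical stress simulation and its comparison against the observed borehole stress profile; as a simplification, fracture orientations and lengths are held fixed and only translations are proposed.

```python
import math
import random

def mh_update_dfn(dfn, observed, forward_model, misfit, sigma=1.0,
                  step=0.05, rng=random):
    """One Metropolis-Hastings iteration of the stress-based tomography.

    dfn          : list of fracture centres [(x, y), ...]; orientations and
                   lengths are held fixed in this simplified sketch
    observed     : observed stress profile along the borehole
    forward_model: maps a DFN to a simulated stress profile (placeholder
                   for the geomechanical solver)
    misfit       : e.g. sum of squared residuals between the two profiles
    """
    current = misfit(forward_model(dfn), observed)
    # propose translating one randomly chosen fracture
    i = rng.randrange(len(dfn))
    x, y = dfn[i]
    proposal = list(dfn)
    proposal[i] = (x + rng.gauss(0, step), y + rng.gauss(0, step))
    proposed = misfit(forward_model(proposal), observed)
    # accept with probability min(1, exp(-(proposed - current) / (2 sigma^2)))
    log_alpha = (current - proposed) / (2 * sigma ** 2)
    if math.log(rng.random()) < log_alpha:
        return proposal, proposed, True   # accepted: store in the ensemble
    return dfn, current, False            # rejected: keep the current DFN
```

Accepted realizations accumulated over many such iterations form the ensemble from which the fracture probability map is drawn.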
Diabetes is a major public health problem with increasing global prevalence. Type 2 diabetes (T2D), which accounts for 90% of all diagnosed cases, is a complex polygenic disease also modulated by epigenetics and lifestyle factors. For the identification of T2D-associated genes, linkage analyses combined with mouse breeding strategies and bioinformatic tools were useful in the past. In a previous study in which a backcross population of the lean and diabetes-prone dilute brown non-agouti (DBA) mouse and the obese and diabetes-susceptible New Zealand obese (NZO) mouse was characterized, a major diabetes quantitative trait locus (QTL) was identified on chromosome 4. The locus was designated non-insulin dependent diabetes from DBA (Nidd/DBA). The aim of this thesis was (i) to perform a detailed phenotypic characterization of the Nidd/DBA mice, (ii) to further narrow the critical region and (iii) to identify the responsible genetic variant(s) of the Nidd/DBA locus. The phenotypic characterization of recombinant congenic mice carrying a 13.6 Mbp Nidd/DBA fragment with 284 genes presented a gradually worsening metabolic phenotype. Nidd/DBA allele carriers exhibited severe hyperglycemia (~19.9 mM) and impaired glucose clearance at 12 weeks of age. Ex vivo perifusion experiments with islets of 13-week-old congenic mice revealed a tendency towards reduced insulin secretion in homozygous DBA mice. In addition, 16-week-old mice showed a severe loss of β-cells and reduced pancreatic insulin content. Pathway analysis of transcriptome data from islets of congenic mice pointed towards a downregulation of cell survival genes. Morphological analysis of pancreatic sections displayed a reduced number of bi-hormonal cells co-expressing glucagon and insulin in homozygous DBA mice, which could indicate a reduced plasticity of endocrine cells in response to hyperglycemic stress. 
Further generation and phenotyping of recombinant congenic mice enabled the isolation of a 3.3 Mbp fragment that was still able to induce hyperglycemia and contained 61 genes. Bioinformatic analyses including haplotype mapping, sequence and transcriptome analysis were integrated in order to further reduce the number of candidate genes and to identify the presumable causative gene variant. Four putative candidate genes (Ttc39a, Kti12, Osbpl9, Calr4) were defined, which were either differentially expressed or carried a sequence variant. In addition, in silico ChIP-Seq analyses of the 3.3 Mbp region indicated a high number of SNPs located in active regions of binding sites of β-cell transcription factors. This points towards potentially altered cis-regulatory elements that could be responsible for the phenotype conferred by the Nidd/DBA locus. In summary, the Nidd/DBA locus mediates impaired glucose homeostasis and reduced insulin secretion capacity which finally leads to β-cell death. The downregulation of cell survival genes and reduced plasticity of endocrine cells could further contribute to the β-cell loss. The critical region was narrowed down to a 3.3 Mbp fragment containing 61 genes, of which four might be involved in the development of the diabetogenic Nidd/DBA phenotype.
Type 2 diabetes (T2D) is a complex metabolic disease regulated by an interaction of genetic predisposition and environmental factors. To understand the genetic contribution to the development of diabetes, mice varying in their disease susceptibility were crossed with the obese and diabetes-prone New Zealand obese (NZO) mouse. Subsequent whole-genome sequence scans revealed one major quantitative trait locus (QTL), Nidd/DBA on chromosome 4, linked to elevated blood glucose, reduced plasma insulin, and low levels of pancreatic insulin. Phenotypical characterization of congenic mice carrying 13.6 Mbp of the critical fragment of DBA mice displayed severe hyperglycemia and impaired glucose clearance at week 10, decreased glucose response in week 13, and loss of beta-cells and pancreatic insulin in week 16. To identify the responsible gene variant(s), further congenic mice were generated and phenotyped, which resulted in a fragment of 3.3 Mbp that was sufficient to induce hyperglycemia. By combining transcriptome analysis and haplotype mapping, the number of putative responsible variant(s) was narrowed from the initial 284 to 18 genes, including gene models and non-coding RNAs. Consideration of haplotype blocks reduced the number of candidate genes to four (Kti12, Osbpl9, Ttc39a, and Calr4) as potential T2D candidates, as they display differential expression in pancreatic islets and/or sequence variation. In conclusion, the integration of comparative analyses of multiple inbred populations, such as haplotype mapping, transcriptomics, and sequence data, substantially improved the mapping resolution of the diabetes QTL Nidd/DBA. Future studies are necessary to understand the exact role of the different candidates in beta-cell function and their contribution to maintaining glycemic control.
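The narrowing step that combines haplotype mapping with expression and sequence data is, at its core, set logic over gene lists. A minimal sketch (the input lists below are hypothetical; only the four final candidate names are taken from the study):

```python
def filter_candidates(region_genes, identical_haplotype,
                      diff_expressed, variant_carrying):
    """Prioritize candidate genes in a QTL interval.

    Genes lying in haplotype blocks identical between the parental strains
    are excluded; the remaining genes are kept if they are differentially
    expressed in islets and/or carry a sequence variant.
    """
    informative = set(region_genes) - set(identical_haplotype)
    return sorted(g for g in informative
                  if g in diff_expressed or g in variant_carrying)
```

Applied to the full 3.3 Mbp interval, filters of this kind reduce a list of dozens of positional candidates to a handful that merit functional follow-up.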
Pancreatic steatosis associates with beta-cell failure and may participate in the development of type-2-diabetes. Our previous studies have shown that diabetes-susceptible mice accumulate more adipocytes in the pancreas than diabetes-resistant mice. In addition, we have demonstrated that the co-culture of pancreatic islets and adipocytes affects insulin secretion. The aim of this current study was to elucidate if and to what extent pancreas-resident mesenchymal stromal cells (MSCs) with adipogenic progenitor potential differ from the corresponding stromal-type cells of the inguinal white adipose tissue (iWAT). miRNA (miRNome) and mRNA expression (transcriptome) analyses of MSCs isolated by flow cytometry of both tissues revealed 121 differentially expressed miRNAs and 1227 differentially expressed genes (DEGs). Target prediction analysis estimated 510 DEGs to be regulated by 58 differentially expressed miRNAs. Pathway analyses of DEGs and miRNA target genes showed unique transcriptional and miRNA signatures in pancreas (pMSCs) and iWAT MSCs (iwatMSCs), for instance fibrogenic and adipogenic differentiation, respectively. Accordingly, iwatMSCs revealed a higher adipogenic lineage commitment, whereas pMSCs showed an elevated fibrogenesis. As a low degree of adipogenesis was also observed in pMSCs of diabetes-susceptible mice, we conclude that the development of pancreatic steatosis has to be induced by other factors not related to cell-autonomous transcriptomic changes and miRNA-based signals.
Numerical simulation of fluid-flow processes in a 3D high-resolution carbonate reservoir analogue
(2014)
A high-resolution three-dimensional (3D) outcrop model of a Jurassic carbonate ramp was used in order to perform a series of detailed and systematic flow simulations. The aim of this study was to test the impact of small- and large-scale geological features on reservoir performance and oil recovery. The digital outcrop model contains a wide range of sedimentological, diagenetic and structural features, including discontinuity surfaces, shoal bodies, mud mounds, oyster bioherms and fractures. Flow simulations are performed for numerical well testing and secondary oil recovery. Numerical well testing enables synthetic but systematic pressure responses to be generated for different geological features observed in the outcrops. This allows us to assess and rank the relative impact of specific geological features on reservoir performance. The outcome documents that, owing to the realistic representation of matrix heterogeneity, most diagenetic and structural features cannot be linked to a unique pressure signature. Instead, reservoir performance is controlled by subseismic faults and oyster bioherms acting as thief zones. Numerical simulations of secondary recovery processes reveal strong channelling of fluid flow into high-permeability layers as the primary control for oil recovery. However, appropriate reservoir-engineering solutions, such as optimizing well placement and injection fluid, can reduce channelling and increase oil recovery.
Education in the knowledge society faces many problems; in particular, the interaction between teacher and learner in social networking software is a key factor affecting learning and learner satisfaction (Prammanee, 2005), where "to teach is to communicate, to communicate is to interact, to interact is to learn" (Hefzallah, 2004, p. 48). Analyzing the relation between teacher-learner interaction on the one hand and learning outcome and learner satisfaction on the other, some basic problems regarding a new learning culture using social networking software are discussed. Most educational institutions pay considerable attention to equipment and to emerging Information and Communication Technologies (ICTs) in learning situations. They try to incorporate ICT into their institutions as teaching and learning environments, expecting that doing so will improve the outcome of the learning process. Despite this, the learning outcome reported in most studies is very limited, because the expectations of self-directed learning are much higher than the reality. Findings from an empirical study investigating the role of teacher-learner interaction through the new digital medium of wikis in higher education, and its relation to learning outcome and learner satisfaction, are presented, together with recommendations on the necessity of pedagogical interactions in support of teaching and learning activities in wiki courses in order to improve the learning outcome. The conclusions show the necessity of significant changes in vocational teacher training programs for online teachers in order to meet the requirements of new digital media in coherence with a new learning culture. These changes have to address collaborative instead of individual learning, and ICT wikis as a tool for knowledge construction instead of a tool for gathering information.
The climate is a complex dynamical system involving interactions and feedbacks among different processes at multiple temporal and spatial scales. Although numerous studies have attempted to understand the climate system, studies investigating its multiscale characteristics are scarce, and the present set of techniques is limited in its ability to unravel the multi-scale variability of the climate system. It is entirely plausible that extreme events and abrupt transitions, which are of great interest to the climate community, result from interactions among processes operating at multiple scales. For instance, storms, weather patterns, seasonal irregularities such as El Niño, floods and droughts, and decades-long climate variations can be better understood and even predicted by quantifying their multi-scale dynamics. This makes a strong argument for unravelling the interactions and patterns of climatic processes at different scales. Against this background, the thesis aims at developing measures to understand and quantify multi-scale interactions within the climate system.
In the first part of the thesis, I propose two new methods, viz. multi-scale event synchronization (MSES) and wavelet multi-scale correlation (WMC), to capture the scale-specific features present in climatic processes. The proposed methods were tested on various synthetic and real-world time series in order to check their applicability and replicability. The results indicate that both methods (WMC and MSES) are able to capture the scale-specific associations that exist between processes at different time scales in a more detailed manner than their traditional single-scale counterparts.
In the second part of the thesis, the proposed multi-scale similarity measures were used to construct climate networks and investigate the evolution of spatial connections within climatic processes at multiple timescales. The proposed methods, WMC and MSES, together with complex networks, were applied to two different datasets.
In the first application, climate networks based on WMC were constructed for the univariate global sea surface temperature (SST) data to identify and visualize SST patterns that develop very similarly over time and distinguish them from those that have long-range teleconnections to other ocean regions. Further investigations of the climate networks on different timescales revealed (i) various high-variability and co-variability regions, and (ii) short- and long-range teleconnection regions with varying spatial distance. The outcomes of the study not only re-confirmed the existing knowledge on the link between SST patterns like the El Niño Southern Oscillation and the Pacific Decadal Oscillation, but also suggested new insights into the characteristics and origins of long-range teleconnections.
In the second application, I used the developed non-linear MSES similarity measure to quantify the multivariate teleconnections between extreme Indian precipitation and climatic patterns with the highest relevance for Indian sub-continent. The results confirmed significant non-linear influences that were not well captured by the traditional methods. Further, there was a substantial variation in the strength and nature of teleconnection across India, and across time scales.
Overall, the results from the investigations conducted in this thesis strongly highlight the need to consider the multi-scale aspects of climatic processes, and the proposed methods provide a robust framework for quantifying these multi-scale characteristics.
Sea surface temperature (SST) patterns can – as surface climate forcing – affect weather and climate at large distances. One example is the El Niño-Southern Oscillation (ENSO), which causes climate anomalies around the globe via teleconnections. Although several studies have identified and characterized these teleconnections, our understanding of climate processes remains incomplete, since interactions and feedbacks typically occur at unique or multiple temporal and spatial scales. This study characterizes the interactions between the cells of a global SST data set at different temporal and spatial scales using climate networks. These networks are constructed using wavelet multi-scale correlation, which investigates the correlation between the SST time series at a range of scales, allowing deeper insights into the correlation patterns than traditional methods like empirical orthogonal functions or classical correlation analysis. This allows us to identify and visualise regions of – at a certain timescale – similarly evolving SSTs and distinguish them from those with long-range teleconnections to other ocean regions. Our findings re-confirm accepted knowledge about known highly linked SST patterns like ENSO and the Pacific Decadal Oscillation, but also suggest new insights into the characteristics and origins of long-range teleconnections, such as the connection between ENSO and the Indian Ocean Dipole.
The quantification of the spatial propagation of extreme precipitation events is vital in water resources planning and disaster mitigation. However, quantifying these extreme events has always been challenging, as many traditional methods are insufficient to capture the nonlinear interrelationships between extreme event time series. Therefore, it is crucial to develop suitable methods for analyzing the dynamics of extreme events over a river basin with a diverse climate and complicated topography. Over the last decade, complex network analysis emerged as a powerful tool to study the intricate spatiotemporal relationships between many variables in a compact way. In this study, we employ two nonlinear concepts, event synchronization and edit distance, to investigate the extreme precipitation pattern in the Ganga river basin. We use the network degree to understand the spatial synchronization pattern of extreme rainfall and to identify essential sites in the river basin with respect to potential prediction skill. The study also attempts to quantify the influence of precipitation seasonality and topography on extreme events. The findings reveal that (1) the network degree decreases from southwest to northwest, (2) the timing of the 50th-percentile precipitation within a year influences the spatial distribution of the degree, (3) this timing is inversely related to elevation, and (4) lower elevations greatly influence the connectivity of the sites. The study highlights that edit distance could be a promising alternative for analyzing event-like data by incorporating event time and amplitude and constructing complex networks of climate extremes.
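Event synchronization, one of the two concepts used here, counts how often events in one series occur within a short lag of events in another. The sketch below uses a fixed lag `tau` instead of the adaptive, inter-event-based lag of the original formulation, purely for illustration:

```python
def event_synchronization(ex, ey, tau=2):
    """Event synchronization between two sorted lists of event times.

    ex, ey : event times (e.g. days with extreme precipitation at two sites)
    tau    : fixed maximum lag for two events to count as synchronized
    Returns Q in [0, 1]; Q = 1 means fully synchronized.
    """
    def count(a, b):
        # events in `a` that follow an event in `b` within the lag window
        c = 0.0
        for ta in a:
            for tb in b:
                if 0 < ta - tb <= tau:
                    c += 1.0
                elif ta == tb:
                    c += 0.5  # simultaneous events are shared equally
        return c

    norm = (len(ex) * len(ey)) ** 0.5
    return (count(ex, ey) + count(ey, ex)) / norm if norm else 0.0
```

Computing this Q for every pair of grid points and linking the strongest pairs yields the network whose degree field is analyzed in the study.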
Hydrologic regionalization deals with the investigation of homogeneity in watersheds and provides a classification of watersheds for regional analysis. The classification thus obtained can be used as a basis for mapping data from gauged to ungauged sites and can improve extreme event prediction. This paper proposes a wavelet power spectrum (WPS) approach coupled with the self-organizing map method for clustering hydrologic catchments. The technique is applied to gauged catchments: as a test case study, monthly streamflow records observed at 117 selected catchments throughout the western United States from 1951 through 2002 are used. Based on the WPS of each station, the catchments are then classified into homogeneous clusters, which yields a representative WPS pattern for the streamflow stations in each cluster.
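The clustering step can be sketched with a minimal self-organizing map. For brevity this version uses a neighbourhood radius of zero (which makes it behave like online k-means) and toy two-dimensional feature vectors standing in for the wavelet power spectra:

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train_som(vectors, n_units=2, epochs=100, lr=0.5, seed=0):
    """Minimal 1-D self-organizing map with zero neighbourhood radius.

    vectors : list of equal-length feature vectors (wavelet power spectra
              in the application sketched here)
    Returns the trained unit weight vectors.
    """
    rng = random.Random(seed)
    dim = len(vectors[0])
    units = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for epoch in range(epochs):
        rate = lr * (1 - epoch / epochs)  # decaying learning rate
        for v in vectors:
            # move the best-matching unit towards the sample
            b = min(range(n_units), key=lambda u: dist2(units[u], v))
            units[b] = [w + rate * (x - w) for w, x in zip(units[b], v)]
    return units

def assign(units, v):
    """Cluster index of vector v: its best-matching unit."""
    return min(range(len(units)), key=lambda u: dist2(units[u], v))
```

A full SOM would also update the neighbours of the winning unit on a shrinking radius; the degenerate version shown here is sufficient to illustrate how stations with similar spectra end up in the same cluster.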
In recent years, complex network analysis has facilitated the identification of universal and unexpected patterns in complex climate systems. However, the analysis and representation of the multiscale relationships that exist in the global climate system are limited. A logical first step in addressing this issue is to construct multiple networks over different timescales. Therefore, we propose to apply the wavelet multiscale correlation (WMC) similarity measure, a combination of two state-of-the-art methods, viz. wavelets and Pearson’s correlation, for investigating multiscale processes through complex networks. First, we decompose the data over different timescales using the wavelet approach and subsequently construct a corresponding network via Pearson’s correlation. The proposed approach is illustrated and tested on two synthetic examples and one real-world example. The first synthetic case study shows the efficacy of the proposed approach in unravelling scale-specific connections, which often remain undiscovered at a single scale. The second synthetic case study illustrates that, by dividing the time series into windows and constructing a separate network for each window, we can detect significant changes in the signal structure. The real-world example investigates the behavior of the global sea surface temperature (SST) network at different timescales. Intriguingly, we notice that the spatially dependent structure in SST evolves temporally. Overall, the proposed measure has immense potential to provide essential insights for understanding and extending complex multivariate process studies at multiple scales.
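The two-step construction, wavelet decomposition followed by Pearson’s correlation, can be sketched as follows. A simple pairwise-averaging smooth stands in for the wavelet decomposition, and the correlation threshold is arbitrary:

```python
def haar_smooth(x, level):
    """Pairwise-averaging smooth: one coarse series per timescale level."""
    for _ in range(level):
        x = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]
    return x

def pearson(a, b):
    """Pearson correlation of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb) if sa and sb else 0.0

def wmc_network(series, level, threshold=0.8):
    """Edges of a scale-specific correlation network.

    series : dict mapping node -> time series; two nodes are linked when
    the correlation of their level-`level` smooths exceeds `threshold`.
    """
    coarse = {k: haar_smooth(v, level) for k, v in series.items()}
    nodes = sorted(series)
    return {(i, j) for i in nodes for j in nodes
            if i < j and pearson(coarse[i], coarse[j]) > threshold}
```

Repeating the construction at several levels yields one network per timescale, which is exactly what allows scale-specific connections to surface.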
The temporal dynamics of climate processes are spread across different timescales and, as such, the study of these processes at only one selected timescale might not reveal the complete mechanisms and interactions within and between the (sub-) processes. To capture the non-linear interactions between climatic events, the method of event synchronization has found increasing attention recently. The main drawback with the present estimation of event synchronization is its restriction to analysing the time series at one reference timescale only. The study of event synchronization at multiple scales would be of great interest to comprehend the dynamics of the investigated climate processes. In this paper, the wavelet-based multi-scale event synchronization (MSES) method is proposed by combining the wavelet transform and event synchronization. Wavelets are used extensively to comprehend multi-scale processes and the dynamics of processes across various timescales. The proposed method allows the study of spatio-temporal patterns across different timescales. The method is tested on synthetic and real-world time series in order to check its replicability and applicability. The results indicate that MSES is able to capture relationships that exist between processes at different timescales.
Quantifying the roles of single stations within homogeneous regions using complex network analysis
(2018)
Regionalization and pooling stations to form homogeneous regions or communities are essential for reliable parameter transfer, prediction in ungauged basins, and estimation of missing information. Over the years, several clustering methods have been proposed for regional analysis. Most of these methods are able to quantify the study region in terms of homogeneity but fail to provide microscopic information about the interaction between communities, as well as about each station within the communities. We propose a complex network-based approach to extract this valuable information and demonstrate its potential using a rainfall network constructed from the Indian gridded daily precipitation data. The communities were identified using a network-theoretical community detection algorithm that maximizes modularity. Further, the grid points (nodes) were classified into universal roles according to their pattern of within- and between-community connections. The method thus yields zoomed-in details of individual rainfall grids within each community.
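Such universal roles are commonly defined via two quantities: the within-community degree z-score and the participation coefficient. A sketch of both, assuming the community partition has already been obtained from modularity maximization:

```python
def node_roles(adj, community):
    """Within-community degree z-score and participation coefficient.

    adj       : dict node -> set of neighbour nodes
    community : dict node -> community id (e.g. from modularity maximization)
    Returns node -> (z, P): high z marks within-community hubs, high P
    marks nodes whose links are spread across several communities.
    """
    # within-community degree of every node
    kin = {n: sum(1 for m in adj[n] if community[m] == community[n])
           for n in adj}
    roles = {}
    for n in adj:
        peers = [kin[m] for m in adj if community[m] == community[n]]
        mean = sum(peers) / len(peers)
        sd = (sum((k - mean) ** 2 for k in peers) / len(peers)) ** 0.5
        z = (kin[n] - mean) / sd if sd else 0.0
        k = len(adj[n])
        counts = {}
        for m in adj[n]:
            counts[community[m]] = counts.get(community[m], 0) + 1
        # P = 1 - sum over communities of (links into community / degree)^2
        p = 1 - sum((c / k) ** 2 for c in counts.values()) if k else 0.0
        roles[n] = (z, p)
    return roles
```

Thresholding the (z, P) plane then sorts grid points into roles such as provincial hubs, connector hubs, and peripheral nodes.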
Hydrometric networks play a vital role in providing information for decision-making in water resource management. They should be set up optimally to provide as much information as possible, as accurately as possible, and, at the same time, be cost-effective. Although the design of hydrometric networks is a well-identified problem in hydrometeorology and has received considerable attention, there is still scope for further advancement. In this study, we use complex network analysis, where a network is defined as a collection of nodes interconnected by links, to propose a new measure that identifies critical nodes of station networks. The approach can support the design and redesign of hydrometric station networks. The science of complex networks is a relatively young field and has gained significant momentum over the last few years in areas such as brain networks, social networks, technological networks, and climate networks. The identification of influential nodes in complex networks is an important field of research. We propose a new node-ranking measure, the weighted degree–betweenness (WDB) measure, to evaluate the importance of nodes in a network. It is compared to previously proposed measures on synthetic sample networks and then applied to a real-world rain gauge network comprising 1229 stations across Germany to demonstrate its applicability. The proposed measure is evaluated using the decline rate of the network efficiency and the kriging error. The results suggest that WDB effectively quantifies the importance of rain gauges, although the benefits of the method need to be investigated in more detail.
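The abstract does not give the exact formula of the WDB measure, so the sketch below simply multiplies a node's degree by its betweenness centrality (computed with Brandes' algorithm) as an illustrative way to rank critical stations:

```python
from collections import deque

def betweenness(adj):
    """Brandes' betweenness centrality for an undirected, unweighted graph."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack, pred = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        queue = deque([s])
        while queue:                      # BFS from source s
            v = queue.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                      # back-propagate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w] / 2     # halve: undirected pairs
    return bc

def wdb_ranking(adj):
    """Rank stations by degree x betweenness (illustrative WDB stand-in)."""
    bc = betweenness(adj)
    return sorted(adj, key=lambda v: len(adj[v]) * bc[v], reverse=True)
```

In a redesign exercise, the top-ranked stations would be the ones whose removal most degrades network efficiency, which is how the measure is evaluated in the study.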
Functional characterization of ROS-responsive genes, ANAC085 and ATR7, in Arabidopsis thaliana
(2023)
This study focuses on three key aspects: (a) crude throat swab samples in a viral transport medium (VTM) as templates for RT-LAMP reactions; (b) a biotinylated DNA probe with enhanced specificity for LFA readouts; and (c) a digital semi-quantification of LFA readouts. Throat swab samples from SARS-CoV-2 positive and negative patients were used in their crude (no cleaning or pre-treatment) forms for the RT-LAMP reaction. The samples were heat-inactivated but not treated for any kind of nucleic acid extraction or purification. The RT-LAMP (20 min processing time) product was read out by an LFA approach using two labels: FITC and biotin. FITC was enzymatically incorporated into the RT-LAMP amplicon with the LF-LAMP primer, and biotin was introduced using biotinylated DNA probes, specifically for the amplicon region after RT-LAMP amplification. This assay setup with biotinylated DNA probe-based LFA readouts of the RT-LAMP amplicon was 98.11% sensitive and 96.15% specific. The LFA result was further analysed by a smartphone-based IVD device, wherein the T-line intensity was recorded. The LFA T-line intensity was then correlated with the qRT-PCR Ct value of the positive swab samples. A digital semi-quantification of RT-LAMP-LFA was reported with a correlation coefficient of R2 = 0.702. The overall RT-LAMP-LFA assay time was recorded to be 35 min with a LoD of three RNA copies/µL (Ct-33). With these three advancements, the nucleic acid testing-point of care technique (NAT-POCT) is exemplified as a versatile biosensor platform with great potential and applicability for the detection of pathogens without the need for sample storage, transportation, or pre-processing.
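The reported semi-quantification boils down to a least-squares regression of the LFA T-line intensity against the qRT-PCR Ct value and its coefficient of determination. A minimal sketch (the data points used in the test are hypothetical, not the study's measurements):

```python
def r_squared(x, y):
    """Coefficient of determination of the least-squares line y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx                      # slope
    a = my - b * mx                    # intercept
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot
```

Feeding the smartphone-recorded intensities and the corresponding Ct values into such a fit is what produces a summary statistic like the reported R² = 0.702.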
The degree of detrimental effects inflicted on mankind by the COVID-19 pandemic has increased the need to develop ASSURED (Affordable, Sensitive, Specific, User-friendly, Rapid and Robust, Equipment-free, and Deliverable) POCT (point-of-care testing) to overcome the current and any future pandemics. Much research and development effort is currently directed at relieving the diagnostic pressure built up by emerging new pathogens. LAMP (loop-mediated isothermal amplification) is a well-researched isothermal technique for specific nucleic acid amplification which can be combined with a highly sensitive immunochromatographic readout via lateral flow assays (LFA). Here we discuss LAMP-LFA robustness, sensitivity, and specificity for SARS-CoV-2 N-gene detection in cDNA and clinical swab-extracted RNA samples. The LFA readout is designed to produce highly specific results by incorporation of biotin and FITC labels via 11-dUTP and the LF (loop forward) primer, respectively. The LAMP-LFA assay was established using cDNA for the N-gene with an accuracy of 95.65%. To validate the study, 82 SARS-CoV-2-positive RNA samples were tested. Reverse transcriptase (RT)-LAMP-LFA was positive for the RNA samples with an accuracy of 81.66%; SARS-CoV-2 viral RNA was detected by RT-LAMP-LFA down to Ct 33. Our method reduced the detection time to 15 min and therefore indicates that RT-LAMP in combination with LFA represents a promising nucleic acid biosensing POCT platform that combines with smartphone-based semi-quantitative data analysis.
Etmopteridae (lantern sharks) is the most species-rich family of sharks, comprising more than 50 species.
Many species are described from few individuals, and re-collection of specimens is often hindered by the remoteness of their sampling sites.
For taxonomic studies, comparative morphological analysis of type specimens housed in natural history collections has been the main source of evidence.
In contrast, DNA sequence information has rarely been used.
Most lantern shark collection specimens, including the types, were formalin fixed before long-term storage in ethanol solutions.
The DNA damage caused by both fixation and preservation of specimens has excluded these specimens from DNA sequence-based phylogenetic analyses so far.
However, recent advances in the field of ancient DNA have allowed recovery of wet-collection specimen DNA sequence data.
Here we analyse archival mitochondrial DNA sequences, obtained using ancient DNA approaches, of two wet-collection lantern shark paratype specimens, namely Etmopterus litvinovi and E. pycnolepis, for which the type series represent the only known individuals.
Target capture of mitochondrial markers from single-stranded DNA libraries allows for phylogenetic placement of both species.
Our results suggest synonymy of E. benchleyi with E. litvinovi but support the species status of E. pycnolepis. This revised taxonomy is helpful for future conservation and management efforts, as our results indicate a larger distribution range of E. litvinovi. This study further demonstrates the importance of wet-collection type specimens as a genetic resource for taxonomic research.
Simultaneous Barcode Sequencing of Diverse Museum Collection Specimens Using a Mixed RNA Bait Set
(2022)
A growing number of publications presenting results from sequencing natural history collection specimens reflects the importance of DNA sequence information from such samples. Ancient DNA extraction and library preparation methods in combination with target gene capture are a way of unlocking archival DNA, including from formalin-fixed wet-collection material. Here we report on an experiment in which we used an RNA bait set containing baits from a wide taxonomic range of species for DNA hybridisation capture of nuclear and mitochondrial targets in natural history collection specimens. The bait set consists of 2,492 mitochondrial and 530 nuclear RNA baits and comprises specific barcode loci of diverse animal groups, including both invertebrates and vertebrates. The baits allowed capture of DNA sequence information of target barcode loci from 84% of the 37 samples tested, with nuclear markers being captured more frequently and their consensus sequences being more complete compared to mitochondrial markers. Samples from dry material had a higher rate of success than wet-collection specimens, although target sequence information could be captured from 50% of formalin-fixed samples. Our study illustrates how efforts to obtain barcode sequence information from natural history collection specimens may be combined, providing a way of implementing barcoding inventories of scientific collection material.
The color red has been implicated in a variety of social processes, including those involving mating. While previous research suggests that women sometimes wear red strategically to increase their attractiveness, the replicability of this literature has been questioned. The current research is a reasonably powered conceptual replication designed to strengthen this literature by testing whether women are more inclined to display the color red 1) during fertile (as compared with less fertile) days of the menstrual cycle, and 2) when expecting to interact with an attractive man (as compared with a less attractive man and with a control condition). Analyses controlled for a number of theoretically relevant covariates (relationship status, age, and current weather). Only the latter hypothesis received mixed support (mainly among women on hormonal birth control), whereas results concerning the former hypothesis did not reach significance: women (N = 281) displayed more red when expecting to interact with an attractive man, but findings did not support the prediction that women would increase their display of red on fertile days of the cycle. Findings thus suggested only mixed replicability for the link between the color red and psychological processes involving romantic attraction. They also illustrate the importance of further investigating the boundary conditions of color effects on everyday social processes.
A common feature in Answer Set Programming is the use of a second negation, stronger than default negation and sometimes called explicit, strong or classical negation. This explicit negation is normally used in front of atoms, rather than being allowed as a regular operator. In this paper we consider the arbitrary combination of explicit negation with nested expressions, as defined by Lifschitz, Tang and Turner. We extend the concept of reduct for this new syntax and then prove that it can be captured by an extension of Equilibrium Logic with this second negation. We study some properties of this variant and compare it to the already known combination of Equilibrium Logic with Nelson's strong negation.
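The reduct the authors extend is the classical Gelfond-Lifschitz construction for default negation. The sketch below implements that classical reduct and a brute-force stable-model check for a tiny propositional program; it does not cover the paper's nested expressions or explicit negation, and the example program is purely illustrative.

```python
from itertools import combinations

# A rule is (head, positive_body, default_negated_body), atoms as strings.
program = [
    ("p", frozenset(), frozenset({"q"})),   # p :- not q.
    ("r", frozenset({"p"}), frozenset()),   # r :- p.
]
atoms = {"p", "q", "r"}

def reduct(prog, interp):
    """Gelfond-Lifschitz reduct: drop rules whose negated body intersects
    the interpretation, then delete the remaining negative literals."""
    return [(h, pos) for (h, pos, neg) in prog if not (neg & interp)]

def minimal_model(pos_prog):
    """Least model of a positive program via fixpoint iteration."""
    model = set()
    changed = True
    while changed:
        changed = False
        for h, pos in pos_prog:
            if pos <= model and h not in model:
                model.add(h)
                changed = True
    return model

def stable_models(prog, atom_set):
    """Enumerate all interpretations; keep those equal to the minimal
    model of their own reduct (the stable-model condition)."""
    models = []
    for k in range(len(atom_set) + 1):
        for cand in combinations(sorted(atom_set), k):
            interp = set(cand)
            if minimal_model(reduct(prog, interp)) == interp:
                models.append(interp)
    return models
```

For the two-rule program above, `q` is never derivable, so `p :- not q` fires and the unique stable model is {p, r}.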
In this work we tackle the problem of checking strong equivalence of logic programs that may contain local auxiliary atoms, to be removed from their stable models and to be forbidden in any external context. We call this property projective strong equivalence (PSE). It has recently been proved that not every logic program containing auxiliary atoms can be reformulated, under PSE, as another logic program or formula without them; this is known as strongly persistent forgetting. In this paper, we introduce a conservative extension of Equilibrium Logic and its monotonic basis, the logic of Here-and-There, in which we deal with a new connective '|' that we call fork. We provide a semantic characterisation of PSE for forks and use it to show that, in this extension, it is always possible to forget auxiliary atoms under strong persistence. We further define when the obtained fork is representable as a regular formula.
Massive stars that become stripped of their hydrogen envelope through binary interaction or winds can be observed either as Wolf-Rayet stars, if they have optically thick winds, or as transparent-wind stripped-envelope stars. We approximate their evolution through evolutionary models of single helium stars, and compute detailed model grids in the initial mass range 1.5–70 M☉ for metallicities between 0.01 and 0.04, from core helium ignition until core collapse. Throughout their lifetimes some stellar models expose the ash of helium burning. We propose that models that have nitrogen-rich envelopes are candidate WN stars, while models with a carbon-rich surface are candidate WC stars during core helium burning, and WO stars afterwards. We measure the metallicity dependence of the total lifetimes of our models and the duration of their evolutionary phases. We propose an analytic estimate of the wind's optical depth to distinguish models of Wolf-Rayet stars from transparent-wind stripped-envelope stars, and find that the luminosity ranges at which WN-, WC-, and WO-type stars can exist are a strong function of metallicity. We find that all carbon-rich models produced in our grids have optically thick winds and match the luminosity distribution of observed populations. We construct population models and predict the numbers of transparent-wind stripped-envelope stars and Wolf-Rayet stars, and derive their number ratios at different metallicities. We find that as metallicity increases, the number of transparent-wind stripped-envelope stars decreases and the number of Wolf-Rayet stars increases. At high metallicities WC- and WO-type stars become more common. We apply our population models to nearby galaxies, and find that populations are more sensitive to the transition luminosity between Wolf-Rayet stars and transparent-wind helium stars than to the metallicity-dependent mass loss rates.
The levels of environmental light experienced by organisms during their behavioral activity phase deeply influence the performance of important ecological tasks. As a result, their shape and coloring may undergo a light-driven selection process via the day-night rhythmic behavior. In this study, we tested the phenotypic and genetic variability of the western Mediterranean squat lobster (Munida tenuimana). We sampled at depths with different photic conditions and, potentially, different burrow emergence rhythms. We performed day-night hauling at different depths, above and below the end of the twilight zone (i.e., 700 m, 1200 m, 1350 m, and 1500 m), to portray the occurrence of any burrow emergence rhythmicity. Collected animals were screened for shape and size (by geometric morphometry) and spectrum and color variation (by photometric analysis), as well as for sequence variation at the mitochondrial DNA gene encoding the NADH dehydrogenase subunit I. We found that weak genetic structuring and shape homogeneity occurred together with significant variation in size, with the smaller individuals living at the inferior limit of the twilight zone and the larger individuals above and below it. The infra-red wavelengths of spectral reflectance varied significantly with depth, while the blue-green ones were size-dependent and expressed in smaller animals, which showed very low spectral reflectance. The effects of solar and bioluminescent lighting are discussed as depth-dependent evolutionary forces likely influencing the behavioral rhythms and coloring of M. tenuimana.
This paper evaluates the construction of the rights of human rights defenders within international law and its shortcomings in protecting women. Human rights defenders have historically been defined on the basis of their actions as defenders. However, as Marxist-feminist scholar Silvia Federici contends, women are inherently politicised and, moreover, face obstacles to political action which are invisible to and untouchable by the law. Labour rights set an example of handling such a disadvantaged political position by placing vital importance on workers’ right to association and collective action. The paper closes with the suggestion that transposing this construction of rights to women would better protect women as human rights defenders while emphasising their capacity for self-determination in their political actions.
The subject of this work is the investigation of universal scaling laws observed in coupled chaotic systems. Progress is made by replacing the chaotic fluctuations in the perturbation dynamics by stochastic processes. First, a continuous-time stochastic model for weakly coupled chaotic systems is introduced to study the scaling of the Lyapunov exponents with the coupling strength (coupling sensitivity of chaos). By means of the Fokker-Planck equation, scaling relations are derived and confirmed by results of numerical simulations. Next, the new effect of avoided crossing of Lyapunov exponents of weakly coupled disordered chaotic systems is described, which is qualitatively similar to the energy level repulsion in quantum systems. Using the scaling relations obtained for the coupling sensitivity of chaos, an asymptotic expression for the distribution function of small spacings between Lyapunov exponents is derived and compared with results of numerical simulations. Finally, the synchronization transition in strongly coupled spatially extended chaotic systems is shown to resemble a continuous phase transition, with the coupling strength and the synchronization error as control and order parameter, respectively. Using results of numerical simulations and theoretical considerations in terms of a multiplicative noise partial differential equation, the universality classes of the two observed types of transition are determined (Kardar-Parisi-Zhang equation with saturating term, directed percolation).
We study two coupled spatially extended dynamical systems which exhibit space-time chaos. The transition to the synchronized state is treated as a nonequilibrium phase transition, where the average synchronization error is the order parameter. The transition in one-dimensional systems is found to be generically in the universality class of the Kardar-Parisi-Zhang equation with a growth-limiting term ("bounded KPZ"). For systems with very strong nonlinearities in the local dynamics, however, the transition is found to be in the universality class of directed percolation.
The behavior of the Lyapunov exponents (LEs) of a disordered system consisting of mutually coupled chaotic maps with different parameters is studied. The LEs are demonstrated to exhibit avoided crossing and level repulsion, qualitatively similar to the behavior of energy levels in quantum chaos. Recent results for the coupling dependence of the LEs of two coupled chaotic systems are used to explain the phenomenon and to derive an approximate expression for the distribution functions of LE spacings. The depletion of the level spacing distribution is shown to be exponentially strong at small values. The results are interpreted in terms of the random matrix theory.
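The Lyapunov exponents discussed above can be estimated numerically; a minimal sketch, assuming two diffusively coupled logistic maps (the map, parameters, and coupling form are illustrative choices, not the papers' exact systems), computes the two exponents from Jacobian products with QR re-orthonormalization. Sweeping the coupling and the parameter mismatch of such a pair is how avoided crossings of the exponents can be observed.

```python
import numpy as np

def lyapunov_coupled_maps(a1, a2, eps, n_iter=20000, n_trans=1000, seed=0):
    """Estimate the two Lyapunov exponents of two diffusively coupled
    logistic maps x' = (1-eps) f1(x) + eps f2(y) (and symmetrically for y)
    via Jacobian products with QR re-orthonormalization."""
    rng = np.random.default_rng(seed)
    x, y = rng.uniform(0.1, 0.9, size=2)
    f = lambda a, u: a * u * (1.0 - u)          # logistic map
    df = lambda a, u: a * (1.0 - 2.0 * u)       # its derivative
    Q = np.eye(2)
    sums = np.zeros(2)
    for n in range(n_trans + n_iter):
        # Jacobian of the coupled system at the current state.
        J = np.array([[(1 - eps) * df(a1, x), eps * df(a2, y)],
                      [eps * df(a1, x), (1 - eps) * df(a2, y)]])
        x, y = ((1 - eps) * f(a1, x) + eps * f(a2, y),
                (1 - eps) * f(a2, y) + eps * f(a1, x))
        Q, R = np.linalg.qr(J @ Q)
        if n >= n_trans:  # accumulate only after the transient
            sums += np.log(np.abs(np.diag(R)))
    return sums / n_iter

# Uncoupled identical chaotic maps: both exponents are the single-map value.
les = lyapunov_coupled_maps(3.9, 3.9, eps=0.0)
```

With eps = 0 the two exponents coincide; introducing a small coupling and a parameter mismatch a1 != a2 makes them repel rather than cross, the effect described in the abstract.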
Over the past few years, studying abroad and other international educational experiences have become increasingly valued. Nevertheless, research shows that only a minority of students actually take part in academic mobility programs. What distinguishes those students who take up these international opportunities from those who do not? In this study we reviewed recent quantitative studies on why (primarily German) students choose to travel abroad or not. This revealed a pattern of predictive factors, indicating the key role played by students' personal and social background, previous international travel, and the course of studies they are enrolled in. The study then focuses on teaching students. Both facilitating and debilitating factors are discussed and included in a model illustrating the decision-making process these students use. Finally, we discuss practical implications for ways in which international, study-related travel might be increased in the future. We suggest that higher education institutions analyze individual student characteristics and offer differentiated programs to better meet the needs of different groups, thus raising the likelihood of disadvantaged students participating in international academic travel.
Development and application of novel genetic transformation technologies in maize (Zea mays L.)
(2007)
Plant genetic engineering approaches are of pivotal importance to both basic and applied research. However, rapid commercialization of genetically engineered crops, especially maize, raises several ecological and environmental concerns, largely related to transgene flow via pollination. In most crops, the plastid genome is inherited uniparentally in a maternal manner. Consequently, a trait introduced into the plastid genome would not be transferred to the sexually compatible relatives of the crop via pollination. Thus, besides its several other advantages, plastid transformation provides transgene containment and is therefore an environmentally friendly approach to the genetic engineering of crop plants. Reliable in vitro regeneration systems allowing repeated rounds of regeneration are of utmost importance to the development of plastid transformation technologies in higher plants. While being the world's major food crops, cereals are among the most difficult plants to handle in tissue culture, which severely limits genetic engineering approaches. In maize, immature zygotic embryos provide the predominantly used material for establishing regeneration-competent cell or callus cultures for genetic transformation experiments. The procedures involved are demanding, laborious and time-consuming and depend on greenhouse facilities. In one part of this work, a novel tissue culture and plant regeneration system was developed that uses maize leaf tissue and is thus independent of zygotic embryos and greenhouse facilities. Protocols were also established for (i) the efficient induction of regeneration-competent callus from maize leaves in the dark, (ii) inducing highly regenerable callus in the light, and (iii) the use of leaf-derived callus for the generation of stably transformed maize plants. Furthermore, several selection methods were tested for developing a plastid transformation system in maize. However, stably plastid-transformed maize plants could not yet be recovered.
Possible explanations as well as suggestions for future attempts towards developing plastid transformation in maize are discussed. Nevertheless, these results represent a first essential step towards developing chloroplast transformation technology for maize, a method that requires multiple rounds of plant regeneration and selection to obtain genetically stable transgenic plants. In order to apply the newly developed transformation system to metabolic engineering of carotenoid biosynthesis, the daffodil phytoene synthase (PSY) gene was integrated into the maize genome. The results illustrate that expression of a recombinant PSY significantly increases carotenoid levels in leaves. The beta-carotene (pro-vitamin A) amounts in leaves of transgenic plants were increased by ~21% in comparison to the wild type. These results provide evidence that maize has significant potential to accumulate higher amounts of carotenoids, especially beta-carotene, through transgenic expression of phytoene synthases. Finally, progress was made towards developing transformation technologies in Peperomia (Piperaceae) by establishing an efficient leaf-based regeneration system. Factors determining plastid size and number in Peperomia, whose species display great interspecific variation in chloroplast size and number per cell, were also investigated. The results suggest that organelle size and number are regulated in a tissue-specific manner rather than in dependence on the plastid type. Investigating plastid morphology in Peperomia species with giant chloroplasts, plasmatic connections between chloroplasts (stromules) were observed under the light microscope in the absence of tissue fixation or GFP overexpression, demonstrating the relevance of these structures in vivo.
Furthermore, bacteria-like microorganisms were discovered within Peperomia cells, suggesting that this genus provides an interesting model not only for studying plastid biology but also for investigating plant-microbe interactions.
The objective and motivation behind this research is to provide applications with easy-to-use interfaces to communities of deaf and functionally illiterate users, enabling them to work without any human assistance. Although recent years have witnessed technological advancements, the availability of technology does not ensure accessibility to information and communication technologies (ICT). Extensive use of text, from menus to document contents, means that deaf or functionally illiterate users cannot access services implemented in most computer software. Consequently, most existing computer applications pose an accessibility barrier to those who are unable to read fluently. Online technologies intended for such groups should be developed in continuous partnership with primary users and include a thorough investigation into their limitations, requirements and usability barriers. In this research, I investigated existing tools in voice, web and other multimedia technologies to identify learning gaps and explored ways to enhance information literacy for deaf and functionally illiterate users. I worked on the development of user-centered interfaces to increase the capabilities of deaf and low-literacy users by enhancing lexical resources and by evaluating several multimedia interfaces for them. The interface of the platform-independent Italian Sign Language (LIS) Dictionary was developed to enhance the lexical resources for deaf users. The Sign Language Dictionary accepts Italian lemmas as input and provides their representation in the Italian Sign Language as output. The Sign Language Dictionary has 3082 signs as a set of avatar animations in which each sign is linked to a corresponding Italian lemma. I integrated the LIS lexical resources with the MultiWordNet (MWN) database to form the first LIS MultiWordNet (LMWN).
LMWN contains information about lexical relations between words, semantic relations between lexical concepts (synsets), correspondences between Italian and sign language lexical concepts, and semantic fields (domains). The approach enhances deaf users' understanding of written Italian and shows that a relatively small lexicon can cover a significant portion of MWN. Integration of LIS signs with MWN made it a useful tool for computational linguistics and natural language processing. The rule-based translation process from written Italian text to LIS was transformed into a service-oriented system. The translation process is composed of various modules, including a parser, a semantic interpreter, a generator, and a spatial allocation planner. This translation procedure has been implemented in the Java Application Building Center (jABC), a framework for extreme model-driven design (XMDD). The XMDD approach focuses on bringing software development closer to conceptual design, so that the functionality of a software solution can be understood by someone who is unfamiliar with programming concepts. The transformation addresses the heterogeneity challenge and enhances the re-usability of the system. To enhance the e-participation of functionally illiterate users, two detailed studies were conducted in the Republic of Rwanda. In the first study, the traditional (textual) interface was compared with a virtual-character-based interactive interface. The study helped to identify usability barriers, and users evaluated these interfaces according to three fundamental areas of usability, i.e. effectiveness, efficiency and satisfaction. In the second study, we developed four different interfaces to analyze the usability and effects of online assistance (consistent help) for functionally illiterate users and compared the effect of different help modes, including textual, vocal and virtual-character help, on the performance of semi-literate users.
In our newly designed interfaces, the instructions were automatically translated into Swahili. All interfaces were evaluated on the basis of task accomplishment, time consumption, System Usability Scale (SUS) rating, and the number of times help was requested. The results show that the performance of semi-literate users improved significantly when using the online assistance. The dissertation thus introduces a new development approach in which virtual characters are used as additional support for barely literate or naturally challenged users. Such components enhanced the application's utility by offering a variety of services such as translating contents into the local language, providing additional vocal information, and performing automatic translation from text to sign language. There is, of course, no single design solution that fits all users in the underlying domain. Context sensitivity, literacy and mental abilities are key factors on which I concentrated, and the results emphasize that computer interfaces must be based on a thoughtful definition of target groups, purposes and objectives.
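The System Usability Scale rating used in the evaluation follows Brooke's fixed scoring rule: odd-numbered items contribute their response minus one, even-numbered items contribute five minus their response, and the sum is multiplied by 2.5 to yield a 0-100 score. A minimal sketch:

```python
def sus_score(responses):
    """Standard System Usability Scale score from ten 1-5 Likert responses.
    Odd-numbered items contribute (response - 1), even-numbered items
    contribute (5 - response); the sum is scaled by 2.5 to 0-100."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# All-neutral answers (3s) yield the midpoint score of 50.
score = sus_score([3] * 10)
```

A respondent who strongly agrees with every positive (odd) item and strongly disagrees with every negative (even) item scores the maximum of 100.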
Purpose
The objective of the investigation was to determine the concomitant effects of upper arm blood flow restriction (BFR) and inversion on elbow flexors neuromuscular responses.
Methods
In a randomized within-subject design, 13 volunteers performed four conditions: rest (1-min upright position without BFR), control (1-min upright with BFR), 1-min inverted without BFR, and 1-min inverted with BFR. Evoked and voluntary contractile properties before, during and after a 30-s maximum voluntary contraction (MVC) exercise intervention were examined, as well as a pain scale.
Results
Inversion induced significant pre-exercise intervention decreases in elbow flexors MVC (21.1%, ηp² = 0.48, p = 0.02) and resting evoked twitch forces (29.4%, ηp² = 0.34, p = 0.03). The 30-s MVC induced significantly greater pre- to post-test decreases in potentiated twitch force (ηp² = 0.61, p = 0.0009) during inversion (75%) than upright (65.3%) conditions. Overall, BFR decreased MVC force 4.8% (ηp² = 0.37, p = 0.05). For the upright position, BFR induced 21.0% reductions in M-wave amplitude (ηp² = 0.44, p = 0.04). There were no significant differences for electromyographic activity or voluntary activation as measured with the interpolated twitch technique. For all conditions, there was a significant increase in pain scale between the 40-60 s intervals and post-30-s MVC (upright < inversion, and without BFR < with BFR).
Conclusion
The concomitant application of inversion with elbow flexors BFR only amplified neuromuscular performance impairments to a small degree. Individuals who execute forceful contractions when inverted or with BFR should be cognizant that force output may be impaired.
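The effect sizes in the Results above are partial eta squared values, defined as an effect's sum of squares divided by the effect-plus-error sum of squares. A minimal sketch with hypothetical sums of squares (illustrative values, not the study's data):

```python
def partial_eta_squared(ss_effect, ss_error):
    """Partial eta squared effect size: SS_effect / (SS_effect + SS_error)."""
    return ss_effect / (ss_effect + ss_error)

# Hypothetical sums of squares from an ANOVA table (not the study's data).
eta_p2 = partial_eta_squared(ss_effect=12.0, ss_error=13.0)
```

By construction the value lies between 0 and 1, with larger values indicating that the effect accounts for a larger share of the non-error variance.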
Natural products and their semisynthetic derivatives are an important source of drugs for the pharmaceutical industry. Bacteria are prolific producers of natural products and encode a vast diversity of natural product biosynthetic gene clusters. However, much of this diversity is inaccessible to natural product discovery. Here, we use a combination of phylogenomic analysis of the microviridin biosynthetic pathway and chemo-enzymatic synthesis of bioinformatically predicted microviridins to yield new protease inhibitors. Phylogenomic analysis demonstrated that microviridin biosynthetic gene clusters occur across the bacterial domain and encode three distinct subtypes of precursor peptides. Our analysis shed light on the evolution of microviridin biosynthesis and enabled prioritization of their chemo-enzymatic production. Targeted one-pot synthesis of four microviridins encoded by the cyanobacterium Cyanothece sp. PCC 7822 identified a set of novel and potent serine protease inhibitors, the most active of which had an IC50 value of 21.5 nM. This study advances the genome mining techniques available for natural product discovery and obviates the need to culture bacteria.
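One simple way to read an IC50 such as the reported 21.5 nM off a dose-response curve is log-linear interpolation at 50% inhibition. The sketch below uses synthetic, hypothetical concentration-inhibition data, not the study's measurements.

```python
import numpy as np

# Hypothetical dose-response data: inhibitor concentration (nM) and
# fractional inhibition of the protease (not the study's measurements).
conc = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0])
inhibition = np.array([0.05, 0.14, 0.33, 0.58, 0.82, 0.94])

# Interpolate the 50%-inhibition point on a log-concentration axis,
# where sigmoidal dose-response curves are approximately linear.
log_ic50 = np.interp(0.5, inhibition, np.log10(conc))
ic50 = 10 ** log_ic50
```

Interpolating on the log axis rather than the linear one respects the sigmoidal shape of typical dose-response curves; a full Hill-equation fit would refine the estimate.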
Vienna
(2021)
This book explores and debates the urban transformations that have taken place in Vienna over the past 30 years and their consequences in policy fields such as labour and housing, political and social participation, and the environment. Historically, European cities have been characterised by a strong association between social cohesion, quality of life, economic ambition and a robust State. Vienna is an excellent example of this. In more recent years, however, cities have been pressured to change policy principles and mechanisms in the context of demographic shifts, post-industrial transformations and welfare recalibration, which have led to worsened social conditions in many cities. Each chapter in this volume discusses Vienna's responses to these pressures in key policy arenas, looking at outcomes from the context-specific local arrangements. Against a theoretical framework debating the European city as a model of inclusion and social justice, the authors explore the local capacity to innovate urban policies and to address new social risks, while paying attention to potential trade-offs.
The book questions and assesses the city's resilience using time series and an institutional analysis of four key dimensions that characterise the European city model within the context of post-industrial transition: redistribution, recognition, representation and sustainability. It offers a multiscalar perspective of urban governance through labour, housing, participatory and environmental policies, bringing together different levels and public policy types.
Aims. We present an extensive study of the BL Lac object Mrk 501 based on a data set collected during the multi-instrument campaign spanning from 2009 March 15 to 2009 August 1, which includes, among other instruments, MAGIC, VERITAS, Whipple 10 m, and Fermi-LAT to cover the gamma-ray range from 0.1 GeV to 20 TeV; RXTE and Swift to cover wavelengths from UV to hard X-rays; and GASP-WEBT, which provides coverage of radio and optical wavelengths. Optical polarization measurements were provided for a fraction of the campaign by the Steward and St. Petersburg observatories. We evaluate the variability of the source and interband correlations, the gamma-ray flaring activity occurring in May 2009, and interpret the results within two synchrotron self-Compton (SSC) scenarios. Methods. The multiband variability observed during the full campaign is addressed in terms of the fractional variability, and the possible correlations are studied by calculating the discrete correlation function for each pair of energy bands where the significance was evaluated with dedicated Monte Carlo simulations. The space of SSC model parameters is probed following a dedicated grid-scan strategy, allowing for a wide range of models to be tested and offering a study of the degeneracy of model-to-data agreement in the individual model parameters, hence providing a less biased interpretation than the "single-curve SSC model adjustment" typically reported in the literature. Results. We find an increase in the fractional variability with energy, while no significant interband correlations of flux changes are found on the basis of the acquired data set. The SSC model grid-scan shows that the flaring activity around May 22 cannot be modeled adequately with a one-zone SSC scenario (using an electron energy distribution with two breaks), while it can be suitably described within a two (independent) zone SSC scenario. 
Here, one zone is responsible for the quiescent emission from the averaged 4.5-month observing period, while the other one, which is spatially separated from the first, dominates the flaring emission occurring at X-rays and very-high-energy (> 100 GeV, VHE) gamma-rays. The flaring activity from May 1, which coincides with a rotation of the electric vector polarization angle (EVPA), cannot be satisfactorily reproduced by either a one-zone or a two-independent-zone SSC model, yet this is partially affected by the lack of strictly simultaneous observations and the presence of large flux changes on sub-hour timescales (detected at VHE gamma rays). Conclusions. The higher variability in the VHE emission and lack of correlation with the X-ray emission indicate that, at least during the 4.5-month observing campaign in 2009, the highest energy (and most variable) electrons that are responsible for the VHE gamma rays do not make a dominant contribution to the ~1 keV emission. Alternatively, there could be a very variable component contributing to the VHE gamma-ray emission in addition to that coming from the SSC scenario. The studies with our dedicated SSC grid-scan show that there is some degeneracy in both the one-zone and the two-zone SSC scenarios probed, with several combinations of model parameters yielding a similar model-to-data agreement, and some parameters better constrained than others. The observed gamma-ray flaring activity, with the EVPA rotation coincident with the first gamma-ray flare, resembles those reported previously for low frequency peaked blazars, hence suggesting that there are many similarities in the flaring mechanisms of blazars with different jet properties.
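The correlation analysis described above can be illustrated with a minimal sketch of the discrete correlation function of Edelson & Krolik (1988) for unevenly sampled light curves. This is an illustrative implementation only, not the collaboration's actual analysis code, and the function and argument names are invented for the example:

```python
import numpy as np

def discrete_correlation(t1, x1, e1, t2, x2, e2, lags, dlag):
    """Discrete correlation function (Edelson & Krolik 1988) between two
    unevenly sampled light curves given as (time, flux, flux_error) arrays."""
    m1, m2 = x1.mean(), x2.mean()
    # Noise-corrected standard deviations; assumes the intrinsic variance
    # exceeds the mean measurement variance.
    s1 = np.sqrt(x1.var(ddof=1) - (e1 ** 2).mean())
    s2 = np.sqrt(x2.var(ddof=1) - (e2 ** 2).mean())
    # Unbinned DCF value for every pair of measurements.
    dt = t2[None, :] - t1[:, None]
    udcf = (x1[:, None] - m1) * (x2[None, :] - m2) / (s1 * s2)
    dcf = np.empty(len(lags))
    for k, lag in enumerate(lags):
        sel = np.abs(dt - lag) < dlag / 2  # pairs falling into this lag bin
        dcf[k] = udcf[sel].mean() if sel.any() else np.nan
    return dcf
```

Significance is then assessed, as in the campaign described above, by applying the same estimator to many simulated, intrinsically uncorrelated light curves (the dedicated Monte Carlo step) and comparing the observed DCF values against the resulting distribution.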
Aims. We aim to characterize the multiwavelength emission from Markarian 501 (Mrk 501), quantify the energy-dependent variability, study the potential multiband correlations, and describe the temporal evolution of the broadband emission within leptonic theoretical scenarios. Methods. We organized a multiwavelength campaign to take place between March and July of 2012. Excellent temporal coverage was obtained with more than 25 instruments, including the MAGIC, FACT and VERITAS Cherenkov telescopes, the instruments on board the Swift and Fermi spacecraft, and the telescopes operated by the GASP-WEBT collaboration. Results. Mrk 501 showed a very high energy (VHE) gamma-ray flux above 0.2 TeV of ~0.5 times the Crab Nebula flux (CU) for most of the campaign. The highest activity occurred on 2012 June 9, when the VHE flux was ~3 CU, and the peak of the high-energy spectral component was found to be at ~2 TeV. Both the X-ray and VHE gamma-ray spectral slopes were measured to be extremely hard, with spectral indices <2 during most of the observing campaign, regardless of the X-ray and VHE flux. This study reports the hardest Mrk 501 VHE spectra measured to date. The fractional variability was found to increase with energy, with the highest variability occurring at VHE. Using the complete data set, we found a correlation between the X-ray and VHE bands; however, if the June 9 flare is excluded, the correlation disappears (significance <3 sigma) despite the existence of substantial variability in the X-ray and VHE bands throughout the campaign. Conclusions. The unprecedentedly hard X-ray and VHE spectra measured imply that their low- and high-energy components peaked above 5 keV and 0.5 TeV, respectively, during a large fraction of the observing campaign, and hence that Mrk 501 behaved like an extreme high-frequency-peaked blazar (EHBL) throughout the 2012 observing season. 
This suggests that being an EHBL may not be a permanent characteristic of a blazar, but rather a state which may change over time. The data set acquired shows that the broadband spectral energy distribution (SED) of Mrk 501, and its transient evolution, is very complex, requiring, within the framework of synchrotron self-Compton (SSC) models, various emission regions for a satisfactory description. Nevertheless, the one-zone SSC scenario can successfully describe the segments of the SED where most energy is emitted, with a significant correlation between the electron energy density and the VHE gamma-ray activity, suggesting that most of the variability may be explained by the injection of high-energy electrons. The one-zone SSC scenario used reproduces the behavior seen between the measured X-ray and VHE gamma-ray fluxes, and predicts that the correlation becomes stronger with increasing energy of the X-rays.
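The fractional variability quoted in both Mrk 501 studies is a standard estimator of excess variance beyond measurement noise, normalised to the mean flux. A minimal sketch following the commonly used definition (Vaughan et al. 2003), with an invented function name and assuming flux and flux-error arrays:

```python
import numpy as np

def fractional_variability(flux, flux_err):
    """Fractional variability amplitude F_var (Vaughan et al. 2003):
    sqrt of the noise-corrected variance, normalised to the mean flux.
    Returns NaN when the measurement noise exceeds the raw variance."""
    mean = flux.mean()
    s2 = flux.var(ddof=1)           # sample variance of the light curve
    sig2 = (flux_err ** 2).mean()   # mean square measurement error
    excess = s2 - sig2              # intrinsic (excess) variance
    return np.sqrt(excess) / mean if excess > 0 else np.nan
```

Computing this per energy band is what yields statements such as "the fractional variability increases with energy": a band whose flux fluctuates more, relative to its mean and beyond its error bars, gives a larger F_var.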
Context. The large jet kinetic power and non-thermal processes occurring in the microquasar SS 433 make this source a good candidate for a very high-energy (VHE) gamma-ray emitter. Gamma-ray fluxes above the sensitivity limits of current Cherenkov telescopes have been predicted for both the central X-ray binary system and the interaction regions of SS 433 jets with the surrounding W50 nebula. Non-thermal emission at lower energies has been previously reported, indicating that efficient particle acceleration is taking place in the system. Aims. We explore the capability of SS 433 to emit VHE gamma rays during periods in which the flux attenuation due to the periodic eclipses (P_orb ~ 13.1 days) and the precession of the circumstellar disk (P_pre ~ 162 days) covering the central binary system is expected to be at its minimum. The eastern and western SS 433/W50 interaction regions are also examined using the whole data set available. We aim to constrain some theoretical models previously developed for this system with our observations. Methods. We made use of dedicated observations of SS 433 taken from 2006 to 2011 with the Major Atmospheric Gamma Imaging Cherenkov telescopes (MAGIC) and the High Energy Stereoscopic System (H.E.S.S.). These observations were combined for the first time, amounting to a total effective observation time of 16.5 h, scheduled to coincide with the expected phases of minimum absorption of the putative VHE emission. Gamma-ray attenuation does not affect the jet/medium interaction regions; in this case, a larger data set amounting to ~40-80 h, depending on the region, was analysed. Results. No evidence of VHE gamma-ray emission either from the central binary system or from the eastern/western interaction regions was found. Upper limits were computed for the combined data set. 
Differential fluxes from the central system are found to be ≲10^-12-10^-13 TeV^-1 cm^-2 s^-1 in an energy interval ranging from a few hundred GeV to a few TeV. Integral flux limits down to ~10^-12-10^-13 ph cm^-2 s^-1 and ~10^-13-10^-14 ph cm^-2 s^-1 are obtained at 300 and 800 GeV, respectively. Our results are used to place constraints on the particle acceleration fraction at the inner jet regions and on the physics of the jet/medium interactions. Conclusions. Our findings suggest that the fraction of the jet kinetic power transferred to relativistic protons must be relatively small in SS 433, q_p ≤ 2.5 × 10^-5, to explain the lack of TeV and neutrino emission from the central system. At the SS 433/W50 interface, the presence of magnetic fields ≳10 μG is derived assuming a synchrotron origin for the observed X-ray emission. This also implies the presence of high-energy electrons with E_e up to 50 TeV, preventing an efficient production of gamma-ray fluxes in these interaction regions.
In the present work, we study wave phenomena in strongly nonlinear lattices. Such lattices are characterized by the absence of classical linear waves. We demonstrate that compactons – strongly localized solitary waves with tails decaying faster than exponentially – exist and play a major role in the dynamics of the systems under consideration. We investigate compactons in different physical setups. One part deals with lattices of dispersively coupled limit-cycle oscillators, which have various applications in the natural sciences, such as Josephson junction arrays or coupled Ginzburg-Landau equations. Another part deals with Hamiltonian lattices; here, a prominent example in which compactons can be found is the granular chain. In the third part, we study systems related to the discrete nonlinear Schrödinger equation, which describes, for example, coupled optical waveguides or the dynamics of Bose-Einstein condensates in optical lattices. Our investigations are based on a numerical method for solving the traveling-wave equation, which yields a quasi-exact compacton solution (up to numerical errors). Another ansatz employed throughout this work is the quasi-continuous approximation, in which the lattice is described by a continuous medium; here, compactons are found analytically and are defined on a truly compact support. Remarkably, both approaches give similar qualitative and quantitative results. Additionally, we study the dynamical properties of compactons by means of numerical simulation of the lattice equations. In particular, we concentrate on their emergence from physically realizable initial conditions as well as on their stability under collisions. We show that the collisions are not exactly elastic: a small part of the energy remains at the location of the collision. In finite lattices, this remaining part then triggers a multiple-scattering process resulting in a chaotic state.
Wave energy harvesting could be a substantial renewable energy source without impact on the global climate and ecology, yet practical attempts have struggled with the problems of wear and catastrophic failure. An innovative technology for ocean wave energy harvesting was recently proposed, based on the use of soft capacitors. This study presents a realistic theoretical and numerical model for the quantitative characterization of this harvesting method. Parameter regions with optimal behavior are found, and novel material descriptors are determined, which dramatically simplify analysis. The characteristics of currently available materials are evaluated, and found to merit a very conservative estimate of 10 years for raw material cost recovery.
We study localized traveling waves and chaotic states in strongly nonlinear one-dimensional Hamiltonian lattices. We show that the solitary waves are superexponentially localized and present an accurate numerical method allowing one to find them for an arbitrary nonlinearity index. Compactons evolve from rather general initially localized perturbations and collide nearly elastically. Nevertheless, on a long time scale for finite lattices an extensive chaotic state is generally observed. Because of the system's scaling, these dynamical properties are valid for any energy.
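As an illustration of the kind of system studied in these works, the following sketch integrates a Hamiltonian chain with a purely anharmonic inter-site potential V(d) = |d|^(n+1)/(n+1), one standard form of a strongly nonlinear lattice; the specific model, nonlinearity index and numerical scheme of the works themselves may differ:

```python
import numpy as np

def force(u, n):
    # Inter-site forces for the coupling potential V(d) = |d|**(n+1)/(n+1).
    d = np.diff(u)                 # d_i = u_{i+1} - u_i
    f = np.abs(d) ** (n - 1) * d   # V'(d_i)
    acc = np.zeros_like(u)
    acc[:-1] += f                  # pull from the right neighbour
    acc[1:] -= f                   # pull from the left neighbour
    return acc

def energy(u, p, n):
    d = np.diff(u)
    return 0.5 * (p ** 2).sum() + (np.abs(d) ** (n + 1)).sum() / (n + 1)

def simulate(u, p, n, dt, steps):
    """Velocity-Verlet (symplectic) integration of the nonlinear chain."""
    a = force(u, n)
    for _ in range(steps):
        p += 0.5 * dt * a
        u += dt * p
        a = force(u, n)
        p += 0.5 * dt * a
    return u, p
```

Starting from a single localized momentum kick, such simulations show the emergence of compact traveling pulses from rather general initial perturbations, as described above; the symplectic scheme keeps the total energy and momentum conserved to high accuracy, so that long-time dynamical properties (collisions, eventual chaotic states) can be followed reliably.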
The German Enlightenment
(2017)
The term Enlightenment (or Aufklärung) remains heavily contested. Even when historians delimit the remit of the concept, assigning it to a particular historical period rather than to an intellectual or moral programme, the public resonance of the Enlightenment remains high and problematic—especially when equated in an essentialist manner with modernity or some core values of ‘the West’. This Forum has been convened to discuss recent research on the Enlightenment in Germany, different views of the term and its ideological use in public discourse outside academia (and sometimes within it).