The interaction between surfaces displaying end-grafted hydrophilic polymer brushes plays important roles in biology and in many wet-technological applications. The outer surfaces of Gram-negative bacteria, for example, are composed of lipopolysaccharide (LPS) molecules exposing oligo- and polysaccharides to the aqueous environment. This unique, structurally complex biological interface is of great scientific interest as it mediates the interaction of bacteria with neighboring bacteria in colonies and biofilms. The interaction between polymer-decorated surfaces is generally coupled to the distance-dependent conformation of the polymer chains. Therefore, structural insight into the interacting surfaces is a prerequisite to understand the interaction characteristics as well as the underlying physical mechanisms. This problem has been addressed by theory, but accurate experimental data on polymer conformations under confinement are rare, because obtaining perturbation-free structural insight into buried soft interfaces is inherently difficult.
In this thesis, lipid membrane surfaces decorated with hydrophilic polymers of technological and biological relevance are investigated under controlled interaction conditions, i.e., at defined surface separations. For this purpose, dedicated sample architectures and experimental tools are developed. Via ellipsometry and neutron reflectometry, pressure-distance curves and distance-dependent polymer conformations, in terms of brush compression and mutual interpenetration, are determined. Additional element-specific structural insight into the end-point distribution of interacting brushes is obtained by standing-wave x-ray fluorescence (SWXF).
The methodology is first established for poly(ethylene glycol) (PEG) brushes of defined length and grafting density. For this system, neutron reflectometry revealed pronounced brush interpenetration, which is not captured by common brush theories and therefore motivates rigorous simulation-based treatments. In a second step, the same approach is applied to realistic mimics of the outer surfaces of Gram-negative bacteria: monolayers of wild-type LPSs extracted from E. coli O55:B5, displaying strain-specific O-side chains. The neutron reflectometry experiments yield unprecedented structural insight into bacterial interactions, which are of great relevance for the properties of biofilms.
Cationic azobenzene-containing surfactants are capable of condensing DNA in solution with the formation of nanosized particles that can be employed in gene delivery. The surfactant/DNA concentration ratio and the solution's ionic strength determine the outcome of the DNA-surfactant interaction: the formation of complexes with micelle-like surfactant associates on the DNA, which induces DNA shrinkage; DNA precipitation; or DNA condensation with the emergence of nanosized particles. UV and fluorescence spectroscopy, low-gradient viscometry and flow birefringence methods were employed to investigate DNA-surfactant and surfactant-surfactant interactions at different NaCl concentrations, [NaCl]. It was observed that [NaCl] (or the Debye screening radius) determines the surfactant-surfactant interaction in solutions without DNA. Monomers, micelles and non-micellar associates of azobenzene-containing surfactants with head-to-tail orientation of molecules were distinguished by the features of their absorption spectra. The novel data enabled us to conclude that it is the type of associates (together with the concentration of the components) that determines the outcome of the DNA-surfactant interaction. Predominance of head-to-tail associates at 0.01 M < [NaCl] < 0.5 M induces DNA aggregation and, in some cases, DNA precipitation. High NaCl concentrations (above 0.8 M) prevent the electrostatic attraction of surfactants to DNA phosphates required for complex formation. DAPI dye luminescence in solutions with DNA-surfactant complexes shows that surfactant tails overlap the DNA minor groove. The addition of di- and trivalent metal ions before and after surfactant binding to DNA indicates that the bound surfactant molecules are located on the DNA in islets.
The purpose of the present study was to examine whether parental mediation moderates the longitudinal associations between being a bystander of cyberbullying and cyberbullying perpetration and victimization. Participants were 1067 7th and 8th graders between 12 and 15 years old (51% female) from six middle schools in predominantly middle-class neighborhoods in the Midwestern United States. Increases in being a bystander of cyberbullying were positively related to restrictive and instructive parental mediation. Restrictive parental mediation was positively related to Time 2 (T2) cyberbullying victimization, while instructive parental mediation was negatively related to T2 cyberbullying perpetration and victimization. Restrictive parental mediation moderated the association between being a bystander of cyberbullying and T2 cyberbullying victimization: increases in restrictive parental mediation strengthened the positive relationship between these variables. In addition, instructive mediation moderated the association between being a bystander of cyberbullying and T2 cyberbullying victimization such that increases in this form of parental mediation weakened the association. The current findings indicate a need for parents to be aware of how they can influence adolescents' involvement in cyberbullying as bullies and victims. In addition, greater attention should be given to developing parental intervention programs that focus on the role of parents in helping to mitigate adolescents' likelihood of cyberbullying involvement.
Blockchain
(2018)
The term blockchain has recently become a buzzword, but few know what exactly lies behind it. According to a survey published in the first quarter of 2017, the term is known to only 35 percent of German medium-sized enterprises. Yet, owing to its rapid development and its global conquest of diverse markets, blockchain technology is of great interest to the mass media.
Many therefore see blockchain technology either as an all-purpose weapon to which only a few have access, or as a hacker technology for secret deals on the darknet. In fact, the innovation of blockchain technology lies in its successful combination of already existing approaches: decentralized networks, cryptography, and consensus models. This innovative concept makes it possible to exchange values in a decentralized system, without requiring trust between its nodes (e.g., users).
With this study, the Hasso Plattner Institute aims to help readers form their own opinion on blockchain technology and to distinguish which of its properties are truly innovative and which are nothing more than hype.
The authors of the present work analyze the positive and negative properties that characterize the blockchain architecture and present possible adaptations and solutions that can contribute to an efficient use of the technology. Before committing to this technology, every company is advised to first define a clear goal for the intended application, one that can be pursued with a reasonable cost-benefit ratio. Both the possibilities and the limitations of blockchain technology must be taken into account. The study concisely summarizes the relevant steps to be considered in this context.
The study also addresses pressing issues such as the scalability of the blockchain, the choice of a suitable consensus algorithm, and security, including various types of possible attacks and the corresponding countermeasures. New blockchains, for example, run the risk of offering less security, since changes to the existing technology can lead to security gaps and defects.
After discussing the innovative properties and problems of blockchain technology, the study turns to its implementation. Interested companies have many implementation options at their disposal. The numerous applications are either based on their own blockchain or make use of existing, widely adopted blockchain systems. Numerous consortia and projects offer "blockchain-as-a-service" and support other companies in developing, testing, and deploying applications.
The study provides a detailed overview of numerous relevant application areas and projects in the field of blockchain technology. Because the technology is still relatively young and evolving rapidly, it lacks uniform standards that would allow different systems to interoperate and to which all developers could adhere. Currently, developers orient themselves towards the Bitcoin, Ethereum, and Hyperledger systems, which serve as the basis for many further blockchain applications.
The aim is to give readers a clear and comprehensive overview of blockchain technology and its possibilities.
The improvement of power is an objective in the training of athletes. In order to identify effective exercise methods, basic research into the mechanisms of muscular activity is required. The purpose of this study is to investigate whether a muscular pre-activation prior to an external impulse-like force impact has an effect on the maximal explosive eccentric Adaptive Force (xpAFeccmax). This power capability combines several probable power-enhancing mechanisms. To measure the xpAFeccmax, an innovative pneumatic device was used. During the measurement, the subject tries to hold an isometric position as long as possible. At the moment the subject's maximal isometric holding strength is exceeded, the muscle action merges into an eccentric one. This process is very close to motions in sports where an adaptation of the neuromuscular system is required, e.g., force impacts caused by uneven surfaces during skiing. To investigate the effect of pre-activation on the xpAFeccmax of the quadriceps femoris muscle, n = 20 subjects had to pass three different pre-activation levels in a randomized order (level 1: 0.4 bar, level 2: 0.8 bar, level 3: 1.2 bar). After adjusting the standardized pre-pressure by pushing against the interface, an impulse-like load impacted on the distal tibia of the subject. During this, the xpAFeccmax was detected. The maximal voluntary isometric contraction (MVIC) was also measured. The torque values of the xpAFeccmax were compared with regard to the pre-activation levels. The results show a significant positive relation between the pre-activation of the quadriceps femoris muscle and the xpAFeccmax (male: p = 0.000, η2 = 0.683; female: p = 0.000, η2 = 0.907). The average percentage increase of torque amounted to +28.15 ± 25.4% between MVIC and xpAFeccmax at pre-pressure level 1, +12.09 ± 7.9% for the xpAFeccmax comparing pre-pressure levels 1 vs. 2, and +2.98 ± 4.2% comparing levels 2 and 3. A higher, but not maximal, muscular activation prior to a fast-impacting eccentric load seems to produce an immediate increase in force output. Different possible physiological explanations and the use as a potential training method are discussed.
Analysis of social media using digital methods is a flourishing approach. However, the relatively easy availability of data collected via platform application programming interfaces has arguably led to the predominance of single-platform research of social media. Such research has also privileged the role of text in social media analysis, as a form of data that is more readily gathered and searchable than images. In this paper, we challenge both of these prevailing forms of social media research by outlining a methodology for visual cross-platform analysis (VCPA), defined as the study of still and moving images across two or more social media platforms. Our argument contains three steps. First, we argue that cross-platform analysis addresses a gap in research methods in that it acknowledges the interplay between a social phenomenon under investigation and the medium within which it is being researched, thus illuminating the different affordances and cultures of web platforms. Second, we build on the literature on multimodal communication and platform vernacular to provide a rationale for incorporating the visual into cross-platform analysis. Third, we reflect on an experimental cross-platform analysis of images within social media posts (n = 471,033) used to communicate climate change, to advance different modes of macro- and meso-level analysis that are natively visual: image-text networks, image plots and composite images. We conclude by assessing the research pathways opened up by VCPA, delineating potential contributions to empirical research and theory and the potential impact on practitioners of social media communication.
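As a toy illustration of the first of these natively visual modes, the following sketch (hypothetical records and field names, not the study's pipeline) builds a small image-text network with networkx, linking images to the hashtags of the posts they appear in across platforms:

```python
# Hypothetical sketch of an image-text network: images and hashtags form a
# bipartite graph; hashtags reached by images from several platforms hint at
# cross-platform visual vernaculars. Data and field names are illustrative.
import networkx as nx

posts = [
    {"image_id": "img_001", "platform": "twitter",   "hashtags": ["climatechange", "flood"]},
    {"image_id": "img_002", "platform": "instagram", "hashtags": ["climatechange", "polarbear"]},
    {"image_id": "img_001", "platform": "reddit",    "hashtags": ["flood"]},
]

G = nx.Graph()
for post in posts:
    # One node set for images, one for hashtags.
    G.add_node(post["image_id"], kind="image")
    G.nodes[post["image_id"]].setdefault("platforms", set()).add(post["platform"])
    for tag in post["hashtags"]:
        G.add_node(tag, kind="hashtag")
        G.add_edge(post["image_id"], tag)

# Which platforms does each hashtag's imagery circulate on?
for node, data in G.nodes(data=True):
    if data["kind"] == "hashtag":
        platforms = {p for n in G.neighbors(node) for p in G.nodes[n]["platforms"]}
        print(node, sorted(platforms))
```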
The last years have shown an increasing sophistication of attacks against enterprises. Traditional security solutions like firewalls, anti-virus systems, and Intrusion Detection Systems (IDSs) in general are no longer sufficient to protect an enterprise against these advanced attacks. One popular approach to tackle this issue is to collect and analyze the events generated across the IT landscape of an enterprise. This task is achieved by the utilization of Security Information and Event Management (SIEM) systems. However, the majority of currently existing SIEM solutions are not capable of handling the massive volume of data and the diversity of event representations. Even if these solutions can collect the data at a central place, they are able neither to extract all relevant information from the events nor to correlate events across various sources. Hence, only rather simple attacks are detected, whereas complex attacks consisting of multiple stages remain undetected. Undoubtedly, security operators of large enterprises are faced with a typical Big Data problem.
In this thesis, we propose and implement a prototypical SIEM system named Real-Time Event Analysis and Monitoring System (REAMS) that addresses the Big Data challenges of event data with common paradigms, such as data normalization, multi-threading, in-memory storage, and distributed processing. In particular, a mostly stream-based event processing workflow is proposed that collects, normalizes, persists and analyzes events in near real-time. In this regard, we have made various contributions in the SIEM context. First, we propose a high-performance normalization algorithm that is highly parallelized across threads and distributed across nodes. Second, we persist events into an in-memory database for fast querying and correlation in the context of attack detection. Third, we propose various analysis layers, such as anomaly- and signature-based detection, that run on top of the normalized and correlated events. As a result, we demonstrate our capability to detect previously known as well as unknown attack patterns. Lastly, we have investigated the integration of cyber threat intelligence (CTI) into the analytical process, for instance, by correlating monitored user accounts with previously collected public identity leaks to identify possibly compromised user accounts.
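As a rough illustration of the normalization stage of such a workflow, the following sketch (not the actual REAMS code; the regex and field schema are assumptions for the example) maps raw log lines onto a common schema using a pool of worker threads:

```python
# Illustrative thread-parallel log normalization: raw events are mapped to a
# common field schema before analysis. Regex and schema are assumptions.
import re
from concurrent.futures import ThreadPoolExecutor

SSH_FAIL = re.compile(r"Failed password for (?P<user>\S+) from (?P<src_ip>\S+)")

def normalize(raw: str) -> dict:
    """Map one raw event to a common field schema; unknown events pass through."""
    m = SSH_FAIL.search(raw)
    if m:
        return {"type": "auth_failure", "user": m["user"], "src_ip": m["src_ip"]}
    return {"type": "unparsed", "raw": raw}

raw_events = [
    "Failed password for root from 10.0.0.5",
    "Accepted password for alice from 10.0.0.7",
]

# The thread pool stands in for the per-node worker threads of the workflow.
with ThreadPoolExecutor(max_workers=4) as pool:
    for event in pool.map(normalize, raw_events):
        print(event)
```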
In summary, we show that a SIEM system can indeed monitor a large enterprise environment with a massive load of incoming events. As a result, complex attacks spanning across the whole network can be uncovered and mitigated, which is an advancement in comparison to existing SIEM systems on the market.
The rapid development and integration of Information Technologies over the last decades have influenced all areas of our life, including the business world. Yet not only are modern enterprises becoming digitalised; security and criminal threats are also moving into the digital sphere. To withstand these threats, modern companies must be aware of all activities within their computer networks.
The keystone of such continuous security monitoring is a Security Information and Event Management (SIEM) system that collects and processes all security-related log messages from the entire enterprise network. However, digital transformations and technologies, such as network virtualisation and the widespread usage of mobile communications, lead to a constantly increasing number of monitored devices and systems. As a result, the amount of data that has to be processed by a SIEM system is increasing rapidly. Besides that, in-depth security analysis of the captured data requires the application of rather sophisticated outlier detection algorithms that have a high computational complexity. Existing outlier detection methods often suffer from performance issues and are not directly applicable to high-speed and high-volume analysis of heterogeneous security-related events, which has become a major challenge for modern SIEM systems.
This thesis provides a number of solutions to the challenges mentioned. First, it proposes a new SIEM system architecture for high-speed processing of security events, implementing parallel, in-memory and in-database processing principles. The proposed architecture also utilises the most efficient log format for high-speed data normalisation. Next, the thesis offers several novel high-speed outlier detection methods, including a generic Hybrid Outlier Detection method that can efficiently be used for Big Data analysis. Finally, a special User Behaviour Outlier Detection method is proposed for better threat detection and analysis of particular user behaviour cases.
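To convey the flavor of the single-pass, high-speed analysis such methods require, here is a minimal streaming outlier detector; it is a plain z-score rule over Welford running statistics, not the thesis's Hybrid or User Behaviour Outlier Detection methods:

```python
# A minimal streaming outlier detector: constant memory, one pass over the
# event stream, as high-volume SIEM analysis demands. Threshold is illustrative.
import math

class StreamingZScore:
    def __init__(self, threshold: float = 3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0          # sum of squared deviations (Welford)
        self.threshold = threshold

    def update(self, x: float) -> bool:
        """Return True if x is an outlier w.r.t. the events seen so far."""
        outlier = False
        if self.n > 1:
            std = math.sqrt(self.m2 / (self.n - 1))
            if std > 0 and abs(x - self.mean) / std > self.threshold:
                outlier = True
        # Single-pass update keeps memory constant regardless of volume.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return outlier

detector = StreamingZScore()
logins_per_minute = [3, 4, 2, 5, 3, 4, 250]   # last value: suspicious burst
print([detector.update(v) for v in logins_per_minute])
```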
The proposed architecture and methods were evaluated in terms of both performance and accuracy, and compared with a classical architecture and existing algorithms. These evaluations were performed on multiple data sets, including simulated data, a well-known public intrusion detection data set, and real data from a large multinational enterprise. The evaluation results confirmed the high performance and efficacy of the developed methods.
All concepts proposed in this thesis were integrated into the prototype of the SIEM system, capable of high-speed analysis of Big Security Data, which makes this integrated SIEM platform highly relevant for modern enterprise security applications.
Blockchain
(2018)
The term blockchain has recently become a buzzword, but only a few know what exactly lies behind this approach. According to a survey issued in the first quarter of 2017, the term is known to only 35 percent of German medium-sized enterprise representatives. However, blockchain technology is very interesting for the mass media because of its rapid development and its global capture of different markets.
Thus, many see blockchain technology either as an all-purpose weapon to which only a few have access, or as a hacker technology for secret deals in the darknet. In fact, the innovation of blockchain technology lies in its successful combination of already existing approaches: decentralized networks, cryptography, and consensus models. This innovative concept makes it possible to exchange values in a decentralized system; at the same time, no trust between its nodes (e.g., users) is required.
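To make the combination of hashing and chained blocks concrete, here is a minimal, self-contained sketch of a hash-linked chain; consensus and networking, the other ingredients named above, are deliberately omitted:

```python
# A minimal hash-chained block structure: tampering with any block breaks the
# links to all later blocks. Illustrative only; no consensus or networking.
import hashlib
import json

def block_hash(block: dict) -> str:
    # Deterministic serialization, then SHA-256.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, data: str) -> None:
    prev = chain[-1]
    chain.append({
        "index": prev["index"] + 1,
        "prev_hash": block_hash(prev),   # cryptographic link to predecessor
        "data": data,
    })

def is_valid(chain: list) -> bool:
    # Each stored prev_hash must match the recomputed hash of the block before it.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = [{"index": 0, "prev_hash": "0" * 64, "data": "genesis"}]
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
print(is_valid(chain))                     # True
chain[1]["data"] = "Alice pays Bob 500"    # tamper with history
print(is_valid(chain))                     # False
```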
With this study the Hasso Plattner Institute would like to help readers form their own opinion about blockchain technology, and to distinguish between truly innovative properties and hype.
The authors of the present study analyze the positive and negative properties of the blockchain architecture and suggest possible solutions that can contribute to the efficient use of the technology. We recommend that every company define a clear target for the intended application, achievable with a reasonable cost-benefit ratio, before deciding on this technology. Both the possibilities and the limitations of blockchain technology need to be considered. The relevant steps that must be taken in this respect are summarized for the reader in this study.
Furthermore, this study elaborates on urgent problems such as the scalability of the blockchain, the choice of an appropriate consensus algorithm, and security, including various types of possible attacks and their countermeasures. New blockchains, for example, run the risk of offering less security, as changes to the existing technology can lead to security gaps and failures.
After discussing the innovative properties and problems of blockchain technology, its implementation is discussed. Many implementation options are available to companies interested in realizing blockchain solutions. The numerous applications either have their own blockchain as a basis or use existing and widespread blockchain systems. Various consortia and projects offer "blockchain-as-a-service" and help other companies to develop, test and deploy their own applications.
This study gives a detailed overview of diverse relevant applications and projects in the field of blockchain technology. As this technology is still relatively young and fast developing, it lacks uniform standards that would allow the cooperation of different systems and to which all developers could adhere. Currently, developers orient themselves towards the Bitcoin, Ethereum and Hyperledger systems, which serve as the basis for many other blockchain applications.
The goal is to give readers a clear and comprehensive overview of blockchain technology and its capabilities.
Genetic and environmental factors both contribute to cognitive test performance. A substantial increase in average intelligence test results in the second half of the previous century within one generation is unlikely to be explained by genetic changes. One possible explanation for the strong malleability of cognitive performance measures is that environmental factors modify gene expression via epigenetic mechanisms. Epigenetic factors may help to understand the recent observations of an association between dopamine-dependent encoding of reward prediction errors and cognitive capacity, which was modulated by adverse life events. The possible manifestation of malleable biomarkers contributing to variance in cognitive test performance, and thus possibly contributing to the "missing heritability" between estimates from twin studies and variance explained by genetic markers, is still unclear. Here we show in 1475 healthy adolescents from the IMaging and GENetics (IMAGEN) sample that general IQ (gIQ) is associated with (1) polygenic scores for intelligence, (2) epigenetic modification of the DRD2 gene, (3) gray matter density in the striatum, and (4) functional striatal activation elicited by temporally surprising reward-predicting cues. Comparing the relative importance of these measures for the prediction of gIQ in an overlapping subsample, our results demonstrate neurobiological correlates of the malleability of gIQ and point to the equal importance of genetic variance, epigenetic modification of the DRD2 receptor gene, and functional striatal activation, known to influence dopamine neurotransmission. Peripheral epigenetic markers need confirmation in the central nervous system and should be tested in longitudinal settings specifically assessing individual and environmental factors that modify epigenetic structure.
Previous research has shown that electrical muscle activity is able to synchronize between muscles of one subject. The ability of the mechanical muscle oscillations, measured by mechanomyography (MMG), to synchronize has not been described sufficiently. Likewise, the behavior of myofascial oscillations during the muscular interaction of two human subjects has not yet been considered. The purpose of this study is to investigate the myofascial oscillations intra- and interpersonally. For this, the mechanical muscle oscillations of the triceps and the abdominal external oblique muscles were measured by MMG, and the triceps tendon was measured by mechanotendography (MTG), during the isometric interaction of two subjects (n = 20) performed at 80% of the MVC using their arm extensors. The coherence of the MMG/MTG signals was analyzed with the coherence wavelet transform and compared with randomly matched signal pairs. Each signal pairing shows significant coherent behavior. On average, the coherent phases of the n = 485 real pairings extend over 82 ± 39% of the total duration of the isometric interaction. Coherent phases of randomly matched signal pairs take up 21 ± 12% of the total duration (n = 39). The difference between real and randomly matched pairs is significant (U = 113.0, p = 0.000, r = 0.73). The results show that the neuromuscular system seems to be able to synchronize with another neuromuscular system during muscular interaction and to generate coherent behavior of the mechanical muscle oscillations. Potential explanatory approaches are discussed.
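For readers who want to experiment with coherence between two such signals, the following sketch uses SciPy's magnitude-squared (Welch) coherence as a simpler, stationary stand-in for the wavelet coherence applied in the study; the simulated signals are purely illustrative:

```python
# Coherence between two simulated muscle-oscillation signals sharing a common
# 10 Hz component (loosely mimicking MMG tremor-band activity). This uses
# Welch coherence, not the study's wavelet coherence.
import numpy as np
from scipy.signal import coherence

fs = 1000                                  # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)

shared = np.sin(2 * np.pi * 10 * t)        # common 10 Hz oscillation
sig_a = shared + 0.5 * rng.standard_normal(t.size)   # "subject A" signal
sig_b = shared + 0.5 * rng.standard_normal(t.size)   # "subject B" signal

f, Cxy = coherence(sig_a, sig_b, fs=fs, nperseg=1024)
band = (f > 8) & (f < 12)
print(f"mean coherence near 10 Hz: {Cxy[band].mean():.2f}")
```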
In a changing world facing several direct or indirect anthropogenic challenges, freshwater resources are endangered in quantity and quality. An excessive supply of nutrients, for example, can cause disproportionate phytoplankton development and oxygen deficits in large rivers, leading to failure to meet the objectives of the Water Framework Directive (WFD). Such problems can be observed in many European river catchments, including the Elbe basin, and effective measures for improving the water quality status are urgently needed.
In water resources management and protection, modelling tools can help to understand the dominant nutrient processes and to identify the main sources of nutrient pollution in a watershed. They can be effective instruments for impact assessments investigating the effects of changing climate or socio-economic conditions on the status of surface water bodies, and for testing the usefulness of possible protection measures. Due to the high number of interrelated processes, ecohydrological model approaches containing water quality components are more complex than purely hydrological ones, and their setup and calibration require more effort. Such models, including the Soil and Water Integrated Model (SWIM), still need further development and improvement.
Therefore, this cumulative dissertation focuses on two main objectives: 1) the approach-related objectives, aiming at improving and further developing the SWIM model regarding the description of nutrient (nitrogen and phosphorus) processes, and 2) the application-related objectives in meso- to large-scale Elbe river basins, to support adaptive river basin management in view of possible future changes. The dissertation is based on five scientific papers published in international journals and dealing with these research questions.
Several adaptations were implemented in the model code to improve the representation of nutrient processes, including a simple wetland approach, a soil nitrogen cycle extended by ammonium, and a detailed in-stream module simulating algal growth, nutrient transformation processes and oxygen conditions in the river reaches, driven mainly by water temperature and light. Although these new approaches created a highly complex ecohydrological model with a large number of additional calibration parameters and rising uncertainty, the calibration and validation of the enhanced SWIM model in selected subcatchments and the entire Elbe river basin delivered satisfactory to good model results in terms of criteria of fit. Thus, the calibrated and validated model provided a sound basis for the assessment of possible future changes and impacts in climate, land use and management in the Elbe river (sub)basin(s).
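To convey the driver-limitation idea behind such an in-stream module, the following toy sketch advances an algal biomass state with temperature and light limitation factors; the rates and functional forms are illustrative assumptions, not SWIM's actual equations:

```python
# Toy Euler step for driver-limited algal growth (temperature- and light-
# limited), illustrating the kind of process an in-stream module simulates.
# All parameters and limitation forms are illustrative assumptions.
import math

def algal_growth_step(biomass, temp_c, light, dt=1.0,
                      mu_max=2.0, t_opt=20.0, k_light=100.0, resp=0.1):
    """Advance algal biomass (mg/L) by one time step (days)."""
    f_temp = math.exp(-((temp_c - t_opt) / 10.0) ** 2)   # temperature limitation
    f_light = light / (k_light + light)                  # saturating light limitation
    growth = mu_max * f_temp * f_light * biomass
    return biomass + dt * (growth - resp * biomass)      # growth minus respiration

b = 0.5
for day, (temp, par) in enumerate([(12, 80), (18, 150), (22, 220)], start=1):
    b = algal_growth_step(b, temp, par)
    print(f"day {day}: biomass = {b:.2f} mg/L")
```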
The new, enhanced modelling approach improved the applicability of the SWIM model to WFD-related research questions, where the ability to consider biological water quality components (such as phytoplankton) is important. It additionally enhanced its ability to simulate the behaviour of nutrients coming mainly from point sources (e.g., phosphate phosphorus). The scenario results can be used by decision makers and stakeholders to identify and understand future challenges and possible adaptation measures in the Elbe river basin.
Sensitivity to salience
(2018)
Sentence comprehension is optimised by indicating entities as salient through linguistic (i.e., information-structural) or visual means. We compare how salience of a depicted referent due to a linguistic (i.e., topic status) or visual cue (i.e., a virtual person’s gaze shift) modulates sentence comprehension in German. We investigated processing of sentences with varying word order and pronoun resolution by means of self-paced reading and an antecedent choice task, respectively. Our results show that linguistic as well as visual salience cues immediately speeded up reading times of sentences mentioning the salient referent first. In contrast, for pronoun resolution, linguistic and visual cues modulated antecedent choice preferences less congruently. In sum, our findings speak in favour of a significant impact of linguistic and visual salience cues on sentence comprehension, substantiating that salient information delivered via language as well as the visual environment is integrated in the current mental representation of the discourse.
Background: Individuals with aphasia after stroke (IWA) often present with working memory (WM) deficits. Research investigating the relationship between WM and language abilities has led to the promising hypothesis that treatment of WM could lead to improvements in language, a phenomenon known as transfer. Although recent treatment protocols have been successful in improving WM, the evidence to date is scarce, and the extent to which improvements in trained WM tasks transfer to untrained memory tasks, spoken sentence comprehension, and functional communication is still poorly understood.
Aims: We aimed at (a) investigating whether WM can be improved through an adaptive n-back training in IWA (Studies 1–3); (b) testing whether WM training leads to near transfer to unpracticed WM tasks (Studies 1–3), and far transfer to spoken sentence comprehension (Studies 1–3), functional communication (Studies 2–3), and memory in daily life in IWA (Studies 2–3); and (c) evaluating the methodological quality of existing WM treatments in IWA (Study 3). To address these goals, we conducted two empirical studies – a case-control study with Hungarian-speaking IWA (Study 1) and a multiple-baseline study with German-speaking IWA (Study 2) – and a systematic review (Study 3).
Methods: In Studies 1 and 2, participants with chronic, post-stroke aphasia performed an adaptive, computerized n-back training. ‘Adaptivity’ was implemented by adjusting the task’s difficulty level according to the participants’ performance, ensuring that they always practiced at an optimal level of difficulty. To assess the specificity of transfer effects and to better understand the underlying mechanisms of transfer to spoken sentence comprehension, we included an outcome measure testing specific syntactic structures that have been proposed to involve WM processes (e.g., non-canonical structures with varying complexity).
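A minimal sketch of such an adaptivity rule (a generic staircase with illustrative thresholds, not the exact parameters used in these studies) could look like this:

```python
# Generic staircase for an adaptive n-back task: raise the level after good
# block performance, lower it after poor performance. Thresholds illustrative.
def adapt_n_back(n: int, accuracy: float,
                 up: float = 0.90, down: float = 0.70) -> int:
    """Return the n-back level for the next block given block accuracy."""
    if accuracy >= up:
        return n + 1               # performing well: increase difficulty
    if accuracy < down and n > 1:
        return n - 1               # struggling: decrease difficulty
    return n                       # optimal challenge: keep the level

# Example session: block accuracies drive the difficulty up and down.
level = 2
for acc in [0.95, 0.92, 0.65, 0.80]:
    level = adapt_n_back(level, acc)
    print(f"accuracy {acc:.2f} -> next block at {level}-back")
```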
Results: We detected a mixed pattern of training and transfer effects across individuals: five participants out of six improved significantly in the n-back training. Our most important finding is that all six participants improved significantly in spoken sentence comprehension (i.e., far transfer effects). In addition, we also found far transfer to functional communication (in two participants out of three in Study 2) and everyday memory functioning (in all three participants in Study 2), and near transfer to unpracticed n-back tasks (in four participants out of six). Pooled data analysis of Studies 1 and 2 showed a significant negative relationship between initial spoken sentence comprehension and the amount of improvement in this ability, suggesting that the more severe the participants’ spoken sentence comprehension deficit was at the beginning of training, the more they improved after training. Taken together, we detected both near and far transfer effects in our studies, but the effects varied across participants. The systematic review evaluating the methodological quality of existing WM treatments in stroke IWA (Study 3) showed poor internal and external validity across the 17 included studies. Poor internal validity was mainly due to the use of inappropriate designs, lack of randomization of study phases, lack of blinding of participants and/or assessors, and insufficient sampling. Low external validity was mainly related to incomplete information on the setting, lack of use of appropriate analysis or justification for the suitability of the analysis procedure used, and lack of replication across participants and/or behaviors. Results in terms of WM, spoken sentence comprehension, and reading are promising, but further studies with more rigorous methodology and stronger experimental control are needed to determine the beneficial effects of WM intervention.
Conclusions: The results of the empirical studies suggest that WM can be improved with a computerized and adaptive WM training, and that improvements can lead to transfer effects on spoken sentence comprehension and functional communication in some individuals with chronic post-stroke aphasia. The fact that improvements in spoken sentence comprehension were not specific to certain syntactic structures (i.e., non-canonical complex sentences) suggests that WM is not involved in the online, automatic processing of syntactic information (i.e., parsing and interpretation), but plays a more general role in the later stage of spoken sentence comprehension (i.e., post-interpretive comprehension). The individual differences in treatment outcomes call for future research to clarify how far these results are generalizable to the population of IWA. Future studies are needed to identify mechanisms that may generalize to at least a subpopulation of IWA, as well as to investigate the baseline non-linguistic cognitive and language abilities that may play a role in transfer effects and their maintenance. This may require larger yet homogeneous samples.
The simultaneous detection of energy, momentum and temporal information in electron spectroscopy is key to enhancing the detection efficiency in order to broaden the range of scientific applications. Employing a novel 60° wide-angle-acceptance lens system, based on an additional accelerating electron-optical element, leads to a significant enhancement in transmission over the previously employed 30° electron lenses. Due to this performance gain, optimized capabilities for time-resolved electron spectroscopy and other high-transmission applications with pulsed ionizing radiation have been obtained. The energy resolution and transmission have been determined experimentally using BESSY II as a photon source. Four different and complementary lens modes have been characterized.
The aim of this study is to investigate the shallow thermal field differences for two differently aged passive continental margins by analyzing regional variations in geothermal gradient and exploring the controlling factors for these variations. Hence, we analyzed two previously published 3-D conductive and lithospheric-scale thermal models of the Southwest African and the Norwegian passive margins. These 3-D models differentiate various sedimentary, crustal, and mantle units and integrate different geophysical data such as seismic observations and the gravity field. We extracted the temperature–depth distributions in 1 km intervals down to 6 km below the upper thermal boundary condition. The geothermal gradient was then calculated for these intervals between the upper thermal boundary condition and the respective depth levels (1, 2, 3, 4, 5, and 6 km below the upper thermal boundary condition). According to our results, the geothermal gradient decreases with increasing depth and shows varying lateral trends and values for these two different margins. We compare the 3-D geological structural models and the geothermal gradient variations for both thermal models and show how radiogenic heat production, sediment insulating effect, and thermal lithosphere–asthenosphere boundary (LAB) depth influence the shallow thermal field pattern. The results indicate an ongoing process of oceanic mantle cooling at the young Norwegian margin compared with the old SW African passive margin that seems to be thermally equilibrated in the present day.
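Restated as a formula (directly following the abstract's own definition, with T_0 the temperature at the upper thermal boundary condition and T(z) the temperature z kilometres below it):

```latex
% Interval geothermal gradient between the upper thermal boundary condition
% (temperature T_0) and a level z kilometres below it:
\[
  \overline{\nabla T}(z) = \frac{T(z) - T_0}{z},
  \qquad z \in \{1, 2, 3, 4, 5, 6\}\ \mathrm{km}
\]
```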
Polysulfobetaines in aqueous solution show upper critical solution temperature (UCST) behavior. We investigate here a representative of this class of materials, poly(N,N-dimethyl-N-(3-methacrylamidopropyl)ammonio propane sulfonate) (PSPP), with respect to: (i) the dynamics in aqueous solution above the cloud point as a function of NaBr concentration; and (ii) the swelling behavior of thin films in water vapor as a function of the initial film thickness. For PSPP solutions with a concentration of 5 wt.%, the temperature dependence of the intensity autocorrelation functions is measured with dynamic light scattering as a function of molar mass and NaBr concentration (0–8 mM). We found scaling behavior of the scattered intensity and the dynamic correlation length. The resulting spinodal temperatures show a maximum at a certain (small) NaBr concentration, which is similar to the behavior of the cloud points measured previously by turbidimetry. The critical exponent of the susceptibility depends on the NaBr concentration, with a minimum value where the spinodal temperature is maximal and a trend towards the mean-field value of unity with increasing NaBr concentration. In contrast, the critical exponent of the correlation length does not depend on the NaBr concentration and is lower than the value of 0.5 predicted by mean-field theory. For PSPP thin films, the swelling behavior was found to depend on film thickness. A film thickness of about 100 nm turns out to be the optimum thickness needed to obtain fast hydration with H2O.
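For reference, the critical exponents discussed here refer to the standard scaling forms near the spinodal temperature T_s (a textbook statement, not specific to this paper):

```latex
% Critical scaling near the spinodal temperature T_s:
%   susceptibility exponent gamma (mean-field value: gamma = 1)
%   correlation-length exponent nu (mean-field value: nu = 1/2)
\[
  \chi \propto \left| T - T_s \right|^{-\gamma},
  \qquad
  \xi \propto \left| T - T_s \right|^{-\nu}
\]
```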
Ice-wedge polygons are common features of northeastern Siberian lowland periglacial tundra landscapes. To deduce the formation and alteration of ice-wedge polygons in the Kolyma Delta and in the Indigirka Lowland, we studied shallow cores, up to 1.3 m deep, from polygon center and rim locations. The formation of well-developed low-center polygons with elevated rims and wet centers is shown by the beginning of peat accumulation, increased organic matter contents, and changes in vegetation cover from Poaceae-, Alnus-, and Betula-dominated pollen spectra to dominating Cyperaceae and Botryococcus presence, and Carex and Drepanocladus revolvens macrofossils. Thecamoebae data support such a change from wetland to open-water conditions in polygon centers by changes from dominating eurybiontic and sphagnobiontic to hydrobiontic species assemblages. The peat accumulation indicating low-center polygon formation started between 2380 ± 30 and 1676 ± 32 years before present (BP) in the Kolyma Delta. We recorded an opposite change, from open-water to wetland conditions, caused by rim degradation and subsequent high-center polygon formation in the Indigirka Lowland between 2144 ± 33 and 1632 ± 32 years BP. The late Holocene records of polygon landscape development reveal changes in local hydrology and soil moisture.
We describe how inversion symmetry separation of electronic state manifolds in resonant inelastic soft X-ray scattering (RIXS) can be applied to probe excited-state dynamics with compelling selectivity. In a case study of Fe L₃-edge RIXS in the ferricyanide complex Fe(CN)₆³⁻, we demonstrate with multi-configurational restricted active space spectrum simulations how the information content of RIXS spectral fingerprints can be used to unambiguously separate species of different electronic configurations, spin multiplicities, and structures, with possible involvement in the decay dynamics of photo-excited ligand-to-metal charge transfer. Specifically, we propose that this could be applied to confirm or reject the presence of a hitherto elusive transient quartet species. Thus, RIXS offers a particular possibility to settle a recent controversy regarding the decay pathway, and we expect the technique to be similarly applicable in other model systems of photo-induced dynamics.
Manganese (Mn) is an essential nutrient for intracellular activities; it functions as a cofactor for a variety of enzymes, including arginase, glutamine synthetase (GS), pyruvate carboxylase and Mn superoxide dismutase (Mn-SOD). Through these metalloproteins, Mn plays critically important roles in development, digestion, reproduction, antioxidant defense, energy production, immune response and regulation of neuronal activities. Mn deficiency is rare. In contrast, Mn poisoning may be encountered upon overexposure to this metal. Excessive Mn tends to accumulate in the liver, pancreas, bone, kidney and brain, with the latter being the major target of Mn intoxication. Hepatic cirrhosis, polycythemia, hypermanganesemia, dystonia and Parkinsonism-like symptoms have been reported in patients with Mn poisoning. In recent years, Mn has come to the forefront of environmental concerns due to its neurotoxicity. Molecular mechanisms of Mn toxicity include oxidative stress, mitochondrial dysfunction, protein misfolding, endoplasmic reticulum (ER) stress, autophagy dysregulation, apoptosis, and disruption of the homeostasis of other metals. The mechanisms of Mn homeostasis are not fully understood. Here, we address recent progress in Mn absorption, distribution and elimination across different tissues, as well as the intracellular regulation of Mn homeostasis in cells. We conclude with recommendations for future research areas on Mn metabolism.