According to Aikhenvald (2007:5), descriptive linguistics or linguistic
fieldwork “ideally involves observing the language as it is used,
becoming a member of the community, and often being adopted into
the kinship system”. Descriptive linguistics therefore differs from
theoretical linguistics in that the former seeks to describe natural
languages as they are used, whereas the latter, beyond describing, attempts
to explain how and why language phenomena behave in
certain ways. Thus, I will abstract away from any preconceived ideas
on how sentences ought to be in Awing and take the linguist/reader
through focus and interrogative constructions to get a feeling of how
the Awing people interact verbally.
This paper reopens the discussion on focus marking in Akan (Kwa,
Niger-Congo) by examining the semantics of the so-called focus marker
in the language. It is shown that the so-called focus marker expresses
exhaustivity when it occurs in a sentence with narrow focus. The study
employs four standard tests for exhaustivity proposed in the literature
to examine the semantics of Akan focus constructions (Szabolcsi 1981,
1994; É. Kiss 1998; Hartmann and Zimmermann 2007). It is shown that
although a focused entity with the so-called focus marker nà is
interpreted to mean ‘only X and nothing/nobody else,’ this meaning
appears to be pragmatic.
Much has been published about the competencies needed by
students in the 21st century (Ravenscroft et al., 2012). However, equally
important are the competencies needed by educators in the new era
of digital education. We review the key competencies for educators in
light of the new methods of teaching and learning proposed by Massive
Open Online Courses (MOOCs) and their on-campus counterparts,
Small Private Online Courses (SPOCs).
Participants in this workshop will be confronted, by way of examples,
with the considerable inconsistency of global Informatics education at
lower secondary level. More importantly, they are invited to contribute
actively to this issue in the form of short case studies of their countries.
Until now, very few countries have been successful in implementing
Informatics or Computing at primary and lower secondary level. The
spectrum from digital literacy to informatics, particularly as a discipline
in its own right, has not really achieved a breakthrough and seems to
be underrepresented in these age groups. The goal of this workshop
is not only to discuss the anamnesis and diagnosis of this fragmented
field, but also to discuss and suggest viable forms of therapy in the
form of setting educational standards. Making good practices in some
countries visible and comparing successful approaches are rewarding
tasks for this workshop.
Discussing and defining common educational standards on a transcontinental
level for the age group of 14- to 15-year-old students in a readable,
assessable and acceptable form should keep the participants of this
workshop active beyond the limited time at the workshop.
Let’s talk about CS!
(2015)
To communicate about a science is the most important key
competence in education for any science. Without communication we
cannot teach, so teachers should reflect carefully on the language they use in
class. But the language students and teachers use to communicate
about their CS courses is highly heterogeneous, inconsistent and
deeply influenced by tool names. There is a considerable lack of research and
discussion in CS education regarding the terminology and the role of
concepts and tools in our science. We do not have a consistent, agreed-upon
terminology that is helpful for learning our science.
This makes it nearly impossible to do research on CS competencies as
long as we have not agreed on the names we use to describe them. This
workshop intends to provide room for discussion and first ideas
for future research in this field.
ProtoSense
(2015)
The poster and abstract describe the importance of teaching
information security in school. After a short description of information
security and its important aspects, I will show how information security
fits into different guidelines and models for computer science education,
and that it is therefore one of the key competencies. Afterwards I will
give a brief overview of the teaching of information security in Austria.
Current curricular trends require teachers in Baden-
Wuerttemberg (Germany) to integrate Computer Science (CS) into
traditional subjects, such as Physical Science. However, concrete guidelines
are missing. To fill this gap, we outline an approach where a
microcontroller is used to perform and evaluate measurements in the
Physical Science classroom.
Using the open-source Arduino platform, we expect students to acquire
and develop both CS and Physical Science competencies by using a
self-programmed microcontroller. In addition to this combined development
of competencies in Physical Science and CS, the subject matter
will be embedded in suitable contexts and learning environments,
such as weather and climate.
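As an illustrative sketch of the kind of measurement evaluation envisaged here (the TMP36-style sensor, 5 V reference and 10-bit ADC are assumptions for illustration, not details from the paper), students might convert a raw analog reading into a temperature:

```python
def adc_to_celsius(reading, vref=5.0, resolution=1023):
    """Convert a raw 10-bit ADC reading from a TMP36-style analog
    temperature sensor into degrees Celsius.
    TMP36 characteristic: 10 mV per degree C, with a 500 mV offset.
    (Sensor type and wiring are illustrative assumptions.)"""
    voltage = reading * vref / resolution   # ADC counts -> volts
    return (voltage - 0.5) * 100.0          # volts -> degrees Celsius

# A reading of 153 corresponds to about 0.75 V, i.e. roughly 25 deg C
print(round(adc_to_celsius(153), 1))  # 24.8
```

A conversion like this lets students connect the CS side (reading and scaling sensor values) with the Physical Science side (calibration, units, measurement error).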
Think logarithmically!
(2015)
We discuss here a number of algorithmic topics which we
use in our teaching and learning of mathematics and informatics to
illustrate and document the power of the logarithm in designing very efficient
algorithms and computations – logarithmic thinking is one of the
most important key competencies for solving real-world practical problems.
We also demonstrate how to introduce the logarithm independently
of mathematical formalism, using a conceptual model of reducing a
problem's size by at least half. It is quite surprising that the idea which
leads to the logarithm is present in Euclid's algorithm, described almost
2000 years before John Napier invented logarithms.
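The halving model can be sketched in a few lines (an illustrative example, not code from the paper): counting how often a number can be halved gives exactly ⌊log₂ n⌋, and Euclid's algorithm needs only logarithmically many division steps.

```python
import math

def halvings(n):
    """Count how many times n can be halved (integer division)
    before reaching 1; the result equals floor(log2(n))."""
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

def gcd_steps(a, b):
    """Euclid's algorithm with a step counter: the number of
    division steps grows only logarithmically in the inputs."""
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return a, steps

print(halvings(1024))                                 # 10, since 2**10 == 1024
print(halvings(1024) == math.floor(math.log2(1024)))  # True
print(gcd_steps(48, 18))                              # (6, 3): gcd 6 in 3 steps
```

The `halvings` function is the "conceptual model" in executable form: students can discover the logarithm empirically by tabulating the step count for growing inputs, before ever seeing the formal definition.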
A project involving the composition of a number of pieces
of music by public participants revealed levels of engagement with and
mastery of complex music technologies by a number of secondary student
volunteers. This paper reports briefly on some initial findings of
that project and seeks to illuminate an understanding of computational
thinking across the curriculum.
Physical computing covers the design and realization of interactive
objects and installations and allows students to develop concrete,
tangible real-world products that arise from the learners’
imagination. This way, constructionist learning is raised to a level that
enables students to gain haptic experience and thereby concretizes the
virtual. In this paper the defining characteristics of physical computing
are described. Key competences to be gained with physical computing
will be identified.
Mentoring in a Digital World
(2015)
This paper focuses on the results of the evaluation of the first
pilot of an e-mentoring unit designed by the Hands-On ICT consortium,
funded by the EU LLL programme. The overall aim of this two-year
activity is to investigate the value for professional learning of Massive
Online Open Courses (MOOCs) and Community Online Open Courses
(COOCs) in the context of a ‘community of practice’. Three units in the
first pilot covered aspects of using digital technologies to develop creative
thinking skills. The findings in this paper relate to the fourth unit
about e-mentoring, a skill that was important to delivering the course
content in the other three units. Findings about the e-mentoring unit
included: the students’ request for detailed profiles so that participants
can get to know each other; and the need to reconcile the different
interpretations of e-mentoring held by the participants when the course
begins. The evaluators concluded that the major issues were that not all
professional learners would self-organise and network, and that few would
wish to mentor their colleagues voluntarily. Therefore, the e-mentoring
issues will need careful consideration in pilots two and three to identify
how e-mentoring will be organised.
The study reported in this paper involved the employment
of specific in-class exercises using a Personal Response System (PRS).
These exercises were designed with two goals: to enhance students’
capabilities of tracing a given code and of explaining a given code in
natural language with some abstraction. The paper presents evidence
from the actual use of the PRS along with students’ subjective impressions
regarding both the use of the PRS and the special exercises. The
conclusions from the findings are followed with a short discussion on
benefits of PRS-based mental processing exercises for learning programming
and beyond.
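A tracing exercise of the kind described might look like the following hypothetical example (not one of the study's actual PRS items): students first trace the loop, then explain it in one abstracted sentence.

```python
def mystery(values):
    # Students trace this loop line by line, recording how
    # `result` changes on each iteration.
    result = values[0]
    for v in values[1:]:
        if v > result:
            result = v
    return result

print(mystery([3, 7, 2, 9, 4]))  # prints 9
# The abstracted one-sentence explanation students should reach:
# "it returns the largest element of the list."
```

Exactly this pairing, a concrete trace plus a natural-language abstraction, captures the two goals the exercises were designed around.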
In this paper we describe the recent state of our research
project concerning computer science teachers’ knowledge on students’
cognition. We carried out a comprehensive analysis of textbooks, curricula
and other resources that give teachers guidance in formulating assignments.
In comparison to other subjects there are only a few concepts
and strategies taught to prospective computer science teachers in university.
We summarize them and give an overview of our empirical
approach to measuring this knowledge.
How does the Implementation of a Literacy Learning Tool Kit influence Literacy Skill Acquisition?
(2015)
This study aimed to follow how teachers translate skills
into results while using the ABRA literacy software. This was done in
the second part of the pilot study whose aim was to provide equity to
control group teachers and students by exposing them to the ABRACADABRA
treatment after the end of phase 1. This opportunity was
used to follow the phase 1 teachers to see how the skills learned were
being transformed into results. A standard three-day initial training and
planning session on how to use ABRA to teach literacy was held at the
beginning of each phase for ABRA teachers (phase 1 experimental and
phase 2 delayed ABRA). Teachers were provided with teaching materials
including a tentative ABRA curriculum developed to align with the
Kenyan English Language requirements for year 1 and 3 students. Results
showed that although there was no significant difference between
the groups in vocabulary-related subscales which include word reading
and meaning as well as sentence comprehension, students in ABRACADABRA
classes improved their scores at a significantly higher rate
than students in control classes in comprehension related scores. An
average student in the ABRACADABRA group improved by 12 and
16 percentile points respectively compared to their counterparts in the
control group.
The Technology Proficiency Self-Assessment (TPSA) questionnaire
has been used for 15 years in the USA and other nations as a
self-efficacy measure for proficiencies fundamental to effective technology
integration in the classroom learning environment. Internal consistency
reliabilities for each of the five-item scales have typically ranged
from .73 to .88 for preservice or inservice technology-using teachers.
Due to changing technologies used in education, researchers sought to
renovate partially obsolete items and extend self-efficacy assessment to
new areas, such as social media and mobile learning. Analysis of 2014
data gathered with a new, 34-item version of the TPSA indicates that the
four established areas of email, World Wide Web (WWW), integrated
applications, and teaching with technology continue to form consistent
scales with reliabilities ranging from .81 to .93, while the 14 new items
gathered to represent emerging technologies and media separate into
two scales, each with internal consistency reliabilities greater than .9.
The renovated TPSA is deemed to be worthy of continued use in the
teaching with technology context.
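The reported reliabilities are internal consistency coefficients (Cronbach's alpha). As a rough sketch of how such a coefficient is computed from item-level scores (the data layout here is an assumption for illustration, not the actual TPSA data):

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item scale.
    items[i][j] is respondent j's score on item i.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))"""
    k = len(items)                 # number of items on the scale
    n = len(items[0])              # number of respondents
    sum_item_vars = sum(variance(item) for item in items)
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    return k / (k - 1) * (1 - sum_item_vars / variance(totals))

# Two perfectly correlated items yield maximal internal consistency:
print(cronbach_alpha([[1, 2, 3, 4, 5], [1, 2, 3, 4, 5]]))  # 1.0
```

Alpha approaches 1 when items covary strongly, so the .81 to .93 range reported above indicates highly consistent scales.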
Computational Thinking
(2015)
Digital technology has radically changed the way people
work in industry, finance, services, media and commerce. Informatics
has contributed to the scientific and technological development of our
society in general and to the digital revolution in particular. Computational
thinking is the term indicating the key ideas of this discipline that
might be included in the key competencies underlying the curriculum
of compulsory education. The educational potential of informatics has
a history dating back to the sixties. In this article, we briefly revisit this
history looking for lessons learned. In particular, we focus on experiences
of teaching and learning programming. However, computational
thinking is more than coding. It is a way of thinking and practicing interactive
dynamic modeling with computers. We advocate that learners
can practice computational thinking in playful contexts where they can
develop personal projects, for example building videogames and/or robots,
and share and discuss their constructions with others. In our view, this
approach allows an integration of computational thinking in the K-12
curriculum across disciplines.
How Things Work
(2015)
Recognizing and defining functionality is a key competence
adopted in all kinds of programming projects. This study investigates
how far students without specific informatics training are able to identify
and verbalize functions and parameters. It presents observations
from classroom activities on functional modelling in high school chemistry
lessons with altogether 154 students. Finally, it discusses the potential
of functional modelling to improve the comprehension of scientific
content.
This paper originated from discussions about the need for
important changes in the curriculum for Computing, including two focus
group meetings at IFIP conferences over the last two years. The
paper examines how recent developments in curriculum, together with
insights from curriculum thinking in other subject areas, especially mathematics
and science, can inform curriculum design for Computing.
The analysis presented in the paper provides insights into the complexity
of curriculum design as well as identifying important constraints and
considerations for the ongoing development of a vision and framework
for a Computing curriculum.
This article presents a discussion of the key competencies
in informatics and ICT viewed from a philosophical foundation presented
by Martha Nussbaum, known as the ‘ten central capabilities’.
Firstly, the outline of ‘The Capability Approach’, which has been presented
by Amartya Sen and Nussbaum as a theoretical framework for
assessing the state of social welfare, will be explained. Secondly, the
body of Nussbaum’s ten central capabilities and the reason for applying
it as the basis of the discussion will be shown. Thirdly, the relationship
between the concepts of ‘capability’ and ‘competency’ will be
discussed. After that, the author’s view of the key competencies
in informatics and ICT, derived from the examination of Nussbaum’s ten
capabilities, will be presented.
The objectives of this study were to examine (a) the effect
of dynamic assessment (DA) in a 3D Immersive Virtual Reality
(IVR) environment as compared with computerized 2D and noncomputerized
(NC) situations on cognitive modifiability, and (b) the
transfer effects of these conditions on more difficult problem solving
administered two weeks later in a non-computerized environment. A
sample of 117 children aged 6:6-9:0 years was randomly assigned
into three experimental groups of DA conditions: 3D, 2D, and NC, and
one control group (C). All groups received the pre- and post-teaching
Analogies subtest of the Cognitive Modifiability Battery (CMB-AN).
The experimental groups received a teaching phase in conditions similar
to the pre-and post-teaching phases. The findings showed that cognitive
modifiability, in a 3D IVR, was distinctively higher than in the two
other experimental groups (2D computer group and NC group). It was
also found that the 3D group showed significantly higher performance
in transfer problems than the 2D and NC groups.
BugHunt
(2015)
Competencies related to operating systems and computer
security are usually taught systematically. In this paper we present
a different approach, in which students have to remove virus-like
behaviour from their respective computers, induced there by
software developed for this purpose. They have to develop appropriate
problem-solving strategies and thereby explore essential elements of
the operating system. The approach was piloted in
two computer science courses at a regional general upper secondary
school and generated great motivation and interest among the participating
students.
In the project MoKoM, which was funded by the German
Research Foundation (DFG) from 2008 to 2012, a test instrument
measuring students’ competences in computer science was developed.
This paper presents the results of an expert rating of the levels of
students’ competences, carried out for the items of the instrument.
First, we describe the difficulty-relevant features that were
used for the evaluation. These were derived from computer science,
psychological and didactic findings and resources. The potential and
desiderata of this research method are then discussed. Finally,
we present our conclusions on the results and give an outlook on
further steps.
The growing impact of globalisation and the development of
a ‘knowledge society’ have led many to argue that 21st century skills are
essential for life in 21st century society and that ICT is central
to their development. This paper describes how 21st century skills, in
particular digital literacy, critical thinking, creativity, communication
and collaboration skills, have been conceptualised and embedded in the
resources developed for teachers in iTEC, a four-year, European project.
The effectiveness of this approach is considered in light of the data
collected through the evaluation of the pilots, which considers both the
potential benefits of using technology to support the development of
21st century skills and the challenges of doing so. Finally, the paper
discusses the learning support systems required in order to transform
pedagogies and embed 21st century skills. It is argued that support is
required in standards and assessment; curriculum and instruction; professional
development; and learning environments.
This paper discusses results from a small-scale research
study, together with some recently published research into student
perceptions of ICT for learning in schools, to consider relevant skills
that do not currently appear to be taught. The paper concludes by
raising three issues relating to learning with and through ICT that need
to be addressed in school curricula and classroom teaching.
The Student Learning Ecology
(2015)
Educational research on social media has shown that
students use it for socialisation, personal communication, and informal
learning. Recent studies have argued that students to some degree use
social media to carry out formal schoolwork. This article gives an
explorative account of how a small sample of Norwegian high school
students use social media to self-organise formal schoolwork. This
user pattern can be called a “student learning ecology”, which is a
user perspective on how participating students gain access to learning
resources.
Teaching Data Management
(2015)
Data management is a central topic in computer science as
well as in computer science education. In recent years, this topic has been
changing tremendously, as its impact on daily life becomes increasingly
visible. Nowadays, everyone not only needs to manage data of various
kinds, but also continuously generates large amounts of data. In
addition, Big Data and data analysis are intensively discussed in public
dialogue because of their influence on society. To understand
such discussions and to be able to participate in them, fundamental
knowledge of data management is necessary. In particular, being aware
of the threats accompanying the ability to analyze large amounts of
data in nearly real time becomes increasingly important. This raises the
question of which key competencies are necessary for daily dealings with
data and data management.
In this paper, we will first point out the importance of data management
and of Big Data in daily life. On this basis, we will analyze which
key competencies concerning data management everyone needs in order
to handle data properly in daily life. Afterwards, we will
discuss the impact of these changes in data management on computer
science education and in particular database education.
Social networks are currently at the forefront of tools that
lend themselves to Personal Learning Environments (PLEs). This study aimed to
observe how students perceived PLEs, what they believed were the
integral components of social presence when using Facebook as part
of a PLE, and to describe students’ preferences for types of interactions
when using Facebook as part of their PLE. This study used mixed
methods to analyze the perceptions of graduate and undergraduate
students on the use of social networks, more specifically Facebook as a
learning tool. Fifty surveys were returned representing a 65 % response
rate. Survey questions included both closed and open-ended questions.
Findings suggested that even though students rated themselves as
having the requisite technology skills, and 94 % of students used
Facebook primarily for social purposes, they were hesitant to transfer these
skills to academic use because of privacy concerns, the belief that
other platforms could fulfil the same purpose, and a failure to see the
validity of using Facebook to establish social presence. What lies
at odds with these beliefs is that when asked to identify strategies in
Facebook that enabled social presence to occur in academic work, the
majority of students identified strategies in five categories that led to
the establishment of social presence on Facebook during their coursework.
The paper discusses the issue of supporting informatics
(computer science) education through competitions for lower and
upper secondary school students (8–19 years old). Competitions play
an important role for learners as a source of inspiration, innovation,
and attraction. Running contests in informatics for school students
for many years, we have noticed that the students consider the contest
experience very engaging and exciting as well as a learning experience.
A contest is an excellent instrument to involve students in problem
solving activities. An overview of the infrastructure and development
of an informatics contest from the international level to the national one
(the Bebras contest on informatics and computer fluency, which originated
in Lithuania) is presented. The performance of Bebras contests in 23
countries during the last 10 years showed an unexpected and unusually
high acceptance by school students and teachers. Many thousands of
students participated and gained valuable input in addition to their regular
informatics lectures at school. In the paper, the main attention is paid
to the developed tasks and analysis of students’ task solving results in
Lithuania.
The paper presents two approaches to the development of
a Computer Science Competence Model for the needs of curriculum
development and evaluation in Higher Education. A normative-theoretical
approach is based on the AKT and ACM/IEEE curriculum
and will be used within the recommendations of the German
Informatics Society (GI) for the design of CS curricula. An empirically
oriented approach refines the categories of the first one with regard to
specific subject areas by conducting content analysis on CS curricula of
important universities from several countries. The refined model will be
used for the needs of students’ e-assessment and subsequent affirmative
action of the CS departments.
Regardless of what is intended by government curriculum
specifications and advised by educational experts, the competencies
taught and learned in and out of classrooms can vary considerably.
In this paper, we discuss in particular how we can investigate the
perceptions that individual teachers have of competencies in ICT,
and how these and other factors may influence students’ learning. We
report case study research which identifies contradictions within the
teaching of ICT competencies as an activity system, highlighting issues
concerning the object of the curriculum, the roles of the participants and
the school cultures. In a particular case, contradictions in the learning
objectives between higher order skills and the use of application tools
have been resolved by a change in the teacher’s perceptions which
have not led to changes in other aspects of the activity system. We look
forward to further investigation of the effects of these contradictions in
other case studies and on forthcoming curriculum change.
As a result of the Bologna reform of educational systems in
Europe, the outcome orientation of learning processes, competence-oriented
descriptions of the curricula and competence-oriented assessment
procedures became standard also in Computer Science Education
(CSE). The following keynote addresses important issues of shaping
a CSE competence model especially in the area of informatics system
comprehension and object-oriented modelling. Objectives and research
methodology of the project MoKoM (Modelling and Measurement
of Competences in CSE) are explained. Firstly, the CSE competence
model was derived based on theoretical concepts and then secondly the
model was empirically examined and refined using expert interviews.
Furthermore, the paper depicts the development and examination of
a competence measurement instrument, which was derived from the
competence model. To this end, the instrument was applied to a large
sample of students at the upper secondary (Gymnasium) level. Subsequently,
efforts to develop a competence level model, based on the retrieved empirical
results and on expert ratings, are presented. Finally, further demands
on research on competence modelling in CSE will be outlined.
Computational thinking is a fundamental skill set that is learned
by studying Informatics and ICT. We argue that its core ideas can
be introduced in an inspiring and integrated way to both teachers and
students using fun and contextually rich cs4fn ‘Computer Science for
Fun’ stories combined with ‘unplugged’ activities including games and
magic tricks. We also argue that understanding people is an important
part of computational thinking. Computational thinking can be fun for
everyone when taught in kinaesthetic ways away from technology.
Introduction
We investigated blood glucose (BG) and hormone response to aerobic high-intensity interval exercise (HIIE) and moderate continuous exercise (CON) matched for mean load and duration in type 1 diabetes mellitus (T1DM).
Material and Methods
Seven trained male subjects with T1DM performed a maximal incremental exercise test, and HIIE and CON at 3 different mean intensities: below (A) and above (B) the first lactate turn point, and below the second lactate turn point (C), on a cycle ergometer. Subjects were adjusted to the ultra-long-acting insulin degludec (Tresiba/Novo Nordisk, Denmark). Before exercise, standardized meals were administered, and the short-acting insulin dose was reduced by 25% (A), 50% (B), or 75% (C) depending on mean exercise intensity. During exercise, BG, adrenaline, noradrenaline, dopamine, cortisol, glucagon, insulin-like growth factor-1, blood lactate, heart rate, and gas exchange variables were measured. For 24 h after exercise, interstitial glucose was measured by a continuous glucose monitoring system.
Results
BG decrease during HIIE was significantly smaller for B (p = 0.024) and tended to be smaller for A and C compared to CON. No differences were found for post-exercise interstitial glucose, acute hormone response, and carbohydrate utilization between HIIE and CON for A, B, and C. In HIIE, blood lactate for A (p = 0.006) and B (p = 0.004) and respiratory exchange ratio for A (p = 0.003) and B (p = 0.003) were significantly higher compared to CON but not for C.
Conclusion
Hypoglycemia did not occur during or after HIIE and CON when using ultra-long-acting insulin and applying our methodological approach for exercise prescription. HIIE led to a smaller BG decrease than CON, even though both exercise modes were matched for mean load and duration and markedly higher peak workloads were applied in HIIE. Therefore, HIIE and CON can be performed safely in T1DM.
In living cells, a plethora of processes always take place at the same time. Their precise regulation is the basis of cellular function, since small failures can lead to severe dysfunctions. For a comprehensive understanding of intracellular homeostasis, simultaneous multiparameter detection is a versatile tool for revealing the spatial and temporal interactions of intracellular parameters. Here, a recently developed time-correlated single-photon counting (TCSPC) board was evaluated for simultaneous fluorescence and phosphorescence lifetime imaging microscopy (FLIM/PLIM). To this end, the metabolic activity in insect salivary glands was investigated by recording the ns-decaying intrinsic cellular fluorescence, mainly related to oxidized flavin adenine dinucleotide (FAD), and the μs-decaying phosphorescence of the oxygen-sensitive ruthenium complex Kr341. Upon dopamine stimulation, the metabolic activity of the salivary glands increased, causing higher pericellular oxygen consumption and a resulting increase in the Kr341 phosphorescence decay time. Furthermore, the FAD fluorescence decay time decreased, presumably due to protein binding, which quenches the FAD fluorescence. Through application of the metabolic drugs antimycin and FCCP, the recorded signals could be assigned to a mitochondrial origin. The dopamine-induced changes could be observed in sequential FLIM and PLIM recordings, as well as in simultaneous FLIM/PLIM recordings using an intermediate TCSPC timing resolution.
The distinction of enantiomers is a key aspect of chemical analysis. In mass spectrometry, the distinction of enantiomers has been achieved by ionizing the sample with circularly polarized laser pulses and comparing the ion yields for light of opposite handedness. While resonant excitation conditions are expected to be most efficient, they are not required for the detection of a circular dichroism (CD) in the ion yield. However, predicting the size and sign of the circular dichroism becomes challenging if non-resonant multiphoton excitations are used to ionize the sample. Employing femtosecond laser pulses to drive electron wavepacket dynamics based on ab initio calculations, we attempt to reveal the underlying mechanisms that determine the CD under non-resonant excitation conditions. Simulations were performed for (R)-1,2-propylene oxide, using time-dependent configuration interaction singles with perturbative doubles (TD-CIS(D)) and the aug-cc-pVTZ basis set. Interactions between the electric field and the electric dipole and quadrupole, as well as between the magnetic field and the magnetic dipole, were explicitly accounted for. The ion yield was determined by treating states above the ionization potential either as stationary or as non-stationary with energy-dependent lifetimes based on an established heuristic approach. The observed population dynamics do not allow for a simple interpretation because of highly non-linear interactions. Still, the various transition pathways are governed by resonant enantiospecific n-photon excitations with preferably high transition dipole moments, which eventually dominate the CD in the ionized population.
Exposure to organic mercury compounds primarily promotes neurological effects. Although methylmercury is recognized as a potent neurotoxicant, its transfer into the central nervous system (CNS) has not been fully elucidated. While methylmercury and thiomersal pass the blood–brain barrier, limited data are available regarding the second brain-regulating interface, the blood–cerebrospinal fluid (CSF) barrier. This study was designed to investigate, for the first time, the effects of organic as well as inorganic mercury compounds on, and their transfer across, a porcine in vitro model of the blood–CSF barrier. The barrier system is significantly more sensitive towards organic Hg compounds than towards inorganic ones with regard to the endpoints cytotoxicity and barrier integrity. Whereas transfer rates from the blood side to the CSF side are low, our results strongly indicate an active transport of the organic mercury compounds out of the CSF. These results are the first to demonstrate an efflux of organic mercury compounds from the CNS and provide a completely new approach to understanding the compound-specific transport of mercury.
Arsenic-containing fatty acids are a group of fat-soluble arsenic species (arsenolipids) which are present in marine fish and other seafood. Recently, it has been shown that arsenic-containing hydrocarbons, another group of arsenolipids, exert toxicity at concentrations comparable to arsenite, although the toxic modes of action differ. Hence, a risk assessment of arsenolipids is urgently needed. In this study the cellular toxicity of a saturated (AsFA 362) and an unsaturated (AsFA 388) arsenic-containing fatty acid and three of their proposed metabolites (DMAV, DMAPr and thio-DMAPr) was investigated in human liver cells (HepG2). Even though both arsenic-containing fatty acids were less toxic than arsenic-containing hydrocarbons and arsenite, significant effects were observable at μM concentrations. DMAV caused effects in a similar concentration range and was metabolised to its highly toxic thio analogue thio-DMAV in HepG2 cells. In contrast, DMAPr and thio-DMAPr did not exert any cytotoxicity. In summary, our data indicate that risks to human health related to the presence of arsenic-containing fatty acids in marine food cannot be excluded. This stresses the need for a full in vitro and in vivo toxicological characterisation of these arsenolipids.
Fully renewable pyridinium ionic liquids were synthesised via the hydrothermal decarboxylation of pyridinium zwitterions derived from furfural and amino acids in flow. The functionality of the resulting ionic liquid (IL) can be tuned by choice of different amino acids as well as different natural carboxylic acids as the counterions. A representative member of this new class of ionic liquids was successfully used for the synthesis of ionogels and as a solvent for the Heck coupling.
Double cyclization of short linear peptides obtained by solid phase peptide synthesis was used to prepare bridged bicyclic peptides (BBPs) corresponding to the topology of bridged bicyclic alkanes such as norbornane. Diastereomeric norbornapeptides were investigated by 1H-NMR, X-ray crystallography and CD spectroscopy and found to represent rigid globular scaffolds stabilized by intramolecular backbone hydrogen bonds with scaffold geometries determined by the chirality of amino acid residues and sharing structural features of β-turns and α-helices. Proteome profiling by capture compound mass spectrometry (CCMS) led to the discovery of the norbornapeptide 27c binding selectively to calmodulin as an example of a BBP protein binder. This and other BBPs showed high stability towards proteolytic degradation in serum.
Nonlinear optical response of photochromic azobenzene-functionalized self-assembled monolayers
(2015)
The combination of photochromic and nonlinear optical (NLO) properties of azobenzene-functionalized self-assembled monolayers (SAMs) constitutes an intriguing step towards novel photonic and optoelectronic devices. By utilizing the second-order NLO process of second harmonic generation (SHG), supported by density-functional theory and correlated wave function method calculations, we demonstrate that the photochromic interface provides the necessary prerequisites en route towards possible future technical applications: we find a high NLO contrast on the order of 16% between the switching states. These are furthermore accessible reversibly and with high efficiencies in terms of cross sections on the order of 10−18 cm2 for both photoisomerization reactions, i.e., drivable by means of low-power LED light sources. Finally, both photostationary states (PSSs) are thermally stable at ambient conditions.
In this work we present a CMOS high-frequency direct immunosensor operating at 6 GHz (C-band) for the label-free determination of creatinine. The sensor is fabricated in a standard 0.13 μm SiGe:C BiCMOS process. We also demonstrate the ability to immobilize creatinine molecules on a Si3N4 passivation layer of the standard BiCMOS/CMOS process, thereby avoiding any need for cumbersome post-processing of the fabricated sensor chip. The sensor is based on capacitive detection of the amount of non-creatinine-bound antibodies binding to an immobilized creatinine layer on the passivated sensor. The chip-bound antibody amount in turn corresponds indirectly to the creatinine concentration used in the incubation phase. The determination of creatinine in the concentration range of 0.88–880 μM is successfully demonstrated in this work. A sensitivity of 35 MHz per 10-fold increase in creatinine concentration (during incubation) at the centre frequency of 6 GHz is achieved by the immunosensor. The results are compared with a standard optical measurement technique; the dynamic range and sensitivity are of the order of the established optical indication technique. The C-band immunosensor chip, occupying an area of 0.3 mm2, considerably reduces the sensing area and therefore requires a sample volume as low as 2 μl. The small analyte sample volume and label-free approach also reduce the experimental costs, in addition to the low fabrication costs offered by the batch fabrication of the CMOS/BiCMOS process.
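As an illustration of the stated log-linear response, the mapping between resonance shift and creatinine concentration can be sketched as follows. The function names and the choice of the lowest calibrated concentration (0.88 μM) as reference point are illustrative assumptions, not part of the published calibration routine.

```python
import math

# Stated sensitivity: 35 MHz shift per 10-fold increase in creatinine
# concentration, over the calibrated range 0.88-880 uM.
SENSITIVITY_HZ_PER_DECADE = 35e6

def frequency_shift(concentration_um, reference_um=0.88):
    """Expected resonance shift (Hz) relative to the lowest calibrated
    concentration, assuming a log-linear response."""
    return SENSITIVITY_HZ_PER_DECADE * math.log10(concentration_um / reference_um)

def concentration_from_shift(shift_hz, reference_um=0.88):
    """Invert the log-linear calibration to recover a concentration (uM)."""
    return reference_um * 10 ** (shift_hz / SENSITIVITY_HZ_PER_DECADE)

# The full 3-decade range (0.88-880 uM) corresponds to a ~105 MHz total shift.
full_range_shift = frequency_shift(880.0)
```

Under this sketch, the entire calibrated range spans roughly three decades and hence about 105 MHz of shift at the 6 GHz centre frequency.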
Paper-based microfluidics provide an inexpensive, easy-to-use technology for point-of-care diagnostics in developing countries. Here, we combine paper-based microfluidic devices with responsive hydrogels to add an entirely new class of functions to these versatile low-cost fluidic systems. The hydrogels serve as fluid reservoirs. In response to an external stimulus, e.g. an increase in temperature, the hydrogels collapse and release fluid into the structured paper substrate. In this way, chemicals that are either stored on the paper substrate or inside the hydrogel pads can be dissolved, premixed, and brought to reaction to fulfill specific analytic tasks. We demonstrate that multi-step sequences of chemical reactions can be implemented in a paper-based system and operated without the need for external precision pumps. We exemplify this technology by integrating an antibody-based E. coli test on a small and easy-to-use paper device.
Temperature-memory polymers remember the temperature at which they were recently deformed, enabled by broad thermal transitions. In this study, we explored a series of crosslinked poly[ethylene-co-(vinyl acetate)] networks (cPEVAs) comprising crystallizable polyethylene (PE) controlling units, which exhibit a pronounced temperature-memory effect (TME) between 16 and 99 °C related to a broad melting transition (∼100 °C). The nanostructural changes in such cPEVAs during programming and activation of the TME were analyzed via in situ X-ray scattering and specific annealing experiments. Different contributions to the mechanism of memorizing high or low deformation temperatures (Tdeform) were observed in cPEVA, which can be associated with the average PE crystal sizes. At high deformation temperatures (>50 °C), newly formed PE crystals, which are established during cooling when fixing the temporary shape, dominated the TME mechanism. In contrast, at low Tdeform (<50 °C), corresponding to a cold-drawing scenario, the deformation led preferably to a disruption of existing large crystals into smaller ones, which then fix the temporary shape upon cooling. The observed mechanism of memorizing a deformation temperature might enable the prediction of the TME behavior and the knowledge-based design of other TMPs with crystallizable controlling units.
Co-doping of the MOF 3∞[Zn(2-methylimidazolate-4-amide-5-imidate)] (IFP-1 = Imidazolate Framework Potsdam-1) with luminescent Eu3+ and Tb3+ ions presents an approach to utilize the porosity of the MOF for the intercalation of luminescence centers and for tuning the chromaticity to the emission of white light of the quality of a three-color emitter. Organic-based fluorescence processes of the MOF backbone as well as metal-based luminescence of the dopants are combined into one homogeneous single-source emitter while retaining the MOF's porosity. The lanthanide ions Eu3+ and Tb3+ were doped in situ into IFP-1 upon formation of the MOF by intercalation into the micropores of the growing framework without a structure-directing effect. Furthermore, the color point is temperature sensitive, so that a cold white light with a higher blue content is observed at 77 K and a warmer white light at room temperature (RT) due to the reduction of the organic emission at higher temperatures. The study further illustrates the influence of the amount of luminescent ions on the porosity and sorption properties of the MOF and proves the intercalation of luminescence centers into the pore system by low-temperature site-selective photoluminescence spectroscopy, SEM and EDX. It also covers an investigation of the limit of homogeneous uptake within the MOF pores and the formation of secondary phases of lanthanide formates on the surface of the MOF. Crossing the border from homogeneous co-doping to a two-phase composite system can be beneficially used to adjust the character and warmth of the white light. This study also describes two-color emitters of the formula Ln@IFP-1a–d (Ln: Eu, Tb) by doping with just one lanthanide, Eu3+ or Tb3+.
The simulation of the optical properties of supramolecular aggregates requires the development of methods that are able to treat a large number of coupled chromophores interacting with the environment. Since it is currently not possible to treat large systems by quantum chemistry, the Frenkel exciton model is a valuable alternative. In this work we show how the Frenkel exciton model can be extended to explain the excitonic spectra of a specific double-walled tubular dye aggregate, explicitly taking into account dispersive energy shifts of the ground and excited states due to van der Waals interactions with all surrounding molecules. The experimentally observed splitting is well explained by the site-dependent energy shift of molecules placed at the inner or outer side of the double-walled tube, respectively. We therefore conclude that inclusion of the site-dependent dispersive effect in the theoretical description of the optical properties of nanoscale dye aggregates is mandatory.
cis-Diamminedichloroplatinum(II) (Cisplatin) is one of the most important and frequently used cytostatic drugs for the treatment of various solid tumors. Herein, a laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) method incorporating a fast and simple sample preparation protocol was developed for the elemental mapping of Cisplatin in the model organism Caenorhabditis elegans (C. elegans). The method allows imaging of the spatially resolved elemental distribution of platinum in the whole organism with respect to the anatomic structure in L4 stage worms at a lateral resolution of 5 μm. In addition, a dose- and time-dependent Cisplatin uptake was corroborated quantitatively by a total reflection X-ray fluorescence spectroscopy (TXRF) method, and the elemental mapping indicated that Cisplatin is located in the intestine and in the head of the worms. Better understanding of the distribution of Cisplatin in this well-established model organism will be instrumental in deciphering Cisplatin toxicity and pharmacokinetics. Since the cytostatic effect of Cisplatin is based on binding the DNA by forming intra- and interstrand crosslinks, the response of poly(ADP-ribose) metabolism enzyme 1 (pme-1) deletion mutants to Cisplatin was also examined. Loss of pme-1, which is the C. elegans ortholog of human poly(ADP-ribose) polymerase 1 (PARP-1), led to a disturbed DNA damage response. With respect to survival and brood size, pme-1 deletion mutants were more sensitive to Cisplatin than wildtype worms, while Cisplatin uptake was indistinguishable.
Patterned differentiation of distinct cell types is essential for the development of multicellular organisms. The root epidermis of Arabidopsis thaliana is composed of alternating files of root hair and non-hair cells and represents a model system for studying the control of cell-fate acquisition. Epidermal cell fate is regulated by a network of genes that translate positional information from the underlying cortical cell layer into a specific pattern of differentiated cells. While much is known about the genes of this network, new players continue to be discovered. Here we show that the SABRE (SAB) gene, known to mediate microtubule organization, anisotropic cell growth and planar polarity, has an effect on root epidermal hair cell patterning. Loss of SAB function results in ectopic root hair formation and destabilizes the expression of cell fate and differentiation markers in the root epidermis, including expression of the WEREWOLF (WER) and GLABRA2 (GL2) genes. Double mutant analysis reveals that wer and caprice (cpc) mutants, defective in core components of the epidermal patterning pathway, genetically interact with sab. This suggests that SAB may act on epidermal patterning upstream of WER and CPC. Hence, we provide evidence for a role of SAB in root epidermal patterning by affecting cell-fate stabilization. Our work opens the door for future studies addressing SAB-dependent functions of the cytoskeleton during root epidermal patterning.
The coordination of cell polarity within the plane of the tissue layer (planar polarity) is crucial for the development of diverse multicellular organisms. Small Rac/Rho-family GTPases and the actin cytoskeleton contribute to planar polarity formation at sites of polarity establishment in animals and plants. Yet, upstream pathways coordinating planar polarity differ strikingly between kingdoms. In the root of Arabidopsis thaliana, a concentration gradient of the phytohormone auxin coordinates polar recruitment of Rho-of-plant (ROP) to sites of polar epidermal hair initiation. However, little is known about cytoskeletal components and interactions that contribute to this planar polarity or about their relation to the patterning machinery. Here, we show that ACTIN7 (ACT7) represents a main actin isoform required for planar polarity of root hair positioning, interacting with the negative modulator ACTIN-INTERACTING PROTEIN1-2 (AIP1-2). ACT7, AIP1-2 and their genetic interaction are required for coordinated planar polarity of ROP downstream of ethylene signalling. Strikingly, AIP1-2 displays hair cell file-enriched expression, restricted by WEREWOLF (WER)-dependent patterning and modified by ethylene and auxin action. Hence, our findings reveal AIP1-2, expressed under control of the WER-dependent patterning machinery and the ethylene signalling pathway, as a modulator of actin-mediated planar polarity.
Recent experiments show that transcription factors (TFs) indeed use the facilitated diffusion mechanism to locate their target sequences on DNA in living bacterial cells: TFs alternate between sliding motion along the DNA and relocation events through the cytoplasm. Using simulations and theoretical analysis, we study the TF sliding motion for a large section of the DNA sequence of a common E. coli strain, based on the two-state TF model with a fast-sliding search state and a recognition state enabling target detection. Both the probability of detecting the target before dissociating from the DNA and the TF search times depend heavily on whether or not an auxiliary operator (an accessible sequence similar to the main operator) is present in the genome section. Importantly, within our model we vary the extent to which the interconversion rates between the search and recognition states depend on the underlying nucleotide sequence. A moderate dependence maximises the capability to distinguish between the main operator and similar sequences. Moreover, these auxiliary operators serve as starting points for DNA looping with the main operator, yielding a spectrum of target detection times spanning several orders of magnitude. Auxiliary operators are thus shown to act as funnels facilitating target detection by TFs.
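The two-state search described above can be caricatured in a minimal Monte Carlo sketch: the TF slides on a circular lattice in the fast search state and can only detect the target while in the recognition state. The lattice size, rates and target position are arbitrary illustrative values, not the parameters of the actual study.

```python
import random

def tf_search(seq_len=1000, target=50, p_switch=0.1, p_dissociate=0.001,
              max_steps=200_000, rng=None):
    """Minimal two-state sliding search: the TF random-walks along a 1D
    circular lattice, stochastically interconverting between a fast
    'search' state and a 'recognition' state; the target is detected only
    when the TF sits on it in the recognition state.  Returns True on
    detection, False if the TF dissociates first."""
    rng = rng or random.Random(0)
    pos, recognizing = 0, False
    for _ in range(max_steps):
        if recognizing and pos == target:
            return True
        if rng.random() < p_dissociate:
            return False
        if rng.random() < p_switch:          # interconversion of the two states
            recognizing = not recognizing
        if not recognizing:                  # sliding only in the search state
            pos = (pos + rng.choice((-1, 1))) % seq_len
    return False

# Detection probability per binding event, estimated over 200 trajectories.
detection_prob = sum(tf_search(rng=random.Random(i)) for i in range(200)) / 200
```

Varying `p_switch` in this toy model mimics the sequence dependence of the interconversion rates studied in the paper.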
Children’s poor performance on object relative clauses has been explained in terms of intervention locality. This approach predicts that object relatives with a full DP head and an embedded pronominal subject are easier than object relatives in which both the head noun and the embedded subject are full DPs. This prediction is shared by other accounts formulated to explain processing mechanisms. We conducted a visual-world study designed to test the off-line comprehension and on-line processing of object relatives in German-speaking 5-year-olds. Children were tested on three types of object relatives, all having a full DP head noun and differing with respect to the type of nominal phrase that appeared in the embedded subject position: another full DP, a 1st-person pronoun or a 3rd-person pronoun. Grammatical skills and memory capacity were also assessed in order to see whether and how they affect children’s performance. Object relatives with a 1st-person pronoun were processed most accurately, independently of children’s language and memory skills. Performance on object relatives with two full DPs was overall more accurate than on object relatives with a 3rd-person pronoun. In the former condition, children with stronger grammatical skills processed the structure accurately, and their memory abilities determined how fast they were; in the latter condition, children processed the structure accurately only if they were strong both in grammatical skills and in memory capacity. The results are discussed in the light of accounts that predict pronoun effects, like the ones we find, that depend on the referential properties of the pronouns. We then discuss which role language and memory abilities might play in processing object relatives with various embedded nominal phrases.
Sentences with doubly center-embedded relative clauses in which a verb phrase (VP) is missing are sometimes perceived as grammatical, thus giving rise to an illusion of grammaticality. In this paper, we provide a new account of why missing-VP sentences, which are both complex and ungrammatical, lead to an illusion of grammaticality, the so-called missing-VP effect. We propose that the missing-VP effect in particular, and processing difficulties with multiply center-embedded clauses more generally, are best understood as resulting from interference during cue-based retrieval. When processing a sentence with double center-embedding, a retrieval error due to interference can cause the verb of an embedded clause to be erroneously attached into a higher clause. This can lead to an illusion of grammaticality in the case of missing-VP sentences and to processing complexity in the case of complete sentences with double center-embedding. Evidence for an interference account of the missing-VP effect comes from experiments that have investigated the missing-VP effect in German using a speeded grammaticality judgments procedure. We review this evidence and then present two new experiments that show that the missing-VP effect can be found in German also with less restrictive procedures. One experiment was a questionnaire study which required grammaticality judgments from participants without imposing any time constraints. The second experiment used a self-paced reading procedure and did not require any judgments. Both experiments confirm the prior findings of missing-VP effects in German and also show that the missing-VP effect is subject to a primacy effect as known from the memory literature. Based on this evidence, we argue that an account of missing-VP effects in terms of interference during cue-based retrieval is superior to accounts in terms of limited memory resources or in terms of experience with embedded structures.
A number of recent studies have investigated how syntactic and non-syntactic constraints combine to cue memory retrieval during anaphora resolution. In this paper we investigate how syntactic constraints and gender congruence interact to guide memory retrieval during the resolution of subject pronouns. Subject pronouns are always technically ambiguous, and the application of syntactic constraints on their interpretation depends on properties of the antecedent that is to be retrieved. While pronouns can freely corefer with non-quantified referential antecedents, linking a pronoun to a quantified antecedent is only possible in certain syntactic configurations via variable binding. We report the results from a judgment task and three online reading comprehension experiments investigating pronoun resolution with quantified and non-quantified antecedents. Results from both the judgment task and participants' eye movements during reading indicate that comprehenders freely allow pronouns to corefer with non-quantified antecedents, but that retrieval of quantified antecedents is restricted to specific syntactic environments. We interpret our findings as indicating that syntactic constraints constitute highly weighted cues to memory retrieval during anaphora resolution.
We define and study in detail ultraslow scaled Brownian motion (USBM), characterized by a time-dependent diffusion coefficient that decays inverse-proportionally with time. For unconfined motion the mean squared displacement (MSD) of USBM exhibits an ultraslow, logarithmic growth as a function of time, in contrast to conventional scaled Brownian motion. In a harmonic potential the MSD of USBM does not saturate but asymptotically decays inverse-proportionally to time, reflecting the highly non-stationary character of the process. We show that the process is weakly non-ergodic in the sense that the time-averaged MSD does not converge to the regular MSD even at long times, and for unconfined motion combines a linear lag-time dependence with a logarithmic term. The weakly non-ergodic behaviour is quantified in terms of the ergodicity breaking parameter. The USBM process is also shown to be ageing: observables of the system depend on the time gap between the initialisation of the test particle and the start of the measurement of its motion. Our analytical results are shown to agree excellently with extensive computer simulations.
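A minimal simulation of such a process can be sketched with the illustrative choice D(t) = D0/(1 + t), D0 = 1, which reproduces the logarithmic MSD growth described above (the paper's exact parametrisation may differ):

```python
import math, random

def usbm_msd(t_max=100.0, dt=0.05, n_walkers=500, seed=1):
    """Euler scheme for scaled Brownian motion with the ultraslow,
    inverse-proportionally decaying choice D(t) = 1/(1 + t).  The
    unconfined ensemble MSD should follow 2*ln(1 + t), i.e. logarithmic
    growth."""
    rng = random.Random(seed)
    x = [0.0] * n_walkers
    t = 0.0
    while t < t_max:
        step = math.sqrt(2.0 * dt / (1.0 + t))  # sqrt(2 D(t) dt)
        for i in range(n_walkers):
            x[i] += step * rng.gauss(0.0, 1.0)
        t += dt
    return sum(xi * xi for xi in x) / n_walkers

msd = usbm_msd()
theory = 2.0 * math.log(1.0 + 100.0)  # logarithmic MSD prediction
```

The ensemble average over 500 walkers agrees with the logarithmic prediction up to statistical noise of a few percent.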
Through the use of next-generation sequencing (NGS) technology, many newly sequenced organisms are now available. Annotating their genes is one of the most challenging tasks in sequence biology. Here, we present an automated workflow to find homologous proteins, annotate sequences according to function and create a three-dimensional model.
With the jABC it is possible to realize workflows for numerous questions in different fields. The goal of this project was to create a workflow for the identification of differentially expressed genes. This is of special interest in biology, as it offers better insight into cellular changes caused by exogenous stress, diseases and other factors. With the knowledge that can be derived from differentially expressed genes in diseased tissues, it becomes possible to find new targets for treatment.
A workflow for visualizing server connections using the Google Maps API was built in the jABC. It makes use of three basic services: an XML-based IP-address geolocation web service, a command line tool and the Static Maps API. The result of the workflow is a URL leading to an image file of a map showing the server connections between a client and a target host.
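The final step of such a workflow, composing a Static Maps URL from geolocated coordinates, might be sketched as follows. The `markers` and `path` parameters follow the public Static Maps API; the concrete coordinates are placeholders, and a real request would additionally need an API key.

```python
from urllib.parse import urlencode

def static_map_url(client, targets, size="640x400"):
    """Compose a Static Maps URL that marks a client (blue) and its
    target hosts (red) and draws a path between them.  Coordinates are
    (lat, lon) tuples, e.g. as returned by an IP geolocation service."""
    base = "https://maps.googleapis.com/maps/api/staticmap"
    params = [("size", size), ("markers", "color:blue|%.4f,%.4f" % client)]
    for lat, lon in targets:
        params.append(("markers", "color:red|%.4f,%.4f" % (lat, lon)))
        params.append(("path", "%.4f,%.4f|%.4f,%.4f" % (client[0], client[1], lat, lon)))
    return base + "?" + urlencode(params)

# Placeholder coordinates: a client in Potsdam, a target host in Mountain View.
url = static_map_url((52.3906, 13.0645), [(37.4220, -122.0841)])
```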
Geocoder accuracy ranking
(2014)
Finding an address on a map is sometimes tricky: the chosen map application may be unfamiliar with the region in question. There are several geocoders on the market; they use different databases and algorithms to process a query. Consequently, the geocoding results differ in quality. Fortunately, the geocoders provide a rich set of metadata. The workflow described in this paper compares this metadata in order to find out which geocoder offers the best-fitting coordinate for a given address.
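A metadata-based ranking of this kind can be sketched as follows. The field names (`match_type`, `confidence`) and the scoring weights are illustrative stand-ins for whatever metadata a concrete geocoder actually returns.

```python
def rank_geocoders(results):
    """Rank geocoder results for one address by their self-reported
    metadata: an exact match beats an interpolated one, which beats an
    approximate one; a confidence value breaks ties."""
    match_quality = {"exact": 3, "interpolated": 2, "approximate": 1}

    def score(r):
        return match_quality.get(r.get("match_type"), 0) + r.get("confidence", 0.0)

    return sorted(results, key=score, reverse=True)

# Hypothetical responses from two geocoders for the same address.
best = rank_geocoders([
    {"geocoder": "A", "match_type": "approximate", "confidence": 0.9},
    {"geocoder": "B", "match_type": "exact", "confidence": 0.7},
])[0]
```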
Analyses of metagenomes in the life sciences present new opportunities as well as challenges to the scientific community and call for advanced computational methods and workflows. The large amount of data collected from samples via next-generation sequencing (NGS) technologies renders manual approaches to sequence comparison and annotation unsuitable. Rather, fast and efficient computational pipelines are needed to provide comprehensive statistics and summaries and to enable the researcher to choose appropriate tools for more specific analyses. The workflow presented here builds upon previous pipelines designed for the automated clustering and annotation of raw sequence reads obtained from next-generation sequencing technologies such as 454 and Illumina. Employing specialized algorithms, the sequence reads are processed at three different levels. First, raw reads are clustered at a high similarity cutoff to yield clusters which can be exported as multifasta files for further analyses. Independently, open reading frames (ORFs) are predicted from raw reads and clustered at two strictness levels to yield sets of non-redundant sequences and ORF families. Furthermore, single ORFs are annotated by performing searches against the Pfam database.
Geometric generalization is a fundamental concept in the digital mapping process. An increasing amount of spatial data is provided on the web, along with a range of tools to process it. This jABC workflow is used for the automatic testing of web-based generalization services such as mapshaper.org by executing their functionality, overlaying the datasets from before and after the transformation, and displaying them visually in a .tif file. Mostly web services and command line tools are used to build an environment where ESRI shapefiles can be uploaded, processed through a chosen generalization service and finally visualized in IrfanView.
In the geoinformatics field, remote sensing data are often used for analyzing the characteristics of the current investigation area. This includes DEMs, which are simple raster grids containing grey scales representing the respective elevation values. The project CREADED presented in this paper aims at making these monochrome raster images more meaningful and more intuitively interpretable. For this purpose, an executable interactive model for creating a colored and relief-shaded Digital Elevation Model (DEM) has been designed using the jABC framework. The process is based on standard jABC-SIBs and SIBs that provide specific GIS functions, which are available as web services, command line tools and scripts.
This paper describes the implementation of a workflow model for the service-oriented computation of potential areas for wind turbines in jABC. By implementing a re-executable model, the manual effort of a multi-criteria site analysis can be reduced. The aim is to assess the shift of typical geoprocessing tools of geographic information systems (GIS) from the desktop to the web. The analysis is based on a vector data set and mainly uses web services of the Center for Spatial Information Science and Systems (CSISS). This paper discusses the effort, benefits and problems associated with the use of these web services.
Location analyses are among the most common tasks when working with spatial data and geographic information systems. Automating the most frequently used procedures is therefore an important aspect of improving their usability. In this context, this project aims to design and implement a workflow providing some basic tools for a location analysis. For the implementation with jABC, the workflow was applied to the problem of finding a suitable location for placing an artificial reef. For this analysis three parameters (bathymetry, slope and grain size of the ground material) were taken into account, processed, and visualized with The Generic Mapping Tools (GMT), which were integrated into the workflow as jETI-SIBs. The implemented workflow thereby showed that the approach of combining jABC with GMT resulted in a user-centric and user-friendly tool with high-quality cartographic outputs.
Creation of topographic maps
(2014)
GraffDok is an application that helps maintain an overview of sprayed images in a city. At the time of writing it targets vandalism rather than elaborate graffiti art in underpasses. Looking at hundreds of tags and scribbles on monuments, house walls, etc., it would be interesting not only to record them in writing but also to make them accessible electronically, including images.
GraffDok’s workflow is simple and only requires an EXIF-GPS-tagged photograph of a graffito. It automatically determines the graffito’s location by reverse geocoding the given GPS coordinates via the Gisgraphy web service. While asking the user for some more metadata, GraffDok analyses the image in parallel and tries to separate foreground and background before extracting the drawing lines and making them stand alone. The command line based tool ImageMagick is used here, as well as for accessing the EXIF data.
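The first step of such a pipeline, turning the EXIF GPS rationals into decimal coordinates for the reverse-geocoding query, is plain arithmetic and can be sketched as follows (the sample coordinate is an illustrative value):

```python
from fractions import Fraction

def exif_gps_to_decimal(dms, ref):
    """Convert an EXIF GPS coordinate, stored as three rationals
    (degrees, minutes, seconds given as numerator/denominator pairs),
    into signed decimal degrees suitable for a reverse-geocoding query.
    'ref' is the EXIF hemisphere tag: N/S for latitude, E/W for
    longitude."""
    deg, minutes, seconds = (Fraction(*r) for r in dms)
    decimal = float(deg + minutes / 60 + seconds / 3600)
    return -decimal if ref in ("S", "W") else decimal

# 52 deg 23' 42.30" N, as it would appear in the EXIF GPSLatitude tag.
lat = exif_gps_to_decimal([(52, 1), (23, 1), (4230, 100)], "N")
```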
All metadata are written to CSV files, which remain easily accessible and can also be integrated into TeX files. The latter are converted to PDF at the end of the workflow, yielding a table of all graffiti and a summary for each, including the generated characteristic graffiti pattern image.
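The CSV-to-TeX step can be sketched as follows; the column names are hypothetical examples of the recorded metadata.

```python
import csv
import io

def csv_to_latex_rows(csv_text):
    """Turn per-graffito metadata from a CSV string into the body of a
    LaTeX tabular environment, one '&'-separated row per record, each
    terminated by a LaTeX line break."""
    reader = csv.reader(io.StringIO(csv_text))
    return "\n".join(" & ".join(row) + r" \\" for row in reader)

# Hypothetical metadata: an id column and the reverse-geocoded location.
rows = csv_to_latex_rows("id,location\n1,Potsdam")
```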
We conducted two eye-tracking experiments investigating the processing of the Mandarin reflexive ziji in order to tease apart structurally constrained accounts from standard cue-based accounts of memory retrieval. In both experiments, we tested whether structurally inaccessible distractors that fulfill the animacy requirement of ziji influence processing times at the reflexive. In Experiment 1, we manipulated animacy of the antecedent and a structurally inaccessible distractor intervening between the antecedent and the reflexive. In conditions where the accessible antecedent mismatched the animacy cue, we found inhibitory interference whereas in antecedent-match conditions, no effect of the distractor was observed. In Experiment 2, we tested only antecedent-match configurations and manipulated locality of the reflexive-antecedent binding (Mandarin allows non-local binding). Participants were asked to hold three distractors (animate vs. inanimate nouns) in memory while reading the target sentence. We found slower reading times when animate distractors were held in memory (inhibitory interference). Moreover, we replicated the locality effect reported in previous studies. These results are incompatible with structure-based accounts. However, the cue-based ACT-R model of Lewis and Vasishth (2005) cannot explain the observed pattern either. We therefore extend the original ACT-R model and show how this model not only explains the data presented in this article, but is also able to account for previously unexplained patterns in the literature on reflexive processing.
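The retrieval component of such an ACT-R style model rests on the standard activation and latency equations (A = B + Σ_j W_j S_j, latency ∝ exp(−A)); the sketch below uses illustrative, unfitted parameter values.

```python
import math

def retrieval_latency(base_activation, cue_weights, strengths, F=0.2):
    """ACT-R style retrieval: a chunk's activation is its base level plus
    the weighted associative strengths of the retrieval cues it matches,
    A = B + sum_j(W_j * S_j); retrieval latency scales as F * exp(-A).
    Parameter values here are illustrative, not fitted to the data."""
    activation = base_activation + sum(w * s for w, s in zip(cue_weights, strengths))
    return F * math.exp(-activation)

# A candidate antecedent matching both cues (e.g. a structural cue and the
# animacy cue of ziji) is retrieved faster than one matching only a single cue.
both_cues = retrieval_latency(0.5, [1.0, 1.0], [1.5, 1.5])
one_cue = retrieval_latency(0.5, [1.0, 1.0], [1.5, 0.0])
```

Inhibitory interference arises in such models when a distractor also receives cue support and competes for retrieval, lowering the target's relative activation.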
Two classes of account have been proposed to explain the memory processes subserving the processing of reflexive-antecedent dependencies. Structure-based accounts assume that the retrieval of the antecedent is guided by syntactic tree-configurational information without considering other kinds of information such as gender marking in the case of English reflexives. By contrast, unconstrained cue-based retrieval assumes that all available information is used for retrieving the antecedent. Similarity-based interference effects from structurally illicit distractors which match a non-structural retrieval cue have been interpreted as evidence favoring the unconstrained cue-based retrieval account since cue-based retrieval interference from structurally illicit distractors is incompatible with the structure-based account. However, it has been argued that the observed effects do not necessarily reflect interference occurring at the moment of retrieval but might equally well be accounted for by interference occurring already at the stage of encoding or maintaining the antecedent in memory, in which case they cannot be taken as evidence against the structure-based account. We present three experiments (self-paced reading and eye-tracking) on German reflexives and Swedish reflexive and pronominal possessives in which we pit the predictions of encoding interference and cue-based retrieval interference against each other. We could not find any indication that encoding interference affects the processing ease of the reflexive-antecedent dependency formation. Thus, there is no evidence that encoding interference might be the explanation for the interference effects observed in previous work. We therefore conclude that invoking encoding interference may not be a plausible way to reconcile interference effects with a structure-based account of reflexive processing.
Stochastic Wilson
(2015)
We consider a simple Markovian class of the stochastic Wilson–Cowan type models of neuronal network dynamics, which incorporates stochastic delay caused by the existence of a refractory period of neurons. From the point of view of the dynamics of the individual elements, we are dealing with a network of non-Markovian stochastic two-state oscillators with memory, which are coupled globally in a mean-field fashion. This interrelation of a higher-dimensional Markovian and a lower-dimensional non-Markovian dynamics is discussed in its relevance to the general problem of the network dynamics of complex elements possessing memory. The simplest model of this class is provided by a three-state Markovian neuron with one refractory state, which causes a firing delay with an exponentially decaying memory within the two-state reduced model. This basic model is used to study critical avalanche dynamics (noise-sustained criticality) in a balanced feedforward network consisting of excitatory and inhibitory neurons. Such avalanches emerge due to the network-size-dependent noise (mesoscopic noise). Numerical simulations reveal an intermediate power law in the distribution of avalanche sizes with a critical exponent around −1.16. We show that this power law is robust upon variation of the refractory time over several orders of magnitude. However, the avalanche time distribution is biexponential and does not reflect any genuine power-law dependence.
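The three-state neuron described above can be sketched with a Gillespie-type simulation of a single unit cycling through quiescent, firing, and refractory states with exponentially distributed dwell times. The rate values below are illustrative placeholders, not the paper's parameters, and the mean-field network coupling is omitted.

```python
# Gillespie simulation of a single three-state Markovian neuron:
# 0 = quiescent, 1 = firing, 2 = refractory.  The refractory state
# introduces an exponentially distributed firing delay, which is the
# source of memory in the reduced two-state description.
# Rates are illustrative, not taken from the paper.
import random

def simulate_neuron(t_max, r_fire=1.0, r_decay=10.0, r_recover=2.0, seed=1):
    rng = random.Random(seed)
    rates = {0: r_fire, 1: r_decay, 2: r_recover}  # exit rate of each state
    state, t, spikes = 0, 0.0, 0
    while True:
        t += rng.expovariate(rates[state])  # exponential dwell time
        if t > t_max:
            break
        if state == 1:
            spikes += 1  # count completed firing events
        state = (state + 1) % 3  # cycle 0 -> 1 -> 2 -> 0
    return spikes

spikes = simulate_neuron(t_max=1000.0)
```

With the default rates the mean cycle time is 1/1 + 1/10 + 1/2 = 1.6 s, so one expects roughly 625 firing events over 1000 s; the full model would couple many such units through a mean field.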
Objective: Alexithymia relates to difficulties recognizing and describing emotions. It has been linked to subjectively increased interoceptive awareness (IA) and to psychiatric illnesses such as major depressive disorder (MDD) and somatization. MDD in turn is characterized by aberrant emotion processing and IA on the subjective as well as on the neural level. However, a link between neural activity in response to IA and alexithymic traits in health and depression remains unclear.
Methods: A well-established fMRI task was used to investigate neural activity during IA (heartbeat counting) and exteroceptive awareness (tone counting) in non-psychiatric controls (NC) and MDD. Firstly, comparing MDD and NC, a linear relationship between IA-related activity and scores of the Toronto Alexithymia Scale (TAS) was investigated through whole-brain regression. Secondly, NC were divided by median-split of TAS scores into groups showing low (NC-low) or high (NC-high) alexithymia. MDD and NC-high showed equally high TAS scores. Subsequently, IA-related neural activity was compared on a whole-brain level between the three independent samples (MDD, NC-low, NC-high).
Results: Whole-brain regressions between MDD and NC revealed neural differences during IA as a function of TAS-DD (subscale difficulty describing feelings) in the supragenual anterior cingulate cortex (sACC; BA 24/32), which were due to negative associations between TAS-DD and IA-related activity in NC. Contrasting NC subgroups after median-split on a whole-brain level, high TAS scores were associated with decreased neural activity during IA in the sACC and increased insula activity. Though having equally high alexithymia scores, NC-high showed increased insula activity during IA compared to MDD, whilst both groups showed decreased activity in the sACC.
Conclusions: Within the context of decreased sACC activity during IA in alexithymia (NC-high and MDD), increased insula activity might mirror a compensatory mechanism in NC-high, which is disrupted in MDD.
We study the diffusion of a tracer particle moving in continuum space between a lattice of immobile, non-inert obstacles of excluded volume. In particular, we analyse how the strength of the tracer–obstacle interactions and the volume occupancy of the crowders alter the diffusive motion of the tracer. From the details of the partitioning of the tracer diffusion modes between trapping states, when bound to obstacles, and bulk diffusion, we examine the degree of localisation of the tracer in the lattice of crowders. We study the properties of the tracer diffusion in terms of the ensemble and time averaged mean squared displacements, the trapping time distributions, the amplitude variation of the time averaged mean squared displacements, and the non-Gaussianity parameter of the diffusing tracer. We conclude that tracer–obstacle adsorption and binding triggers a transient anomalous diffusion. From the very narrow spread of the recorded individual time averaged trajectories we exclude continuous time random walk processes as the underlying physical model of the tracer diffusion in our system. For moderate tracer–crowder attraction the motion is found to be fully ergodic, while at stronger attraction strength a transient disparity between ensemble and time averaged mean squared displacements occurs. We also put our results into perspective with findings from experimental single-particle tracking and simulations of the diffusion of tagged tracers in dense crowded suspensions. Our results have implications for the diffusion, transport, and spreading of chemical components in highly crowded environments inside living cells and other structured liquids.
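The time averaged mean squared displacement used throughout the analysis above is computed from a single trajectory by a sliding-window average over all pairs of points separated by a given lag. A minimal sketch of this standard estimator for a uniformly sampled 1D trajectory:

```python
# Standard single-trajectory time-averaged MSD estimator.
# For an ergodic process this converges to the ensemble MSD
# as the trajectory length grows.
def time_averaged_msd(x, lag):
    """x: uniformly sampled 1D trajectory; lag: lag in sampling intervals.
    TAMSD(lag) = (1/(N-lag)) * sum_i (x[i+lag] - x[i])**2"""
    n = len(x)
    if not 0 < lag < n:
        raise ValueError("lag must satisfy 0 < lag < len(x)")
    return sum((x[i + lag] - x[i]) ** 2 for i in range(n - lag)) / (n - lag)

# Sanity check on a ballistic trajectory x(t) = v*t: TAMSD(lag) = (v*lag)**2
traj = [0.5 * i for i in range(100)]
```

Comparing this quantity across individual trajectories, and against the ensemble average, is precisely what reveals the ergodic versus weakly non-ergodic behaviour discussed in the abstract.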
The looping of polymers such as DNA is a fundamental process in the molecular biology of living cells, whose interior is characterised by a high degree of molecular crowding. We here investigate in detail the looping dynamics of flexible polymer chains in the presence of different degrees of crowding. From the analysis of the looping–unlooping rates and the looping probabilities of the chain ends we show that the presence of small crowders typically slows down the chain dynamics but larger crowders may in fact facilitate the looping. We rationalise these non-trivial and often counterintuitive effects of the crowder size on the looping kinetics in terms of an effective solution viscosity and standard excluded volume. It is shown that for small crowders the effect of an increased viscosity dominates, while for big crowders we argue that confinement effects (caging) prevail. The tradeoff between both trends can thus result in the impediment or facilitation of polymer looping, depending on the crowder size. We also examine how the crowding volume fraction, chain length, and the attraction strength of the contact groups of the polymer chain affect the looping kinetics and hairpin formation dynamics. Our results are relevant for DNA looping in the absence and presence of protein mediation, DNA hairpin formation, RNA folding, and the folding of polypeptide chains under biologically relevant high-crowding conditions.
ANG-2 for quantitative Na+ determination in living cells by time-resolved fluorescence microscopy
(2014)
Sodium ions (Na+) play an important role in a plethora of cellular processes, which are complex and partly still unexplored. For the investigation of these processes and quantification of intracellular Na+ concentrations ([Na+]i), two-photon coupled fluorescence lifetime imaging microscopy (2P-FLIM) was performed in the salivary glands of the cockroach Periplaneta americana. For this, the novel Na+-sensitive fluorescent dye Asante NaTRIUM Green-2 (ANG-2) was evaluated, both in vitro and in situ. In this context, absorption coefficients, fluorescence quantum yields and 2P action cross-sections were determined for the first time. ANG-2 was 2P-excitable over a broad spectral range and displayed fluorescence in the visible spectral range. Although the fluorescence decay behaviour of ANG-2 was triexponential in vitro, its analysis indicates a Na+-sensitivity appropriate for recordings in living cells. The Na+-sensitivity was reduced in situ, but the biexponential fluorescence decay behaviour could be successfully analysed in terms of quantitative [Na+]i recordings. Thus, physiological 2P-FLIM measurements revealed a dopamine-induced [Na+]i rise in cockroach salivary gland cells, which was dependent on a Na+-K+-2Cl− cotransporter (NKCC) activity. It was concluded that ANG-2 is a promising new sodium indicator applicable for diverse biological systems.
Arsenic-containing hydrocarbons (AsHC) constitute one group of arsenolipids that have been identified in seafood. In this first in vivo toxicity study for AsHCs, we show that AsHCs exert toxic effects in Drosophila melanogaster in a concentration range similar to that of arsenite. In contrast to arsenite, however, AsHCs cause developmental toxicity in the late developmental stages of Drosophila melanogaster. This work illustrates the need for a full characterisation of the toxicity of AsHCs in experimental animals to finally assess the risk to human health related to the presence of arsenolipids in seafood.
We report a 1,2,3-triazole fluoroionophore for detecting Na+ that shows an in vitro enhancement of the Na+-induced fluorescence intensity and decay time. The Na+-selective molecule 1 was incorporated into a hydrogel as part of a fiber-optic sensor. This sensor allows the direct determination of Na+ in the range of 1–10 mM by measuring reversible changes in the fluorescence decay time.
Molecular motors pulling cargos in the viscoelastic cytosol: how power strokes beat subdiffusion
(2014)
The discovery of anomalous diffusion of larger biopolymers and submicron tracers such as endogenous granules, organelles, or virus capsids in living cells, attributed to the viscoelastic nature of the cytoplasm, provokes the question whether this complex environment equally impacts the active intracellular transport of submicron cargos by molecular motors such as kinesins: does the passive anomalous diffusion of free cargo always imply its anomalously slow active transport by motors, with the mean transport distance along the microtubule growing sublinearly rather than linearly in time? Here we analyze this question within the widely used two-state Brownian ratchet model of kinesin motors, based on continuous-state diffusion along the microtubule driven by a flashing binding potential, where the cargo particle is elastically attached to the motor. Depending on the cargo size, the loading force, the amplitude of the binding potential, the turnover frequency of the molecular motor enzyme, and the linker stiffness, we demonstrate that the motor transport may turn out either normal or anomalous, as indeed measured experimentally. We show how a highly efficient normal active transport mediated by motors may emerge despite the passive anomalous diffusion of the cargo, and study the intricate effects of the elastic linker. Under different, well-specified conditions the microtubule-based motor transport becomes anomalously slow and thus significantly less efficient.
Anomalous diffusion is frequently described by scaled Brownian motion (SBM), a Gaussian process with a power-law time-dependent diffusion coefficient. Its mean squared displacement is ⟨x²(t)⟩ ≃ 2K(t)t with K(t) ≃ t^(α−1) for 0 < α < 2. SBM may provide a seemingly adequate description in the case of unbounded diffusion, for which its probability density function coincides with that of fractional Brownian motion. Here we show that free SBM is weakly non-ergodic but does not exhibit a significant amplitude scatter of the time averaged mean squared displacement. More severely, we demonstrate that under confinement, the dynamics encoded by SBM is fundamentally different from both fractional Brownian motion and continuous time random walks. SBM is highly non-stationary and cannot provide a physical description for particles in a thermalised stationary system. Our findings have direct impact on the modelling of single particle tracking experiments, in particular under confinement inside cellular compartments or when optical tweezers tracking methods are used.
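SBM as defined above can be sketched directly: with K(t) = K·t^(α−1) the MSD is ⟨x²(t)⟩ = 2K·t^α, and a trajectory is built from independent Gaussian increments whose variance follows the instantaneous diffusion coefficient D(t) = αK·t^(α−1). A minimal sketch under these assumptions (K and the discretisation are illustrative):

```python
# Minimal sketch of scaled Brownian motion (SBM).
# Assumes K(t) = K * t**(alpha-1), so <x^2(t)> = 2*K*t**alpha and the
# instantaneous diffusion coefficient is D(t) = alpha*K*t**(alpha-1).
import math, random

def sbm_msd(t, alpha, K=1.0):
    """Ensemble MSD of SBM: <x^2(t)> = 2*K*t**alpha."""
    return 2.0 * K * t ** alpha

def simulate_sbm(n_steps, dt, alpha, K=1.0, seed=0):
    """One SBM trajectory: independent Gaussian increments with
    variance 2*D(t)*dt.  Note the explicit time dependence -- SBM is
    non-stationary, which underlies its non-ergodic behaviour under
    confinement."""
    rng = random.Random(seed)
    x, traj = 0.0, [0.0]
    for i in range(1, n_steps + 1):
        t = i * dt
        var = 2.0 * K * alpha * t ** (alpha - 1.0) * dt
        x += rng.gauss(0.0, math.sqrt(var))
        traj.append(x)
    return traj

traj = simulate_sbm(1000, 0.01, alpha=0.5)
```

For α = 1 the increment variance becomes the constant 2K·dt and ordinary Brownian motion is recovered.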
Probably no other field of statistical physics at the borderline of soft matter and biological physics has caused such a flurry of papers as polymer translocation since the 1994 landmark paper by Bezrukov, Vodyanoy, and Parsegian and the study of Kasianowicz in 1996. Experiments, simulations, and theoretical approaches are still contributing novel insights to date, while no universal consensus on the statistical understanding of polymer translocation has been reached. We here collect the published results, in particular, the famous–infamous debate on the scaling exponents governing the translocation process. We put these results into perspective and discuss where the field is going. In particular, we argue that the phenomenon of polymer translocation is non-universal and highly sensitive to the exact specifications of the models and experiments used towards its analysis.
This study aims to further the mechanistic understanding of toxic modes of action after chronic inorganic arsenic exposure. To this end, long-term incubation studies in cultured cells were carried out to reveal chronically attained changes that cannot be observed in the generally applied in vitro short-term incubation studies. In particular, the cytotoxic, genotoxic and epigenetic effects of incubating human urothelial (UROtsa) cells for up to 21 days with pico- to nanomolar concentrations of iAsIII and its metabolite thio-DMAV were compared. After 21 days of incubation, cytotoxic effects were strongly enhanced in the case of iAsIII and might partly be due to glutathione depletion and genotoxic effects on the chromosomal level. These results are in strong contrast to cells exposed to thio-DMAV: here, the cells seemed able to adapt to this arsenical, as indicated among others by an increase in the cellular glutathione level. Most interestingly, picomolar concentrations of both iAsIII and thio-DMAV caused global DNA hypomethylation in UROtsa cells, which was quantified in parallel by 5-medC immunostaining and a newly established, reliable, high-resolution mass spectrometry (HRMS)-based test system. This is the first time that epigenetic effects are reported for thio-DMAV; iAsIII-induced epigenetic effects occur at concentrations at least 8000-fold lower than reported in vitro before. The fact that both arsenicals cause DNA hypomethylation at very low, exposure-relevant concentrations in human urothelial cells suggests that this epigenetic effect might contribute to inorganic arsenic-induced carcinogenicity, which certainly has to be further investigated in future studies.
Diffusion of finite-size particles in two-dimensional channels with random wall configurations
(2014)
Diffusion of chemicals or tracer molecules through complex systems containing irregularly shaped channels is important in many applications. Most theoretical studies based on the famed Fick–Jacobs equation focus on the idealised case of infinitely small particles and reflecting boundaries. In this study we use numerical simulations to consider the transport of finite-size particles through asymmetrical two-dimensional channels. Additionally, we examine transient binding of the molecules to the channel walls by applying sticky boundary conditions. We consider an ensemble of particles diffusing in independent channels, which are characterised by common structural parameters. We compare our results for the long-time effective diffusion coefficient with a recent theoretical formula obtained by Dagdug and Pineda [J. Chem. Phys., 2012, 137, 024107].
Graphitic carbon nitride, g-C₃N₄, is a promising organic photo-catalyst for a variety of redox reactions. In order to improve its efficiency in a systematic manner, however, a fundamental understanding of the microscopic interaction between catalyst, reactants and products is crucial. Here we present a systematic study of water adsorption on g-C₃N₄ by means of density functional theory and the density functional based tight-binding method as a prerequisite for understanding photocatalytic water splitting. We then analyze this prototypical redox reaction on the basis of a thermodynamic model providing an estimate of the overpotential for both water oxidation and H⁺ reduction. While the latter is found to occur readily upon irradiation with visible light, we derive a prohibitive overpotential of 1.56 eV for the water oxidation half reaction, comparing well with the experimental finding that in contrast to H₂ production O₂ evolution is only possible in the presence of oxidation cocatalysts.
There is a wealth of evidence showing that increasing the distance between an argument and its head leads to more processing effort, namely, locality effects; these are usually associated with constraints in working memory (DLT: Gibson, 2000; activation-based model: Lewis and Vasishth, 2005). In SOV languages, however, the opposite effect has been found: antilocality (see discussion in Levy et al., 2013). Antilocality effects can be explained by the expectation-based approach as proposed by Levy (2008) or by the activation-based model of sentence processing as proposed by Lewis and Vasishth (2005). We report an eye-tracking and a self-paced reading study with sentences in Spanish together with measures of individual differences to examine the distinction between expectation- and memory-based accounts, and within memory-based accounts the further distinction between DLT and the activation-based model. The experiments show that (i) antilocality effects as predicted by the expectation account appear only for high-capacity readers; (ii) increasing dependency length by interposing material that modifies the head of the dependency (the verb) produces stronger facilitation than increasing dependency length with material that does not modify the head; this is in agreement with the activation-based model but not with the expectation account; and (iii) a possible outcome of memory load on low-capacity readers is the increase in regressive saccades (locality effects as predicted by memory-based accounts) or, surprisingly, a speedup in the self-paced reading task; the latter consistent with good-enough parsing (Ferreira et al., 2002). In sum, the study suggests that individual differences in working memory capacity play a role in dependency resolution, and that some of the aspects of dependency resolution can be best explained with the activation-based model together with a prediction component.
The neurophysiological and behavioral correlates of action-related language processing have been debated for a long time. A precursor in this field was the study by Buccino et al. (2005), which combined transcranial magnetic stimulation (TMS) and behavioral measures (reaction times, RTs) to study the effect of listening to hand- and foot-related sentences. In the TMS experiment, the authors showed a decrease of motor evoked potentials (MEPs) recorded from hand muscles when processing hand-related verbs as compared to foot-related verbs. Similarly, MEPs recorded from leg muscles decreased when participants processed foot-related as compared to hand-related verbs. In the behavioral experiment, using the same stimuli and a semantic decision task, the authors found slower RTs when the participants used the body effector (hand or foot) involved in the actual execution of the action expressed by the presented verb to give their motor responses. These findings were interpreted as an interference effect due to a simultaneous involvement of the motor system in both a language and a motor task. Our replication aimed to enlarge the sample size and replicate the findings with higher statistical power. The TMS experiment showed a significant modulation of hand MEPs, but in the sense of a motor facilitation when processing hand-related verbs. By contrast, the behavioral experiment did not show significant results. The results are discussed within the general debate on the time course of the modulation of motor cortex during implicit and explicit language processing and in relation to studies on action observation/understanding.
Background: Interoceptive awareness (iA), the awareness of stimuli originating inside the body, plays an important role in human emotions and psychopathology. The insula is particularly involved in neural processes underlying iA. However, iA-related neural activity in the insula during the acute state of major depressive disorder (MDD) and in remission from depression has not been explored.
Methods: A well-established fMRI paradigm for studying interoceptive awareness (iA; heartbeat counting) and exteroceptive awareness (eA; tone counting) was used. Study participants formed three independent groups: patients suffering from MDD, patients in remission from MDD, and healthy controls. Task-induced neural activity in three functional subdivisions of the insula was compared between these groups.
Results: Depressed participants showed neural hypo-responses during iA in anterior insula regions, as compared to both healthy and remitted participants. The right dorsal anterior insula showed the strongest response to iA across all participant groups. In depressed participants there was no differentiation between the different stimulus types in this region (i.e., between iA, eA and noTask). Healthy and remitted participants, in contrast, showed clear activity differences.
Conclusions: This is the first study comparing iA- and eA-related activity in the insula of depressed participants to that in healthy and remitted individuals. The preliminary results suggest that the groups differ in that the depressed participants show hypo-responses across insula regions, whilst non-psychiatric participants and patients in remission from MDD show the same neural activity during iA in insula subregions, implying a possible state marker for MDD. The lack of activity differences between different stimulus types in the depressed group may account for their symptoms of altered external and internal focus.
Background
Previous literature has mainly invoked cognitive functions to explain performance decrements in dual-task walking, i.e., changes in dual-task locomotion are attributed to limited cognitive information-processing capacities. In this study, we extend the existing literature and investigate whether leg muscular capacity plays an additional role in children’s dual-task walking performance.
Methods
To this end, we had prepubescent children (mean age: 8.7 ± 0.5 years, age range: 7–9 years) walk under single-task conditions (ST) and while concurrently performing an arithmetic subtraction task (dual task, DT). Additionally, leg lean tissue mass was assessed.
Results
Findings show that both boys and girls significantly decrease their gait velocity (f = 0.73), stride length (f = 0.62) and cadence (f = 0.68) and increase the variability thereof (f = 0.20–0.63) during DT compared to ST. Furthermore, stepwise regressions indicate that leg lean tissue mass is closely associated with step time and the variability thereof during DT (R2 = 0.44, p = 0.009). These associations between gait measures and leg lean tissue mass could not be observed for ST (R2 = 0.17, p = 0.19).
Conclusion
We were able to show a potential link between leg muscular capacities and DT walking performance in children. We interpret these findings as evidence that higher leg muscle mass in children may mitigate the impact of a cognitive interference task on DT walking performance by inducing enhanced gait stability.
Picosecond X-ray absorption spectroscopy (XAS) is used to investigate the electronic and structural dynamics initiated by plasmon excitation of 1.8 nm diameter Au nanoparticles (NPs) functionalised with 1-hexanethiol. We show that 100 ps after photoexcitation the transient XAS spectrum is consistent with an 8% expansion of the Au–Au bond length and a large increase in disorder associated with melting of the NPs. Recovery of the ground state occurs with a time constant of ∼1.8 ns, arising from thermalisation with the environment. Simulations reveal that the transient spectrum exhibits no signature of charge separation at 100 ps and allows us to estimate an upper limit for the quantum yield (QY) of this process to be <0.1.
New porous materials based on covalently connected monomers are presented. The key step of the synthesis is an acetalisation reaction. In previous years we have used acetalisation reactions extensively to build up various molecular rods. Based on this approach, we conducted investigations towards porous polymeric materials. Here we present the results of these studies on the synthesis of 1D polyacetals and porous 3D polyacetals. Through scrambling experiments with 1D acetals we could prove that exchange reactions occur between different building blocks (evidenced by MALDI-TOF mass spectrometry). Based on these results we synthesized porous 3D polyacetals under the same mild conditions.
Modern microscopic techniques following the stochastic motion of labelled tracer particles have uncovered significant deviations from the laws of Brownian motion in a variety of animate and inanimate systems. Such anomalous diffusion can have different physical origins, which can be identified from careful data analysis. In particular, single particle tracking provides the entire trajectory of the traced particle, which allows one to evaluate different observables to quantify the dynamics of the system under observation. We here provide an extensive overview over different popular anomalous diffusion models and their properties. We pay special attention to their ergodic properties, highlighting the fact that in several of these models the long time averaged mean squared displacement shows a distinct disparity to the regular, ensemble averaged mean squared displacement. In these cases, data obtained from time averages cannot be interpreted by the standard theoretical results for the ensemble averages. Here we therefore provide a comparison of the main properties of the time averaged mean squared displacement and its statistical behaviour in terms of the scatter of the amplitudes between the time averages obtained from different trajectories. We especially demonstrate how anomalous dynamics may be identified for systems, which, on first sight, appear to be Brownian. Moreover, we discuss the ergodicity breaking parameters for the different anomalous stochastic processes and showcase the physical origins for the various behaviours. This Perspective is intended as a guidebook for both experimentalists and theorists working on systems, which exhibit anomalous diffusion.
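The amplitude scatter of time averages between individual trajectories mentioned above is commonly condensed into a single ergodicity breaking parameter. A minimal sketch of the standard definition, applied to a set of single-trajectory TAMSD values at one fixed lag time:

```python
# Ergodicity breaking (EB) parameter quantifying the scatter of
# time-averaged MSDs across trajectories at a fixed lag time:
#     EB = <xi^2> - 1,  with  xi = individual TAMSD / mean TAMSD
# (so <xi> = 1 by construction).  EB -> 0 for an ergodic process such
# as Brownian motion as the measurement time grows; EB remains finite
# for weakly non-ergodic processes such as CTRW with diverging mean
# waiting time.
def eb_parameter(tamsds):
    mean = sum(tamsds) / len(tamsds)
    xi = [v / mean for v in tamsds]
    return sum(x * x for x in xi) / len(xi) - 1.0

# Identical time averages -> no scatter -> EB = 0
eb = eb_parameter([2.0, 2.0, 2.0])
```

In practice one evaluates EB as a function of lag and measurement time, which is one of the diagnostics the overview above recommends for distinguishing the different anomalous diffusion models.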
Recently, C K-edge Near Edge X-ray Absorption Fine Structure (NEXAFS) spectra of graphite (HOPG) surfaces were measured for the pristine material and for HOPG treated with either bromine or krypton plasmas (Lippitz et al., Surf. Sci., 2013, 611, L1). Changes of the NEXAFS spectra characteristic for physical (krypton) and/or chemical/physical (bromine) modifications of the surface upon plasma treatment were observed. Their molecular origin, however, remained elusive. In this work we study, by density functional theory, the effects of selected point and line defects as well as chemical modifications on the NEXAFS carbon K-edge spectra of single graphene layers. For Br-treated surfaces, Br 3d X-ray Photoelectron Spectra (XPS) are also simulated using a cluster approach to identify possible chemical modifications. We observe that some of the defects related to plasma treatment lead to characteristic changes of the NEXAFS spectra, similar to those in experiment. Theory thus provides possible microscopic origins for these changes.