In this contribution we present some preliminary results obtained from a SOAR-Goodman optical spectroscopic survey aimed at confirming the OIf* - OIf*/WN nature of a sample of Galactic candidates that were previously identified as massive stars based on near-infrared spectra taken with OSIRIS at SOAR. With only a few such stars known in the Galaxy to date, our study significantly improves the number of known Galactic O2If* stars, as well as almost doubling the number of known members of the Galactic sample of the rare type OIf*/WN.
The total population of Wolf-Rayet (WR) stars in the Galaxy is predicted by models to be as many as ~6000 stars, and yet the number of catalogued WR stars resulting from optical surveys was far lower than this (~200) at the turn of this century. When we began our WR searches using infrared techniques, it was not clear whether the WR number predictions were too optimistic or whether more stars were hidden behind interstellar and circumstellar extinction. During the last decade we pioneered a technique that exploits the near- and mid-infrared continuum colours of individual point sources provided by large-format surveys of the Galaxy, including 2MASS and Spitzer/GLIMPSE, to pierce through the dust and reveal newly discovered WR stars throughout the Galactic Plane. The key to the colour discrimination is the characteristic infrared spectral index produced by the strong winds of WR stars, combined with dust extinction, which places WR stars in a relatively depopulated area of infrared colour-colour diagrams. The use of the Spitzer/GLIMPSE 8µm and, more recently, WISE 22µm fluxes, together with cross-referencing against X-ray measurements in selected Galactic regions, has enabled improved candidate lists that increased our confirmation success rate, achieved via follow-up infrared and optical spectroscopy. To date a total of 102 new WR stars have been found, with many more candidates still available for follow-up. This constitutes an addition of ~16% to the current inventory of 642 Galactic WR stars. In this talk we review our methods and provide some new results and a preliminary review of the stellar and interstellar medium environments of these stars. We provide a roadmap for the future of this search, including statistical modeling, and what we can add to star formation and high-mass star evolution studies.
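To make the selection idea concrete, here is a minimal Python sketch of a colour-colour candidate cut of the kind described above. The bands (J, Ks, [8.0]) match the surveys mentioned, but the cut values, record layout, and source names are illustrative assumptions, not the survey's actual criteria.

```python
# Minimal sketch of broad-band colour-colour candidate selection for
# WR stars. The photometric cuts below are illustrative assumptions.

def is_wr_candidate(j, ks, mag8):
    """Flag a point source whose near/mid-IR colours fall in the
    sparsely populated region occupied by reddened stars with strong
    winds (hypothetical cut values)."""
    j_ks = j - ks        # near-IR colour, driven largely by extinction
    ks_8 = ks - mag8     # mid-IR excess from free-free wind emission
    # Hypothetical selection box: red in both colours, with a
    # wind-driven mid-IR excess beyond what pure dust reddening gives.
    return j_ks > 1.0 and ks_8 > 0.5 and ks_8 > 0.3 * j_ks

# One 2MASS/GLIMPSE-style record (name, J, Ks, [8.0]) per source.
sources = [("cand-001", 12.4, 10.1, 8.9), ("field-002", 11.0, 10.7, 10.6)]
print([n for n, j, ks, m8 in sources if is_wr_candidate(j, ks, m8)])
# -> ['cand-001']
```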
Although we all use the name Wolf-Rayet to refer to specific groups of stars, “Wolf-Rayet” per se is really an astrophysical phenomenon of fast-moving, hot plasma, normally expanding around a hot star. However, expediency demands that we follow established traditions by referring to three specific kinds of WR stars: (1) cWR, “classical” He-burning descendants of massive, O-type stars, presumably all of which pass through a WR stage; (2) WNh, the most massive and luminous hydrogen-rich main-sequence stars with strong winds; and (3) [WR], the central stars of some 15 % of Planetary Nebulae. Wolf-Rayet stars are the epitome of relatively stable stars with the highest mass-loss rates for their kind. It behooves us to understand the what, how and why of this circumstance, along with its manifold and fascinating consequences.
An overview of the known Wolf-Rayet (WR) population of the Milky Way is presented, including a brief overview of historical catalogues and recent advances based on infrared photometric and spectroscopic observations, resulting in the current census of 642 (v1.13 online catalogue). The observed distribution of WR stars is considered with respect to known star clusters, given that ≤20% of WR stars in the disk are located in clusters. WN stars outnumber WC stars at all galactocentric radii, while early-type WC stars are strongly biased against the inner Milky Way. Finally, recent estimates of the global WR population in the Milky Way are reassessed, with 1,200±100 estimated, such that the current census may be 50% complete. A characteristic WR lifetime of 0.25 Myr is inferred for an initial mass threshold of 25 M⊙.
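The quoted completeness follows directly from the census and the population estimate; as a quick worked check:

```latex
% Completeness implied by the numbers above:
\frac{N_{\mathrm{known}}}{N_{\mathrm{est}}} = \frac{642}{1200 \pm 100} \approx 0.49\text{--}0.58,
% i.e. the catalogued sample is roughly half of the predicted population.
```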
Networking knowledge
(2015)
Global citizenship and diversity are well-represented concepts in today’s higher education. Learning outcomes and competencies are designed to sensitize students to the many cultural backgrounds of U.S. learning institutions. Nevertheless, true globality, as represented through diverse discourses and perspectives of the world, still seems neglected in curricula and course assignments. This article explores the possibilities offered through a new shared space in education where different forms of networked knowledge and multifaceted perspectives can build a global platform of exchange in a diverse student population. The universal science concept described by Alexander von Humboldt at the beginning of the 19th century illuminates this intertwined approach to knowledge of the world, which has the potential to positively impact contemporary curricula and course design. Von Humboldt’s writings emphasize inclusion and interplay among cultures and natural phenomena. By inviting our students to be active representatives of diverse discourses, these interconnecting links will become more transparent. In turn, productive forms of knowing about the world may enrich current learning objectives and thereby reflect a true global citizenship as it evolves in a new shared space of education. Keywords: global citizenship, plurality, diverse discourses, multicultural education.
The Franciscans in Cathay
(2015)
The study analyzes the process that leads to the elaboration of the thesis of a continuity between the medieval Asia mission and the New World mission. This effort, undertaken by the Catholic historiography of the mission during the 19th century, is the result of the impulse provided by Alexander von Humboldt’s studies of the discovery of America (Examen critique). The data about the geography of Asia collected by the missionary-travelers working in the territory between Karakorum and Khanbalik during the 13th and 14th centuries reaches Christopher Columbus through the mediation of Roger Bacon, whom Humboldt himself esteems as a true cultural mediator. The conclusion of the article tries to identify the reasons and modalities of the secularization of the missionary concept, i.e. the shift from the ideal of the propagation of the Christian message to a prevailing interest in cartography and topography, transformations arranged by a late medieval historiography that introduces the loca toponomastica into martyrologia.
When Jesus Spoke Yiddish
(2015)
In this paper, I wish to present some evidence from a Yiddish manuscript of the “Toledot Yeshu” which has not yet been the object of research: MS Günzburg 1730, kept in the Russian State Library in Moscow and dated to the 17th century. The manuscript is part of the so-called ‘Herode-tradition’ of the “Toledot Yeshu”. This means that the Yiddish manuscript is connected to the version printed in Hebrew, accompanied by a Latin translation, by the Swiss pastor and theologian Johann Jacob Uldrich (Huldricus, 1683–1731) in Leiden in 1705, bearing the title “Historia Jeschuae Nazareni”. Given the uncertainty about the exact dating of the Yiddish manuscript, a comparison between the Hebrew and the Yiddish can still allow some remarks concerning the characteristics of the Yiddish version and raise some questions about the transmission and the reception of this challenging and intriguing text.
In 1945, Zinovii Shenderovich Tolkatchev (1903–1977), a Soviet artist of Jewish origin, created a striking series of five images entitled “Jesus in Majdanek”. The series was the culmination of Tolkatchev’s intensive preoccupation with the experience he endured as a Red Army soldier taking part in the liberation of the concentration camps Majdanek and Auschwitz. Shocked by the sights he witnessed, he depicted Jesus as an actual camp inmate, wearing a striped uniform marked with every possible defamation sign – the Jewish yellow star, the red triangle of political prisoners, and the individual prison number; the numerical tattoo on his lower arm can also be seen. The different stages of camp life are portrayed as the traditional Passion of Christ. While showing these actual situations, the artist drew upon the well-known European Renaissance paintings canonically depicting Jesus’ suffering. The article places Tolkatchev’s series in a broader cultural and visual context by exploring the development of the ‘historical Jesus’ in 19th-century European thought and Russian realist art, and by examining the impact of the German avant-garde. In doing so, it offers a deeper understanding of the universal message Tolkatchev’s works convey.
Messianic Jews are Jewish individuals who syncretically accept both the messianic character of Jesus and the ritual cultic practices provided by traditional Judaism. The present article examines the emergence of this marginal syncretic movement in contemporary Israel, and maintains that it represents a radical development in the bimillenary history of Jewish-Christian relations. This article offers a general introduction to the notion of Jewish-Christian identity, a brief history of the first group of Messianic Jews in the Land of Israel, the cultural influence and religious syncretism of the Messianic Jews in modern Israel, and, finally, the suggestion that Messianic Judaism may become a new paradigm within the various branches of Judaism.
A lot has been published about the competencies needed by students in the 21st century (Ravenscroft et al., 2012). However, equally important are the competencies needed by educators in the new era of digital education. We review the key competencies for educators in light of the new methods of teaching and learning proposed by Massive Open Online Courses (MOOCs) and their on-campus counterparts, Small Private Online Courses (SPOCs).
Participants of this workshop will be confronted with examples of the considerable inconsistency of global Informatics education at lower secondary level. More importantly, they are invited to contribute actively to this issue in the form of short case studies from their countries. Until now, very few countries have been successful in implementing Informatics or Computing at primary and lower secondary level. The spectrum from digital literacy to informatics, particularly as a discipline in its own right, has not really achieved a breakthrough and seems to be underrepresented for these age groups. The goal of this workshop is not only to discuss the anamnesis and diagnosis of this fragmented field, but also to discuss and suggest viable forms of therapy in the form of setting educational standards. Making good practices in some countries visible and comparing successful approaches are rewarding tasks for this workshop. Discussing and defining common educational standards at a transcontinental level for the age group of 14- to 15-year-old students in a readable, assessable and acceptable form should keep the participants of this workshop active beyond the limited time of the workshop itself.
Let’s talk about CS!
(2015)
Communicating about a science is the most important key competence in education for any science. Without communication we cannot teach, so teachers should reflect carefully on the language they use in class. But the language students and teachers use to communicate about their CS courses is very heterogeneous, inconsistent and deeply influenced by tool names. There is a considerable lack of research and discussion in CS education regarding terminology and the role of concepts and tools in our science. We do not have an agreed, consistent set of terminology that is helpful for learning our science. This makes it nearly impossible to do research on CS competencies as long as we have not agreed on the names we use to describe them. This workshop intends to provide room for discussion and first ideas for future research in this field.
The poster and abstract describe the importance of teaching information security in school. After a short description of information security and its important aspects, I show how information security fits into different guidelines and models for computer science education and that it is therefore one of the key competencies. Afterwards, I present a rough overview of the teaching of information security in Austria.
Current curricular trends require teachers in Baden-Wuerttemberg (Germany) to integrate Computer Science (CS) into traditional subjects, such as Physical Science. However, concrete guidelines are missing. To fill this gap, we outline an approach in which a microcontroller is used to perform and evaluate measurements in the Physical Science classroom. Using the open-source Arduino platform, we expect students to acquire and develop both CS and Physical Science competencies by using a self-programmed microcontroller. In addition to this combined development of competencies in Physical Science and CS, the subject matter will be embedded in suitable contexts and learning environments, such as weather and climate.
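As a sketch of how the measurement side of such a unit might look, the following Python snippet (using the pyserial package) logs readings streamed by a student-programmed Arduino. The port name, baud rate, sampling scheme, and one-value-per-line output format are assumptions for illustration, not part of the approach described above.

```python
# Hedged sketch: an Arduino programmed by the students streams sensor
# readings over USB; this script records them for evaluation in class.
import serial  # pyserial package

PORT = "/dev/ttyACM0"  # hypothetical port name; e.g. COM3 on Windows

with serial.Serial(PORT, 9600, timeout=2) as arduino:
    readings = []
    for _ in range(60):  # e.g. one minute of 1 Hz temperature samples
        line = arduino.readline().decode("ascii", errors="ignore").strip()
        if line:
            readings.append(float(line))  # assumes "21.5\n" style output
    if readings:
        print(f"n={len(readings)}, mean={sum(readings)/len(readings):.2f}")
```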
Think logarithmically!
(2015)
We discuss here a number of algorithmic topics which we use in our teaching and learning of mathematics and informatics to illustrate and document the power of the logarithm in designing very efficient algorithms and computations – logarithmic thinking is one of the most important key competencies for solving real-world practical problems. We also demonstrate how to introduce the logarithm independently of mathematical formalism, using a conceptual model of reducing a problem's size by at least half. It is quite surprising that the idea which leads to the logarithm is present in Euclid's algorithm, described almost 2000 years before John Napier invented logarithms.
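The halving model lends itself to a compact demonstration. The following Python sketch counts halving steps, which is exactly the (floor of the) base-2 logarithm, and counts the steps of Euclid's algorithm, whose step count also grows logarithmically:

```python
# Counting 'reduce by at least half' steps gives the logarithm without
# any mathematical formalism.

def halving_steps(n):
    """How often can n be halved before reaching 1? (= floor(log2 n))"""
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

def gcd_steps(a, b):
    """Euclid's algorithm; every two remainder steps at least halve
    the operands, so the step count is logarithmic as well."""
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return a, steps

print(halving_steps(1_000_000))   # -> 19, about log2(10**6)
print(gcd_steps(832040, 514229))  # consecutive Fibonacci pair -> (1, 28)
```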
A project involving the composition of a number of pieces of music by public participants revealed levels of engagement with, and mastery of, complex music technologies by a number of secondary student volunteers. This paper reports briefly on some initial findings of that project and seeks to illuminate an understanding of computational thinking across the curriculum.
Mentoring in a Digital World
(2015)
This paper focuses on the results of the evaluation of the first pilot of an e-mentoring unit designed by the Hands-On ICT consortium, funded by the EU LLL programme. The overall aim of this two-year activity is to investigate the value for professional learning of Massive Open Online Courses (MOOCs) and Community Open Online Courses (COOCs) in the context of a ‘community of practice’. Three units in the first pilot covered aspects of using digital technologies to develop creative thinking skills. The findings in this paper relate to the fourth unit, about e-mentoring, a skill that was important to delivering the course content in the other three units. Findings about the e-mentoring unit included: the students’ request for detailed profiles so that participants can get to know each other; and the need to reconcile the different interpretations of e-mentoring held by the participants when the course begins. The evaluators concluded that the major issues were that not all professional learners would self-organise and network, and few would wish to mentor their colleagues voluntarily. Therefore, the e-mentoring issues will need careful consideration in pilots two and three to identify how e-mentoring will be organised.
The study reported in this paper involved the use of specific in-class exercises with a Personal Response System (PRS). These exercises were designed with two goals: to enhance students’ capabilities of tracing a given code and of explaining a given code in natural language with some abstraction. The paper presents evidence from the actual use of the PRS along with students’ subjective impressions of both the use of the PRS and the special exercises. The conclusions from the findings are followed by a short discussion of the benefits of PRS-based mental processing exercises for learning programming and beyond.
In this paper we describe the current state of our research project concerning computer science teachers’ knowledge of students’ cognition. We conducted a comprehensive analysis of textbooks, curricula and other resources which give teachers guidance in formulating assignments. In comparison to other subjects, there are only a few concepts and strategies taught to prospective computer science teachers at university. We summarize them and give an overview of our empirical approach to measuring this knowledge.
How does the Implementation of a Literacy Learning Tool Kit influence Literacy Skill Acquisition?
(2015)
This study aimed at following how teachers transfer skills into results while using ABRA literacy software. This was done in the second part of the pilot study, whose aim was to provide equity to control-group teachers and students by exposing them to the ABRACADABRA treatment after the end of phase 1. This opportunity was used to follow the phase 1 teachers to see how the skills learned were being transformed into results. A standard three-day initial training and planning session on how to use ABRA to teach literacy was held at the beginning of each phase for ABRA teachers (phase 1 experimental and phase 2 delayed ABRA). Teachers were provided with teaching materials, including a tentative ABRA curriculum developed to align with the Kenyan English Language requirements for year 1 and 3 students. Results showed that although there was no significant difference between the groups in vocabulary-related subscales, which include word reading and meaning as well as sentence comprehension, students in ABRACADABRA classes improved their scores at a significantly higher rate than students in control classes in comprehension-related scores. An average student in the ABRACADABRA group improved by 12 and 16 percentile points respectively compared to their counterparts in the control group.
The Technology Proficiency Self-Assessment (TPSA) questionnaire has been used for 15 years in the USA and other nations as a self-efficacy measure for proficiencies fundamental to effective technology integration in the classroom learning environment. Internal consistency reliabilities for each of the five-item scales have typically ranged from .73 to .88 for preservice or inservice technology-using teachers. Due to changing technologies used in education, researchers sought to renovate partially obsolete items and extend self-efficacy assessment to new areas, such as social media and mobile learning. Analysis of 2014 data gathered on a new, 34-item version of the TPSA indicates that the four established areas of email, World Wide Web (WWW), integrated applications, and teaching with technology continue to form consistent scales, with reliabilities ranging from .81 to .93, while the 14 new items gathered to represent emerging technologies and media separate into two scales, each with internal consistency reliabilities greater than .9. The renovated TPSA is deemed worthy of continued use in the teaching-with-technology context.
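For illustration, an internal-consistency reliability of the kind reported here (Cronbach's alpha) can be computed from per-item responses as below. The five-item Likert data in this Python sketch are invented, not TPSA results.

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) /
# variance of total scores). Toy data for six respondents.

def cronbach_alpha(items):
    """items: one list of scores per item, aligned by respondent."""
    k, n = len(items), len(items[0])
    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    totals = [sum(item[p] for item in items) for p in range(n)]
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))

scale = [  # five hypothetical 5-point items, six teachers
    [4, 5, 3, 4, 5, 4],
    [4, 4, 3, 5, 5, 4],
    [5, 5, 2, 4, 4, 3],
    [3, 4, 3, 4, 5, 4],
    [4, 5, 3, 5, 4, 4],
]
print(f"alpha = {cronbach_alpha(scale):.2f}")  # -> alpha = 0.86
```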
Computational Thinking
(2015)
Digital technology has radically changed the way people work in industry, finance, services, media and commerce. Informatics has contributed to the scientific and technological development of our society in general and to the digital revolution in particular. Computational thinking is the term indicating the key ideas of this discipline that might be included in the key competencies underlying the curriculum of compulsory education. The educational potential of informatics has a history dating back to the sixties. In this article, we briefly revisit this history looking for lessons learned. In particular, we focus on experiences of teaching and learning programming. However, computational thinking is more than coding. It is a way of thinking and of practicing interactive dynamic modeling with computers. We advocate that learners can practice computational thinking in playful contexts where they can develop personal projects, for example building videogames and/or robots, and share and discuss their constructions with others. In our view, this approach allows an integration of computational thinking into the K-12 curriculum across disciplines.
How Things Work
(2015)
Recognizing and defining functionality is a key competence adopted in all kinds of programming projects. This study investigates how far students without specific informatics training are able to identify and verbalize functions and parameters. It presents observations from classroom activities on functional modelling in high school chemistry lessons with a total of 154 students. Finally, it discusses the potential of functional modelling to improve the comprehension of scientific content.
This paper originated from discussions about the need for important changes in the curriculum for Computing, including two focus group meetings at IFIP conferences over the last two years. The paper examines how recent developments in curriculum, together with insights from curriculum thinking in other subject areas, especially mathematics and science, can inform curriculum design for Computing. The analysis presented in the paper provides insights into the complexity of curriculum design, as well as identifying important constraints and considerations for the ongoing development of a vision and framework for a Computing curriculum.
This article presents a discussion of the key competencies in informatics and ICT viewed from the philosophical foundation presented by Martha Nussbaum, known as the ‘ten central capabilities’. Firstly, the outline of ‘The Capability Approach’, presented by Amartya Sen and Nussbaum as a theoretical framework for assessing the state of social welfare, is explained. Secondly, the body of Nussbaum’s ten central capabilities and the reasons for applying it as the basis of discussion are shown. Thirdly, the relationship between the concepts of ‘capability’ and ‘competency’ is discussed. After that, the author’s assumption of the key competencies in informatics and ICT, derived from the examination of Nussbaum’s ten capabilities, is presented.
The objectives of this study were to examine (a) the effect of dynamic assessment (DA) in a 3D Immersive Virtual Reality (IVR) environment, as compared with computerized 2D and non-computerized (NC) situations, on cognitive modifiability, and (b) the transfer effects of these conditions on more difficult problem solving administered two weeks later in a non-computerized environment. A sample of 117 children aged 6:6–9:0 years were randomly assigned to three experimental groups of DA conditions: 3D, 2D, and NC, and one control group (C). All groups received the pre- and post-teaching Analogies subtest of the Cognitive Modifiability Battery (CMB-AN). The experimental groups received a teaching phase in conditions similar to the pre- and post-teaching phases. The findings showed that cognitive modifiability in a 3D IVR was distinctly higher than in the two other experimental groups (2D computer group and NC group). It was also found that the 3D group showed significantly higher performance on transfer problems than the 2D and NC groups.
BugHunt
(2015)
Competencies related to operating systems and computer security are usually taught systematically. In this paper we present a different approach, in which students have to remove virus-like behaviour from their respective computers, induced beforehand by software developed for this purpose. They have to develop appropriate problem-solving strategies and thereby explore essential elements of the operating system. The approach was implemented exemplarily in two computer science courses at a regional general upper secondary school and generated great motivation and interest among the participating students.
In the project MoKoM, which was funded by the German Research Foundation (DFG) from 2008 to 2012, a test instrument measuring students’ competences in computer science was developed. This paper presents the results of an expert rating of the levels of students’ competences carried out for the items of the instrument. First, we describe the difficulty-relevant features that were used for the evaluation. These were deduced from computer science, psychological and didactical findings and resources. The potentials and desiderata of this research method are then discussed. Finally, we present our conclusions on the results and give an outlook on further steps.
The growing impact of globalisation and the development of a ‘knowledge society’ have led many to argue that 21st century skills are essential for life in twenty-first century society and that ICT is central to their development. This paper describes how 21st century skills, in particular digital literacy, critical thinking, creativity, communication and collaboration skills, have been conceptualised and embedded in the resources developed for teachers in iTEC, a four-year European project. The effectiveness of this approach is considered in light of the data collected through the evaluation of the pilots, which considers both the potential benefits of using technology to support the development of 21st century skills and the challenges of doing so. Finally, the paper discusses the learning support systems required in order to transform pedagogies and embed 21st century skills. It is argued that support is required in standards and assessment; curriculum and instruction; professional development; and learning environments.
This paper discusses results from a small-scale research study, together with some recently published research into student perceptions of ICT for learning in schools, to consider relevant skills that do not currently appear to be taught. The paper concludes by raising three issues relating to learning with and through ICT that need to be addressed in school curricula and classroom teaching.
The Student Learning Ecology
(2015)
Educational research on social media has shown that students use it for socialisation, personal communication, and informal learning. Recent studies have argued that students to some degree use social media to carry out formal schoolwork. This article gives an explorative account of how a small sample of Norwegian high school students use social media to self-organise formal schoolwork. This user pattern can be called a “student learning ecology”, which is a user perspective on how participating students gain access to learning resources.
Teaching Data Management
(2015)
Data management is a central topic in computer science as well as in computer science education. In recent years, this topic has been changing tremendously, as its impact on daily life becomes increasingly visible. Nowadays, everyone not only needs to manage data of various kinds, but also continuously generates large amounts of data. In addition, Big Data and data analysis are intensively discussed in public dialogue because of their influence on society. For understanding such discussions and for being able to participate in them, fundamental knowledge of data management is necessary. In particular, being aware of the threats accompanying the ability to analyze large amounts of data in nearly real time becomes increasingly important. This raises the question of which key competencies are necessary for daily dealings with data and data management. In this paper, we first point out the importance of data management and of Big Data in daily life. On this basis, we analyze the key competencies everyone needs concerning data management in order to handle data properly in daily life. Afterwards, we discuss the impact of these changes in data management on computer science education, and in particular on database education.
Social networks are currently at the forefront of tools that lend themselves to Personal Learning Environments (PLEs). This study aimed to observe how students perceived PLEs, what they believed were the integral components of social presence when using Facebook as part of a PLE, and to describe students’ preferences for types of interactions when using Facebook as part of their PLE. The study used mixed methods to analyze the perceptions of graduate and undergraduate students on the use of social networks, more specifically Facebook, as a learning tool. Fifty surveys were returned, representing a 65 % response rate. Survey questions included both closed and open-ended questions. Findings suggested that even though students rated themselves relatively well in having the requisite technology skills, and 94 % of students used Facebook primarily for social purposes, they were hesitant to migrate these skills to academic use because of concerns about privacy, because they believed that other platforms could fulfil the same purpose, and because they did not see the validity of using Facebook to establish social presence. What lies at odds with these beliefs is that, when asked to identify strategies in Facebook that enabled social presence to occur in academic work, the majority of students identified strategies in five categories that lead to the establishment of social presence on Facebook during their coursework.
The paper discusses the issue of supporting informatics (computer science) education through competitions for lower and upper secondary school students (8–19 years old). Competitions play an important role for learners as a source of inspiration, innovation, and attraction. Having run contests in informatics for school students for many years, we have noticed that students consider the contest experience very engaging and exciting, as well as a learning experience. A contest is an excellent instrument for involving students in problem-solving activities. An overview of the infrastructure and development of an informatics contest from the international level to the national one (the Bebras contest on informatics and computer fluency, originated in Lithuania) is presented. The performance of Bebras contests in 23 countries during the last 10 years has shown an unexpected and unusually high acceptance by school students and teachers. Many thousands of students have participated and received valuable input in addition to their regular informatics lectures at school. In the paper, the main attention is paid to the developed tasks and to the analysis of students’ task-solving results in Lithuania.
The paper presents two approaches to the development of a Computer Science Competence Model for the needs of curriculum development and evaluation in Higher Education. A normative-theoretical approach is based on the AKT and ACM/IEEE curricula and will be used within the recommendations of the German Informatics Society (GI) for the design of CS curricula. An empirically oriented approach refines the categories of the first one with regard to specific subject areas by conducting content analysis on the CS curricula of important universities from several countries. The refined model will be used for the needs of students’ e-assessment and subsequent affirmative action by the CS departments.
Regardless of what is intended by government curriculum specifications and advised by educational experts, the competencies taught and learned in and out of classrooms can vary considerably. In this paper, we discuss in particular how we can investigate the perceptions that individual teachers have of competencies in ICT, and how these and other factors may influence students’ learning. We report case study research which identifies contradictions within the teaching of ICT competencies as an activity system, highlighting issues concerning the object of the curriculum, the roles of the participants and the school cultures. In one particular case, contradictions in the learning objectives between higher-order skills and the use of application tools have been resolved by a change in the teacher’s perceptions, which has not led to changes in other aspects of the activity system. We look forward to further investigation of the effects of these contradictions in other case studies and on forthcoming curriculum change.
As a result of the Bologna reform of educational systems in Europe, outcome orientation of learning processes, competence-oriented descriptions of curricula and competence-oriented assessment procedures became standard in Computer Science Education (CSE) as well. The following keynote addresses important issues in shaping a CSE competence model, especially in the area of informatics system comprehension and object-oriented modelling. The objectives and research methodology of the project MoKoM (Modelling and Measurement of Competences in CSE) are explained. The CSE competence model was first derived from theoretical concepts and then empirically examined and refined using expert interviews. Furthermore, the paper depicts the development and examination of a competence measurement instrument, which was derived from the competence model. To this end, the instrument was applied to a large sample of students at the upper secondary (Gymnasium) level. Subsequently, efforts to develop a competence level model, based on the retrieved empirical results and on expert ratings, are presented. Finally, further demands on research on competence modelling in CSE are outlined.
Computational thinking is a fundamental skill set that is learned by studying Informatics and ICT. We argue that its core ideas can be introduced in an inspiring and integrated way to both teachers and students using fun and contextually rich cs4fn ‘Computer Science for Fun’ stories combined with ‘unplugged’ activities including games and magic tricks. We also argue that understanding people is an important part of computational thinking. Computational thinking can be fun for everyone when taught in kinaesthetic ways away from technology.
Reading is a complex cognitive task based on the analysis of visual stimuli. Due to the physiology of the eye, only a small number of letters around the fixation position can be extracted with high visual acuity, while the visibility of words and letters outside this so-called foveal region quickly drops with increasing eccentricity. As a consequence, saccadic eye movements are needed to repeatedly shift the fovea to new words for visual word identification during reading. Moreover, even within a foveated word, fixation positions near the word center are superior to other fixation positions for efficient word recognition (O’Regan, 1981; Brysbaert, Vitu, and Schroyens, 1996). Thus, most reading theories assume that readers aim specifically at word centers during reading (for a review see Reichle, Rayner, & Pollatsek, 2003). However, saccades’ landing positions within words during reading are in fact systematically modulated by the distance of the launch site from the word center (McConkie, Kerr, Reddix, & Zola, 1988). In general, it is largely unknown how readers identify the center of upcoming target words, and there is no computational model of the sensorimotor translation of the decision for a target word into spatial word-center coordinates. Here we present a series of three studies which aim at advancing the current knowledge about the computation of saccade target coordinates during saccade planning in reading. Based on a large corpus analysis, we first identified word skipping as a further factor, beyond launch-site distance, with a likewise systematic and surprisingly large effect on within-word landing positions. Most importantly, we found that the end points of saccades following a skipped word are shifted two or more letters to the left compared to one-step saccades (i.e., from word N to word N+1) with equal launch-site distances. We then present evidence from a single-saccade experiment suggesting that the word-skipping effect results from highly automatic low-level perceptual processes, which are essentially based on the localization of blank spaces between words. Finally, in the third part, we present a Bayesian model of the computation of the word center from primary sensory measurements of inter-word spaces. We demonstrate that the model simultaneously accounts for launch-site and saccade-type contingent modulations of within-word landing positions in reading. Our results show that the spatial saccade target during reading is the result of complex estimations of the word center based on incomplete sensory information, which also leads to specific systematic deviations of saccades’ landing positions from the word center. Our results have important implications for current reading models and experimental reading research.
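To convey the flavour of the Bayesian account, here is a toy Python sketch in which the word center is estimated as a precision-weighted combination of a noisy midpoint measurement of the delimiting spaces and a prior. All numerical values are illustrative assumptions, not the thesis's fitted parameters.

```python
# Toy Bayesian estimate of a word's center from noisy sensory
# measurements of the two blank spaces delimiting it.

def posterior_center(left, right, sigma_meas, prior_mu, prior_sigma):
    """Gaussian prior x Gaussian likelihood -> precision-weighted mean.
    The likelihood's center estimate is the midpoint of the measured
    space locations; its noise would grow with retinal eccentricity."""
    likelihood_mu = (left + right) / 2
    lik_prec = 1 / sigma_meas ** 2
    prior_prec = 1 / prior_sigma ** 2
    return (likelihood_mu * lik_prec + prior_mu * prior_prec) / (
        lik_prec + prior_prec)

# Word spanning letter positions 10..16, true center 13. A far launch
# site means noisy space estimates, so the prior pulls the computed
# target leftward, mimicking systematic landing-position shifts.
print(posterior_center(left=9.4, right=16.8, sigma_meas=2.0,
                       prior_mu=11.0, prior_sigma=3.0))  # ~12.45
```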
Planetary research is often user-based and requires considerable skill, time, and effort. Unfortunately, self-defined boundary conditions, definitions, and rules are often not documented or are not easy to comprehend due to the complexity of the research. This makes a comparison with other studies, or an extension of already existing research, complicated. Comparisons are often distorted, because results rely on different, not well defined, or even unknown boundary conditions. The purpose of this research is to develop a standardized analysis method for planetary surfaces which is adaptable to several research topics. The method provides a consistent quality of results. This also includes achieving reliable and comparable results and reducing the time and effort of conducting such studies. A standardized analysis method is provided by automated analysis tools that focus on statistical parameters. Specific key parameters and boundary conditions are defined for the tool application. The analysis relies on a database in which all key parameters are stored. These databases can be easily updated and adapted to various research questions. This increases the flexibility, reproducibility, and comparability of the research. However, the quality of the database and the reliability of the definitions directly influence the results. To ensure a high quality of results, the rules and definitions need to be well defined and based on previously conducted case studies. The tools then produce parameters obtained by defined geostatistical techniques (measurements, calculations, classifications). The idea of an automated statistical analysis is tested to demonstrate the benefits but also the potential problems of this method. In this study, I adapt automated tools for floor-fractured craters (FFCs) on Mars. These impact craters show a variety of surface features, occur in different Martian environments, and have different fracturing origins. They provide a complex morphological and geological field of application. 433 FFCs are classified by the analysis tools according to their fracturing process. Spatial data, environmental context, and crater interior data are analyzed to distinguish between the processes involved in floor fracturing. Related geologic processes, such as glacial and fluvial activity, are too similar to be separately classified by the automated tools, so glacial and fluvial fracturing processes are merged for the classification. The automated tools provide probability values for each origin model. To guarantee the quality and reliability of the results, the classification tools need to achieve an origin probability above 50 %. This analysis method shows that 15 % of the FFCs are fractured by intrusive volcanism, 20 % by tectonic activity, and 43 % by water- and ice-related processes. In total, 75 % of the FFCs are classified to an origin type. The remaining cases can be explained by a combination of origin models, by superposition or erosion of key parameters, or by an unknown fracturing model; these features have to be analyzed manually in detail. Another possibility would be the improvement of the key parameters and rules for the classification. This research shows that it is possible to conduct an automated statistical analysis of morphologic and geologic features based on analysis tools. The analysis tools provide additional information to the user and are therefore considered assistance systems.
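As an illustration of the classification step, the following Python sketch assigns an origin only when the best model's probability exceeds the 50 % threshold mentioned above. The parameter names and weights are invented placeholders, not the actual rules from the key-parameter database.

```python
# Schematic rule-based origin classification for one crater record.
THRESHOLD = 0.5

def classify(crater):
    """crater: dict of (hypothetical) key parameters, each in [0, 1]."""
    scores = {
        "intrusive volcanism": 0.7 * crater["radial_fracture_ratio"]
                               + 0.3 * crater["nearby_volcanic_units"],
        "tectonic activity":   0.8 * crater["regional_fault_density"],
        "water & ice related": 0.6 * crater["fluvial_glacial_features"]
                               + 0.4 * crater["latitude_ice_indicator"],
    }
    origin, p = max(scores.items(), key=lambda kv: kv[1])
    return (origin, p) if p > THRESHOLD else ("unclassified", p)

crater = {"radial_fracture_ratio": 0.2, "nearby_volcanic_units": 0.1,
          "regional_fault_density": 0.3,
          "fluvial_glacial_features": 0.9, "latitude_ice_indicator": 0.8}
print(classify(crater))  # -> ('water & ice related', ~0.86)
```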
User-centered design processes are the first choice when new interactive systems or services are developed to address real customer needs and provide a good user experience. Common tools for collecting user research data, conducting brainstorming sessions, or sketching ideas are whiteboards and sticky notes. They are ubiquitously available, and no technical or domain knowledge is necessary to use them. However, traditional pen-and-paper tools fall short when it comes to saving the content and sharing it with others who cannot be in the same location. They also lack further digital advantages such as searching or sorting content. Although research on digital whiteboard and sticky-note applications has been conducted for over 20 years, these tools are not widely adopted in company contexts. While many research prototypes exist, they have not been used for an extended period of time in a real-world context. The goal of this thesis is to investigate the enablers of and obstacles to the adoption of digital whiteboard systems. As an instrument for different studies, we developed the Tele-Board software system for collaborative creative work. Based on interviews, observations, and findings from previous research, we tried to transfer the analog way of working to the digital world. Being a software system, Tele-Board can be used with a variety of hardware and does not depend on special devices. This feature became one of the main factors for adoption on a larger scale. In this thesis, I present three studies on the use of Tele-Board with different user groups and foci. I use a combination of research methods (laboratory case studies and data from field research) with the overall goal of finding out when a digital whiteboard system is used and in which cases it is not. Not surprisingly, the system is used and accepted if a user sees a main benefit that neither analog tools nor other applications can offer. However, I found that these perceived benefits are very different for each user and usage context. If a tool provides possibilities to use it in different ways and with different equipment, the chances of its adoption by a larger group increase. Tele-Board has now been in use for over 1.5 years in a global IT company in at least five countries, with a constantly growing user base. Its use, advantages, and disadvantages are described based on 42 interviews and usage statistics from server logs. Through these insights and findings from laboratory case studies, I present a detailed analysis of digital whiteboard use in different contexts, with design implications for future systems.
Transcription factors (TFs) are ubiquitous gene expression regulators and play essential roles in almost all biological processes. This Ph.D. project is primarily focused on the functional characterisation of MYB112, a member of the R2R3-MYB TF family from the model plant Arabidopsis thaliana. This gene was selected due to its increased expression during senescence, based on previous qRT-PCR expression profiling experiments of 1880 TFs in Arabidopsis leaves at three developmental stages (15 mm leaf, 30 mm leaf and 20% yellowing leaf). MYB112 promoter-GUS fusion lines were generated to further investigate the expression pattern of MYB112. Employing transgenic approaches in combination with metabolomics and transcriptomics, we demonstrate that MYB112 plays a major role in the regulation of plant flavonoid metabolism. We report enhanced and impaired anthocyanin accumulation in MYB112 overexpressors and MYB112-deficient mutants, respectively. Expression profiling reveals that MYB112 acts as a positive regulator of the transcription factor PAP1, leading to increased anthocyanin biosynthesis, and as a negative regulator of MYB12 and MYB111, which both control flavonol biosynthesis. We also identify MYB112 early responsive genes using a combination of several approaches, including gene expression profiling (Affymetrix ATH1 microarrays and qRT-PCR) and transactivation assays in leaf mesophyll cell protoplasts. We show that MYB112 binds to an 8-bp DNA fragment containing the core sequence (A/T/G)(A/C)CC(A/T)(A/G/T)(A/C)(T/C). By electrophoretic mobility shift assay (EMSA) and chromatin immunoprecipitation coupled to qPCR (ChIP-qPCR), we demonstrate that MYB112 binds in vitro and in vivo to the MYB7 and MYB32 promoters, revealing them as direct downstream target genes. MYB TFs were previously reported to play an important role in controlling flavonoid biosynthesis in plants. Many factors acting upstream of the anthocyanin biosynthesis pathway show enhanced expression levels during nitrogen limitation or elevated sucrose content. In addition to these conditions, other environmental parameters including salinity or high light stress may trigger anthocyanin accumulation. In contrast to several other MYB TFs affecting anthocyanin biosynthesis pathway genes, MYB112 expression is not controlled by nitrogen limitation or carbon excess, but rather is stimulated by salinity and high light stress. Thus, MYB112 constitutes a previously uncharacterised regulatory factor that modifies anthocyanin accumulation under conditions of abiotic stress.
Poly(A) Polymerase 1 (PAPS1) influences organ size and pathogen response in Arabidopsis thaliana
(2014)
Polyadenylation of pre-mRNAs is critical for efficient nuclear export, stability, and translation of the mature mRNAs, and thus for gene expression. The bulk of pre-mRNAs are processed by canonical nuclear poly(A) polymerase (PAPS). Both vertebrate and higher-plant genomes encode more than one isoform of this enzyme, and these are coexpressed in different tissues. However, in neither case is it known whether the isoforms fulfill different functions or polyadenylate distinct subsets of pre-mRNAs. This thesis shows that the three canonical nuclear PAPS isoforms in Arabidopsis are functionally specialized owing to their evolutionarily divergent C-terminal domains. A moderate loss-of-function mutant in PAPS1 leads to an increase in floral organ size, whereas leaf size is reduced. A strong loss-of-function mutation causes a male gametophytic defect, whereas a weak allele leads to reduced leaf growth. By contrast, plants lacking both PAPS2 and PAPS4 function are viable with wild-type leaf growth. Polyadenylation of SMALL AUXIN UP RNA (SAUR) mRNAs depends specifically on PAPS1 function. The resulting reduction in SAUR activity in paps1 mutants contributes to their reduced leaf growth, providing a causal link between polyadenylation of specific pre-mRNAs by a particular PAPS isoform and plant growth. Additionally, the opposite effects of PAPS1 on leaf and flower growth reflect the different identities of these organs. The overgrowth of paps1 mutant petals is due to increased recruitment of founder cells into early organ primordia, whereas the reduced leaf size is due to an ectopic pathogen response. This constitutive immune response leads to increased resistance to the biotrophic oomycete Hyaloperonospora arabidopsidis and reflects activation of the salicylic acid-independent signalling pathway downstream of ENHANCED DISEASE SUSCEPTIBILITY1 (EDS1)/PHYTOALEXIN DEFICIENT4 (PAD4). Immune responses are accompanied by intracellular redox changes. Consistent with this, the redox status of the chloroplast is altered in paps1-1 mutants. The molecular effects of the paps1-1 mutation were analysed using an RNA sequencing approach that distinguishes between long- and short-tailed mRNAs. The results shown here suggest the existence of an additional layer of regulation in plant, and possibly vertebrate, gene expression, whereby the relative activities of canonical nuclear PAPS isoforms control de novo synthesized poly(A) tail length and hence the expression of specific subsets of mRNAs.
Mathematical modeling of biological systems is a powerful tool for systematically investigating the functions of biological processes and their relationship with the environment. To obtain accurate and biologically interpretable predictions, a modeling framework has to be devised whose assumptions best approximate the examined scenario and which copes with the trade-off in the complexity of the underlying mathematical description: attention to detail versus high coverage. Correspondingly, the system can be examined in detail on a smaller scale or in a simplified manner on a larger scale. In this thesis, the role of photosynthesis and its related biochemical processes in the context of plant metabolism was dissected by employing modeling approaches ranging from kinetic to stoichiometric models. The Calvin-Benson cycle, as the primary pathway of carbon fixation in C3 plants, is the initial step for producing starch and sucrose, necessary for plant growth. Based on an integrative analysis for model ranking applied to the largest compendium of (kinetic) models for the Calvin-Benson cycle, those suitable for the development of metabolic engineering strategies were identified. Driven by the question of why starch rather than sucrose is the predominant transitory carbon storage in higher plants, the metabolic costs of their synthesis were examined. The incorporation of the maintenance costs of the involved enzymes provided model-based support for the preference of starch as transitory carbon storage, exploiting only the stoichiometry of the synthesis pathways. Many photosynthetic organisms have to cope with processes which compete with carbon fixation, such as photorespiration, whose impact on plant metabolism is still controversial. A systematic model-oriented review provided a detailed assessment of the role of this pathway in inhibiting the rate of carbon fixation, bridging carbon and nitrogen metabolism, shaping C1 metabolism, and influencing redox signal transduction. The demand for understanding photosynthesis in its metabolic context calls for the examination of the related processes of primary carbon metabolism. To this end, the Arabidopsis core model was assembled via a bottom-up approach. This large-scale model can be used to simulate photoautotrophic biomass production, as an indicator of plant growth, under so-called optimal, carbon-limiting and nitrogen-limiting growth conditions. Finally, the introduced model was employed to investigate the effects of the environment, in particular of nitrogen, carbon and energy sources, on metabolic behavior. This resulted in a purely stoichiometry-based explanation for the experimental evidence of preferred simultaneous acquisition of nitrogen in both forms, as nitrate and ammonium, for optimal growth in various plant species. The findings presented in this thesis provide new insights into plant systems' behavior, further support existing opinions for which mounting experimental evidence exists, and posit novel hypotheses for further directed large-scale experiments.
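To illustrate the stoichiometric style of argument, here is a toy flux-balance sketch in Python using scipy: a "biomass" flux is maximised subject to steady-state mass balance S·v = 0 and flux bounds. The three-reaction network is an invented miniature, not the Arabidopsis core model.

```python
# Toy flux-balance analysis: maximise biomass flux subject to S v = 0.
import numpy as np
from scipy.optimize import linprog

# Reactions: R1 uptake -> A, R2 A -> B, R3 B -> biomass
S = np.array([[1, -1,  0],    # mass balance of metabolite A
              [0,  1, -1]])   # mass balance of metabolite B
c = [0, 0, -1]                # linprog minimises, so negate biomass flux
bounds = [(0, 10), (0, None), (0, None)]  # uptake capped at 10 units

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)  # -> [10. 10. 10.]: growth is limited by the uptake bound
```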
Mars is one of the best candidates among planetary bodies for supporting life. The presence of water in the form of ice and atmospheric vapour, together with the availability of biogenic elements and energy, are indicators of the possibility of hosting life as we know it. The occurrence of permanently frozen ground – permafrost – is a common phenomenon on Mars, and it shows multiple morphological analogies with terrestrial permafrost. Despite the extremely inhospitable conditions, highly diverse microbial communities inhabit terrestrial permafrost in large numbers. Among these are methanogenic archaea, which are anaerobic chemotrophic microorganisms that meet many of the metabolic and physiological requirements for survival in the martian subsurface. Moreover, methanogens from Siberian permafrost are extremely resistant against different types of physiological stresses as well as simulated martian thermo-physical and subsurface conditions, making them promising model organisms for potential life on Mars. The main aims of this investigation are to assess the survival of methanogenic archaea under Mars conditions, focusing on methanogens from Siberian permafrost, and to characterize their biosignatures by means of Raman spectroscopy, a powerful technology for microbial identification that will be used in the ExoMars mission. For this purpose, methanogens from Siberian permafrost and non-permafrost habitats were subjected to simulated martian desiccation by exposure to an ultra-low subfreezing temperature (−80 °C) and to Mars regolith (S-MRS and P-MRS) and atmospheric analogues. They were also exposed to different concentrations of perchlorate, a strong oxidant found in martian soils. Moreover, the biosignatures of methanogens were characterized at the single-cell level using confocal Raman microspectroscopy (CRM). The results showed survival and methane production in all methanogenic strains under simulated martian desiccation. After exposure to subfreezing temperatures, Siberian permafrost strains had a faster metabolic recovery, whereas the membranes of non-permafrost methanogens remained intact to a greater extent. The strain Methanosarcina soligelidi SMA-21 from Siberian permafrost showed significantly higher methane production rates than all other strains after exposure to martian soil and atmospheric analogues, and all strains survived the presence of perchlorate at the concentration found on Mars. Furthermore, CRM analyses revealed remarkable differences in the overall chemical composition of permafrost and non-permafrost strains of methanogens, regardless of their phylogenetic relationship. The convergence of the chemical composition in non-sister permafrost strains may be the consequence of adaptations to the environment, and could explain their greater resistance compared to the non-permafrost strains. As part of this study, Raman spectroscopy was evaluated as an analytical technique for remote detection of methanogens embedded in a mineral matrix. This thesis contributes to the understanding of the survival limits of methanogenic archaea under simulated martian conditions to further assess the hypothetical existence of life similar to methanogens in the martian subsurface. In addition, the overall chemical composition of methanogens was characterized for the first time by means of confocal Raman microspectroscopy, with potential implications for astrobiological research.
The term Linked Data refers to connected information sources comprising structured data about a wide range of topics and for a multitude of applications. In recent years, the conceptual and technical foundations of Linked Data have been formalized and refined. To this end, well-known technologies have been established, such as the Resource Description Framework (RDF) as a Linked Data model and the SPARQL Protocol and RDF Query Language (SPARQL) for retrieving this information. Whereas most research has been conducted in the area of generating and publishing Linked Data, this thesis presents novel approaches for its improved management. In particular, we illustrate new methods for analyzing and processing SPARQL queries. Here, we present two algorithms suitable for identifying structural relationships between these queries. Both algorithms are applied to a large number of real-world requests to evaluate the performance of the approaches and the quality of their results. Based on this, we introduce different strategies enabling optimized access to Linked Data sources. We demonstrate how the presented approach facilitates effective utilization of SPARQL endpoints by prefetching results relevant for multiple subsequent requests. Furthermore, we contribute a set of metrics for determining technical characteristics of such knowledge bases. To this end, we devise practical heuristics and validate them through thorough analysis of real-world data sources. We discuss the findings and evaluate their impact on utilizing the endpoints. Moreover, we detail the adoption of a scalable infrastructure for improving Linked Data discovery and consumption. As we outline in an exemplary use case, this platform is suitable both for processing and for provisioning the corresponding information.
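To give a flavour of identifying structural relationships between SPARQL queries, the following Python toy normalises variable names and compares the resulting sets of triple patterns. A real implementation would use a proper SPARQL parser (e.g. rdflib's); this string-level version, with its assumed whitespace conventions, only sketches the idea.

```python
# Toy structural similarity of two SPARQL queries via their
# variable-normalised triple patterns (Jaccard coefficient).
import re

def normalised_patterns(query):
    where = query[query.index("{") + 1 : query.rindex("}")]
    patterns = set()
    for triple in filter(None, (t.strip() for t in where.split(" . "))):
        # rename every variable so queries differing only in variable
        # names yield identical patterns
        patterns.add(re.sub(r"\?\w+", "?v", triple))
    return patterns

def jaccard(q1, q2):
    a, b = normalised_patterns(q1), normalised_patterns(q2)
    return len(a & b) / len(a | b)

q1 = ("SELECT ?x { ?x a <http://xmlns.com/foaf/0.1/Person> . "
      "?x <http://xmlns.com/foaf/0.1/name> ?n }")
q2 = "SELECT ?p { ?p a <http://xmlns.com/foaf/0.1/Person> }"
print(jaccard(q1, q2))  # -> 0.5: one shared pattern of two distinct ones
```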
HPI Future SOC Lab
(2013)
The “HPI Future SOC Lab” is a cooperation of the Hasso-Plattner-Institut (HPI) and industrial partners. Its mission is to enable and promote exchange and interaction between the research community and the industrial partners. The HPI Future SOC Lab provides researchers with free-of-charge access to a complete infrastructure of state-of-the-art hardware and software. This infrastructure includes components which might be too expensive for an ordinary research environment, such as servers with up to 64 cores. The offerings address researchers particularly from, but not limited to, the areas of computer science and business information systems. Main areas of research include cloud computing, parallelization, and in-memory technologies. This technical report presents results of research projects executed in 2012. Selected projects presented their results on June 18th and November 26th, 2012, at the Future SOC Lab Day events.
Despite its many challenges and limitations, the concept of in situ upgrading of informal settlements has become one of the most favoured approaches to the housing crisis in the ‘Global South’. Due to its inherent principles of incremental in situ development, prevention of relocations, protection of local livelihoods, and democratic participation and cooperation, this approach is often perceived to be more sustainable than other housing approaches that often rely on quantitative housing delivery and top-down planning methodologies. While this study does not question the benefits of the in situ upgrading approach, it seeks to identify problems of its practical implementation within a specific national and local context. The study discusses the origin and importance of this approach on the basis of a review of international housing policy development and analyses the broader political and social context of the incorporation of this approach into South African housing policy. It further uses insights from a recent case study in Cape Town to determine complications and conflicts that can arise when applying in situ upgrading of informal settlements in a complex local context. On that basis, benefits and limitations of the in situ upgrading approach are specified and prerequisites for its successful implementation are formulated.
This document presents a formula selection system for classical first-order theorem proving based on the relevance of formulae for the proof of a conjecture. It is based on unifiability of predicates and is also able to use a linguistic approach for the selection. The aim of the technique is to reduce the set of formulae and to increase the number of conjectures provable in a given time. Since the technique generates a subset of the formula set, it can be used as a preprocessor for automated theorem proving. The document contains the conception, implementation and evaluation of both selection concepts. While one concept generates a search graph over the negation normal forms or Skolem normal forms of the given formulae, the linguistic concept analyses the formulae, determines frequencies of lexemes, and uses a tf-idf weighting algorithm to determine the relevance of the formulae. Though the concept is built for first-order logic, it is not limited to it; with minimal adaptations it can also be used for higher-order and modal logic. The system was also evaluated at the world championship of automated theorem provers (CADE ATP Systems Competition, CASC-24) in combination with the leanCoP theorem prover, and the evaluation of the CASC results and of benchmarks with the problems of the CASC of the year 2012 (CASC-J6) shows that the concept has a positive impact on the performance of automated theorem provers. Benchmarks with two theorem provers that use different calculi have also shown that the selection is independent of the calculus. Moreover, the concept of TEMPLAR has proven to be competitive, to some extent, with the concept of SinE, and even helped one of the theorem provers to solve problems in the CASC that were not solved (or were solved more slowly) with SinE selection. Finally, the evaluation implies that the combination of the unification-based and linguistic selection yields further improved results, even though no optimisation was done for the problems.
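The linguistic selection can be illustrated with a toy ranking. The following is a hedged sketch of the idea, not TEMPLAR's implementation: formulae are treated as bags of lexemes, weighted by tf-idf, and ranked by their similarity to the conjecture's lexemes; all names and data are invented.

```python
# Rank formulae by tf-idf similarity of their lexemes to the conjecture's.
import math
from collections import Counter

def tfidf_rank(formulae, conjecture):
    """formulae: {name: [lexemes]}; conjecture: [lexemes]."""
    n = len(formulae)
    df = Counter(lex for lexs in formulae.values() for lex in set(lexs))
    def vec(lexemes):
        tf = Counter(lexemes)
        return {l: tf[l] * math.log(n / df[l]) for l in tf if l in df}
    cvec = vec(conjecture)
    def score(name):
        fvec = vec(formulae[name])
        return sum(cvec.get(l, 0.0) * w for l, w in fvec.items())
    return sorted(formulae, key=score, reverse=True)

axioms = {"a1": ["group", "inverse", "element"],
          "a2": ["ring", "ideal"],
          "a3": ["group", "homomorphism"]}
print(tfidf_rank(axioms, ["group", "inverse"]))  # 'a1' ranked first
```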
This document presents an axiom selection technique for classical first-order theorem proving based on the relevance of axioms for the proof of a conjecture. It is based on unifiability of predicates and does not need statistical information such as symbol frequency. The aim of the technique is to reduce the set of axioms and to increase the number of conjectures provable in a given time. Since the technique generates a subset of the axiom set, it can be used as a preprocessor for automated theorem proving. This technical report describes the conception, implementation and evaluation of ARDE. The selection method, which is based on a breadth-first graph search by unifiability of predicates, is a weakened form of the connection calculus and uses specialised variants of unifiability to speed up the selection. The implementation of the concept is evaluated by comparison with the results of the world championship of theorem provers of the year 2012 (CASC-J6). It is shown that both the theorem prover leanCoP, which uses the connection calculus, and E, which uses equality reasoning, can benefit from the selection approach. The evaluation also shows that the concept is applicable to theorem-proving problems with thousands of formulae and that the selection is independent of the calculus used by the theorem prover.
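The breadth-first expansion can be sketched in a few lines. This is a hedged illustration of the idea rather than ARDE's code: real unifiability checks are approximated here by simple predicate-symbol identity, and the axiom sets are invented.

```python
# Breadth-first axiom selection: starting from the conjecture's predicate
# symbols, expand level by level to axioms sharing "unifiable" predicates
# (approximated by symbol identity in this sketch).
from collections import deque

def select_axioms(axioms, conjecture_preds, max_depth=2):
    """axioms: {name: set_of_predicate_symbols}."""
    frontier = deque((p, 0) for p in conjecture_preds)
    seen_preds = set(conjecture_preds)
    selected = set()
    while frontier:
        pred, depth = frontier.popleft()
        if depth >= max_depth:
            continue
        for name, preds in axioms.items():
            if pred in preds and name not in selected:
                selected.add(name)
                for q in preds - seen_preds:
                    seen_preds.add(q)
                    frontier.append((q, depth + 1))
    return selected

axioms = {"a1": {"subset", "member"}, "a2": {"member", "union"},
          "a3": {"disjoint", "empty"}}
print(select_axioms(axioms, {"subset"}))  # {'a1', 'a2'}
```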
The Adana Basin of southern Turkey, situated at the SE margin of the Central Anatolian Plateau, is ideally located to record Neogene topographic and tectonic changes in the easternmost Mediterranean realm. Using industry seismic reflection data, we correlate 34 seismic profiles with corresponding exposed units in the Adana Basin. The time-depth conversion of the interpreted seismic profiles allows us to reconstruct the subsidence curve of the Adana Basin and to identify a major increase in both subsidence and sedimentation rates at 5.45–5.33 Ma, leading to the deposition of almost 1500 km³ of conglomerates and marls. Our provenance analysis of the conglomerates reveals that most of the sediment is derived from and north of the SE margin of the Central Anatolian Plateau. A comparison of these results with the composition of recent conglomerates and the present drainage basins indicates major changes between late Messinian and present-day source areas. We suggest that these changes in source areas result from uplift and ensuing erosion of the SE margin of the plateau. This hypothesis is supported by a comparison of the Adana Basin subsidence curve with that of the Mut Basin, a mainly Neogene basin located on top of the southern margin of the Central Anatolian Plateau, showing that the Adana Basin subsidence event is coeval with an uplift episode of the plateau's southern margin. Several fault measurements collected in the Adana region show different deformation styles for the NW and SE margins of the Adana Basin. The weakly seismic NW portion of the basin is characterized by extensional and transtensional structures cutting Neogene deposits, likely accommodating the differential uplift occurring between the basin and the SE margin of the plateau. We interpret the tectonic evolution of the southern flank of the Central Anatolian Plateau and the coeval subsidence and sedimentation in the Adana Basin to be related to deep lithospheric processes, particularly lithospheric delamination and slab break-off.
These lecture notes are intended as a short introduction to diffusion processes on a domain with a reflecting boundary, aimed at graduate students, researchers in stochastic analysis, and interested readers. Specific results on stochastic differential equations with reflecting boundaries are given, such as existence and uniqueness, continuity and Markov properties, the relation to partial differential equations, and submartingale problems. An extensive list of references to the current literature is included. The book has its origins in a mini-course the author gave at the University of Potsdam and at the Technical University of Berlin in winter 2013.
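For orientation, the central object of such notes can be stated compactly; the following standard formulation of a reflected SDE (the Skorokhod problem) is a summary of common conventions, not a quotation from the text. For a smooth domain D in R^d with inward unit normal n on the boundary:

```latex
X_t = x_0 + \int_0^t b(X_s)\,ds + \int_0^t \sigma(X_s)\,dW_s + \int_0^t n(X_s)\,dL_s,
\qquad X_t \in \overline{D} \ \text{for all } t \ge 0,
```

where L is a nondecreasing process (the boundary local time) that increases only when X_t lies on the boundary, i.e. \int_0^t \mathbf{1}_{\{X_s \in \partial D\}}\,dL_s = L_t. Existence and uniqueness of the pair (X, L) is one of the specific results referred to above.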
How the implicit and explicit motive systems differ has long been unclear. Schultheiss's (2001) information-processing account of implicit motive arousal hypothesized that implicit motives respond to nonverbal stimuli and influence non-declarative measures of motivation, whereas explicit motives respond to verbal stimuli and influence declarative measures of motivation. Moreover, in individuals high in referential competence, i.e., the ability to quickly translate non-verbal stimuli into a verbal representation, implicit motives are thought to respond to verbal stimuli and influence declarative measures of motivation, and explicit motives are thought to respond to nonverbal stimuli and to influence non-declarative measures of motivation. The present study tested these hypotheses by assessing liking ratings as a declarative response format and an affective Stroop task as a non-declarative response format, using emotion words as verbal stimuli and emotional facial expressions as non-verbal stimuli. Individual power, affiliation, and achievement motive dispositions were assessed via the Picture Story Exercise for implicit motives and via questionnaires for explicit motives. Referential competence was assessed via a colour-naming/-reading task. I found that, as expected, explicit and implicit motives were overall not correlated across subjects. Moreover, implicit and explicit motives affected declarative and non-declarative responses to verbal and non-verbal stimuli. As predicted, however, implicit motives responded to verbal stimuli and influenced declarative responses more strongly in individuals high rather than low in referential competence. Likewise, explicit motive effects were moderated by referential competence in some - but not all - of the predicted conditions. These results show that implicit and explicit motives can influence declarative and non-declarative responses to verbal and non-verbal stimuli. They support the hypothesis that referential processing is needed for implicit motives to respond to verbal stimuli and influence declarative response formats, and they partly support the hypothesis that referential processing plays a role in the influence of explicit motives. The results for explicit motives may suggest that new measures are needed to assess the referential competence to translate verbal stimuli into non-verbal representations. Overall, the findings support Schultheiss's (2001) information-processing account of implicit motive arousal, suggesting that a non-verbal and non-declarative implicit motive system and a distinct verbal and declarative explicit motive system interact via referential processing, i.e., by translating information between representational formats.
The Beruriah Incident
(2014)
The story known as the Beruriah Incident, which appears in Rashi's commentary on bAvodah Zarah 18b (related to ATU types 920A* and 823A*), describes the failure and tragic end of R. Meir and his wife Beruriah, two tannaitic role models. This article examines the authenticity of the story by tracing its mode of transmission in traditional Jewish society before the modern era, and by comparing the story's components with rabbinic literature and international folklore.
The development of the liturgical music currently used in the Belgrade synagogue has in recent decades been heavily influenced by foreign traditions (mostly Levantine) that are brought to Belgrade by modern communication systems. It is therefore nearly impossible to speak of a status quo – at least with respect to the melodies – as any description might be obsolete by tomorrow. The great changes within the liturgical music occurred not through acculturation into the Serbian majority but through the personal preferences of the religious leaders of the Belgrade Jews. The alterations are a conscious process, the direct consequence of the musical taste of the local rabbi and cantor, and do not occur autonomously. In order to understand the new nusah sepharadi-yerushalmi that took the place of the abandoned nusah after the downfall of the Communist regime, it is necessary to look towards Israel, where the rite developed.
Pronoun resolution normally takes place without conscious effort or awareness, yet the processes behind it are far from straightforward. A large number of cues and constraints have previously been recognised as playing a role in the identification and integration of potential antecedents, yet there is considerable debate over how these operate within the resolution process. The aim of this thesis is to investigate how the parser handles multiple antecedents, in order to understand more about how certain information sources play a role during pronoun resolution. I consider how both structural information and information provided by the prior discourse are used during online processing. This is investigated through several experiments using eye tracking during reading, complemented by a number of offline questionnaire experiments. I begin by considering how condition B of the Binding Theory (Chomsky 1981; 1986) has been captured in pronoun processing models; some researchers have claimed that processing is faithful to syntactic constraints from the beginning of the search (e.g. Nicol and Swinney 1989), while others have claimed that potential antecedents which are ruled out on structural grounds nonetheless affect processing, because the parser must also pay attention to a potential antecedent's features (e.g. Badecker and Straub 2002). My experimental findings demonstrate that the parser is sensitive to the subtle changes in syntactic configuration which either allow or disallow pronoun reference to a local antecedent, and indicate that the parser is normally faithful to condition B at all stages of processing. Secondly, I test the Primitives of Binding hypothesis proposed by Koornneef (2008), based on work by Reuland (2001), which is a modular approach to pronoun resolution in which variable binding (a semantic relationship between pronoun and antecedent) takes place before coreference. I demonstrate that a variable-binding (VB) antecedent is not systematically considered earlier than a coreference (CR) antecedent online. I then go on to explore whether these findings can be attributed to the linear order of the antecedents, and uncover a robust recency preference both online and offline. I consider what role the factor of recency plays in pronoun resolution and how it can be reconciled with the first-mention advantage (Gernsbacher and Hargreaves 1988; Arnold 2001; Arnold et al. 2007). Finally, I investigate how aspects of the prior discourse affect pronoun resolution. Prior discourse status clearly had an effect on pronoun resolution, but an antecedent's appearance in the previous context was not always facilitative; I propose that this is due to the number of topic switches that a reader must make, leading to a lack of discourse coherence which has a detrimental effect on pronoun resolution. The sensitivity of the parser to structural cues does not entail that cue types can be easily separated into distinct sequential stages, and I therefore propose that the parser is structurally sensitive but not modular. Aspects of pronoun resolution can be captured within a parallel constraints model of pronoun resolution; however, such a model should be sensitive to the activation of potential antecedents based on discourse factors, and structural cues should be strongly weighted.
When azobenzene-modified photosensitive polymer films are irradiated with light interference patterns, topographic variations develop in the film that follow the electric field vector distribution, resulting in the formation of a surface relief grating (SRG). The exact correspondence between the electric field vector orientation in the interference pattern and the local topographic minima or maxima of the SRG is in general difficult to determine. In this thesis, we establish a systematic procedure to correlate different interference patterns with the topography of the SRG. For this, we devise a new setup combining an atomic force microscope and a two-beam interferometer (IIAFM). With this setup, it is possible to track the topography change in situ, while at the same time changing the polarization and phase of the impinging interference pattern. To validate our results, we have compared two photosensitive materials, named in short PAZO and trimer. This is the first time that an absolute correspondence between the local distribution of electric field vectors of the interference pattern and the local topography of the relief grating could be established exhaustively. In addition, using our IIAFM we found that for a certain polarization combination of two orthogonally polarized interfering beams, namely the SP (↕, ↔) interference pattern, the topography forms an SRG with only half the period of the interference pattern. Exploiting this phenomenon, we are able to fabricate surface relief structures below the diffraction limit, with characteristic features measuring only 140 nm, using far-field optics with a wavelength of 491 nm. We have also probed the stresses induced during the polymer mass transport by placing an ultra-thin gold film (5–30 nm) on top. During irradiation, the metal film not only deforms along with the SRG formation, but ruptures in a regular and complex manner. The morphology of the cracks differs strongly depending on the electric field distribution in the interference pattern, even when the magnitude and the kinetics of the strain are kept constant. This implies a complex local distribution of the opto-mechanical stress along the topography grating. Neutron reflectivity measurements of the metal/polymer interface indicate penetration of the metal layer into the polymer, resulting in the formation of a bonding layer that confirms the transduction of light-induced stresses from the polymer layer to the metal film.
In Chapter 1 of the dissertation, the role of social networks is analyzed as an important determinant of the search behavior of the unemployed. Based on the hypothesis that the unemployed generate information on vacancies through their social network, search theory predicts that individuals with large social networks should experience an increased productivity of informal search and reduce their search through formal channels. Due to the higher productivity of search, unemployed individuals with a larger network are also expected to have a higher reservation wage than those with a small network. The model-theoretic predictions are tested and confirmed empirically. It is found that the search behavior of the unemployed is significantly affected by the presence of social contacts, with larger networks implying a stronger substitution away from formal search channels towards informal channels. The substitution is particularly pronounced for passive formal search methods, i.e., search methods that generate rather non-specific types of job offer information at low relative cost. We also find small but significant positive effects of an increase in network size on the reservation wage. These results have important implications for the analysis of job search monitoring or counseling measures that are usually targeted at formal search only. Chapter 2 of the dissertation addresses the labor market effects of vacancy information during the early stages of unemployment. The outcomes considered are the speed of exit from unemployment, the effects on the quality of employment, and the short- and medium-term effects on participation in active labor market programs (ALMP). It is found that vacancy information significantly increases the speed of entry into employment; at the same time, the probability of participating in ALMP is significantly reduced. Whereas the long-term reduction in ALMP participation arises as a consequence of the earlier exit from unemployment, we also observe a short-run decrease for some labor market groups, which suggests that caseworkers use high- and low-intensity activation measures interchangeably, which is clearly questionable from an efficiency point of view. For unemployed individuals who find a job through vacancy information, we observe a small negative effect on the weekly number of hours worked. In Chapter 3, the long-term effects of participation in ALMP are assessed for unemployed youth under 25 years of age. Complementary to the analysis in Chapter 2, the effects of participation in time- and cost-intensive measures of active labor market policy are examined. In particular, we study the effects of job creation schemes, wage subsidies, short- and long-term training measures, and measures to promote participation in vocational training. The outcome variables of interest are the probability of being in regular employment and participation in further education during the 60 months following program entry. The analysis shows that all programs except job creation schemes have positive long-term effects on the employment probability of youth. In the short run, only short-term training measures generate positive effects, as long-term training programs and wage subsidies exhibit significant 'locking-in' effects. Measures to promote vocational training are found to increase the probability of attending education and training significantly, whereas all other programs have either no effect or a negative effect on training participation.
Effect heterogeneity with respect to the pre-treatment level of education shows that young people with higher pre-treatment educational levels benefit more from participation in most programs. However, for longer-term wage subsidies we also find strong positive effects for young people with low initial education levels. The relative benefit of training measures is higher in West than in East Germany. In the evaluation studies of Chapters 2 and 3, the semi-parametric balancing methods of Propensity Score Matching (PSM) and Inverse Probability Weighting (IPW) are used to eliminate the effects of confounding factors that influence both treatment participation and the outcome variable of interest, and to establish a causal relation between program participation and outcome differences. While PSM and IPW are intuitive and methodologically attractive as they do not require parametric assumptions, their practical implementation may become quite challenging due to their sensitivity to various data features. Given the importance of these methods in the evaluation literature, and the vast number of recent methodological contributions in this field, Chapter 4 aims to reduce the knowledge gap between the methodological and applied literature by summarizing new findings of the empirical and statistical literature and providing practical guidelines for future applied research. In contrast to previous publications, this study does not only focus on the estimation of causal effects, but stresses that the balancing challenge can and should be discussed independently of the question of causal identification of treatment effects in most empirical applications. Following a brief outline of the practical implementation steps required for PSM and IPW, these steps are presented in detail chronologically, with practical advice for each step. Subsequently, the topics of effect estimation, inference, sensitivity analysis, and the combination with parametric estimation methods are discussed. Finally, new extensions of the methodology and avenues for future research are presented.
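The core of the IPW estimator discussed here fits in a few lines. The following is a hedged sketch on synthetic data, not the thesis code: the data-generating process and variable names are invented, and the propensity model is a plain logistic regression.

```python
# IPW estimate of an average treatment effect (ATE) on synthetic data:
# propensity scores from a logistic regression, then a weighted
# difference in outcome means (Horvitz-Thompson form).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 3))                          # observed confounders
p_true = 1 / (1 + np.exp(-(x @ [0.8, -0.5, 0.3])))
d = rng.binomial(1, p_true)                          # program participation
y = 2.0 * d + x @ [1.0, 1.0, -1.0] + rng.normal(size=n)  # outcome, true ATE = 2

ps = LogisticRegression().fit(x, d).predict_proba(x)[:, 1]
ps = np.clip(ps, 0.01, 0.99)                         # trim extreme weights (common practice)
ate = np.mean(d * y / ps) - np.mean((1 - d) * y / (1 - ps))
print(f"IPW estimate of the ATE: {ate:.2f}")         # close to 2.0
```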
LCST-type synthetic thermoresponsive polymers can reversibly respond to certain stimuli in aqueous media with a massive change of their physical state. When fluorophores that are sensitive to such changes are incorporated into the polymeric structure, the response can be translated into a fluorescence signal. Based on this idea, this thesis presents sensing schemes which transduce the stimuli-induced variations in the solubility of polymer chains with covalently bound fluorophores into a well-detectable fluorescence output. Benefiting from different photophysical phenomena, namely fluorescence resonance energy transfer and solvatochromism, such fluorescent copolymers enabled the monitoring of stimuli such as solution temperature and ionic strength, but also of association/dissociation mechanisms with other macromolecules and of biochemical binding events, through remarkable changes in their fluorescence properties. For instance, an aqueous ratiometric dual sensor for temperature and salts was developed, relying on the delicate supramolecular assembly of a thermoresponsive copolymer with a thiophene-based conjugated polyelectrolyte. Alternatively, by taking advantage of the sensitivity of solvatochromic fluorophores, an increase in solution temperature or the presence of analytes was signaled by an enhancement of the fluorescence intensity. The simultaneous use of the sensitivity of chains towards temperature and a specific antibody allowed the monitoring of more complex phenomena such as competitive binding of analytes. The use of different thermoresponsive polymers, namely poly(N-isopropylacrylamide) and poly(meth)acrylates bearing oligo(ethylene glycol) side chains, revealed that the responsive polymers differ widely in their ability to perform a particular sensing function. In order to address questions regarding the impact of the chemical structure of the host polymer on the sensing performance, the macromolecular assembly behavior below and above the phase transition temperature was evaluated by a combination of fluorescence and light scattering methods. It was found that although the temperature-triggered changes in the macroscopic absorption characteristics were similar for these polymers, properties such as the degree of hydration or the extent of interchain aggregation differed substantially. Therefore, in addition to demonstrating strategies for fluorescence-based sensing with thermoresponsive polymers, this work highlights the role of the chemical structure of the two popular thermoresponsive polymers in the fluorescence response. The results are fundamentally important for the rational choice of polymeric materials for a specific sensing strategy.
Nowadays, software systems are becoming more and more complex. To tackle this challenge, a wide variety of techniques, such as design patterns, service-oriented architectures (SOA), software development processes, and model-driven engineering (MDE), are used to improve productivity while keeping time to market and product quality stable. Several of these techniques are used in parallel to profit from their benefits. While the use of sophisticated software development processes is standard today, MDE is only just being adopted in practice. However, research has shown that the application of MDE is not always successful. It is not fully understood when the advantages of MDE can be exploited and to what degree MDE can also be disadvantageous for productivity. Further, when combining different techniques that aim to affect the same factor (e.g. productivity), the question arises whether these techniques really complement each other or, in contrast, compensate each other's effects. This raises the concrete question of how MDE and other techniques, such as software development processes, are interrelated. Both aspects (advantages and disadvantages for productivity as well as the interrelation with other techniques) need to be understood to identify risks relating to the productivity impact of MDE. Before studying MDE's impact on productivity, it is necessary to investigate the range of validity that can be reached for the results. This includes two questions. First, there is the question whether MDE's impact on productivity is similar for all approaches of adopting MDE in practice. Second, there is the question whether MDE's impact on productivity for one approach of using MDE in practice remains stable over time. The answers to both questions are crucial for handling the risks of MDE, but also for the design of future studies on MDE success. This thesis addresses these questions with the goal of supporting the adoption of MDE in the future. To enable a differentiated discussion about MDE, the term 'MDE setting' is introduced. An MDE setting refers to the applied technical setting, i.e. the employed manual and automated activities, artifacts, languages, and tools. An MDE setting's possible impact on productivity is studied with a focus on changeability and the interrelation with software development processes. This is done by introducing a taxonomy of changeability concerns that might be affected by an MDE setting. Further, three MDE traits are identified, and it is studied for which manifestations of these traits software development processes are impacted. To enable the assessment and evaluation of an MDE setting's impacts, the Software Manufacture Model language is introduced. This is a process modeling language that allows reasoning about how relations between (modeling) artifacts (e.g. models or code files) change during the application of manual or automated development activities. On that basis, risk analysis techniques are provided. These techniques allow identifying changeability risks and assessing the manifestations of the MDE traits (and with them an MDE setting's impact on software development processes). To address the range of validity, MDE settings from practice and their evolution histories were captured in the context of this thesis. First, this data is used to show that MDE settings cover the whole spectrum concerning their impact on changeability and their interrelation with software development processes: it is neither rare for MDE settings to be neutral with respect to processes, nor rare for them to have an impact on processes.
Similarly, the impact on changeability differs considerably. Second, a taxonomy of the evolution of MDE settings is introduced. In that context, it is discussed to what extent different types of changes to an MDE setting can influence that setting's impact on changeability and its interrelation with processes. The category of structural evolution, which can change these characteristics of an MDE setting, is identified. The captured MDE settings from practice are used to show that structural evolution exists and is common. In addition, examples of structural evolution steps are collected that actually led to a change in the characteristics of the respective MDE settings. Two implications follow: First, the observed diversity of MDE settings underlines the need for the analysis techniques presented in this thesis. Second, evolution is one explanation for the diversity of MDE settings in practice. To summarize, this thesis studies the nature and evolution of MDE settings in practice. As a result, support for the adoption of MDE settings is provided in the form of techniques for the identification of risks relating to productivity impacts.
This thesis gives formal definitions of discourse-givenness, coreference and reference, and reports on experiments with computational models of the discourse-givenness of noun phrases for English and German. The definitions are based on Bach's (1987) work on reference, Kibble and van Deemter's (2000) work on coreference, and Kamp and Reyle's (1993) Discourse Representation Theory. For the experiments, the following corpora with coreference annotation were used: MUC-7, OntoNotes and ARRAU for English, and TueBa-D/Z for German. The classification algorithms cover J48 decision trees, the rule-based learner Ripper, and linear support vector machines. New features are suggested, representing a noun phrase's specificity as well as its context, which lead to a significant improvement in classification quality.
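The classification setup can be illustrated with a minimal stand-in. In this hedged sketch, scikit-learn's DecisionTreeClassifier replaces Weka's J48 (both are C4.5-style decision trees), and the features and labels are invented for illustration only.

```python
# Toy discourse-givenness classification with a decision tree.
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# hypothetical feature vectors per noun phrase:
# [definite article?, mentioned earlier in discourse?, is pronoun?, NP length]
X = [
    [1, 1, 0, 2],
    [0, 0, 0, 4],
    [1, 1, 1, 1],
    [0, 0, 0, 3],
    [1, 0, 0, 2],
    [0, 1, 1, 1],
]
y = [1, 0, 1, 0, 0, 1]  # 1 = discourse-given, 0 = discourse-new

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
print(cross_val_score(clf, X, y, cv=3).mean())  # cross-validated accuracy
```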
It sometimes happens that we finish reading a passage of text just to realize that we have no idea what we just read. During these episodes of mindless reading, our mind is elsewhere, yet the eyes still move across the text. The phenomenon of mindless reading is common and seems to be widely recognized in lay psychology. However, the scientific investigation of mindless reading has long been underdeveloped. Recent progress in research on mindless reading has been based on self-report measures and on treating it as an all-or-none phenomenon (dichotomy hypothesis). Here, we introduce the levels-of-inattention hypothesis, proposing that mindless reading is graded and occurs at different levels of cognitive processing. Moreover, we introduce two new behavioral paradigms to study mindless reading at different levels in the eye-tracking laboratory. First (Chapter 2), we introduce shuffled text reading as a paradigm to approximate states of weak mindless reading experimentally, and compare it to the reading of normal text. Results from statistical analyses of the eye movements that subjects perform in this task qualitatively support the ‘mindless’ hypothesis that cognitive influences on eye movements are reduced, and the ‘foveal load’ hypothesis that the response of the zoom lens of attention to local text difficulty is enhanced when reading shuffled text. We introduce and validate an advanced version of the SWIFT model (SWIFT 3) incorporating the zoom lens of attention (Chapter 3) and use it to explain eye movements during shuffled text reading. Simulations of the SWIFT 3 model provide fully quantitative support for the ‘mindless’ and the ‘foveal load’ hypotheses. They moreover demonstrate that the zoom lens is an important concept for explaining eye movements across reading and mindless reading tasks. Second (Chapter 4), we introduce the sustained attention to stimulus task (SAST) to catch episodes when external attention spontaneously lapses (i.e., attentional decoupling or mind wandering), via the overlooking of errors in the text and via signal detection analyses of error detection. Analyses of eye movements in the SAST revealed reduced influences of cognitive text processing during mindless reading. Based on these findings, we demonstrate that it is possible to predict states of mindless reading from eye movement recordings online. That cognition is not always needed to move the eyes supports autonomous mechanisms of saccade initiation. Results from analyses of error detection and eye movements support our levels-of-inattention hypothesis that errors at different levels of the text assess different levels of decoupling. Analyses of pupil size in the SAST (Chapter 5) provide further support for the levels-of-inattention hypothesis and for the decoupling hypothesis that off-line thought is a distinct mode of cognitive functioning that demands cognitive resources and is associated with deep levels of decoupling. The present work demonstrates that the elusive phenomenon of mindless reading can be rigorously investigated in the cognitive laboratory and further incorporated into the theoretical framework of cognitive science.
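The signal detection analysis of error detection mentioned for the SAST can be illustrated by a standard sensitivity computation. A minimal sketch, not the thesis code; the counts are invented, and a log-linear correction guards against hit or false alarm rates of exactly 0 or 1.

```python
# Sensitivity d' from hit and false alarm counts (standard SDT measure).
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    # log-linear correction to avoid rates of exactly 0 or 1
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# e.g. a reader detecting embedded text errors:
print(d_prime(hits=18, misses=6, false_alarms=3, correct_rejections=45))
```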
For the first time, the transcriptional reprogramming of distinct root cortex cells during the arbuscular mycorrhizal (AM) symbiosis was investigated by combining Laser Capture Microdissection and Affymetrix GeneChip® Medicago genome array hybridization. The establishment of cryosections facilitated the isolation of high-quality RNA in sufficient amounts from three different cortical cell types. The transcript profiles of arbuscule-containing cells (arb cells) and non-arbuscule-containing cells (nac cells) of Rhizophagus irregularis-inoculated Medicago truncatula roots, and of cortex cells of non-inoculated roots (cor), were successfully explored. The data gave new insights into the symbiosis-related cellular reorganization processes and indicated that nac cells already seem to be prepared for the upcoming fungal colonization. The mycorrhiza- and phosphate-dependent transcription of a GRAS TF family member (MtGras8) was detected in arb cells and mycorrhizal roots. MtGRAS8 shares high sequence similarity with a GRAS TF suggested to be involved in the fungal colonization process (MtRAM1). The function of MtGras8 was unraveled by RNA interference (RNAi)-mediated gene silencing. AM symbiosis-dependent expression of an RNAi construct (MtPt4pro::gras8-RNAi) led to successful gene silencing of MtGras8, resulting in a reduced arbuscule abundance and a higher proportion of deformed arbuscules in roots with reduced transcript levels. Accordingly, MtGras8 might control arbuscule development and lifetime. The targeting of MtGras8 by the phosphate-dependently regulated miRNA5204* was discovered previously (Devers et al., 2011). Since miRNA5204* is known to be affected by phosphate, this posttranscriptional regulation might represent a link between phosphate signaling and arbuscule development. In this work, the posttranscriptional regulation was confirmed by mis-expression of miRNA5204* in M. truncatula roots. The miRNA-mediated gene silencing affects the MtGras8 transcript abundance only in the first two weeks of the AM symbiosis, and the mis-expression lines seem to mimic the phenotype of the MtGras8-RNAi lines. Additionally, MtGRAS8 seems to form heterodimers with NSP2 and RAM1, which are known to be key regulators of the fungal colonization process (Hirsch et al., 2009; Gobbato et al., 2012). These data indicate that MtGras8 and miRNA5204* are linked to the sym pathway and regulate arbuscule development in a phosphate-dependent manner.
The contractile vacuole (CV) is an osmoregulatory organelle found exclusively in algae and protists. In addition to expelling excess water out of the cell, it also expels ions and other metabolites and thereby contributes to the cell's metabolic homeostasis. The interest in the CV reaches beyond its immediate cellular roles. The CV's function is tightly related to basic cellular processes such as membrane dynamics and vesicle budding and fusion; several physiological processes in animals, such as synaptic neurotransmission and blood filtration in the kidney, are related to the CV's function; and several pathogens, such as the causative agents of sleeping sickness, possess CVs, which may serve as pharmacological targets. The green alga Chlamydomonas reinhardtii has two CVs. They are the smallest known CVs in nature, and they remain relatively untouched in the CV-related literature. Many genes that have been shown to be related to the CV in other organisms have close homologues in C. reinhardtii. We attempted to silence some of these genes and observe the effect on the CV. One of our genes, VMP1, caused striking, severe phenotypes when silenced. Cells exhibited defective cytokinesis and aberrant morphologies; the CV, incidentally, remained unscathed. In addition, mutant cells showed some evidence of disrupted autophagy. Several important regulators of the cell cycle as well as of autophagy were found to be underexpressed in the mutant. Lipidomic analysis revealed many meaningful changes between wild-type and mutant cells, reinforcing the compromised-autophagy observation. VMP1 is a singular protein, with homologues in numerous eukaryotic organisms (aside from fungi), but usually with no relatives in each particular genome. Since its first characterization in 2002, it has been associated with several cellular processes and functions, namely autophagy, programmed cell death, secretion, cell adhesion, and organelle biogenesis. It has been implicated in several human diseases: pancreatitis, diabetes, and several types of cancer. Our results reiterate some of the observations made for VMP1's six reported homologues but, importantly, show for the first time an involvement of this protein in cell division. The mechanisms underlying this involvement in Chlamydomonas, as well as other key aspects, such as VMP1's subcellular localization and interaction partners, still await elucidation.
The Semantic Web provides information contained in the World Wide Web as machine-readable facts. In comparison to keyword-based inquiry, semantic search enables a more sophisticated exploration of web documents. By clarifying the meaning behind entities, search results become more precise, and the semantics simultaneously enable an exploration of semantic relationships. However, unlike keyword search, a semantic entity-focused search requires that web documents be annotated with semantic representations of common words and named entities. Manual semantic annotation of (web) documents is time-consuming; in response, automatic annotation services have emerged in recent years. These annotation services take continuous text as input, detect important key terms and named entities, and annotate them with semantic entities contained in widely used semantic knowledge bases, such as Freebase or DBpedia. The metadata of video documents require special attention. Semantic analysis approaches for continuous text cannot be applied, because contextual information in video documents originates from multiple sources with different reliabilities and characteristics. This thesis presents a semantic analysis approach for video metadata, consisting of a context model and a disambiguation algorithm. The context model takes into account the characteristics of video metadata and derives a confidence value for each metadata item. The confidence value represents the level of correctness and ambiguity of the textual information of the metadata item: the lower the ambiguity and the higher the prospective correctness, the higher the confidence value. The metadata items derived from the video metadata are analyzed in a specific order, from high to low confidence level. Previously analyzed metadata are used as reference points in the context for subsequent disambiguation. The contextually most relevant entity is identified by means of descriptive texts and semantic relationships to the context. The context is created dynamically for each metadata item, taking into account the confidence value and other characteristics. The proposed semantic analysis follows two hypotheses: metadata items of a context should be processed in descending order of their confidence value, and the metadata belonging to a context should be limited by content-based segmentation boundaries. The evaluation results support both hypotheses and show increased recall and precision for annotated entities, especially for metadata originating from sources with low reliability. The algorithms have been evaluated against several state-of-the-art annotation approaches. The presented semantic analysis process is integrated into a video analysis framework and has been successfully applied in several projects for the semantic exploration of videos.
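The confidence-ordered disambiguation can be condensed into a short loop. This is a hedged sketch of the idea only; the helpers candidates_for and relatedness stand in for the thesis' candidate retrieval and context scoring and are assumptions, not real APIs.

```python
# Confidence-ordered disambiguation: items are processed from high to low
# confidence; already-disambiguated items serve as context for scoring
# the candidate entities of later items.

def disambiguate(items, candidates_for, relatedness):
    """items: list of (text, confidence) pairs.
    candidates_for(text) -> list of candidate entities (assumed helper).
    relatedness(entity, context_entities) -> float (assumed helper)."""
    context = []
    for text, _conf in sorted(items, key=lambda it: it[1], reverse=True):
        candidates = candidates_for(text)
        if not candidates:
            continue  # no entity found; leave item unannotated
        best = max(candidates, key=lambda e: relatedness(e, context))
        context.append(best)  # becomes a reference point for later items
    return context
```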
The objective and motivation behind this research is to provide applications with easy-to-use interfaces for communities of deaf and functionally illiterate users, enabling them to work without any human assistance. Although recent years have witnessed technological advancements, the availability of technology does not ensure accessibility to information and communication technologies (ICT). The extensive use of text, from menus to document contents, means that deaf or functionally illiterate users cannot access services implemented in most computer software. Consequently, most existing computer applications pose an accessibility barrier to those who are unable to read fluently. Online technologies intended for such groups should be developed in continuous partnership with the primary users and include a thorough investigation into their limitations, requirements and usability barriers. In this research, I investigated existing tools in voice, web and other multimedia technologies to identify learning gaps, and explored ways to enhance information literacy for deaf and functionally illiterate users. I worked on the development of user-centered interfaces to increase the capabilities of deaf and low-literacy users by enhancing lexical resources and by evaluating several multimedia interfaces for them. The interface of the platform-independent Italian Sign Language (LIS) Dictionary has been developed to enhance the lexical resources for deaf users. The Sign Language Dictionary accepts Italian lemmas as input and provides their representation in Italian Sign Language as output. The dictionary contains 3082 signs as a set of avatar animations, in which each sign is linked to a corresponding Italian lemma. I integrated the LIS lexical resources with the MultiWordNet (MWN) database to form the first LIS MultiWordNet (LMWN). LMWN contains information about lexical relations between words, semantic relations between lexical concepts (synsets), correspondences between Italian and sign language lexical concepts, and semantic fields (domains). The approach enhances deaf users' understanding of written Italian and shows that a relatively small lexicon can cover a significant portion of MWN. The integration of LIS signs with MWN makes it a useful tool for computational linguistics and natural language processing. The rule-based translation process from written Italian text to LIS has been transformed into a service-oriented system. The translation process is composed of various modules, including a parser, a semantic interpreter, a generator, and a spatial allocation planner. This translation procedure has been implemented in the Java Application Building Center (jABC), a framework for extreme model-driven design (XMDD). The XMDD approach focuses on bringing software development closer to conceptual design, so that the functionality of a software solution can be understood by someone who is unfamiliar with programming concepts. The transformation addresses the heterogeneity challenge and enhances the reusability of the system. To enhance the e-participation of functionally illiterate users, two detailed studies were conducted in the Republic of Rwanda. In the first study, a traditional (textual) interface was compared with a virtual character-based interactive interface. The study helped to identify usability barriers, and users evaluated these interfaces according to three fundamental areas of usability, i.e. effectiveness, efficiency and satisfaction.
In the second study, we developed four different interfaces to analyze the usability and the effects of online assistance (consistent help) for functionally illiterate users, and compared the effect of different help modes, including textual, vocal and virtual-character help, on the performance of semi-literate users. In our newly designed interfaces, the instructions were automatically translated into Swahili. All interfaces were evaluated on the basis of task accomplishment, time consumption, System Usability Scale (SUS) rating, and the number of times help was requested. The results show that the performance of semi-literate users improved significantly when using online assistance. The dissertation thus introduces a new development approach in which virtual characters are used as additional support for barely literate or naturally challenged users. Such components enhance the application's utility by offering a variety of services, such as translating contents into the local language, providing additional vocal information, and performing automatic translation from text to sign language. Obviously, there is no single design solution that fits all in the underlying domain. Context sensitivity, literacy and mental abilities are key factors on which I concentrated, and the results emphasize that computer interfaces must be based on a thoughtful definition of target groups, purposes and objectives.
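The SUS ratings used in these evaluations follow a fixed scoring scheme (Brooke's 10-item scale), which can be sketched directly; the responses below are invented.

```python
# Standard System Usability Scale (SUS) scoring: odd-numbered items
# contribute (response - 1), even-numbered items (5 - response);
# the sum is scaled to a 0-100 range.
def sus_score(responses):
    """responses: 10 answers on a 1-5 scale, item 1 first."""
    assert len(responses) == 10
    total = sum(r - 1 if i % 2 == 0 else 5 - r   # index 0 is item 1 (odd)
                for i, r in enumerate(responses))
    return total * 2.5                            # scale to 0-100

print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0
```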
Passive plant actuators have fascinated researchers in the fields of botany and structural biology for at least a century. To date, the most investigated tissue types in plant and artificial passive actuators are fibre-reinforced composites (and multilayered assemblies thereof), where stiff, almost inextensible cellulose microfibrils direct the otherwise isotropic swelling of a matrix. In addition, Nature provides examples of actuating systems based on lignified, low-swelling cellular solids enclosing a high-swelling cellulosic phase. This is the case in the Delosperma nakurense seed capsule, in which a specialized tissue promotes the reversible opening of the capsule upon wetting. This tissue has a diamond-shaped honeycomb microstructure characterized by high geometrical anisotropy: when the cellulosic phase swells inside this constraining structure, the tissue deforms up to fourfold in one principal direction while maintaining its original dimension in the other. Inspired by the example of Delosperma nakurense, in this thesis we analyze the role of the architecture of 2D cellular solids as models for natural hygromorphs. To start, we consider a simple fluid pressure acting in the cells and assess the influence of several architectural parameters on their mechanical actuation. Since internal pressurization is a configurational type of load (that is, the load direction is not fixed but "follows" the structure as it deforms), it results in the cellular structure acquiring a "spontaneous" shape. This shape is independent of the load and depends only on the architectural characteristics of the cells making up the structure. Whereas regular convex-tiled cellular solids (such as hexagonal, triangular or square lattices) deform isotropically upon pressurization, we show through finite element simulations that by introducing anisotropic and non-convex, re-entrant tilings, large expansions can be achieved in each individual cell. The influence of geometrical anisotropy on the expansion behaviour of a diamond-shaped honeycomb is assessed by FEM calculations and a Born lattice approximation. We found that anisotropic expansions (eigenstrains) comparable to those observed in the keel tissue of Delosperma nakurense are possible. In particular, these depend on the relative contributions of bending and stretching of the beams making up the honeycomb. Moreover, by varying the walls' Young's modulus E and the internal pressure p, we found that both the eigenstrains and the 2D elastic moduli scale with the ratio p/E. This outlines the potential of such pressurized structures as soft actuators. The approach was extended by considering several 2D cellular solids based on two types of non-convex cells. Each honeycomb is built as a lattice made of only one non-convex cell. Compared to usual honeycombs, these lattices have kinked walls between neighbouring cells, which offer a hidden length scale allowing large directed deformations. By comparing the area expansion in all lattices, we were able to show that less convex cells are prone to achieve larger area expansions, but the direction in which the material expands is variable and depends on the local cell connectivity. This has repercussions at both the macroscopic (lattice) and microscopic (cell) scales.
At the macroscopic scale, these non-convex lattices can exhibit large anisotropic principal expansions (similar to the diamond-shaped honeycomb) or perfectly isotropic ones, large shearing deformations, or mixed behaviour. Moreover, lattices that expand similarly at the macroscopic scale can show quite different microscopic deformation patterns, including zig-zag motions and radical changes of the initial cell shape. Depending on the lattice architecture, the microscopic deformations of the individual cells can be equal or not, so that they can add up or mutually compensate, giving rise to the aforementioned variety of macroscopic behaviours. Interestingly, simple geometrical arguments involving the undeformed cell shape and its local connectivity make it possible to predict the results of the FE simulations. Motivated by the results of the simulations, we also created experimental 3D-printed models of such actuating structures. When swollen, the models undergo substantial deformation, with deformation patterns qualitatively following those predicted by the simulations. This work highlights how the internal architecture of a swellable cellular solid can lead to complex shape changes, which may be useful in the fields of soft robotics and morphing structures.
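The area-expansion comparison used across lattices reduces to measuring cell areas before and after deformation. A hedged illustrative helper, not the thesis code, using the shoelace formula on polygonal cell outlines:

```python
# Area expansion of a cell from its polygon vertices before and after
# deformation, via the shoelace formula.
def polygon_area(vertices):
    """Area of a simple polygon given as [(x, y), ...]."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(vertices, vertices[1:] + vertices[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def area_expansion(before, after):
    return polygon_area(after) / polygon_area(before)

# a unit square swelling anisotropically into a 2 x 1.1 rectangle:
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
rect = [(0, 0), (2, 0), (2, 1.1), (0, 1.1)]
print(area_expansion(square, rect))  # -> 2.2
```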
Measuring the metabolite profile of plants can be a powerful phenotyping tool, but changes in metabolite pool sizes are often difficult to interpret, not least because pool sizes may stay constant while carbon flows are altered, and vice versa. Hence, measuring the carbon allocation of metabolites enables a better understanding of the metabolic phenotype. The main challenge of such measurements is the in vivo integration of a stable or radioactive label into a plant without perturbing the system. To follow the carbon flow from a precursor metabolite, a method is developed in this work that is based on metabolite profiling of primary metabolites measured with a mass spectrometer preceded by a gas chromatograph (Wagner et al. 2003; Erban et al. 2007; Dethloff et al., submitted). This method generates stable isotope profiling data in addition to conventional metabolite profiling data. To allow the feeding of a 13C sucrose solution into the plant, a petiole and a hypocotyl feeding assay are developed. To enable the processing of large numbers of single-leaf samples, their preparation and extraction are simplified and optimised. The metabolite profiles of primary metabolites are measured, and a simple relative calculation yields information on carbon allocation from the 13C sucrose. The method is tested by examining single leaves of one rosette at different developmental stages, both metabolically and with regard to carbon allocation from 13C sucrose. It is revealed that some metabolite pool sizes and 13C pools are tightly associated with relative leaf growth, i.e. with the developmental stage of the leaf. Fumaric acid turns out to be the most interesting candidate for further studies, because its pool size and 13C pool diverge considerably. In addition, the analyses are also performed on plants grown in the cold, and initial results show a different pattern of metabolite pool sizes across single leaves of one Arabidopsis rosette compared to plants grown at normal temperatures. Lastly, the in situ expression of REIL genes in the cold is examined using promoter-GUS plants. Initial results suggest that single-leaf metabolite profiles of reil2 differ from those of the WT.
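The "simple relative calculation" can be illustrated with a toy enrichment measure. A hedged sketch, not the thesis procedure: given the intensities of the isotopologues M+0 ... M+n of a fragment, the mean fractional 13C enrichment is the intensity-weighted share of labelled carbons (natural-abundance correction omitted for brevity).

```python
# Mean fractional 13C enrichment from isotopologue intensities
# (illustrative; real pipelines correct for natural isotope abundance).
def fractional_enrichment(intensities):
    """intensities[i] = abundance of the M+i isotopologue."""
    n = len(intensities) - 1  # number of carbons that can carry the label
    total = sum(intensities)
    return sum(i * a for i, a in enumerate(intensities)) / (n * total)

# fragment with 3 labelable carbons, mostly unlabelled:
print(fractional_enrichment([0.80, 0.12, 0.05, 0.03]))  # ~0.10
```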
The habilitation thesis covers theoretical investigations of light-induced processes in molecules. The study focuses on changes of the molecular electronic structure and geometry, caused either by photoexcitation in the course of a spectroscopic analysis or by selective control with shaped laser pulses. The applied and developed methods are predominantly based on quantum chemistry as well as on electron and nuclear quantum dynamics, and in part on molecular dynamics. The scientific problems studied deal with stereoisomerism and the question of how to either switch or distinguish chiral molecules using laser pulses, and with the essentials for simulating the spectroscopic response of biochromophores in order to unravel their photophysics. The findings not only explain experimental results and extend existing approaches, but also contribute significantly to the basic understanding of the investigated light-driven molecular processes. The main achievements can be divided into three parts: First, a quantum theory for enantio- and diastereoselective or, in general, stereoselective laser pulse control was developed and successfully applied to influence the chirality of molecular switches. The proposed axially chiral molecules possess different numbers of "switchable" stable chiral conformations, with one particular switch even featuring a truly achiral "off" state, which allows its chirality to be "turned on" enantioselectively. Furthermore, surface-mounted chiral molecular switches with several well-defined orientations were treated, where a newly devised, highly flexible stochastic pulse optimization technique provides high stereoselectivity and efficiency at the same time, even for coupled chirality-changing degrees of freedom. Despite the model character of these studies, the proposed types of chiral molecular switches and, all the more, the developed basic concepts are generally applicable for designing laser-pulse-controlled catalysts for asymmetric synthesis, or for achieving selective changes in the chirality of liquid crystals or in chiroptical nanodevices, implementable in information processing or as data storage. Second, laser-driven electron wavepacket dynamics based on ab initio calculations, namely time-dependent configuration interaction, was extended by the explicit inclusion of magnetic field-magnetic dipole interactions for the simulation of the qualitative and quantitative distinction of enantiomers in mass spectrometry by means of circularly polarized ultrashort laser pulses. The developed approach allows not only an explanation of the origin of the experimentally observed influence of the pulse duration on the detected circular dichroism in the ion yield, but also the prediction of laser pulse parameters for an optimal distinction of enantiomers by ultrashort shaped laser pulses. Moreover, these investigations, in combination with the previous ones, provide a fundamental understanding of the relevance of electric and magnetic interactions between linearly or non-linearly polarized laser pulses and (pro-)chiral molecules, for either control by enantioselective excitation or distinction by enantiospecific excitation. Third, for selected light-sensitive biological systems of central importance, such as
antenna complexes of photosynthesis, simulations of processes which take place during and after photoexcitation of their chromophores were performed in order to explain experimental (spectroscopic) findings as well as to understand the underlying photophysical and photochemical principles. In particular, aspects of normal mode mixing due to geometrical changes upon photoexcitation and their impact on (time-dependent) vibronic and resonance Raman spectra, as well as on intramolecular energy redistribution, were addressed. In order to explain unresolved experimental findings, a simulation program for the calculation of vibronic and resonance Raman spectra, accounting for changes in both vibrational frequencies and normal modes, was created based on a time-dependent formalism. In addition, the influence of the biochemical environment on the electronic structure of the chromophores was studied via electrostatic interactions and mechanical embedding using hybrid quantum-classical methods. Environmental effects were found to be important, in particular, for the excitonic coupling of chromophores in light-harvesting complex II. Although the simulations for such highly complex systems are still restricted by various approximations, the improved approaches and the obtained results are important contributions to a better understanding of light-induced processes in biosystems, which also supports efforts towards their artificial reproduction.
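For context, the semiclassical light-matter coupling that such a time-dependent configuration interaction treatment extends is, in the electric and magnetic dipole approximation (the textbook form, not necessarily the exact operator set used in the thesis):

\[ \hat{H}(t) \;=\; \hat{H}_0 \;-\; \hat{\boldsymbol{\mu}}\cdot\mathbf{E}(t) \;-\; \hat{\mathbf{m}}\cdot\mathbf{B}(t), \]

where \( \hat{\boldsymbol{\mu}} \) and \( \hat{\mathbf{m}} \) are the electric and magnetic dipole operators and \( \mathbf{E}(t) \), \( \mathbf{B}(t) \) are the fields of the circularly polarized pulse. The interference of the two coupling terms is what makes the response of the two enantiomers to left- and right-circularly polarized pulses differ, giving rise to the circular dichroism in the ion yield discussed above.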
Today, it is well known that galaxies like the Milky Way consist not only of stars but also of gas and dust. The galactic halo, a sphere of gas that surrounds the stellar disk of a galaxy, is especially interesting. It provides a wealth of information about gaseous material flowing towards and away from galaxies and about their hierarchical evolution. For the Milky Way, the so-called high-velocity clouds (HVCs), fast-moving neutral gas complexes in the halo that can be traced by absorption-line measurements, are believed to play a crucial role in the overall matter cycle of our Galaxy. Over the last decades, the properties of these halo structures and their connection to the local circumgalactic and intergalactic medium (CGM and IGM, respectively) have been investigated in great detail by many different groups. So far it remains unclear, however, to what extent the results of these studies can be transferred to other galaxies in the local Universe. In this thesis, we study the absorption properties of Galactic HVCs and compare the HVC absorption characteristics with those of intervening QSO absorption-line systems at low redshift. The goal of this project is to improve our understanding of the spatial extent and physical conditions of gaseous galaxy halos in the local Universe. In the first part of the thesis we use HST/STIS ultraviolet spectra of more than 40 extragalactic background sources to statistically analyze the absorption properties of the HVCs in the Galactic halo. We determine fundamental absorption-line parameters, including covering fractions of different weakly, intermediately, and highly ionized metals, with a particular focus on SiII and MgII. Owing to the similarity in the ionization properties of SiII and MgII, we are able to estimate the contribution of HVC-like halo structures to the cross section of intervening strong MgII absorbers at z = 0. Our study implies that only the most massive HVCs would be regarded as strong MgII absorbers if the Milky Way halo were seen as a QSO absorption-line system from an exterior vantage point. Combining the observed absorption cross section of Galactic HVCs with the well-known number density of intervening strong MgII absorbers at z = 0, we conclude that the contribution of infalling gas clouds (i.e., HVC analogs) in the halos of Milky Way-type galaxies to the cross section of strong MgII absorbers is 34%. This result indicates that only about one third of the strong MgII absorption can be associated with HVC analogs around other galaxies, while the majority of the strong MgII systems is possibly related to galaxy outflows and winds. The second part of this thesis focuses on the properties of intervening metal absorbers at low redshift. The analysis of the frequency and physical conditions of intervening metal systems in QSO spectra and of their relation to nearby galaxies offers new insights into the typical conditions of gaseous galaxy halos. One major aspect of our study was to regard intervening metal systems as possible HVC analogs. We perform a detailed analysis of absorption-line properties and line statistics for 57 metal absorbers along 78 QSO sightlines using newly obtained ultraviolet spectra from HST/COS. We find clear evidence for a bimodal distribution of the HI column density in the absorbers, a trend that we interpret as a sign of two different classes of absorption systems (with HVC analogs at the high column density end).
With the help of the strong transitions of SiII λ1260, SiIII λ1206, and CIII λ977 we have set up Cloudy photoionization models to estimate the local ionization conditions, gas densities, and metallicities. We find that the intervening absorption systems we studied have, on average, physical conditions similar to those of Galactic HVC absorbers, providing evidence that many of them represent HVC analogs in the vicinity of other galaxies. We therefore determine typical halo sizes for SiII, SiIII, and CIII for L = 0.01L∗ and L = 0.05L∗ galaxies. Based on the covering fractions of the different ions in the Galactic halo, we find that, for example, the typical halo size for SiIII is ∼160 kpc for L = 0.05L∗ galaxies. We test the plausibility of this result by searching for known galaxies close to the QSO sightlines and at similar redshifts as the absorbers. We find that more than 34% of the measured SiIII absorbers have galaxies associated with them, with the majority of the absorbers indeed being at impact parameters ρ ≤ 160 kpc.
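As a rough guide to how such cross-section arguments work, the standard low-redshift relation between the number of absorbers per unit redshift, the space density of galaxies, and the halo geometry reads (a textbook estimate, not the thesis's exact formalism):

\[ \frac{dN}{dz} \;\approx\; \frac{c}{H_0}\, n_{\mathrm{gal}}\, \pi R_h^2\, \langle f_c \rangle \qquad (z \approx 0), \]

where \( n_{\mathrm{gal}} \) is the number density of galaxies above a given luminosity, \( R_h \) the halo radius for the ion in question, and \( \langle f_c \rangle \) its covering fraction. A measured \( dN/dz \) together with a covering fraction therefore constrains the typical halo radius, which is how halo sizes of order 160 kpc follow from the absorber statistics quoted above.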
Organizations strive to gain competitive advantages and to increase customer satisfaction. To ensure the quality and efficiency of their business processes, they perform business process management. An important part of process management that happens on the daily operational level is process controlling. A prerequisite of controlling is process monitoring, i.e., keeping track of the performed activities in running process instances. Only by process monitoring can business analysts detect delays and react to deviations from the expected or guaranteed performance of a process instance. To enable monitoring, process events need to be collected from the process environment. When a business process is orchestrated by a process execution engine, monitoring is available for all orchestrated process activities. Many business processes, however, do not lend themselves to automatic orchestration, e.g., because of the required freedom of action. This situation is often encountered in hospitals, where most business processes are enacted manually. Hence, in practice it is often inefficient or infeasible to document and monitor every process activity. Additionally, manual process execution and documentation are prone to errors; e.g., the documentation of activities can be forgotten. Thus, organizations face the challenge of process events that occur but are not observed by the monitoring environment. These unobserved process events can serve as a basis for operational process decisions, even without exact knowledge of when they happened or when they will happen. An exemplary decision is whether to invest more resources to ensure the timely completion of a case, anticipating that the process end event will otherwise occur too late. This thesis offers means to reason about unobserved process events in a probabilistic way. We address decisive questions of process managers (e.g., "When will the case be finished?" or "When did we perform the activity that we forgot to document?"). As the main contribution, we introduce an advanced probabilistic model for business process management that is based on a stochastic variant of Petri nets. We present a holistic approach to using the model effectively along the business process lifecycle. To this end, we provide techniques to discover such models from historical observations, to predict the termination time of processes, and to ensure quality by managing missing data. We propose mechanisms to optimize the configuration for monitoring and prediction, i.e., to offer guidance in selecting important activities to monitor. An implementation is provided as a proof of concept. For evaluation, we compare the accuracy of the approach with that of state-of-the-art approaches using real process data from a hospital. Additionally, we show its more general applicability in other domains by applying the approach to process data from logistics and finance.
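To illustrate the flavour of such probabilistic termination-time prediction, the following is a minimal Monte Carlo sketch for a purely sequential process with exponentially distributed activity durations; activity names and rates are hypothetical, and the thesis's stochastic Petri net model is considerably richer (concurrency, choices, missing-data handling).

```python
# Minimal sketch of Monte-Carlo remaining-time prediction for a running
# process instance, assuming a simple sequential process whose activity
# durations follow exponential distributions fitted from historical logs.
import random

# mean duration (hours) per activity, hypothetically fitted from event logs
mean_duration = {"register": 0.5, "examine": 2.0, "treat": 4.0, "discharge": 1.0}

def predict_remaining(pending_activities, n_samples=10_000):
    """Return the mean simulated time until the process end event."""
    samples = []
    for _ in range(n_samples):
        # draw one duration per still-pending activity and sum them
        t = sum(random.expovariate(1.0 / mean_duration[a]) for a in pending_activities)
        samples.append(t)
    return sum(samples) / n_samples

# A case has completed "register" and "examine"; predict time to completion.
print(predict_remaining(["treat", "discharge"]))  # ~5.0 hours on average
```

The full distribution of the samples, rather than only its mean, is what allows a process manager to quantify the risk that the end event occurs too late.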
In this work, the development of temperature- and protein-responsive sensor materials based on biocompatible inverse hydrogel opals (IHOs) is presented. With these materials, large biomolecules can be specifically recognised and the binding event visualised. The IHOs were prepared via a template process, for which monodisperse silica particles were vertically deposited onto glass slides in the first step. The obtained colloidal crystals, with a thickness of 5 μm, displayed opalescent reflections because of the uniform alignment of the colloids. In the second step, the template was embedded in a matrix consisting of a biocompatible, thermoresponsive hydrogel. The comonomers were selected from the family of oligo(ethylene glycol) methacrylates. The monomer solution was injected into a polymerisation mould containing the colloidal crystal as a template. The space in between the template particles was filled with the monomer solution and the hydrogel was cured via UV polymerisation. The particles were then chemically etched, which resulted in a porous inner structure. The uniform alignment of the pores, and therefore the opalescent reflection, was maintained, so these systems were denoted inverse hydrogel opals. A pore diameter of several hundred nanometres as well as interconnections between the pores should facilitate the diffusion of larger (bio)molecules, which has so far been a challenge for such systems. The copolymer composition was chosen to yield a hydrogel collapse above 35 °C. All hydrogels showed pronounced swelling in water below the critical temperature. The incorporation of a reactive monomer with hydroxyl groups provided a coupling site for the introduction of recognition units for analytes, e.g. proteins. As a test system, biotin as a recognition unit for avidin was coupled to the IHO via polymer-analogous Steglich esterification. The amount of accessible biotin was quantified with a colorimetric binding assay. When avidin was added to the biotinylated IHO, the wavelength of the opalescent reflection shifted significantly, and the binding event was thereby visualised. This effect is based on the change in the swelling behaviour of the hydrogel after binding of the hydrophilic avidin, which is amplified by the thermoresponsive nature of the hydrogel. Swelling or shrinking of the pores changes the distance between the crystal planes, which are responsible for the colour of the reflection. These findings open up the possibility of creating sensor materials for further biomolecules in the size range of avidin.
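The optical readout rests on the standard Bragg-Snell relation for opal photonic crystals, stated here for illustration:

\[ \lambda_{\max} \;=\; 2\, d_{111} \sqrt{n_{\mathrm{eff}}^{2} - \sin^{2}\theta}, \]

where \( d_{111} \) is the spacing of the (111) lattice planes, \( n_{\mathrm{eff}} \) the effective refractive index of hydrogel matrix and pore filling, and \( \theta \) the angle of incidence. Swelling or shrinking of the hydrogel changes \( d_{111} \) (and, to a lesser extent, \( n_{\mathrm{eff}} \)), which is why the avidin binding event appears as a measurable shift of the reflected wavelength.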
The name Mandela first became inscribed in the annals of African liberation as nothing particularly unusual at the time. The late fifties were an era of trials and detentions in the colonies. The Treason Trial, which took place from 1956 to 1961, was closely followed by those of my generation, largely through Drum Magazine.
The Prussian geologist Leopold von Buch was a lifelong friend of Alexander von Humboldt and had a significant influence on Humboldt’s geological ideas. In a talk held in Berlin in 1831, which is published here for the first time, von Buch presented the Duria Antiquior of 1830 by the English geologist Henry De La Beche. The Duria Antiquior is widely regarded as the earliest depiction of a scene of prehistoric life from deep time. The print raised new questions about the processes of geohistorical change. The talk reveals that Leopold von Buch was a true scientist of the Romantic Age: his descriptions of geohistorical organismic transformations draw on pictorial examples of transformation from classical literature. The talk also illustrates how influential English geologists were for geohistorical reconstructions in Germany.
It was the goal of this work to explore two different synthesis pathways using green chemistry. The first part of this thesis focuses on the use of the urea-glass route towards single-phase manganese nitride and manganese nitride/oxide nano-composites embedded in carbon, while the second part focuses on the use of the “saccharide route” (namely cellulose, sucrose, glucose and lignin) towards metal (Ni0), metal alloy (Pd0.9Ni0.1, Pd0.5Ni0.5, Fe0.5Ni0.5, Cu0.5Ni0.5 and W0.15Ni0.85) and ternary carbide (Mn0.75Fe2.25C) nanoparticles embedded in carbon. With battery applications in mind, MnN0.43 nanoparticles surrounded by a graphitic shell and embedded in carbon with a high surface area (79 m^2/g) were synthesized, following a previously established route. The comparison of the material characteristics before and after discharge showed no remarkable difference in composition and only slight morphological differences, meaning the particles are stable but agglomerate. The graphitic shell contributes to the resistance of the material and leads to good cyclic stability over 140 cycles, with a capacity of 230 mAh/g after the first charge/discharge and coulombic efficiencies close to 100%. Due to the low voltage towards Li/Li+ and the low polarization, it might be an attractive anode material for lithium-ion batteries. However, the capacity is still noticeably lower than the theoretical value for MnN0.43. A mixture of MnN0.43 and MnO nanoparticles embedded in carbon (surface area 93 m^2/g) improved the cyclic stability to over 160 cycles, giving a capacity of 811 mAh/g, which is considerably higher than the capacity of the conventional anode material graphite (372 mAh/g). This nano-composite seems to agglomerate less during discharge. Interestingly, although the capacity is much higher than that of the single-phase manganese nitride, the nano-composite seems to contain only MnN0.43 nanoparticles after discharge, with no oxide phase to be found. Concerning catalysis applications, different metal, metal alloy, and metal carbide nanoparticles were synthesized using the saccharide route. At first, systems that had been investigated before, namely Pd0.9Ni0.1, Pd0.5Ni0.5, Fe0.5Ni0.5 and Mn0.75Fe2.25C with cellulose as the carbon source, were prepared and tested in an alkylation reaction of toluene with benzyl chloride. Unexpectedly, the metal alloys did not show any catalytic activity, but the ternary carbide Mn0.75Fe2.25C showed good catalytic activity, with 98% conversion after a 9-hour reaction time (110 °C). In a second step, the saccharide route was modified towards other carbon sources and carbon-to-metal ratios in order to improve the homogeneity of the samples and the accessibility of the particle surfaces. The carbon sources sucrose and glucose share the basic carbohydrate structure of cellulose but have shorter (polymeric) chain lengths. Indeed, cellulose could be successfully replaced by sucrose and glucose. A lower carbon-to-metal ratio was found to influence the size, homogeneity and accessibility (as evidenced by TEM) of the samples. Since sucrose is a foodstuff, glucose is the better choice as a carbon source. Using glucose, the synthesis of Cu0.5Ni0.5 and W0.15Ni0.85 nano-composites was also possible, although the latter was never obtained as a pure phase.
These alloy nano-composites, along with Ni0 nanoparticles also prepared with glucose, were tested for their catalytic activity towards the reduction of phenylacetylene. The results obtained suggest that any (poly)saccharide, including lignin, could be used as a carbon source. The Ni0 nano-composites prepared with lignin as the carbon source were tested, along with those prepared with cellulose and sucrose, for their catalytic activity in the transfer hydrogenation of nitrobenzene (the results were compared with those of bare nickel nanoparticles and nickel supported on carbon), leading to very promising results. Based on the urea-glass route and the saccharide route, simple equipment and transition metals, a one-pot synthesis with scale-up possibilities was achieved towards new materials that can be applied in catalysis and battery systems.
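For reference, the theoretical gravimetric capacities quoted above follow from the standard relation

\[ C_{\mathrm{th}} \;=\; \frac{n\,F}{3.6\,M} \;\; \mathrm{mAh\,g^{-1}}, \]

with \( n \) the number of electrons transferred per formula unit, \( F = 96485\ \mathrm{C\,mol^{-1}} \) the Faraday constant, and \( M \) the molar mass in g mol\(^{-1}\). For graphite forming LiC\(_6\) (\( n = 1 \), \( M = 72.06\ \mathrm{g\,mol^{-1}} \)) this gives \( 96485/(3.6 \times 72.06) \approx 372\ \mathrm{mAh\,g^{-1}} \), the benchmark value cited above.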
The sharply rising level of atmospheric carbon dioxide resulting from anthropogenic emissions is one of the greatest environmental concerns facing our civilization today. Metal-organic frameworks (MOFs) are a new class of materials constructed from metal-containing nodes bonded to organic bridging ligands. MOFs could serve as an ideal platform for the development of next-generation CO2 capture materials owing to their large capacity for the adsorption of gases and their structural and chemical tunability. The ability to rationally select the framework components is expected to allow the affinity of the internal pore surface toward CO2 to be precisely controlled, facilitating materials properties that are optimized for the specific type of CO2 capture to be performed (post-combustion capture, pre-combustion capture, or oxy-fuel combustion) and potentially even for the specific power plant in which the capture system is to be installed. For this reason, significant effort has been made in recent years to improve the gas separation performance of MOFs, and some studies evaluating the prospects of deploying these materials in real-world CO2 capture systems have begun to emerge. We have developed six new MOFs, denoted IFPs (IFP-5, -6, -7, -8, -9, -10; IFP = Imidazolate Framework Potsdam), and two hydrogen-bonded molecular building blocks (MBBs, named 1 and 2 for the Zn- and Co-based compounds, respectively), which were synthesized, characterized and applied for gas storage. The IFP structures possess 1D hexagonal channels. The metal centre and the substituent group at the C2 position of the linker protrude into the open channels and determine their accessible diameter. Interestingly, the channel diameters (range: 0.3 to 5.2 Å) of the IFP structures are tuned by the metal centre (Zn, Co and Cd) and the substituent at the C2 position of the imidazolate linker. Moreover, the hydrogen-bonded MBBs 1 and 2 are formed by in situ functionalization of a ligand under solvothermal conditions. Two different types of channels are observed for 1 and 2. Both materials contain solvent-accessible void space; the solvent could easily be removed under high vacuum. The porous frameworks maintained their crystalline integrity even without solvent molecules. N2, H2, CO2 and CH4 gas sorption isotherms were measured. The gas uptake capacities are comparable with those of other frameworks. The gas uptake capacity is reduced when the channel diameter is narrow. For example, the channel diameter of IFP-5 (3.8 Å) is slightly smaller than that of IFP-1 (4.2 Å); hence, its gas uptake capacity and Brunauer-Emmett-Teller (BET) surface area are slightly lower than those of IFP-1. The selectivity depends not only on the size of the gas components (kinetic diameter: CO2 3.3 Å, N2 3.6 Å and CH4 3.8 Å) but also on the polarizability of the surface and of the gas components. IFP-5 and -6 have potential applications for the separation of CO2 and CH4 from N2-containing gas mixtures and from CO2/CH4-containing gas mixtures. The gas sorption isotherms of IFP-7, -8, -9 and -10 exhibited hysteretic behavior due to flexible alkoxy (e.g., methoxy and ethoxy) substituents. This phenomenon is a gate effect, which is rarely observed in microporous MOFs. IFP-7 (Zn-centred) has a flexible methoxy substituent; this is the first example in which a flexible methoxy substituent shows gate-opening behavior in a MOF. Owing to the methoxy functional groups in the hexagonal channels, IFP-7 acts as a molecular gate for N2 gas.
Due to the polar methoxy groups and channel walls, a wide hysteretic isotherm was observed during gas uptake. The estimated N2 BET surface area for 1 is 471 m2 g-1 and the Langmuir surface area is 570 m2 g-1. This surface area is slightly higher than those of azolate-based hydrogen-bonded supramolecular assemblies, and comparable to or higher than those of some hydrogen-bonded porous organic molecules.
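The quoted surface areas come from the standard BET evaluation of the N2 isotherm, stated here for illustration:

\[ \frac{p/p_0}{v\,(1 - p/p_0)} \;=\; \frac{1}{v_m C} \;+\; \frac{C - 1}{v_m C}\,\frac{p}{p_0}, \]

where \( v \) is the adsorbed gas volume at relative pressure \( p/p_0 \), \( v_m \) the monolayer volume, and \( C \) the BET constant. A linear fit in the usual low-pressure range yields \( v_m \), from which the specific surface area follows using the N2 cross-sectional area of about 0.162 nm\(^2\) per molecule.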
We consider infinite-dimensional diffusions where the interaction between the coordinates has a finite extent both in space and time. In particular, it is not assumed to be smooth or Markov. The initial state of the system is Gibbs, given by a strong summable interaction. If the strength of this initial interaction is below a suitable level, and if the dynamical interaction is bounded from above in an appropriate way, we prove that the law of the diffusion at any time t is a Gibbs measure with an absolutely summable interaction. The main tool is a cluster expansion, in space and uniformly in time, of the Girsanov factor coming from the dynamics, together with the exponential ergodicity of the free dynamics towards an equilibrium product measure.
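For readers outside the field, the summability condition in the conclusion is the standard one: an interaction potential \( \Phi = (\Phi_A) \), indexed by the finite subsets \( A \) of the lattice, is absolutely summable if

\[ \sup_{i} \;\sum_{A \ni i} \|\Phi_A\|_{\infty} \;<\; \infty, \]

which is the classical condition guaranteeing that the Gibbs measure associated with \( \Phi \) is well defined.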
Within the course of this thesis, I have investigated the complex interplay between electron and lattice dynamics in nanostructures of perovskite oxides. Femtosecond hard X-ray pulses were utilized to directly probe the evolution of atomic rearrangements driven by ultrafast optical excitation of electrons. The physics of complex materials with a large number of degrees of freedom can be interpreted once the exact fingerprint of ultrafast lattice dynamics in time-resolved X-ray diffraction experiments is well known for a simple model system. The motion of atoms in a crystal can be probed directly and in real time by femtosecond pulses of hard X-ray radiation in a pump-probe scheme. In order to provide such ultrashort X-ray pulses, I built a laser-driven plasma X-ray source. The setup was extended by a stable goniometer, a two-dimensional X-ray detector and a cryogen-free cryostat. The data acquisition routines of the diffractometer for these ultrafast X-ray diffraction experiments were further improved in terms of signal-to-noise ratio and angular resolution. The implementation of a high-speed reciprocal-space mapping technique allowed for a two-dimensional structural analysis with femtosecond temporal resolution. I studied the ultrafast lattice dynamics, namely the excitation and propagation of coherent phonons, in photoexcited thin films and superlattice structures of the metallic perovskite SrRuO3. Due to the quasi-instantaneous coupling of the lattice to the optically excited electrons in this material, a spatially and temporally well-defined thermal stress profile is generated in SrRuO3. This makes it possible to understand in great detail the signatures of the resulting coherent lattice dynamics in time-resolved X-ray diffraction data, e.g. the appearance of a transient Bragg peak splitting in both thin films and superlattice structures of SrRuO3. In addition, a comprehensive simulation toolbox to calculate the ultrafast lattice dynamics and the resulting X-ray diffraction response of photoexcited one-dimensional crystalline structures was developed in this thesis work. With this powerful experimental and theoretical framework at hand, I studied the excitation and propagation of coherent phonons in more complex material systems. In particular, I revealed strongly localized charge carriers after above-bandgap femtosecond photoexcitation of the prototypical multiferroic BiFeO3, which are the origin of a quasi-instantaneous and spatially inhomogeneous stress that drives coherent phonons in a thin film of the multiferroic. In a structurally imperfect thin film of the ferroelectric Pb(Zr0.2Ti0.8)O3, the ultrafast reciprocal-space mapping technique was applied to follow a purely strain-induced change of mosaicity on a picosecond time scale. These results point to a strong coupling of in- and out-of-plane atomic motion mediated exclusively by structural defects.
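The kinematic relation behind such transient Bragg peak shifts and splittings follows from differentiating Bragg's law \( \lambda = 2d\sin\theta \) at fixed wavelength (a standard result, stated here for illustration):

\[ \Delta\theta_B \;=\; -\,\varepsilon \tan\theta_B, \qquad \varepsilon = \frac{\Delta d}{d}, \]

so a photoexcited layer that is transiently strained by \( \varepsilon \) diffracts at a shifted angle; the coexistence of strained and unstrained regions within the probed volume then appears as a split Bragg peak in the time-resolved diffraction data.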
1. Motivation and introduction
2. International asset allocation
2.1 Risk and return drivers in international asset allocation
2.2 Passive and active investment approaches
2.3 Is international diversification advantageous?
3. Case
4. Interaction levels of the exchange rate dimension
4.1 Role of the reference currency
4.2 Decision on hedging exchange rate risks
4.3 Role of the investment currency
4.4 Role of the investment claim
5. Conclusion
1. Introduction to China’s bank reform
1.1 Stage 1 (1978–1993): Rebuilding the financial system
1.2 Stage 2 (1994–1997): Regulating the financial system
1.3 Stage 3 (1998–2002): Deepening the reform of state-owned commercial banks
1.4 Stage 4 (2003–present): Public listing of state-owned banks
2. The roles of SWFs in China’s bank reform
3. Future challenges
1. Introduction
2. The role of banks and what is different in banks?
3. Corporate governance and risk management
4. Risk taking and executive board composition
5. Compensation structures – how to improve models for banks?
6. Banking supervision and regulation
7. Reform of European institutions for financial stability
1. Introduction
2. The architecture of financial market regulation in Europe prior to the crisis
3. The new architecture of financial market regulation in Europe
4. Current issues in the political discussion on further needs to adapt the regulation and the structure of the financial markets in Europe
5. A brief summary
1. Porter strategic competitive analysis
2. A Porter analysis of the competitive advantage of banks in business lending and proprietary trading
3. Summary: competitive advantage of banks in business lending and proprietary trading
4. JPMorgan’s “London Whale” speculation
5. A common misapprehension about hedged positions in corporate debt
6. Conclusion