Wie verhandelt die Praxis?
(2015)
From the contents:
- 10 Jahre Responsibility to Protect: Ein Sieg für die Menschenrechte? – Eine politik- und rechtswissenschaftliche Analyse
- Neue Regeln zur Abwesenheit des Angeklagten vor dem IStGH:
Menschenrechtliche Anforderungen an In-absentia-Verfahren
- EGMR: S.A.S. ./. Frankreich – Urteilsbesprechung zum Burkaverbot
We study the segregation of subducted oceanic crust (OC) at the core-mantle boundary (CMB) and its ability to accumulate and form large thermochemical piles (such as the seismically observed Large Low Shear Velocity Provinces, LLSVPs). Our high-resolution numerical simulations suggest that the longevity of LLSVPs for up to three billion years, and possibly longer, can be ensured by a balance between the rate at which high-density OC material segregates to the CMB and the rate at which mantle upwellings entrain it away from the CMB.
For the range of parameters tested in this study, a large-scale compositional anomaly forms at the CMB, similar in shape and size to the LLSVPs. Neutrally buoyant thermochemical piles formed by mechanical stirring, in which a thermally induced negative density anomaly is balanced by the presence of a fraction of dense anomalous material, best resemble the geometry of LLSVPs. Such neutrally buoyant piles tend to emerge and survive for at least 3 Gyr in simulations with quite different parameters. We conclude that, for a plausible range of density anomalies of OC material in the lower mantle, the material is likely to segregate to the CMB, become mechanically mixed with the ambient material, and form neutrally buoyant large-scale compositional anomalies similar in shape to the LLSVPs.
We have developed an efficient FEM code with dynamically adaptive time and space resolution and a marker-in-cell methodology. This enabled us to model thermochemical mantle convection at realistically high convective vigor and with strong thermally induced viscosity variations, and to follow the long-term evolution of compositional fields.
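The marker-in-cell technique itself is standard: tracers carrying a composition value are advected through the flow and binned back onto grid cells. The following minimal sketch illustrates the idea only; it is not the authors' code, and it substitutes an assumed analytic, divergence-free 2-D velocity field for the FEM solution:

```python
import numpy as np

def velocity(x, y):
    # Single convection cell on the unit square (illustrative stand-in
    # for a computed flow field; divergence-free by construction).
    u = -np.sin(np.pi * x) * np.cos(np.pi * y)
    v = np.cos(np.pi * x) * np.sin(np.pi * y)
    return u, v

def advect(x, y, dt):
    # Midpoint (RK2) step keeps markers close to their streamlines.
    u1, v1 = velocity(x, y)
    u2, v2 = velocity(x + 0.5 * dt * u1, y + 0.5 * dt * v1)
    return x + dt * u2, y + dt * v2

rng = np.random.default_rng(0)
n_markers, nx, ny, dt = 20000, 32, 32, 0.01
x, y = rng.random(n_markers), rng.random(n_markers)
comp = (y < 0.1).astype(float)   # dense "oceanic crust" layer at the bottom

for _ in range(500):
    x, y = advect(x, y, dt)
    x, y = np.clip(x, 0, 1), np.clip(y, 0, 1)

# Bin marker composition back to the grid as a cell average.
ix = np.minimum((x * nx).astype(int), nx - 1)
iy = np.minimum((y * ny).astype(int), ny - 1)
grid_sum = np.zeros((nx, ny))
grid_cnt = np.zeros((nx, ny))
np.add.at(grid_sum, (ix, iy), comp)
np.add.at(grid_cnt, (ix, iy), 1)
grid_comp = np.where(grid_cnt > 0, grid_sum / np.maximum(grid_cnt, 1), 0.0)
print(grid_comp.mean())
```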
In this thesis we study reciprocal classes of Markov chains. Given a continuous-time Markov chain on a countable state space, acting as reference dynamics, the associated reciprocal class is the set of all probability measures on path space that can be written as a mixture of its bridges. These processes possess a conditional independence property that generalizes the Markov property; the concept evolved from an idea of Schrödinger, who wanted to obtain a probabilistic interpretation of quantum mechanics.
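In symbols (our notation, simply restating the definition above): if R denotes the law of the reference chain on path space over [0, T] and R^{xy} = R( · | X_0 = x, X_T = y) its bridges, then the reciprocal class is

```latex
\mathcal{R}(R) \;=\; \left\{\, P \;:\; P = \int R^{xy}\,\pi(\mathrm{d}x,\mathrm{d}y)
\ \text{for some probability measure } \pi \text{ on pairs of states} \,\right\}.
```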
Associated to a reciprocal class is a set of reciprocal characteristics: space-time functions that determine the reciprocal class. We compute these characteristics explicitly and divide them into two main families: arc characteristics and cycle characteristics. As a byproduct, we obtain an explicit criterion for checking when two different Markov chains share their bridges.
Starting from the characteristics, we offer two different descriptions of the reciprocal class, including its non-Markovian probabilities.
The first is based on a pathwise approach, the second on short-time asymptotics. With the first approach, one produces a family of functional equations whose only solutions are precisely the elements of the reciprocal class. These equations are integration-by-parts formulas on path space, associated with derivative operators that perturb the paths by adding random loops. Several geometrical tools are employed to construct such formulas. The problem of obtaining sharp characterizations is also considered, showing some interesting connections with discrete geometry. Examples of such formulas are given in the framework of counting processes and random walks on Abelian groups, where the set of loops has a group structure.
In addition to this global description, we propose a second approach based on the short-time behavior of a reciprocal process. In the same way as the Markov property and short-time expansions of transition probabilities characterize Markov chains, we show that a reciprocal class is characterized by imposing the reciprocal property together with two families of short-time expansions for the bridges. Such a local approach is well suited to studying reciprocal processes on general countable graphs. As an application of our characterization, we consider several interesting graphs, such as lattices, planar graphs, the complete graph, and the hypercube.
Finally, we obtain some first results about concentration of measure implied by lower bounds on the reciprocal characteristics.
The relationship between nutrition and the development of chronic diseases, including metabolic syndrome, diabetes mellitus, cancer, and cardiovascular disease, has been well studied. Changes in the GH-IGF-1 axis in association with nutrition-related diseases have also been reported. The interplay between GH, total IGF-1, and the various inhibitory and stimulatory IGF-1 binding proteins (IGFBPs) determines IGF-1 bioactivity, that is, the ability of IGF-1 to induce phosphorylation of its receptor and consequently its signaling; IGF-1 bioactivity is therefore a sensitive readout of changes in the GH-IGF-1 system. Accumulating evidence suggests that both a high-protein diet, characterized by increased glucagon secretion, and insulin-induced hypoglycemia increase mortality, although the underlying mechanisms are unclear; both glucagon and insulin-induced hypoglycemia, however, are potent stimuli of GH secretion. The aim of the current study was to identify the impact of glucagon and insulin-induced hypoglycemia on IGF-1 bioactivity as a possible mechanism.
In a double-blind placebo-controlled study, glucagon was administered intramuscularly to 13 type 1 diabetic patients (6 males/7 females; BMI 24.8 ± 0.95 kg/m²), 11 obese subjects (OP; 5/6; 34.4 ± 1.7 kg/m²), and 13 healthy lean participants (LP; 6/7; 21.7 ± 0.6 kg/m²). In a second double-blind placebo-controlled study, 12 obese subjects (OP; 6/6; 34.4 ± 1.7 kg/m²) and 13 healthy lean participants (LP; 6/7; 21.7 ± 0.6 kg/m²) performed an insulin tolerance test. Changes in GH, total IGF-1, IGFBPs, and IGF-1 bioactivity, measured by the cell-based KIRA method, were investigated. In addition, the interaction between the metabolic hormones (glucagon and insulin) and the GH-IGF-1 system at the transcriptional level was studied using mouse primary hepatocytes.
In this work, glucagon was found to decrease IGF-1 bioactivity in humans independently of endogenous insulin levels, most likely through modulation of IGFBP-1 and IGFBP-2 levels. The glucagon-induced reduction in IGF-1 bioactivity may represent a novel mechanism underlying the impact of glucagon on GH secretion and may explain the negative effect of a high-protein diet on cardiovascular risk and mortality. In addition, insulin-induced hypoglycemia was associated with a decrease in IGF-1 bioactivity through up-regulation of IGFBP-2. These results may point to a possible, hitherto poorly explored mechanism behind the strong association between hypoglycemia and increased cardiovascular mortality among diabetic patients.
This dissertation investigates the working memory mechanism subserving human sentence processing and its relative contribution to processing difficulty as compared to syntactic prediction. Within the last decades, evidence for a content-addressable memory system underlying human cognition in general has accumulated (e.g., Anderson et al., 2004). In sentence processing research, it has been proposed that this general content-addressable architecture is also used for language processing (e.g., McElree, 2000).
Although there is a growing body of evidence from various kinds of linguistic dependencies that is consistent with a general content-addressable memory subserving sentence processing (e.g., McElree et al., 2003; Van Dyke, 2006), the case of reflexive-antecedent dependencies has challenged this view. It has been proposed that the processing of reflexive-antecedent dependencies uses syntactic-structure-based memory access rather than cue-based retrieval within a content-addressable framework (e.g., Sturt, 2003).
Two eye-tracking experiments on Chinese reflexives were designed to tease apart accounts assuming a syntactic-structure-based memory access mechanism from cue-based retrieval as implemented in ACT-R by Lewis and Vasishth (2005).
In both experiments, interference effects were observed from noun phrases which syntactically do not qualify as the reflexive's antecedent but match the animacy requirement the reflexive imposes on its antecedent. These results are interpreted as evidence against a purely syntactic-structure based memory access. However, the exact pattern of effects observed in the data is only partially compatible with the Lewis and Vasishth cue-based parsing model.
Therefore, an extension of the Lewis and Vasishth model is proposed. Two principles are added to the original model, namely 'cue confusion' and 'distractor prominence'.
Although interference effects are generally interpreted in favor of a content-addressable memory architecture, an alternative explanation for interference effects in reflexive processing has been proposed which, crucially, might reconcile interference effects with a structure-based account.
It has been argued that interference effects do not necessarily reflect cue-based retrieval interference in a content-addressable memory but might equally well be accounted for by interference effects which have already occurred at the moment of encoding the antecedent in memory (Dillon, 2011).
Three experiments (eye-tracking and self-paced reading) on German reflexives and Swedish possessives were designed to tease apart cue-based retrieval interference from encoding interference. The results of all three experiments suggest that there is no evidence that encoding interference affects the retrieval of a reflexive's antecedent.
Taken together, these findings suggest that the processing of reflexives can be explained with the same cue-based retrieval mechanism that has been invoked to explain syntactic dependency resolution in a range of other structures. This supports the view that the language processing system is located within a general cognitive architecture, with a general-purpose content-addressable working memory system operating on linguistic expressions.
Finally, two experiments (self-paced reading and eye-tracking) using Chinese relative clauses were conducted to determine the relative contribution to sentence processing difficulty of working-memory processes as compared to syntactic prediction during incremental parsing.
Chinese has the cross-linguistically rare property of being a language with subject-verb-object word order and pre-nominal relative clauses. This property leads to opposing predictions of expectation-based
accounts and memory-based accounts with respect to the relative processing difficulty of subject vs. object relatives.
Previous studies showed contradictory results, which has been attributed to different kinds of local ambiguities confounding the materials (Lin and Bever, 2011). The two experiments presented here are the first to compare Chinese relative clauses in syntactically unambiguous contexts.
The results of both experiments were consistent with the predictions of the expectation-based account of sentence processing but not with the memory-based account. From these findings, I conclude that any theory of human sentence processing needs to take into account the power of predictive processes unfolding in the human mind.
A lot has been published about the competencies needed by
students in the 21st century (Ravenscroft et al., 2012). However, equally
important are the competencies needed by educators in the new era
of digital education. We review the key competencies for educators in
light of the new methods of teaching and learning proposed by Massive
Open Online Courses (MOOCs) and their on-campus counterparts,
Small Private Online Courses (SPOCs).
Participants of this workshop will be confronted with examples of the considerable inconsistency of global Informatics education at lower secondary level. More importantly, they are invited to contribute actively to this issue in the form of short case studies from their countries.
Until now, very few countries have been successful in implementing
Informatics or Computing at primary and lower secondary level. The
spectrum from digital literacy to informatics, particularly as a discipline
in its own right, has not really achieved a breakthrough and seems to
be underrepresented for these age groups. The goal of this workshop is not only to discuss the anamnesis and diagnosis of this fragmented field, but also to suggest viable forms of therapy in the form of educational standards. Making visible good practices in some
countries and comparing successful approaches are rewarding tasks for
this workshop.
Discussing and defining common educational standards at a transcontinental level for students aged 14 to 15, in a readable, assessable, and acceptable form, should keep the participants active beyond the limited time of the workshop itself.
Let’s talk about CS!
(2015)
To communicate about a science is the most important key
competence in education for any science. Without communication we
cannot teach, so teachers should reflect carefully on the language they use in class. But the language students and teachers use to communicate about their CS courses is very heterogeneous, inconsistent, and deeply influenced by tool names. There is a considerable lack of research and discussion in CS education regarding terminology and the role of concepts and tools in our science. We do not have an agreed, consistent terminology that is helpful for learning our science, which makes it nearly impossible to do research on CS competencies as long as we have not agreed on the names used to describe them. This workshop intends to provide room for discussion and first ideas for future research in this field.
ProtoSense
(2015)
The poster and abstract describe the importance of teaching information security in school. After a short description of information security and its important aspects, I show how information security fits into different guidelines and models for computer science education, and why it is therefore one of the key competencies. Afterwards, I present a rough insight into the teaching of information security in Austria.
Current curricular trends require teachers in Baden-
Wuerttemberg (Germany) to integrate Computer Science (CS) into
traditional subjects, such as Physical Science. However, concrete guidelines
are missing. To fill this gap, we outline an approach where a
microcontroller is used to perform and evaluate measurements in the
Physical Science classroom.
Using the open-source Arduino platform, we expect students to acquire
and develop both CS and Physical Science competencies by using a
self-programmed microcontroller. In addition to this combined development
of competencies in Physical Science and CS, the subject matter
will be embedded in suitable contexts and learning environments,
such as weather and climate.
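As a sketch of what the host side of such a measurement setup could look like, assume (hypothetically; port name and data format are placeholders, and this is not from the paper) that the students' Arduino sketch prints one numeric reading per line over USB serial:

```python
import serial  # pyserial: pip install pyserial

# Hypothetical host-side logger: the microcontroller is assumed to print
# one reading (e.g. temperature in °C) per line at 9600 baud.
PORT = "/dev/ttyUSB0"  # placeholder; e.g. "COM3" on Windows

with serial.Serial(PORT, 9600, timeout=2) as ser:
    readings = []
    while len(readings) < 100:
        line = ser.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue  # timeout or empty line
        try:
            readings.append(float(line))
        except ValueError:
            continue  # skip malformed lines

mean = sum(readings) / len(readings)
print(f"{len(readings)} samples, mean = {mean:.2f} °C")
```

Students can then evaluate the logged series (means, extremes, trends), which is where the Physical Science and CS competencies meet.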
Think logarithmically!
(2015)
We discuss here a number of algorithmic topics that we use in our teaching and learning of mathematics and informatics to illustrate and document the power of the logarithm in designing very efficient algorithms and computations; logarithmic thinking is one of the most important key competencies for solving real-world practical problems. We also demonstrate how to introduce the logarithm independently of mathematical formalism, using a conceptual model of reducing a problem's size by at least half. It is quite surprising that the idea behind the logarithm is present in Euclid's algorithm, described almost 2000 years before John Napier invented logarithms.
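As an illustration of the halving model described above (our example, not taken from the paper): both repeated halving and Euclid's algorithm finish after only logarithmically many steps:

```python
def halving_steps(n):
    """How often can n be halved before reaching 1? About log2(n) times."""
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

def gcd_steps(a, b):
    """Euclid's algorithm. The remainder at least halves every two steps,
    so the number of iterations is O(log min(a, b))."""
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return steps

print(halving_steps(10**9))   # 29 steps (log2(10**9) is about 30), not ~10**9
print(gcd_steps(10**9, 777))  # a handful of steps
```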
A project involving the composition of a number of pieces
of music by public participants revealed levels of engagement with and
mastery of complex music technologies by a number of secondary student
volunteers. This paper reports briefly on some initial findings of
that project and seeks to illuminate an understanding of computational
thinking across the curriculum.
Physical computing covers the design and realization of interactive
objects and installations and allows students to develop concrete,
tangible products of the real world that arise from the learners’
imagination. This way, constructionist learning is raised to a level that
enables students to gain haptic experience and thereby concretizes the
virtual. In this paper the defining characteristics of physical computing
are described. Key competences to be gained with physical computing
will be identified.
Mentoring in a Digital World
(2015)
This paper focuses on the results of the evaluation of the first
pilot of an e-mentoring unit designed by the Hands-On ICT consortium,
funded by the EU LLL programme. The overall aim of this two-year
activity is to investigate the value for professional learning of Massive Open Online Courses (MOOCs) and Community Open Online Courses
(COOCs) in the context of a ‘community of practice’. Three units in the
first pilot covered aspects of using digital technologies to develop creative
thinking skills. The findings in this paper relate to the fourth unit
about e-mentoring, a skill that was important to delivering the course
content in the other three units. Findings about the e-mentoring unit
included: the students’ request for detailed profiles so that participants
can get to know each other, and the need to reconcile the different
interpretations of e-mentoring held by the participants when the course
begins. The evaluators concluded that the major issues were that: not all
professional learners would self-organise and network; and few would
wish to mentor their colleagues voluntarily. Therefore, the e-mentoring
issues will need careful consideration in pilots two and three to identify
how e-mentoring will be organised.
The study reported in this paper involved the employment
of specific in-class exercises using a Personal Response System (PRS).
These exercises were designed with two goals: to enhance students’
capabilities of tracing a given code and of explaining a given code in
natural language with some abstraction. The paper presents evidence
from the actual use of the PRS along with students’ subjective impressions
regarding both the use of the PRS and the special exercises. The
conclusions from the findings are followed by a short discussion of the benefits of PRS-based mental-processing exercises for learning programming and beyond.
In this paper we describe the recent state of our research
project concerning computer science teachers’ knowledge on students’
cognition. We did a comprehensive analysis of textbooks, curricula
and other resources, which give teachers guidance to formulate assignments.
In comparison to other subjects there are only a few concepts
and strategies taught to prospective computer science teachers in university.
We summarize them and give an overview of our empirical approach to measuring this knowledge.
How does the Implementation of a Literacy Learning Tool Kit influence Literacy Skill Acquisition?
(2015)
This study aimed to follow how teachers transfer skills into results while using the ABRA literacy software. This was done in
the second part of the pilot study whose aim was to provide equity to
control group teachers and students by exposing them to the ABRACADABRA
treatment after the end of phase 1. This opportunity was
used to follow the phase 1 teachers to see how the skills learned were
being transformed into results. A standard three-day initial training and
planning session on how to use ABRA to teach literacy was held at the
beginning of each phase for ABRA teachers (phase 1 experimental and
phase 2 delayed ABRA). Teachers were provided with teaching materials
including a tentative ABRA curriculum developed to align with the
Kenyan English Language requirements for year 1 and 3 students. Results
showed that, although there was no significant difference between the groups in vocabulary-related subscales (word reading and meaning as well as sentence comprehension), students in ABRACADABRA classes improved their scores at a significantly higher rate than students in control classes on comprehension-related scores. An average student in the ABRACADABRA group improved by 12 and 16 percentile points, respectively, compared to their counterparts in the control group.
The Technology Proficiency Self-Assessment (TPSA) questionnaire
has been used for 15 years in the USA and other nations as a
self-efficacy measure for proficiencies fundamental to effective technology
integration in the classroom learning environment. Internal consistency
reliabilities for each of the five-item scales have typically ranged
from .73 to .88 for preservice or inservice technology-using teachers.
Due to changing technologies used in education, researchers sought to
renovate partially obsolete items and extend self-efficacy assessment to
new areas, such as social media and mobile learning. Analysis of 2014
data gathered with a new, 34-item version of the TPSA indicates that the four established areas of email, World Wide Web (WWW), integrated applications, and teaching with technology continue to form consistent scales, with reliabilities ranging from .81 to .93, while the 14 new items representing emerging technologies and media separate into two scales, each with internal consistency reliabilities greater than .9.
The renovated TPSA is deemed to be worthy of continued use in the
teaching with technology context.
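The internal-consistency figures quoted are presumably Cronbach's alpha; as a generic sketch (our illustration, not the TPSA analysis code), alpha for a k-item scale compares the item variances with the variance of the total score:

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) score matrix.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Toy data: 5 respondents rating a 4-item scale on 1-5.
scores = [[4, 5, 4, 4],
          [2, 2, 3, 2],
          [5, 5, 5, 4],
          [3, 3, 2, 3],
          [4, 4, 4, 5]]
print(round(cronbach_alpha(scores), 2))  # ~0.94 for this toy data
```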
Computational Thinking
(2015)
Digital technology has radically changed the way people
work in industry, finance, services, media and commerce. Informatics
has contributed to the scientific and technological development of our
society in general and to the digital revolution in particular. Computational
thinking is the term indicating the key ideas of this discipline that
might be included in the key competencies underlying the curriculum
of compulsory education. The educational potential of informatics has
a history dating back to the sixties. In this article, we briefly revisit this
history looking for lessons learned. In particular, we focus on experiences
of teaching and learning programming. However, computational
thinking is more than coding. It is a way of thinking and practicing interactive
dynamic modeling with computers. We advocate that learners
can practice computational thinking in playful contexts where they can
develop personal projects (for example, building video games or robots) and share and discuss their constructions with others. In our view, this
approach allows an integration of computational thinking in the K-12
curriculum across disciplines.
How Things Work
(2015)
Recognizing and defining functionality is a key competence
adopted in all kinds of programming projects. This study investigates
how far students without specific informatics training are able to identify
and verbalize functions and parameters. It presents observations
from classroom activities on functional modeling in high school chemistry
lessons with altogether 154 students. Finally it discusses the potential
of functional modelling to improve the comprehension of scientific
content.
This paper originated from discussions about the need for
important changes in the curriculum for Computing including two focus
group meetings at IFIP conferences over the last two years. The
paper examines how recent developments in curriculum, together with
insights from curriculum thinking in other subject areas, especially mathematics
and science, can inform curriculum design for Computing.
The analysis presented in the paper provides insights into the complexity
of curriculum design as well as identifying important constraints and
considerations for the ongoing development of a vision and framework
for a Computing curriculum.
This article discusses the key competencies in informatics and ICT from the philosophical foundation presented by Martha Nussbaum, known as the 'ten central capabilities'. Firstly, the outline of 'The Capability Approach', presented by Amartya Sen and Nussbaum as a theoretical framework for assessing the state of social welfare, is explained. Secondly, the body of Nussbaum's ten central capabilities and the reason for applying it as the basis of the discussion are shown. Thirdly, the relationship between the concepts of 'capability' and 'competency' is discussed. After that, the author's view of the key competencies in informatics and ICT, derived from the examination of Nussbaum's ten capabilities, is presented.
The objectives of this study were to examine (a) the effect
of dynamic assessment (DA) in a 3D Immersive Virtual Reality
(IVR) environment as compared with computerized 2D and noncomputerized
(NC) situations on cognitive modifiability, and (b) the
transfer effects of these conditions on more difficult problem solving
administered two weeks later in a non-computerized environment. A
sample of 117 children aged 6:6-9:0 years was randomly assigned to three experimental DA conditions (3D, 2D, and NC) and one control group (C). All groups received the pre- and post-teaching Analogies subtest of the Cognitive Modifiability Battery (CMB-AN). The experimental groups received a teaching phase in conditions similar to the pre- and post-teaching phases. The findings showed that cognitive
modifiability, in a 3D IVR, was distinctively higher than in the two
other experimental groups (2D computer group and NC group). It was
also found that the 3D group showed significantly higher performance
in transfer problems than the 2D and NC groups.
BugHunt
(2015)
Competencies related to operating systems and computer
security are usually taught systematically. In this paper we present
a different approach, in which students have to remove virus-like
behaviour on their respective computers, which has been induced by
software developed for this purpose. They have to develop appropriate
problem-solving strategies and thereby explore essential elements of
the operating system. The approach was piloted in two computer science courses at a regional general upper secondary school and generated great motivation and interest among the participating students.
In the project MoKoM, which was funded by the German Research Foundation (DFG) from 2008 to 2012, a test instrument
measuring students’ competences in computer science was developed.
This paper presents the results of an expert rating of the levels of
students’ competences done for the items of the instrument.
First, we describe the difficulty-relevant features that were used for the evaluation; these were derived from findings and resources in computer science, psychology, and didactics. The potential and desiderata of this research method are discussed further on. Finally, we present our conclusions on the results and give an outlook on further steps.
The growing impact of globalisation and the development of
a ‘knowledge society’ have led many to argue that 21st century skills are essential for life in today's society and that ICT is central to their development. This paper describes how 21st century skills, in
particular digital literacy, critical thinking, creativity, communication
and collaboration skills, have been conceptualised and embedded in the
resources developed for teachers in iTEC, a four-year, European project.
The effectiveness of this approach is considered in light of the data collected through the evaluation of the pilots, which considers both the potential benefits of using technology to support the development of 21st century skills and the challenges of doing so. Finally, the paper
discusses the learning support systems required in order to transform
pedagogies and embed 21st century skills. It is argued that support is
required in standards and assessment; curriculum and instruction; professional
development; and learning environments.
This paper discusses results from a small-scale research
study, together with some recently published research into student
perceptions of ICT for learning in schools, to consider relevant skills that do not currently appear to be taught. The paper concludes by
raising three issues relating to learning with and through ICT that need
to be addressed in school curricula and classroom teaching.
The Student Learning Ecology
(2015)
Educational research on social media has shown that students use it for socialisation, personal communication, and informal learning. Recent studies have argued that students to some degree use social media to carry out formal schoolwork. This article gives an explorative account of how a small sample of Norwegian high school
students use social media to self-organise formal schoolwork. This
user pattern can be called a “student learning ecology”, which is a
user perspective on how participating students gain access to learning
resources.
Teaching Data Management
(2015)
Data management is a central topic in computer science as
well as in computer science education. In recent years, the topic has been changing tremendously, as its impact on daily life becomes increasingly visible. Nowadays, everyone not only needs to manage data of various kinds, but also continuously generates large amounts of data. In addition, Big Data and data analysis are intensively discussed in public dialogue because of their influence on society. To understand such discussions and to be able to participate in them, fundamental knowledge of data management is necessary. In particular, awareness of the threats that accompany the ability to analyze large amounts of data in near real-time is becoming increasingly important. This raises the question of which key competencies are necessary for daily dealings with data and data management.
In this paper, we first point out the importance of data management and of Big Data in daily life. On this basis, we analyze which key competencies everyone needs in data management to handle data properly in daily life. Afterwards, we discuss the impact of these changes in data management on computer science education and, in particular, on database education.
Social networks are currently at the forefront of tools that lend themselves to Personal Learning Environments (PLEs). This study aimed to observe how students perceived PLEs, what they believed were the integral components of social presence when using Facebook as part of a PLE, and to describe students' preferences for types of interactions when using Facebook as part of their PLE. The study used mixed methods to analyze the perceptions of graduate and undergraduate students on the use of social networks, more specifically Facebook, as a learning tool. Fifty surveys were returned, representing a 65 % response rate. Survey questions included both closed and open-ended questions. Findings suggested that even though students rated themselves as having the requisite technology skills, and 94 % of students used Facebook primarily for social purposes, they were hesitant to migrate these skills to academic use because of privacy concerns, the belief that other platforms could fulfil the same purpose, and not seeing the validity of using Facebook to establish social presence. What lies at odds with these beliefs is that, when asked to identify strategies in Facebook that enabled social presence to occur in academic work, the majority of students identified strategies in five categories that lead to the establishment of social presence on Facebook during their coursework.
The paper discusses the issue of supporting informatics
(computer science) education through competitions for lower and
upper secondary school students (8–19 years old). Competitions play
an important role for learners as a source of inspiration, innovation,
and attraction. Running contests in informatics for school students
for many years, we have noticed that the students consider the contest
experience very engaging and exciting as well as a learning experience.
A contest is an excellent instrument for involving students in problem-solving activities. An overview of the infrastructure and development of an informatics contest from the international level to the national one (the Bebras contest on informatics and computer fluency, which originated in Lithuania) is presented. The performance of Bebras contests in 23 countries during the last 10 years has shown an unexpectedly high acceptance by school students and teachers. Many thousands of students have participated and gained valuable input in addition to their regular
informatics lectures at school. In the paper, the main attention is paid
to the developed tasks and analysis of students’ task solving results in
Lithuania.
The paper presents two approaches to the development of
a Computer Science Competence Model for the needs of curriculum
development and evaluation in Higher Education. A normative-theoretical
approach is based on the AKT and ACM/IEEE curriculum
and will be used within the recommendations of the German
Informatics Society (GI) for the design of CS curricula. An empirically
oriented approach refines the categories of the first one with regard to
specific subject areas by conducting content analysis on CS curricula of
important universities from several countries. The refined model will be
used for the needs of students’ e-assessment and subsequent affirmative
action of the CS departments.
Regardless of what is intended by government curriculum
specifications and advised by educational experts, the competencies
taught and learned in and out of classrooms can vary considerably.
In this paper, we discuss in particular how we can investigate the
perceptions that individual teachers have of competencies in ICT,
and how these and other factors may influence students’ learning. We
report case study research which identifies contradictions within the
teaching of ICT competencies as an activity system, highlighting issues
concerning the object of the curriculum, the roles of the participants and
the school cultures. In a particular case, contradictions in the learning objectives between higher-order skills and the use of application tools have been resolved by a change in the teacher's perceptions, which has not led to changes in other aspects of the activity system. We look
forward to further investigation of the effects of these contradictions in
other case studies and on forthcoming curriculum change.
As a result of the Bologna reform of educational systems in Europe, the outcome orientation of learning processes, competence-oriented descriptions of the curricula, and competence-oriented assessment procedures became standard in Computer Science Education (CSE) as well. The following keynote addresses important issues in shaping a CSE competence model, especially in the area of informatics system comprehension and object-oriented modelling. The objectives and research methodology of the project MoKoM (Modelling and Measurement of Competences in CSE) are explained. Firstly, the CSE competence model was derived from theoretical concepts; secondly, the model was empirically examined and refined using expert interviews. Furthermore, the paper depicts the development and examination of a competence measurement instrument derived from the competence model. To this end, the instrument was applied to a large sample of students at the upper secondary (Gymnasium) level. Subsequently, efforts to develop a competence level model based on the empirical results and on expert ratings are presented. Finally, further demands on research into competence modelling in CSE are outlined.
Computational thinking is a fundamental skill set that is learned
by studying Informatics and ICT. We argue that its core ideas can
be introduced in an inspiring and integrated way to both teachers and
students using fun and contextually rich cs4fn ‘Computer Science for
Fun’ stories combined with ‘unplugged’ activities including games and
magic tricks. We also argue that understanding people is an important
part of computational thinking. Computational thinking can be fun for
everyone when taught in kinaesthetic ways away from technology.
Graph databases provide a natural way of storing and querying graph data. In contrast to relational databases, queries over graph databases can refer directly to the graph structure of the data. For example, graph pattern matching can be employed to formulate queries over graph data.
However, as with relational databases, running complex queries can be very time-consuming and can ruin the interactivity of the database. One possible approach to this performance issue is to employ database views, which consist of pre-computed answers to common and frequently stated queries. To ensure that database views yield results consistent with the data from which they are derived, the views must be updated before queries make use of them. Such maintenance of database views must be performed efficiently; otherwise the effort of creating and maintaining views may not pay off compared to processing the queries directly on the underlying data.
At the time of writing, graph databases do not support database views and are limited to graph indexes, which index nodes and edges of the graph data for fast query evaluation but cannot maintain pre-computed answers to complex queries over graph data. Moreover, the maintenance of database views in graph databases becomes even more challenging when negation and recursion have to be supported, as in deductive relational databases.
In this technical report, we present an approach for efficient and scalable incremental graph view maintenance for deductive graph databases. The main concept of our approach is a generalized discrimination network that makes it possible to model nested graph conditions, including negative application conditions and recursion, which specify the content of graph views derived from graph data stored by graph databases. The discrimination network enables the automatic derivation of generic maintenance rules, expressed as graph transformations, for maintaining graph views when the graph data from which they are derived changes. We evaluate our approach in a case study using multiple data sets derived from open source projects.
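As a toy illustration of the incremental idea only (ours; the report's actual mechanism uses generalized discrimination networks and graph transformations, and also handles negation, recursion, and deletions): a materialized view of a two-edge path pattern can be updated from a single edge insertion by joining the new edge against existing edges, instead of recomputing all matches from scratch:

```python
from collections import defaultdict

out_edges = defaultdict(set)   # a -> {b}
in_edges = defaultdict(set)    # b -> {a}
view = set()                   # materialized matches of the pattern a -> b -> c

def insert_edge(a, b):
    out_edges[a].add(b)
    in_edges[b].add(a)
    # New edge as the first hop: (a, b, c) for every existing edge b -> c.
    for c in out_edges[b]:
        view.add((a, b, c))
    # New edge as the second hop: (x, a, b) for every existing edge x -> a.
    for x in in_edges[a]:
        view.add((x, a, b))

for edge in [("u", "v"), ("v", "w"), ("w", "u")]:
    insert_edge(*edge)
print(sorted(view))  # [('u','v','w'), ('v','w','u'), ('w','u','v')]
```

Each insertion touches only matches involving the new edge, which is the pay-off that makes maintaining the view cheaper than re-evaluating the query.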
The aim of this work was the synthesis and characterization of novel fluorescent copolymers for analyte detection in aqueous systems. The detection system was to allow simple switching of the fluorescence: "off" upon analyte binding and "on" upon displacement. To this end, the synthesis of a functionalized monomer was designed so that fluorophore and analyte are located in direct proximity within the same monomer unit. Upon recognition of the analyte by a recognition structure functionalized with a fluorescence quencher, fluorophore and quencher are thus forced into a predefined distance from each other, and the fluorescence of the fluorophore is efficiently quenched. Upon subsequent displacement of the recognition unit by a more strongly binding analyte, the fluorescence is switched back "on". A further objective for the detection system was high solubility and fluorescence intensity in water. Since the application of such sensors is of particular interest in medicine and biology, e.g., for rapid detection tests for pathogens, compatibility with aqueous media is essential. The functionalized monomers were copolymerized by free-radical polymerization with N-vinylpyrrolidone or N-vinylcaprolactam to give water-soluble, fluorescent copolymers. Rhodamine B was used as the fluorophore in the N-vinylpyrrolidone polymers (PNVP), and a naphthalimide in the thermoresponsive N-vinylcaprolactam polymers (PNVCL). While rhodamines show high fluorescence intensity, good quantum yields, and high extinction coefficients in water, naphthalimides are environment-sensitive chromophores that can change their fluorescence intensity and quantum yield drastically upon changes in their solvent environment, such as during the collapse of a thermoresponsive polymer in water. The advantage of the monomer-synthesis strategy used here is that every specific analyte-recognition event efficiently quenches the fluorescence, and displacement by a more strongly binding analyte switches it back "on". This principle is already widely exploited in biology in so-called "molecular beacons", in which a fluorophore and a quencher are forced into a predefined distance from each other by specific DNA base pairing, thus enabling the fluorescence to be "switched". Because of the predetermined structure of the DNA base sequences, however, this is not directly transferable to other recognition reactions. Therefore, a model system was developed that offers the possibility of exchanging analyte, recognition unit, and signaling unit flexibly, according to the requirements of the system. In this way, it should be possible to use the sensor a priori for any recognition reaction. β-Cyclodextrin/adamantane and Concanavalin A/mannose were chosen as model binding pairs. Adamantane or mannose was incorporated into the polymer as the analyte together with the fluorophore; β-cyclodextrin (β-CD) or Concanavalin A (ConA) was immobilized on a fluorescence quencher as the recognition structure. Polymer-based fluorescence sensors are well documented in the literature. As a rule, however, signaling unit and analyte are statistically distributed in the polymer, since they are either located in different monomer units or the functionalization is carried out by post-polymerization modification.
The chosen approach of incorporating fluorophore and analyte within the same monomer unit is intended to produce a change in the signal intensity of the fluorophore upon every recognition event. A high signal intensity upon analyte detection is desirable, particularly for recognition reactions that should be traceable with as little instrumentation as possible, ideally with the naked eye. Furthermore, it is possible to adjust the fluorophore content in the polymer precisely and thus to avoid self-quenching. The synthesized polymers have a fluorophore content of 0.01 mol% to 0.5 mol%. For the Rhodamine B-containing polymers, a fluorophore content below 0.1 mol% gave the highest yields, molar masses, and quantum yields. For the naphthalimide-containing polymers, by contrast, high yields and molar masses were achieved even at a fluorophore content of up to 1 mol%; these polymers, however, show only low quantum yields in aqueous solution. As fluorescence quenchers, gold nanoparticles were synthesized and functionalized with the respective recognition structures (β-CD or ConA) for the analyte used. Gold nanoparticles as quenchers offer the advantage that their dispersibility in a solvent can be controlled specifically by functionalizing their shell. Owing to the high affinity of gold nanoparticles for thiols and amines, they could be functionalized with thio-β-CD derivatives or ConA in simple synthesis steps. In the work presented here, a model system for such a fluorescence-based detection system in water was to be developed. The structural requirements for the synthesis of such a sensor are summarized below:
1. Use of a fluorophore that shows a high signal intensity.
2. Analyte and recognition unit should be within a few nanometers of the signaling unit, so that every detection reaction can influence the signal intensity of the signaling unit.
3. The detection unit requires a functional group for immobilization. Immobilization can take place, e.g., by incorporation into a polymer.
4. The fluorophore should change its fluorescence properties drastically upon changes in its local environment, whether through binding of a quencher or through a change in its solvent environment.
5. The reaction should be fast and traceable with as little instrumentation as possible, ideally with the naked eye.
For the β-CD/adamantane model system, a fluorescence off/on sensor was developed that efficiently quenches the fluorescence of the Rhodamine B fluorophore when β-CD-functionalized gold nanoparticles bind to the polymer-bound adamantane, and recovers it upon displacement of the gold nanoparticles. This could also be followed with the naked eye.
For the naphthalimide monomers copolymerized with NVCL, a different enhancement of the fluorescence intensity above the cloud point of the polymer was found, depending on the local environment of the fluorophore. The introduction of a spacer between polymer backbone and fluorophore led to a large fluorescence enhancement, whereas without a spacer the fluorescence intensity hardly changed above the cloud point.
Two of the most controversial issues concerning the late Cenozoic evolution of the Andean orogen are the timing of uplift of the intraorogenic Puna plateau and its eastern border, the Eastern Cordillera, and the ensuing changes in climatic and surface-process conditions in the intermontane basins of the NW-Argentine Andes. The Eastern Cordillera separates the internally drained, arid Puna from semi-arid intermontane basins and the humid sectors of the Andean broken foreland and the Subandean fold-and-thrust belt to the east. With elevations between 4,000 and 6,000 m, the eastern flanks of the Andes form an efficient orographic barrier, with westward-increasing elevation and an asymmetric rainfall distribution with respect to easterly moisture-bearing winds. This is mirrored by pronounced gradients in the efficiency of the surface processes that erode and redistribute sediment from the uplifting ranges. Although the overall pattern of deformation and uplift in this sector of the southern central Andes shows an eastward migration of deformation, a well-developed deformation front does not exist, and uplift and the associated erosion and sedimentary processes are highly disparate in space and time. In addition, periodic deformation within intermontane basins and continued diachronous foreland uplifts associated with the reactivation of inherited basement structures make a rigorous assessment of the spatiotemporal uplift patterns difficult.
This thesis focuses on the tectonic evolution of the Eastern Cordillera of NW Argentina, the depositional history of its intermontane sedimentary basins, and the regional topographic evolution of the eastern flank of the Puna Plateau. The intermontane basins of the Eastern Cordillera and the adjacent morphotectonic provinces of the Sierras Pampeanas and the Santa Bárbara System are akin to the reverse-fault-bounded, filled, and partly coalesced sedimentary basins of the Puna Plateau. In contrast to the Puna basins, however, which still form intact morphologic entities, repeated deformation, erosion, and re-filling have impacted the basins in the Eastern Cordillera. This has resulted in a rich stratigraphy of repeated basin fills, but many of these basins have retained vestiges of their early depositional history that may reach back to a time when these areas were still part of a contiguous and undeformed foreland basin. Fortunately, these strata also contain abundant volcanic ashes that are not only important horizons for deciphering tectono-sedimentary events through U-Pb geochronology and geochemical correlation, but also terrestrial recorders of the hydrogen-isotope composition of ancient meteoric waters that can be compared with the isotopic composition of modern meteoric water. The ash horizons are thus unique recorders of past environmental conditions and lend themselves to tracking the development of rainfall barriers and tectonically forced climate and environmental change through time.
U-Pb zircon geochronology and paleocurrent reconstructions of conglomerate sequences in the Humahuaca Basin of the Eastern Cordillera at 23.5° S suggest that the basin was an integral part of a largely unrestricted depositional system until 4.2 Ma, after which it became progressively decoupled from the foreland by range uplifts to the east that forced easterly moisture-bearing winds to precipitate at increasingly easterly locations. Multiple cycles of severed hydrological conditions and drainage re-capture, associated with basin filling and sediment evacuation, respectively, are identified. Moreover, systematic relationships among faults, regional unconformities, and deformed landforms reveal a general pattern of intra-basin deformation during or subsequent to episodes of large-scale sediment removal. Some of these observations are supported by variations in the hydrogen stable isotope composition of volcanic glass from the Neogene to Quaternary sedimentary record, which can be related to spatiotemporal changes in topography and the associated orographic effects. δDg values in the basin strata reveal two main trends, associated with surface uplift in the catchment area between 6.0 and 3.5 Ma and with the onset of semiarid conditions in the basin after threshold elevations for effective orographic barriers to the east were attained at 3.5 Ma. The disruption of sediment supply from western sources after 4.2 Ma and the subsequent hinterland aridification, moreover, raise the possibility that these processes were related to lateral orogenic growth of the adjacent Puna Plateau. As a result of the hinterland aridification, the regions in the orogen interior have been characterized by an inefficient fluvial system, which in turn has helped maintain internal drainage conditions, sediment storage, and relief reduction within high-elevation basins.
The diachronous nature of basin formation and its impacts on the fluvial system in the adjacent broken foreland is underscored by the results of detailed sediment provenance and paleocurrent analyses, as well as U-Pb zircon geochronology, in the Lerma and Metán basins at ca. 25° S. This is particularly demonstrated by the isolated uplift of the Metán range at ~10 Ma, more than 50 km away from the presently active orogenic front along the eastern Puna margin and the Eastern Cordillera to the west. At about 5 Ma, Puna-sourced sediments disappear from the foreland record, documenting further range uplifts in the Eastern Cordillera and the hydrological isolation of the neighboring Angastaco Basin from the foreland. Finally, during the late Pliocene and Quaternary, deformation has been accommodated across the entire foreland and is still active. To elucidate the interactions between tectonically controlled changes in elevation and their impact on atmospheric circulation in this region, this thesis provides additional, temporally well-constrained hydrogen stable isotope results from volcanic glass samples from the broken foreland, including the Angastaco Basin, and other intermontane basins farther south. The results suggest similar elevations of the intermontane basins and the foreland sectors prior to ca. 7 Ma; in the case of the Angastaco Basin, the region was subsequently affected by km-scale surface uplift. A comparison with coeval isotope data collected from sedimentary sequences on the Puna plateau explains rapid shifts in the intermontane δDg record and supports the notion of recurring phases of enhanced deep convection during the Pliocene, and thus of climatic conditions during the middle to late Pliocene similar to those of the present day.
The combined field-based and isotope-geochemical methods used in this study of the NW-Argentine Andes have thus helped to gain insight into the systematics, rate changes, interactions, and temporal characteristics of tectonically controlled deformation patterns, the build-up of topography impacting atmospheric processes, the distribution of rainfall, and the resulting surface processes in a tectonically active mountain belt. Ultimately, this information is essential for a better understanding of the style and the rates at which non-collisional mountain belts evolve, including the development of orogenic plateaus and their bordering flanks. The results presented in this study emphasize the importance of stable isotope records for paleoaltimetric and paleoenvironmental studies in mountain belts and furnish important data for a rigorous interpretation of such records.
Parts without a whole?
(2015)
This explorative study gives a descriptive overview of what organizations do and experience when they say they practice design thinking. It looks at how the concept has been appropriated in organizations and also describes patterns of design thinking adoption. The authors use a mixed-method research design fed by two sources: questionnaire data and semi-structured personal expert interviews. The study proceeds in six parts: (1) design thinking's entry points into organizations; (2) understandings of the descriptor; (3) its fields of application and organizational localization; (4) its perceived impact; (5) reasons for its discontinuation or failure; and (6) attempts to measure its success. In conclusion, the report challenges managers to be more conscious of their current design thinking practice. The authors suggest a co-evolution of the concept's introduction with innovation capability building and the respective changes in leadership approaches. It is argued that this might help in unfolding design thinking's hidden potentials as well as preventing unintended side-effects such as discontented teams or the dwindling authority of managers.
Introduction
We investigated blood glucose (BG) and hormone response to aerobic high-intensity interval exercise (HIIE) and moderate continuous exercise (CON) matched for mean load and duration in type 1 diabetes mellitus (T1DM).
Material and Methods
Seven trained male subjects with T1DM performed a maximal incremental exercise test, as well as HIIE and CON on a cycle ergometer at three different mean intensities: below (A) and above (B) the first lactate turn point, and below the second lactate turn point (C). Subjects were adjusted to the ultra-long-acting insulin Degludec (Tresiba, Novo Nordisk, Denmark). Before exercise, standardized meals were administered, and the short-acting insulin dose was reduced by 25% (A), 50% (B), or 75% (C), depending on mean exercise intensity. During exercise, BG, adrenaline, noradrenaline, dopamine, cortisol, glucagon, insulin-like growth factor-1, blood lactate, heart rate, and gas exchange variables were measured. For 24 h after exercise, interstitial glucose was measured by a continuous glucose monitoring system.
Results
BG decrease during HIIE was significantly smaller for B (p = 0.024) and tended to be smaller for A and C compared to CON. No differences were found for post-exercise interstitial glucose, acute hormone response, and carbohydrate utilization between HIIE and CON for A, B, and C. In HIIE, blood lactate for A (p = 0.006) and B (p = 0.004) and respiratory exchange ratio for A (p = 0.003) and B (p = 0.003) were significantly higher compared to CON but not for C.
Conclusion
Hypoglycemia did not occur during or after HIIE or CON when using ultra-long-acting insulin and applying our methodological approach to exercise prescription. HIIE led to a smaller BG decrease than CON, although both exercise modes were matched for mean load and duration, and despite the markedly higher peak workloads applied in HIIE. Therefore, HIIE and CON can be safely performed in T1DM.
In living cells, a plethora of processes take place at the same time, and their precise regulation is the basis of cellular function, since small failures can lead to severe dysfunctions. For a comprehensive understanding of intracellular homeostasis, simultaneous multiparameter detection is a versatile tool for revealing the spatial and temporal interactions of intracellular parameters. Here, a recently developed time-correlated single-photon counting (TCSPC) board was evaluated for simultaneous fluorescence and phosphorescence lifetime imaging microscopy (FLIM/PLIM). To this end, the metabolic activity in insect salivary glands was investigated by recording the ns-decaying intrinsic cellular fluorescence, mainly related to oxidized flavin adenine dinucleotide (FAD), and the μs-decaying phosphorescence of the oxygen-sensitive ruthenium complex Kr341. Upon dopamine stimulation, the metabolic activity of the salivary glands increased, causing higher pericellular oxygen consumption and a resulting increase in the Kr341 phosphorescence decay time. Furthermore, the FAD fluorescence decay time decreased, presumably due to protein binding, which quenches the FAD fluorescence. Through application of the metabolic drugs antimycin and FCCP, the recorded signals could be assigned to a mitochondrial origin. The dopamine-induced changes could be observed in sequential FLIM and PLIM recordings, as well as in simultaneous FLIM/PLIM recordings using an intermediate TCSPC timing resolution.
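For orientation (a standard textbook relation, not specific to this study): both FLIM and PLIM extract, per pixel, the lifetime τ of a decay model fitted to the TCSPC histogram, e.g.

```latex
I(t) \;=\; I_0\, e^{-t/\tau} \;+\; B,
```

with τ in the ns range for FAD fluorescence and in the µs range for Kr341 phosphorescence, and B a background term. Since oxygen quenches the ruthenium complex and thereby shortens τ, increased oxygen consumption (lower pericellular O₂) appears as a longer phosphorescence decay time, consistent with the observations above.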
Analphabetismus und Teilhabe
(2015)
From a perspective of educational theory and social critique, learning presents itself as social action within socially mediated conditions, comprising possibilities as well as limitations. With a nationwide share of 14% of the working-age population, or 7.5 million functionally illiterate adults in Germany, functional illiteracy is not only a matter of education policy and practice but also a phenomenon requiring scientific investigation. Numerous studies deal with this topic and offer points of departure for the present study. From target-group research, for example, it is known that the main addressees (men, older people, and the educationally disadvantaged) are not adequately reached or won as participants. From participant research, withdrawals and drop-outs are known problems.
Why illiterate adults, after having acquired a wide range of coping strategies through which the phenomenon escapes direct visibility, nevertheless begin to (re)learn reading and writing has so far been investigated neither from the perspective of educational theory nor from that of learning theory. The present adult-education study empirically works out precisely these occasions for learning.
As a heuristic, a subject-theoretical framework is employed, which is particularly suited to making visible the reasons for learning in the context of socially embedded biographies. Learning research within this explanatory model must rely on a methodology that can bring out the subject's perspective, contexts of meaning, and typical structures of sense. Therefore, a qualitative research design based on individual case studies was chosen, providing data from problem-centered interviews, which were analyzed within the research strategy of Grounded Theory and resulted in an empirically grounded typology. This design enables the reconstruction of typical occasions for learning and, as a result, the development of an object-related theory of middle range.
From the present meaning-and-reasons analysis, five types of learning justification were empirically differentiated. They move within the tension between orientation toward participation and contradictoriness, and their complexity is represented by means of three key categories: space of meaning; reflection on social embeddedness and competencies; and learning, i.e., the experience of the discrepancy between wanting to read and being able to read. The spectrum of types ranges from participation-securing, resigned learning, in which securing a threatened status quo is paramount and the world is experienced as unshapeable, to multilayered participation-expanding learning, which aims at expanding one's own possibilities for action and exhibits the most extensive reflection on social embeddedness and competencies. Functionally illiterate adults justify their learning and non-learning against the background of their social situation, their limitations, and their possibilities: learning written language acquires meaning only in the context of social participation and its reflection.
By situating the learning justifications of functionally illiterate adults within, first, discourses of educational disadvantage through processes of exclusion; second, the learning-theoretical significance of processes of inclusion; and third, the international theoretical approach of transformative learning through the integration of the category of reflection, the study extends approaches in educational and learning theory. This work connects literacy research and adult-education research and integrates them into their respective discourses. Further connections and applications in educational research are conceivable: longitudinal investigation of learning justifications, for example, could make transformation processes reconstructable and thus contribute to research on educational processes. In educational practice, the types of learning justification can serve participant recruitment on the one hand and, on the other, provide a starting point for reflexive learning-support concepts that give voice to reasons for learning and address social embeddedness, thereby supporting learning processes.
Jahresbericht 2014
(2015)
The MenschenRechtsZentrum (Human Rights Centre) of the University of Potsdam (MRZ) celebrated its twentieth anniversary in 2014. For this reason, the current annual report deals not only, as usual, with the specific organizational structure and the work carried out in the reporting period, but also opens with a brief overview of the MRZ's extensive activities since its foundation. This review is completed by a detailed list of events and publication series in the appendix.