forum:logopädie 28.2014, 1
(2014)
forum:logopädie 28.2014, 2
(2014)
forum:logopädie 28.2014, 3
(2014)
forum:logopädie 28.2014, 4
(2014)
forum:logopädie 28.2014, 5
(2014)
forum:logopädie 28.2014, 6
(2014)
The Great Hungarian Plain was a crossroads of cultural transformations that have shaped European prehistory. Here we analyse a 5,000-year transect of human genomes, sampled from petrous bones giving consistently excellent endogenous DNA yields, from 13 Hungarian Neolithic, Copper, Bronze and Iron Age burials, including two sequenced to high (~22×) and seven to ~1× coverage, to investigate the impact of these cultural transitions on Europe's genetic landscape. These data suggest genomic shifts with the advent of the Neolithic, Bronze and Iron Ages, with interleaved periods of genome stability. The earliest Neolithic context genome shows a European hunter-gatherer genetic signature and a restricted ancestral population size, suggesting direct contact between cultures after the arrival of the first farmers into Europe. The latest, Iron Age, sample reveals an eastern genomic influence concordant with introduced Steppe burial rites. We observe a transition towards lighter pigmentation and, surprisingly, no Neolithic presence of lactase persistence.
Background: Development of eukaryotic organisms is controlled by transcription factors that trigger specific and global changes in gene expression programs. In plants, MADS-domain transcription factors act as master regulators of developmental switches and organ specification. However, the mechanisms by which these factors dynamically regulate the expression of their target genes at different developmental stages are still poorly understood.
Results: We characterized the relationship of chromatin accessibility, gene expression, and DNA binding of two MADS-domain proteins at different stages of Arabidopsis flower development. Dynamic changes in APETALA1 and SEPALLATA3 DNA binding correlated with changes in gene expression, and many of the target genes could be associated with the developmental stage in which they are transcriptionally controlled. We also observe dynamic changes in chromatin accessibility during flower development. Remarkably, DNA binding of APETALA1 and SEPALLATA3 is largely independent of the accessibility status of their binding regions and it can precede increases in DNA accessibility. These results suggest that APETALA1 and SEPALLATA3 may modulate chromatin accessibility, thereby facilitating access of other transcriptional regulators to their target genes.
Conclusions: Our findings indicate that different homeotic factors regulate partly overlapping, yet also distinctive sets of target genes in a partly stage-specific fashion. By combining the information from DNA-binding and gene expression data, we are able to propose models of stage-specific regulatory interactions, thereby addressing dynamics of regulatory networks throughout flower development. Furthermore, MADS-domain TFs may regulate gene expression by alternative strategies, one of which is modulation of chromatin accessibility.
DNA origami nanostructures allow for the arrangement of different functionalities such as proteins, specific DNA structures, nanoparticles, and various chemical modifications with unprecedented precision. The arranged functional entities can be visualized by atomic force microscopy (AFM) which enables the study of molecular processes at a single-molecular level. Examples comprise the investigation of chemical reactions, electron-induced bond breaking, enzymatic binding and cleavage events, and conformational transitions in DNA. In this paper, we provide an overview of the advances achieved in the field of single-molecule investigations by applying atomic force microscopy to functionalized DNA origami substrates.
Modern 3D geovisualization systems (3DGeoVSs) are complex and evolving systems that are required to be adaptable and to leverage distributed resources, including massive geodata. This article focuses on 3DGeoVSs built on the principles of service-oriented architectures, standards and image-based representations (SSI) to address practically relevant challenges and potentials. Such systems facilitate resource sharing and agile, efficient system construction and change in an interoperable manner, while exploiting images as efficient, decoupled and interoperable representations. The software architecture of a 3DGeoVS and its underlying visualization model have strong effects on the system's quality attributes and support various system life cycle activities. This article contributes a software reference architecture (SRA) for 3DGeoVSs based on SSI that can be used to design, describe and analyze concrete software architectures, with the primary intended benefit of making such activities more effective and efficient. The SRA integrates existing, proven technology and novel contributions in a unique manner. As the foundation for the SRA, we propose the generalized visualization pipeline model, which generalizes and overcomes expressiveness limitations of the prevalent visualization pipeline model. To facilitate exploiting image-based representations (IReps), the SRA integrates approaches for the representation, provisioning and styling of, and interaction with, IReps. Five applications of the SRA provide proofs of concept for its general applicability and utility. A qualitative evaluation indicates the overall suitability of the SRA, its applications and the general approach of building 3DGeoVSs based on SSI.
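The prevalent visualization pipeline model mentioned above is commonly described as a sequence of filtering, mapping and rendering stages. As a rough illustration only (all names are invented and not taken from the SRA), such a pipeline can be sketched as:

```python
# Minimal sketch of the classical visualization pipeline
# (data -> filtering -> mapping -> rendering) that the article's
# generalized model extends. Names are illustrative, not from the SRA.

def filtering(raw):
    """Select and clean the data of interest."""
    return [x for x in raw if x is not None]

def mapping(data):
    """Map data values to visual primitives (here: bar heights)."""
    return [{"height": v, "color": "grey"} for v in data]

def rendering(primitives):
    """Produce an image-like representation (here: text rows)."""
    return ["#" * p["height"] for p in primitives]

def pipeline(raw):
    return rendering(mapping(filtering(raw)))

print(pipeline([3, None, 1, 2]))  # three text "bars" of widths 3, 1, 2
```

In an image-based, service-oriented setting, the output of such a pipeline would be an image representation provisioned by a service rather than a locally rendered scene.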
This paper reports a problematic case of unequivocally evidencing participant orientation to the projective force of some turn-initial demonstrative wh-clefts (DCs) within the framework of Conversation Analysis (CA) and Interactional Linguistics (IL). Conducting rhythmic analyses appears helpful in this regard, in that they disclose rhythmic regularities which suggest a speaker's orientation towards a projected turn continuation. In this particular case, rhythmic analyses can therefore be shown to meaningfully complement sequential analyses and analyses of turn-design, so as to gather additional evidence for participant orientations. In conclusion, I will point to possibly more extensive relations between rhythmicity and projection and proffer a tentative outlook for the usability of rhythmic analyses as an analytic tool in CA and IL.
Background
The forelimb-specific gene tbx5 is highly conserved and essential for the development of forelimbs in zebrafish, mice, and humans. Amongst birds, a single order, Dinornithiformes, comprising the extinct wingless moa of New Zealand, is unique in having no skeletal evidence of forelimb-like structures.
Results
To determine the sequence of tbx5 in moa, we used a range of PCR-based techniques on ancient DNA to retrieve all nine tbx5 exons and splice sites from the giant moa, Dinornis. Moa Tbx5 is identical to chicken Tbx5 in being able to activate the downstream promoters of fgf10 and ANF. In addition, we show that misexpression of moa tbx5 in the hindlimb of chicken embryos results in the formation of forelimb features, suggesting that Tbx5 was fully functional in wingless moa. An alternatively spliced exon 1 of tbx5 that is expressed specifically in the forelimb region was shown to be almost identical between moa and ostrich, suggesting that, as well as being fully functional, tbx5 is likely to have been expressed normally in moa since their divergence from flighted ancestors, approximately 60 mya.
Conclusions
These results suggest that, as in mice, moa tbx5 is necessary for the induction of forelimbs but not sufficient for their outgrowth. Moa Tbx5 may have played an important role in the development of the moa's remnant forelimb girdle and may be required for the formation of this structure. Our results further show that genetic changes affecting genes other than tbx5 must be responsible for the complete loss of forelimbs in moa.
Biosensors for the detection of benzaldehyde and γ-aminobutyric acid (GABA) are reported, based on the aldehyde oxidoreductase PaoABC from Escherichia coli immobilized in a polymer containing bound low-potential osmium redox complexes. The electrically connected enzyme electrooxidizes benzaldehyde already at potentials below −0.15 V (vs. Ag|AgCl, 1 M KCl). The pH dependence of benzaldehyde oxidation can be strongly influenced by the ionic strength. The effect is similar with the soluble osmium redox complex and therefore indicates a clear electrostatic effect on the bioelectrocatalytic efficiency of PaoABC in the osmium-containing redox polymer. At low ionic strength, the pH optimum is high and can be switched to low pH values at high ionic strength. This allows biosensing at both high and low pH values. A "reagentless" biosensor was formed with the enzyme wired onto a screen-printed electrode in a flow cell device. The response time to the addition of benzaldehyde is 30 s, the measuring range is 10–150 µM, and the detection limit is 5 µM benzaldehyde (signal-to-noise ratio 3:1). The relative standard deviation in a series (n = 13) for 200 µM benzaldehyde is 1.9%. The biosensor also responds to succinic semialdehyde. Based on this response and the ability to work at high pH, a biosensor for GABA is proposed by coimmobilizing GABA aminotransferase (GABA-T) and PaoABC in the osmium-containing redox polymer.
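The reported detection limit of 5 µM at a signal-to-noise ratio of 3:1 follows the standard convention LOD = 3·σ(blank)/sensitivity. A worked sketch of that calculation, using invented blank readings and an invented calibration slope (not measured values from the paper):

```python
# Illustrative computation of a detection limit at signal-to-noise 3:1,
# as conventionally defined: LOD = 3 * sigma_blank / slope.
# The numbers below are invented for illustration only.
import statistics

blank_currents_nA = [0.9, 1.1, 1.0, 1.2, 0.8]  # repeated blank readings
slope_nA_per_uM = 0.095                        # calibration sensitivity

sigma_blank = statistics.stdev(blank_currents_nA)
lod_uM = 3 * sigma_blank / slope_nA_per_uM
print(f"LOD = {lod_uM:.1f} uM")
```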
We present an electrochemical MIP sensor for tamoxifen (TAM), a nonsteroidal anti-estrogen, which is based on the electropolymerisation of an o-phenylenediamine/resorcinol mixture directly on the electrode surface in the presence of the template molecule. Up to now, only bulk MIPs for TAM have been described in the literature, which are applied for separation in chromatography columns. Electropolymerisation of the monomers in the presence of TAM generated a film which completely suppressed the reduction of ferricyanide. Removal of the template gave a markedly increased ferricyanide signal, which was again suppressed after rebinding, as expected for filling of the cavities by target binding. The decrease of the ferricyanide peak of the MIP electrode depended linearly on the TAM concentration between 1 and 100 nM. The TAM-imprinted electrode showed a 2.3-times higher recognition of the template molecule itself as compared to its metabolite 4-hydroxytamoxifen, and no cross-reactivity with the anticancer drug doxorubicin was found. Measurements at +1.1 V caused fouling of the electrode surface, whilst pretreatment of TAM with peroxide in the presence of HRP generated an oxidation product which was reducible at 0 mV, thus circumventing the polymer formation and electrochemical interferences.
The structures and synthesis of polyzwitterions ("polybetaines") are reviewed, emphasizing the literature of the past decade. Particular attention is given to the general challenges faced, and to successful strategies for obtaining polymers with a true balance of permanent cationic and anionic groups, thus resulting in an overall zero charge. Also presented is the progress made by applying new methodologies from general polymer synthesis, such as controlled polymerization methods or "click" chemical reactions. Furthermore, the emerging topic of responsive ("smart") polyzwitterions is addressed. The considerations and critical discussions are illustrated by typical examples.
Dimensional psychiatry
(2014)
A dimensional approach in psychiatry aims to identify core mechanisms of mental disorders across nosological boundaries.
We compared reward anticipation across major psychiatric disorders and investigated whether it is impaired in several mental disorders and whether such an impairment has a common psychopathological correlate (negative mood).
We used functional magnetic resonance imaging (fMRI) and a monetary incentive delay (MID) task to study the functional correlates of reward anticipation across major psychiatric disorders in 184 subjects, with the diagnoses of alcohol dependence (n = 26), schizophrenia (n = 44), major depressive disorder (MDD, n = 24), bipolar disorder (acute manic episode, n = 13), attention deficit/hyperactivity disorder (ADHD, n = 23), and healthy controls (n = 54). Subjects' individual Beck Depression Inventory and State-Trait Anxiety Inventory scores were correlated with clusters showing significant activation during reward anticipation.
During reward anticipation, we observed significant group differences in ventral striatal (VS) activation: patients with schizophrenia, alcohol dependence, and major depression showed significantly less ventral striatal activation compared to healthy controls. Depressive symptoms correlated with dysfunction in reward anticipation regardless of diagnostic entity. There was no significant correlation between anxiety symptoms and VS functional activation.
Our findings demonstrate a neurobiological dysfunction related to reward prediction that transcended disorder categories and was related to measures of depressed mood. The findings underline the potential of a dimensional approach in psychiatry and strengthen the hypothesis that neurobiological research in psychiatric disorders can be targeted at core mechanisms that are likely to be implicated in a range of clinical entities.
Background
Nucleic acid amplification is the most sensitive and specific method to detect Plasmodium falciparum. However, the polymerase chain reaction remains laboratory-based and must be conducted by trained personnel. Furthermore, the power required for the thermocycling process and the costly read-out equipment are difficult to provide in resource-limited settings. This study aims to develop and evaluate a combination of isothermal nucleic acid amplification and simple lateral flow dipstick detection of the malaria parasite for point-of-care testing.
Methods
A specific fragment of the 18S rRNA gene of P. falciparum was amplified in 10 min at a constant 38°C using the isothermal recombinase polymerase amplification (RPA) method. With a unique probe system added to the reaction solution, the amplification product can be visualized on a simple lateral flow strip without further labelling. The combination of these methods was tested for sensitivity and specificity with various Plasmodium and other protozoa/bacterial strains, as well as with human DNA. Additional investigations were conducted to analyse the temperature optimum, reaction speed and robustness of this assay.
Results
The lateral flow RPA (LF-RPA) assay exhibited high sensitivity and specificity. Experiments confirmed a detection limit as low as 100 fg of genomic P. falciparum DNA, corresponding to a sensitivity of approximately four parasites per reaction. All investigated P. falciparum strains (n = 77) tested positive, while all 11 non-Plasmodium samples tested negative. The enzymatic reaction can be conducted under a broad range of conditions, from 30-45°C and at high concentrations of known PCR inhibitors. A time to result of 15 min from the start of the reaction to read-out was determined.
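The correspondence between 100 fg of genomic DNA and roughly four parasites can be checked with back-of-envelope arithmetic: the haploid P. falciparum genome is about 23.3 Mb, and one base pair weighs on average about 650 g/mol, so one genome comes to roughly 25 fg.

```python
# Back-of-envelope check of "100 fg genomic DNA = ~4 parasites":
# genome mass = genome size (bp) * average mass per bp / Avogadro.
AVOGADRO = 6.022e23
GENOME_BP = 23.3e6     # approximate haploid P. falciparum genome size
MASS_PER_BP = 650.0    # g/mol per base pair (average)

genome_mass_fg = GENOME_BP * MASS_PER_BP / AVOGADRO * 1e15
parasites = 100.0 / genome_mass_fg
print(f"{genome_mass_fg:.1f} fg/genome -> {parasites:.1f} parasites per 100 fg")
```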
Conclusions
Combining isothermal RPA with lateral flow detection is an approach to improve molecular diagnostics for P. falciparum in resource-limited settings. The system requires little or no instrumentation for the nucleic acid amplification reaction, and the read-out is possible with the naked eye. Showing the same sensitivity and specificity as comparable diagnostic methods while increasing reaction speed and dramatically reducing assay requirements, the method has the potential to become a true point-of-care test for the malaria parasite.
Flood damage has increased significantly and is expected to rise further in many parts of the world. For assessing potential changes in flood risk, this paper presents an integrated model chain that quantifies flood hazards and losses while considering climate and land use changes. In the case study region, risk estimates for the present and the near future illustrate that changes in flood risk by 2030 are relatively low compared to historic periods. While the impact of climate change on the flood hazard and risk by 2030 is slight or negligible, strong urbanisation associated with economic growth contributes to a remarkable increase in flood risk. It is therefore recommended to routinely consider land use scenarios and economic developments when assessing future flood risks. Further, adapted and sustainable risk management is necessary to counter rising flood losses, in which non-structural measures are becoming increasingly important. The case study demonstrates that adaptation by non-structural measures, such as stricter land use regulations or the enhancement of private precaution, is capable of reducing flood risk by around 30%. Ignoring flood risks, in contrast, always leads to further increasing losses (by 17% under our assumptions). These findings underline that private precaution and land use regulation can be taken into account as low-cost adaptation strategies to global climate change in many flood-prone areas. Since such measures reduce flood risk regardless of climate or land use changes, they can also be recommended as no-regret measures.
The Runge-Kutta type regularization method was recently proposed as a potent tool for the iterative solution of nonlinear ill-posed problems. In this paper we analyze the applicability of this regularization method to inverse problems arising in atmospheric remote sensing, particularly to the retrieval of spheroidal particle distributions. Our numerical simulations reveal that the Runge-Kutta type regularization method is able to retrieve two-dimensional particle distributions using optical backscatter and extinction coefficient profiles, as well as depolarization information.
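For context (this formulation is not spelled out in the abstract, but Runge-Kutta type regularization is commonly derived this way): for a nonlinear operator equation F(u) = y with noisy data y^δ, one considers the asymptotic (Showalter) regularization flow

```latex
\dot{u}(t) \;=\; F'\bigl(u(t)\bigr)^{*}\,\bigl(y^{\delta} - F(u(t))\bigr),
\qquad u(0) = u_0 ,
```

and applies a Runge-Kutta scheme to discretize this ODE, stopping the resulting iteration by a discrepancy principle to obtain the regularized solution.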
Stress drop is a key factor in earthquake mechanics and engineering seismology. However, stress drop calculations based on fault slip can be significantly biased, particularly due to subjectively determined smoothing conditions in traditional least-squares slip inversion. In this study, we introduce a mechanically constrained Bayesian approach to simultaneously invert for fault slip and stress drop based on geodetic measurements. A Gaussian distribution for stress drop is implemented in the inversion as a prior. We performed several synthetic tests to evaluate the stability and reliability of the inversion approach, considering different fault discretizations, fault geometries, utilized datasets, and variability of the slip direction. We finally apply the approach to the 2010 M8.8 Maule earthquake and invert for the coseismic slip and stress drop simultaneously. Two fault geometries from the literature are tested. Our results indicate that the derived slip models based on both fault geometries are similar, showing major slip north of the hypocenter and relatively weak slip in the south, as indicated in the slip models of other studies. The derived mean stress drop is 5-6 MPa, which is close to the stress drop of ~7 MPa independently determined from force balance in this region by Luttrell et al. (J Geophys Res, 2011). These findings indicate that stress drop values can be consistently extracted from geodetic data.
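The effect of a Gaussian prior in a linear inversion can be illustrated in miniature: for data d = Gm + noise and a prior m ~ N(m0, σ_m²I), the maximum a posteriori estimate is a damped least-squares solution pulled toward m0. The operator, dimensions and noise levels below are toy values, not the Maule earthquake setup:

```python
# Toy sketch of a Bayesian linear inversion with a Gaussian prior:
# MAP solution of d = G m + noise, prior m ~ N(m0, sigma_m^2 I).
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(20, 3))            # toy forward operator
m_true = np.array([2.0, -1.0, 0.5])
d = G @ m_true + 0.05 * rng.normal(size=20)

sigma_d, sigma_m = 0.05, 1.0            # data noise / prior widths
m0 = np.zeros(3)                        # prior mean (e.g. target stress drop)

# Normal equations of the MAP estimate: data misfit plus prior penalty.
A = G.T @ G / sigma_d**2 + np.eye(3) / sigma_m**2
b = G.T @ d / sigma_d**2 + m0 / sigma_m**2
m_map = np.linalg.solve(A, b)
print(np.round(m_map, 2))               # close to m_true for tight data noise
```

Tightening σ_m pulls the estimate toward the prior mean, which is how a stress drop prior constrains the slip solution without an ad hoc smoothing parameter.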
In a recent paper, the Lefschetz number for endomorphisms (modulo trace class operators) of sequences with trace class curvature was introduced. We show that this is a well-defined, canonical extension of the classical Lefschetz number and establish the homotopy invariance of this number. Moreover, we apply the results to show that the Lefschetz fixed point formula holds for geometric quasiendomorphisms of elliptic quasicomplexes.
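For orientation, the classical Lefschetz number that this construction extends is defined, for an endomorphism f of a space X with finite-dimensional rational cohomology, by

```latex
L(f) \;=\; \sum_{i \ge 0} (-1)^{i}\,
\operatorname{tr}\bigl(f^{*} \mid H^{i}(X;\mathbb{Q})\bigr),
```

and the classical Lefschetz fixed point theorem asserts that L(f) ≠ 0 forces f to have a fixed point.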
This reference paper describes the sampling and contents of the IZA Evaluation Dataset Survey and outlines its vast potential for research in labor economics. The data have been part of a unique IZA project that connects administrative data from the German Federal Employment Agency with innovative survey data to study individuals' transitions out of unemployment into work. This study makes the survey available to the research community as a Scientific Use File by explaining the development and structure of, and access to, the data. Furthermore, it summarizes previous findings obtained with the survey data.
Fluid intelligence (fluid IQ), defined as the capacity for rapid problem solving and behavioral adaptation, is known to be modulated by learning and experience. Both stressful life events (SLEs) and neural correlates of learning [specifically, a key mediator of adaptive learning in the brain, namely the ventral striatal representation of prediction errors (PE)] have been shown to be associated with individual differences in fluid IQ. Here, we examine the interaction between adaptive learning signals (using a well-characterized probabilistic reversal learning task in combination with fMRI) and SLEs on fluid IQ measures. We find that the correlation between ventral striatal BOLD PE and fluid IQ, which we have previously reported, is quantitatively modulated by the amount of reported SLEs. Thus, after experiencing adversity, basic neuronal learning signatures appear to align more closely with a general measure of flexible learning (fluid IQ), a finding complementing studies on the effects of acute stress on learning. The results suggest that an understanding of the neurobiological correlates of trait variables like fluid IQ needs to take socioemotional influences such as chronic stress into account.
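The prediction error signal at the heart of this kind of study is conventionally modelled with a Rescorla-Wagner / temporal-difference update, V ← V + α(r − V). A minimal sketch with illustrative parameter values (not those fitted in the study):

```python
# Minimal Rescorla-Wagner sketch of the reward prediction error (PE):
# on each trial, PE = reward - expected value; the value estimate is
# then nudged toward the outcome by a learning rate alpha.
alpha = 0.2                      # learning rate (illustrative)
V = 0.0                          # initial value estimate
rewards = [1, 1, 0, 1, 1, 1]     # probabilistic reward sequence

pes = []
for r in rewards:
    pe = r - V                   # prediction error for this trial
    pes.append(pe)
    V += alpha * pe              # value update

print(round(V, 3), [round(p, 2) for p in pes])
```

In model-based fMRI analyses, the trial-by-trial `pes` series is used as a regressor against the ventral striatal BOLD signal.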
The Arabidopsis Kinome
(2014)
Background
Protein kinases constitute a particularly large protein family in Arabidopsis with important functions in cellular signal transduction networks. At the same time Arabidopsis is a model plant with high frequencies of gene duplications. Here, we have conducted a systematic analysis of the Arabidopsis kinase complement, the kinome, with particular focus on gene duplication events. We matched Arabidopsis proteins to a Hidden-Markov Model of eukaryotic kinases and computed a phylogeny of 942 Arabidopsis protein kinase domains and mapped their origin by gene duplication.
Results
The phylogeny showed two major clades of receptor kinases and soluble kinases, each of which was divided into functional subclades. Based on this phylogeny, yet uncharacterized kinases could be associated with families, which extended the functional annotation of unknowns. Classification of gene duplications within these protein kinases revealed that representatives of cytosolic subfamilies showed a tendency to maintain segmentally duplicated genes, while some subfamilies of the receptor kinases were enriched for tandem duplicates. Although functional diversification is observed throughout most subfamilies, some instances of functional conservation among genes transposed from the same ancestor were observed. In general, a significant enrichment of essential genes was found among genes encoding protein kinases.
Conclusions
The inferred phylogeny allowed classification and annotation of yet uncharacterized kinases. The prediction and analysis of syntenic blocks and duplication events within gene families of interest can be used to link functional biology to insights from an evolutionary viewpoint. The approach undertaken here can be applied to any gene family in any organism with an annotated genome.
Background
In the past, plyometric training (PT) has been predominantly performed on stable surfaces. The purpose of this pilot study was to examine effects of a 7-week lower body PT on stable vs. unstable surfaces. This type of exercise condition may be denoted as metastable equilibrium.
Methods
Thirty-three physically active male sport science students (age: 24.1 ± 3.8 years) were randomly assigned to a PT group (n = 13) exercising on stable (STAB) and a PT group (n = 20) on unstable surfaces (INST). Both groups trained countermovement jumps, drop jumps, and practiced a hurdle jump course. In addition, high bar squats were performed. Physical fitness tests on stable surfaces (hexagonal obstacle test, countermovement jump, hurdle drop jump, left-right hop, dynamic and static balance tests, and leg extension strength) were used to examine the training effects.
Results
Significant main effects of time (ANOVA) were found for the countermovement jump, hurdle drop jump, hexagonal test, dynamic balance, and leg extension strength. A significant interaction of time and training mode was detected for the countermovement jump in favor of the INST group. No significant improvements were evident for either group in the left-right hop and in the static balance test.
Conclusions
These results show that lower body PT on unstable surfaces is a safe and efficient way to improve physical performance on stable surfaces.
Portal alumni
(2014)
The popularity of media professions continues unabated. This is evident, among other things, in the number of prospective students: this year alone, more than 1,500 young people applied for one of the 44 places in the Media Studies programme at the University of Potsdam. After graduating, however, these students compete on the labour market with thousands of graduates of film, media and communication programmes at other universities, around 1,500 per year in the Berlin-Brandenburg region alone. Yet after decades of boom in the media industry, the labour market has changed drastically over the past decade. The economic crisis, falling share prices and declining advertising spending weakened the media considerably. The results were poor earnings, cutbacks and staff reductions, especially in print media. The insolvency of the Frankfurter Rundschau and the discontinuation of the Financial Times Deutschland are just two striking examples. On the other hand, the dynamic online market is booming thanks to the changed usage habits of the younger generation in particular, who increasingly get their information from the Internet, apps and social networks. Career prospects for all those who want to study "something with media" have thus become more difficult, but they remain diverse. Good journalism is still needed, and public relations professionals are in demand. Communication studies graduates also have opportunities in media planning and in market and opinion research. And not least, experts are sought after in the online industry. This year, Portal alumni looked at the career paths that University of Potsdam graduates have taken in media professions. It turns out that here, too, careers rarely follow a straight line, and professional success is by no means easily achieved.
The B fields in OB stars (BOB) survey is an ESO large programme collecting spectropolarimetric observations for a large number of early-type stars in order to study the occurrence rate, properties, and ultimately the origin of magnetic fields in massive stars. As of July 2014, a total of 98 objects were observed over 20 nights with FORS2 and HARPSpol. Our preliminary results indicate that the fraction of magnetic OB stars with an organised, detectable field is low. This conclusion, now independently reached by two different surveys, has profound implications for any theoretical model attempting to explain the field formation in these objects. We discuss in this contribution some important issues addressed by our observations (e.g., the lower bound of the field strength) and the discovery of some remarkable objects.
Nested application conditions generalise the well-known negative application conditions and are important for several application domains. In this paper, we present Local Church-Rosser, Parallelism, Concurrency and Amalgamation Theorems for rules with nested application conditions in the framework of M-adhesive categories, where M-adhesive categories are slightly more general than weak adhesive high-level replacement categories. Most of the proofs are based on the corresponding statements for rules without application conditions and two shift lemmas stating that nested application conditions can be shifted over morphisms and rules.
The BTU offers voluntary study-preparation courses as part of the BMBF-funded project "Blended Learning Anfangshürden erkennen zur Unterstützung der fachspezifischen Studienvorbereitung und des Lernerfolges im ersten Studienjahr". For many prospective students, the gap between the mathematical skills they need and those they actually have is large, and the time available is often too short. This is where the concept comes in: in addition to face-to-face sessions, the course includes supported self-study phases, assisted by a virtual course room and an application for mobile devices (app).
As part of an interdisciplinary student project, a framework for mobile pervasive learning games was developed. On this basis, a learning game for school pupils was implemented, using the historical learning site of Sanssouci Park as an example. The planned evaluation is intended to measure the learning effectiveness of geo-based mobile learning games; to this end, the intensity of the flow experience will be compared with that of a location-bound alternative implementation.
The goal of our application is not only to make learning materials available at all times, but also to change the entire communication between lecturers and students, and among students themselves. A mixture of e-learning, blended learning and mobile learning is intended to enable all participants to act independently of location. New functions are intended to help students work together better, cooperate and make new acquaintances.
This contribution deals with the design of a teaching and learning scenario for polyvalent introductory lectures in the natural sciences. The scenario combines classical lectures with virtual elements such as online courses, online forums and audience response systems, as well as with small-group work based on problem-oriented learning. The aim is to align students' foundational knowledge, to promote working in groups and to teach problem-oriented learning.
The project "Medienbildung in der LehrerInnenbildung" aims to promote the sustainable use of digital media in the teacher-training programmes of the University of Potsdam. Using music teacher training (Chair of Music Pedagogy and Music Didactics) as an example, a concept was developed for the use of video podcasts during school practical phases to support students in lesson planning. The subject-specific implementation of this e-learning approach and the associated opportunities and challenges are presented, underlining the importance of cooperation between subject didactics and media didactics in finding a needs-oriented solution that is practical to implement.
Bewegunglesen.com
(2014)
bewegunglesen.com (awarded Silver at the Best of Swiss Web Awards 2013) is an e-learning tool that offers physical education teachers and students a web-based, interactive training environment for learning movement analysis and the criteria-guided improvement of motor skills. Movement sequences and their core movements are taught in a practice-oriented way appropriate to each school level. In addition, teaching videos can be uploaded, edited, enriched with graphics and facts, and shared within the community. From these clips, exercises and tests with assessment criteria for the movement sequence can be compiled and evaluated automatically.
Recently, interest in collecting and mining large sets of educational data on student background and performance in order to study learning and instruction has developed into an area generally referred to as learning analytics. Higher-education leaders are recognising the value of learning analytics for improving not only learning and teaching but the entire educational arena. However, theoretical concepts and empirical evidence still need to be generated within this fast-evolving field. In this paper, we introduce a holistic learning analytics framework. Based on this framework, student, learning, and curriculum profiles were developed that include the relevant static and dynamic parameters. Based on the theoretical model, an empirical study was conducted to validate the parameters included in the student profile. The paper concludes with practical implications and issues for future research.
Portal Wissen = Time
(2014)
“What then is time?”, Augustine of Hippo sighs melancholically in Book XI of “Confessions” and continues, “If no one asks me, I know; if I want to explain it to a questioner, I don’t know.” Even today, 1584 years after Augustine, time still appears mysterious. Treatises about the essence of time fill whole libraries – and this magazine.
However, questions of essence are alien to modern science. Time is – at least in physics – unproblematic: "Time is defined so that motion looks simple." This brief, prosaic phrase bids farewell both to Augustine's riddle and to the Newtonian concept of absolute time, whose mathematical flow earthly instruments can in any case only record approximately.
In everyday language, and even in science, we still speak of the flow of time, but time has long ceased to be a natural given. It is rather a conventional ordering parameter for change and movement. Processes are ordered by using one class of processes as a counting system against which other processes are compared and arranged with the help of the temporal categories "before", "during", and "after".
In Galileo's day, one's own pulse served as the time standard for the flight of cannon balls. More refined methods of investigation later made this seem too impractical: the distance–time diagrams of free-flying cannon balls turned out to be rather imprecise, difficult to reproduce, and by no means "simple". Nowadays, we use caesium atoms. A process is said to take one second when a caesium-133 atom completes 9,192,631,770 periods of the radiation corresponding to the transition between two hyperfine levels of its ground state. A meter is the length of the path travelled by light in a vacuum in exactly 1/299,792,458 of a second. Fortunately, these data are hard-coded into the Global Positioning System (GPS), so users do not have to re-enter them each time they want to know where they are. In the future, however, they might have to download an app, because the time standard will have been replaced by sophisticated transitions in ytterbium.
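Both definitions are pure conventions and can be checked by plain arithmetic. The following sketch uses exactly the constants quoted above; the function names are invented here for illustration:

```python
# Illustrative sketch of the two SI definitions quoted above; the constants
# are those given in the text, the function names are invented.
CS133_PERIODS_PER_SECOND = 9_192_631_770    # hyperfine periods defining one second
SPEED_OF_LIGHT_M_PER_S = 299_792_458        # light covers one meter in 1/c seconds

def seconds_from_periods(periods):
    """Duration, in seconds, of a process spanning the given number of periods."""
    return periods / CS133_PERIODS_PER_SECOND

def meters_from_travel_time(seconds):
    """Length of the path light travels in vacuum in the given time."""
    return SPEED_OF_LIGHT_M_PER_S * seconds
```

By construction, a full count of caesium periods yields one second, and light covers one meter in 1/299,792,458 of a second.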
The conventional character of the time concept should not tempt us to believe that everything is somehow relative and, as a result, arbitrary. The relation of one's own pulse to an atomic clock is absolute and as real as the relation of an hourglass to the path of the sun. The exact sciences are relational sciences. They are not about the thing-in-itself, as Newton and Kant still dreamt, but about relations, as Leibniz and, later, Mach pointed out.
It is not surprising that the physical time standard turned out to be rather impractical for other sciences. The psychology of time perception tells us – and you will all agree – that perceived age is quite different from physical age: the older we get, the shorter the years seem. If we simply assume that perceived duration is inversely proportional to physical age, and that a 20-year-old perceives a physical year as one psychological year, we arrive at the surprising finding that at a physical age of 90 one has also accumulated 90 felt years. With an assumed life expectancy of 90 years, 67% (or 82%) of your felt lifetime is already behind you at 20 (or 40) physical years.
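The 67% and 82% figures follow directly from the stated assumption. A short sketch reproduces them; note that taking age 1 as the point where felt time begins to accumulate is an additional assumption, and the function names are invented:

```python
import math

def felt_age(physical_age, calibration_age=20.0, start_age=1.0):
    """Felt age if perceived duration is inversely proportional to physical age.
    Calibrated so that one physical year feels like one year at calibration_age;
    start_age = 1 (an assumption) marks where felt time begins to accumulate."""
    return calibration_age * math.log(physical_age / start_age)

life_expectancy = 90.0
total_felt = felt_age(life_expectancy)        # ≈ 90 felt years at physical age 90
share_at_20 = felt_age(20.0) / total_felt     # ≈ 0.67
share_at_40 = felt_age(40.0) / total_felt     # ≈ 0.82
```

The coincidence that 90 felt years accrue by physical age 90 falls out of the calibration at age 20: 20 · ln(90) ≈ 90.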
Before we start to wallow in melancholy in the face of the "relativity of time", let me quote Augustine once more: "But at any rate this much I dare affirm I know: that if nothing passed there would be no past time; if nothing were approaching, there would be no future time; if nothing were, there would be no present time." Well – or, as Bob Dylan sings, "The times they are a-changin'".
I wish you an exciting time reading this issue.
Prof. Martin Wilkens
Professor of Quantum Optics
Portal Wissen = Believe
(2014)
People want to know what is real. Children enjoy listening to a story, but when my children were about four years old they started asking whether the story had really happened or was just invented. Likewise, only on a higher level, our academic curiosity is fuelled by the desire to know what is real. Even when we analyze poetic texts or dreams, we do so in order to distinguish the facts (e.g. neurological ones or linguistic structures) from merely assumed influences. Ideally we can present results that others can follow logically and that can be repeated empirically. In most cases, however, this is not possible. We cannot read every book or look through every microscope, not even within our own discipline. In everyday life we depend all the more on trusting the information of others, whether we want to know the way to the train station or what the weather is like in Ulaanbaatar. This is why we are used to believing others, from our friends to the news anchors. This is not childish behavior but a necessity. Of course, it is risky, because they could all be lying to us, as in "The Truman Show". We can only know that we are in reality when we transcend our self-consciousness and accept two propositions: first, that we are not only objects but also subjects in the consciousness of others; and second, that our dialogic relations are in turn observed by a third party that is not part of this intersubjective world.
For religious people this is "belief" – belief as the assumption that all human relations only become real, serious, and beyond doubt when they know themselves to be under the eyes of God. Only before Him is something in itself, and not merely "for me" or "among us". That is why biblical language distinguishes between three forms of belief: the relationship with the world of things ("to believe that"), the relationship to the world of subjects ("to believe somebody"), and the assumption of a subject-like supernatural reality ("to believe in", or "faith"). From an academic point of view, belief is a holistic hypothesis. Belief is not the opposite of knowledge; it is the attempt to save reality from doubt by comprehending the fragile empirical world as the expression of a stable transcendent world.
When I talk to students, they often ask not only what I know but what I believe. As a professor of Religious Studies and a believing Catholic, I am caught in the middle. On the one hand, it is my duty as a professor to doubt everything, i.e. to attribute each religious text to its historical context and sociological functions. On the other hand, as a Christian I consider certain religious documents – in my case the Bible – an interpretable but nevertheless irreversible, revealed text about the origin of reality. On weekdays the New Testament is a collection of ancient writings among many others; on Sundays it is revelation. The two perspectives can be clearly distinguished, but it is difficult to decide whether doubt or belief is more real.
This issue of "Portal Wissen" explores this dual relationship to belief. What is the attitude of science towards belief – whether religious or not? Where does science bring things to light that we can hardly believe or that make us believe (again)? What happens when research clears up erroneous assumptions or myths? Is science able to investigate things that are convincing but inexplicable? And how can it remain credible while continuing to develop?
These questions appear again and again in the contributions of this “Portal Wissen”. They form a manifold, exciting and surprising picture of the research projects and academics at the University of Potsdam. Believe me, it will be an enjoyable read.
Prof. Johann Hafner
Professor of Religious Studies with Focus on Christianity Dean of the Faculty of Arts
Different methods for determining georadar (ground-penetrating radar) wave velocities were developed and successfully applied. The methods employ statistical techniques and swarm-intelligence algorithms. It was shown that the new methods yield georadar wave velocities faster, more precisely, and with better reproducibility than conventional methods.
Improved georadar wave-velocity values make it possible to correct the distorted three-dimensional images of the uppermost ten meters of the subsurface that can be generated from georadar data. In these corrected images, realistic depths of layers or objects in the subsurface can be measured more reliably. More precise wave velocities also improve the determination of soil parameters such as water content or clay fraction. The presented methods allow a quantitative estimate of the errors of the determined wave velocities and of the resulting depths and soil parameters. The advantages of these newly developed methods for characterizing the uppermost meters of the subsurface were demonstrated on field examples.
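The practical payoff of a better velocity estimate is time-to-depth conversion. A minimal sketch (not the thesis's actual methods; the velocity value is purely illustrative) shows how a velocity error propagates linearly into a depth error:

```python
def gpr_depth(twt_ns, v_m_per_ns, v_err_m_per_ns=0.0):
    """Reflector depth from two-way travel time: depth = v * t / 2.
    The velocity uncertainty is propagated linearly into a depth uncertainty."""
    depth = v_m_per_ns * twt_ns / 2.0
    depth_err = v_err_m_per_ns * twt_ns / 2.0
    return depth, depth_err

# Illustrative values only: ~0.15 m/ns is a common textbook velocity for dry sand.
d, d_err = gpr_depth(twt_ns=80.0, v_m_per_ns=0.15, v_err_m_per_ns=0.01)
# d = 6.0 m, d_err = 0.4 m
```

A reflector at 80 ns two-way time thus sits at 6.0 m, and a 0.01 m/ns velocity error alone already shifts it by 0.4 m – which is why quantitative error estimates for the velocity matter.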
We report on the development of an on-chip RPA (recombinase polymerase amplification) with simultaneous multiplex isothermal amplification and detection on a solid surface. The isothermal RPA was applied to amplify specific target sequences from the pathogens Neisseria gonorrhoeae, Salmonella enterica and methicillin-resistant Staphylococcus aureus (MRSA) using genomic DNA. Additionally, a positive plasmid control was established as an internal control. The four targets were amplified simultaneously in a quadruplex reaction. The amplicon is labeled during on-chip RPA by reverse oligonucleotide primers coupled to a fluorophore. Both amplification and spatially resolved signal generation take place on immobilized forward primers bound to epoxy-silanized glass surfaces in a pump-driven hybridization chamber. The combination of microarray technology and sensitive isothermal nucleic acid amplification at 38 °C allows for a multiparameter analysis on a rather small area. The on-chip RPA was characterized in terms of reaction time, sensitivity and inhibitory conditions. A successful enzymatic reaction is completed in <20 min and results in detection limits of 10 colony-forming units for methicillin-resistant Staphylococcus aureus and Salmonella enterica and 100 colony-forming units for Neisseria gonorrhoeae. The results show this method to be useful with respect to point-of-care testing and to enable simplified and miniaturized nucleic acid-based diagnostics.
The Babylonian Talmud (BT) attributes the idea of committing a transgression for the sake of God to R. Nahman b. Isaac (RNBI). RNBI's statement appears in two parallel sugyot in the BT (Nazir 23a; Horayot 10a). Each sugya has four textual witnesses. By comparing these textual witnesses, this paper attempts to reconstruct the sugya's earlier (or, as some might term it, original) dialectical form, from which the two familiar versions of the text in Nazir and Horayot evolved. This article reveals the specific ways in which value-laden conceptualizations have had a major impact on the formulation of the Talmud as we know it today.
Ultraschall Berlin
(2014)
We propose a novel cluster-based reduced-order modelling (CROM) strategy for unsteady flows. CROM combines the cluster analysis pioneered in Gunzburger's group (Burkardt, Gunzburger & Lee, Comput. Meth. Appl. Mech. Engng, vol. 196, 2006a, pp. 337-355) and transition matrix models introduced in fluid dynamics in Eckhardt's group (Schneider, Eckhardt & Vollmer, Phys. Rev. E, vol. 75, 2007, art. 066313). CROM constitutes a potential alternative to POD models and generalises the Ulam-Galerkin method classically used in dynamical systems to determine a finite-rank approximation of the Perron-Frobenius operator. The proposed strategy processes a time-resolved sequence of flow snapshots in two steps. First, the snapshot data are clustered into a small number of representative states, called centroids, in the state space. These centroids partition the state space in complementary non-overlapping regions (centroidal Voronoi cells). Departing from the standard algorithm, the probabilities of the clusters are determined, and the states are sorted by analysis of the transition matrix. Second, the transitions between the states are dynamically modelled using a Markov process. Physical mechanisms are then distilled by a refined analysis of the Markov process, e. g. using finite-time Lyapunov exponent (FTLE) and entropic methods. This CROM framework is applied to the Lorenz attractor (as illustrative example), to velocity fields of the spatially evolving incompressible mixing layer and the three-dimensional turbulent wake of a bluff body. For these examples, CROM is shown to identify non-trivial quasi-attractors and transition processes in an unsupervised manner. CROM has numerous potential applications for the systematic identification of physical mechanisms of complex dynamics, for comparison of flow evolution models, for the identification of precursors to desirable and undesirable events, and for flow control applications exploiting nonlinear actuation dynamics.
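The two steps described above – clustering snapshots into centroids, then modelling cluster-to-cluster transitions as a Markov process – can be sketched on toy one-dimensional data. This is an illustrative reduction, not the authors' implementation, and all names are invented:

```python
def kmeans_1d(data, k, iters=50):
    """Tiny deterministic 1-D k-means: centroids start on evenly spaced sorted
    unique values; returns final centroids and one cluster label per snapshot."""
    pts = sorted(set(data))
    centroids = [pts[i * (len(pts) - 1) // (k - 1)] for i in range(k)]
    labels = [0] * len(data)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: (x - centroids[j]) ** 2) for x in data]
        for j in range(k):
            members = [x for x, lab in zip(data, labels) if lab == j]
            if members:
                centroids[j] = sum(members) / len(members)
    return centroids, labels

def transition_matrix(labels, k):
    """Row-stochastic matrix of empirical cluster-to-cluster transition probabilities."""
    counts = [[0] * k for _ in range(k)]
    for a, b in zip(labels, labels[1:]):
        counts[a][b] += 1
    rows = []
    for row in counts:
        total = sum(row)
        rows.append([c / total if total else 0.0 for c in row])
    return rows

# Toy "snapshots": a scalar observable alternating between a low and a high state.
snapshots = [0.0, 0.1, 1.0, 1.1] * 25
centroids, labels = kmeans_1d(snapshots, k=2)
P = transition_matrix(labels, k=2)   # each state is kept/left with probability ~0.5
```

Real CROM applications cluster high-dimensional velocity-field snapshots rather than scalars, but the structure – centroids partitioning state space, a Markov chain over them – is the same.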
claspfolio 2
(2014)
Building on the award-winning, portfolio-based ASP solver claspfolio, we present claspfolio 2, a modular and open solver architecture that integrates several different portfolio-based algorithm selection approaches and techniques. The claspfolio 2 solver framework supports various feature generators, solver selection approaches, solver portfolios, as well as solver-schedule-based pre-solving techniques. The default configuration of claspfolio 2 relies on a light-weight version of the ASP solver clasp to generate static and dynamic instance features. The flexible open design of claspfolio 2 is a distinguishing factor even beyond ASP. As such, it provides a unique framework for comparing and combining existing portfolio-based algorithm selection approaches and techniques in a single, unified framework. Taking advantage of this, we conducted an extensive experimental study to assess the impact of different feature sets, selection approaches and base solver portfolios. In addition to gaining substantial insights into the utility of the various approaches and techniques, we identified a default configuration of claspfolio 2 that achieves substantial performance gains not only over clasp's default configuration and the earlier version of claspfolio, but also over manually tuned configurations of clasp.
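At its core, portfolio-based algorithm selection maps instance features to a solver choice. A minimal 1-nearest-neighbour sketch of that idea follows; the solver names and data are hypothetical, and claspfolio 2 itself supports far richer selection approaches than this:

```python
def select_solver(training_data, instance_features):
    """Pick the solver that ran fastest on the most similar training instance
    (1-nearest-neighbour selection; a stand-in for learned selection models)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(training_data,
                  key=lambda rec: sq_dist(rec["features"], instance_features))
    return min(nearest["runtimes"], key=nearest["runtimes"].get)

# Hypothetical data: two instance classes on which different clasp
# configurations (invented names) dominate.
training_data = [
    {"features": [0.9, 0.1], "runtimes": {"clasp-A": 2.0, "clasp-B": 9.0}},
    {"features": [0.1, 0.8], "runtimes": {"clasp-A": 7.0, "clasp-B": 1.5}},
]
assert select_solver(training_data, [0.85, 0.2]) == "clasp-A"
assert select_solver(training_data, [0.2, 0.9]) == "clasp-B"
```

The framework's feature generators would supply `instance_features`, and pre-solving schedules would run cheap solvers briefly before this per-instance choice is made.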