Background
Wearables, small portable computer systems worn on the body, can track user fitness and health data, which can be used to set health insurance premiums individually. In particular, insured individuals with a healthy lifestyle can receive a reduction in their premiums. In practice, however, this potential is hardly used.
Objective
This study aims to identify the barrier factors that impede the use of wearables for assessing individual risk scores for health insurance, despite its technological feasibility, and to rank these barriers by relevance.
Methods
To reach these goals, we conducted a ranking-type Delphi study in three stages. First, we collected possible barrier factors from a panel of 16 experts and consolidated them into a list of 11 barrier categories. Second, the panel was asked to rank the categories by relevance. Third, to improve panel consensus, the ranking was revealed to the experts, who were then asked to re-rank the barriers.
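The abstract does not say how panel consensus was measured; a common statistic for ranking-type Delphi studies is Kendall's coefficient of concordance W. A minimal sketch (the rankings below are invented for illustration):

```python
def kendalls_w(rankings):
    """Kendall's W for m complete rankings of n items (no ties).

    rankings[j][i] is the rank expert j assigned to item i
    (1 = most relevant). W ranges from 0 (no agreement) to 1
    (perfect agreement).
    """
    m = len(rankings)           # number of experts
    n = len(rankings[0])        # number of items (barrier categories)
    totals = [sum(r[i] for r in rankings) for i in range(n)]  # rank sums
    mean_total = m * (n + 1) / 2
    s = sum((t - mean_total) ** 2 for t in totals)
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Two hypothetical rounds over four items: after the ranking is fed back,
# the experts' re-ranking agrees more, so W rises.
round1 = [[1, 2, 3, 4], [2, 1, 4, 3], [4, 3, 2, 1]]
round2 = [[1, 2, 3, 4], [1, 2, 4, 3], [2, 1, 3, 4]]
print(kendalls_w(round1) < kendalls_w(round2))  # True
```

Re-ranking after feedback is expected to raise W; a conventional reading treats W above roughly 0.7 as strong agreement.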
Results
The results suggest that regulation is the most important barrier. Other relevant barriers are false or inaccurate measurements and application errors caused by the users. Additionally, insurers could lack the required technological competence to use the wearable data appropriately.
Conclusion
A wider use of wearables and health apps could be achieved through regulatory modifications, especially regarding privacy. Even under stricter regulation, users' privacy concerns may partly remain if the data exchange between wearable manufacturers, health app providers, and health insurers does not become more transparent.
Adorno und die Kabbala
(2016)
In the ninth volume of the series, Ansgar Martins traces kabbalistic threads in the philosophy of Theodor W. Adorno (1903–1969). The Frankfurt social critic, in the course of his radically materialist project, nevertheless also drew on 'theological' figures of interpretation. Through their mutual friend Walter Benjamin (1892–1940), Adorno encountered the work of the Kabbalah scholar Gershom Scholem (1897–1982), and a lifelong correspondence developed between Frankfurt and Jerusalem.
For Adorno, against the background of seamless capitalist socialization, any religious ascription of meaning in modernity appears impossible. To the tradition of Jewish mysticism, by contrast, he attributes an inner affinity to this hopeless logic of 'decay': it seems to him to call for the unavoidable secularization of religious contents. Adorno's kabbalistic marginalia draw on a broad horizon of Jewish messianic ideas. He never denies that he is concerned with a decidedly this-worldly realization of revealed promises of salvation: transcendence is to be thought of as fulfilled immanence, as realized utopia. Yet it is precisely in this concern that Adorno sees his agreement with the Kabbalah.
Adorno's kabbalistic motifs, which go back to Scholem's research, are examined here in detail across his writings and lectures. In his understanding of the philosophical tradition and in his model of metaphysical experience, for instance, he explicitly sought to connect with interpretations of the Kabbalah: the unattainable archetype of philosophy is the interpretation of revealed scripture. Like secularized sacred texts, works by Beethoven, Goethe, Kafka, and Schönberg thus became occasions for 'mystical' interpretations. Their detailed examination makes it possible to give concrete form to the much-invoked Jewish heritage of Adorno's philosophy and to bring into view noteworthy details ranging from the Negative Dialectics to the aesthetics.
We present the results of Monte Carlo mass-loss predictions for massive stars covering a wide range of stellar parameters. We critically test our predictions against a range of observed mass-loss rates, in light of the recent discussions on wind clumping. We also present a model to compute the clumping-induced polarimetric variability of hot stars, and we compare this with observations of Luminous Blue Variables, for which polarimetric variability is larger than for O and Wolf-Rayet stars. Luminous Blue Variables comprise an ideal testbed for studies of wind clumping and wind geometry, as well as for wind strength calculations, and we propose they may be direct supernova progenitors.
Children’s physical fitness development and the related moderating effects of age and sex are well documented, especially boys’ and girls’ divergence during puberty. The situation might be different during prepuberty. As girls mature approximately two years earlier than boys, we tested a possible convergence of performance with five tests representing four components of physical fitness in a large sample of 108,295 eight-year-old third-graders. Within this single prepubertal year of life and irrespective of the test, performance increased linearly with chronological age, and boys outperformed girls to a larger extent in tests requiring muscle mass for successful performance. Tests differed in the magnitude of age effects (gains), but there was no evidence for an interaction between age and sex. Moreover, the “physical fitness” of schools correlated at r = 0.48 with their age effect, which might imply that “fit schools” promote larger gains; expected secular trends from 2011 to 2019 were replicated.
Agenda 21-Prozesse für zukunftsfähige Kommunen in Brandenburg. Contents:
- KROHN, A.: Stadtentwicklung und Lokale Agenda 21 – Zwei Seiten einer Medaille
- MATERNE, S.: Agenda 21 in Oranienburg – die Entwicklung eines Leitbildes
- SCHLUTOW, A.; WILHELM, B.; METZDORF, R.; WILK, B.; FÖRSTER, B.: Interessengemeinschaft "Ökologie 2000 – Unternehmer für die Umwelt" – Anstoß der Wirtschaft für eine lokale Agenda 21 in Strausberg
- SCHADE, B.: Agenda 21 im Landkreis Potsdam-Mittelmark – Rahmen für lokale Aktivitäten
- KITZIG, A.: Potsdam, Stadt der Toleranz – unterwegs mit Geschichts- und Verantwortungsbewußtsein für die Zukunft. Die Lokale Agenda 21
- MÜLLER, J.: Umsetzung eines Klimaschutzkonzeptes – Schritte zu einer nachhaltigen Entwicklung der Stadt Eberswalde
- HAASE, W.: Eine lokale Agenda 21 für Kleinmachnow
- RÜCKERT-JOHN, J.: Auf dem Weg zur Nachhaltigkeit. Ergebnisse einer Dorfstudie
The optical density of human macular pigment was measured for 50 observers ranging in age from 10 to 90 years. The psychophysical method required adjusting the radiance of a 1°, monochromatic light (400–550 nm) to minimize flicker (15 Hz) when presented in counterphase with a 460 nm standard. This test stimulus was presented superimposed on a broad-band, short-wave background. Macular pigment density was determined by comparing sensitivity under these conditions for the fovea, where macular pigment is maximal, and 5° temporally. This difference spectrum, measured for 12 observers, matched Wyszecki and Stiles's standard density spectrum for macular pigment. To study variation in macular pigment density for a larger group of observers, measurements were made at only selected spectral points (460, 500 and 550 nm). The mean optical density at 460 nm for the complete sample of 50 subjects was 0.39. Substantial individual differences in density were found (ca. 0.10–0.80), but this variation was not systematically related to age.
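In such heterochromatic flicker photometry, the pigment density at a wavelength follows from the ratio of the test radiances needed to null flicker at the fovea (where the pigment attenuates the light) versus the parafoveal reference locus. A minimal sketch with invented radiance settings (units are arbitrary; only the ratio matters):

```python
import math

def pigment_density(radiance_fovea, radiance_parafovea):
    """Optical density of macular pigment at one wavelength.

    Both arguments are the test radiances needed to null flicker against
    the standard. The pigment absorbs short-wave light at the fovea, so
    more radiance is needed there; OD = log10(foveal / parafoveal).
    """
    return math.log10(radiance_fovea / radiance_parafovea)

# Hypothetical flicker-null settings at 460 nm; this pair reproduces the
# reported sample mean of 0.39 log units.
print(round(pigment_density(2.45, 1.0), 2))  # 0.39
```

A density of 0.39 means the fovea receives only about 10^(-0.39) ≈ 41% of the incident 460 nm light.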
Wild bee species are important pollinators in agricultural landscapes. However, population declines have been reported over the last decades and are still ongoing. While agricultural intensification is a major driver of the rapid loss of pollinating species, transition zones between arable fields and forest or grassland patches, i.e., agricultural buffer zones, are frequently mentioned as suitable mitigation measures to support wild bee populations and other pollinator species. Despite the reported general positive effect, it remains unclear what amount of buffer zones is needed to ensure a sustainable and permanent impact on bee diversity and abundance. To address this question at the pollinator community level, we implemented a process-based, spatially explicit simulation model of functional bee diversity dynamics in an agricultural landscape. More specifically, we introduced a variable amount of agricultural buffer zones (ABZs) at the transition of arable to grassland, or arable to forest patches, to analyze the impact on bee functional diversity and functional richness. We focused our study on solitary bees in a typical agricultural area in the northeast of Germany. Our results showed positive effects with at least 25% of virtually implemented agricultural buffer zones. However, higher amounts of ABZs of at least 75% should be considered to ensure a sufficient increase in Shannon diversity and decrease in quasi-extinction risks. These high amounts of ABZs represent effective conservation measures to safeguard the stability of pollination services provided by solitary bee species. As the model structure can be easily adapted to other mobile species in agricultural landscapes, our community approach offers the chance to compare the effectiveness of conservation measures for other pollinator communities in the future.
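The Shannon diversity used as an outcome above is computed from species abundances; higher evenness at equal richness raises the index. A minimal sketch (the two community abundance vectors are invented for illustration):

```python
import math

def shannon_diversity(abundances):
    """Shannon index H' = -sum(p_i * ln p_i) over species with abundance > 0."""
    total = sum(abundances)
    return -sum((a / total) * math.log(a / total) for a in abundances if a > 0)

# Hypothetical solitary-bee communities with equal species richness:
# buffer zones tend to even out abundances, which raises H'.
without_abz = [80, 10, 5, 3, 2]
with_abz = [30, 25, 20, 15, 10]
print(shannon_diversity(with_abz) > shannon_diversity(without_abz))  # True
```

For n equally abundant species the index reaches its maximum, ln(n).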
This study seeks to determine which judgment models and concepts managers, administrators, experts, and laypersons use to evaluate risks. For this purpose we employ conjoint analysis, a method that copes better with specific problems of psychometric risk research and that, to our knowledge, has not yet been used in risk-perception research. Contents: aim of the study; research question; conjoint analysis; study design; sample and data-collection procedure; results: risk-acceptance values, decision importance of the risks, group analysis.
This thesis describes the analysis of observations of two sunspots in two-dimensional spectro-polarimetry. The data were acquired with the Fabry-Pérot interferometer of the University of Göttingen at the Vacuum Tower Telescope on Tenerife. For active region NOAA 9516, the full Stokes vector of the polarized light was observed in single exposures in the absorption line at 630.249 nm, and for active region NOAA 9036 a 90-minute time series of the circularly polarized light was recorded at a wavelength of 617.3 nm. From the reduced data, values for intensity, line-of-sight velocity, magnetic field strength, and various other plasma parameters are derived. Several approaches to the inversion of solar model atmospheres are applied and compared, and the partly considerable error influences are discussed in detail. The frequency behavior of the results and their dependence on location and time are analyzed further by means of Fourier and wavelet transforms. As a result, the existence of a high-frequency band of velocity oscillations with a central period of 75 seconds (13 mHz) can be confirmed. At larger photospheric heights of about 500 km, the majority of the associated shock waves originate in the dark parts of the granules, in contrast to other frequency ranges. The 75-second oscillations are also observed in the active region, above all in the light bridge. In the identified bands of oscillatory velocity power, pronounced structures are visible in a dark penumbral feature and in the light bridge, moving into the quiet Sun at a horizontal speed of 5–8 km/s. They show a marked increase in power, especially in the five-minute band, and are possibly related to the phenomenon of "Evershed clouds".
Limited by a very low signal-to-noise ratio and large error influences, magnetic field variations with a period of six minutes are also observed at the transition from umbra to penumbra near a light bridge. To obtain these results, existing visualization methods for frequency analysis were improved or newly developed, in particular for results of the wavelet transform.
Algorithmen als Dozierende?
(2023)
Tools based on machine learning have long since found their way into our everyday lives, and first applications have also been developed, tested, and evaluated in teacher education. In the physics education subproject of focus area 2, "Schulpraktische Studien" (practical school studies), automated analysis methods (Wulff et al., 2020) were developed on the basis of a framework model for reflection (Nowak et al., 2019) and found their way into university subject-didactics teaching (Mientus et al., 2021a). The project demonstrated and consolidated the potential of AI-based support and identified specific challenges. This contribution sketches selected application possibilities and further research from the perspective of the acceptance of computer-supported teaching.
Alles auf (Studien-)Anfang? Faktoren für den Studienerfolg in der Eingangsphase und zur Studienmitte
(2020)
The high dropout rates, especially in the introductory phase of studies, have prompted universities in Germany to take a variety of measures, though little is known so far about their effects. This contribution presents findings from a research project focused on the introductory study phase and, in addition, on the middle of the study program; its aim was to identify the conditions of a successful start to university studies and to derive recommendations for optimizing the study entry phase. The research design comprised, in addition to qualitative studies, above all a quantitative longitudinal survey at five universities (Potsdam, Mainz, Magdeburg, Kiel, and Greifswald). The analyses confirmed the guiding hypothesis that measures at the start of studies contribute to increasing student success above all when they foster academic and social integration into the university. Factors such as identification with the subject of study, self-efficacy, career-oriented or success-oriented learning motivation, and academic integration are accordingly particularly important for student success. In addition, a positive influence of the social climate and of the research and practice orientation on satisfaction with studies was demonstrated. Further analyses of the middle phase of studies also make clear that the same factors play a role in both study phases (entry and middle), though partly with different weights. Social integration is thus an essential predictor in both phases: in the entry phase, integration into the student body; later in the program (middle phase), integration into the academic community (in the form of teaching staff). The opening question must therefore be answered as follows: yes, everything depends on the beginning, but then with efforts to fully ensure the social and academic integration of all students.
The findings also draw attention to the apparently underestimated role of utility motives.
Am alten Markt
(2012)
This contribution presents a planned digital teaching concept for the university-based supervision of the practical semester in the subjects biology and mathematics. In it, students in the practical semester independently set subject-specific priorities for planning, carrying out, and reflecting on teaching projects in school. Ideally, the teaching projects also yield a subject-related research question that the students can explore through research-based learning. The entire process is accompanied by systematically guided e-peer assessments (E-PA), in which students enter into a digital, written dialogue about teaching in peer groups and jointly go through cycles of reflection and feedback instructed by the university seminar leadership.
As mid-19th-century American Jews introduced radical changes to their religious observance and began to define Judaism in new ways, to what extent did they engage with European Jewish ideas? Historians often approach religious change among Jews from German lands during this period as if Jewish immigrants had come to America with one set of ideas that then evolved solely in conversation with their American contexts. Historians have similarly cast the kinds of Judaism Americans created as both unique to America and uniquely American. These characterizations are accurate to an extent. But to what extent did Jewish innovations in the United States take place in conversation with European Jewish developments? Looking to the 19th-century American Jewish press, this paper seeks to understand how American Jews engaged European Judaism in formulating their own ideas, understanding themselves, and understanding their place in world Judaism.
The "Forum" of WeltTrends No. 32 assembles sixteen analyses of the aftermath of the September 2001 terrorist attacks on the United States, written by distinguished scholars from Germany, Britain, France, the U.S., the Czech Republic, Russia and China. The contributions deal with topics such as international and domestic security, the social and political causes of terrorism, international law, asylum policy, the classification of the attacks as crimes or acts of war, implications for international bodies such as NATO and the UN, and the effect of the attacks on the relationship between the U.S., Europe, Russia, and Asia, in particular Japan and China. The authors counsel strongly against scare mongering and short-term symbolic politics. Any attempt to deal with the complex problem of terrorism has to include long-term political and social policies aimed at the reduction of conflict and sources of political extremism in the Middle Eastern region. There is no reason for panic according to the authors but international politics after September 11th cannot go on like before.
Two-dimensional gas chromatography coupled to time-of-flight mass spectrometry (GCxGC-TOF-MS) is a promising technique to overcome the limits of complex metabolome analysis using one-dimensional GC-TOF-MS. Especially at the stage of data export and data mining, however, convenient procedures to cope with the complexity of GCxGC-TOF-MS data are still in development. Here, we present a high-sample-throughput protocol exploiting the first and second retention index for spectral library search and the subsequent construction of a high-dimensional data matrix useful for statistical analysis. The method was applied to the analysis of 13C-labelling experiments in the unicellular green alga Chlamydomonas reinhardtii. We developed a rapid sampling and extraction procedure for the Chlamydomonas reinhardtii laboratory strain (CC503), a cell wall deficient mutant. By testing all published quenching protocols, we observed dramatic leakage rates for certain metabolites. To circumvent metabolite leakage, samples were directly quenched and analyzed without separation of the medium. The growth medium was adapted to this rapid sampling protocol to avoid interference with GCxGC-TOF-MS analysis. To analyse batches of samples, a new software tool, MetMax, was implemented that extracts the isotopomer matrix from stable isotope labelling experiments together with the first and second retention index (RI1 and RI2). To exploit RI1 and RI2 for metabolite identification, we used the Golm Metabolome Database (GMD) [1] with RI1/RI2 reference spectra and new search algorithms. Using these techniques, we analysed the dynamics of 13CO2 and 13C-acetate uptake in Chlamydomonas reinhardtii cells in two different steady states, namely photoautotrophic and mixotrophic growth conditions.
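The RI1/RI2 library search described above amounts to filtering reference entries by two retention-index tolerance windows before spectral comparison. A minimal sketch of that filtering step; the library entries, tolerance values, and function name are invented, not taken from the GMD:

```python
def match_compound(ri1, ri2, library, tol1=20.0, tol2=0.2):
    """Return library entries whose first and second retention indices both
    fall within the given tolerance windows around the measured values.
    A stand-in for the RI1/RI2 pre-filter of a spectral library search."""
    return [name for name, (lib_ri1, lib_ri2) in library.items()
            if abs(ri1 - lib_ri1) <= tol1 and abs(ri2 - lib_ri2) <= tol2]

# Invented reference entries: name -> (RI1, RI2)
library = {
    "alanine": (1100.0, 1.10),
    "glycine": (1310.0, 1.25),
    "sucrose": (2620.0, 1.90),
}
print(match_compound(1105.0, 1.12, library))  # ['alanine']
```

In practice the surviving candidates would then be ranked by mass-spectral similarity; the two-index filter merely prunes the search space.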
Background: The prevalence of diabetes worldwide is predicted to increase from 2.8% in 2000 to 4.4% in 2030. Diabetic neuropathy (DN) is associated with damage to nerve glial cells, their axons, and endothelial cells leading to impaired function and mobility.
Objective: We aimed to examine the effects of an endurance-dominated exercise program on maximum oxygen consumption (VO2max), ground reaction forces, and muscle activities during walking in patients with moderate DN.
Methods: Sixty male and female individuals aged 45–65 years with DN were randomly assigned to an intervention (IG, n = 30) or a waiting control (CON, n = 30) group. The research protocol of this study was registered with the Local Clinical Trial Organization (IRCT20200201046326N1). IG conducted an endurance-dominated exercise program including exercises on a bike ergometer and gait therapy. The progressive intervention program lasted 12 weeks with three sessions per week, each 40–55 min. CON received the same treatment as IG after the post-tests. Pre- and post-training, VO2max was tested during a graded exercise test using spiroergometry. In addition, ground reaction forces and lower limbs muscle activities were recorded while walking at a constant speed of ∼1 m/s.
Results: No statistically significant baseline between-group differences were observed for any of the analyzed variables. Significant group-by-time interactions were found for VO2max (p < 0.001; d = 1.22). The post-hoc test revealed a significant increase in IG (p < 0.001; d = 1.88) but not CON. Significant group-by-time interactions were observed for peak lateral and vertical ground reaction forces during heel contact and peak vertical ground reaction force during push-off (p = 0.001–0.037; d = 0.56–1.53). For IG, post-hoc analyses showed decreases in peak lateral (p < 0.001; d = 1.33) and vertical (p = 0.004; d = 0.55) ground reaction forces during heel contact and increases in peak vertical ground reaction force during push-off (p < 0.001; d = 0.92). In terms of muscle activity, significant group-by-time interactions were found for vastus lateralis and gluteus medius during the loading phase, for vastus medialis during the mid-stance phase, and for gastrocnemius medialis during the push-off phase (p = 0.001–0.044; d = 0.54–0.81). Post-hoc tests indicated significant intervention-related increases in vastus lateralis (p = 0.001; d = 1.08) and gluteus medius (p = 0.008; d = 0.67) activity during the loading phase and vastus medialis activity during mid-stance (p = 0.001; d = 0.86). In addition, post-hoc tests showed decreases in gastrocnemius medialis activity during the push-off phase in IG only (p < 0.001; d = 1.28).
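The d values reported above are effect sizes; the abstract does not state the exact formula, but the standard pooled-SD Cohen's d is computed as follows (the pre/post change scores below are invented for illustration):

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    var_a = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    var_b = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# Hypothetical VO2max changes (ml/kg/min) for intervention vs. control
ig_change = [4.1, 3.8, 5.0, 4.4, 3.6]
con_change = [0.3, -0.2, 0.5, 0.1, 0.0]
print(cohens_d(ig_change, con_change) > 0.8)  # True: a large effect by convention
```

By the usual convention, d ≈ 0.2 is small, 0.5 medium, and 0.8 or above large, so the reported d = 1.22 for VO2max is a large effect.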
Conclusions: This study demonstrated that an endurance-dominated exercise program has the potential to improve VO2max and diabetes-related abnormal gait in patients with DN. The observed decreases in peak vertical ground reaction force during the heel contact of walking could be due to increased vastus lateralis and gluteus medius activity during the loading phase. Accordingly, we recommend implementing endurance-dominated exercise programs in type 2 diabetic patients, because they are feasible and safe and effectively improve aerobic capacity and gait characteristics.
We launched an original large-scale experiment concerning informatics learning in French high schools. We are using the France-IOI platform to federate resources and share observations for research. The first step is the implementation of an adaptive hypermedia based on very fine-grained epistemic modules for learning Python programming. We define the traces that need to be collected in order to study the navigation trajectories the pupils draw across this hypermedia. It may be browsed by pupils either as a course support or as extra help in solving the list of exercises (mainly for discovering algorithmics). By leaving the locus of control with the learner, we want to observe the different trajectories they draw through our system. These trajectories may be abstracted, interpreted as strategies, and then compared for their relative efficiency. Our hypothesis is that learners have different profiles and may use the appropriate strategy accordingly. This paper presents the research questions, the method, and the expected results.
We describe a framework to support the implementation of web-based systems to manipulate data stored in relational databases. Since the conceptual model of a relational database is often specified as an entity-relationship (ER) model, we propose to use the ER model to generate a complete implementation in the declarative programming language Curry. This implementation contains operations to create and manipulate entities of the data model, supports authentication, authorization, session handling, and the composition of individual operations to user processes. Furthermore and most important, the implementation ensures the consistency of the database w.r.t. the data dependencies specified in the ER model, i.e., updates initiated by the user cannot lead to an inconsistent state of the database. In order to generate a high-level declarative implementation that can be easily adapted to individual customer requirements, the framework exploits previous works on declarative database programming and web user interface construction in Curry.
We prove in this paper an existence result for infinite-dimensional stationary interacting Brownian diffusions. The interaction is assumed to be small in the sup norm ||.||∞ but is otherwise very general, being possibly non-regular and non-Markovian. Our method consists in using the characterization of such diffusions as space-time Gibbs fields, so that we construct them by space-time cluster expansions in the small coupling parameter.
Background
Eating in the absence of hunger is quite common and often associated with increased energy intake and poorer food choices. Intuitive eating (IE), i.e., eating in accordance with internal hunger and satiety cues, may protect from overeating. IE, however, requires accurate perception and processing of one's own bodily signals, also referred to as interoceptive sensitivity. Training interoceptive sensitivity might therefore be an effective method to promote IE and prevent overeating. As most studies on eating behavior are conducted in younger adults, and close social relationships influence health-related behavior, this study focuses on middle-aged and older couples.
Methods
The present pilot randomized intervention study aims at investigating the feasibility and effectiveness of a 21-day mindfulness-based training program designed to increase interoceptive sensitivity. A total of N = 60 couples participating in the NutriAct Family Study, aged 50–80 years, will be recruited. This randomized-controlled intervention study comprises three measurement points (pre-intervention, post-intervention, 4-week follow-up) and a 21-day training that consists of daily mindfulness-based guided audio exercises (e.g., body scan). A three-arm intervention study design is applied to compare two intervention groups (training together as a couple vs. training alone) with a control group (no training). Each measurement point includes the assessment of self-reported and objective indicators of interoceptive sensitivity (primary outcome), self-reported indicators of intuitive and maladaptive eating (secondary outcomes), and additional variables. A training evaluation applying focus group discussions will be conducted to assess participants’ overall acceptance of the training and its feasibility.
Discussion
By investigating the feasibility and effectiveness of a mindfulness-based training program to increase interoceptive sensitivity, the present study will contribute to a deeper understanding of how to promote healthy eating in older age.
The present thesis introduces an iterative expert-based Bayesian approach for assessing greenhouse gas (GHG) emissions from the 2030 German new vehicle fleet and quantifying the impacts of their main drivers. A first set of expert interviews was carried out to identify technologies that may help to lower car GHG emissions and to quantify their emission-reduction potentials. Moreover, the experts were asked for their probability assessments that the different technologies will be widely adopted, as well as for important prerequisites that could foster or hamper their adoption. Drawing on the results of these interviews, a Bayesian Belief Network (BBN) was built which explicitly models three vehicle types: internal combustion engine vehicles (including mild and full hybrid electric vehicles), plug-in hybrid electric vehicles, and battery electric vehicles. The conditional dependencies of twelve central variables within the BBN, including battery energy, fuel and electricity consumption, relative costs, and sales shares of the vehicle types, were quantified by experts from German car manufacturers in a second series of interviews. Each of the seven second-round interviews yielded an individually specified BBN for that expert. The BBNs were run for different hypothetical 2030 scenarios which differ, e.g., with regard to battery development, regulation, and fuel and electricity GHG intensities. The present thesis delivers results both on the subject of the investigation and on its method. On the subject level, the experts expect 2030 German new car fleet emissions to be at 50 to 65% of 2008 new fleet emissions under the baseline scenario.
They can be further reduced to 40 to 50% of the 2008 fleet's emissions through a combination of a higher share of renewables in the electricity mix, a larger share of biofuels in the fuel mix, and a stricter regulation of car CO2 emissions in the European Union. Technically, 2030 German new car fleet GHG emissions can be reduced to a minimum of 18 to 44% of 2008 emissions, a development that cannot be triggered by any combination of measures modeled in the BBN alone but needs further commitment. Of the wealth of existing BBNs, few have been specified by individual experts through elicitation, and to my knowledge none of them has been employed for analyzing perspectives for the future. On the level of methods, this work shows that expert-based BBNs are a valuable tool for making experts' expectations for the future explicit and amenable to the analysis of different hypothetical scenarios. BBNs can also be employed for quantifying the impacts of main drivers, and they have proven to be a valuable tool for iterative stakeholder-based science approaches.
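The thesis' twelve-variable networks and their expert-elicited probability tables are not reproduced here; the mechanics of a BBN, though, can be shown on a toy chain. All variable names and probabilities below are invented for illustration, and the marginal is computed by straightforward enumeration over parent states:

```python
# Toy Bayesian Belief Network: Regulation -> BEV share -> Fleet emissions.
# All names and numbers are invented; the thesis' actual BBN has twelve
# expert-quantified variables and richer dependencies.

p_strict_regulation = 0.5

# P(BEV share is high | regulation strict?)
p_bev_high = {True: 0.6, False: 0.2}

# P(fleet emissions are low | BEV share high?)
p_low_emissions = {True: 0.8, False: 0.3}

def prob_low_emissions():
    """Marginal P(low emissions), enumerating all parent configurations."""
    total = 0.0
    for strict in (True, False):
        p_reg = p_strict_regulation if strict else 1 - p_strict_regulation
        for bev_high in (True, False):
            p_bev = p_bev_high[strict] if bev_high else 1 - p_bev_high[strict]
            total += p_reg * p_bev * p_low_emissions[bev_high]
    return total

print(round(prob_low_emissions(), 2))  # 0.5 with these numbers
```

Scenario analysis then amounts to clamping a node (e.g. setting p_strict_regulation to 1) and recomputing the marginal, which is how the different hypothetical 2030 scenarios can be compared.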
ANNIS
(2004)
In this paper, we discuss the design and implementation of our first version of the database "ANNIS" ("ANNotation of Information Structure"). For research based on empirical data, ANNIS provides a uniform environment for storing this data together with its linguistic annotations. A central database promotes standardized annotation, which facilitates interpretation and comparison of the data. ANNIS is used through a standard web browser and offers tier-based visualization of data and annotations, as well as search facilities that allow for cross-level and cross-sentential queries. The paper motivates the design of the system, characterizes its user interface, and provides an initial technical evaluation of ANNIS with respect to data size and query processing.
Christian (von) Rother, head of the Preußische Seehandlung from 1820 to 1848, was arguably the most formative figure of the institution in the nineteenth century. His life story, as the son of a Silesian farmer, testifies to social mobility and an impressive civil-service career. Rother shaped the Seehandlung into a conglomerate of commercial enterprises that were to be made productive through banking business, highway (chaussee) construction programmes, and state engagement in economic promotion. Its success, however, was mixed. In the 1840s these efforts also met with criticism from entrepreneurial competitors. Lasting significance was achieved by a social institution founded by Rother, the Berlin-based Rother-Stiftung for poor and unmarried daughters of civil servants and officers.
Reviewed work: Ausländer in Deutschland ; Probleme einer transkulturellen Gesellschaft aus geographischer Sicht : mit 11 Tabellen / Anton Escher (Hrsg.). - Mainz : Geograph. Inst., Johannes Gutenberg-Univ., 2000. - VIII, 128 S. : Ill., graph. Darst., Kt. - (Mainzer Kontaktstudium Geographie ; 6) ISBN 3-88250-205-3
Physical activity and exercise are effective approaches in the prevention and therapy of multiple diseases. Although the specific characteristics of lengthening contractions have the potential to be beneficial in many clinical conditions, eccentric training is not commonly used in clinical populations with metabolic, orthopaedic, or neurologic conditions. The purpose of this pilot study is to investigate the feasibility, functional benefits, and systemic responses of an eccentric exercise program focused on the trunk and lower extremities in people with low back pain (LBP) and multiple sclerosis (MS). A six-week eccentric training program with three weekly sessions is performed by people with LBP and MS. The program consists of ten exercises addressing strength of the trunk and lower extremities. The study follows a four-group design (N = 12 per group) in two study centers (Israel and Germany): three groups perform the eccentric training program: A) a control group (healthy, asymptomatic); B) people with LBP; C) people with MS; group D (people with MS) receives standard care physiotherapy. Baseline measurements are conducted before the first training session and post-measurement takes place after the last session; both comprise blood sampling, self-reported questionnaires, and mobility, balance, and strength testing. The feasibility of the eccentric training program will be evaluated using quantitative and qualitative measures related to the study process, compliance and adherence, safety, and overall program assessment. For a preliminary assessment of potential intervention effects, surrogate parameters related to mobility, postural control, muscle strength, and systemic effects are assessed. The presented study will add knowledge regarding the safety, feasibility, and initial effects of eccentric training in people with orthopaedic and neurological conditions. The simple exercises, which are easily modifiable in complexity and intensity, are likely beneficial to other populations.
Thus, multiple applications and implementation pathways for the training program presented here are conceivable.
Nested complementation plays an important role in expressing counter-free, i.e. star-free and first-order definable, languages and their hierarchies. In addition, methods that compile phonological rules into finite-state networks use double-nested complementation or "double negation". This paper reviews how the double-nested complementation extends to a relatively new operation, generalized restriction (GR), coined by the author (Yli-Jyrä and Koskenniemi 2004). This operation encapsulates a double-nested complementation and the elimination of a concatenation marker, the diamond, whose finite occurrences align concatenations in the arguments of the operation. The paper demonstrates that the GR operation has interesting potential in expressing regular languages, various kinds of grammars, bimorphisms and relations. This motivates a further study of optimized implementation of the operator.
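The complementation building block behind such constructions can be sketched on a toy star-free language. The example below is mine, not from the paper: over the alphabet {a, b}, the language "no two adjacent b's" is the complement of Σ* bb Σ*, and for a complete DFA, complementation is just flipping accepting and non-accepting states.

```python
# Hand-built complete DFA for Sigma* bb Sigma* (strings containing "bb").
# State 2 is a sink reached once "bb" has been seen.
states = {0, 1, 2}
start = 0
accepting = {2}
delta = {
    (0, "a"): 0, (0, "b"): 1,
    (1, "a"): 0, (1, "b"): 2,
    (2, "a"): 2, (2, "b"): 2,
}

def accepts(word, acc):
    """Run the DFA and test membership against an accepting-state set."""
    q = start
    for ch in word:
        q = delta[(q, ch)]
    return q in acc

# Complementation: flip accepting and non-accepting states.
complement_accepting = states - accepting

assert accepts("abba", accepting)             # contains "bb"
assert accepts("abab", complement_accepting)  # no adjacent b's
```

Nesting this step (complementing a language built from complements) is what yields the double-nested complementation that GR encapsulates.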
Technological progress allows for producing ever more complex predictive models on the basis of increasingly big datasets. For the risk management of natural hazards, a multitude of models is needed as a basis for decision-making, e.g. in the evaluation of observational data, for the prediction of hazard scenarios, or for statistical estimates of expected damage. The question arises how modern modelling approaches such as machine learning or data-mining can be meaningfully deployed in this thematic field. In addition, with respect to data availability and accessibility, the trend is towards open data. The topic of this thesis is therefore to investigate the possibilities and limitations of machine learning and open geospatial data in the field of flood risk modelling in the broad sense. As this overarching topic is broad in scope, individual relevant aspects are identified and examined in detail.
A prominent data source in the flood context is satellite-based mapping of inundated areas, made openly available, for example, by the Copernicus service of the European Union. Great expectations are directed towards these products in the scientific literature, both for the acute support of relief forces during emergency response and for modelling via hydrodynamic models or for damage estimation. Therefore, a focus of this work was set on evaluating these flood masks. Based on the observation that the quality of these products is insufficient in forested and built-up areas, a procedure for subsequent improvement via machine learning was developed. This procedure is based on a classification algorithm that only requires training data from the particular class to be predicted, in this specific case data of flooded areas, but not of the negative class (dry areas). Its application to Hurricane Harvey in Houston shows the high potential of this method, which, however, depends on the quality of the initial flood mask.
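The one-class idea above (train only on positive samples, never on the dry class) can be sketched with a simple Gaussian-threshold classifier. This stands in for the more sophisticated algorithm used in the thesis; the features, numbers, and threshold below are illustrative assumptions.

```python
# One-class classification sketch: fit statistics of the positive class
# (flooded pixels) only, then flag new samples that lie close to it.
import math

def fit_one_class(samples):
    """Per-feature mean and standard deviation of positive-class samples."""
    n, d = len(samples), len(samples[0])
    means = [sum(s[j] for s in samples) / n for j in range(d)]
    stds = [
        math.sqrt(sum((s[j] - means[j]) ** 2 for s in samples) / n) or 1.0
        for j in range(d)
    ]
    return means, stds

def predict(x, means, stds, threshold=3.0):
    """Positive iff every feature lies within `threshold` std devs."""
    return all(abs(x[j] - means[j]) / stds[j] <= threshold
               for j in range(len(x)))

# Toy features, e.g. (SAR backscatter in dB, elevation in m) of flooded pixels
flooded = [(-18.0, 2.0), (-17.5, 1.8), (-18.4, 2.2), (-17.9, 2.1)]
model = fit_one_class(flooded)
print(predict((-18.1, 2.0), *model))   # near the flooded class -> True
print(predict((-5.0, 40.0), *model))   # dry hill slope -> False
```

The appeal of this setup for flood-mask refinement is that reliable "dry" training labels are hard to obtain, while the initial mask already supplies positive examples.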
Next, it is investigated how strongly the statistical risk predicted by a process-based model chain depends on implemented physical process details. This demonstrates what a risk study based on established models can deliver. Even for fluvial flooding, however, such model chains are already quite complex, and they are hardly available for compound or cascading events comprising torrential rainfall, flash floods, and other processes. In the fourth chapter of this thesis it is therefore tested whether machine learning based on comprehensive damage data can offer a more direct path towards damage modelling that avoids the explicit conception of such a model chain. For that purpose, a state-collected dataset of damaged buildings from the severe El Niño event of 2017 in Peru is used. In this context, the possibilities of data-mining for extracting process knowledge are explored as well. It can be shown that various openly available geodata sources contain useful information for flood hazard and damage modelling for complex events, e.g. satellite-based rainfall measurements, topographic and hydrographic information, mapped settlement areas, as well as indicators from spectral data. Further, insights into damaging processes are discovered, which are mainly in line with prior expectations. The maximum intensity of rainfall, for example, has a stronger effect in cities and steep canyons, while total rainfall was found to be more informative in low-lying river catchments and forested areas. Rural areas of Peru exhibited higher vulnerability in the presented study compared to urban areas. However, the general limitations of the methods and the dependence on specific datasets and algorithms also become obvious.
In the overarching discussion, the different methods – process-based modelling, predictive machine learning, and data-mining – are evaluated with respect to the overall research questions. In the case of hazard observation it seems that a focus on novel algorithms makes sense for future research. In the subtopic of hazard modelling, especially for river floods, the improvement of physical models and the integration of process-based and statistical procedures is suggested. For damage modelling the large and representative datasets necessary for the broad application of machine learning are still lacking. Therefore, the improvement of the data basis in the field of damage is currently regarded as more important than the selection of algorithms.
On 14 August 1980, the workers at the Lenin Shipyard in Gdańsk began an occupation strike, in the course of which the first independent trade union, Solidarność, was founded. One month later, on 17 September 1980, on the other side of the "Iron Curtain", the workers of the "AG Weser" shipyard in Bremen took to the streets to protest against the loss of their jobs. The present study shows, from a perspective "from below", how, since the 1970s, enterprises in two different political-economic systems responded to technological change and intensified competition on the world market, and it points out that the crises in East and West at the end of the 1970s were closely interconnected.
The light reactions of photosynthesis are carried out by a series of multiprotein complexes embedded in thylakoid membranes. Among them, photosystem I (PSI), acting as plastocyanin-ferredoxin oxidoreductase, catalyzes the final reaction. Together with light-harvesting antenna I, PSI forms a high-molecular-weight supercomplex of ~600 kDa, consisting of eighteen subunits and nearly two hundred co-factors. Assembly of the various components into a functional thylakoid membrane complex requires precise coordination, which is provided by the assembly machinery. Although a small number of proteins (PSI assembly factors) have been shown to play a role in the formation of PSI, the process as a whole, as well as the interplay of its members, remains largely unexplored.
In the present work, two approaches were used to find candidate PSI assembly factors. First, EnsembleNet was used to select proteins thought to be functionally related to known PSI assembly factors in Arabidopsis thaliana (approach I), and second, co-immunoprecipitation (Co-IP) of tagged PSI assembly factors in Nicotiana tabacum was performed (approach II).
Here, the novel PSI assembly factors designated CO-EXPRESSED WITH PSI ASSEMBLY 1 (CEPA1) and Ycf4-INTERACTING PROTEIN 1 (Y4IP1) were identified. A. thaliana null mutants for CEPA1 and Y4IP1 showed a growth phenotype and pale leaves compared with the wild type. Biophysical experiments using pulse amplitude modulation (PAM) revealed insufficient electron transport on the PSII acceptor side. Biochemical analyses revealed that both CEPA1 and Y4IP1 are specifically involved in PSI accumulation in A. thaliana at the post-translational level but are not essential. Consistent with their roles as factors in the assembly of a thylakoid membrane protein complex, the two proteins localize to thylakoid membranes. Remarkably, cepa1 y4ip1 double mutants exhibited lethal phenotypes in early developmental stages under photoautotrophic growth. Finally, Co-IP and native gel experiments supported a possible role for CEPA1 and Y4IP1 in mediating PSI assembly in conjunction with other PSI assembly factors (e.g., PPD1 and PSA3 with CEPA1, and Ycf4 with Y4IP1). The fact that CEPA1 and Y4IP1 are found exclusively in green algae and higher plants suggests eukaryote-specific functions. Although the specific mechanisms need further investigation, CEPA1 and Y4IP1 are two novel assembly factors that contribute to PSI formation.
Background: Assessing short-term growth in humans is still fraught with difficulties. Especially when looking for small variations and increments, such as mini growth spurts, high-precision instruments or frequent measurements are necessary. Daily measurements, however, require a lot of effort, both for anthropologists and for the subjects. Therefore, new sophisticated approaches are needed that reduce fluctuations and reveal underlying patterns.
Objectives: Changepoints are abrupt variations in the properties of time series data. In the context of growth, such a variation could be a change in mean height. By adjusting the variance and using different growth models, we assessed the ability of changepoint analysis to analyse short-term growth and detect mini growth spurts.
Sample and Methods: We performed Bayesian changepoint analysis on simulated growth data using the bcp package in R. Simulated growth patterns included stasis, linear growth, catch-up growth, and mini growth spurts. Specificity and a normalised variant of the Matthews correlation coefficient (MCC) were used to assess the algorithm’s performance. Welch’s t-test was used to compare differences of the mean.
Results: First results show that changepoint analysis can detect mini growth spurts. However, the ability to detect mini growth spurts is highly dependent on measurement error. Data preparation, such as ranking and rotating time series data, showed negligible improvements. Missing data was an issue and may affect the prediction quality of the classification metrics.
Conclusion: Changepoint analysis is a promising tool to analyse short-term growth. However, further optimisation and analysis of real growth data is needed to make broader generalisations.
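The core idea of changepoint detection in this setting can be illustrated on toy data. The study above used Bayesian changepoint analysis (the bcp package in R); the sketch below instead finds the single most likely mean-shift point by minimising within-segment sums of squares, which conveys the same idea on an invented "mini growth spurt" series.

```python
# Single-changepoint detection by minimising the within-segment
# sum of squared deviations on either side of a candidate split.

def best_changepoint(xs):
    """Return the split index minimising within-segment sum of squares."""
    def sse(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)
    return min(range(1, len(xs)), key=lambda k: sse(xs[:k]) + sse(xs[k:]))

# Simulated daily height increments (mm): stasis, then a mini growth spurt
increments = [0.1, 0.0, 0.1, 0.1, 0.0, 0.9, 1.1, 1.0, 0.8, 1.0]
print(best_changepoint(increments))  # -> 5 (spurt starts at index 5)
```

Measurement noise added to such a series quickly blurs the split, which mirrors the study's finding that detection quality depends heavily on measurement error.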
Environmental pollution by microplastics has become a severe problem in terrestrial and aquatic ecosystems and, according to current prognoses, the problem will further increase in the future. Therefore, assessing and quantifying the risk for the biota is crucial. Standardized short-term toxicological procedures as well as methods quantifying potential toxic effects over the whole life span of an animal are required. We studied the effect of the microplastic polystyrene on the survival and reproduction of a common freshwater invertebrate, the rotifer Brachionus calyciflorus, at different timescales. We used pristine polystyrene spheres of 1, 3, and 6 µm diameter and fed them to the animals together with food algae in different ratios ranging from 0 to 50% nonfood particles. As a particle control, we used silica to distinguish a pure particle effect from a plastic-specific effect. After 24 h, no toxic effect was found, neither with polystyrene nor with silica. After 96 h, a toxic effect was detectable for both particle types. The size of the particles played a negligible role. Studying the long-term effect by using life table experiments, we found reduced reproduction when the animals were fed with 3 µm spheres together with similar-sized food algae. We conclude that the fitness reduction is mainly driven by the dilution of food by the nonfood particles rather than by a direct toxic effect.
Previous studies have not considered the potential influence of maturity status on the relationship between mental imagery and change of direction (CoD) speed in youth soccer. Accordingly, this cross-sectional study examined the association between mental imagery and CoD performance in young elite soccer players of different maturity status. Forty young male soccer players, aged 10-17 years, were assigned to two groups according to their predicted age at peak height velocity (PHV) (Pre-PHV; n = 20 and Post-PHV; n = 20). Participants were evaluated on soccer-specific tests of CoD with (CoDBall-15m) and without (CoD-15m) the ball. Participants completed the movement imagery questionnaire (MIQ) with its three-dimensional structure: internal visual imagery (IVI), external visual imagery (EVI), and kinesthetic imagery (KI). The Post-PHV players achieved significantly better results than Pre-PHV players in EVI (ES = 1.58, large; p < 0.001), CoD-15m (ES = 2.09, very large; p < 0.001) and CoDBall-15m (ES = 1.60, large; p < 0.001). Correlations differed significantly between maturity groups: for the pre-PHV group, a very large negative correlation was observed between CoDBall-15m and KI (r = –0.73, p = 0.001). For the post-PHV group, large negative correlations were observed between CoD-15m and IVI (r = –0.55, p = 0.011), EVI (r = –0.62, p = 0.003), and KI (r = –0.52, p = 0.020). A large negative correlation of CoDBall-15m with EVI (r = –0.55, p = 0.012) and a very large correlation with KI (r = –0.79, p = 0.001) were also observed. This study provides evidence for the theoretical and practical use of imagery with CoD tasks. We recommend that sport psychology specialists, coaches, and athletes integrate imagery for CoD tasks in pre-pubertal soccer players to further improve CoD-related performance.
This study sought to analyze the relationship of in-season training workload with changes in aerobic power (VO2max), maximum and resting heart rate (HRmax and HRrest), and linear sprint medium (LSM) and short test (LSS) performance in soccer players younger than 16 years (under-16 soccer players). We additionally aimed to explain changes in fitness levels during the in-season through regression models, considering accumulated load, baseline levels, and peak height velocity (PHV) as predictors. Twenty-three male sub-elite soccer players aged 15.5 ± 0.2 years (PHV: 13.6 ± 0.4 years; body height: 172.7 ± 4.2 cm; body mass: 61.3 ± 5.6 kg; body fat: 13.7% ± 3.9%; VO2max: 48.4 ± 2.6 mL⋅kg–1⋅min–1) were tested three times across the season (i.e., early-season (EaS), mid-season (MiS), and end-season (EnS)) for VO2max, HRmax, LSM, and LSS. Aerobic and speed variables gradually improved over the season and had a strong association with PHV. Moreover, HRmax demonstrated improvements from EaS to EnS; however, this was more evident in the intermediate period (from EaS to MiS) and had a strong association with VO2max. Regression analysis showed significant predictions for VO2max [F(2, 20) = 8.18, p ≤ 0.001] with an R2 of 0.45. In conclusion, meaningful variation of youth players' fitness levels can be observed across the season, and such changes can be partially explained by the load imposed.
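The regression approach above (predicting a fitness outcome from several predictors) amounts to ordinary least squares. The sketch below fits a two-predictor model via the normal equations; all numbers are invented, and the study's actual model and coefficients are not reproduced.

```python
# OLS sketch: y = b0 + b1*x1 + b2*x2, solved via the normal equations
# X'X b = X'y with a small hand-rolled Gauss-Jordan solver.

def solve(a, b):
    """Gauss-Jordan elimination with partial pivoting for a @ x = b."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col and m[col][col]:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def ols(xs, ys):
    """Fit intercept and slopes by the normal equations."""
    X = [[1.0] + list(x) for x in xs]
    d = len(X[0])
    xtx = [[sum(r[i] * r[j] for r in X) for j in range(d)] for i in range(d)]
    xty = [sum(r[i] * y for r, y in zip(X, ys)) for i in range(d)]
    return solve(xtx, xty)

# Invented data: (baseline VO2max, accumulated load) -> end-of-season VO2max,
# generated from known coefficients so the fit can be checked exactly.
xs = [(46.0, 900), (48.0, 1100), (50.0, 1000), (47.0, 1200)]
ys = [1.0 + 0.9 * x1 + 0.002 * x2 for x1, x2 in xs]
b0, b1, b2 = ols(xs, ys)
print([round(v, 3) for v in (b0, b1, b2)])  # -> [1.0, 0.9, 0.002]
```

With real, noisy data the recovered coefficients and an R2 statistic would quantify how much of the fitness change the accumulated load explains.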
This article describes an HMM-based word-alignment method that can selectively enforce a contiguity constraint. This method has a direct application in the extraction of a bilingual terminological lexicon from a parallel corpus, but can also be used as a preliminary step for the extraction of phrase pairs in a Phrase-Based Statistical Machine Translation system. Contiguous source words composing terms are aligned to contiguous target language words. The HMM is transformed into a Weighted Finite State Transducer (WFST) and contiguity constraints are enforced by specific multi-tape WFSTs. The proposed method is especially suited when basic linguistic resources (a morphological analyzer, part-of-speech tagger, and term extractor) are available for the source language only.
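The underlying HMM alignment idea can be sketched as follows: each source word is emitted by a hidden target position, transitions penalise large jumps (which is what makes contiguous alignments likely), and Viterbi decoding recovers the best alignment. The toy lexicon and penalty below are invented; the article's WFST-based contiguity mechanism is not reproduced here.

```python
# Viterbi decoding for a toy HMM word aligner: hidden states are target
# positions, transition scores penalise deviation from a +1 jump.
import math

def viterbi_align(src, tgt, t_prob, jump_penalty=0.5):
    """Best target position for each source word under the toy HMM."""
    n = len(tgt)
    scores = [math.log(t_prob.get((src[0], tgt[j]), 1e-6)) for j in range(n)]
    back = []
    for w in src[1:]:
        prev = scores
        back.append([])
        scores = []
        for j in range(n):
            best = max(range(n),
                       key=lambda i: prev[i] - jump_penalty * abs(j - i - 1))
            back[-1].append(best)
            scores.append(prev[best] - jump_penalty * abs(j - best - 1)
                          + math.log(t_prob.get((w, tgt[j]), 1e-6)))
    j = max(range(n), key=lambda k: scores[k])
    path = [j]
    for bp in reversed(back):
        j = bp[j]
        path.append(j)
    return list(reversed(path))

# Invented French-English translation probabilities
lex = {("maison", "house"): 0.9, ("la", "the"): 0.9, ("bleue", "blue"): 0.9}
print(viterbi_align(["la", "maison", "bleue"], ["the", "blue", "house"], lex))
# -> [0, 2, 1]
```

Turning this lattice into a WFST and intersecting it with constraint transducers is, roughly, how the article enforces contiguity on the source side.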
A degree course in IT and business administration solely for women (FIW) has been offered since 2009 at the HTW Berlin – University of Applied Sciences. This contribution discusses students' motivations for enrolling in such a women-only degree course and gives details of our experience over recent years. In particular, the approach to attracting new female students is described and the composition of the intake is discussed. It is shown that the women-only setting, together with other factors, can attract a new clientele for computer science.