Refine
Has Fulltext
- yes (426)
Year of publication
- 2015 (426)
Document Type
- Postprint (188)
- Article (131)
- Doctoral Thesis (72)
- Monograph/Edited Volume (12)
- Preprint (12)
- Conference Proceeding (4)
- Master's Thesis (2)
- Part of Periodical (2)
- Bachelor Thesis (1)
- Habilitation Thesis (1)
Language
- English (426)
Keywords
- Computer Science Education (5)
- climate change (5)
- interference (5)
- German (4)
- climate-change (4)
- embodied cognition (4)
- evolution (4)
- model (4)
- variability (4)
- Competence Measurement (3)
Institute
- Institut für Physik und Astronomie (108)
- Mathematisch-Naturwissenschaftliche Fakultät (70)
- Institut für Informatik und Computational Science (36)
- Institut für Chemie (35)
- Humanwissenschaftliche Fakultät (34)
- Institut für Geowissenschaften (27)
- Institut für Biochemie und Biologie (18)
- Institut für Mathematik (17)
- Hasso-Plattner-Institut für Digital Engineering gGmbH (14)
- Department Linguistik (10)
- Department Psychologie (9)
- Strukturbereich Kognitionswissenschaften (9)
- Sonderforschungsbereich 632 - Informationsstruktur (6)
- Institut für Ernährungswissenschaft (5)
- Wirtschafts- und Sozialwissenschaftliche Fakultät (5)
- Institut für Romanistik (4)
- Institut für Umweltwissenschaften und Geographie (4)
- Vereinigung für Jüdische Studien e. V. (4)
- Department Sport- und Gesundheitswissenschaften (3)
- Philosophische Fakultät (3)
- Potsdam Research Institute for Multilingualism (PRIM) (2)
- Referat für Presse- und Öffentlichkeitsarbeit (2)
- Sozialwissenschaften (2)
- Extern (1)
- Institut für Anglistik und Amerikanistik (1)
- Institut für Germanistik (1)
- Institut für Slavistik (1)
- Wirtschaftswissenschaften (1)
Extract: Topics in psycholinguistics and the neurocognition of language rarely attract the attention of journalists or the general public. One topic that has done so, however, is the potential benefits of bilingualism for general cognitive functioning and development, and as a precaution against cognitive decline in old age. Sensational claims have been made in the public domain, mostly by journalists and politicians. Recently (September 4, 2014) The Guardian reported that “learning a foreign language can increase the size of your brain”, and Michael Gove, the UK's previous Education Secretary, noted in an interview with The Guardian (September 30, 2011) that “learning languages makes you smarter”. The present issue of BLC addresses these topics by providing a state-of-the-art overview of theoretical and experimental research on the role of bilingualism for cognition in children and adults.
Towards the assimilation of tree-ring-width records using ensemble Kalman filtering techniques
(2015)
This paper investigates the applicability of the Vaganov–Shashkin–Lite (VSL) forward model for tree-ring-width (TRW) chronologies as an observation operator within a proxy data assimilation (DA) setting. Based on the principle of limiting factors, VSL combines temperature and moisture time series in a nonlinear fashion to obtain simulated TRW chronologies. When used as an observation operator, this modelling approach implies three compounding, challenging features: (1) time averaging, (2) “switching recording” of two variables and (3) bounded response windows leading to “thresholded response”. We generate pseudo-TRW observations from a chaotic two-scale dynamical system, used as a cartoon of the atmosphere–land system, and attempt to assimilate them via ensemble Kalman filtering techniques. Results within our simplified setting reveal that VSL’s nonlinearities may lead to a considerable loss of assimilation skill compared to the use of a time-averaged (TA) linear observation operator. In order to understand this undesired effect, we embed VSL’s formulation into the framework of fuzzy logic (FL) theory, which thereby exposes multiple representations of the principle of limiting factors. DA experiments employing three alternative growth rate functions disclose a strong link between the lack of smoothness of the growth rate function and the loss of optimality in the estimate of the TA state. Accordingly, VSL’s performance as an observation operator can be enhanced by resorting to smoother FL representations of the principle of limiting factors. This finding fosters new interpretations of tree-ring-growth limitation processes.
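The two ingredients of the setup above, a limiting-factors observation operator and a stochastic ensemble Kalman filter analysis step, can be illustrated with a toy sketch. This is not the actual VSL model or the paper's two-scale system: the growth thresholds, season length and error levels below are illustrative stand-ins, and the operator simply takes the minimum of two clipped linear responses and time-averages it.

```python
import numpy as np

rng = np.random.default_rng(0)

def vsl_growth(temp, moist, t_lo=0.0, t_hi=20.0, m_lo=0.0, m_hi=1.0):
    """Toy limiting-factors observation operator: ring-width growth is the
    pointwise minimum of two clipped (bounded-window) linear responses to
    temperature and moisture, time-averaged over the season. Thresholds are
    illustrative, not VSL's calibrated values."""
    gT = np.clip((temp - t_lo) / (t_hi - t_lo), 0.0, 1.0)
    gM = np.clip((moist - m_lo) / (m_hi - m_lo), 0.0, 1.0)
    return np.minimum(gT, gM).mean()

def enkf_update(ensemble, obs, obs_op, obs_err):
    """Stochastic (perturbed-observation) EnKF analysis step with a possibly
    nonlinear observation operator applied member by member."""
    n = ensemble.shape[0]
    hx = np.array([obs_op(x) for x in ensemble])          # H(x_i), shape (n, m)
    X = ensemble - ensemble.mean(axis=0)                   # state anomalies
    Y = hx - hx.mean(axis=0)                               # obs-space anomalies
    Pxy = X.T @ Y / (n - 1)                                # cross covariance
    Pyy = Y.T @ Y / (n - 1) + obs_err**2 * np.eye(hx.shape[1])
    K = Pxy @ np.linalg.inv(Pyy)                           # Kalman gain
    perturbed = obs + obs_err * rng.standard_normal(hx.shape)
    return ensemble + (perturbed - hx) @ K.T
```

A state vector here concatenates seasonal temperature and moisture series; the non-smooth `min` in the operator is exactly the feature the abstract identifies as degrading assimilation skill relative to a linear TA operator.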
In the last decade, the number and dimensions of catastrophic flooding events in the Niger River Basin (NRB) have markedly increased. Despite the devastating impact of the floods on the population and the mainly agriculturally based economy of the riverine nations, awareness of the hazards in policy and science is still low. The urgency of this topic and the existing research deficits are the motivation for the present dissertation.
The thesis is an initial detailed assessment of the increasing flood risk in the NRB. The research strategy is based on four questions regarding (1) features of the change in flood risk, (2) reasons for the change in the flood regime, (3) expected changes of the flood regime given climate and land use changes, and (4) recommendations from previous analysis for reducing the flood risk in the NRB.
The question examining the features of change in the flood regime is answered by means of statistical analysis. Trend, correlation, changepoint, and variance analyses show that, in addition to the factors exposure and vulnerability, the hazard itself has also increased significantly in the NRB, in accordance with the decadal climate pattern of West Africa. The northern arid and semi-arid parts of the NRB are those most affected by the changes.
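The trend analyses mentioned above can be illustrated with a minimal Mann–Kendall test, a common nonparametric choice for detecting monotonic trends in flood-peak series; the dissertation's exact test suite is not specified here, and this sketch omits tie and autocorrelation corrections.

```python
import math

def mann_kendall(x):
    """Mann–Kendall trend test (no tie correction): returns the S statistic
    and an approximate two-sided p-value via the normal approximation with
    continuity correction."""
    n = len(x)
    s = sum(
        (x[j] > x[i]) - (x[j] < x[i])          # sign of each pairwise slope
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var = n * (n - 1) * (2 * n + 5) / 18        # variance of S under H0
    z = 0.0 if s == 0 else (s - math.copysign(1, s)) / math.sqrt(var)
    p = math.erfc(abs(z) / math.sqrt(2))        # two-sided p-value
    return s, p
```

A strictly increasing series yields the maximal S = n(n-1)/2 and a vanishing p-value, i.e. a significant upward trend.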
As potential reasons for the increase in flood magnitudes, climate and land use changes are attributed by means of a hypothesis-testing framework. Two different approaches, based on either data analysis or simulation, lead to similar results, showing that the influence of climatic changes is generally larger compared to that of land use changes. Only in the dry areas of the NRB is the influence of land use changes comparable to that of climatic alterations.
Future changes of the flood regime are evaluated using modelling results. First, ensembles of statistically and dynamically downscaled climate models based on different emission scenarios are analyzed. The models agree on a distinct increase in temperature; the precipitation signal, however, is not coherent. The climate scenarios are used to drive an eco-hydrological model. The influence of climatic changes on the flood regime is uncertain due to the unclear precipitation signal. Still, in general, higher flood peaks are expected. In a next step, effects of land use changes are integrated into the model. Different scenarios show that regreening might help to reduce flood peaks, whereas an expansion of agriculture might enhance the flood peaks in the NRB. As with the analysis of observed changes in the flood regime, the impacts of climate and land use changes in the future scenarios are also most severe in the dry areas of the NRB.
In order to answer the final research question, the results of the above analyses are integrated into a range of recommendations for science and policy on how to reduce flood risk in the NRB. The main recommendations include a stronger consideration of the enormous natural climate variability in the NRB, a focus on so-called “no-regret” adaptation strategies which account for high uncertainty, and a stronger consideration of regional differences. Regarding the prevention and mitigation of catastrophic flooding, the most vulnerable and sensitive areas in the basin, the arid and semi-arid Sahelian and Sudano-Sahelian regions, should be prioritized. Finally, an active, science-based and science-guided flood policy is recommended. The enormous population growth in the NRB, in connection with the expected deterioration of environmental and climatic conditions, is likely to enhance the region's vulnerability to flooding. A smart and sustainable flood policy can help mitigate these negative impacts of flooding on the development of riverine societies in West Africa.
Central Asia is located at the confluence of large-scale atmospheric circulation systems. It is thus likely to be highly susceptible to changes in the dynamics of those systems; however, little is still known about the regional paleoclimate history. Here we present carbon and hydrogen isotopic compositions of n-alkanoic acids from a late Holocene sediment core from Lake Karakuli (eastern Pamir, Xinjiang Province, China). Instrumental evidence and isotope-enabled climate model experiments with the Laboratoire de Météorologie Dynamique Zoom model version 4 (LMDZ4) demonstrate that δD values of precipitation in the region are influenced by both temperature and precipitation amount. We find that these parameters are inversely correlated on an annual scale, i.e., the climate has varied between relatively cool and wet and warmer and drier conditions over the last 50 years. Since the isotopic signals of these changes are in the same direction and therefore additive, isotopes in precipitation are sensitive recorders of climatic changes in the region. Additionally, we infer that plants use year-round precipitation (including snowmelt), and thus leaf wax δD values must also respond to shifts in the proportion of moisture derived from westerly storms during late winter and early spring. Downcore results give evidence for a gradual shift to cooler and wetter climates between 3.5 and 2.5 cal kyr BP, interrupted by a warm and dry episode between 3.0 and 2.7 kyr BP. Further cool and wet episodes occur between 1.9 and 1.5 and between 0.6 and 0.1 kyr BP, the latter coeval with the Little Ice Age. Warm and dry episodes from 2.5 to 1.9 and 1.5 to 0.6 kyr BP coincide with the Roman Warm Period and Medieval Climate Anomaly, respectively. Finally, we find a drying trend in recent decades.
Regional comparisons lead us to infer that the strength and position of the westerlies, and wider northern hemispheric climate dynamics, control climatic shifts in arid Central Asia, leading to complex local responses. Our new archive from Lake Karakuli provides a detailed record of the local signatures of these climate transitions in the eastern Pamir.
This paper reopens the discussion on focus marking in Akan (Kwa, Niger-Congo) by examining the semantics of the so-called focus marker in the language. It is shown that the so-called focus marker expresses exhaustivity when it occurs in a sentence with narrow focus. The study employs four standard tests for exhaustivity proposed in the literature to examine the semantics of Akan focus constructions (Szabolcsi 1981, 1994; É. Kiss 1998; Hartmann and Zimmermann 2007). It is shown that although a focused entity with the so-called focus marker nà is interpreted to mean ‘only X and nothing/nobody else,’ this meaning appears to be pragmatic.
Professional and amateur astronomers around the world contributed to a four-month-long campaign in 2013, mainly in spectroscopy but also in photometry, interferometry and polarimetry, to observe the first three Wolf-Rayet stars discovered: WR 134 (WN6b), WR 135 (WC8) and WR 137 (WC7pd+O9). Each of these stars is interesting in its own way, showing a variety of stellar wind structures. The spectroscopic data from this campaign were reduced and analyzed for WR 134 in order to better understand its behavior and long-term periodicity in the context of corotating interaction regions (CIRs) in the wind. We present the results of these spectroscopic data, which include the confirmation of the CIR variability and a time coherency of ∼40 days (half-life of ∼20 days).
We define weak boundary values of solutions to those nonlinear differential equations which appear as Euler-Lagrange equations of variational problems. As a result we initiate the theory of Lagrangian boundary value problems in spaces of appropriate smoothness. We also analyse whether the concept of mapping degree, of current importance, applies to the study of Lagrangian problems.
Climate impacts on transocean dispersal and habitat in gray whales from the Pleistocene to 2100
(2015)
Arctic animals face dramatic habitat alteration due to ongoing climate change. Understanding how such species have responded to past glacial cycles can help us forecast their response to today's changing climate. Gray whales are among those marine species likely to be strongly affected by Arctic climate change, but a thorough analysis of past climate impacts on this species has been complicated by lack of information about an extinct population in the Atlantic. While little is known about the history of Atlantic gray whales or their relationship to the extant Pacific population, the extirpation of the Atlantic population during historical times has been attributed to whaling. We used a combination of ancient and modern DNA, radiocarbon dating and predictive habitat modelling to better understand the distribution of gray whales during the Pleistocene and Holocene. Our results reveal that dispersal between the Pacific and Atlantic was climate dependent and occurred both during the Pleistocene prior to the last glacial period and the early Holocene immediately following the opening of the Bering Strait. Genetic diversity in the Atlantic declined over an extended interval that predates the period of intensive commercial whaling, indicating this decline may have been precipitated by Holocene climate or other ecological causes. These first genetic data for Atlantic gray whales, particularly when combined with predictive habitat models for the year 2100, suggest that two recent sightings of gray whales in the Atlantic may represent the beginning of the expansion of this species' habitat beyond its currently realized range.
With accelerating climate cooling in the late Cenozoic, glacial and periglacial erosion became more widespread on the surface of the Earth. The resultant shift in erosion patterns significantly changed the large-scale morphology of many mountain ranges worldwide. Whereas the glacial fingerprint is easily distinguished by its characteristic fjords and U-shaped valleys, the periglacial fingerprint is more subtle but potentially prevails in some mid- to high-latitude landscapes. Previous models have advocated a frost-driven control on debris production at steep headwalls and glacial valley sides. Here we investigate the important role that periglacial processes also play in less steep parts of mountain landscapes. Understanding the influences of frost-driven processes in low-relief areas requires a focus on the consequences of an accreting soil mantle, which characterises such surfaces. We present a new model that quantifies two key physical processes: frost cracking and frost creep, as a function of both temperature and sediment thickness. Our results yield new insights into how climate and sediment transport properties combine to scale the intensity of periglacial processes. The thickness of the soil mantle strongly modulates the relation between climate and the intensity of mechanical weathering and sediment flux. Our results also point to an offset between the conditions that promote frost cracking and those that promote frost creep, indicating that a stable climate can provide optimal conditions for only one of those processes at a time. Finally, quantifying these relations also opens up the possibility of including periglacial processes in large-scale, long-term landscape evolution models, as demonstrated in a companion paper.
This study presents pioneering data on how adult early bilinguals (heritage speakers) and late bilingual speakers of Turkish and German process grammatical evidentiality in a visual world setting in comparison to monolingual speakers of Turkish. Turkish marks evidentiality, the linguistic reference to information source, through inflectional affixes signaling either direct (-DI) or indirect (-mIş) evidentiality. We conducted an eyetracking-during-listening experiment where participants were given access to visual 'evidence' supporting the use of either a direct or indirect evidential form. The behavioral results indicate that the monolingual Turkish speakers comprehended direct and indirect evidential scenarios equally well. In contrast, both late and early bilinguals were less accurate and slower to respond to direct than to indirect evidentials. The behavioral results were also reflected in the proportions of looks data. That is, both late and early bilinguals fixated less frequently on the target picture in the direct than in the indirect evidential condition while the monolinguals showed no difference between these conditions. Taken together, our results indicate reduced sensitivity to the semantic and pragmatic function of direct evidential forms in both late and early bilingual speakers, suggesting a simplification of the Turkish evidentiality system in Turkish heritage grammars. We discuss our findings with regard to theories of incomplete acquisition and first language attrition.
I review our current understanding of the interaction between a Wolf-Rayet star's fast wind and the surrounding medium, and discuss to what extent the predictions of numerical simulations coincide with multiwavelength observations of Wolf-Rayet nebulae. Through a series of examples, I illustrate how changing the input physics affects the results of the numerical simulations. Finally, I discuss how numerical simulations together with multiwavelength observations of these objects allow us to unpick the previous mass-loss history of massive stars.
The Central Pontides is an accretionary-type orogenic area within the Alpine-Himalayan orogenic belt characterized by pre-collisional tectonic continental growth. The region comprises Mesozoic subduction-accretionary complexes and an accreted intra-oceanic arc that are sandwiched between the Laurasian active continental margin and the Gondwana-derived Kırşehir Block. The subduction-accretion complexes mainly consist of an Albian-Turonian accretionary wedge representing the Laurasian active continental margin. To the north, the wedge consists of slate/phyllite and metasandstone intercalation with recrystallized limestone, Na-amphibole-bearing metabasite (PT= 7–12 kbar and 400 ± 70 °C) and tectonic slices of serpentinite, representing the accreted distal part of a large Lower Cretaceous submarine turbidite fan deposited on the Laurasian active continental margin that was subsequently accreted and metamorphosed. Raman spectra of carbonaceous material (RSCM) of the metapelitic rocks revealed that the metaflysch sequence consists of metamorphic packets with distinct peak metamorphic temperatures. The majority of the metapelites are low-temperature (ca. 330 °C) slates characterized by a lack of differentiation of the graphite (G) and D2 defect bands. They possibly represent offscraped distal turbidites along the toe of the Albian accretionary wedge. The rest are phyllites that are characterized by a slightly pronounced G band with the D2 defect band occurring on its shoulder. Peak metamorphic temperatures of these phyllites are constrained to 370-385 °C. The phyllites are associated with a strip of incipient blueschist facies metabasites which are found as slivers within the offscraped distal turbidites. They possibly represent underplated continental metasediments together with oceanic crustal basalt along the basal décollement.
Tectonic emplacement of the underplated rocks into the offscraped distal turbidites was possibly achieved by out-of-sequence thrusting causing tectonic thickening and uplift of the wedge. 40Ar/39Ar phengite ages from the phyllites are ca. 100 Ma, indicating Albian subduction and regional HP metamorphism.
The accreted continental metasediments are underlain by HP/LT metamorphic rocks of oceanic origin along an extensional shear zone. The oceanic metamorphic sequence mainly comprises tectonically thickened deep-seated eclogite to blueschist facies metabasites and micaschists. In the studied area, metabasites are epidote-blueschists, locally with garnet (PT= 17 ± 1 kbar and 500 ± 40 °C). Lawsonite-blueschists are exposed as blocks along the extensional shear zone (PT= 14 ± 2 kbar and 370–440 °C). They are possibly associated with the low shear stress regime of the initial stage of convergence. Close to the shear zone, the footwall micaschists consist of quartz, phengite, paragonite, chlorite and rutile, with syn-kinematic albite porphyroblasts formed by pervasive shearing during exhumation. These micaschists are tourmaline-bearing, and their retrograde nature suggests a high fluid flux along shear zones. Peak metamorphic mineral assemblages are partly preserved in the chloritoid-micaschist farther away from the shear zone, representing the zero strain domains during exhumation. Three peak metamorphic assemblages are identified and their PT conditions are constrained by pseudosections produced by Theriak-Domino and by Raman spectra of carbonaceous material: 1) garnet-chloritoid-glaucophane with lawsonite pseudomorphs (P= 17.5 ± 1 kbar, T: 390-450 °C), 2) chloritoid with glaucophane pseudomorphs (P= 16-18 kbar, T: 475 ± 40 °C) and 3) relatively high-Mg chloritoid (17%) with jadeite pseudomorphs (P= 22-25 kbar; T: 440 ± 30 °C), in addition to phengite, paragonite, quartz, chlorite, rutile and apatite. The last mineral assemblage is interpreted as the transformation of the chloritoid + glaucophane assemblage to a chloritoid + jadeite paragenesis with increasing pressure. The absence of tourmaline suggests that the chloritoid-micaschist did not interact with B-rich fluids during zero strain exhumation.
The 40Ar/39Ar phengite age of a pervasively sheared footwall micaschist is constrained to 100.6 ± 1.3 Ma and that of a chloritoid-micaschist to 91.8 ± 1.8 Ma, suggesting exhumation during ongoing subduction with a southward younging of the basal accretion and the regional metamorphism. To the south, the accretionary wedge consists of blueschist and greenschist facies metabasite, marble and volcanogenic metasediment intercalation. 40Ar/39Ar phengite dating reveals that this part of the wedge is of Middle Jurassic age, partly overprinted during the Albian. Emplacement of the Middle Jurassic subduction-accretion complexes is possibly associated with the obliquity of the Albian convergence.
Peak metamorphic assemblages and PT estimates of the deep-seated oceanic metamorphic sequence suggest tectonic stacking within the wedge at different depths of burial. Coupling and exhumation of the distinct metamorphic slices are controlled by decompression of the wedge, possibly along a retreating slab. Structurally, decompression of the wedge is evident from an extensional shear zone and the footwall micaschists with syn-kinematic albite porphyroblasts. Post-kinematic garnets with increasing grossular content and pseudomorphing minerals within the chloritoid-micaschists also support a decompression model without additional heating.
Thickening of subduction-accretionary complexes is attributed to i) a significant amount of clastic sediment supply from the overriding continental domain and ii) deep-level basal underplating by propagation of the décollement along a retreating slab. Underplating by basal décollement propagation and the subsequent exhumation of the deep-seated subduction-accretion complexes are connected and controlled by slab rollback, which creates the necessary space for progressive basal accretion along the plate interface and extension of the wedge above for exhumation of the tectonically thickened metamorphic sequences. This might be the most common mechanism of tectonic thickening and subsequent exhumation of deep-seated HP/LT subduction-accretion complexes.
To the south, the Albian-Turonian accretionary wedge structurally overlies a low-grade volcanic arc sequence, consisting of low-grade metavolcanic rocks and an overlying metasedimentary succession, exposed north of the İzmir-Ankara-Erzincan suture (İAES), which separates Laurasia from Gondwana-derived terranes. The metavolcanic rocks mainly consist of basaltic andesite/andesite and mafic cognate xenolith-bearing rhyolite with their pyroclastic equivalents, which are interbedded with recrystallized pelagic limestone and chert. The metavolcanic rocks are stratigraphically overlain by recrystallized micritic limestone with rare volcanogenic metaclastic rocks. Two groups can be identified based on trace and rare earth element characteristics. The first group consists of basaltic andesite/andesite (BA1) and rhyolite with abundant cognate gabbroic xenoliths. It is characterized by relative enrichment of LREE with respect to HREE. The rocks are enriched in fluid-mobile LILE, and strongly depleted in Ti and P, reflecting fractionation of Fe-Ti oxides and apatite, which are found in the mafic cognate xenoliths. Abundant cognate gabbroic xenoliths and identical trace and rare earth element compositions suggest that the rhyolites and basaltic andesites/andesites (BA1) are cogenetic and that the felsic rocks were derived from a common mafic parental magma by fractional crystallization and accumulation processes. The second group consists only of basaltic andesites (BA2) with a flat REE pattern resembling island arc tholeiites. Although enriched in LILE, this group is not depleted in Ti or P.
Geochemistry of the metavolcanic rocks indicates supra-subduction volcanism, evidenced by depletion of HFSE and enrichment of LILE. The arc sequence is sandwiched between an Albian-Turonian subduction-accretionary complex representing the Laurasian active margin and an ophiolitic mélange. The absence of continent-derived detritus in the arc sequence and its tectonic setting in a wide Cretaceous accretionary complex suggest that the Kösdağ Arc was intra-oceanic. This is in accordance with the island arc tholeiite REE pattern of the basaltic andesites (BA2).
Zircons from two metarhyolite samples give Late Cretaceous (93.8 ± 1.9 and 94.4 ± 1.9 Ma) U/Pb ages. Low-grade regional metamorphism of the intra-oceanic arc sequence is constrained to 69.9 ± 0.4 Ma by 40Ar/39Ar dating on metamorphic muscovite from a metarhyolite, indicating that the arc sequence became part of a wide Tethyan Cretaceous accretionary complex by the latest Cretaceous. The youngest 40Ar/39Ar phengite age from the overlying subduction-accretion complexes is 92 Ma, confirming the southward younging of an accretionary-type orogenic belt. Hence, the arc sequence represents an intra-oceanic paleo-arc that formed above the sinking Tethyan slab and finally accreted to the Laurasian active continental margin. The abrupt non-collisional termination of arc volcanism was possibly associated with southward migration of the arc volcanism, similar to the Izu-Bonin-Mariana arc system.
The intra-oceanic Kösdağ Arc is coeval with the obducted supra-subduction ophiolites in NW Turkey, suggesting that it represents part of the presumed but missing incipient intra-oceanic arc associated with the generation of the regional supra-subduction ophiolites. Remnants of a Late Cretaceous intra-oceanic paleo-arc and supra-subduction ophiolites can be traced eastward within the Alpine-Himalayan orogenic belt. This reveals that Late Cretaceous intra-oceanic subduction occurred as a connected event above the sinking Tethyan slab. It resulted in arc accretion to the Laurasian active margin and supra-subduction ophiolite obduction onto Gondwana-derived terranes.
Nowadays, business processes are increasingly supported by IT services that produce massive amounts of event data during process execution. Aiming at a better process understanding and improvement, this event data can be used to analyze processes using process mining techniques. Process models can be automatically discovered and the execution can be checked for conformance to specified behavior. Moreover, existing process models can be enhanced and annotated with valuable information, for example for performance analysis. While the maturity of process mining algorithms is increasing and more tools are entering the market, process mining projects still face the problem of different levels of abstraction when comparing events with modeled business activities. Mapping the recorded events to activities of a given process model is essential for conformance checking, annotation and understanding of process discovery results. Current approaches try to abstract from events in an automated way that does not capture the required domain knowledge to fit business activities. Such techniques can be a good way to quickly reduce complexity in process discovery. Yet, they fail to enable techniques like conformance checking or model annotation, and potentially create misleading process discovery results by not using the known business terminology.
In this thesis, we develop approaches that abstract an event log to the same level that is needed by the business. Typically, this abstraction level is defined by a given process model. Thus, the goal of this thesis is to match events from an event log to activities in a given process model. To accomplish this goal, behavioral and linguistic aspects of process models and event logs, as well as domain knowledge captured in existing process documentation, are taken into account to build semi-automatic matching approaches. The approaches establish a pre-processing step for every available process mining technique that produces or annotates a process model, thereby reducing the manual effort for process analysts. While each of the presented approaches can be used in isolation, we also introduce a general framework for the integration of different matching approaches.
The approaches have been evaluated in case studies with industry and using a large industry process model collection and simulated event logs. The evaluation demonstrates the effectiveness and efficiency of the approaches and their robustness towards nonconforming execution logs.
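The linguistic side of event-to-activity matching can be illustrated with a deliberately simple sketch: it maps each event label to the most string-similar model activity, leaving low-confidence labels unmapped for manual review. This is a toy stand-in for the semi-automatic approaches described in the thesis, not their actual algorithm; the similarity measure and threshold below are illustrative choices.

```python
from difflib import SequenceMatcher

def match_events_to_activities(event_names, activity_names, threshold=0.6):
    """Assign each event log label to the model activity with the highest
    string similarity; labels whose best score falls below the threshold
    stay unmapped (None) for manual review by a process analyst."""
    mapping = {}
    for ev in event_names:
        best, score = None, 0.0
        for act in activity_names:
            s = SequenceMatcher(None, ev.lower(), act.lower()).ratio()
            if s > score:
                best, score = act, s
        mapping[ev] = best if score >= threshold else None
    return mapping
```

A real matcher would additionally exploit behavioral constraints (ordering relations in the log versus the model) to disambiguate labels that are linguistically similar to several activities.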
Optical properties of modified diamondoids have been studied theoretically using vibrationally resolved electronic absorption, emission and resonance Raman spectra. A time-dependent correlation function approach has been used for electronic two-state models, comprising a ground state (g) and a bright, excited state (e), the latter determined from linear-response, time-dependent density functional theory (TD-DFT). The harmonic and Condon approximations were adopted. In most cases origin shifts, frequency alteration and Duschinsky rotation in excited states were considered. For other cases where no excited state geometry optimization and normal mode analysis were possible or desired, a short-time approximation was used. The optical properties and spectra have been computed for (i) a set of recently synthesized sp2/sp3 hybrid species with C=C double-bond-connected saturated diamondoid subunits, (ii) functionalized (mostly by thiol or thione groups) diamondoids and (iii) urotropine and other C-substituted diamondoids. The ultimate goal is to tailor optical and electronic features of diamondoids by electronic blending, functionalization and substitution, based on a molecular-level understanding of the ongoing photophysics.
Regardless of what is intended by government curriculum specifications and advised by educational experts, the competencies taught and learned in and out of classrooms can vary considerably. In this paper, we discuss in particular how we can investigate the perceptions that individual teachers have of competencies in ICT, and how these and other factors may influence students’ learning. We report case study research which identifies contradictions within the teaching of ICT competencies as an activity system, highlighting issues concerning the object of the curriculum, the roles of the participants and the school cultures. In a particular case, contradictions in the learning objectives between higher order skills and the use of application tools have been resolved by a change in the teacher’s perceptions which has not led to changes in other aspects of the activity system. We look forward to further investigation of the effects of these contradictions in other case studies and on forthcoming curriculum change.
Double cyclization of short linear peptides obtained by solid phase peptide synthesis was used to prepare bridged bicyclic peptides (BBPs) corresponding to the topology of bridged bicyclic alkanes such as norbornane. Diastereomeric norbornapeptides were investigated by 1H-NMR, X-ray crystallography and CD spectroscopy and found to represent rigid globular scaffolds stabilized by intramolecular backbone hydrogen bonds with scaffold geometries determined by the chirality of amino acid residues and sharing structural features of β-turns and α-helices. Proteome profiling by capture compound mass spectrometry (CCMS) led to the discovery of the norbornapeptide 27c binding selectively to calmodulin as an example of a BBP protein binder. This and other BBPs showed high stability towards proteolytic degradation in serum.
John Birks
(2015)
We describe the career of John Birks as a pioneering scientist who has, over a career spanning five decades, transformed palaeoecology from a largely descriptive to a rigorous quantitative science relevant to contemporary questions in ecology and environmental change. We review his influence on students and colleagues not only at Cambridge and Bergen Universities, his places of primary employment, but also on individuals and research groups in Europe and North America. We also introduce the collection of papers that we have assembled in his honour. The papers are written by his former students and close colleagues and span many of the areas of palaeoecology to which John himself has made major contributions. These include the relationship between ecology and palaeoecology, late-glacial and Holocene palaeoecology, ecological succession, climate change and vegetation history, the role of palaeoecological techniques in reconstructing and understanding the impact of human activity on terrestrial and freshwater ecosystems and numerical analysis of multivariate palaeoecological data.
Obtaining a complete census of massive, evolved stars in a galaxy would be a key ingredient for testing stellar evolution models. However, as the evolution of stars also depends strongly on their metallicity, it is essential to have this kind of data for a variety of galaxies with different metallicities. Between 2009 and 2011, we conducted the Magellanic Clouds Massive Stars and Feedback Survey (MSCF), a spatially complete, multi-epoch, broad- and narrow-band optical imaging survey of the Large and Small Magellanic Clouds. With the inclusion of shallow images, we are able to give a complete photometric catalog of stars between B ≈ 18 and B ≈ 19 mag.
These observations were augmented with additional photometric data of similar spatial resolution from UV to IR (e.g. from GALEX, 2MASS and Spitzer) in order to sample a large portion of the spectral energy distribution of the brightest stars (B < 16 mag) in the Magellanic Clouds. Using these data, we are able to train a machine learning algorithm that gives us a good estimate of the spectral type of tens of thousands of stars.
This method can be applied to the search for Wolf-Rayet stars to obtain a sample of candidates for follow-up observations. As this approach can, in principle, be adopted for any resolved galaxy as long as sufficient photometric data are available, it can form an effective alternative to the classical strategies (e.g. He II filter imaging).
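Classifying spectral types from broad-band colours, as described above, can be sketched with a simple nearest-neighbour classifier. This is an illustrative toy only: the training colours and class labels below are invented placeholders, not values from the survey, and the thesis does not specify which algorithm or features were actually used.

```python
from collections import Counter
import math

# Hypothetical training set: (B-V, V-I) colours with known spectral classes.
# The numbers are placeholders for illustration, not survey measurements.
training = [
    ((-0.30, -0.35), "O"),
    ((-0.25, -0.30), "O"),
    (( 0.00,  0.05), "A"),
    (( 0.05,  0.10), "A"),
    (( 1.60,  1.90), "M"),
    (( 1.70,  2.00), "M"),
]

def classify(colours, k=3):
    """Assign the majority class among the k nearest training stars
    in colour-colour space (plain k-nearest-neighbours)."""
    dists = sorted(
        (math.dist(colours, feat), label) for feat, label in training
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

A blue star such as `classify((-0.28, -0.32))` lands with the hot O-type neighbours, while a very red one is assigned to the M class; any real application would of course need a spectroscopically confirmed training sample.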
We found original observations of P Cygni by E. Kharadze and N. Magalashvili in the archives of the Abastumani Observatory. These observations were carried out in the period 1951–1983. Initially they used 29 Cygni as a comparison star, and all observations of P Cygni were processed using this star. On the basis of their calculations, the authors concluded that P Cygni may be a W UMa type binary with an orbital period of 0.500565 d, but this hypothesis was not confirmed. The only observations that have been published in the Bulletin of the Abastumani Astrophysical Observatory were those of 1951–1955. The archives contain whole sets of observational data not only for P Cygni and 29 Cygni, but in the majority of cases also for 36 Cygni. We recalculated all data (where possible) using 36 Cygni as a comparison star. We present UBV light curves of the variable, and also observations made by V. Nikonov in Abastumani in the period 1935–1937.
Spectroscopy is the preferred way to study the physical and wind properties of Wolf-Rayet (WR) stars, but with decreasing brightness and increasing distance of the object, spectroscopy becomes very expensive. Photometry, however, still delivers a high signal-to-noise ratio. Current and past astronomical surveys and space missions provide large data sets that can be harvested to discover new WR stars and to study them over a wide metallicity range with the help of state-of-the-art stellar atmosphere and evolutionary models.
Microsaccades
(2015)
The first thing we do upon waking is open our eyes. Rotating them in our eye sockets, we scan our surroundings and collect the information into a picture in our head. Eye movements can be split into saccades and fixational eye movements, which occur when we attempt to fixate our gaze. The latter consist of microsaccades, drift and tremor. Before we even lift our eyelids, eye movements, such as saccades and microsaccades that let the eyes jump from one position to another, have partially been prepared in the brain stem. Saccades and microsaccades are often assumed to be generated by the same mechanisms. But how saccades and microsaccades can be classified according to shape has not yet been reported in a statistical manner. Only in the last decade has research put more effort into investigating the properties and generation of microsaccades. Consequently, we are only beginning to understand the dynamic processes governing microsaccadic eye movements. Within this thesis, the dynamics governing the generation of microsaccades are assessed and a model of the underlying processes is developed. Eye movement trajectories from different experiments, recorded with a video-based eye-tracking technique, are used, and a novel method is proposed for the scale-invariant detection of saccades (events of large amplitude) and microsaccades (events of small amplitude). Using a time-frequency approach, the method is examined with different experiments and validated against simulated data. A shape model is suggested that allows for a simple estimation of saccade- and microsaccade-related properties. For sequences of microsaccades, this thesis proposes a time-dynamic Markov model with a memory horizon that changes over time, which best describes sequences of microsaccades.
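For context, the standard detection approach that scale-invariant methods like the one in this thesis improve upon is a velocity threshold in the spirit of Engbert and Kliegl (2003): flag samples whose 2D eye velocity exceeds a multiple of a median-based noise estimate. The sketch below is a deliberately simplified version (no velocity smoothing, no minimum-duration criterion) and is not the thesis's own algorithm.

```python
import statistics

def detect_microsaccades(x, y, dt=0.002, lam=6.0):
    """Flag sample indices whose 2D velocity exceeds an elliptic,
    median-based threshold (simplified Engbert & Kliegl style).
    x, y: gaze positions in degrees; dt: sampling interval in seconds."""
    vx = [(x[i + 1] - x[i]) / dt for i in range(len(x) - 1)]
    vy = [(y[i + 1] - y[i]) / dt for i in range(len(y) - 1)]

    def sigma(v):
        # robust spread estimate: median of squared deviations from the median
        med = statistics.median(v)
        return statistics.median([(u - med) ** 2 for u in v]) ** 0.5

    ex, ey = lam * sigma(vx), lam * sigma(vy)
    return [i for i in range(len(vx))
            if (vx[i] / ex) ** 2 + (vy[i] / ey) ** 2 > 1.0]
```

Because the threshold is derived from the trace's own velocity noise, the same `lam` works across recordings with different noise levels, which is the property the thesis generalizes towards full scale invariance.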
Background
Previous literature mainly introduced cognitive functions to explain performance decrements in dual-task walking, i.e., changes in dual-task locomotion are attributed to limited cognitive information-processing capacities. In this study, we extend the existing literature and investigate whether leg muscular capacity plays an additional role in children's dual-task walking performance.
Methods
To this end, we had prepubescent children (mean age: 8.7 ± 0.5 years, age range: 7–9 years) walk in single task (ST) and while concurrently conducting an arithmetic subtraction task (DT). Additionally, leg lean tissue mass was assessed.
Results
Findings show that both boys and girls significantly decrease their gait velocity (f = 0.73), stride length (f = 0.62) and cadence (f = 0.68) and increase the variability thereof (f = 0.20–0.63) during DT compared to ST. Furthermore, stepwise regressions indicate that leg lean tissue mass is closely associated with step time and the variability thereof during DT (R2 = 0.44, p = 0.009). These associations between gait measures and leg lean tissue mass could not be observed for ST (R2 = 0.17, p = 0.19).
Conclusion
We were able to show a potential link between leg muscular capacities and DT walking performance in children. We interpret these findings as evidence that higher leg muscle mass in children may mitigate the impact of a cognitive interference task on DT walking performance by inducing enhanced gait stability.
The term “bilateral deficit” (BLD) has been used to describe a reduction in performance during bilateral contractions when compared to the sum of identical unilateral contractions. In old age, maximal isometric force production (MIF) decreases and BLD increases, indicating the need for training interventions to mitigate this impact in seniors. In a cross-sectional approach, we examined age-related differences in MIF and BLD in young (age: 20–30 years) and old adults (age: >65 years). In addition, a randomized controlled trial was conducted to investigate training-specific effects of resistance vs. balance training on MIF and BLD of the leg extensors in old adults. Subjects were randomly assigned to resistance training (n = 19), balance training (n = 14), or a control group (n = 20). Bilateral heavy-resistance training for the lower extremities was performed for 13 weeks (3×/week) at 80% of the one-repetition maximum. Balance training was conducted using predominantly unilateral exercises on wobble boards, soft mats, and uneven surfaces for the same duration. Pre- and post-tests included uni- and bilateral measurements of maximal isometric leg extension force. At baseline, young subjects outperformed older adults in uni- and bilateral MIF (all p < .001; d = 2.61–3.37) and in measures of BLD (p < .001; d = 2.04). We also found significant increases in uni- and bilateral MIF after resistance training (all p < .001, d = 1.8–5.7) and balance training (all p < .05, d = 1.3–3.2). In addition, BLD decreased following resistance (p < .001, d = 3.4) and balance training (p < .001, d = 2.6). It can be concluded that both training regimens resulted in increased MIF and decreased BLD of the leg extensors (HRT group more than BAL group), almost reaching the levels of young adults.
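The bilateral deficit defined above is commonly expressed as a bilateral index: the bilateral force relative to the summed unilateral forces, in percent. One common convention from the strength literature (the abstract does not state which exact formula was used) can be written as:

```python
def bilateral_index(bilateral, left, right):
    """Bilateral index in percent, one common convention:
    100 * (bilateral / (left + right) - 1).
    Negative values indicate a bilateral deficit, i.e. the bilateral
    force falls short of the sum of the two unilateral forces."""
    return 100.0 * (bilateral / (left + right) - 1.0)
```

For example, a subject producing 90 units bilaterally but 50 units with each leg alone has an index of about -10%, a clear bilateral deficit.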
Graph databases provide a natural way of storing and querying graph data. In contrast to relational databases, queries over graph databases can refer directly to the graph structure of such graph data. For example, graph pattern matching can be employed to formulate queries over graph data.
However, as for relational databases, running complex queries can be very time-consuming and ruin the interactivity with the database. One possible approach to this performance issue is to employ database views, which consist of pre-computed answers to common and frequently stated queries. But to ensure that database views yield consistent query results with respect to the data from which they are derived, these views must be updated before queries make use of them. Such view maintenance must be performed efficiently; otherwise the effort to create and maintain views may not pay off compared to processing the queries directly on the data from which the views are derived.
At the time of writing, graph databases do not support database views and are limited to graph indexes that index nodes and edges of the graph data for fast query evaluation, but do not allow maintaining pre-computed answers of complex queries over graph data. Moreover, the maintenance of database views in graph databases becomes even more challenging when negation and recursion have to be supported, as in deductive relational databases.
In this technical report, we present an approach for efficient and scalable incremental graph view maintenance for deductive graph databases. The main concept of our approach is a generalized discrimination network that makes it possible to model nested graph conditions, including negative application conditions and recursion, which specify the content of graph views derived from graph data stored by graph databases. The discrimination network allows us to automatically derive generic maintenance rules, using graph transformations, for maintaining graph views when the graph data from which they are derived change. We evaluate our approach in a case study using multiple data sets derived from open source projects.
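Graph pattern matching, mentioned above as the query mechanism, can be illustrated with a toy backtracking matcher over adjacency sets. This is an illustrative sketch of the general technique only; it has nothing of the report's discrimination-network machinery, negative application conditions, or incremental maintenance.

```python
def match_pattern(pattern, graph):
    """Find all injective mappings of pattern nodes onto graph nodes such
    that every pattern edge maps onto a graph edge. Both arguments are
    undirected adjacency dicts: {node: {neighbour, ...}} with symmetric
    entries. Returns a list of {pattern_node: graph_node} dicts."""
    pnodes = list(pattern)

    def extend(assign):
        if len(assign) == len(pnodes):
            yield dict(assign)
            return
        p = pnodes[len(assign)]
        for g in graph:
            if g in assign.values():
                continue  # keep the mapping injective
            # every already-mapped pattern neighbour of p must be
            # adjacent to the candidate graph node g
            if all(assign[q] in graph[g] for q in pattern[p] if q in assign):
                assign[p] = g
                yield from extend(assign)
                del assign[p]

    return list(extend({}))
```

A triangle pattern matched against a square with one diagonal yields six embeddings (the three triangle nodes in every order); an incremental view, as in the report, would cache such results and repair them on graph updates instead of re-running the search.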
75 WR stars and 164 RSGs are identified in a single WFC3 pointing of our M101 survey. We find that within its large star-forming complex NGC 5462, WR stars are preferentially located in the core whilst RSGs are found in the halo, suggesting two bursts of star formation. A review of our WR candidates reveals that only ∼30% are detected in the archival broad-band ACS imaging, whilst only ∼50% are associated with HII regions.
The Global Terrestrial Network for Permafrost (GTN-P) provides the first dynamic database associated with the Thermal State of Permafrost (TSP) and the Circumpolar Active Layer Monitoring (CALM) programs, which extensively collect permafrost temperature and active layer thickness (ALT) data from Arctic, Antarctic and mountain permafrost regions. The purpose of GTN-P is to establish an early warning system for the consequences of climate change in permafrost regions and to provide standardized thermal permafrost data to global models. In this paper we introduce the GTN-P database and perform statistical analysis of the GTN-P metadata to identify and quantify the spatial gaps in the site distribution in relation to climate-effective environmental parameters. We describe the concept and structure of the data management system in regard to user operability, data transfer and data policy. We outline data sources and data processing including quality control strategies based on national correspondents. Assessment of the metadata and data quality reveals 63% metadata completeness at active layer sites and 50% metadata completeness for boreholes.
Voronoi tessellation analysis on the spatial sample distribution of boreholes and active layer measurement sites quantifies the distribution inhomogeneity and provides a potential method to locate additional permafrost research sites by improving the representativeness of thermal monitoring across areas underlain by permafrost. The depth distribution of the boreholes reveals that 73% are shallower than 25 m and 27% are deeper, reaching a maximum of 1 km depth. Comparison of the GTN-P site distribution with permafrost zones, soil organic carbon contents and vegetation types exhibits different local to regional monitoring situations, which are illustrated with maps. Preferential slope orientation at the sites most likely causes a bias in the temperature monitoring and should be taken into account when using the data for global models. The distribution of GTN-P sites within zones of projected temperature change shows a high representation of areas with smaller expected temperature rise but a lower number of sites within Arctic areas where climate models project extreme temperature increases.
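A simple, complementary way to quantify how evenly monitoring sites cover a region, alongside the Voronoi analysis used above, is the Clark-Evans nearest-neighbour index. The sketch below works on planar coordinates and ignores edge corrections and geodesic distances, both of which a real circumpolar analysis would need; it is not the paper's method.

```python
import math

def clark_evans(points, area):
    """Clark-Evans index R: observed mean nearest-neighbour distance
    divided by the value expected for a random (Poisson) pattern of the
    same density. R < 1 indicates clustering, R > 1 regular spacing.
    points: list of (x, y); area: study-region area in matching units."""
    n = len(points)
    mean_nn = sum(
        min(math.dist(p, q) for q in points if q is not p)
        for p in points
    ) / n
    expected = 0.5 / math.sqrt(n / area)  # Poisson expectation
    return mean_nn / expected
```

Applied to site coordinates, a low R would flag the kind of spatial gaps the GTN-P metadata analysis is after: sites huddled in accessible regions rather than spread across the permafrost domain.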
Semi-empirical sea-level models (SEMs) exploit physically motivated empirical relationships between global sea level and certain drivers, in the following global mean temperature. This model class evolved as a supplement to process-based models (Rahmstorf, 2007), which were unable to fully represent all relevant processes. They thus failed to capture past sea-level change (Rahmstorf et al., 2012) and were thought likely to underestimate future sea-level rise. Semi-empirical models were found to be a fast and useful tool for exploring the uncertainties in future sea-level rise, consistently giving significantly higher projections than process-based models.
In the following, different aspects of semi-empirical sea-level modelling have been studied. Models were first validated using various data sets of global sea level and temperature. SEMs were then applied to the glacier contribution to sea level, and used to infer past global temperature from sea-level data via inverse modelling. The periods studied encompass the instrumental period, covered by tide gauges (starting 1700 CE (Common Era) in Amsterdam) and satellites (first launched in 1992 CE), the era from 1000 BCE (before CE) to present, and the full length of the Holocene (using proxy data). Accordingly, different data, model formulations and implementations have been used. It could be shown in Bittermann et al. (2013) that SEMs correctly predict 20th-century sea level when calibrated with data until 1900 CE. SEMs also turned out to give better predictions than the models of the Intergovernmental Panel on Climate Change (IPCC) 4th assessment report (AR4; IPCC, 2007) for the period 1961–2003 CE.
With the first multi-proxy reconstruction of global sea level as input, estimates of the human-induced component of modern sea-level change and projections of future sea-level rise were calculated (Kopp et al., 2016). It turned out, with 90% confidence, that more than 40% of the observed 20th-century sea-level rise is indeed anthropogenic. With the new semi-empirical and IPCC (2013) 5th assessment report (AR5) projections, the gap between SEM and process-based model projections closes, giving higher credibility to both. Combining all scenarios, from strong mitigation to business as usual, a global sea-level rise of 28–131 cm relative to 2000 CE is projected with 90% confidence. The decision for a low-carbon pathway could halve the expected global sea-level rise by 2100 CE.
Present-day temperature and thus sea level are driven by the globally acting greenhouse-gas forcing. Unlike that, the Milankovitch forcing, acting on Holocene timescales, results mainly in a northern-hemisphere temperature change. Therefore a semi-empirical model can be driven with northern-hemisphere temperatures, which makes it possible to model the main subcomponent of sea-level change over this period. It showed that an additional positive constant rate of the order of the estimated Antarctic sea-level contribution is then required to explain the sea-level evolution over the Holocene. Thus the global sea level, following the climatic optimum, can be interpreted as the sum of a temperature-induced sea-level drop and a positive long-term contribution, likely an ongoing response to deglaciation coming from Antarctica.
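The simplest semi-empirical relation of the kind discussed above, dH/dt = a (T - T0) as introduced by Rahmstorf (2007), can be integrated with a plain Euler step. The default coefficients below are placeholders of roughly the published magnitude, not fitted values from this thesis.

```python
def sea_level(temps, a=3.4, t0=-0.5, dt=1.0, h0=0.0):
    """Integrate dH/dt = a * (T - T0) over a temperature series.
    temps: temperature anomalies (deg C) per time step; a: sensitivity
    in mm/yr per deg C (placeholder value); t0: equilibrium temperature;
    dt: step in years; returns sea level H in mm at each step."""
    levels = [h0]
    for temp in temps:
        levels.append(levels[-1] + a * (temp - t0) * dt)
    return levels
```

With a constant anomaly above T0, sea level rises linearly; the Holocene application described above would add a constant Antarctic-like rate term to the right-hand side.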
The microbial community populating the human digestive tract has been linked to the development of obesity, diabetes and liver diseases. Proposed mechanisms on how the gut microbiota could contribute to obesity and metabolic diseases include: (1) improved energy extraction from diet by the conversion of dietary fibre to SCFA; (2) increased intestinal permeability for bacterial lipopolysaccharides (LPS) in response to the consumption of high-fat diets, resulting in an elevated systemic LPS level and low-grade inflammation. Animal studies indicate differences in the physiologic effects of fermentable and non-fermentable dietary fibres as well as differences in long- and short-term effects of fermentable dietary fibre. The human intestinal microbiome is enriched in genes involved in the degradation of indigestible polysaccharides. The extent to which dietary fibres are fermented and in which molar ratio SCFA are formed depends on their physicochemical properties and on the individual microbiome. Acetate and propionate play an important role in lipid and glucose metabolism. Acetate serves as a substrate for de novo lipogenesis in liver, whereas propionate can be utilised for gluconeogenesis. The conversion of fermentable dietary fibre to SCFA provides additional energy to the host, which could promote obesity. However, epidemiologic studies indicate that diets rich in fibre prevent rather than promote obesity development. This may be due to the fact that SCFA are also ligands of free fatty acid receptors (FFAR). Activation of FFAR leads to an increased expression and secretion of enteroendocrine hormones such as glucagon-like peptide 1 or peptide YY which cause satiety. In conclusion, the role of SCFA in host energy balance needs to be re-evaluated.
Brownian motion is ergodic in the Boltzmann-Khinchin sense that long-time averages of physical observables such as the mean squared displacement provide the same information as the corresponding ensemble average, even at out-of-equilibrium conditions. This property is the fundamental prerequisite for single-particle tracking and its analysis in simple liquids. We study analytically and by event-driven molecular dynamics simulations the dynamics of force-free cooling granular gases and reveal a violation of ergodicity in this Boltzmann-Khinchin sense as well as distinct ageing of the system. Such granular gases comprise materials such as dilute gases of stones, sand, various types of powders, or large molecules, and their mixtures are ubiquitous in Nature and technology, in particular in Space. We treat, depending on the physical-chemical properties of the inter-particle interaction upon their pair collisions, both a constant and a velocity-dependent (viscoelastic) restitution coefficient e. Moreover, we compare the granular gas dynamics with an effective single-particle stochastic model based on an underdamped Langevin equation with time-dependent diffusivity. We find that both models share the same behaviour of the ensemble mean squared displacement (MSD) and the velocity correlations in the limit of weak dissipation. Qualitatively, the reported non-ergodic behaviour is generic for granular gases with any realistic dependence of e on the impact velocity of particles.
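The effective single-particle model mentioned above, an underdamped Langevin equation with time-dependent diffusivity, can be sketched with an Euler-Maruyama scheme. The Haff-like decaying diffusivity and all parameter values below are illustrative assumptions, not the paper's calibrated model.

```python
import random

def simulate_msd(n_particles=200, n_steps=500, dt=0.01,
                 gamma=1.0, d0=1.0, tau=1.0, seed=1):
    """Euler-Maruyama integration of dv = -gamma*v*dt + sqrt(2*D(t))*dW
    with a decaying diffusivity D(t) = d0 / (1 + t/tau)**2 (a Haff-like
    cooling law, assumed here for illustration). Returns the ensemble
    mean squared displacement at every step."""
    rng = random.Random(seed)
    xs = [0.0] * n_particles
    vs = [0.0] * n_particles
    msd = []
    for step in range(n_steps):
        t = step * dt
        d_t = d0 / (1.0 + t / tau) ** 2
        noise = (2.0 * d_t * dt) ** 0.5
        for i in range(n_particles):
            vs[i] += -gamma * vs[i] * dt + noise * rng.gauss(0.0, 1.0)
            xs[i] += vs[i] * dt
        msd.append(sum(x * x for x in xs) / n_particles)
    return msd
```

Comparing this ensemble MSD with single-trajectory time averages is exactly the kind of check by which the ergodicity violation described above becomes visible.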
Background: Continuous treatment is an important indicator of medication adherence in dementia. However, long-term studies in larger clinical settings are lacking, and little is known about moderating effects of patient and service characteristics.
Methods: Data from 12,910 outpatients with dementia (mean age 79.2 years; SD = 7.6 years) treated between January 2003 and December 2013 in Germany were included. Continuous treatment was analysed using Kaplan-Meier curves and log-rank tests. In addition, multivariate Cox regression models were fitted with continuous treatment as dependent variable and the predictors antidementia agent, age, gender, medical comorbidities, physician specialty, and health insurance status.
Results: After one year of follow-up, nearly 60% of patients continued drug treatment. Donepezil (HR: 0.88; 95% CI: 0.82-0.95) and memantine (HR: 0.85; 0.79-0.91) patients were less likely to discontinue treatment than rivastigmine users. Patients were also less likely to discontinue if they were treated by specialist physicians rather than general practitioners (HR: 0.44; 0.41-0.48). Younger male patients and patients with private health insurance had a lower discontinuation risk. Regarding comorbidity, patients were more likely to be continuously treated with the index substance if heart failure or hypertension had been diagnosed at baseline.
Conclusions: Our results imply that, besides the type of antidementia agent, the involvement of a specialist in the complex process of prescribing antidementia drugs can provide meaningful benefits to patients in terms of more disease-specific and continuous treatment.
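The Kaplan-Meier curves used in this study have a compact form: at each event time, survival is multiplied by (1 - d/n), with d discontinuations among n patients still at risk. A minimal sketch (illustrative, not the study's statistical pipeline):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimator. events[i] is True for an observed
    event (e.g. treatment discontinuation) at times[i], False if the
    observation was censored. Returns (time, survival) pairs, one per
    distinct time at which at least one event occurred."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = leaving = 0
        # group all observations sharing this time point
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]   # True counts as 1
            leaving += 1
            i += 1
        if deaths:
            survival *= 1.0 - deaths / at_risk
            curve.append((t, survival))
        at_risk -= leaving
    return curve
```

Censored patients (those still on treatment at last contact) leave the risk set without dropping the curve, which is what distinguishes this from a naive fraction-remaining plot.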
The evolution of massive stars in very low metallicity galaxies is less well observationally constrained than in environments more similar to the Milky Way, M33, or the LMC. We discuss in this contribution the current state of our program to search for and characterize Wolf-Rayet stars (and other massive emission-line stars) in low metallicity galaxies in the Local Volume.
Kill one or kill them all?
(2015)
Research indicates individual pathways towards school attacks and inconsistent offender profiles. Thus, several authors have classified offenders according to mental disorders, motives, or number/kinds of victims. We assumed differences between single and multiple victim offenders (intending to kill one or more than one victim). In qualitative and quantitative analyses of data from qualitative content analyses of case files on seven school attacks in Germany, we found differences between the offender groups in seriousness, patterns, characteristics, and classes of leaking (announcements of offences), offence-related behaviour, and offence characteristics. There were only minor differences in risk factors. Our research thus adds to the understanding of school attacks and leaking. Differences between offender groups require consideration in the planning of effective preventive approaches.
Adjustment of empirically derived ground motion prediction equations (GMPEs) from a data-rich region/site where they have been derived to a data-poor region/site is one of the major challenges associated with the current practice of seismic hazard analysis. Due to their frequent use in engineering design practice, GMPEs are often derived for response spectral ordinates (e.g., spectral acceleration) of a single degree of freedom (SDOF) oscillator. The functional forms of such GMPEs are based upon concepts borrowed from the Fourier spectral representation of ground motion. This assumption regarding the validity of Fourier spectral concepts in the response spectral domain can lead to consequences which cannot be explained physically.
In this thesis, firstly, results from an investigation exploring the relationship between Fourier and response spectra, and the implications of this relationship for the adjustment of GMPEs, are presented. The relationship between the Fourier and response spectra is explored using random vibration theory (RVT), a framework that has been used extensively in earthquake engineering, for instance within the stochastic simulation framework and in site response analysis. For a 5% damped SDOF oscillator, the RVT perspective on response spectra reveals that no one-to-one correspondence exists between Fourier and response spectral ordinates except in a limited range of oscillator frequencies (i.e., below the peak of the response spectrum). The high-frequency response spectral ordinates are dominated by contributions from Fourier spectral ordinates at frequencies well below the selected oscillator frequency. The peak ground acceleration (PGA) is found to be related to the integral over the entire Fourier spectrum of ground motion, in contrast to the popularly held perception that PGA is a high-frequency phenomenon of ground motion.
This thesis presents a new perspective for developing a response spectral GMPE that takes the relationship between Fourier and response spectra into account. Essentially, this framework involves a two-step method for deriving a response spectral GMPE: in the first step, two empirical models, for the FAS and for a predetermined estimate of the duration of ground motion, are derived; in the next step, predictions from the two models are combined within the RVT framework to obtain the response spectral ordinates. In addition, a stochastic-model-based scheme for extrapolating individual acceleration spectra beyond their usable frequency limits is presented. To that end, recorded acceleration traces were inverted to obtain the stochastic model parameters that allow consistent extrapolation of individual (acceleration) Fourier spectra. Moreover, an empirical model for a duration measure that is consistent within the RVT framework is derived. As a next step, an oscillator-frequency-dependent empirical duration model is derived that allows obtaining the most reliable estimates of response spectral ordinates. The framework for deriving the response spectral GMPE presented herein becomes a self-adjusting model with the inclusion of the stress parameter (∆σ) and kappa (κ0) as predictor variables in the two empirical models. The entire analysis of developing the response spectral GMPE is performed on the recently compiled RESORCE-2012 database, which contains recordings from Europe, the Mediterranean and the Middle East. The presented GMPE for response spectral ordinates should be considered valid in the magnitude range 4 ≤ MW ≤ 7.6 at distances ≤ 200 km.
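The RVT link between the Fourier spectrum and PGA stated above rests on Parseval's theorem: spectral moments of the FAS give the rms acceleration over the ground-motion duration, and a peak factor converts rms to an expected peak. The sketch below is a textbook-style schematic, using the simple sqrt(2 ln N) peak factor in place of the full Cartwright-Longuet-Higgins expression used in serious RVT implementations; it is not the thesis's code.

```python
import math

def rvt_pga(freqs, fas, duration):
    """Schematic RVT estimate of PGA from a one-sided Fourier amplitude
    spectrum of acceleration. freqs: Hz (ascending); fas: amplitudes;
    duration: rms duration in seconds."""
    def moment(k):
        # m_k = 2 * integral of (2*pi*f)^k * |FAS(f)|^2 df (trapezoidal)
        vals = [(2 * math.pi * f) ** k * a * a for f, a in zip(freqs, fas)]
        return 2 * sum(0.5 * (vals[i] + vals[i + 1]) * (freqs[i + 1] - freqs[i])
                       for i in range(len(freqs) - 1))

    m0, m2 = moment(0), moment(2)
    a_rms = math.sqrt(m0 / duration)               # Parseval
    n_zero = duration / math.pi * math.sqrt(m2 / m0)  # expected zero crossings
    peak_factor = math.sqrt(2 * math.log(max(n_zero, math.e)))
    return peak_factor * a_rms
```

Note that m0 integrates the entire spectrum, which is exactly why PGA, as argued above, is not a purely high-frequency quantity: doubling the FAS at any frequency band raises the estimate.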
Computational Thinking
(2015)
Digital technology has radically changed the way people work in industry, finance, services, media and commerce. Informatics has contributed to the scientific and technological development of our society in general and to the digital revolution in particular. Computational thinking is the term indicating the key ideas of this discipline that might be included in the key competencies underlying the curriculum of compulsory education. The educational potential of informatics has a history dating back to the sixties. In this article, we briefly revisit this history looking for lessons learned. In particular, we focus on experiences of teaching and learning programming. However, computational thinking is more than coding. It is a way of thinking and practicing interactive dynamic modeling with computers. We advocate that learners can practice computational thinking in playful contexts where they can develop personal projects, for example building videogames and/or robots, and share and discuss their constructions with others. In our view, this approach allows an integration of computational thinking in the K-12 curriculum across disciplines.
Obesity is a major health problem for many developing and industrial countries. Rates reach almost 50% of the population in some countries, and related metabolic diseases, including cardiovascular events and T2DM, are challenging the health systems. Adiposity, an increase in body fat mass, is a major hallmark of obesity. Adipose tissue has long been known not only to store lipids but also to influence whole-body metabolism, including food intake, energy expenditure and insulin sensitivity. Adipocytes can store lipids and thereby protect other tissues from lipotoxic damage. However, if energy intake is higher than energy expenditure over a sustained time period, adipose tissue will expand. This can lead to impaired adipose tissue function, resulting in higher levels of plasma lipids, which can affect other tissues like skeletal muscle, finally leading to metabolic complications. Several studies showed beneficial metabolic effects of weight reduction in obese subjects immediately after weight loss. However, weight regain is frequently observed, along with potential negative effects on cardiovascular risk factors and a high intra-individual variability in response.
We performed a body weight maintenance study investigating the mechanisms of weight maintenance after intended weight reduction (WR). To this end, we used a low-caloric diet followed by a 12-month lifestyle intervention. Comprehensive phenotyping, including fat and muscle biopsies, was conducted to investigate hormonal as well as metabolic influences on body weight regulation. In this study, we showed that weight reduction has numerous potentially beneficial effects on metabolic parameters. After 3 months of WR, subjects showed significant weight and fat mass reduction, lower TG levels as well as higher insulin sensitivity. Using RNA-Seq to analyse the whole fat and muscle transcriptomes, a strong impact of weight reduction on adipose tissue gene expression was observed. Gene expression alterations over weight reduction included several cellular metabolic genes involved in lipid and glucose metabolism as well as insulin signalling and regulatory pathways. These changes were also associated with anthropometric parameters describing body composition. Our data indicated that weight reduction leads to a decreased expression of several lipid catabolic as well as anabolic genes. Long-term body weight maintenance might be influenced by several parameters including hormones, metabolic intermediates as well as the transcriptional landscape of metabolically active tissues. Our data showed that genes involved in the biosynthesis of unsaturated fatty acids might influence the BMI 18 months after a weight reduction phase. This was further supported by analysing metabolic parameters including RQ and FFA levels. We could show that subjects maintaining their lost body weight had a higher RQ and lower FFA levels, indicating increased metabolic flexibility.
Using this transcriptomic approach we hypothesize that low expression levels of lipid synthetic genes in adipose tissue together with a higher mitochondrial activity in skeletal muscle tissue might be beneficial in terms of body weight maintenance.
Spectral fingerprinting
(2015)
Current research on runoff and erosion processes, as well as an increasing demand for sustainable watershed management, emphasizes the need for an improved understanding of sediment dynamics. This involves the accurate assessment of erosion rates and of sediment transfer, yield and origin. A variety of methods exist to capture these processes at the catchment scale. Among these, sediment fingerprinting, a technique to trace back the origin of sediment, has attracted increasing attention from the scientific community in recent years. It is a two-step procedure based on the fundamental assumptions that potential sources of sediment can be reliably discriminated based on a set of characteristic ‘fingerprint’ properties, and that a comparison of source and sediment fingerprints allows the relative contribution of each source to be quantified.
This thesis aims at further assessing the potential of spectroscopy to assist and improve the sediment fingerprinting technique. Specifically, this work focuses on (1) whether potential sediment sources can be reliably identified based on spectral features (‘fingerprints’), (2) whether these spectral fingerprints permit the quantification of relative source contributions, and (3) whether in situ derived source information is sufficient for this purpose. Furthermore, sediment fingerprinting using spectral information is applied in a study catchment to (4) identify major sources and observe how relative source contributions change between and within individual flood events. Finally, (5) spectral fingerprinting results are compared and combined with simultaneous sediment flux measurements to study sediment origin, transport and storage behaviour.
For the sediment fingerprinting approach, soil samples were collected from potential sediment sources within the Isábena catchment, a meso-scale basin in the central Spanish Pyrenees. Undisturbed samples of the upper soil layer were measured in situ using an ASD spectroradiometer and subsequently sampled for measurements in the laboratory. Suspended sediment was sampled automatically by means of ISCO samplers at the catchment as well as at the five major subcatchment outlets during flood events, and stored fine sediment from the channel bed was collected from 14 cross-sections along the main river. Artificial mixtures of known contributions were produced from source soil samples. Then, all source, sediment and mixture samples were dried and spectrally measured in the laboratory. Subsequently, colour coefficients and physically based features with relation to organic carbon, iron oxide, clay content and carbonate, were calculated from all in situ and laboratory spectra. Spectral parameters passing a number of prerequisite tests were submitted to principal component analyses to study natural clustering of samples, discriminant function analyses to observe source differentiation accuracy, and a mixing model for source contribution assessment. In addition, annual as well as flood event based suspended sediment fluxes from the catchment and its subcatchments were calculated from rainfall, water discharge and suspended sediment concentration measurements using rating curves and Quantile Regression Forests. Results of sediment flux monitoring were interpreted individually with respect to storage behaviour, compared to fingerprinting source ascriptions and combined with fingerprinting to assess their joint explanatory potential.
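The mixing-model step of the fingerprinting procedure can be sketched as a constrained unmixing problem: find non-negative source contributions, summing to one, whose mixture best reproduces the sediment fingerprint. The sketch below (with invented fingerprint values; neither the exact model nor the data of this thesis) enforces the sum-to-one constraint via a heavily weighted extra equation and solves by non-negative least squares:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical mean fingerprint values: rows are spectral parameters
# (e.g. a colour coefficient, an iron-oxide feature, an organic-carbon
# feature), columns are the three source types.
sources = np.array([
    [0.80, 0.20, 0.50],
    [0.10, 0.70, 0.30],
    [0.30, 0.40, 0.90],
])
sediment = np.array([0.53, 0.34, 0.51])  # measured sediment fingerprint

# Append a heavily weighted row so that the contributions sum to ~1,
# then solve the non-negative least-squares problem.
w = 100.0
A = np.vstack([sources, w * np.ones(3)])
b = np.append(sediment, w)
contrib, residual = nnls(A, b)
print(contrib.round(2))  # relative contribution of each source
```

In published fingerprinting studies the unmixing is usually embedded in Monte Carlo resampling of the source statistics with goodness-of-fit screening; this sketch only shows the core unmixing step.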
In response to the key questions of this work, (1) three source types (land use) and five spatial sources (subcatchments) could be reliably discriminated based on spectral fingerprints. The artificial mixture experiment revealed that while (2) laboratory parameters permitted source contribution assessment, (3) the use of in situ derived information was insufficient; apparently, high discrimination accuracy does not necessarily imply good quantification results. When applied to suspended sediment samples from the catchment outlet, the spectral fingerprinting approach was able to (4) quantify the major sediment sources: badlands and the Villacarli subcatchment, respectively, were identified as the main contributors, which is consistent with field observations and previous studies. Source contribution was found to vary both within and between individual flood events. Sediment flux was also found to vary considerably on annual and seasonal scales and from flood to flood. Storage was confirmed to play an important role in the sediment dynamics of the studied catchment, with floods of lower total sediment yield tending to deposit material and floods of higher yield tending to remove material from the channel bed. Finally, a comparison of flux measurements with fingerprinting results highlighted the fact that (5) immediate transport from sources to the catchment outlet cannot be assumed. A combination of the two methods revealed different aspects of sediment dynamics that neither technique could have uncovered individually.
In summary, spectral properties provide a fast, non-destructive and cost-efficient means to discriminate and quantify sediment sources, although, unfortunately, source information collected straightforwardly in situ proved insufficient for this approach. Mixture modelling using artificial mixtures permits valuable insights into the capabilities and limitations of the method, and similar experiments are strongly recommended for future studies. Furthermore, a combination of techniques such as (spectral) sediment fingerprinting and sediment flux monitoring can provide a comprehensive understanding of sediment dynamics.
GEOMAGIA50.v3
(2015)
Background: GEOMAGIA50.v3 for sediments is a comprehensive online database providing access to published paleomagnetic, rock magnetic, and chronological data obtained from lake and marine sediments deposited over the past 50 ka. Its objective is to catalogue data that will improve our understanding of changes in the geomagnetic field, physical environments, and climate.
Findings: GEOMAGIA50.v3 for sediments builds upon the structure of the pre-existing GEOMAGIA50 database for magnetic data from archeological and volcanic materials. A strong emphasis has been placed on the storage of geochronological data, and it is the first magnetic archive that includes comprehensive radiocarbon age data from sediments. The database will be updated as new sediment data become available.
Conclusions: The web-based interface for the sediment database is located at http://geomagia.gfz-potsdam.de/geomagiav3/SDquery.php. This paper is a companion to Brown et al. (Earth Planets Space, doi:10.1186/s40623-015-0232-0, 2015) and describes the data types, structure, and functionality of the sediment database.
The paper presents two approaches to the development of a Computer Science Competence Model for the needs of curriculum development and evaluation in Higher Education. A normative-theoretical approach is based on the AKT and ACM/IEEE curriculum and will be used within the recommendations of the German Informatics Society (GI) for the design of CS curricula. An empirically oriented approach refines the categories of the first one with regard to specific subject areas by conducting content analysis on CS curricula of important universities from several countries. The refined model will be used for the needs of students’ e-assessment and subsequent affirmative action of the CS departments.
The Franciscans in Cathay
(2015)
The study analyzes the process that led to the elaboration of the thesis of a continuity between the medieval Asia mission and the New World mission. This effort, undertaken by Catholic historiography of the mission during the nineteenth century, resulted from the impulse provided by Alexander von Humboldt’s studies on the discovery of America (Examen critique). The data about the geography of Asia collected by the missionary-travellers working in the territory between Karakorum and Khanbalik during the thirteenth and fourteenth centuries reached Christopher Columbus through the mediation of Roger Bacon, whom Humboldt himself regarded as a true cultural mediator. The conclusion of the article seeks to identify the reasons for and modalities of the secularization of the missionary concept, i.e. the shift from the ideal of the propagation of the Christian message to a prevailing interest in cartography and topography, transformations arranged by a late medieval historiography that introduces the loca toponomastica into martyrologia.
In this paper we propose an algorithm to distinguish between light- and heavy-tailed probability laws underlying random datasets. The idea of the algorithm, which is visual and easy to implement, is to check whether the underlying law belongs to the domain of attraction of the Gaussian or of a non-Gaussian stable distribution by examining its rate of convergence. The method makes it possible to discriminate between stable and various non-stable distributions, and to differentiate between distributions that appear identical under the standard Kolmogorov-Smirnov test. In particular, it helps to distinguish between stable and Student's t probability laws, as well as between stable and tempered stable laws, cases considered in the literature to be very cumbersome. Finally, we illustrate the procedure on plasma data to identify cases with the so-called L-H transition.
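The convergence-rate idea can be illustrated with a simple sketch (a generic illustration of the principle, not the authors' exact procedure): block sums of data in the Gaussian domain of attraction grow like n^(1/2), while sums of α-stable data grow like n^(1/α) > n^(1/2), so the scaling exponent of block sums separates the two regimes.

```python
import numpy as np

rng = np.random.default_rng(42)

def sum_scaling_exponent(x, block_sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate gamma in  median|S_n| ~ n**gamma  from block sums of the data.

    For laws in the Gaussian domain of attraction gamma -> 1/2; for an
    alpha-stable law gamma -> 1/alpha > 1/2.  (Illustrative sketch only.)
    """
    q = []
    for n in block_sizes:
        m = len(x) // n
        s = x[: m * n].reshape(m, n).sum(axis=1)
        # robust spread of the block sums (median absolute deviation)
        q.append(np.median(np.abs(s - np.median(s))))
    slope, _ = np.polyfit(np.log(block_sizes), np.log(q), 1)
    return slope

light = rng.normal(size=100_000)           # Gaussian: exponent near 0.5
heavy = rng.standard_cauchy(size=100_000)  # Cauchy (alpha = 1): exponent near 1.0

print(sum_scaling_exponent(light), sum_scaling_exponent(heavy))
```

Plotting the block-sum quantiles against n on a log-log scale gives the visual version of this test: a straight line of slope 1/2 indicates Gaussian attraction, a steeper line heavy tails.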
Injection of nanoscale zero-valent iron (nZVI) is an innovative technology for the in situ installation of a permeable reactive barrier in the subsurface. Zero-valent iron (ZVI) is highly reactive with chlorinated hydrocarbons (CHCs) and converts them into less harmful substances. Applying nZVI instead of granular ZVI can increase dechlorination rates of CHCs by orders of magnitude, owing to its higher surface area. This approach is still difficult to apply because colloidal suspensions of nZVI agglomerate and sediment quickly, which leads to very short transport distances. To overcome this limited mobility, polyanionic stabilisers are added to increase the surface charge and stability of the suspensions. In field experiments, maximum transport distances of only a few metres have been achieved. A new approach, investigated in this thesis, is to enhance the mobility of nZVI by means of a more mobile carrier colloid. The investigated composite material consists of activated carbon loaded with nZVI.
In this cumulative thesis, the transport characteristics of carbon-colloid supported nZVI (c-nZVI) are investigated. Investigations started with experiments in 40 cm columns filled with various porous media to examine physicochemical influences on transport characteristics. The experimental setup was then enlarged to a transport experiment in a 1.2 m two-dimensional aquifer tank filled with granular porous media. Furthermore, a field experiment was performed in a natural aquifer system with a targeted transport distance of 5.3 m. In parallel, alternative methods for observing transport were explored using non-invasive tomographic techniques: experiments using synchrotron radiation and magnetic resonance imaging (MRI) were performed to investigate in situ transport characteristics non-destructively.
Results from column experiments show potentially high mobility under environmentally relevant conditions. Addition of mono- and bivalent salts, e.g. more than 0.5 mmol/L CaCl2, can decrease mobility, and lowering the pH below 6 can suppress mobility entirely. Measurements of colloid size show changes in the mean particle size by a factor of ten, and zeta potential measurements revealed a change from −62 mV to −82 mV. Results from the 2D aquifer test system suggest strong particle deposition within the first centimetres, only weak straining along the further travel path and no gravitational influence on particle transport. Straining at the beginning of the travel path in the porous medium was also observed in the tomographic investigations of transport: MRI experiments yielded results similar to the previous experiments, and observations using synchrotron radiation suggest straining of colloids at pore throats. The potential for large transport distances suggested by the laboratory experiments was confirmed in the field experiment, where the target distance of 5.3 m was reached by at least 10% of the injected nZVI. Altogether, the transport distances of the investigated carbon-colloid supported nZVI exceed published results for traditional nZVI.
The girls set the tone
(2015)
In a four-wave longitudinal study with N = 1,321 adolescents in Germany, we examined the impact of class-level normative beliefs about aggression on aggressive norms and behavior at the individual level over the course of 3 years. At each data wave, participants indicated their normative acceptance of aggressive behavior and provided self-reports of physical and relational aggression. Multilevel analyses revealed significant cross-level interactions between class-level and individual-level normative beliefs at T1 on individual differences in physical aggression at T2, and the indirect interactive effects were significant up to T4. Normative approval of aggression at the class level, especially girls’ normative beliefs, defined the boundary conditions for the expression of individual differences in aggressive norms and their impact on physically and relationally aggressive behavior for both girls and boys. The findings demonstrate the moderating effect of social norms on the pathways from individual normative beliefs to aggressive behavior in adolescence.
The continuously increasing demand for rare earth elements in components of modern technologies brings the detection of new deposits into the focus of global exploration. One promising method for globally mapping important deposits is remote sensing, which has been used for a wide range of mineral mapping tasks in the past. This doctoral thesis investigates the capacity of hyperspectral remote sensing for the detection of rare earth element deposits. The definition and realisation of a fundamental database on the spectral characteristics of rare earth oxides, rare earth metals and rare earth element bearing materials formed the basis of this thesis. To investigate these characteristics in the field, hyperspectral images of four outcrops in the Fen Complex, Norway, were collected in the near-field. A new methodology (named REEMAP) was developed to delineate rare earth element enriched zones. The main steps of REEMAP are: 1) multitemporal weighted averaging of multiple images covering the sample area; 2) sharpening the rare earth related signals using a Gaussian high-pass deconvolution technique calibrated on the standard deviation of a Gaussian bell-shaped curve represented by the full width at half maximum of the target absorption band; 3) mathematical modelling of the target absorption band and highlighting of rare earth elements. REEMAP was further adapted to different hyperspectral sensors (EO-1 Hyperion and EnMAP) and a new test site (Lofdal, Namibia). Additionally, the hyperspectral signatures of associated minerals were investigated to serve as proxies for the host rocks. Finally, the capacity and limitations of spectroscopic rare earth element detection approaches in general, and of the REEMAP approach specifically, were investigated and discussed. One result of this doctoral thesis is that eight rare earth oxides show robust absorption bands and can therefore be used for hyperspectral detection methods.
Additionally, the spectral signatures of iron oxides, iron-bearing sulfates, calcite and kaolinite can be used to detect metasomatic alteration zones and highlight the ore zone. One of the key results of this doctoral work is the REEMAP approach itself, which can be applied from the near-field to space and enables rare earth element mapping even in noisy images. Limiting factors are a low signal-to-noise ratio, reduced spectral resolution, overlying materials, atmospheric absorption residuals and non-optimal illumination conditions. Another key result of this doctoral thesis is the finding that the future hyperspectral EnMAP satellite (with its specifications as published in June 2015) will theoretically be capable of detecting absorption bands of erbium, dysprosium, holmium, neodymium, europium, thulium and samarium. This thesis thus presents a new methodology, REEMAP, that enables spatially extensive and rapid hyperspectral detection of rare earth elements to meet the demand for fast, extensive and efficient rare earth exploration (from the near-field to space).
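The modelling of a target absorption band (as in step 3 of REEMAP) can be sketched generically as continuum removal followed by a Gaussian fit of the absorption feature. The wavelengths and band parameters below are synthetic illustrations, not REE-specific values from the thesis:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic reflectance spectrum with one Gaussian-shaped absorption band.
wl = np.linspace(700.0, 900.0, 201)            # wavelength [nm]
true_centre, true_fwhm, true_depth = 800.0, 30.0, 0.25
sigma_true = true_fwhm / (2 * np.sqrt(2 * np.log(2)))
continuum = 0.6 + 1e-4 * (wl - 700.0)          # gently sloping background
refl = continuum * (1 - true_depth * np.exp(-((wl - true_centre) ** 2)
                                            / (2 * sigma_true ** 2)))

# Continuum removal: divide by a straight line through the band shoulders,
# then fit a Gaussian absorption model to the residual feature.
line = np.interp(wl, [wl[0], wl[-1]], [refl[0], refl[-1]])
cr = refl / line

def gauss_band(w, depth, centre, sigma):
    return 1 - depth * np.exp(-((w - centre) ** 2) / (2 * sigma ** 2))

popt, _ = curve_fit(gauss_band, wl, cr, p0=[0.1, 790.0, 20.0])
depth, centre, sigma = popt
fwhm = 2 * np.sqrt(2 * np.log(2)) * abs(sigma)  # band width from sigma
```

The fitted band depth, centre and full width at half maximum are the quantities that an absorption-band-based mapping scheme would then threshold or rank per pixel.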
The intensification of coastal floods induced by sea level rise is a serious threat to many regions in proximity to the ocean. Although severe flood events are rare, they can entail enormous damage costs, especially when built-up areas are inundated. Fortunately, the mean sea level advances slowly, leaving society enough time to adapt to the changing environment. Most commonly, this is achieved by the construction or reinforcement of flood defence measures such as dykes or sea walls, but land use planning and disaster management are also widely discussed options. Overall, although the projection of sea level rise impacts and the elaboration of adequate response strategies are among the most prominent topics in climate impact research, global damage estimates remain vague and mostly rely on the same assessment models. The thesis at hand contributes to this issue by presenting a distinctive approach which facilitates large-scale assessments as well as the comparability of results across regions. Moreover, we aim to improve the general understanding of the interplay between mean sea level rise, adaptation, and coastal flood damage.
Our undertaking is based on two basic building blocks. Firstly, we make use of macroscopic flood-damage functions, i.e. damage functions that provide the total monetary damage within a delineated region (e.g. a city) caused by a flood of certain magnitude. After introducing a systematic methodology for the automatised derivation of such functions, we apply it to a total of 140 European cities and obtain a large set of damage curves utilisable for individual as well as comparative damage assessments. By scrutinising the resulting curves, we are further able to characterise the slope of the damage functions by means of a functional model. The proposed function has in general a sigmoidal shape but exhibits a power law increase for the relevant range of flood levels and we detect an average exponent of 3.4 for the considered cities. This finding represents an essential input for subsequent elaborations on the general interrelations of involved quantities.
The second basic element of this work is extreme value theory, which is employed to characterise the occurrence of flood events and, in conjunction with a damage function, provides the probability distribution of the annual damage in the area under study. The resulting approach is highly flexible, as it allows for non-stationarity in all relevant parameters and can be easily applied to arbitrary regions, sea level scenarios, and adaptation scenarios. For instance, we find a doubling of expected flood damage in the city of Copenhagen for a rise in mean sea level of only 11 cm. By following more general considerations, we succeed in deducing surprisingly simple functional expressions to describe the damage behaviour in a given region for varying mean sea levels, changing storm intensities, and supposed protection levels. We are thus able to project future flood damage by means of a reduced set of parameters, namely the aforementioned damage function exponent and the extreme value parameters. Similar examinations are carried out to quantify the aleatory uncertainty involved in these projections. In this regard, a decrease of (relative) uncertainty with rising mean sea levels is detected. Beyond that, we demonstrate how potential adaptation measures can be assessed in terms of a cost-benefit analysis. This is exemplified by the Danish case study of Kalundborg, where amortisation times for a planned investment are estimated for several sea level scenarios and discount rates.
The gas cloud G2 is currently being tidally disrupted by the Galactic Centre super-massive black hole, Sgr A*. The region around the black hole is populated by ∼ 30 Wolf-Rayet stars, which produce strong outflows. Here we explore the possibility that gas clumps like G2 originate from the collision of stellar winds via the non-linear thin shell instability.
Earthquake clustering has proven the most useful tool for forecasting changes in seismicity rates in the short and medium term (hours to months), and efforts are currently being made to extend the scope of such models to operational earthquake forecasting. The overarching goal of the research presented in this thesis is to improve physics-based earthquake forecasts, with a focus on aftershock sequences. Physical models of triggered seismicity are based on the redistribution of stresses in the crust, coupled with the rate-and-state constitutive law proposed by Dieterich to calculate changes in seismicity rate. Models of this type are known as Coulomb rate-and-state (CRS) models. In spite of the success of the Coulomb hypothesis, CRS models have typically performed poorly in comparison to statistical ones, and they have been underrepresented in the operational forecasting context. In this thesis, I address some of these issues, and in particular these questions: (1) How can we realistically model the uncertainties and heterogeneity of the mainshock stress field? (2) What is the effect of time-dependent stresses in the postseismic phase on seismicity? I focus on two case studies from different tectonic settings: the Mw 9.0 Tohoku megathrust earthquake and the Mw 6.0 Parkfield strike-slip earthquake. I study aleatoric uncertainties using a Monte Carlo method. I find that the existence of multiple receiver faults is the most important source of intrinsic stress heterogeneity, and CRS models perform better when this variability is taken into account. Epistemic uncertainties inherited from the slip models also have a significant impact on the forecast, and I find that an ensemble model based on several slip distributions outperforms most individual models. I address the role of postseismic stresses due to aseismic slip on the mainshock fault (afterslip) and to the redistribution of stresses by previous aftershocks (secondary triggering).
I find that modeling secondary triggering improves model performance. The effect of afterslip is less clear, and difficult to assess for near-fault aftershocks due to the large uncertainties of the afterslip models. Off-fault events, on the other hand, are less sensitive to the details of the slip distribution: I find that following the Tohoku earthquake, afterslip promotes seismicity in the Fukushima region. To evaluate the performance of the improved CRS models in a pseudo-operational context, I submitted them for independent testing to a collaborative experiment carried out by CSEP for the 2010-2012 Canterbury sequence. Preliminary results indicate that physical models generally perform well compared to statistical ones, suggesting that CRS models may have a role to play in the future of operational forecasting. To facilitate efforts in this direction, and to enable future studies of earthquake triggering by time-dependent processes, I have made the code open source. In the final part of this thesis I summarize the capabilities of the program and outline technical aspects regarding performance and parallelization strategies.
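The seismicity-rate response to a Coulomb stress step that underlies CRS models is commonly written, following Dieterich (1994), as R(t) = r / [1 + (exp(-ΔCFS/Aσ) - 1) exp(-t/t_a)], where r is the background rate and t_a the aftershock relaxation time. A minimal sketch with purely illustrative parameter values (not those calibrated in the thesis):

```python
import numpy as np

def dieterich_rate(t, dcfs, a_sigma, t_a, r0=1.0):
    """Seismicity rate after a Coulomb stress step (Dieterich, 1994).

    t       : time since the stress step (same units as t_a)
    dcfs    : Coulomb stress change [MPa]
    a_sigma : constitutive parameter A*sigma [MPa]
    t_a     : aftershock relaxation time
    r0      : background seismicity rate
    """
    return r0 / (1.0 + (np.exp(-dcfs / a_sigma) - 1.0) * np.exp(-t / t_a))

t = np.logspace(-3, 2, 500)  # e.g. days after the mainshock
rate = dieterich_rate(t, dcfs=0.5, a_sigma=0.05, t_a=10.0)
```

For a positive stress step the rate jumps to roughly r0·exp(ΔCFS/Aσ) immediately after the mainshock and then decays Omori-like back towards the background rate over a few relaxation times, which is the behaviour CRS models exploit to forecast aftershocks.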
Loss of pdr-1/parkin influences Mn homeostasis through altered ferroportin expression in C. elegans
(2015)
Overexposure to the essential metal manganese (Mn) can result in an irreversible condition known as manganism that shares similar pathophysiology with Parkinson's disease (PD), including dopaminergic (DAergic) cell loss that leads to motor and cognitive impairments. However, the mechanisms behind this neurotoxicity and its relationship with PD remain unclear. Many genes confer risk for autosomal recessive, early-onset PD, including the parkin/PARK2 gene that encodes for the E3 ubiquitin ligase Parkin. Using Caenorhabditis elegans (C. elegans) as an invertebrate model that conserves the DAergic system, we previously reported significantly increased Mn accumulation in pdr-1/parkin mutants compared to wildtype (WT) animals. For the current study, we hypothesize that this enhanced accumulation is due to alterations in Mn transport in the pdr-1 mutants. While no change in mRNA expression of the major Mn importer proteins (smf-1-3) was found in pdr-1 mutants, significant downregulation in mRNA levels of the putative Mn exporter ferroportin (fpn-1.1) was observed. Using a strain overexpressing fpn-1.1 in worms lacking pdr-1, we show evidence for attenuation of several endpoints of Mn-induced toxicity, including survival, metal accumulation, mitochondrial copy number and DAergic integrity, compared to pdr-1 mutants alone. These changes suggest a novel role of pdr-1 in modulating Mn export through altered transporter expression, and provides further support of metal dyshomeostasis as a component of Parkinsonism pathophysiology.
Until Gaia improves on the Hipparcos survey, direct determination of the distance via parallax is only possible for γ Vel, but the analysis of the cluster or association to which a WR star belongs can give distances with 50% to 10% accuracy. The list of Galactic clusters, associations and cluster/association candidates has grown significantly in the last decade thanks to the numerous deep, high-resolution surveys of the Milky Way. In this work, we revisit the fundamental parameters of known clusters with WR stars, and we present the search for new ones. All our work is based on the catalogues from the VVV (VISTA telescope) and UKIDSS (UKIRT telescope) near-infrared surveys. Finally, the relations between the fundamental parameters of clusters with WR stars are explored.
Carbon-rich Wolf-Rayet stars are efficient carbon dust makers. Despite the strong evidence for dust formation in these objects provided by infrared thermal emission from dust, the routes to nucleation and condensation and the physical conditions required for dust production are still poorly understood. We discuss here the potential routes to carbon dust and the possible locations conducive to dust formation in the colliding winds of WC binaries.
The Technology Proficiency Self-Assessment (TPSA) questionnaire has been used for 15 years in the USA and other nations as a self-efficacy measure for proficiencies fundamental to effective technology integration in the classroom learning environment. Internal consistency reliabilities for each of the five-item scales have typically ranged from .73 to .88 for preservice or inservice technology-using teachers. Due to changing technologies used in education, researchers sought to renovate partially obsolete items and extend self-efficacy assessment to new areas, such as social media and mobile learning. Analysis of 2014 data gathered on a new, 34-item version of the TPSA indicates that the four established areas of email, World Wide Web (WWW), integrated applications, and teaching with technology continue to form consistent scales with reliabilities ranging from .81 to .93, while the 14 new items gathered to represent emerging technologies and media separate into two scales, each with internal consistency reliabilities greater than .9. The renovated TPSA is deemed worthy of continued use in the teaching with technology context.
Modelling the transfer of supraglacial meltwater to the bed of Leverett Glacier, Southwest Greenland
(2015)
Meltwater delivered to the bed of the Greenland Ice Sheet is a driver of variable ice motion through changes in effective pressure and enhanced basal lubrication. Ice surface velocities have been shown to respond rapidly both to meltwater production at the surface and to drainage of supraglacial lakes, suggesting efficient transfer of meltwater from the supraglacial to subglacial hydrological systems. Although considerable effort is currently being directed towards improved modelling of the controlling surface and basal processes, modelling the temporal and spatial evolution of the transfer of melt to the bed has received less attention. Here we present the results of spatially distributed modelling for prediction of moulins and lake drainages on the Leverett Glacier in Southwest Greenland. The model is run for the 2009 and 2010 ablation seasons, and for future increased melt scenarios. The temporal pattern of modelled lake drainages is qualitatively comparable with that documented from analyses of repeat satellite imagery. The modelled timings and locations of delivery of meltwater to the bed also match well with observed temporal and spatial patterns of ice surface speed-ups. This is particularly true for the lower catchment (< 1000 m a.s.l.), where both the model and observations indicate that the development of moulins is the main mechanism for the transfer of surface meltwater to the bed. At higher elevations (e.g. 1250-1500 m a.s.l.) the development and drainage of supraglacial lakes becomes increasingly important. At these higher elevations, the delay between modelled melt generation and subsequent delivery of melt to the bed matches the observed delay between the peak air temperatures and subsequent velocity speed-ups, while the instantaneous transfer of melt to the bed in a control simulation does not.
Although both moulins and lake drainages are predicted to increase in number under future warmer climate scenarios, lake drainages play an increasingly important role both in expanding the area over which melt accesses the bed and in enabling a greater proportion of surface melt to reach the bed.
In this thesis we study reciprocal classes of Markov chains. Given a continuous time Markov chain on a countable state space, acting as reference dynamics, the associated reciprocal class is the set of all probability measures on path space that can be written as a mixture of its bridges. These processes possess a conditional independence property that generalizes the Markov property, and evolved from an idea of Schrödinger, who wanted to obtain a probabilistic interpretation of quantum mechanics.
Associated to a reciprocal class is a set of reciprocal characteristics: space-time functions that determine the reciprocal class. We compute these characteristics explicitly and divide them into two main families, arc characteristics and cycle characteristics. As a byproduct, we obtain an explicit criterion to check when two different Markov chains share their bridges.
Starting from the characteristics, we offer two different descriptions of the reciprocal class, including its non-Markov probabilities. The first is based on a pathwise approach and the second on short-time asymptotics. With the first approach one produces a family of functional equations whose only solutions are precisely the elements of the reciprocal class. These equations are integration-by-parts formulas on path space, associated with derivative operators which perturb the paths by means of the addition of random loops. Several geometrical tools are employed to construct such formulas. The problem of obtaining sharp characterizations is also considered, showing some interesting connections with discrete geometry. Examples of such formulas are given in the framework of counting processes and random walks on Abelian groups, where the set of loops has a group structure.
In addition to this global description, we propose a second approach by looking at the short time behavior of a reciprocal process. In the same way as the Markov property and short time expansions of transition probabilities characterize Markov chains, we show that a reciprocal class is characterized by imposing the reciprocal property and two families of short time expansions for the bridges. Such local approach is suitable to study reciprocal processes on general countable graphs. As application of our characterization, we considered several interesting graphs, such as lattices, planar
graphs, the complete graph, and the hypercube.
Finally, we obtain some first results about concentration of measure implied by lower bounds on the reciprocal characteristics.
In this work we study reciprocal classes of Markov walks on graphs. Given a continuous time reference Markov chain on a graph, its reciprocal class is the set of all probability measures which can be represented as a mixture of the bridges of the reference walk. We characterize reciprocal classes with two different approaches. With the first approach we characterize the class as the set of solutions to duality formulae on path space, where the differential operators have the interpretation of the addition of infinitesimal random loops to the paths of the canonical process. With the second approach we look at short-time asymptotics of bridges. Both approaches allow an explicit computation of the reciprocal characteristics, which are divided into two families, the loop characteristics and the arc characteristics. They are those specific functionals of the generator of the reference chain which determine its reciprocal class. We look at specific examples such as Cayley graphs, the hypercube and planar graphs. Finally we establish the first concentration of measure results for the bridges of a continuous time Markov chain based on the reciprocal characteristics.
Processes having the same bridges as a given reference Markov process constitute its reciprocal class. In this paper we study the reciprocal class of a continuous time random walk with values in a countable Abelian group, we compute explicitly its reciprocal characteristics and we present an integral characterization of it. Our main tool is a new iterated version of the celebrated Mecke formula from point process theory, which allows us to study, as a transformation on path space, the addition of random loops. Thanks to the lattice structure of the set of loops, we even obtain a sharp characterization. Finally, we discuss several examples to illustrate the richness of reciprocal classes. We observe how their structure depends on the algebraic properties of the underlying group.
Concluding Remarks
(2015)
cis-Diamminedichloroplatinum(II) (Cisplatin) is one of the most important and frequently used cytostatic drugs for the treatment of various solid tumors. Herein, a laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) method incorporating a fast and simple sample preparation protocol was developed for the elemental mapping of Cisplatin in the model organism Caenorhabditis elegans (C. elegans). The method allows imaging of the spatially-resolved elemental distribution of platinum in the whole organism with respect to the anatomic structure in L4 stage worms at a lateral resolution of 5 μm. In addition, a dose- and time-dependent Cisplatin uptake was corroborated quantitatively by a total reflection X-ray fluorescence spectroscopy (TXRF) method, and the elemental mapping indicated that Cisplatin is located in the intestine and in the head of the worms. Better understanding of the distribution of Cisplatin in this well-established model organism will be instrumental in deciphering Cisplatin toxicity and pharmacokinetics. Since the cytostatic effect of Cisplatin is based on binding DNA by forming intra- and interstrand crosslinks, the response of poly(ADP-ribose) metabolism enzyme 1 (pme-1) deletion mutants to Cisplatin was also examined. Loss of pme-1, which is the C. elegans ortholog of human poly(ADP-ribose) polymerase 1 (PARP-1), led to a disturbed DNA damage response. With respect to survival and brood size, pme-1 deletion mutants were more sensitive to Cisplatin than wild-type worms, while Cisplatin uptake was indistinguishable.
An overview of the known Wolf-Rayet (WR) population of the Milky Way is presented, including a brief overview of historical catalogues and recent advances based on infrared photometric and spectroscopic observations resulting in the current census of 642 (v1.13 online catalogue). The observed distribution of WR stars is considered with respect to known star clusters, given that ≤20% of WR stars in the disk are located in clusters. WN stars outnumber WC stars at all galactocentric radii, while early-type WC stars are strongly biased against the inner Milky Way. Finally, recent estimates of the global WR population in the Milky Way are reassessed, with 1,200±100 estimated, such that the current census may be 50% complete. A characteristic WR lifetime of 0.25 Myr is inferred for an initial mass threshold of 25 M⊙.
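The completeness figure follows directly from the two numbers quoted above; a quick sanity check, using only the abstract's own values:

```python
# Numbers taken from the abstract: 642 WR stars in the v1.13 catalogue,
# against a global population estimate of 1200 +/- 100.
known = 642
estimated = 1200
completeness = known / estimated
print(f"census completeness: {completeness:.1%}")  # 53.5%, i.e. roughly 50% complete
```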
Computational thinking is a fundamental skill set that is learned by studying Informatics and ICT. We argue that its core ideas can be introduced in an inspiring and integrated way to both teachers and students using fun and contextually rich cs4fn ‘Computer Science for Fun’ stories combined with ‘unplugged’ activities including games and magic tricks. We also argue that understanding people is an important part of computational thinking. Computational thinking can be fun for everyone when taught in kinaesthetic ways away from technology.
KEYCIT 2014
(2015)
In our rapidly changing world it is increasingly important not only to be an expert in a chosen field of study but also to be able to respond to developments, master new approaches to solving problems, and fulfil changing requirements in the modern world and in the job market. In response to these needs key competencies in understanding, developing and using new digital technologies are being brought into focus in school and university programmes. The IFIP TC3 conference "KEYCIT – Key Competences in Informatics and ICT (KEYCIT 2014)" was held at the University of Potsdam in Germany from July 1st to 4th, 2014 and addressed the combination of key competencies, Informatics and ICT in detail. The conference was organized into strands focusing on secondary education, university education and teacher education (organized by IFIP WGs 3.1 and 3.3) and provided a forum to present and to discuss research, case studies, positions, and national perspectives in this field.
The paper discusses the issue of supporting informatics (computer science) education through competitions for lower and upper secondary school students (8–19 years old). Competitions play an important role for learners as a source of inspiration, innovation, and attraction. Having run contests in informatics for school students for many years, we have noticed that students consider the contest experience very engaging and exciting as well as a learning experience. A contest is an excellent instrument for involving students in problem-solving activities. An overview of the infrastructure and development of an informatics contest from the international level to the national one (the Bebras contest on informatics and computer fluency, which originated in Lithuania) is presented. The performance of Bebras contests in 23 countries over the last 10 years has shown an unexpectedly and unusually high acceptance by school students and teachers. Many thousands of students have participated and gained valuable input in addition to their regular informatics lessons at school. In the paper, the main attention is paid to the developed tasks and to an analysis of students’ task-solving results in Lithuania.
Messianic Jews are Jewish individuals who syncretically accept both the messianic character of Jesus and the ritual cultic practices provided by traditional Judaism. The present article examines the emergence of this marginal syncretic movement in contemporary Israel, and maintains that it represents a radical development in the bimillenary history of Jewish-Christian relations. This article offers a general introduction to the notion of Jewish-Christian identity, a brief history of the first group of Messianic Jews in the Land of Israel, the cultural influence and religious syncretism of the Messianic Jews in modern Israel, and, finally, the implication that Messianic Judaism is supposed to become the new paradigm within the various branches of Judaism.
The recently proposed global monsoon hypothesis interprets monsoon systems as part of one global-scale atmospheric overturning circulation, implying a connection between the regional monsoon systems and an in-phase behaviour of all northern hemispheric monsoons on annual timescales (Trenberth et al., 2000). Whether this concept can be applied to past climates and variability on longer timescales is still under debate, because the monsoon systems exhibit different regional characteristics such as different seasonality (i.e. onset, peak and withdrawal). To investigate the interconnection of different monsoon systems during the pre-industrial Holocene, five transient global climate model simulations have been analysed with respect to the rainfall trend and variability in different sub-domains of the Afro-Asian monsoon region. Our analysis suggests that on millennial timescales with varying orbital forcing, the monsoons do not behave as a tightly connected global system. According to the models, the Indian and North African monsoons are coupled, showing similar rainfall trend and moderate correlation in centennial rainfall variability in all models. The East Asian monsoon changes independently during the Holocene. The dissimilarities in the seasonality of the monsoon sub-systems lead to a stronger response of the North African and Indian monsoon systems to the Holocene insolation forcing than of the East Asian monsoon and affect the seasonal distribution of Holocene rainfall variations. Within the Indian and North African monsoon domain, precipitation solely changes during the summer months, showing a decreasing Holocene precipitation trend. In the East Asian monsoon region, the precipitation signal is determined by an increasing precipitation trend during spring and a decreasing precipitation change during summer, partly balancing each other.
A synthesis of reconstructions and the model results do not reveal an impact of the different seasonality on the timing of the Holocene rainfall optimum in the different sub-monsoon systems. Rather, they indicate locally inhomogeneous rainfall changes and show that single palaeo-records should not be used to characterise the rainfall change and monsoon evolution for entire monsoon sub-systems.
Based on niche theory, closely related and morphologically similar species are not predicted to coexist due to overlap in resource and habitat use. Local assemblages of bats often contain cryptic taxa, which co-occur despite notable similarities in morphology and ecology. In two different habitat types on Madagascar, we measured levels of stable carbon and nitrogen isotopes in hair (n = 103) and faeces (n = 57) of cryptic Vespertilionidae taxa to indirectly examine whether fine-grained trophic niche differentiation explains their coexistence. In the dry deciduous forest (Kirindy), six sympatric species ranged over 6.0‰ in δ15N, i.e. two trophic levels, and 4.2‰ in δ13C, with community means of 11.3‰ in δ15N and −21.0‰ in δ13C. In the mesic forest (Antsahabe), three sympatric species ranged over one trophic level (δ15N: 2.4‰, δ13C: 1.0‰), with community means of 8.0‰ in δ15N and −21.7‰ in δ13C. In both communities, multivariate analyses and residual permutation of Euclidian distances in δ13C–δ15N bi-plots revealed distinct stable isotope signatures and species separation for the hair samples among coexisting Vespertilionidae. Intraspecific variation in faecal and hair stable isotopes did not indicate that seasonal migration might relax competition and thereby facilitate the local co-occurrence of sympatric taxa.
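The "two trophic levels" reading of the 6.0‰ δ15N span rests on the commonly assumed enrichment of roughly 3.4‰ per trophic step; that enrichment factor is a literature convention, not a value reported in this abstract:

```python
# Convert a span in delta-15N (in per mil) into approximate trophic levels,
# assuming the widely used enrichment of ~3.4 per mil per trophic level
# (a literature convention, not a value from this study).
TROPHIC_ENRICHMENT_PER_MIL = 3.4

def trophic_levels(delta_n15_span):
    return delta_n15_span / TROPHIC_ENRICHMENT_PER_MIL

print(trophic_levels(6.0))  # dry forest community span: ~1.8, i.e. about two levels
print(trophic_levels(2.4))  # mesic forest community span: ~0.7, i.e. about one level
```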
We have investigated the electrochemical, spectroscopic and electroluminescent properties of a family of aza-aromatic complexes of ruthenium of type [RuII(bpy/phen)2(L)]2+ (4d6) with three isomeric L ligands, where bpy = 2,2′-bipyridine, phen = 1,10-phenanthroline, and the L ligands are 3-(2-pyridyl)[1,2,4]triazolo[1,5-a]pyridine (L1), 3-(2-pyridyl)[1,2,3]triazolo[1,5-a]pyridine (L2) and 2-(2-pyridyl)[1,2,4]triazolo[1,5-a]pyridine (L3). The complexes display two bands in the visible region near 410–420 and 440–450 nm. The complexes are diamagnetic and show well-defined 1H NMR lines. They are electroactive in acetonitrile solution and exhibit a well-defined RuII/RuIII couple near 1.20 to 1.30 V and ligand-reduction waves near −1.40 to −1.50 V versus the saturated calomel electrode (SCE). The solutions are also luminescent, with peaks near 600 nm. All the complexes are electroluminescent, with peaks lying near 580 nm. L1- and L3-ligated complexes with two bpy co-ligands show weak photoluminescence (PL) but stronger electroluminescence (EL) compared to the corresponding L2-ligated analogues.
The role of knowledge in the policy process remains a central theoretical puzzle in policy analysis and political science. This article argues that an important yet missing piece of this puzzle is the systematic exploration of the political use of policy knowledge. While much of the recent debate has focused on the question of how the substantive use of knowledge can improve the quality of policy choices, our understanding of the political use of knowledge and its effects in the policy process has remained deficient in key respects. A revised conceptualization of the political use of knowledge is introduced that emphasizes how conflicting knowledge can be used to contest given structures of policy authority. This allows the analysis to differentiate between knowledge creep and knowledge shifts as two distinct types of knowledge effects in the policy process. While knowledge creep is associated with incremental policy change within existing policy structures, knowledge shifts are linked to more fundamental policy change in situations when the structures of policy authority undergo some level of transformation. The article concludes by identifying characteristics of the administrative structure of policy systems or sectors that make knowledge shifts more or less likely.
What are the fundamental laws for the adsorption of charged polymers onto oppositely charged surfaces, for convex, planar, and concave geometries? This question is at the heart of surface coating applications, various complex formation phenomena, as well as in the context of cellular and viral biophysics. It has been a long-standing challenge in theoretical polymer physics; for realistic systems the quantitative understanding is however often achievable only by computer simulations. In this study, we present the findings of such extensive Monte-Carlo in silico experiments for polymer–surface adsorption in confined domains. We study the inverted critical adsorption of finite-length polyelectrolytes in three fundamental geometries: planar slit, cylindrical pore, and spherical cavity. The scaling relations extracted from simulations for the critical surface charge density σc, defining the adsorption–desorption transition, are in excellent agreement with our analytical calculations based on the ground-state analysis of the Edwards equation. In particular, we confirm the magnitude and scaling of σc for the concave interfaces versus the Debye screening length 1/κ and the extent of confinement a for these three interfaces for small κa values. For large κa the critical adsorption condition approaches the known planar limit. The transition between the two regimes takes place when the radius of surface curvature or half of the slit thickness a is of the order of 1/κ. We also rationalize how the σc(κ) dependence gets modified for semi-flexible versus flexible chains under external confinement. We examine the implications of the chain length for critical adsorption, an effect often hard to tackle theoretically, putting an emphasis on polymers inside attractive spherical cavities. The applications of our findings to some biological systems are discussed, for instance the adsorption of nucleic acids onto the inner surfaces of cylindrical and spherical viral capsids.
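The crossover between the two regimes can be summarised in a toy classifier; the hard thresholds around κa = 1 and the regime names are my simplification of the smooth transition described in the abstract:

```python
def adsorption_regime(a, kappa, width=0.1):
    """Classify the critical-adsorption regime for confinement scale a
    (curvature radius or slit half-width) and inverse Debye length kappa.
    Planar-like behaviour for kappa*a >> 1, confinement-dominated for
    kappa*a << 1; the sharp cuts here are illustrative only."""
    ka = kappa * a
    if ka > 1 + width:
        return "planar-like"
    if ka < 1 - width:
        return "confinement-dominated"
    return "crossover"

print(adsorption_regime(a=10.0, kappa=1.0))  # ka = 10  -> planar-like
print(adsorption_regime(a=0.2, kappa=1.0))   # ka = 0.2 -> confinement-dominated
print(adsorption_regime(a=1.0, kappa=1.0))   # ka = 1   -> crossover
```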
Messenger RNA acts as an informational molecule between DNA and translating ribosomes. Emerging evidence places mRNA in central cellular processes beyond its major function as an informational entity. Although individual examples show that specific structural features of mRNA regulate translation and transcript stability, their role and function throughout the bacterial transcriptome remains unknown. Combining three sequencing approaches to provide a high resolution view of global mRNA secondary structure, translation efficiency and mRNA abundance, we unraveled structural features in E. coli mRNA with implications in translation and mRNA degradation. A poorly structured site upstream of the coding sequence serves as an additional unspecific binding site of the ribosomes, and the degree of its secondary structure propensity negatively correlates with gene expression. Secondary structures within coding sequences are highly dynamic and influence translation only within a very small subset of positions. A secondary structure upstream of the stop codon is enriched in genes terminated by the UAA codon, with likely implications in translation termination. The global analysis further substantiates a common recognition signature of RNase E to initiate endonucleolytic cleavage. This work determines for the first time the E. coli RNA structurome, highlighting the contribution of mRNA secondary structure as a direct effector of a variety of processes, including translation and mRNA degradation.
A lot has been published about the competencies needed by students in the 21st century (Ravenscroft et al., 2012). However, equally important are the competencies needed by educators in the new era of digital education. We review the key competencies for educators in light of the new methods of teaching and learning proposed by Massive Open Online Courses (MOOCs) and their on-campus counterparts, Small Private Online Courses (SPOCs).
In this review, I discuss the suitability of massive star progenitors, evolved in isolation or in interacting binaries, for the production of observed supernovae (SNe) IIb, Ib, and Ic. These SN types can be explained through variations in composition. The critical need for non-thermal effects to produce He I lines favours low-mass He-rich ejecta (in which ^56Ni can be more easily mixed with He) for the production of SNe IIb/Ib, which thus may arise preferentially from moderate-mass donors in interacting binaries. SNe Ic may instead arise from higher-mass progenitors, He-poor or not, because their larger CO cores prevent efficient non-thermal excitation of He I lines. However, current single star evolution models tend to produce Wolf-Rayet (WR) stars at death that have a final mass of > 10 M⊙. Single WR star explosion models produce ejecta that are too massive to match the observed light curve widths and rise times of SNe IIb/Ib/Ic, unless their kinetic energy is systematically far greater than the canonical value of 10^51 erg. Future work is needed to evaluate the energy/mass degeneracy in light curve properties. Alternatively, greater mass loss during the WR phase, perhaps in the form of eruptions, as evidenced in SNe Ibn, may reduce the final WR mass. If viable, such explosions would nonetheless favour a SN Ic, not a SN Ib.
The Strange-tailed Tyrant Alectrurus risora (Aves: Tyrannidae) is an endemic species of southern South American grasslands that suffered a 90% reduction of its original distribution due to habitat transformation. This has led the species to be classified as globally Vulnerable. By the beginning of the last century, populations were partially migratory and moved south during the breeding season. Currently, the main breeding population inhabits the Ibera wetlands in the province of Corrientes, north-east Argentina, where it is resident all year round. There are two remaining small populations in the province of Formosa, north-east Argentina, and in southern Paraguay, which are separated from the main population by the Parana-Paraguay River and its continuous riverine forest habitat. The populations of Corrientes and Formosa are separated by 300 km and the grasslands between populations are non-continuous due to habitat transformation. We used mtDNA sequences and eight microsatellite loci to test whether there was evidence of genetic isolation between the Argentinean populations. We found no evidence of genetic structure between populations (ΦST = 0.004, P = 0.32; FST = 0.01, P = 0.06), which can be explained either by retained ancestral polymorphism or by dispersal between populations. We found no evidence for a recent demographic bottleneck in nuclear loci. Our results indicate that these populations could be managed as a single conservation unit on a regional scale. Conservation actions should be focused on preserving the remaining network of areas with natural grasslands to guarantee reproduction and dispersal and to prevent further decline of populations.
Let’s talk about CS!
(2015)
To communicate about a science is the most important key competence in education for any science. Without communication we cannot teach, so teachers should reflect carefully on the language they use in class. But the language students and teachers use to communicate about their CS courses is very heterogeneous, inconsistent and deeply influenced by tool names. There is a considerable lack of research and discussion in CS education regarding terminology and the role of concepts and tools in our science. We do not have a consistent, agreed set of terminology that is helpful for learning our science. This makes it nearly impossible to do research on CS competencies as long as we have not agreed on the names we use to describe them. This workshop is intended to provide room for discussion and first ideas for future research in this field.
Background
Mortality is a main driver in zooplankton population biology but it is poorly constrained in models that describe zooplankton population dynamics, food web interactions and nutrient dynamics. Mortality due to non-predation factors is often ignored even though anecdotal evidence of non-predation mass mortality of zooplankton has been reported repeatedly. One way to estimate non-predation mortality rate is to measure the removal rate of carcasses, for which sinking is the primary removal mechanism especially in quiescent shallow water bodies.
Objectives and Results
We used sediment traps to quantify in situ carcass sinking velocity and non-predation mortality rate on eight consecutive days in 2013 for the cladoceran Bosmina longirostris in the oligo-mesotrophic Lake Stechlin; the outcomes were compared against estimates derived from in vitro carcass sinking velocity measurements and an empirical model correcting in vitro sinking velocity for turbulence resuspension and microbial decomposition of carcasses. Our results show that the latter two approaches produced unrealistically high mortality rates of 0.58–1.04 d⁻¹, whereas the sediment trap approach, when used properly, yielded a mortality rate estimate of 0.015 d⁻¹, which is more consistent with concurrent population abundance data and comparable to physiological death rates from the literature.
Ecological implications
Zooplankton carcasses may be exposed to water column microbes for days before entering the benthos; therefore, non-predation mortality affects not only zooplankton population dynamics but also microbial and benthic food webs. This would be particularly important for carbon and nitrogen cycles in systems where recurring mid-summer decline of zooplankton population due to non-predation mortality is observed.
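A minimal sketch of the sediment-trap estimate described above; the formula (areal carcass deposition rate divided by live areal abundance) and all numbers are my illustrative reading of the approach, not values taken from the study:

```python
def non_predation_mortality(carcass_flux, live_abundance):
    """First-order non-predation mortality rate in d^-1.
    carcass_flux: carcasses collected per m^2 of trap per day;
    live_abundance: live individuals per m^2 in the overlying water column."""
    return carcass_flux / live_abundance

# Hypothetical values chosen to reproduce the order of magnitude reported above:
rate = non_predation_mortality(carcass_flux=150.0, live_abundance=10_000.0)
print(f"{rate:.3f} d^-1")  # 0.015 d^-1
```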
Biological materials, in addition to having remarkable physical properties, can also change shape and volume. These shape and volume changes allow organisms to form new tissue during growth and morphogenesis, as well as to repair and remodel old tissues. In addition, shape or volume changes in an existing tissue can lead to useful motion or force generation (actuation) that may even still function in the dead organism, as in the well-known example of the hygroscopic opening or closing behaviour of the pine cone. Both growth and actuation of tissues are mediated, in addition to biochemical factors, by the physical constraints of the surrounding environment and the architecture of the underlying tissue. This habilitation thesis describes biophysical studies carried out over the past years on growth- and swelling-mediated shape changes in biological systems. These studies use a combination of theoretical and experimental tools to attempt to elucidate the physical mechanisms governing geometry-controlled tissue growth and geometry-constrained tissue swelling. It is hoped that, in addition to helping understand fundamental processes of growth and morphogenesis, ideas stemming from such studies can also be used to design new materials for medicine and robotics.
Using a code that employs a self-consistent method for computing the effects of photoionization on circumstellar gas dynamics, we model the formation of wind-driven nebulae around massive Wolf-Rayet (W-R) stars. Our algorithm incorporates a simplified model of the photoionization source, computes the fractional ionization of hydrogen due to the photoionizing flux and recombination, and determines self-consistently the energy balance due to ionization, photo-heating and radiative cooling. We take into account changes in stellar properties and mass loss over the star's evolution. Our multi-dimensional simulations clearly reveal the presence of strong ionization front instabilities. Using various X-ray emission models, and abundances consistent with those derived for W-R nebulae, we compute the X-ray flux and spectra from our wind bubble models. We show the evolution of the X-ray spectral features with time over the evolution of the star, taking the absorption of the X-rays by the ionized bubble into account. Our simulated X-ray spectra compare reasonably well with observed spectra of Wolf-Rayet bubbles. They suggest that X-ray nebulae around massive stars may not be easily detectable, consistent with observations.
Graph transformation systems are a powerful formal model to capture model transformations or systems with infinite state space, among others. However, this expressive power comes at the cost of rather limited automated analysis capabilities. The general case of unboundedly many initial graphs or infinite state spaces is only supported by approaches with rather limited scalability or expressiveness. In this report we improve an existing approach for the automated verification of inductive invariants for graph transformation systems. By employing partial negative application conditions to represent and check many alternative conditions in a more compact manner, we can check examples with rules and constraints of substantially higher complexity. We also substantially extend the expressive power by supporting more complex negative application conditions and provide higher accuracy by employing advanced implication checks. The improvements are evaluated and compared with another applicable tool by considering three case studies.
Water resources from Central Asia’s mountain regions have a high relevance for the water supply of the water scarce lowlands. A good understanding of the water cycle in these mountain regions is therefore needed to develop water management strategies. Hydrological modeling helps to improve our knowledge of the regional water cycle, and it can be used to gain a better understanding of past changes or estimate future hydrologic changes in view of projected changes in climate. However, due to the scarcity of hydrometeorological data, hydrological modeling for mountain regions in Central Asia involves large uncertainties.
Addressing this problem, the first aim of this thesis was to develop hydrological modeling approaches that can increase the credibility of hydrological models in data-sparse mountain regions. This was achieved by using additional data from remote sensing and atmospheric modeling. It was investigated whether spatial patterns from downscaled reanalysis data can be used for the interpolation of station-based precipitation data. This approach was compared to other precipitation estimates using a hydrologic evaluation based on hydrological modeling and a comparison of simulated and observed discharge, which demonstrated a generally good performance of this method. The study further investigated the value of satellite-derived snow cover data for model calibration. Trade-offs between good model performance in terms of discharge and in terms of snow cover were explicitly evaluated using a multi-objective optimization algorithm, and the results were contrasted with single-objective calibration and Monte Carlo simulations. The study clearly shows that the additional use of snow cover data improved the internal consistency of the hydrological model. In this context, it was further investigated for the first time how many snow cover scenes are required for hydrological model calibration.
The second aim of this thesis was the application of the hydrological model in order to investigate the causes of observed streamflow increases in two headwater catchments of the Tarim River over the recent decades. This simulation-based approach for trend attribution was complemented by a data-based approach. The hydrological model was calibrated to discharge and glacier mass balance data and considered changes in glacier geometry over time. The results show that in the catchment with a lower glacierization, increasing precipitation and temperature both contributed to the streamflow increases, while in the catchment with a stronger glacierization, increasing temperatures were identified as the dominant driver.
In the last 10 years, the governments of most of the German Länder initiated administrative reforms. All of these ventures included the municipalization of substantial sets of tasks. As elsewhere, governments argue that service delivery by communes is more cost-efficient, effective and responsive. Empirical evidence to back these claims is inconsistent at best: a considerable number of case studies cast doubt on unconditionally positive appraisals. Decentralization effects seem to vary depending on the performance dimension and task considered. However, questions of generalizability arise as these findings have not yet been backed by more 'objective' archival data. We provide empirical evidence on decentralization effects for two different policy fields based on two studies. Thereby, the article presents alternative avenues for research on decentralization effects and matches the theoretical expectations on decentralization effects with more robust results. The analysis confirms that overly positive assertions concerning decentralization effects are only partially warranted. As previous case studies suggested, effects have to be looked at in a much more differentiated way, including starting conditions and distinguishing between the various relevant performance dimensions and policy fields.
There is growing recognition of strong periglacial control on bedrock erosion in mountain landscapes, including the shaping of low-relief surfaces at high elevations (summit flats). But, as yet, the hypothesis that frost action was crucial to the assumed Late Cenozoic rise in erosion rates remains compelling and untested. Here we present a landscape evolution model incorporating two key periglacial processes - regolith production via frost cracking and sediment transport via frost creep - which together are harnessed to variations in temperature and the evolving thickness of sediment cover. Our computational experiments time-integrate the contribution of frost action to shaping mountain topography over million-year timescales, with the primary and highly reproducible outcome being the development of flattish or gently convex summit flats. A simple scaling of temperature to marine δ18O records spanning the past 14 Myr indicates that the highest summit flats in mid- to high-latitude mountains may have formed via frost action prior to the Quaternary. We suggest that deep cooling in the Quaternary accelerated mechanical weathering globally by significantly expanding the area subject to frost. Further, the inclusion of subglacial erosion alongside periglacial processes in our computational experiments points to alpine glaciers increasing the long-term efficiency of frost-driven erosion by steepening hillslopes.
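A toy counterpart of the creep component is linear diffusion of topography. This sketch (grid, diffusivity, and the initial triangular ridge are invented for illustration, and it omits the frost-cracking regolith control that the actual model includes) shows how sustained creep rounds a sharp crest into a gently convex summit:

```python
import numpy as np

def diffuse_hillslope(z, dx, kappa, dt, steps):
    """Explicit finite differences for dz/dt = kappa * d2z/dx2, the
    standard minimal model of creep-dominated hillslope transport.
    Boundary elevations are held fixed."""
    z = z.copy()
    r = kappa * dt / dx**2  # must be <= 0.5 for numerical stability
    for _ in range(steps):
        z[1:-1] += r * (z[2:] - 2.0 * z[1:-1] + z[:-2])
    return z

# Sharp triangular ridge, 1 km across and 100 m high (hypothetical numbers):
x = np.linspace(0.0, 1000.0, 101)
z0 = 100.0 - 0.2 * np.abs(x - 500.0)
z = diffuse_hillslope(z0, dx=10.0, kappa=0.01, dt=1000.0, steps=5000)
# The crest is lowered and rounded; the summit becomes gently convex.
print(round(float(z0.max()), 1), round(float(z.max()), 1))
```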
Business Process Management has become an integral part of modern organizations in the private and public sector for improving their operations. In the course of Business Process Management efforts, companies and organizations assemble large process model repositories containing many hundreds or thousands of business process models that carry a large amount of information. With the advent of such large business process model collections, new challenges arise, such as structuring and managing large numbers of process models, maintaining them, and assuring their quality.
These challenges are addressed by business process architectures, which have been introduced for organizing and structuring business process model collections. A variety of business process architecture approaches have been proposed that align business processes along aspects of interest, e.g., goals, functions, or objects. They provide a high-level categorization of single processes while ignoring their interdependencies, thus hiding valuable information. The production of goods or the delivery of services is often realized by a complex system of interdependent business processes. Hence, taking a holistic view of business process interdependencies becomes a major necessity to organize, analyze, and assess the impact of their re-/design. Visualizing business process interdependencies reveals hidden and implicit information in a process model collection.
In this thesis, we present a novel Business Process Architecture approach for representing and analyzing business process interdependencies on an abstract level. We propose a formal definition of our Business Process Architecture approach, design correctness criteria, and develop analysis techniques for assessing their quality. We describe a methodology for applying our Business Process Architecture approach top-down and bottom-up. This includes techniques for Business Process Architecture extraction from, and decomposition to, process models while considering consistency issues between the business process architecture and process model levels. Using our extraction algorithm, we present a novel technique to identify and visualize data interdependencies in Business Process Data Architectures. Our Business Process Architecture approach provides business process experts, managers, and other users of a process model collection with an overview that allows them to reason about a large set of process models and to understand and analyze their interdependencies more easily. In this regard, we evaluated our Business Process Architecture approach in an experiment and provide implementations of selected techniques.
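The idea behind such data interdependencies can be sketched in a few lines. The representation and names below are our own simplification, not the thesis's extraction algorithm: two processes are connected whenever one writes a data object the other reads.

```python
# Hypothetical sketch: derive data interdependencies between processes
# from the read/write sets of each process (not the thesis's algorithm).

def data_interdependencies(processes):
    """processes: {name: {"reads": set, "writes": set}}.
    Returns sorted (producer, data_object, consumer) triples where one
    process writes a data object that another process reads."""
    edges = []
    for producer, io_p in processes.items():
        for consumer, io_c in processes.items():
            if producer == consumer:
                continue
            for obj in io_p["writes"] & io_c["reads"]:
                edges.append((producer, obj, consumer))
    return sorted(edges)

procs = {
    "Order Handling": {"reads": {"order"},              "writes": {"invoice"}},
    "Billing":        {"reads": {"invoice"},            "writes": {"payment"}},
    "Shipping":       {"reads": {"order", "payment"},   "writes": set()},
}
edges = data_interdependencies(procs)
# e.g. Order Handling -> invoice -> Billing
```

The resulting edge list is exactly the kind of overview structure that can be visualized to reveal implicit dependencies in a process model collection.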
Social networks are currently at the forefront of tools that lend themselves to Personal Learning Environments (PLEs). This study aimed to observe how students perceived PLEs, what they believed were the integral components of social presence when using Facebook as part of a PLE, and to describe students' preferences for types of interactions when using Facebook as part of their PLE. The study used mixed methods to analyze the perceptions of graduate and undergraduate students on the use of social networks, more specifically Facebook, as a learning tool. Fifty surveys were returned, representing a 65% response rate. Survey questions included both closed and open-ended questions. Findings suggested that even though students rated themselves relatively well in having the requisite technology skills, and 94% of students used Facebook primarily for social purposes, they were hesitant to transfer these skills to academic use because of privacy concerns, the belief that other platforms could fulfil the same purpose, and not seeing the validity of using Facebook to establish social presence. At odds with these beliefs is that, when asked to identify strategies on Facebook that enabled social presence in academic work, the majority of students identified strategies in five categories that led to the establishment of social presence on Facebook during their coursework.
Faunal remains from Palaeolithic sites are important genetic sources to study preglacial and postglacial populations and to investigate the effect of climate change and human impact. Post mortem decay, resulting in fragmented and chemically modified DNA, is a key obstacle in ancient DNA analyses. In the absence of reliable methods to determine the presence of endogenous DNA in sub-fossil samples, temporal and spatial surveys of DNA survival on a regional scale may help to estimate the potential of faunal remains from a given time period and region. We therefore investigated PCR amplification success, PCR performance and post mortem damage in c. 47,000 to c. 12,000-year-old horse remains from 14 Palaeolithic sites along the Swiss Jura Mountains in relation to depositional context, tissue type, storage time and age, potentially influencing DNA preservation. The targeted 75 base pair mitochondrial DNA fragment could be amplified solely from equid remains from caves and not from any of the open dry and (temporary) wetland sites. Whether teeth are better than bones cannot be ultimately decided; however, both storage time after excavation and age significantly affect PCR amplification and performance, albeit not in a linear way. This is best explained by the—inevitable—heterogeneity of the data set. The extent of post mortem damage is not related to any of the potential impact factors. The results encourage comprehensive investigations of Palaeolithic cave sites, even from temperate regions.
Neuroenhancement (NE), the use of substances as a means to enhance performance, has garnered considerable scientific attention of late. While ethical and epidemiological publications on the topic accumulate, there is a lack of theory-driven psychological research that aims at understanding psychological drivers of NE. In this perspective article we argue that self-control strength offers a promising theory-based approach to further understand and investigate NE behavior. Using the strength model of self-control, we derive two theory-driven perspectives on NE-self-control research. First, we propose that individual differences in state/trait self-control strength differentially affect NE behavior based on one's individual experience of NE use. Building upon this, we outline promising research questions that (will) further elucidate our understanding of NE based on the strength model's propositions. Second, we discuss evidence indicating that popular NE substances (like Methylphenidate) may counteract imminent losses of self-control strength. We outline how further research on NE's effects on the ego-depletion effect may further broaden our understanding of the strength model of self-control.
In this paper we discuss how Alexander von Humboldt conceived a past for New Spain in his Political Essay on New Spain (1811) and how this text was, in turn, appropriated by Mexican historiography during the 19th century.
To do so, we analyze how the Prussian scholar drew on American sources, particularly the text of the Jesuit Francisco Javier Clavijero, written shortly before. We also study Humboldt's conceptions of text and of history, highlighting the place of the indigenous in the composition of his reasoning. Finally, we give examples of how Mexican nationalist historiography read and reinterpreted the Political Essay.
For some years now, spectroscopic measurements of massive stars in the amateur domain have been fulfilling professional requirements. Various groups in the northern and southern hemispheres have been established, running successful professional-amateur (ProAm) collaborative campaigns, e.g., on WR, O and B type stars. Today high-quality data (echelle and long-slit) are regularly delivered and the corresponding results published. Night-to-night long-term observations over months to years open a new opportunity for massive-star research. We introduce recent and ongoing sample campaigns (e.g. ε Aur, WR 134, ζ Pup), show respective results and highlight the vast amount of data collected in various databases. Ultimately it is in the time-dependent domain where amateurs can shine most.
Magnetic fields, non-thermal radiation and particle acceleration in colliding winds of WR-O stars
(2015)
Non-thermal emission has been detected in WR stars for many years in the long-wavelength spectral range and is generally attributed to synchrotron emission. Two key ingredients are needed to explain such emission, namely magnetic fields and relativistic particles. Particles can be accelerated to relativistic speeds by Fermi processes at strong shocks. Therefore, strong synchrotron emission is usually attributed to WR binarity. The magnetic field may also be amplified at shocks; however, the actual picture of the magnetic field geometry, its intensity, and its role in the acceleration of particles in WR binary systems is still unclear. In this work we discuss recent developments in MHD modelling of wind-wind collision regions by means of numerical simulations, and the related particle acceleration processes.
We describe a natural construction of deformation quantisation on a compact symplectic manifold with boundary. On the algebra of quantum observables a trace functional is defined which, as usual, annihilates commutators. This gives rise to an index as the trace of the unity element. We formulate the index theorem as a conjecture and examine it for the classical harmonic oscillator.
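In symbols (notation ours, following the standard deformation-quantisation setup rather than the paper's own formulas): the trace functional vanishes on star-commutators, and the index is the trace of the unit,

```latex
% Standard shape of a trace in deformation quantisation (illustrative notation).
\[
  \operatorname{Tr}\bigl( a \star b - b \star a \bigr) = 0
  \qquad \text{for all observables } a, b,
\]
\[
  \operatorname{ind} := \operatorname{Tr}(1),
  \qquad
  \operatorname{Tr}(a) \;=\; \frac{1}{(2\pi\hbar)^n} \int_M a \,
  \frac{\omega^n}{n!} \;+\; \text{higher-order corrections in } \hbar ,
\]
```

where $\omega$ is the symplectic form on the $2n$-dimensional manifold $M$; the leading-order integral formula is the standard normalization and is stated here only as orientation.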
The initiation of a marine ice-sheet instability (MISI) is generally discussed from the ocean side of the ice sheet. It has been shown that the reduction in ice-shelf buttressing and softening of the coastal ice can destabilize a marine ice sheet if the bedrock slopes upward towards the ocean. Using a conceptual flow-line geometry, we investigate whether a MISI can be triggered from the direction of the ice divide, as opposed to coastal forcing, and explore the interaction between connected basins. We find that the initiation of a MISI in one basin can induce a destabilization in the other. The underlying mechanism of basin interaction is based on dynamic thinning and a subsequent motion of the ice divide, which induces a thinning in the adjacent basin and a successive initiation of the instability. Our simplified and symmetric topographic setup allows scaling both the geometry and the transition time between the two instabilities. We find that the ice profile follows a universal shape that is scaled with the horizontal extent of the ice sheet, and that the same exponent of 1/2 applies for the scaling relation between central surface elevation and horizontal extent as in the pure shallow-ice approximation (Vialov profile). Altering the central bed elevation, we find that the extent of grounding-line retreat in one basin determines the degree of interaction with the other. Different scenarios of basin interaction are discussed based on our modeling results as well as on a conceptual flux-balance analysis. We conclude that for the three-dimensional case, the possibility of drainage-basin interaction on timescales of the order of 1 kyr or longer cannot be excluded and hence needs further investigation.
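The 1/2 exponent mentioned above follows from the standard Vialov profile of the shallow-ice approximation. With $n$ Glen's flow-law exponent, $h_c$ the central surface elevation, and $L$ the horizontal extent,

```latex
\[
  \left( \frac{h(x)}{h_c} \right)^{2 + 2/n}
  = 1 - \left( \frac{x}{L} \right)^{1 + 1/n},
  \qquad
  h_c \propto L^{\frac{1 + 1/n}{2 + 2/n}} = L^{1/2},
\]
```

so the scaling exponent of 1/2 between central surface elevation and horizontal extent is independent of the flow-law exponent $n$.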
Babelsberg/RML
(2015)
New programming language designs are often evaluated on concrete implementations. However, in order to draw conclusions about the language design from the evaluation of concrete programming languages, these implementations need to be verified against the formalism of the design. Beyond that, we also have to ensure that the design actually meets its stated goals. A useful tool for the latter has been to create an executable semantics from a formalism that can execute a test suite of examples. However, this mechanism has so far not made it possible to verify an implementation against the design.
Babelsberg is a new design for a family of object-constraint languages. Recently, we have developed a formal semantics to clarify some issues in the design of those languages. Supplementing this work, we report here on how this formalism is turned into an executable operational semantics using the RML system. Furthermore, we show how we extended the executable semantics to create a framework that can generate test suites for the concrete Babelsberg implementations that provide traceability from the design to the language. Finally, we discuss how these test suites helped us find and correct mistakes in the Babelsberg implementation for JavaScript.
A number of recent studies have investigated how syntactic and non-syntactic constraints combine to cue memory retrieval during anaphora resolution. In this paper we investigate how syntactic constraints and gender congruence interact to guide memory retrieval during the resolution of subject pronouns. Subject pronouns are always technically ambiguous, and the application of syntactic constraints on their interpretation depends on properties of the antecedent that is to be retrieved. While pronouns can freely corefer with non-quantified referential antecedents, linking a pronoun to a quantified antecedent is only possible in certain syntactic configurations via variable binding. We report the results from a judgment task and three online reading comprehension experiments investigating pronoun resolution with quantified and non-quantified antecedents. Results from both the judgment task and participants' eye movements during reading indicate that comprehenders freely allow pronouns to corefer with non-quantified antecedents, but that retrieval of quantified antecedents is restricted to specific syntactic environments. We interpret our findings as indicating that syntactic constraints constitute highly weighted cues to memory retrieval during anaphora resolution.
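The notion of "highly weighted cues" can be illustrated with a minimal linear cue-combination sketch. Everything below (feature names, weights, candidates) is invented for illustration and is not the authors' model; the point is only that a strongly weighted syntactic-licensing cue can override a matching gender cue.

```python
# Illustrative cue-based retrieval: candidates are scored by a weighted
# sum of matching retrieval cues. Features and weights are hypothetical.

def retrieve(candidates, cues, weights):
    """Return the candidate antecedent with the highest cue-match score."""
    def score(features):
        return sum(weights[c] for c, v in cues.items() if features.get(c) == v)
    return max(candidates, key=lambda name: score(candidates[name]))

candidates = {
    "every boy": {"gender": "masc", "syntactically_licensed": False},
    "the girl":  {"gender": "fem",  "syntactically_licensed": True},
}
# Cues projected by a pronoun like "he": masculine, licensed configuration.
cues = {"gender": "masc", "syntactically_licensed": True}
weights = {"gender": 1.0, "syntactically_licensed": 3.0}  # syntax weighted highest

winner = retrieve(candidates, cues, weights)
# the syntactically licensed antecedent wins despite the gender mismatch
```

With the syntactic cue weighted above the gender cue, the quantified but unlicensed antecedent loses the retrieval competition, mirroring the paper's conclusion that syntactic constraints act as highly weighted retrieval cues.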
While children acquire new words and simple sentence structures extremely fast and without much effort, the ability to process complex sentences develops rather late in life. Although the conjoint occurrence of brain-structural and brain-functional changes, decreasing plasticity, and changes in cognitive abilities suggests a certain causality between these processes, concrete evidence for the relation between brain development, language processing, and language performance is rare. Therefore, the current dissertation investigates the tripartite relationship between behavior (in the form of language performance and cognitive maturation as prerequisites for language processing), brain structure (in the form of gray matter maturation), and brain function (in the form of brain activation evoked by complex sentence processing). Previous developmental studies indicate a missing increase of activation in accordance with sentence complexity (functional selectivity) in language-relevant brain areas in children. To determine the factors contributing to the functional development of language-relevant brain areas, different methodologies and data acquisition techniques were used to investigate the processing of center-embedded sentences in 5- and 6-year-old children, 7- and 8-year-old children, and adults. Behavioral results indicate that children between 5 and 8 years have difficulties in processing double-embedded sentences and that their performance for this type of sentence is positively correlated with digit span. In 7- and 8-year-old children, especially the processing of long-distance relations between the initial phrase and its corresponding verb appears to be associated with verbal working memory capacity. In contrast, in the younger age group, children's performance for double-embedded sentences positively correlated with their performance in a standardized sentence comprehension test.
This finding supports the hypothesis that processing difficulties in this age group may be mainly attributed to difficulties in processing case-marking information. These findings are discussed with respect to current accounts of language and working memory development. A second study aimed at investigating the structural maturation of brain areas involved in sentence comprehension. To do this, whole-brain magnetic resonance images from 59 children between 5 and 8 years were collected and children's gray matter was analyzed using voxel-based morphometry. Children's grammatical proficiency was assessed by a standardized sentence comprehension test. A confirmatory factor analysis corroborated a grammar-relevant and a verbal working memory-relevant factor underlying the measured performance. While children's ability to assign thematic roles is positively correlated with gray matter probability (GMP) in the left inferior temporal gyrus and the left inferior frontal gyrus, verbal working memory-related performance is positively correlated with GMP in the left parietal operculum extending into the posterior superior temporal gyrus. These areas have previously been shown to be differentially engaged in adults' complex sentence processing. Thus, the findings of the second study suggest a specific correspondence between children's GMP in language-relevant brain regions and the differential cognitive abilities which underlie complex sentence comprehension. In a third study, functional brain activity during the processing of center-embedded sentences was investigated in three different age groups (5–6 years, 7–8 years, and adults). Although all age groups engage a qualitatively comparable network of the left pars opercularis (PO), the left inferior parietal lobe extending into the posterior superior temporal gyrus (IPL/pSTG), the supplementary motor area (SMA), and the cerebellum, functional selectivity of these regions was only observable in adults.
However, functional activation of the language-related regions (PO and IPL/pSTG) predicted sentence comprehension performance in all age groups. To address the question of the complex interplay between different maturational factors, a fourth study analyzed the predictive power of gray matter probability, verbal working memory capacity, and behavioral differences in performance for simple and complex sentences for the functional selectivity of each activated region. These analyses revealed that the establishment of adult-like functional selectivity for complex sentences is predicted by a reduction of the left PO's gray matter probability across age groups, while that of the IPL/pSTG is additionally predicted by verbal working memory capacity. Taking all findings together, the current thesis provides evidence that both structural brain maturation and verbal working memory expansion provide the basis for the emergence of functional selectivity in language-related brain regions, leading to more efficient sentence processing during development.
Dual-normal logic programs
(2015)
Disjunctive Answer Set Programming is a powerful declarative programming paradigm with complexity beyond NP. Identifying classes of programs for which the consistency problem is in NP is of interest from the theoretical standpoint and can potentially lead to improvements in the design of answer set programming solvers. One such class consists of dual-normal programs, where the number of positive body atoms in proper rules is at most one. Unlike other classes of programs, dual-normal programs have received little attention so far. In this paper we study this class. We relate dual-normal programs to propositional theories and to normal programs by presenting several inter-translations. With the translation from dual-normal to normal programs at hand, we introduce the novel class of body-cycle free programs, which are in many respects dual to head-cycle free programs. We establish the expressive power of dual-normal programs in terms of SE- and UE-models, and compare them to normal programs. We also discuss the complexity of deciding whether dual-normal programs are strongly and uniformly equivalent.
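The defining syntactic condition is easy to state operationally. In the sketch below, a program is a list of (head, positive body, negative body) triples of atom sets, and we take "proper rule" to mean a rule with a non-empty head; both the representation and that reading are assumptions of this sketch, not the paper's formal definitions.

```python
# Sketch of the defining check: a disjunctive program is dual-normal if
# every proper rule (here: a rule with a non-empty head) has at most one
# positive body atom. The rule representation is an assumption.

def is_dual_normal(program):
    """program: list of (head, pos_body, neg_body) triples of atom sets."""
    return all(len(pos) <= 1 for head, pos, neg in program if head)

# a | b :- c, not d.   one positive body atom: allowed
# e.                   fact: allowed
# f :- a, b.           two positive body atoms: violates dual-normality
p1 = [({"a", "b"}, {"c"}, {"d"}), ({"e"}, set(), set())]
p2 = p1 + [({"f"}, {"a", "b"}, set())]
ok, bad = is_dual_normal(p1), is_dual_normal(p2)
```

Constraints (empty heads) are deliberately exempted here, matching the restriction to proper rules in the class definition quoted above.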
Commentary
(2015)
The primary motivation for systematic bases in first principles electronic structure simulations is to derive physical and chemical properties of molecules and solids with predetermined accuracy. This requires a detailed understanding of the asymptotic behaviour of many-particle Coulomb systems near coalescence points of particles. Singular analysis provides a convenient framework to study the asymptotic behaviour of wavefunctions near these singularities. In the present work, we want to introduce the mathematical framework of singular analysis and discuss a novel asymptotic parametrix construction for Hamiltonians of many-particle Coulomb systems. This corresponds to the construction of an approximate inverse of a Hamiltonian operator with remainder given by a so-called Green operator. The Green operator encodes essential asymptotic information and we present as our main result an explicit asymptotic formula for this operator. First applications to many-particle models in quantum chemistry are presented in order to demonstrate the feasibility of our approach. The focus is on the asymptotic behaviour of ladder diagrams, which provide the dominant contribution to short-range correlation in coupled cluster theory. Furthermore, we discuss possible consequences of our asymptotic analysis with respect to adaptive wavelet approximation.