Refine
Has Fulltext
- yes (5964) (remove)
Year of publication
Document Type
- Postprint (2346)
- Doctoral Thesis (1737)
- Article (644)
- Preprint (425)
- Monograph/Edited Volume (246)
- Conference Proceeding (185)
- Working Paper (168)
- Master's Thesis (60)
- Habilitation Thesis (39)
- Part of Periodical (26)
Language
- English (5964) (remove)
Keywords
- climate change (74)
- Klimawandel (51)
- machine learning (41)
- morphology (40)
- information structure (39)
- MOOC (37)
- syntax (37)
- e-learning (36)
- digital education (35)
- Curriculum Framework (34)
Institute
- Institut für Physik und Astronomie (641)
- Institut für Biochemie und Biologie (557)
- Mathematisch-Naturwissenschaftliche Fakultät (485)
- Institut für Mathematik (475)
- Institut für Geowissenschaften (472)
- Extern (460)
- Institut für Chemie (428)
- Department Linguistik (237)
- Humanwissenschaftliche Fakultät (207)
- Department Psychologie (205)
Parabolic equations on manifolds with singularities require a new calculus of anisotropic pseudo-differential operators with operator-valued symbols. The paper develops this theory along the lines of sn abstract wedge calculus with strongly continuous groups of isomorphisms on the involved Banach spaces. The corresponding pseodo-diferential operators are continuous in anisotropic wedge Sobolev spaces, and they form an alegbra. There is then introduced the concept of anisotropic parameter-dependent ellipticity, based on an order reduction variant of the pseudo-differential calculus. The theory is appled to a class of parabolic differential operators, and it is proved the invertibility in Sobolev spaces with exponential weights at infinity in time direction.
Fronting of an infinite VP across a finite main verb-akin to German "VP-topicalization"-can be found also in Czech and Polish. The paper discusses evidence from large corpora for this process and some of its properties, both syntactic and information-structural. Based on this case, criteria for more user-friedly searching and retrieval of corpus data in syntactic research are being developed.
VS30, slope, H800 and f0
(2017)
The aim of this paper is to investigate the ability of various site-condition proxies (SCPs) to reduce ground-motion aleatory variability and evaluate how SCPs capture nonlinearity site effects. The SCPs used here are time-averaged shear-wave velocity in the top 30 m (VS30), the topographical slope (slope), the fundamental resonance frequency (f0) and the depth beyond which Vs exceeds 800 m/s (H800). We considered first the performance of each SCP taken alone and then the combined performance of the 6 SCP pairs [VS30–f0], [VS30–H800], [f0–slope], [H800–slope], [VS30–slope] and [f0–H800]. This analysis is performed using a neural network approach including a random effect applied on a KiK-net subset for derivation of ground-motion prediction equations setting the relationship between various ground-motion parameters such as peak ground acceleration, peak ground velocity and pseudo-spectral acceleration PSA (T), and Mw, RJB, focal depth and SCPs. While the choice of SCP is found to have almost no impact on the median groundmotion prediction, it does impact the level of aleatory uncertainty. VS30 is found to perform the best of single proxies
at short periods (T < 0.6 s), while f0 and H800 perform better at longer periods; considering SCP pairs leads to significant improvements, with particular emphasis on [VS30–H800] and [f0–slope] pairs. The results also indicate significant nonlinearity on the site terms for soft sites and that the most relevant loading parameter for characterising nonlinear site response is the “stiff” spectral ordinate at the considered period.
A comprehensive hydro-sedimentological dataset for the Isábena catchment, northeastern (NE) Spain, for the period 2010–2018 is presented to analyse water and sediment fluxes in a Mediterranean mesoscale catchment. The dataset includes rainfall data from 12 rain gauges distributed within the study area complemented by meteorological data of 12 official meteo-stations. It comprises discharge data derived from water stage measurements as well as suspended sediment concentrations (SSCs) at six gauging stations of the River Isábena and its sub-catchments. Soil spectroscopic data from 351 suspended sediment samples and 152 soil samples were collected to characterize sediment source regions and sediment properties via fingerprinting analyses. The Isábena catchment (445 km 2 ) is located in the southern central Pyrenees ranging from 450 m to 2720 m a.s.l.; together with a pronounced topography, this leads to distinct temperature and precipitation gradients. The River Isábena shows marked discharge variations and high sediment yields causing severe siltation problems in the downstream Barasona Reservoir. The main sediment source is badland areas located on Eocene marls that are well connected to the river network. The dataset features a comprehensive set of variables in a high spatial and temporal resolution suitable for the advanced process understanding of water and sediment fluxes, their origin and connectivity and sediment budgeting and for the evaluation and further development of hydro-sedimentological models in
Mediterranean mesoscale mountainous catchments.
Water at α-alumina surfaces
(2018)
The (0001) surface of α-Al₂O₃ is the most stable surface cut under UHV conditions and was studied by many groups both theoretically and experimentally. Reaction barriers computed with GGA functionals are known to be underestimated. Based on an example reaction at the (0001) surface, this work seeks to improve this rate by applying a hybrid functional method and perturbation theory (LMP2) with an atomic orbital basis, rather than a plane wave basis. In addition to activation barriers, we calculate the stability and vibrational frequencies of water on the surface. Adsorption energies were compared to PW calculations and confirmed PBE+D2/PW stability results. Especially the vibrational frequencies with the B3LYP hybrid functional that have been calculated for the (0001) surface are in good agreement with experimental findings. Concerning the barriers and the reaction rate constant, the expectations are fully met. It could be shown that recalculation of the transition state leads to an increased barrier, and a decreased rate constant when hybrid functionals or LMP2 are applied.
Furthermore, the molecular beam scattering of water on (0001) surface was studied. In a previous work by Hass the dissociation was studied by AIMD of molecularly adsorbed water, referring to an equilibrium situation. The experimental method to obtaining this is pinhole dosing. In contrast to this earlier work, the dissociation process of heavy water that is brought onto the surface from a molecular beam source was modeled in this work by periodic ab initio molecular dynamics simulations. This experimental method results in a non-equilibrium situation. The calculations with different surface and beam models allow us to understand the results of the non-equilibrium situation better. In contrast to a more equilibrium situation with pinhole dosing, this gives an increase in the dissociation probability, which could be explained and also understood mechanistically by those calculations.
In this work good progress was made in understanding the (1120) surface of α-Al₂O₃ in contact with water in the low-coverage regime. This surface cut is the third most stable one under UHV conditions and has not been studied to a great extent yet. After optimization of the clean, defect free surface, the stability of different adsorbed species could be classified. One molecular minimum and several dissociated species could be detected. Starting from these, reaction rates for various surface reactions were evaluated. A dissociation reaction was shown to be very fast because the molecular minimum is relatively unstable, whereas diffusion reactions cover a wider range from fast to slow. In general, the (112‾0) surface appears to be much more reactive against water than the (0001) surface. In addition to reactivity, harmonic vibrational frequencies were determined for comparison with the findings of the experimental “Interfacial Molecular Spectroscopy” group from Fritz-Haber institute in Berlin. Especially the vibrational frequencies of OD species could be assigned to vibrations from experimental SFG spectra with very good agreement. Also, lattice vibrations were studied in close collaboration with the experimental partners. They perform SFG spectra at very low frequencies to get deep into the lattice vibration region. Correspondingly, a bigger slab model with greater expansion perpendicular to the surface was applied, considering more layers in the bulk. Also with the lattice vibrations we could obtain reasonably good agreement in terms of energy differences between the peaks.
The hydrological budget of a region is determined based on the horizontal and vertical water fluxes acting in both inward and outward directions. These integrated water fluxes vary, altering the total water storage and consequently the gravitational force of the region. The time-dependent gravitational field can be observed through the Gravity Recovery and Climate Experiment (GRACE) gravimetric satellite mission, provided that the mass variation is above the sensitivity of GRACE. This study evaluates mass changes in prominent reservoir regions through three independent approaches viz. fluxes, storages, and gravity, by combining remote sensing products, in-situ data and hydrological model outputs using WaterGAP Global Hydrological Model (WGHM) and Global Land Data Assimilation System (GLDAS). The results show that the dynamics revealed by the GRACE signal can be better explored by a hybrid method, which combines remote sensing-based reservoir volume estimates with hydrological model outputs, than by exclusive model-based storage estimates. For the given arid/ semi-arid regions, GLDAS based storage estimations perform better than WGHM.
In the past decades, development cooperation (DC) led by conventional bi- and multilateral donors has been joined by a large number of small, private or public-private donors. This pluralism of actors raises questions as to whether or not these new donors are able to implement projects more or less effectively than their conventional counterparts. In contrast to their predecessors, the new donors have committed themselves to be more pragmatic, innovative and flexible in their development cooperation measures. However, they are also criticized for weakening the function of local civil society and have the reputation of being an intransparent and often controversial alternative to public services. With additional financial resources and their new approach to development, the new donors have been described in the literature as playing a controversial role in transforming development cooperation. This dissertation compares the effectiveness of initiatives by new and conventional donors with regard to the provision of public goods and services to the poor in the water and sanitation sector in India.
India is an emerging country but it is experiencing high poverty rates and poor water supply in predominantly rural areas. It lends itself for analyzing this research theme as it is currently being confronted by a large number of actors and approaches that aim to find solutions for these challenges .
In the theoretical framework of this dissertation, four governance configurations are derived from the interaction of varying actor types with regard to hierarchical and non-hierarchical steering of their interactions. These four governance configurations differ in decision-making responsibilities, accountability and delegation of tasks or direction of information flow. The assumption on actor relationships and steering is supplemented by possible alternative explanations in the empirical investigation, such as resource availability, the inheritance of structures and institutions from previous projects in a project context, gaining acceptance through beneficiaries (local legitimacy) as a door opener, and asymmetries of power in the project context.
Case study evidence from seven projects reveals that the actors' relationship is important for successful project delivery. Additionally, the results show that there is a systematic difference between conventional and new donors. Projects led by conventional donors were consistently more successful, due to an actor relationship that placed the responsibility in the hands of the recipient actors and benefited from the trust and reputation of a long-term cooperation. The trust and reputation of conventional donors always went along with a back-up from federal level and trickled down as reputation also at local level implementation. Furthermore, charismatic leaders, as well as the acquired structures and institutions of predecessor projects, also proved to be a positive influencing factor for successful project implementation.
Despite the mixed results of the seven case studies, central recommendations for action can be derived for the various actors involved in development cooperation. For example, new donors could fulfill a supplementary function with conventional donors by developing innovative project approaches through pilot studies and then implementing them as a supplement to the projects of conventional donors on the ground. In return, conventional donors would have to make room the new donors by integrating their approaches into already programs in order to promote donor harmonization. It is also important to identify and occupy niches for activities and to promote harmonization among donors on state and federal sides.
The empirical results demonstrate the need for a harmonization strategy of different donor types in order to prevent duplication, over-experimentation and the failure of development programs. A transformation to successful and sustainable development cooperation can only be achieved through more coordination processes and national self-responsibility.
The functioning of the surface water-groundwater interface as buffer, filter and reactive zone is important for water quality, ecological health and resilience of streams and riparian ecosystems. Solute and heat exchange across this interface is driven by the advection of water. Characterizing the flow conditions in the streambed is challenging as flow patterns are often complex and multidimensional, driven by surface hydraulic gradients and groundwater discharge. This thesis presents the results of an integrated approach of studies, ranging from the acquisition of field data, the development of analytical and numerical approaches to analyse vertical temperature profiles to the detailed, fully-integrated 3D numerical modelling of water and heat flux at the reach scale. All techniques were applied in order to characterize exchange flux between stream and groundwater, hyporheic flow paths and temperature patterns.
The study was conducted at a reach-scale section of the lowland Selke River, characterized by distinctive pool riffle sequences and fluvial islands and gravel bars. Continuous time series of hydraulic heads and temperatures were measured at different depths in the river bank, the hyporheic zone and within the river. The analyses of the measured diurnal temperature variation in riverbed sediments provided detailed information about the exchange flux between river and groundwater. Beyond the one-dimensional vertical water flow in the riverbed sediment, hyporheic and parafluvial flow patterns were identified. Subsurface flow direction and magnitude around fluvial islands and gravel bars at the study site strongly depended on the position around the geomorphological structures and on the river stage. Horizontal water flux in the streambed substantially impacted temperature patterns in the streambed. At locations with substantial horizontal fluxes the penetration depths of daily temperature fluctuations was reduced in comparison to purely vertical exchange conditions.
The calibrated and validated 3D fully-integrated model of reach-scale water and heat fluxes across the river-groundwater interface was able to accurately represent the real system. The magnitude and variations of the simulated temperatures matched the observed ones, with an average mean absolute error of 0.7 °C and an average Nash Sutcliffe Efficiency of 0.87. The simulation results showed that the water and heat exchange at the surface water-groundwater interface is highly variable in space and time with zones of daily temperature oscillations penetrating deep into the sediment and spots of daily constant temperature following the average groundwater temperature. The average hyporheic flow path temperature was found to strongly correlate with the flow path residence time (flow path length) and the temperature gradient between river and groundwater. Despite the complexity of these processes, the simulation results allowed the derivation of a general empirical relationship between the hyporheic residence times and temperature patterns. The presented results improve our understanding of the complex spatial and temporal dynamics of water flux and thermal processes within the shallow streambed. Understanding these links provides a general basis from which to assess hyporheic temperature conditions in river reaches.
The economic impact analysis contained in this book shows how irrigation farming is particularly susceptible when applying certain water management policies in the Australian Murray-Darling Basin, one of the world largest river basins and Australia’s most fertile region. By comparing different pricing and non-pricing water management policies with the help of the Water Integrated Market Model, it is found that the impact of water demand reducing policies is most severe on crops that need to be intensively irrigated and are at the same time less water productive. A combination of increasingly frequent and severe droughts and the application of policies that decrease agricultural water demand, in the same region, will create a situation in which the highly water dependent crops rice and cotton cannot be cultivated at all.
Hybrid nanomaterials offer the combination of individual properties of different types of nanoparticles. Some strategies for the development of new nanostructures in larger scale rely on the self-assembly of nanoparticles as a bottom-up approach. The use of templates provides ordered assemblies in defined patterns. In a typical soft-template, nanoparticles and other surface-active agents are incorporated into non-miscible liquids. The resulting self-organized dispersions will mediate nanoparticle interactions to control the subsequent self-assembly. Especially interactions between nanoparticles of very different dispersibility and functionality can be directed at a liquid-liquid interface.
In this project, water-in-oil microemulsions were formulated from quasi-ternary mixtures with Aerosol-OT as surfactant. Oleyl-capped superparamagnetic iron oxide and/or silver nanoparticles were incorporated in the continuous organic phase, while polyethyleneimine-stabilized gold nanoparticles were confined in the dispersed water droplets. Each type of nanoparticle can modulate the surfactant film and the inter-droplet interactions in diverse ways, and their combination causes synergistic effects. Interfacial assemblies of nanoparticles resulted after phase-separation. On one hand, from a biphasic Winsor type II system at low surfactant concentration, drop-casting of the upper phase afforded thin films of ordered nanoparticles in filament-like networks. Detailed characterization proved that this templated assembly over a surface is based on the controlled clustering of nanoparticles and the elongation of the microemulsion droplets. This process offers versatility to use different nanoparticle compositions by keeping the surface functionalization, in different solvents and over different surfaces. On the other hand, a magnetic heterocoagulate was formed at higher surfactant concentration, whose phase-transfer from oleic acid to water was possible with another auxiliary surfactant in ethanol-water mixture. When the original components were initially mixed under heating, defined oil-in-water, magnetic-responsive nanostructures were obtained, consisting on water-dispersible nanoparticle domains embedded by a matrix-shell of oil-dispersible nanoparticles.
Herein, two different approaches were demonstrated to form diverse hybrid nanostructures from reverse microemulsions as self-organized dispersions of the same components. This shows that microemulsions are versatile soft-templates not only for the synthesis of nanoparticles, but also for their self-assembly, which suggest new approaches towards the production of new sophisticated nanomaterials in larger scale.
The effect of cellulose-based polyelectrolytes on biomimetic calcium phosphate mineralization is described. Three cellulose derivatives, a polyanion, a polycation, and a polyzwitterion were used as additives. Scanning electron microscopy, X-ray diffraction, IR and Raman spectroscopy show that, depending on the composition of the starting solution, hydroxyapatite or brushite precipitates form. Infrared and Raman spectroscopy also show that significant amounts of nitrate ions are incorporated in the precipitates. Energy dispersive X-ray spectroscopy shows that the Ca/P ratio varies throughout the samples and resembles that of other bioinspired calcium phosphate hybrid materials. Elemental analysis shows that the carbon (i.e., polymer) contents reach 10% in some samples, clearly illustrating the formation of a true hybrid material. Overall, the data indicate that a higher polymer concentration in the reaction mixture favors the formation of polymer-enriched materials, while lower polymer concentrations or high precursor concentrations favor the formation of products that are closely related to the control samples precipitated in the absence of polymer. The results thus highlight the potential of (water-soluble) cellulose derivatives for the synthesis and design of bioinspired and bio-based hybrid materials.
In this work we investigated ultrafast demagnetization in a Heusler-alloy. This material belongs to the halfmetal and exists in a ferromagnetic phase. A special feature of investigated alloy is a structure of electronic bands. The last leads to the specific density of the states. Majority electrons form a metallic like structure while minority electrons form a gap near the Fermi-level, like in semiconductor. This particularity offers a good possibility to use this material as model-like structure and to make some proof of principles concerning demagnetization. Using pump-probe experiments we carried out time-resolved measurements to figure out the times of demagnetization. For the pumping we used ultrashort laser pulses with duration around 100 fs. Simultaneously we used two excitation regimes with two different wavelengths namely 400 nm and 1240 nm. Decreasing the energy of photons to the gap size of the minority electrons we explored the effect of the gap on the demagnetization dynamics. During this work we used for the first time OPA (Optical Parametrical Amplifier) for the generation of the laser irradiation in a long-wave regime. We tested it on the FETOSPEX-beamline in BASSYII electron storage ring. With this new technique we measured wavelength dependent demagnetization dynamics. We estimated that the demagnetization time is in a correlation with photon energy of the excitation pulse. Higher photon energy leads to the faster demagnetization in our material. We associate this result with the existence of the energy-gap for minority electrons and explained it with Elliot-Yaffet-scattering events. Additionally we applied new probe-method for magnetization state in this work and verified their effectivity. It is about the well-known XMCD (X-ray magnetic circular dichroism) which we adopted for the measurements in reflection geometry. Static experiments confirmed that the pure electronic dynamics can be separated from the magnetic one. 
We used photon energy fixed on the L3 of the corresponding elements with circular polarization. Appropriate incidence angel was estimated from static measurements. Using this probe method in dynamic measurements we explored electronic and magnetic dynamics in this alloy.
Contents: 1 Introduction 1.1 Tikhanov-Phillips Regularization of Ill-Posed Problems 1.2 A Compact Course to Wavelets 2 A Multilevel Iteration for Tikhonov-Phillips Regularization 2.1 Multilevel Splitting 2.2 The Multilevel Iteration 2.3 Multilevel Approach to Cone Beam Reconstuction 3 The use of approximating operators 3.1 Computing approximating families {Ah}
The dynamics of external contributions to the geomagnetic field is investigated by applying time-frequency methods to magnetic observatory data. Fractal models and multiscale analysis enable obtaining maximum quantitative information related to the short-term dynamics of the geomagnetic field activity. The stochastic properties of the horizontal component of the transient external field are determined by searching for scaling laws in the power spectra. The spectrum fits a power law with a scaling exponent β, a typical characteristic of self-affine time-series. Local variations in the power-law exponent are investigated by applying wavelet analysis to the same time-series. These analyses highlight the self-affine properties of geomagnetic perturbations and their persistence. Moreover, they show that the main phases of sudden storm disturbances are uniquely characterized by a scaling exponent varying between 1 and 3, possibly related to the energy contained in the external field. These new findings suggest the existence of a long-range dependence, the scaling exponent being an efficient indicator of geomagnetic activity and singularity detection. These results show that by using magnetogram regularity to reflect the magnetosphere activity, a theoretical analysis of the external geomagnetic field based on local power-law exponents is possible.
Projection methods based on wavelet functions combine optimal convergence rates with algorithmic efficiency. The proofs in this paper utilize the approximation properties of wavelets and results from the general theory of regularization methods. Moreover, adaptive strategies can be incorporated still leading to optimal convergence rates for the resulting algorithms. The so-called wavelet-vaguelette decompositions enable the realization of especially fast algorithms for certain operators.
We define weak boundary values of solutions to those nonlinear differential equations which appear as Euler-Lagrange equations of variational problems. As a result we initiate the theory of Lagrangian boundary value problems in spaces of appropriate smoothness. We also analyse if the concept of mapping degree of current importance applies to the study of Lagrangian problems.
Process models specify behavioral execution constraints between activities as well as between activities and data objects. A data object is characterized by its states and state transitions represented as object life cycle. For process execution, all behavioral execution constraints must be correct. Correctness can be verified via soundness checking which currently only considers control flow information. For data correctness, conformance between a process model and its object life cycles is checked. Current approaches abstract from dependencies between multiple data objects and require fully specified process models although, in real-world process repositories, often underspecified models are found. Coping with these issues, we introduce the concept of synchronized object life cycles and we define a mapping of data constraints of a process model to Petri nets extending an existing mapping. Further, we apply the notion of weak conformance to process models to tell whether each time an activity needs to access a data object in a particular state, it is guaranteed that the data object is in or can reach the expected state. Then, we introduce an algorithm for an integrated verification of control flow correctness and weak data conformance using soundness checking.
To understand the evolution and morphology of planetary nebulae, a detailed knowledge of their central stars is required. Central stars that exhibit emission lines in their spectra, indicating stellar mass-loss allow to study the evolution of planetary nebulae in action. Emission line central stars constitute about 10 % of all central stars. Half of them are practically hydrogen-free Wolf-Rayet type central stars of the carbon sequence, [WC], that show strong emission lines of carbon and oxygen in their spectra. In this contribution we address the weak emission-lines central stars (wels). These stars are poorly analyzed and their hydrogen content is mostly unknown. We obtained optical spectra, that include the important Balmer lines of hydrogen, for four weak emission line central stars. We present the results of our analysis, provide spectral classification and discuss possible explanations for their formation and evolution.
The World Wide Web as an application platform becomes increasingly important. However, the development of Web applications is often more complex than for the desktop. Web-based development environments like Lively Webwerkstatt can mitigate this problem by making the development process more interactive and direct. By moving the development environment into the Web, applications can be developed collaboratively in a Wiki-like manner. This report documents the results of the project seminar on Web-based Development Environments 2010. In this seminar, participants extended the Web-based development environment Lively Webwerkstatt. They worked in small teams on current research topics from the field of Web-development and tool support for programmers and implemented their results in the Webwerkstatt environment.
Virtual 3D city models represent and integrate a variety of spatial data and georeferenced data related to urban areas. With the help of improved remote-sensing technology, official 3D cadastral data, open data or geodata crowdsourcing, the quantity and availability of such data are constantly expanding and its quality is ever improving for many major cities and metropolitan regions. There are numerous fields of applications for such data, including city planning and development, environmental analysis and simulation, disaster and risk management, navigation systems, and interactive city maps.
The dissemination and the interactive use of virtual 3D city models represent key technical functionality required by nearly all corresponding systems, services, and applications. The size and complexity of virtual 3D city models, their management, their handling, and especially their visualization represent challenging tasks. For example, mobile applications can hardly handle these models due to their massive data volume and data heterogeneity. Therefore, the efficient usage of all computational resources (e.g., storage, processing power, main memory, and graphics hardware, etc.) is a key requirement for software engineering in this field. Common approaches are based on complex clients that require the 3D model data (e.g., 3D meshes and 2D textures) to be transferred to them and that then render those received 3D models. However, these applications have to implement most stages of the visualization pipeline on client side. Thus, as high-quality 3D rendering processes strongly depend on locally available computer graphics resources, software engineering faces the challenge of building robust cross-platform client implementations.
Web-based provisioning aims at providing a service-oriented software architecture that consists of tailored functional components for building web-based and mobile applications that manage and visualize virtual 3D city models. This thesis presents corresponding concepts and techniques for web-based provisioning of virtual 3D city models. In particular, it introduces services that allow us to efficiently build applications for virtual 3D city models based on a fine-grained service concept. The thesis covers five main areas:
1. A Service-Based Concept for Image-Based Provisioning of Virtual 3D City Models: It creates a frame for a broad range of services related to the rendering and image-based dissemination of virtual 3D city models.
2. 3D Rendering Service for Virtual 3D City Models: This service provides efficient, high-quality 3D rendering functionality for virtual 3D city models. In particular, it copes with requirements such as standardized data formats, massive model texturing, detailed 3D geometry, access to associated feature data, and non-assumed frame-to-frame coherence for parallel service requests. In addition, it supports thematic and artistic styling based on an expandable graphics effects library.
3. Layered Map Service for Virtual 3D City Models: It generates a map-like representation of virtual 3D city models using an oblique view. It provides high visual quality, fast initial loading times, simple map-based interaction, and feature data access. Based on a configurable client framework, mobile and web-based applications for virtual 3D city models can be created easily.
4. Video Service for Virtual 3D City Models: It creates and synthesizes videos from virtual 3D city models. Without requiring client-side 3D rendering capabilities, users can create camera paths via a map-based user interface and configure scene contents, styling, image overlays, text overlays, and their transitions. The service significantly reduces the manual effort typically required to produce such videos. The videos can be updated automatically when the underlying data changes.
5. Service-Based Camera Interaction: It supports task-based 3D camera interactions, which can be integrated seamlessly into service-based visualization applications. It is demonstrated how to build web-based interactive applications for virtual 3D city models using this camera service.
These contributions provide a framework for the design, implementation, and deployment of future web-based applications, systems, and services for virtual 3D city models. The approach shows how to decompose the complex, monolithic functionality of current 3D geovisualization systems into independently designed, implemented, and operated service-oriented units. In that sense, this thesis also contributes to microservice architectures for 3D geovisualization systems, addressing a key challenge of today's IT systems engineering: building scalable IT solutions.
Obesity is a risk factor for several major cancers. Associations of weight change in middle adulthood with cancer risk, however, are less clear. We examined the association of change in weight and body mass index (BMI) category during middle adulthood with 42 cancers, using multivariable Cox proportional hazards models in the European Prospective Investigation into Cancer and Nutrition cohort. Of 241 323 participants (31% men), 20% lost and 32% gained weight (>0.4 to 5.0 kg/year) during 6.9 years (average). During 8.0 years of follow-up after the second weight assessment, 20 960 incident cancers were ascertained. Independent of baseline BMI, weight gain (per one kg/year increment) was positively associated with cancer of the corpus uteri (hazard ratio [HR] = 1.14; 95% confidence interval: 1.05-1.23). Compared to stable weight (+/- 0.4 kg/year), weight gain (>0.4 to 5.0 kg/year) was positively associated with cancers of the gallbladder and bile ducts (HR = 1.41; 1.01-1.96), postmenopausal breast (HR = 1.08; 1.00-1.16) and thyroid (HR = 1.40; 1.04-1.90). Compared to maintaining normal weight, maintaining overweight or obese BMI (World Health Organisation categories) was positively associated with most obesity-related cancers. Compared to maintaining the baseline BMI category, weight gain to a higher BMI category was positively associated with cancers of the postmenopausal breast (HR = 1.19; 1.06-1.33), ovary (HR = 1.40; 1.04-1.91), corpus uteri (HR = 1.42; 1.06-1.91), kidney (HR = 1.80; 1.20-2.68) and pancreas in men (HR = 1.81; 1.11-2.95). Losing weight to a lower BMI category, however, was inversely associated with cancers of the corpus uteri (HR = 0.40; 0.23-0.69) and colon (HR = 0.69; 0.52-0.92). Our findings support avoiding weight gain and encouraging weight loss in middle adulthood.
Welcome to the Dark Side
(2022)
Differences in natural light conditions caused by changes in moonlight are known to affect perceived predation risk in many nocturnal prey species. As artificial light at night (ALAN) is steadily increasing in spatial extent and intensity, it has the potential to change the movement and foraging behavior of many species, as it might increase perceived predation risk and mask natural light cycles. We investigated whether partial nighttime illumination leads to changes in foraging behavior during the night and the subsequent day in a small mammal, and whether these changes are related to animal personalities. We subjected bank voles to partial nighttime illumination in a foraging landscape under laboratory conditions and in large grassland enclosures under near-natural conditions. We measured the giving-up density of food in illuminated and dark artificial seed patches and video-recorded the movement of animals. While animals reduced the number of visits to illuminated seed patches at night, they increased visits to these patches on the following day compared to dark seed patches. Overall, bold individuals had lower giving-up densities than shy individuals, but this difference increased during the day in formerly illuminated seed patches. Small mammals thus showed carry-over effects of ALAN on daytime foraging behavior, i.e., nocturnal illumination has the potential to affect intra- and interspecific interactions during both night and day, with possible changes in personality structure within populations and altered predator-prey dynamics.
In sedimentary basins, rock thermal conductivity can vary both laterally and vertically, thus altering the basin’s thermal structure locally and regionally. Knowledge of the thermal conductivity of geological formations and its spatial variations is essential, not only for quantifying basin evolution and hydrocarbon maturation processes, but also for understanding geothermal conditions in a geological setting. In conjunction with the temperature gradient, thermal conductivity represents the basic input parameter for the determination of the heat-flow density, which, in turn, is applied as a major input parameter in thermal modeling at different scales. Drill-core samples, which are necessary to determine thermal properties by laboratory measurements, are rarely available and often limited to previously explored reservoir formations. Thus, thermal conductivities of Mesozoic rocks in the North German Basin (NGB) are largely unknown. In contrast, geophysical borehole measurements are often available for the entire drilled sequence. Therefore, prediction equations to determine thermal conductivity based on well-log data are desirable. In this study, rock thermal conductivity was investigated on different scales by (1) providing thermal-conductivity measurements on Mesozoic rocks, (2) evaluating and improving commonly applied mixing models used to estimate matrix and pore-filled rock thermal conductivities, and (3) developing new well-log-based equations to predict thermal conductivity in boreholes without core control. Laboratory measurements were performed on sedimentary rocks of major geothermal reservoirs in the Northeast German Basin (NEGB) (Aalenian, Rhaetian-Liassic, Stuttgart Fm., and Middle Buntsandstein). Samples were obtained from eight deep geothermal wells reaching depths of up to 2,500 m. Bulk thermal conductivities of Mesozoic sandstones range between 2.1 and 3.9 W/(m∙K), while matrix thermal conductivity ranges between 3.4 and 7.4 W/(m∙K).
Local heat flow for the Stralsund location averages 76 mW/m², which is in good agreement with values reported previously for the NEGB. For the first time, in-situ bulk thermal conductivity is indirectly calculated for entire borehole profiles in the NEGB using the determined surface heat flow and measured temperature data. Average bulk thermal conductivity, derived for geological formations within the Mesozoic section, ranges between 1.5 and 3.1 W/(m∙K). The measurement of both dry and water-saturated thermal conductivities allows further evaluation of different two-component mixing models which are often applied in geothermal calculations (e.g., arithmetic mean, geometric mean, harmonic mean, Hashin-Shtrikman mean, and effective-medium theory mean). It is found that the geometric-mean model shows the best correlation between calculated and measured bulk thermal conductivity. However, by applying new model-dependent correction equations, the quality of fit could be significantly improved and the error scatter of each model reduced. The ‘corrected’ geometric mean provides the most satisfying results and constitutes a universally applicable model for sedimentary rocks. Furthermore, lithotype-specific and model-independent conversion equations are developed, permitting a calculation of water-saturated thermal conductivity from dry-measured thermal conductivity and porosity within an error range of 5 to 10%. The limited availability of core samples and the expensive core-based laboratory measurements make it worthwhile to use petrophysical well logs to determine thermal conductivity for sedimentary rocks. The approach followed in this study is based on detailed analyses of the relationships between the thermal conductivity of the rock-forming minerals most abundant in sedimentary rocks and the properties measured by standard logging tools.
By using multivariate statistics separately for clastic, carbonate, and evaporite rocks, the findings from these analyses allow the development of prediction equations from large artificial data sets that predict matrix thermal conductivity within an error of 4 to 11%. These equations are validated successfully on a comprehensive subsurface data set from the NGB. In comparison to earlier published, formation-dependent approaches developed for certain areas, the newly developed equations show a significant error reduction of up to 50%. These results are used to infer rock thermal conductivity for entire borehole profiles. By inversion of corrected in-situ thermal-conductivity profiles, temperature profiles are calculated and compared to measured high-precision temperature logs. The resulting uncertainty in temperature prediction averages < 5%, which demonstrates the excellent temperature-prediction capability of the presented approach. In conclusion, data and methods are provided that achieve a much more detailed parameterization of thermal models.
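The classical two-component mixing models evaluated in this work can be illustrated with a short sketch. This is a generic textbook formulation, not the study's corrected equations; the matrix and fluid conductivities and the porosity below are hypothetical sample values, not measurements from the NEGB wells.

```python
def mixing_models(lam_matrix, lam_fluid, porosity):
    """Two-component mixing models for the bulk thermal conductivity of a
    porous rock (solid matrix + pore fluid), all values in W/(m*K).

    The arithmetic and harmonic means are the upper and lower bounds;
    the geometric mean lies in between and gave the best fit in the study
    (the Hashin-Shtrikman and effective-medium means are omitted here)."""
    phi = porosity
    arithmetic = (1 - phi) * lam_matrix + phi * lam_fluid
    harmonic = 1.0 / ((1 - phi) / lam_matrix + phi / lam_fluid)
    geometric = lam_matrix ** (1 - phi) * lam_fluid ** phi
    return arithmetic, geometric, harmonic

# Hypothetical water-saturated sandstone: matrix 5.0 W/(m*K),
# water 0.6 W/(m*K), 20% porosity
a, g, h = mixing_models(5.0, 0.6, 0.2)
```

For these illustrative inputs the harmonic mean is the lowest and the arithmetic mean the highest estimate, with the geometric mean in between; the study's model-dependent correction equations would then be applied on top of such raw model values.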
We prove a local-in-time existence and uniqueness theorem for classical solutions of the coupled Einstein-Euler system, and therefore establish the well-posedness of this system. We use the condition that the energy density might vanish or tend to zero at infinity, and that the pressure is a certain function of the energy density, conditions which are used to describe simplified stellar models. In order to achieve our goals we are compelled, by the complexity of the problem, to deal with these equations in a new type of weighted Sobolev spaces of fractional order. Besides their construction, we develop tools for PDEs and techniques for hyperbolic and elliptic equations in these spaces. The well-posedness is obtained in these spaces.
Countries processing raw coffee beans earn low incomes while having to fight the serious environmental problems caused by the by-products and wastewater generated during wet coffee processing. The aim of this work was to develop alternative methods of improving the quality of the waste by-products and thus making the process economically more attractive, with valorization options that can be brought to the coffee producers.
The type of processing influences not only the constitution of green coffee but also that of the by-products and wastewater. Therefore, coffee bean samples as well as by-products and wastewater collected at different production steps were analyzed. Results show that the composition of the wastewater depends on how much and how often the wastewater is recycled during processing. Concerning the coffee beans, results indicate that the proteins might be affected during processing, and a positive effect of the fermentation on the solubility and accessibility of proteins seems probable. The steps of coffee processing influence the different constituents of green coffee beans which, during roasting, give rise to aroma compounds and express the characteristics of roasted coffee beans. Since this group of compounds is involved in the Maillard reaction during roasting, coffee producers could utilize this possibility to improve the quality of green coffee beans and, finally, the quality of the coffee cup.
The valorization of coffee wastes through modification to activated carbon has been considered as a low-cost option, creating an adsorbent with the prospect of competing with commercial carbons. An activation protocol using spent coffee grounds and parchment was developed to assess their adsorption capacity for organic compounds. Spent coffee grounds and parchment proved to have an adsorption efficiency similar to that of commercial activated carbon.
The results of this study document significant information originating from the processing of the de-pulped to green coffee beans. Furthermore, the study showed that coffee parchment and spent coffee grounds can be valorized as a low-cost option to produce activated carbons. Further work needs to be directed to optimizing the activation methods to improve the quality of the materials produced, and to the viability of applying such methods in situ to bring coffee producers further valorization opportunities with environmental benefits.
Coffee producers would profit from establishing appropriate, simple technologies to improve green coffee quality, re-use coffee by-products, and valorize wastewater.
A lot has been published about the competencies needed by students in the 21st century (Ravenscroft et al., 2012). However, equally important are the competencies needed by educators in the new era of digital education. We review the key competencies for educators in light of the new methods of teaching and learning proposed by Massive Open Online Courses (MOOCs) and their on-campus counterparts, Small Private Online Courses (SPOCs).
What can we learn from climate data? : Methods for fluctuation, time/scale and phase analysis
(2006)
Since Galileo Galilei invented the first thermometer, researchers have tried to understand the complex dynamics of ocean and atmosphere by means of scientific methods. They observe nature and formulate theories about the climate system. For some decades, powerful computers have been capable of simulating the past and future evolution of climate. Time series analysis tries to link the observed data to the computer models: using statistical methods, one estimates characteristic properties of the underlying climatological processes that in turn can enter the models. The quality of an estimation is evaluated by means of error bars and significance testing. On the one hand, such a test should be capable of detecting interesting features, i.e. be sensitive. On the other hand, it should be robust and sort out false positive results, i.e. be specific. This thesis mainly aims to contribute to methodological questions of time series analysis, with a focus on sensitivity and specificity, and to apply the investigated methods to recent climatological problems. First, the inference of long-range correlations by means of Detrended Fluctuation Analysis (DFA) is studied. It is argued that power-law scaling of the fluctuation function, and thus long memory, may not be assumed a priori but has to be established. This requires investigating the local slopes of the fluctuation function. The variability characteristic for stochastic processes is accounted for by calculating empirical confidence regions. The comparison of a long-memory with a short-memory model shows that the inference of long-range correlations from a finite amount of data by means of DFA is not specific. When aiming to infer short memory by means of DFA, a local slope larger than $\alpha=0.5$ for large scales does not necessarily imply long memory. Also, a finite scaling of the autocorrelation function is shifted to larger scales in the fluctuation function.
It turns out that long-range correlations cannot be concluded unambiguously from the DFA results for the Prague temperature data set. In the second part of the thesis, an equivalence class of nonstationary Gaussian stochastic processes is defined in the wavelet domain. These processes are characterized by means of wavelet multipliers and exhibit well-defined time-dependent spectral properties; they allow one to generate realizations of any nonstationary Gaussian process. The dependency of the realizations on the wavelets used for the generation is studied, and bias and variance of the wavelet sample spectrum are calculated. To overcome the difficulties of multiple testing, an areawise significance test is developed and compared to the conventional pointwise test in terms of sensitivity and specificity. Applications to climatological and hydrological questions are presented. In the last part, the coupling between El Nino/Southern Oscillation (ENSO) and the Indian Monsoon on inter-annual time scales is studied by means of Hilbert transformation and a curvature-defined phase. This method allows one to investigate the relation of two oscillating systems with respect to their phases, independently of their amplitudes. The performance of the technique is evaluated using a toy model. From the data, distinct epochs are identified, especially two intervals of phase coherence, 1886-1908 and 1964-1980, confirming earlier findings from a new point of view. A significance test of high specificity corroborates these results. Also, so far unknown periods of coupling invisible to linear methods are detected. These findings suggest that the decreasing correlation during the last decades might be partly inherent to the ENSO/Monsoon system.
Finally, a possible interpretation of how volcanic radiative forcing could cause the coupling is outlined.
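The core of the DFA procedure studied in this thesis can be sketched in a few lines: integrate the mean-removed series, remove a linear trend in windows of size s, and read the scaling exponent off the log-log slope of the fluctuation function. This is a minimal first-order DFA for illustration, not the thesis's implementation; the local-slope inspection and empirical confidence regions it argues for are omitted.

```python
import numpy as np

def dfa(x, scales):
    """First-order DFA: fluctuation function F(s) over window sizes `scales`."""
    y = np.cumsum(x - np.mean(x))       # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        msq = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)            # linear detrending
            msq.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(msq)))
    return np.array(F)

# White noise should scale with alpha close to 0.5 (no long memory)
rng = np.random.default_rng(0)
x = rng.standard_normal(20000)
scales = np.array([16, 32, 64, 128, 256, 512])
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
```

As the thesis stresses, a single fitted slope like `alpha` here is not enough to claim long-range correlations: the local slopes of log F(s) versus log s must themselves be examined against confidence regions before power-law scaling is accepted.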
What Colin Reynolds could tell us about nutrient limitation, N:P ratios and eutrophication control
(2020)
Colin Reynolds exquisitely consolidated our understanding of the driving forces shaping phytoplankton communities and those setting the upper limit to biomass yield, with limitation typically shifting from light in winter to phosphorus in spring. Nonetheless, co-limitation is frequently postulated from enhanced growth responses to enrichments with both N and P or from N:P ranging around the Redfield ratio, concluding a need to reduce both N and P in order to mitigate eutrophication. Here, we review the current understanding of limitation through N and P and of co-limitation. We conclude that Reynolds is still correct: (i) Liebig's law of the minimum holds and reducing P is sufficient, provided the concentrations achieved are low enough; (ii) analyses of nutrient limitation need to exclude evidently non-limiting situations, i.e. where soluble P exceeds 3-10 µg/l, dissolved N exceeds 100-130 µg/l, and total P and N support high biomass levels with self-shading causing light limitation; (iii) additionally decreasing N to limiting concentrations may be useful in specific situations (e.g. shallow waterbodies with high internal P and pronounced denitrification); (iv) management decisions require local, situation-specific assessments. The value of research on stoichiometry and co-limitation lies in promoting our understanding of phytoplankton ecophysiology and community ecology.
This study analyzes the influence of local and regional climatic factors on the stable isotopic composition of rainfall in the Vietnamese Mekong Delta (VMD) as part of the Asian monsoon region. It is based on 1.5 years of weekly rainfall samples. In the first step, the isotopic composition of the samples is analyzed by local meteoric water lines (LMWLs) and single-factor linear correlations. Additionally, the contribution of several regional and local factors is quantified by multiple linear regression (MLR) of all possible factor combinations and by relative importance analysis. This approach is novel for the interpretation of isotopic records and enables an objective quantification of the explained variance in isotopic records for individual factors. In this study, the local factors are extracted from local climate records, while the regional factors are derived from atmospheric backward trajectories of water particles. The regional factors, i.e., precipitation, temperature, relative humidity and the length of backward trajectories, are combined with equivalent local climatic parameters to explain the response variables δ18O, δ2H, and d-excess of precipitation at the station of measurement.
The results indicate that (i) MLR can better explain the isotopic variation in precipitation (R² = 0.8) compared to single-factor linear regression (R² = 0.3); (ii) the isotopic variation in precipitation is controlled dominantly by regional moisture regimes (∼70 %) compared to local climatic conditions (∼30 %); (iii) the most important climatic parameter during the rainy season is the precipitation amount along the trajectories of air mass movement; (iv) the influence of local precipitation amount and temperature is not significant during the rainy season, unlike the regional precipitation amount effect; (v) secondary fractionation processes (e.g., sub-cloud evaporation) can be identified through the d-excess and take place mainly in the dry season, either locally for δ18O and δ2H, or along the air mass trajectories for d-excess. The analysis shows that regional and local factors vary in importance over the seasons and that the source regions and transport pathways, and particularly the climatic conditions along the pathways, have a large influence on the isotopic composition of rainfall. Although the general results have been reported qualitatively in previous studies (proving the validity of the approach), the proposed method provides quantitative estimates of the controlling factors, both for the whole data set and for distinct seasons. Therefore, it is argued that the approach constitutes an advancement in the statistical analysis of isotopic records in rainfall that can supplement or precede more complex studies utilizing atmospheric models. Due to its relative simplicity, the method can be easily transferred to other regions, or extended with other factors.
The results illustrate that the interpretation of the isotopic composition of precipitation as a recorder of local climatic conditions, as for example performed for paleorecords of water isotopes, may not be adequate in the southern part of the Indochinese Peninsula, and likely not in other regions affected by monsoon processes either. However, the presented approach could open a pathway towards better and seasonally differentiated reconstruction of paleoclimates based on isotopic records.
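The multi-factor regression at the heart of this approach can be sketched with ordinary least squares. The predictor names, coefficients, and data below are illustrative stand-ins, not the study's records, and the relative-importance analysis (which averages explained-variance contributions over all factor orderings) is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 80  # number of weekly rainfall samples (illustrative)

# Hypothetical standardized predictors: regional precipitation along
# backward trajectories, local precipitation amount, local temperature
X = rng.standard_normal((n, 3))

# Synthetic delta-18O-like response, dominated by the regional factor
y = -1.2 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * X[:, 2] \
    + 0.4 * rng.standard_normal(n)

# OLS fit with intercept, then R-squared of the multiple regression
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ beta
r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
```

Because the synthetic response is built mostly from the first (regional) predictor, the multiple regression explains far more variance than any single-factor fit would, mirroring the study's contrast between R² = 0.8 (MLR) and R² = 0.3 (single factor).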
What did Cain say to Abel?
(2009)
Aim
Although research and clinical definitions of psychotherapeutic competence have been proposed, less is known about the layperson perspective. The aim was to explore the views of individuals with different levels of psychotherapy experience regarding what, in their view, constitutes a competent therapist.
Method
In an online survey, 375 persons (64% female, mean age 33.24 years) with no experience, with professional experience, or with personal pre-experience with psychotherapy participated. To provide low-threshold questions, we first presented two qualitative items (i.e. "In your opinion, what makes a good/competent psychotherapist?"; "How do you recognize that a psychotherapist is not competent?") and analysed them using inductive content analysis techniques (Mayring, 2014). Then, we gave participants a 16-item questionnaire including items from previous surveys and from the literature and analysed them descriptively.
Results
Work-related principles, professionalism, personality characteristics, caring communication, and empathy and understanding were important categories of competence. Concerning the quantitative questions, most participants agreed with items indicating that a therapist should be open, listen well, show empathy, and behave responsibly.
Conclusion
Investigating layperson perspectives suggested that effective and professional interpersonal behaviour of therapists plays a central role in the public's perception of psychotherapy.
The goal of this paper is to study the demand factors driving enrollment in massive open online courses. Using course-level data from a French MOOC platform, we study the course-, teacher-, and institution-related characteristics that influence the enrollment decision of students, in a setting where enrollment is open to all students without administrative barriers. Media coverage of the course, both social and traditional, is a key driver. In addition, the language of instruction and the (estimated) amount of work needed to complete the course also have a significant impact. The data also suggest that the presence of same-side externalities is limited. Finally, preferences of national and international students tend to differ on several dimensions.
This study examines the course and driving forces of recent vegetation change in the Mongolian steppe. A sediment core covering the last 55 years from a small closed-basin lake in central Mongolia was analyzed for its multi-proxy record at annual resolution. Pollen analysis shows that the highest abundances of planted Poaceae and the highest vegetation diversity occurred during 1977-1992, reflecting agricultural development in the lake area. A decrease in diversity and an increase in Artemisia abundance after 1992 indicate enhanced vegetation degradation in recent times, most probably because of overgrazing and farmland abandonment. Human impact is the main factor in the vegetation degradation within the past decades, as revealed by a series of redundancy analyses, while climate change and soil erosion play subordinate roles. High Pediastrum (a green alga) influx, high atomic total organic carbon/total nitrogen (TOC/TN) ratios, abundant coarse detrital grains, and the decrease of δ13Corg and δ15N since about 1977, but particularly after 1992, indicate that abundant terrestrial organic matter and nutrients were transported into the lake and caused lake eutrophication, presumably because of intensified land use. Thus, we infer that the transition to a market economy in Mongolia since the early 1990s not only caused dramatic vegetation degradation but also affected the lake ecosystem through anthropogenic changes in the catchment area.
In this talk, I would like to share my experiences gained from participating in four CSP solver competitions and the second ASP solver competition. In particular, I’ll talk about how various programming techniques can make huge differences in solving some of the benchmark problems used in the competitions. These techniques include global constraints, table constraints, and problem-specific propagators and labeling strategies for selecting variables and values. I’ll present these techniques with experimental results from B-Prolog and other CLP(FD) systems.
The COVID-19 pandemic created the largest experiment in working from home. We study how persistent telework may change energy and transport consumption and costs in Germany, to assess the distributional and environmental implications if working from home sticks. Based on data from the German Microcensus and available classifications of working-from-home feasibility for different occupations, we calculate the change in energy consumption and travel to work when 15% of employees work full time from home. Our findings suggest that telework translates into an annual increase in heating energy expenditure of 110 euros per worker and a decrease in transport expenditure of 840 euros per worker. All income groups would gain from telework, but high-income workers gain twice as much as low-income workers. The value of time saving is between 1.3 and 6 times greater than the savings from reduced travel costs, and almost 9 times higher for high-income workers than for low-income workers. The direct effects on CO₂ emissions due to reduced car commuting amount to 4.5 million tons of CO₂, representing around 3 percent of carbon emissions in the transport sector.
What is it good for?
(2023)
Military conflicts and wars affect a country’s development in various dimensions. Rising inflation rates are a potentially important economic effect associated with conflict. High inflation can undermine investment, weigh on private consumption, and threaten macroeconomic stability. Furthermore, these effects are not necessarily restricted to the locality of the conflict, but can also spill over to other countries. Therefore, to understand how conflict affects the economy and to make a more comprehensive assessment of the costs of armed conflict, it is important to take inflationary effects into account. To disentangle the conflict-inflation nexus and to quantify this relationship, we conduct a panel analysis for 175 countries over the period 1950–2019. To capture indirect inflationary effects, we construct a distance-based spillover index. In general, the results of our analysis confirm a statistically significant positive direct association between conflicts and inflation rates. This finding is robust across various model specifications. Moreover, our results indicate that conflict-induced inflation is not solely driven by increasing money supply. Furthermore, we document a statistically significant positive indirect association between conflicts and inflation rates in uninvolved countries.
What is visualization?
(2011)
Over the last 20 years, information visualization has become a common tool in science and a growing presence in the arts and culture at large. However, the use of visualization in cultural research is still in its infancy. Based on the work in the analysis of video games, cinema, TV, animation, Manga, and other media carried out at the Software Studies Initiative at the University of California, San Diego over the last two years, a number of visualization techniques and methods particularly useful for cultural and media research are presented.
Extract: That the Celtic languages were of the Indo-European family was first recognised by Rasmus Christian Rask (*1787), a young Danish linguist, in 1818. However, the fact that he wrote in Danish meant that his discovery was not noted by the linguistic establishment until long after his untimely death in 1832. The same conclusion was arrived at independently of Rask and, apparently, of each other, by Adolphe Pictet (1836) and Franz Bopp (1837). This agreement between the foremost scholars made possible the completion of the picture of the spread of the Indo-European languages in the extreme west of the European continent. However, in the Middle Ages the speakers of Irish had no awareness of any special relationship between Irish and the other Celtic languages, and a scholar as linguistically competent as Cormac mac Cuillennáin (†908), or whoever compiled Sanas Chormaic, treated Welsh on the same basis as Greek, Latin, and the lingua northmannorum in the elucidation of the meaning and history of Irish words. [...]
In an effort to describe and produce different formats for video instruction, the research community in technology-enhanced learning, and MOOC scholars in particular, have focused on the general style of video production: whether it is a digitally scripted “talk-and-chalk” or a “talking head” version of a learning unit. Since these production styles include various sub-elements, this paper deconstructs the inherited elements of video production in the context of educational live-streams. Using over 700 videos from both synchronous and asynchronous modalities of large video-based platforms (YouTube and Twitch), 92 features were found in eight categories of video production. These include commonly analyzed features such as the use of a green screen and a visible instructor, but also less studied features such as social media connections and changing camera perspective depending on the topic being covered. Overall, the research results enable an analysis of common video production styles and a toolbox for categorizing new formats, independent of their final (a)synchronous use in MOOCs. Keywords: video production, MOOC video styles, live-streaming.
What Makes an Employer?
(2019)
As the policy debate on entrepreneurship increasingly centers on firm growth in terms of job creation, it is important to better understand which variables influence the first hiring decision and which ones influence the subsequent survival as an employer. Using the German Socio-Economic Panel (SOEP), we analyze what role individual characteristics of entrepreneurs play in sustainable job creation. While human and social capital variables influence the hiring decision and the survival as an employer in the same direction, we show that none of the personality traits affects the two outcomes in the same way. Some traits are relevant only for survival as an employer and do not influence the hiring decision; others even unfold a revolving-door effect, in the sense that employers tend to fail due to the same characteristics that positively influenced their hiring decision.
Regulatory focus is a motivational construct that describes humans’ motivational orientation during goal pursuit. It is conceptualized both as a chronic, trait-like orientation and as a momentary, state-like orientation. Whereas there is a large number of measures to capture chronic regulatory focus, measures for its momentary assessment are only just emerging. This paper presents the development and validation of a measure of Momentary–Chronic Regulatory Focus. Our development incorporates the distinction between self-guide and reference-point definitions of regulatory focus. Ideals and ought striving are the promotion and prevention dimensions in the self-guide system; gain and non-loss regulatory focus are the respective dimensions within the reference-point system. Three survey-based studies test the structure, psychometric properties, and validity of the measure in its version to assess chronic regulatory focus (two samples of working participants, N = 389, N = 672; one student sample [time 1, N = 105; time 2, n = 91]). In two further studies, an experience sampling study with students (N = 84, k = 1649) and a daily-diary study with working individuals (N = 129, k = 1766), the measure was applied to assess momentary regulatory focus. Multilevel analyses test the momentary measure’s factorial structure, provide support for its sensitivity to capture within-person fluctuations, and provide evidence for concurrent construct validity.
In this paper we report on our experiments in teaching computer science concepts with a mix of tangible and abstract object manipulations. The goal we set ourselves was to let pupils discover the challenges one has to meet to automatically manipulate formatted text. We worked with a group of 25 secondary school pupils (9-10th grade), and they were actually able to “invent” the concept of mark-up language. From this experiment we distilled a set of activities which will be replicated in other classes (6th grade) under the guidance of maths teachers.
The term Linked Data refers to connected information sources comprising structured data about a wide range of topics and for a multitude of applications. In recent years, the conceptional and technical foundations of Linked Data have been formalized and refined. To this end, well-known technologies have been established, such as the Resource Description Framework (RDF) as a Linked Data model or the SPARQL Protocol and RDF Query Language (SPARQL) for retrieving this information. Whereas most research has been conducted in the area of generating and publishing Linked Data, this thesis presents novel approaches for improved management. In particular, we illustrate new methods for analyzing and processing SPARQL queries. Here, we present two algorithms suitable for identifying structural relationships between these queries. Both algorithms are applied to a large number of real-world requests to evaluate the performance of the approaches and the quality of their results. Based on this, we introduce different strategies enabling optimized access of Linked Data sources. We demonstrate how the presented approach facilitates effective utilization of SPARQL endpoints by prefetching results relevant for multiple subsequent requests. Furthermore, we contribute a set of metrics for determining technical characteristics of such knowledge bases. To this end, we devise practical heuristics and validate them through thorough analysis of real-world data sources. We discuss the findings and evaluate their impact on utilizing the endpoints. Moreover, we detail the adoption of a scalable infrastructure for improving Linked Data discovery and consumption. As we outline in an exemplary use case, this platform is eligible both for processing and provisioning the corresponding information.
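The thesis's own algorithms for identifying structural relationships between SPARQL queries are not reproduced here. As a minimal sketch of the general idea, the hypothetical helpers below extract the triple patterns from two query strings with a rough regular-expression heuristic (not a full SPARQL parser) and compare them by Jaccard similarity; the function names and the similarity measure are illustrative assumptions, not the methods developed in the thesis.

```python
import re

def triple_patterns(query: str) -> set:
    """Extract triple patterns from the WHERE clause of a SPARQL query
    string using a simple regular expression (a rough heuristic only)."""
    body = re.search(r"\{(.*)\}", query, re.DOTALL)
    if not body:
        return set()
    patterns = set()
    for part in body.group(1).split("."):
        tokens = tuple(part.split())
        if len(tokens) == 3:  # subject, predicate, object
            patterns.add(tokens)
    return patterns

def structural_similarity(q1: str, q2: str) -> float:
    """Jaccard similarity over the queries' triple patterns:
    |intersection| / |union|."""
    a, b = triple_patterns(q1), triple_patterns(q2)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

q1 = "SELECT ?name WHERE { ?p foaf:name ?name . ?p foaf:age ?age }"
q2 = "SELECT ?name WHERE { ?p foaf:name ?name }"
print(structural_similarity(q1, q2))  # 0.5: one shared pattern of two
```

A similarity score like this could, for example, group structurally related requests so that results relevant to several subsequent queries are prefetched together, in the spirit of the optimized-access strategies the abstract describes.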
What’s in an Irish name?
(2006)
Content:
1. Introduction: The Irish Patronymic System Prior to 1600
2. Anglicisation Pressure
3. Anglicisation: 1600-1900
3.1. Phonetic Approximation
3.2. Simplification
3.3. Translation
3.4. Mistranslation
3.5. Equivalence with Existing English Surname
3.6. Multiplicity of Anglicised Forms
3.7. Anglicisation of Prefixes
4. The Call to De-Anglicise
5. Current Personal Naming Patterns in Ireland
5.1. Current Modern Irish
6. Traditional Naming: “X (Son/Daughter) of Y (Son/Daughter) of Z”
7. Nicknames
8. Conclusion