We report on very high energy (>100 GeV) gamma-ray observations of Swift J164449.3+573451, an unusual transient object first detected by the Swift Observatory and later detected by multiple radio, optical, and X-ray observatories. A total exposure of 28 hr was obtained on Swift J164449.3+573451 with the Very Energetic Radiation Imaging Telescope Array System (VERITAS) during 2011 March 28-April 15. We do not detect the source and place a differential upper limit on the emission at 500 GeV during these observations of 1.4 × 10^-12 erg cm^-2 s^-1 (99% confidence level). We also present time-resolved upper limits and use a flux limit averaged over the X-ray flaring period to constrain various emission scenarios that can accommodate both the radio-through-X-ray emission detected from the source and the lack of detection by VERITAS.
Business processes are commonly modeled using a graphical modeling language. The most widespread notation for this purpose is business process diagrams in the Business Process Modeling Notation (BPMN). In this article, we use the visual query language BPMN-Q for expressing patterns that are related to possible problems in such business process diagrams. We discuss two classes of problems that can be found frequently in real-world models: sequence flow errors and model fragments that can make the model difficult to understand.
By using a query processor, a business process modeler is able to identify possible errors in business process diagrams. Moreover, the erroneous parts of the business process diagram can be highlighted when an instance of an error pattern is found. This way, the modeler gets easy-to-understand feedback in the visual modeling language he or she is familiar with. This is an advantage over current validation methods, which usually lack this kind of intuitive feedback.
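The error-pattern idea can be illustrated outside BPMN-Q itself. The following Python sketch (hypothetical node names; not the paper's query processor) matches one common sequence flow error, an activity with no path to an end event, by simple graph reachability:

```python
from collections import deque

def reaches_end(graph, node, end_nodes):
    """Breadth-first search: can `node` reach any end event?"""
    seen, queue = {node}, deque([node])
    while queue:
        cur = queue.popleft()
        if cur in end_nodes:
            return True
        for nxt in graph.get(cur, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

def deadlocked_nodes(graph, end_nodes):
    """Nodes matching the error pattern 'no path to an end event'."""
    nodes = set(graph) | {n for succs in graph.values() for n in succs}
    return sorted(n for n in nodes if not reaches_end(graph, n, end_nodes))

# Hypothetical diagram: task C has no outgoing sequence flow to the end event E.
process = {"Start": ["A"], "A": ["B", "C"], "B": ["E"], "C": []}
print(deadlocked_nodes(process, {"E"}))  # → ['C']
```

A query processor over such patterns can report the matched nodes directly, which is what enables the visual highlighting described above.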
Miniature eye movements jitter the retinal image unceasingly, raising the question of how perceptual continuity is achieved during visual fixation. Recent work discovered suppression of visual bursts in the superior colliculus around the time of microsaccades, tiny jerks of the eyes that support visual perception while gaze is fixed. This finding suggests that corollary discharge, supporting visual stability when rapid eye movements drastically shift the retinal image, may also exist for the smallest saccades.
A business process is a set of steps designed to be executed in a certain order to achieve a business value. Such processes are often driven by and documented using process models. Nowadays, process models are also applied to drive process execution. Thus, correctness of business process models is a must. Much work has been devoted to checking general, domain-independent correctness criteria, such as soundness. However, business processes must also adhere to and show compliance with various regulations and constraints, the so-called compliance requirements. These are domain-dependent requirements.
In many situations, verifying compliance on a model level is of great value, since violations can be resolved at an early stage, prior to execution. However, this calls for using formal verification techniques, e.g., model checking, that are too complex for business experts to apply. In this paper, we utilize a visual language, BPMN-Q, to express compliance requirements visually in a way similar to that used by business experts to build process models. Still, using a pattern-based approach, each BPMN-Q graph has a formal temporal logic expression in computation tree logic (CTL). Moreover, the user is able to express constraints, i.e., compliance rules, regarding control flow and data flow aspects. In order to provide valuable feedback to a user in case of violations, we depend on temporal logic querying approaches as well as BPMN-Q to visually highlight paths in a process model whose execution causes violations.
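As a rough illustration of the pattern-to-CTL mapping (a sketch under assumed state names, not the paper's implementation), the common compliance pattern "a leads to b" corresponds to the CTL formula AG(a -> AF b), which can be checked on a small state-transition system by fixpoint computation:

```python
def af(states, trans, target):
    """Least fixpoint for AF target: target holds now, or all successors satisfy it."""
    sat = set(target)
    changed = True
    while changed:
        changed = False
        for s in states:
            if s not in sat and trans[s] and all(t in sat for t in trans[s]):
                sat.add(s)
                changed = True
    return sat

def holds_leads_to(states, trans, a_states, b_states, init):
    """Check AG(a -> AF b) over all states reachable from init."""
    af_b = af(states, trans, b_states)
    reach, stack = set(), [init]
    while stack:                      # collect reachable states
        s = stack.pop()
        if s not in reach:
            reach.add(s)
            stack.extend(trans[s])
    return all(s in af_b for s in reach if s in a_states)

# Hypothetical model: whenever activity 'a' executes (state s1),
# activity 'b' (state s2) follows on every path.
states = ["s0", "s1", "s2", "s3"]
trans = {"s0": ["s1"], "s1": ["s2"], "s2": ["s3"], "s3": ["s3"]}
print(holds_leads_to(states, trans, {"s1"}, {"s2"}, "s0"))  # → True
```

Rewiring s1 to bypass s2 makes the same check return False, which is the situation where the violating path would be highlighted in the model.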
Vitamin A metabolism is changed in donors after living-kidney transplantation: an observational study
(2011)
Background: The kidneys are essential for the metabolism of vitamin A (retinol) and its transport proteins retinol-binding protein 4 (RBP4) and transthyretin. Little is known about changes in serum concentration after living donor kidney transplantation (LDKT) as a consequence of unilateral nephrectomy, although an association of these parameters with the risk of cardiovascular diseases and insulin resistance has been suggested. Therefore, we analyzed the concentrations of retinol, RBP4, apoRBP4 and transthyretin in serum of 20 living-kidney donors and the respective recipients at baseline as well as 6 weeks and 6 months after LDKT.
Results: As a consequence of LDKT, the kidney function of recipients was improved while the kidney function of donors was moderately reduced within 6 weeks after LDKT. With regard to vitamin A metabolism, the recipients revealed higher levels of retinol, RBP4, transthyretin and apoRBP4 before LDKT in comparison to donors. After LDKT, the levels of all four parameters decreased in serum of the recipients, while retinol, RBP4 as well as apoRBP4 serum levels of donors increased and remained increased during the follow-up period of 6 months.
Conclusion: LDKT is generally regarded as beneficial for allograft recipients and not particularly detrimental for the donors. However, this study demonstrates that a moderate reduction of kidney function by unilateral nephrectomy resulted in an imbalance of components of vitamin A metabolism, with a significant increase of retinol, RBP4 and apoRBP4 concentrations in serum of donors.
This Letter reports on new methods and a consistent model for voltage-tunable optical transmission gratings. Elastomeric gratings were molded from holographically written surface relief gratings in an azobenzene sol-gel material. These were placed on top of a transparent electroactive elastomeric substrate. Two different electroactive substrate elastomers were employed, with a large range of prestretches. A novel finite-deformation theory was found to match the device response excellently, without fitting parameters. The results clearly show that the grating underwent pure-shear deformation and, more surprisingly, that the mechanical properties of the electroactive substrate did not affect device actuation. (C) 2011 Optical Society of America
Wavelet modelling of the gravity field by domain decomposition methods: an example over Japan
(2011)
With the advent of satellite gravity, large gravity data sets of unprecedented quality at low and medium resolution have become available. For local, high-resolution field modelling, they need to be combined with surface gravity data. Such models are then used for various applications, from the study of the Earth's interior to the determination of oceanic currents. Here we show how to realize such a combination in a flexible way using spherical wavelets and applying a domain decomposition approach. This iterative method, based on the Schwarz algorithms, allows us to split a large problem into smaller ones and avoids the calculation of the entire normal system, which may be huge if high resolution is sought over wide areas. A subdomain is defined as the harmonic space spanned by a subset of the wavelet family. Based on the localization properties of the wavelets in space and frequency, we define hierarchical subdomains of wavelets at different scales. On each scale, blocks of subdomains are defined by using a tailored spatial splitting of the area. The data weighting and regularization are iteratively adjusted for the subdomains, which makes it possible to handle heterogeneity in the data quality or the gravity variations. Different levels of approximation of the subdomain normal equations are also introduced, corresponding to building local averages of the data at different resolution levels.
We first provide the theoretical background on domain decomposition methods. Then, we validate the method with synthetic data, considering two kinds of noise: white noise and coloured noise. We then apply the method to data over Japan, where we combine a satellite-based geopotential model, EIGEN-GL04S, and a local gravity model from a combination of land and marine gravity data and an altimetry-derived marine gravity model. A hybrid spherical harmonics/wavelet model of the geoid is obtained at about 15 km resolution and a corrector grid for the surface model is derived.
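The Schwarz idea behind the method can be sketched on a toy problem. The following Python/NumPy example (a minimal illustration with an arbitrary two-block split, not the authors' wavelet setup) applies multiplicative Schwarz iteration with two overlapping index blocks to a small 1D Poisson system:

```python
import numpy as np

n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D Poisson (tridiagonal)
b = np.ones(n)

dom1 = np.arange(0, 12)   # two overlapping subdomains (hypothetical split)
dom2 = np.arange(8, 20)

x = np.zeros(n)
for _ in range(50):                       # multiplicative Schwarz sweeps
    for dom in (dom1, dom2):
        r = b - A @ x                     # global residual
        # local solve on the subdomain block; correction added to x
        x[dom] += np.linalg.solve(A[np.ix_(dom, dom)], r[dom])

x_direct = np.linalg.solve(A, b)
print(np.max(np.abs(x - x_direct)))       # tiny: the iteration has converged
```

Each sweep only factors the small subdomain blocks, never the full normal system; the overlap between the blocks is what drives convergence to the global solution.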
To understand the evolution and morphology of planetary nebulae, a detailed knowledge of their central stars is required. Central stars that exhibit emission lines in their spectra, indicating stellar mass loss, allow us to study the evolution of planetary nebulae in action. Emission-line central stars constitute about 10% of all central stars. Half of them are practically hydrogen-free Wolf-Rayet type central stars of the carbon sequence, [WC], that show strong emission lines of carbon and oxygen in their spectra. In this contribution we address the weak emission-line central stars (wels). These stars are poorly analyzed and their hydrogen content is mostly unknown. We obtained optical spectra that include the important Balmer lines of hydrogen for four weak emission-line central stars. We present the results of our analysis, provide spectral classification and discuss possible explanations for their formation and evolution.
The World Wide Web is becoming increasingly important as an application platform. However, the development of Web applications is often more complex than for the desktop. Web-based development environments like Lively Webwerkstatt can mitigate this problem by making the development process more interactive and direct. By moving the development environment into the Web, applications can be developed collaboratively in a Wiki-like manner. This report documents the results of the project seminar on Web-based Development Environments 2010. In this seminar, participants extended the Web-based development environment Lively Webwerkstatt. They worked in small teams on current research topics from the field of Web development and tool support for programmers and implemented their results in the Webwerkstatt environment.
Urban forests fulfil various functions, among them supporting the restoration process and meeting the aesthetic needs of urban residents. This article examines attitudes towards differently managed forests on the one hand and their influence on psychological well-being on the other. Results of empirical approaches from both fields show some inconsistency, suggesting that people have a more positive attitude towards wild forest areas, while the effect on well-being is more positive after a walk in tended forest areas. A discussion follows on the link between perception and the effect of urban forests. An outlook on necessary research reveals the need for longitudinal studies. The article concludes by showing management implications.
Which repair strategy does the language system deploy when it gets garden-pathed, and what can regressive eye movements in reading tell us about reanalysis strategies? Several influential eye-tracking studies on syntactic reanalysis (Frazier & Rayner, 1982; Meseguer, Carreiras, & Clifton, 2002; Mitchell, Shen, Green, & Hodgson, 2008) have addressed this question by examining scanpaths, i.e., sequential patterns of eye fixations. However, in the absence of a suitable method for analyzing scanpaths, these studies relied on simplified dependent measures that are arguably ambiguous and hard to interpret. We address the theoretical question of repair strategy by developing a new method that quantifies scanpath similarity. Our method reveals several distinct fixation strategies associated with reanalysis that went undetected in a previously published data set (Meseguer et al., 2002). One prevalent pattern suggests re-parsing of the sentence, a strategy that has been discussed in the literature (Frazier & Rayner, 1982); however, readers differed tremendously in how they orchestrated the various fixation strategies. Our results suggest that the human parsing system non-deterministically adopts different strategies when confronted with the disambiguating material in garden-path sentences.
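As a simplified illustration of quantifying scanpath similarity (the published method also takes fixation durations and spatial layout into account; this sketch does not), an edit distance over sequences of fixated sentence regions already distinguishes a regressive pattern from a straight left-to-right pass:

```python
def edit_distance(s1, s2):
    """Levenshtein distance between two fixation sequences of region labels."""
    prev = list(range(len(s2) + 1))
    for i, a in enumerate(s1, 1):
        cur = [i]
        for j, b in enumerate(s2, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (a != b)))  # substitution
        prev = cur
    return prev[-1]

# Hypothetical fixation sequences over sentence regions 1..5
forward   = [1, 2, 3, 4, 5]
regressed = [1, 2, 3, 4, 2, 3, 4, 5]  # regression back to region 2, then re-reading
print(edit_distance(forward, regressed))  # → 3
```

Pairwise distances of this kind can then be clustered to reveal the distinct fixation strategies the abstract describes.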
What is visualization?
(2011)
Over the last 20 years, information visualization has become a common tool in science and a growing presence in the arts and culture at large. However, the use of visualization in cultural research is still in its infancy. Based on the analysis of video games, cinema, TV, animation, manga and other media carried out at the Software Studies Initiative at the University of California, San Diego over the last two years, a number of visualization techniques and methods particularly useful for cultural and media research are presented.
Trait-based studies have become extremely common in plant ecology. Trait-based approaches often rely on the tacit assumption that intraspecific trait variability (ITV) is negligible compared to interspecific variability, so that species can be characterized by mean trait values. Yet, numerous recent studies have challenged this assumption by showing that ITV significantly affects various ecological processes. Accounting for ITV may thus strengthen trait-based approaches, but measuring trait values on a large number of individuals per species and site is not feasible. Therefore, it is important and timely to synthesize existing knowledge on ITV in order to (1) decide critically when ITV should be considered, and (2) establish methods for incorporating this variability. Here we propose a practical set of rules to identify circumstances under which ITV should be accounted for. We formulate a spatial trait variance partitioning hypothesis to highlight the spatial scales at which ITV cannot be ignored in ecological studies. We then refine a set of four consecutive questions on the research question, the spatial scale, the sampling design, and the type of studied traits, to determine case-by-case if a given study should quantify ITV and test its effects. We review methods for quantifying ITV and develop a step-by-step guideline to design and interpret simulation studies that test for the importance of ITV. Even in the absence of quantitative knowledge on ITV, its effects can be assessed by varying trait values within species within realistic bounds around the known mean values. We finish with a discussion of future requirements to further incorporate ITV within trait-based approaches. This paper thus delineates a general framework to account for ITV and suggests a direction towards a more quantitative trait-based ecology.
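The simulation guideline in the final step can be sketched in a few lines of Python (hypothetical species, abundances and coefficient of variation): trait values are varied within species around the known means, and the effect on a community-weighted mean is assessed:

```python
import random

# Hypothetical species mean trait values (e.g., SLA) and relative abundances
means     = {"sp1": 10.0, "sp2": 18.0, "sp3": 25.0}
abundance = {"sp1": 0.5,  "sp2": 0.3,  "sp3": 0.2}
cv = 0.15  # assumed intraspecific coefficient of variation

def cwm(traits):
    """Community-weighted mean of the trait values."""
    return sum(abundance[sp] * t for sp, t in traits.items())

random.seed(1)
cwm_fixed = cwm(means)  # species characterized by mean values only
# resample individual trait values within species around the means
cwm_itv = [cwm({sp: random.gauss(m, cv * m) for sp, m in means.items()})
           for _ in range(1000)]
spread = max(cwm_itv) - min(cwm_itv)
print(round(cwm_fixed, 2), round(spread, 2))
```

If the spread induced by resampling is large relative to the differences between communities under study, ITV cannot be ignored for that research question; if it is small, mean trait values suffice.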
Spatial numerical associations (SNAs) are prevalent yet their origin is poorly understood. We first consider the possible prime role of reading habits in shaping SNAs and list three observations that argue against a prominent influence of this role: (1) directional reading habits for numbers may conflict with those for non-numerical symbols, (2) short-term experimental manipulations can overrule the impact of decades of reading experience, (3) SNAs predate the acquisition of reading. As a promising alternative, we discuss behavioral, neuroscientific, and neuropsychological evidence in support of finger counting as the most likely initial determinant of SNAs. Implications of this "manumerical cognition" stance for the distinction between grounded, embodied, and situated cognition are discussed.
The open source computational fluid dynamics (CFD) wind model (CFD-WEM) for wind erosion research in the Xilingele grassland in Inner Mongolia (autonomous region, China) is compared with two open source CFD models, Gerris and OpenFOAM. The evaluation of these models was made according to software technology, implemented methods, handling, accuracy and calculation speed. All models were applied to the same wind tunnel data set. Results show that the simplest model, CFD-WEM, has the highest calculation speed with acceptable accuracy, while the most powerful, OpenFOAM, produces the simulation with the highest accuracy and the lowest calculation speed. Gerris lies between CFD-WEM and OpenFOAM: it calculates faster than OpenFOAM, and it is capable of solving different CFD problems. CFD-WEM is the optimal model to be further developed for wind erosion research in the Inner Mongolia grassland, considering its efficiency and the uncertainties of other input data. However, for other applications using CFD technology, Gerris and OpenFOAM can be good choices. This paper shows the powerful capability of open source CFD software in wind erosion studies and advocates more involvement of open source technology in wind erosion and related ecological research.