Pace-of-life syndromes
(2018)
This introduction to the topical collection "Pace-of-life syndromes: a framework for the adaptive integration of behaviour, physiology, and life history" provides an overview of conceptual, theoretical, methodological, and empirical progress in research on pace-of-life syndromes (POLSs) over the last decade. The topical collection has two main goals. First, we briefly describe the history of POLS research and provide a refined definition of POLS that is applicable to various key levels of variation (genetic, individual, population, species). Second, we summarise the main lessons learned from current POLS research included in this topical collection. Based on an assessment of the current state of the theoretical foundations and the empirical support of the POLS hypothesis, we propose (i) conceptual refinements of theory, particularly with respect to the role of ecology in the evolution of (sexual dimorphism in) POLS, and (ii) methodological and statistical approaches to the study of POLS at all major levels of variation. This topical collection further contains (iii) key empirical examples demonstrating how POLS structures may be studied in wild populations of (non-)human animals, and (iv) a modelling paper predicting POLS under various ecological conditions. Future POLS research will profit from the development of more explicit theoretical models and stringent empirical tests of model assumptions and predictions, an increased focus on how ecology shapes (sex-specific) POLS structures at multiple hierarchical levels, and the usage of appropriate statistical tests and study designs.
Significance statement: As an introduction to the topical collection, we summarise current conceptual, theoretical, methodological, and empirical progress in research on pace-of-life syndromes (POLSs), a framework for the adaptive integration of behaviour, physiology, and life history at multiple hierarchical levels of variation (genetic, individual, population, species).
Mixed empirical support of POLSs, particularly at the within-species level, calls for an evaluation and refinement of the hypothesis. We provide a refined definition of POLSs facilitating testable predictions. Future research on POLSs will profit from the development of more explicit theoretical models and stringent empirical tests of model assumptions and predictions, increased focus on how ecology shapes (sex-specific) POLSs structures at multiple hierarchical levels and the usage of appropriate statistical tests and study designs.
Background: Infliximab (IFX), an anti-TNF monoclonal antibody approved for the treatment of inflammatory bowel disease, is dosed per kg body weight (BW). However, the rationale for body size adjustment has not been unequivocally demonstrated [1], and first attempts to improve IFX therapy have been undertaken [2]. The aim of our study was to assess the impact of different dosing strategies (i.e. body size-adjusted and fixed dosing) on drug exposure and pharmacokinetic (PK) target attainment. For this purpose, a comprehensive simulation study was performed, using patient characteristics (n=116) from an in-house clinical database.
Methods: IFX concentration-time profiles of 1000 virtual, clinically representative patients were generated using a previously published PK model for IFX in patients with Crohn's disease [3]. For each patient, 1000 profiles accounting for PK variability were considered. The IFX exposure during maintenance treatment was compared for the following dosing strategies: i) fixed dosing, and dosing per ii) BW, iii) lean BW (LBW), iv) body surface area (BSA), v) height (HT), vi) body mass index (BMI), and vii) fat-free mass (FFM). For each dosing strategy, the variability in maximum concentration Cmax, minimum concentration Cmin (= C8weeks), and area under the concentration-time curve (AUC), as well as the percentage of patients achieving the PK target, Cmin = 3 μg/mL [4], were assessed.
Results: For all dosing strategies, the variability of Cmin (CV ≈ 110%) was highest compared to Cmax and AUC, and was of similar extent regardless of dosing strategy. The proportion of patients reaching the PK target (≈ 1/3) was approximately equal for all dosing strategies.
Our Conclusions
(2018)
Data analytics are moving beyond the limits of a single data processing platform. A cross-platform query optimizer is necessary to enable applications to run their tasks over multiple platforms efficiently and in a platform-agnostic manner. For the optimizer to be effective, it must consider data movement costs across different data processing platforms. In this paper, we present the graph-based data movement strategy used by RHEEM, our open-source cross-platform system. In particular, we (i) model the data movement problem as a new graph problem, which we prove to be NP-hard, and (ii) propose a novel graph exploration algorithm, which allows RHEEM to discover multiple hidden opportunities for cross-platform data processing.
OpenLL
(2018)
Today's rendering APIs lack robust functionality and capabilities for dynamic, real-time text rendering and labeling, which represent key requirements for 3D application design in many fields. As a consequence, most rendering systems are barely or not at all equipped with respective capabilities. This paper drafts the unified text rendering and labeling API OpenLL, intended to complement common rendering APIs, frameworks, and transmission formats. To this end, various uses of static and dynamic placement of labels are showcased and a text interaction technique is presented. Furthermore, API design constraints with respect to state-of-the-art text rendering techniques are discussed. This contribution is intended to initiate a community-driven specification of a free and open label library.
The Open Access strategy of the University of Potsdam pursues the goal of sustainably promoting open access to scholarly publications by researchers of the University of Potsdam. To this end, seven overarching strategic goals are defined, from which concrete fields of action are derived in a second step.
The strategy was noted with approval by the Senate of the University of Potsdam on 16 December 2015.
One particular challenge in the Internet of Things is the management of many heterogeneous things. These things are typically constrained devices with limited memory, power, network, and processing capacity. Configuring every device manually is a tedious task. We propose an interoperable way to configure an IoT network automatically using existing standards. The proposed NETCONF-MQTT bridge mediates between the constrained devices (speaking MQTT) and the network management standard NETCONF. The NETCONF-MQTT bridge dynamically generates YANG data models from the semantic description of the device capabilities based on the oneM2M ontology. We evaluate the approach for two use cases, i.e., an actuator scenario and a sensor scenario.
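The model-generation step can be illustrated with a minimal sketch: a device capability description is rendered into a YANG module skeleton. Note that the description format and the module layout below are assumptions for illustration only, not the actual oneM2M-derived structure used by the bridge.

```python
# Hypothetical sketch: deriving a YANG module skeleton from a device
# capability description (structure is illustrative, not the oneM2M ontology).

def to_yang(module_name, capabilities):
    """Render a minimal YANG module with one leaf per capability."""
    lines = [f"module {module_name} {{",
             f"  namespace \"urn:example:{module_name}\";",
             f"  prefix {module_name[:3]};"]
    for cap in capabilities:
        lines.append(f"  leaf {cap['name']} {{")
        lines.append(f"    type {cap['type']};")
        lines.append(f"    description \"{cap['description']}\";")
        lines.append("  }")
    lines.append("}")
    return "\n".join(lines)

# Toy capability description for a temperature sensor (assumed format).
sensor_caps = [
    {"name": "temperature", "type": "decimal64", "description": "Measured value"},
    {"name": "unit", "type": "string", "description": "Measurement unit"},
]
yang_text = to_yang("tempsensor", sensor_caps)
print(yang_text)
```

In the actual bridge, such a generated model would then be served to a NETCONF client, while the leaf values are read from and written to the device over MQTT.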
An IoT network may consist of hundreds of heterogeneous devices. Some of them may be constrained in terms of memory, power, processing, and network capacity. Manual network and service management of IoT devices is challenging. We propose the use of an ontology for IoT device descriptions, enabling automatic network management as well as service discovery and aggregation. Our IoT architecture approach ensures interoperability using existing standards, i.e., the MQTT protocol and Semantic Web technologies. We introduce virtual IoT devices and their semantic framework, deployed at the edge of the network. As a result, virtual devices are enabled to aggregate the capabilities of IoT devices, derive new services by inference, delegate requests/responses, and generate events. Furthermore, they can collect and pre-process sensor data. Performing these tasks at the edge overcomes the shortcomings of cloud usage regarding siloization, network bandwidth, latency, and speed. We validate our proposition by implementing a virtual device on a Raspberry Pi.
This paper describes architectural extensions for a dynamically scheduled processor so that it can be used in three different operation modes, ranging from high performance to high reliability. With minor hardware extensions of the control path, the resources of the superscalar data path can be used for high-performance execution, fail-safe operation, or fault-tolerant operation. This makes the processor architecture a very good candidate for applications with dynamically changing reliability requirements, e.g. automotive applications. The paper reports the hardware overhead of the extensions and investigates the performance penalties introduced by the fail-safe and fault-tolerant modes. Furthermore, a comprehensive fault simulation was carried out in order to investigate the fault coverage of the proposed approach.
Web-based E-Learning uses Internet technologies and digital media to deliver educational content to learners. In recent years, many universities have applied their capacity to produce Massive Open Online Courses (MOOCs). They have been offering MOOCs with the expectation of rendering a comprehensive online apprenticeship. Typically, an online content delivery process requires an Internet connection. However, broadband access has never been a readily available resource in many regions. In Africa, poor or no network connectivity is still predominantly experienced by Internet users, with devices frequently going offline each time they disconnect from a network. As a result, learning processes in such regions are often disrupted, delayed, or terminated. This paper raises the concern of E-Learning under poor- and low-bandwidth conditions and highlights the need for an offline-enabled mode. The paper also explores technical approaches aimed at enhancing the user experience in Web-based E-Learning, particularly in Africa.
Zero-shot learning in Language & Vision is the task of correctly labelling (or naming) objects of novel categories. Another strand of work in L&V aims at pragmatically informative rather than "correct" object descriptions, e.g. in reference games. We combine these lines of research and model zero-shot reference games, where a speaker needs to successfully refer to a novel object in an image. Inspired by models of "rational speech acts", we extend a neural generator to become a pragmatic speaker reasoning about uncertain object categories. As a result of this reasoning, the generator produces fewer nouns and names of distractor categories as compared to a literal speaker. We show that this conversational strategy for dealing with novel objects often improves communicative success, in terms of resolution accuracy of an automatic listener.
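The rational-speech-acts reasoning the generator builds on can be sketched with a toy lexicon. The objects, utterances, and softmax temperature below are illustrative assumptions, not the paper's neural model; the sketch only shows why a pragmatic speaker avoids distractor names when referring to a novel object.

```python
# Minimal rational-speech-acts (RSA) sketch: a pragmatic speaker scores
# utterances by how well a literal listener would resolve the referent.
import math

objects = ["novel_mug", "dog"]
utterances = ["thing", "dog"]
# Literal semantics: does the word apply to the object? (toy lexicon)
applies = {("thing", "novel_mug"): 1.0, ("thing", "dog"): 1.0,
           ("dog", "novel_mug"): 0.0, ("dog", "dog"): 1.0}

def literal_listener(utt):
    """P_L0(object | utterance): normalize literal semantics over objects."""
    scores = [applies[(utt, o)] for o in objects]
    total = sum(scores)
    return [s / total for s in scores]

def pragmatic_speaker(target, alpha=4.0):
    """P_S1(utterance | target) ∝ exp(alpha * log P_L0(target | utterance))."""
    utils = []
    for u in utterances:
        p = literal_listener(u)[objects.index(target)]
        utils.append(math.exp(alpha * math.log(p)) if p > 0 else 0.0)
    total = sum(utils)
    return {u: w / total for u, w in zip(utterances, utils)}

# To refer to the novel object, the pragmatic speaker avoids the
# distractor's name ("dog") and falls back to the vague term ("thing").
probs = pragmatic_speaker("novel_mug")
```

The same effect (fewer nouns and distractor-category names) emerges in the paper's setting because naming a distractor category would mislead the listener away from the novel target.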
The US perennially has a far higher poverty rate than peer rich democracies [1]. This high poverty rate in the US presents an enormous challenge to population health, given that considerable research demonstrates that being in poverty is bad for one's health [2]. Despite valuable contributions of prior research on income and mortality, the quantity of mortality associated with poverty in the US remains unknown. In this cohort study, we estimated the association between poverty and mortality and quantified the proportion and number of deaths associated with poverty.
Tailed bacteriophages specific for Gram-negative bacteria encounter lipopolysaccharide (LPS) during the first infection steps. Yet, it is not well understood how the biochemistry of these initial interactions relates to the subsequent events that orchestrate phage adsorption and tail rearrangements to initiate cell entry. For many phages, the long O-antigen chains found on the LPS of smooth bacterial strains serve as an essential receptor recognized by their tailspike proteins (TSPs). Many TSPs are depolymerases, and O-antigen cleavage has been described as a necessary step for subsequent orientation towards a secondary receptor. However, O-antigen-specific host attachment does not always entail O-antigen degradation. In this issue of Molecular Microbiology, Prokhorov et al. report that coliphage G7C carries a TSP that deacetylates O-antigen but does not degrade it, whereas rough strains or strains lacking O-antigen acetylation remain unaffected. Bacteriophage G7C specifically functionalizes its tail by attaching the deacetylase TSP directly to a second TSP that is nonfunctional on the host's O-antigen. This challenges the view that bacteriophages use their TSPs only to clear their way to a secondary receptor. Rather, O-antigen-specific phages may employ enzymatically active TSPs as a tool for irreversible LPS membrane binding to initiate subsequent infection steps.
Nitrogen transformations in flowpaths leading from soils to streams in Amazon forest and pasture
(2009)
Neuruppiner Landschaften : ein Video. Großschmetterlinge Brandenburgs : ein Video [Begleitheft]
(1994)
We investigated online electrophysiological components of distributional learning, specifically of tones by listeners of a non-tonal language. German listeners were presented with a bimodal distribution of syllables with lexical tones from a synthesized continuum based on Cantonese level tones. Tones were presented in sets of four standards (within-category tokens) followed by a deviant (across-category token), and the mismatch negativity (MMN) was measured. Earlier behavioral data showed that exposure to this bimodal distribution improved both categorical perception and perceptual acuity for level tones [1]. Here, we present analyses of the electrophysiological response recorded during this exposure, i.e., the development of the MMN response during distributional learning. This development over time was analyzed using Generalized Additive Mixed Models; results showed that the MMN amplitude increased for both within- and across-category tokens, reflecting the higher perceptual acuity accompanying category formation. This is evidence that learners zooming in on phonological categories undergo neural changes associated with more accurate phonetic perception.
Point clouds provide high-resolution topographic data, which are often classified into bare-earth, vegetation, and building points and then filtered and aggregated into gridded Digital Elevation Models (DEMs) or Digital Terrain Models (DTMs). Based on these equally spaced grids, flow-accumulation algorithms are applied to describe hydrologic and geomorphologic mass transport on the surface. In this contribution, we propose a stochastic point-cloud filtering that, together with spatial bootstrap sampling, allows for flow accumulation directly on point clouds using Facet-Flow Networks (FFNs). Additionally, this provides a framework for quantifying uncertainties in point-cloud-derived metrics such as Specific Catchment Area (SCA), even though the flow accumulation itself is deterministic.
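The bootstrap component can be sketched generically: resample the point cloud with replacement and recompute a derived metric to obtain its sampling distribution. The metric below is a stand-in (mean elevation) rather than the paper's SCA computation, and the point cloud is synthetic.

```python
# Sketch of spatial bootstrapping on a point cloud: resample points with
# replacement, recompute a metric, and summarize its spread as an
# uncertainty estimate. Toy data; the metric is a stand-in for SCA.
import random

random.seed(42)
# Toy point cloud: (x, y, z) tuples with elevations around 100 m.
points = [(random.random(), random.random(), 100 + random.gauss(0, 1))
          for _ in range(500)]

def metric(sample):
    """Stand-in point-cloud metric: mean elevation of the sample."""
    return sum(p[2] for p in sample) / len(sample)

boot_vals = []
for _ in range(200):                              # bootstrap replicates
    sample = [random.choice(points) for _ in range(len(points))]
    boot_vals.append(metric(sample))

mean = sum(boot_vals) / len(boot_vals)
var = sum((v - mean) ** 2 for v in boot_vals) / (len(boot_vals) - 1)
std = var ** 0.5                                  # bootstrap standard error
```

In the paper's framework, the deterministic flow accumulation is run on each resampled/filtered realization, so the spread of the resulting SCA values quantifies the uncertainty inherited from the point cloud.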
Natur- und Technikphänomene
(2005)
Nanocarriers
(2017)
This study examined the relationships between the three phenotypic domains of the triarchic model of psychopathy (boldness, meanness, disinhibition) and electrophysiological indices of inhibitory control (NoGo-N2/NoGo-P3). EEG data from a 256-channel dense array were recorded while participants (135 undergraduates assessed via the Triarchic Psychopathy Measure) performed a Go/NoGo task with three types of stimuli (60% frequent-Go, 20% infrequent-Go, 20% infrequent-NoGo). N2 was defined as the mean amplitude between 240 ms and 340 ms after stimulus onset over fronto-central sensors on correct trials; P300 was defined as the mean amplitude between 350 ms and 550 ms after stimulus onset over centro-parietal sensors on correct trials. Multiple regression analyses using gender-corrected triarchic scores as predictors revealed that only Disinhibition scores significantly predicted reduced NoGo-N2 amplitudes (3.5% explained variance, beta weight = .23, p < .05) and reduced P3 amplitudes for NoGo and infrequent-Go trials (3.1% and 3.2% explained variance, respectively, beta weights = -.21, ps < .05). Our results indicate that high disinhibition entails deviations in early conflict-monitoring processes (reduced NoGo-N2), as well as in later evaluative and updating processing stages of infrequent events (reduced NoGo-P3 and infrequent-Go-P3). The null contribution of the meanness and boldness domains suggests that N2 and P3 amplitudes in Go/NoGo tasks could be considered neurobiological indices of the externalizing tendencies comprised in this personality disorder.
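The mean-amplitude measure used to define N2 and P300 can be sketched as follows. The epoch data are synthetic (a noisy negative deflection around 290 ms at a 1 kHz sampling rate); real pipelines average over artifact-free correct trials per condition and sensor group.

```python
# Sketch of the mean-amplitude ERP measure: average the signal inside a
# time window, over a set of sensors. Synthetic single-epoch data.
import numpy as np

t = np.arange(-100, 700)         # epoch time axis in ms (1 kHz sampling)
n_sensors = 4                    # stand-in for a fronto-central sensor group
rng = np.random.default_rng(0)
# Toy epoch: noise plus a negative deflection peaking near 290 ms.
eeg = rng.normal(0, 1, (n_sensors, t.size))
eeg += -3 * np.exp(-((t - 290) / 40.0) ** 2)

def mean_amplitude(data, times, start_ms, end_ms):
    """Mean amplitude in [start_ms, end_ms], averaged over sensors."""
    win = (times >= start_ms) & (times <= end_ms)
    return data[:, win].mean()

n2 = mean_amplitude(eeg, t, 240, 340)   # N2 window from the abstract
```

The P300 measure is the same computation with a 350-550 ms window over centro-parietal sensors; the resulting per-participant amplitudes are what enter the regression analyses.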
Comparative text mining extends from genre analysis and political bias detection to the revelation of cultural and geographic differences, through to the search for prior art across patents and scientific papers. These applications use cross-collection topic modeling for the exploration, clustering, and comparison of large sets of documents, such as digital libraries. However, topic modeling on documents from different collections is challenging because of domain-specific vocabulary. We present a cross-collection topic model combined with automatic domain term extraction and phrase segmentation. This model distinguishes collection-specific and collection-independent words based on information entropy and reveals commonalities and differences of multiple text collections. We evaluate our model on patents, scientific papers, newspaper articles, forum posts, and Wikipedia articles. In comparison to state-of-the-art cross-collection topic modeling, our model achieves up to 13% higher topic coherence, up to 4% lower perplexity, and up to 31% higher document classification accuracy. More importantly, our approach is the first topic model that ensures disjunct general and specific word distributions, resulting in clear-cut topic representations.
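The entropy criterion for separating collection-specific from collection-independent words can be sketched in isolation (the actual model embeds this inside topic inference). The counts and the 0.8 threshold below are toy assumptions for illustration.

```python
# Sketch of the entropy criterion: a word spread evenly across collections
# (high normalized entropy) is collection-independent; a word concentrated
# in one collection (low entropy) is collection-specific. Toy counts.
import math

def entropy(counts):
    """Shannon entropy (bits) of a word's distribution over collections."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

# Word frequencies per collection (e.g. patents vs. papers vs. news).
word_counts = {
    "method": [120, 110, 95],   # appears everywhere -> general vocabulary
    "claim":  [200, 3, 1],      # patent jargon -> collection-specific
}
max_h = math.log2(3)  # entropy of a uniform spread over 3 collections

labels = {w: ("independent" if entropy(c) / max_h > 0.8 else "specific")
          for w, c in word_counts.items()}
```

Routing words into disjoint general and specific distributions this way is what yields the clear-cut topic representations the abstract refers to.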
Water management tools are necessary to guarantee the preservation of natural resources while ensuring optimal utilization. Linear regression models are a simple and quick solution for creating prognostic capabilities, and multivariate models show higher precision than univariate models. In the case of Waiwera, implementing individual production rates is more accurate than applying just the total production rate. A maximum of approximately 1,075 m3/day can be pumped while ensuring a water level of at least 0.5 m a.s.l. in the monitoring well. The model should be renewed annually to incorporate new data and current water level trends in order to maintain its quality.
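The multivariate approach can be sketched with synthetic data: a least-squares fit of the monitoring-well water level against individual production rates. All coefficients, rates, and noise levels below are illustrative stand-ins, not the Waiwera measurements.

```python
# Sketch of a multivariate linear model: predict monitoring-well water
# level (m a.s.l.) from individual production rates (m3/day). Toy data.
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Individual production rates of three hypothetical wells.
rates = rng.uniform(100, 500, (n, 3))
# Synthetic "true" relation: level drops as production rises, plus noise.
level = 2.0 - 0.001 * rates @ np.array([1.0, 0.8, 1.2]) + rng.normal(0, 0.02, n)

X = np.column_stack([np.ones(n), rates])      # intercept + 3 predictors
coef, *_ = np.linalg.lstsq(X, level, rcond=None)

predicted = X @ coef
residual_std = (level - predicted).std()      # in-sample fit quality
```

With a fitted model of this form, the maximum total pumping rate compatible with a minimum water level can be read off by inverting the linear relation, which is the kind of prognostic use the abstract describes.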
The nature restoration project 'Lenzener Elbtalaue', realised from 2002 to 2011 at the river Elbe, included the first large-scale dike relocation in Germany (420 ha). Its aim was to initiate the development of endangered natural wetland habitats and processes, accompanied by greater biodiversity in the formerly grassland-dominated area. Monitoring the spatial and temporal variations of soil moisture in this dike relocation area is therefore particularly important for estimating the restoration success. The topsoil moisture monitoring from 1990 to 2017 is based on the Soil Moisture Index (SMI) [1] derived with the triangle method [2] from optical remotely sensed data: land surface temperature and the Normalized Difference Vegetation Index are calculated from Landsat 4/5/7/8 data and atmospherically corrected using MODIS data. Spatial and temporal soil moisture variations in the restored area of the dike relocation are compared to the agricultural and pasture area behind the new dike. Ground truth data in the dike relocation area were obtained from field measurements with an FDR device in October 2017. Additionally, data from a TERENO soil moisture sensor network (SoilNet) and mobile cosmic-ray neutron sensing (CRNS) rover measurements are compared to the results of the triangle method for a region in the Harz Mountains (Germany). The SMI time series illustrates that the dike relocation area has become significantly wetter between 1990 and 2017 due to the restoration measures, whereas the SMI of the dike hinterland reflects constant, drier conditions. An influence of climate is unlikely. However, validation of the dimensionless index with ground truth measurements is very difficult, mostly due to large differences in scale.
The emergence of cloud computing allows users to easily host their virtual machines with no up-front investment and the guarantee of availability anytime, anywhere. But when the Virtual Machine (VM) is hosted outside the user's premises, the user loses physical control of the VM, as it could be running on untrusted host machines in the cloud. A malicious host administrator could launch live memory dumping, Spectre, or Meltdown attacks in order to extract sensitive information from the VM's memory, e.g. passwords or cryptographic keys of applications running in the VM. In this paper, inspired by the moving target defense (MTD) scheme, we propose a novel approach to increase the security of an application's sensitive data in the VM by continuously moving the sensitive data among several memory allocations (blocks) in Random Access Memory (RAM). A movement function is added to the application source code so that it runs concurrently with the application's main function. Our approach could reduce the possibility of the VM's sensitive data in memory being leaked into a memory dump file by 25% and secure the sensitive data from Spectre and Meltdown attacks. Our approach's overhead depends on the number and the size of the sensitive data items.
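The idea of continuously relocating a secret can be sketched as follows. This is a simplified, illustrative Python analogue, not the paper's implementation: it relocates the secret into a fresh buffer and scrubs the old one, so that a memory snapshot taken at an arbitrary instant is less likely to capture an intact copy.

```python
# Sketch of the moving-target idea: periodically copy the secret into a
# freshly allocated buffer and overwrite the old allocation with zeros.
# In the paper, a movement function does this concurrently with the
# application's main function; here a single move is shown.
import os

class MovingSecret:
    def __init__(self, secret: bytes):
        self._buf = bytearray(secret)

    def move(self):
        """Relocate the secret and scrub the old allocation."""
        new_buf = bytearray(self._buf)          # fresh allocation
        old = self._buf
        self._buf = new_buf
        for i in range(len(old)):               # overwrite the stale copy
            old[i] = 0
        return old                              # returned only for inspection

    def get(self) -> bytes:
        return bytes(self._buf)

s = MovingSecret(os.urandom(16))
original = s.get()
scrubbed = s.move()                              # secret survives, old copy zeroed
```

A real implementation would run such moves on a timer in a separate thread and operate on raw RAM blocks, trading movement frequency (overhead) against the window in which a dump can capture the secret.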
Moving Forces
(2017)
Throughout a large part of the twentieth century, the body was interpreted as a field of signs, the meaning of which pointed to an unconscious dimension. At the height of the popularity of structuralism, Jacques Lacan deemed the unconscious to be “structured like a language.” Starting in the early 1990s, however, a deep shift occurred in the way the body was interpreted. A new movement cast tremendous doubt on the hegemony of language and instead advocated a performative, pictorial, and affective approach — the so-called material turn — which encompassed all of these. In the words of Karen Barad, this turn inquired as to why meaning, history, and truth are assigned to language only, whereas the movements of materiality are given less prominence: “How did language come to be more trustworthy than matter? Why are language and culture granted their own agency and historicity while matter is figured as passive and immutable?” With this shift toward the material, bodies began to be seen in a different light and their materiality understood as something that follows its own laws and movements, which cannot be understood exclusively in terms of social-cultural codes. Instead, these laws and movements call into question the very dichotomies of nature/culture and body/spirit.
Background and Aims: Ostarek et al. (2019) claimed a conclusive demonstration that language comprehension relies profoundly on visual simulations. They presented participants with visual noise during sentence-picture verification (SPV) and measured lateralized button response speed. The authors selectively eliminated the classical congruency effect (faster yes decisions when pictures match the objects implied by the sentences) with "high level" noise made from images of other objects. However, that visual noise included tool pictures, known to activate lateralized motor affordances. Moreover, some of their sentences described motor actions. This raises the question whether motor simulation may have contaminated their results.
Methods: Replicating Ostarek et al. (2019), 33 right-handed participants performed SPV but either without visual noise or while viewing (a) only left-handled, (b) only right-handled, or (c) alternatingly left- and right-handled tools. Accuracy and reaction times of manual yes responses were analyzed. Additionally, the hand-relatedness of the sentences was rated.
Results: Replicating Ostarek et al. (2019), the classical SPV congruency effect appeared without noise and vanished when alternatingly handled tools were presented. Crucially, it reappeared when noise objects were consistently either left- or right-handled. Higher hand-relatedness of sentence content reduced SPV performance, and accuracy was lower with right-handled noise.
Conclusion: First, we demonstrated an interaction between motor-related language, visual affordances, and motor responses in SPV. This result supports the embodied view of language processing. Second, we identified a motor process not previously known in SPV. This extends our understanding of mental simulation and calls for methodological controls in future studies.
802.15.4 security protects against the replay, injection, and eavesdropping of 802.15.4 frames. A core concept of 802.15.4 security is the use of frame counters for both nonce generation and anti-replay protection. While functional, frame counters (i) cause an increased energy consumption, as they incur a per-frame overhead of 4 bytes, and (ii) only provide sequential freshness. The Last Bits (LB) optimization does reduce the per-frame overhead of frame counters, yet at the cost of an increased RAM consumption and occasional energy- and time-consuming resynchronization actions. Alternatively, the time-slotted channel hopping (TSCH) media access control (MAC) protocol of 802.15.4 avoids the drawbacks of frame counters by replacing them with timeslot indices, but findings of Yang et al. question the security of TSCH in general. In this paper, we assume the use of ContikiMAC, a popular asynchronous MAC protocol for 802.15.4 networks. Under this assumption, we propose an Intra-Layer Optimization for 802.15.4 Security (ILOS), which intertwines 802.15.4 security and ContikiMAC. In effect, ILOS reduces the security-related per-frame overhead even more than the LB optimization and achieves strong freshness. Furthermore, unlike the LB optimization, ILOS neither incurs an increased RAM consumption nor requires resynchronization actions. Beyond that, ILOS integrates with and advances other security supplements to ContikiMAC. We implemented ILOS using OpenMotes and the Contiki operating system.
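The 4-byte per-frame overhead stems from the CCM* nonce construction used by 802.15.4 security, which combines the sender's 8-byte extended address, the 4-byte frame counter, and a security-level byte into a 13-byte nonce; only the frame counter must travel in every secured frame. A rough sketch (field order per the standard, example values hypothetical):

```python
# Sketch of the 802.15.4 CCM* nonce layout: 8-byte source extended
# address || 4-byte frame counter || 1-byte security level = 13 bytes.
# The frame counter is the only part carried in each secured frame.
import struct

def ccm_star_nonce(source_addr: bytes, frame_counter: int, sec_level: int) -> bytes:
    assert len(source_addr) == 8 and frame_counter < 2 ** 32
    return source_addr + struct.pack(">IB", frame_counter, sec_level)

# Hypothetical sender address, counter value, and security level.
nonce = ccm_star_nonce(b"\x00\x12\x4b\x00\x01\x02\x03\x04", 42, 5)
overhead = 4  # bytes of frame counter carried in each secured frame
```

Optimizations such as LB and ILOS attack exactly this 4-byte field: since both ends can largely track the counter themselves, most of it need not be transmitted if sender and receiver stay synchronized.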
MOOCs in Secondary Education
(2019)
Computer science education in German schools is often less than optimal: it is only mandatory in a few of the federal states, and there is a lack of qualified teachers. As a MOOC (Massive Open Online Course) provider with a German background, we developed the idea of implementing a MOOC addressing pupils in secondary schools to fill this gap. The course targeted high school pupils and enabled them to learn the Python programming language. In 2014, we successfully conducted the first iteration of this MOOC with more than 7000 participants. However, the share of pupils in the course was not quite satisfactory. We therefore conducted several workshops with teachers to find out why they had not used the course to the extent that we had imagined. The paper at hand explores and discusses the steps we have taken in the following years as a result of these workshops.
Monte-Carlo calculations are carried out to simulate light transport in dense materials. The focus lies on the calculation of diffuse light transmission through films of scattering and absorbing media, additionally considering the effect of dependent scattering. Different influences, such as the interaction type between particles, particle size, and composition, can be studied with this program. Simulations in this study show major influences on the diffuse transmission. Further simulations are carried out to model a sunscreen film and to study the best compositions of this film; these will be presented.
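A minimal sketch of such a simulation, with isotropic independent scattering only (no dependent-scattering correction) and illustrative optical parameters, shows how diffuse transmission through a slab is estimated by tracking photon random walks:

```python
# Minimal Monte-Carlo sketch of diffuse transmission through a slab with
# isotropic scattering and absorption. Parameters are illustrative; a real
# model would add phase functions, boundaries, and dependent scattering.
import math, random

random.seed(0)

def transmit(mu_s, mu_a, thickness, n_photons=20000):
    """Fraction of photons leaving the far side of the slab."""
    mu_t = mu_s + mu_a          # total interaction coefficient
    albedo = mu_s / mu_t        # scattering probability per interaction
    transmitted = 0
    for _ in range(n_photons):
        z, uz = 0.0, 1.0        # start at surface, heading inward
        while True:
            # Sample free path length from exponential distribution.
            step = -math.log(1.0 - random.random()) / mu_t
            z += uz * step
            if z >= thickness:  # left the far side: transmitted
                transmitted += 1
                break
            if z < 0:           # escaped back through the front: reflected
                break
            if random.random() > albedo:  # absorbed
                break
            uz = 2.0 * random.random() - 1.0  # isotropic new direction cosine
    return transmitted / n_photons

t_thin = transmit(mu_s=1.0, mu_a=0.1, thickness=0.5)
t_thick = transmit(mu_s=1.0, mu_a=0.1, thickness=3.0)
```

As expected, transmission decreases with optical thickness; dependent-scattering effects would enter by modifying the effective scattering coefficient at high particle concentrations.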
In this paper, the applicability of deep downhole geoelectrical monitoring for detecting CO2-related signatures is evaluated after a nearly ten-year period of CO2 storage at the Ketzin pilot site. Deep downhole electrode arrays have so far been studied as part of a multi-physical monitoring concept at four CO2 pilot test sites worldwide. For these sites, it was considered important to include the geoelectrical method in the measurement program for tracking the CO2 plume. The example of the Ketzin site shows that, during all phases of the CO2 storage reservoir development, the resistivity measurements and their corresponding tomographic interpretation contribute beneficially to the measurement, monitoring and verification (MMV) protocol. The most important impact of a permanent electrode array is its potential as a tool for estimating reservoir saturations.
Precision fruticulture addresses site or tree-adapted crop management. In the present study, soil and tree status, as well as fruit quality at harvest were analysed in a commercial apple (Malus × domestica 'Gala Brookfield'/Pajam1) orchard in a temperate climate. Trees were irrigated in addition to precipitation. Three irrigation levels (0, 50 and 100%) were applied. Measurements included readings of apparent electrical conductivity of soil (ECa), stem water potential, canopy temperature obtained by infrared camera, and canopy volume estimated by LiDAR and RGB colour imaging. Laboratory analyses of 6 trees per treatment were done on fruit considering the pigment contents and quality parameters. Midday stem water potential (SWP), normalized crop water stress index (CWSI) calculated from thermal data, and fruit yield and quality at harvest were analysed. Spatial patterns of the variability of tree water status were estimated by CWSI imaging supported by SWP readings. CWSI ranged from 0.1 to 0.7 indicating high variability due to irrigation and precipitation. Canopy volume data were less variable. Soil ECa appeared homogeneous in the range of 0 to 4 mS m-1. Fruit harvested in a drought stress zone showed enhanced portion of pheophytin in the chlorophyll pool. Irrigation affected soluble solids content and, hence, the quality of fruit. Overall, results highlighted that spatial variation in orchards can be found even if marginal variability of soil properties can be assumed.
During their evolution, massive stars are characterized by a significant loss of mass, either via spherically symmetric stellar winds or by aspherical mass-loss mechanisms, namely outflowing equatorial disks. However, the scenario that leads to the formation of a disk or rings of gas and dust around these objects is still under debate. Is it a viscous disk, an outflowing disk-forming wind, or some other mechanism? It is also unclear how the various physical mechanisms that act on the circumstellar environment of the stars affect its shape, density, kinematic, and thermal structure. We assume that the disk-forming mechanism is viscous transport within an equatorial outflowing disk of a rapidly or even critically rotating star. We study the hydrodynamic and thermal structure of the optically thick dense parts of outflowing circumstellar disks that may form around, e.g., Be stars, sgB[e] stars, or Pop III stars. We calculate self-consistent, time-dependent models of the inner dense region of the disk, which is strongly affected both by irradiation from the central star and by viscous heating effects. We also simulate the dynamic effects of collisions between expanding supernova ejecta and the circumstellar disks that may form around sgB[e] stars and, e.g., LBVs or Pop III stars.
Mixed-projection treemaps
(2017)
This paper presents a novel technique for combining 2D and 2.5D treemaps using multi-perspective views to leverage the advantages of both treemap types. It enables a new form of overview+detail visualization for tree-structured data and contributes new concepts for real-time rendering of and interaction with treemaps. The technique operates by tilting the graphical elements representing inner nodes using affine transformations and animated state transitions. We explain how to mix orthogonal and perspective projections within a single treemap. Finally, we show application examples that benefit from the reduced interaction overhead.
Mise-Unseen
(2019)
Creating or rearranging objects at runtime is needed in many virtual reality applications, but such changes are noticed when they occur inside the user's field of view. We present Mise-Unseen, a software system that applies such scene changes covertly inside the user's field of view. Mise-Unseen leverages gaze tracking to create models of user attention, intention, and spatial memory to determine if and when to inject a change. We present seven applications of Mise-Unseen that unnoticeably modify the scene within view (i) to hide that task difficulty is adapted to the user, (ii) to adapt the experience to the user's preferences, (iii) to time the use of low-fidelity effects, (iv) to detect user choice for passive haptics even when physical props are lacking, (v) to sustain physical locomotion despite a lack of physical space, (vi) to reduce motion sickness during virtual locomotion, and (vii) to verify user understanding during story progression. We evaluated Mise-Unseen and our applications in a user study with 15 participants and found that while gaze data indeed supports obfuscating changes inside the field of view, changes are rendered unnoticeable by combining gaze with common masking techniques.
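One building block of such a system, deciding whether a target lies far enough from the current gaze point for a change to plausibly go unnoticed, can be sketched as follows. The function name and the 10° eccentricity threshold are hypothetical, and the actual system combines gaze with masking techniques and richer models of attention and memory:

```python
import math

def safe_to_change(gaze_deg, target_deg, min_eccentricity_deg=10.0):
    """Gate a covert scene change on gaze eccentricity: only allow the
    change while the target sits in the visual periphery."""
    dx = target_deg[0] - gaze_deg[0]
    dy = target_deg[1] - gaze_deg[1]
    return math.hypot(dx, dy) >= min_eccentricity_deg

print(safe_to_change((0.0, 0.0), (3.0, 4.0)))   # False: only 5 degrees away
print(safe_to_change((0.0, 0.0), (12.0, 9.0)))  # True: 15 degrees away
```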
Minimising Information Loss on Anonymised High Dimensional Data with Greedy In-Memory Processing
(2018)
Minimising information loss on anonymised high-dimensional data is important for data utility. Syntactic data anonymisation algorithms address this issue by generating datasets that are neither use-case-specific nor dependent on runtime specifications. This results in anonymised datasets that can be re-used in different scenarios, which is performance efficient. However, syntactic data anonymisation algorithms incur high information loss on high-dimensional data, making the data unusable for analytics. In this paper, we propose an optimised exact quasi-identifier identification scheme, based on the notion of k-anonymity, to generate anonymised high-dimensional datasets efficiently and with low information loss. The optimised exact quasi-identifier identification scheme works by identifying and eliminating maximal partial unique column combination (mpUCC) attributes that endanger anonymity. By using in-memory processing to handle the attribute selection procedure, we significantly reduce the processing time required. We evaluated the effectiveness of our proposed approach with an enriched dataset drawn from multiple real-world data sources, augmented with synthetic values generated in close alignment with the real-world data distributions. Our results indicate that in-memory processing drops attribute selection time for the mpUCC candidates from 400 s to 100 s, while significantly reducing information loss. In addition, we achieve a time-complexity speed-up of O(3^(n/3)) ≈ O(1.4422^n).
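The mpUCC elimination above builds on the notion of unique column combinations. A brute-force sketch of discovering minimal unique column combinations (the quasi-identifier candidates that endanger anonymity) is given below; this is a naive illustration of the underlying concept, not the optimised in-memory scheme the paper proposes:

```python
from itertools import combinations

def unique_column_combinations(rows, max_size=None):
    """Return minimal column index sets whose value tuples are unique across
    all rows (candidate quasi-identifiers). Brute force over subsets; real
    systems prune the lattice aggressively."""
    n_cols = len(rows[0])
    max_size = max_size or n_cols
    uccs = []
    for k in range(1, max_size + 1):
        for cols in combinations(range(n_cols), k):
            # skip supersets of an already-found UCC: keep only minimal ones
            if any(set(u) <= set(cols) for u in uccs):
                continue
            values = [tuple(r[c] for c in cols) for r in rows]
            if len(set(values)) == len(values):
                uccs.append(cols)
    return uccs

rows = [
    ("f", 34, "12435"),
    ("m", 34, "12435"),
    ("f", 29, "12435"),
]
print(unique_column_combinations(rows))  # [(0, 1)]: sex + age identify each row
```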
Microstructure Characterisation of Advanced Materials via 2D and 3D X-Ray Refraction Techniques
(2018)
3D imaging techniques have enormous potential for understanding the microstructure, its evolution, and its link to mechanical, thermal, and transport properties. In this conference paper we report the use of a powerful, yet not so widespread, set of X-ray techniques based on refraction effects. X-ray refraction allows determining the internal specific surface (surface per unit volume) non-destructively, in a position- and orientation-sensitive manner, and with nanometric detectability. We demonstrate showcases of ceramics and composite materials, where microstructural parameters could be obtained in a way unrivalled even by high-resolution techniques such as electron microscopy or computed tomography. We present an in situ analysis of the damage evolution in an Al/Al2O3 metal matrix composite under tensile load, and the identification of void formation (different kinds of defects, particularly unsintered powder hidden in pores, and small inhomogeneities such as cracks) in Ti64 parts produced by selective laser melting, using synchrotron X-ray refraction radiography and tomography.
Secondary mica minerals collected from the Santa Helena (W-(Cu) mineralization) and Venise (W-Mo mineralization) endogenic breccia structures were 40Ar/39Ar dated. The muscovite 40Ar/39Ar data yielded 286.8 ± 1.2 Ma (±1σ; samples 6Ha and 11Ha), which reflects the age of secondary muscovite formation, probably from the alteration of magmatic biotite or feldspar. Sericite 40Ar/39Ar data yielded 280.9 ± 1.2 Ma to 279.0 ± 1.1 Ma (±1σ; samples 6Hb and 11Hb), reflecting the age of greisen alteration (T ≈ 300 °C) where the disseminated W mineralization occurs. The muscovite 40Ar/39Ar data of 277.3 ± 1.3 Ma and 281.3 ± 1.2 Ma (±1σ; samples 5 and 6) also reflect the age of muscovite (selvage) crystallized adjacent to molybdenite veins within the Venise breccia. The geochronological data obtained confirm that the W mineralization at the Santa Helena breccia is older than the Mo mineralization at the Venise breccia. Moreover, the combined duration of hydrothermal circulation and cooling was no longer than 7 Ma for the W-stage deposition and no longer than 4 Ma for the Mo deposition.
Metamaterial Devices
(2018)
In our hands-on demonstration, we show several objects whose functionality is defined by the objects' internal microstructure. Such metamaterial machines can (1) act as mechanisms based on their microstructures, (2) employ simple mechanical computation, or (3) change their outsides to interact with their environment. They are 3D-printed as one piece, and we support their creation by providing interactive software tools.
Modern routing algorithms reduce query time by depending heavily on preprocessed data. The recently developed Navigation Data Standard (NDS) enforces a separation between algorithms and map data, rendering preprocessing inapplicable. Furthermore, map data is partitioned into tiles with respect to their geographic coordinates. With the limited memory found in portable devices, the number of tiles loaded becomes the major factor for run time. We study routing under these restrictions and present new algorithms as well as empirical evaluations. Our results show that, on average, the most efficient algorithm presented uses more than 20 times fewer tile loads than a normal A*.
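Since run time under the NDS restriction is dominated by the number of distinct tiles loaded, a natural cost model is a caching loader that counts first-time tile accesses. A minimal sketch (class and interface hypothetical):

```python
class TileLoader:
    """Counts tile loads: with an in-memory cache, an algorithm's dominant
    cost is the number of distinct tiles it touches, not total accesses."""

    def __init__(self):
        self._cache = {}
        self.loads = 0

    def get(self, tile_id):
        if tile_id not in self._cache:
            # simulate fetching the tile from storage
            self._cache[tile_id] = {"id": tile_id}
            self.loads += 1
        return self._cache[tile_id]

loader = TileLoader()
for tile in [1, 2, 1, 3, 2, 1]:
    loader.get(tile)
print(loader.loads)  # 3 distinct tiles loaded despite 6 accesses
```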
The maximum entropy method is used to derive an alternative gravity model for a transport network. The proposed method builds on previous methods that assign the discrete value of a maximum entropy distribution to equal the traffic flow rate. The proposed method, however, uses a distribution to represent each flow rate. It is shown to handle uncertainty more elegantly while giving similar results to traditional methods, and it is able to incorporate more of the observed data through the entropy function, prior distribution, and integration limits, potentially allowing better inferences to be made.
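The abstract does not give the model equations; the classical doubly constrained entropy-maximizing gravity model that such methods build on assigns flows T_ij = A_i O_i B_j D_j exp(-beta c_ij), with balancing factors A_i, B_j found by iteration. A minimal sketch (parameter values illustrative):

```python
import math

def gravity_model(origins, destinations, cost, beta=0.1, iters=50):
    """Doubly constrained entropy-maximizing gravity model:
    T_ij = A_i * O_i * B_j * D_j * exp(-beta * c_ij),
    with balancing factors A_i, B_j computed by fixed-point iteration."""
    n, m = len(origins), len(destinations)
    f = [[math.exp(-beta * cost[i][j]) for j in range(m)] for i in range(n)]
    a = [1.0] * n
    b = [1.0] * m
    for _ in range(iters):
        for i in range(n):
            a[i] = 1.0 / sum(b[j] * destinations[j] * f[i][j] for j in range(m))
        for j in range(m):
            b[j] = 1.0 / sum(a[i] * origins[i] * f[i][j] for i in range(n))
    return [[a[i] * origins[i] * b[j] * destinations[j] * f[i][j]
             for j in range(m)] for i in range(n)]

O = [100.0, 200.0]            # trips produced at each origin
D = [150.0, 150.0]            # trips attracted to each destination
C = [[1.0, 4.0], [4.0, 1.0]]  # travel costs between zones
T = gravity_model(O, D, C, beta=0.5)
# Row sums reproduce the origin totals; column sums the destination totals.
```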
Max Weber
(2005)
The website contains selected works of Max Weber in full text. The Potsdam internet edition "PIA" follows the old editions of the 1920s (the "Marianne editions"), on which most Weber scholarship to date is also based. The project of making Weber's works available electronically arose initially from the need for new indexes. We work in educational science, and for this field the indexes available so far are quite inadequate. That has now been remedied: in future, readers of Weber from all disciplines will be able to compile their own indexes. All of the texts below can be downloaded and, with the help of suitable software, processed further for keyword and quotation searches as well as, of course, for more demanding content analyses, linguistic studies, and other projects. The selection of works included here follows no systematic rationale. We wanted to make a start and restricted ourselves to those texts that were at hand in old editions, because the more recent editions are protected by copyright. Important works are missing: the writings on the stock exchange, "Wirtschaft und Gesellschaft", the study on Confucianism, the sociology of music, the writings on the Russian Revolution, and others.
We present a project combining lidar, photometer, and particle counter data with a regularization software tool for a closure study of aerosol microphysical property retrieval. In a first step, only lidar data are used to retrieve the particle size distribution (PSD). Secondly, photometer data are added, which results in good consistency of the retrieved PSDs. Finally, the retrieved PSDs may be compared with the PSD measured by a particle counter. The data shown here were taken in Ny-Ålesund, Svalbard, as an example.
A hybrid approach to the hierarchical physical implementation design flow is presented and demonstrated on a fault-tolerant, low-power multiprocessor system. The proposed flow makes it possible to implement selected submodules in parallel under contrary requirements, such as identical placement versus individual block implementation. The overall system contains four Leon2 cores, communicates via the Waterbear framework, and supports Adaptive Voltage Scaling (AVS) functionality. Three of the processor core variants are derived from the first baseline reference core but implemented individually at block level based on their clock tree specification. The chip is prepared for space applications and designed with triple modular redundancy (TMR) for control parts. Low-power performance is enabled by contemporary power and clock management control. The ASIC is fabricated in a low-power 0.13 µm BiCMOS technology node.
The document summarises the essential aspects of Rosh Sukka.
Mobile expressive rendering has gained increasing popularity among users seeking casual creativity through image stylization, and supports the development of mobile artists as a new user group. In particular, neural style transfer has advanced as a core technology for emulating the characteristics of manifold artistic styles. However, when it comes to creative expression, the technology still faces inherent limitations in providing low-level controls for localized image stylization. This work enhances state-of-the-art neural style transfer techniques with a generalized user interface offering interactive tools that facilitate a creative and localized editing process. We first propose a problem characterization representing trade-offs between visual quality, run-time performance, and user control. We then present MaeSTrO, a mobile app for the orchestration of neural style transfer techniques using iterative, multi-style generative and adaptive neural networks that can be locally controlled by on-screen painting metaphors. Initial user tests indicate different levels of satisfaction with the implemented techniques and interaction design.
A distinguishing feature of Answer Set Programming is that all atoms belonging to a stable model must be founded. That is, an atom must not only be true but provably true. This can be made precise by means of the constructive logic of Here-and-There, whose equilibrium models correspond to stable models. One way of looking at foundedness is to regard Boolean truth values as ordered by letting true be greater than false. Then, each Boolean variable takes the smallest truth value that can be proven for it. This idea was generalized by Aziz to ordered domains and applied to constraint satisfaction problems. As before, the idea is that a variable, say an integer variable, gets assigned only the smallest integer that can be justified. In this paper, we present a logical reconstruction of Aziz's idea in the setting of the logic of Here-and-There. More precisely, we start by defining the logic of Here-and-There with lower-bound founded variables along with its equilibrium models, and elaborate upon its formal properties. Finally, we compare our approach with related ones and sketch future work.
LoANs
(2019)
Recently, deep neural networks have achieved remarkable performance on the tasks of object detection and recognition. This success is mainly grounded in the availability of large-scale, fully annotated datasets, but creating such a dataset is a complicated and costly task. In this paper, we propose a novel method for weakly supervised object detection that simplifies the process of gathering data for training an object detector. We train an ensemble of two models that work together in a student-teacher fashion. Our student (localizer) is a model that learns to localize an object; the teacher (assessor) assesses the quality of the localization and provides feedback to the student. The student uses this feedback to learn how to localize objects and is thus entirely supervised by the teacher, as we use no labels for training the localizer. In our experiments, we show that our model is very robust to noise and reaches competitive performance compared to a state-of-the-art fully supervised approach. We also show the simplicity of creating a new dataset based on a few videos (e.g. downloaded from YouTube) and artificially generated data.
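The abstract leaves the assessor's scoring function unspecified; a standard localization-quality measure such a teacher could learn to approximate is intersection over union (IoU) between a predicted and a reference bounding box. A minimal sketch (function name illustrative):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2):
    1.0 for a perfect localization, 0.0 for disjoint boxes."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7: partial overlap
print(iou((0, 0, 1, 1), (0, 0, 1, 1)))  # 1.0: exact match
```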