The current financial reporting environment, with its increasing use of accounting estimates, including fair value estimates, suggests that unethical accounting estimates may be a growing concern. This paper provides explanations and empirical evidence for why some types of accounting estimates in financial reporting may promote a form of ethical blindness. These types of ethical blindness can have an escalating effect that corrupts not only an individual or organization but also the accounting profession and the public interest it serves. Ethical blindness in the standards of professional accountants may be a factor in the extent of misreporting, and may have taken on new urgency as a result of the proposals to change the conceptual framework for financial reporting using international standards. The social consequences for users of financial statements can be huge. The acquittal of former Nortel executives on fraud charges related to accounting manipulations is viewed by many as legitimizing accounting gamesmanship. This decision illustrates that the courts may not be the best place to deal with ethical reporting issues. The courts may be relied on for only the most egregious unethical conduct and, even then, the accounting profession is ill equipped to assist the legal system in prosecuting accounting fraud unless the standards have been clarified. We argue that the problem of unethical reporting should be addressed by the accounting profession itself, preferably as a key part of the conceptual framework that supports accounting and auditing standards, and the codes of ethical conduct that underpin the professionalism of accountants.
This article introduces the juxtaposed notions of liberal and neo-liberal gameplay in order to show that, while forms of contemporary game culture are heavily influenced by neo-liberalism, they often appear under a liberal disguise. The argument is grounded in Claus Pias’ idea of games as always being a product of their time in terms of economic, political and cultural history. The article shows that romantic play theories (e.g. Schiller, Huizinga and Caillois) centre on the notion of play as ‘free’, which emerged in parallel with the philosophy of liberalism and related socio-economic developments such as industrialization and the rise of the nation state. It shows further that contemporary discourse in computer game studies addresses computer game/play as if it were still the romantic form of play rooted in the paradigm of liberalism. The article holds that an account acknowledging the neo-liberalist underpinnings of computer games is better suited to addressing contemporary computer games, among them phenomena such as free-to-play games, which repeat the structures of a neo-liberal society. In these games the players invest time and effort in developing their skills, although the future value of those skills is mainly speculative, just as is the case for citizens of neo-liberal societies.
Apart from its central role in the 3D structure determination of proteins, backbone chemical shift assignment is the basis for a number of applications, such as chemical shift perturbation mapping and studies of protein dynamics. This assignment is not a trivial task even if a 3D protein structure is known, and when performed manually it needs almost as much effort as the assignment for structure prediction. We present here a new algorithm based solely on 4D [H-1, N-15]-HSQC-NOESY-[H-1, N-15]-HSQC spectra which is able to assign a large percentage of chemical shifts (73-82 %) unambiguously, as demonstrated for proteins of up to 250 residues. For the remaining residues, the possible assignments are filtered down to a small number. This is done by comparing distances in the 3D structure to restraints obtained from the peak volumes in the 4D spectrum. Using dead-end elimination, assignments in which at least one of the restraints is violated are removed. Including additional information from chemical shift predictions, a complete unambiguous assignment was obtained for ubiquitin, and 95 % of the residues were correctly assigned in the 251-residue N-terminal domain of enzyme I. The program, including source code, is available at https://github.com/thomasexner/4Dassign.
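To make the filtering step concrete, the following is a minimal sketch of restraint-based dead-end elimination in Python; the data structures, tolerance, and function names are illustrative assumptions and do not reproduce the actual 4Dassign implementation.

    from math import inf

    def prune_assignments(candidates, distances, restraints, tolerance=1.0):
        """Remove candidate spin-system-to-residue assignments that violate
        at least one NOE-derived distance restraint.

        candidates : dict mapping spin system -> set of candidate residues
        distances  : dict mapping (residue_i, residue_j) -> distance in the
                     3D structure (Angstrom), assumed keyed in both orders
        restraints : dict mapping (spin_i, spin_j) -> upper distance bound
                     derived from the 4D peak volume (Angstrom)
        """
        changed = True
        while changed:                      # repeat until no further eliminations
            changed = False
            for (si, sj), upper in restraints.items():
                for ri in set(candidates.get(si, ())):
                    # ri survives only if some partner residue rj satisfies the restraint
                    if not any(distances.get((ri, rj), inf) <= upper + tolerance
                               for rj in candidates.get(sj, ())):
                        candidates[si].discard(ri)
                        changed = True
        return candidates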
Macrocycles with quaterthiophene subunits were obtained by cyclooligomerization via direct oxidative coupling of unsubstituted dithiophene moieties. The rings were closed with high selectivity by an α,β′-connection of the thiophenes, as proven by NMR spectroscopy. The reaction of the precursor with terthiophene moieties yielded the symmetric α,α′-linked macrocycle in low yield together with various differently connected isomers. Blocking the β-position of the half-rings selectively yielded the α,α′-linked macrocycle. Selected cyclothiophenes were investigated by scanning tunneling microscopy, which revealed the formation of highly ordered 2D crystalline monolayers.
Spatio-temporal control of cellular uptake achieved by photoswitchable cell-penetrating peptides
(2016)
The selective uptake of compounds into specific cells of interest is a major objective in cell biology and drug delivery. By incorporating a novel, thermostable azobenzene moiety, we generated peptides that can be switched optically between an inactive state and an active, cell-penetrating state with excellent spatio-temporal control.
We present a statistical analysis of phase space density data from the first 26 months of the Van Allen Probes mission. In particular, we investigate the relationship between the tens to hundreds of keV seed electrons and the >1 MeV core radiation belt electron population. Using a cross-correlation analysis, we find that the seed and core populations are well correlated, with a coefficient of approximately 0.73 at a time lag of 10-15 h. We present evidence of a seed population threshold that is necessary for subsequent acceleration. The depth of penetration of the seed population determines the inner boundary of the acceleration process. However, we show that an enhanced seed population alone is not enough to produce acceleration at higher energies, implying that the seed population of hundreds of keV electrons is only one of several conditions required for MeV electron radiation belt acceleration.
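As an illustration only, a lagged cross-correlation of this kind can be computed along the following lines; the array names and the hourly cadence are assumptions rather than the authors' actual processing chain.

    import numpy as np

    def lagged_correlation(seed, core, max_lag_hours=48):
        """Correlate the core population against the seed population for a
        range of positive lags (core responding after the seed)."""
        lags = np.arange(0, max_lag_hours + 1)
        coeffs = []
        for lag in lags:
            if lag == 0:
                a, b = seed, core
            else:
                a, b = seed[:-lag], core[lag:]   # shift the core series back by 'lag' hours
            coeffs.append(np.corrcoef(a, b)[0, 1])
        return lags, np.asarray(coeffs)

    # For hourly-binned phase space density series, the lag maximizing the
    # coefficient (expected near 10-15 h here) is lags[np.argmax(coeffs)].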
In this study, we complement the notion of equilibrium states of the radiation belts with a discussion of the dynamics and the time needed to reach equilibrium. We solve for the equilibrium states obtained using 1-D radial diffusion with recently developed hiss and chorus lifetimes at constant values of Kp = 1, 3, and 6. We find that the equilibrium states at moderately low Kp, when plotted versus L shell (L) and energy (E), display the same interesting S shape for the inner edge of the outer belt as recently observed by the Van Allen Probes. The S shape is also produced as the radiation belts dynamically evolve toward the equilibrium state when initialized to simulate the buildup after a massive dropout or to simulate loss due to outward diffusion from a saturated state. Physically, this shape, intimately linked with the slot structure, is due to the dependence of the electron loss rate (originating from wave-particle interactions) on both energy and L shell. Equilibrium electron flux profiles are governed by the Biot number (τ_diffusion/τ_loss), with a large Biot number corresponding to low fluxes and a low Biot number to large fluxes. The time it takes for the flux at a specific (L, E) to reach the value associated with the equilibrium state, starting from these different initial states, is governed by the initial state of the belts, the properties of the dynamics (diffusion coefficients), and the size of the computational domain. Its structure shows a rather complex scissor form in the (L, E) plane. The equilibrium value (phase space density or flux) is practically reachable only for selected regions in (L, E) and geomagnetic activity. Convergence to equilibrium requires hundreds of days in the inner belt for E > 300 keV and moderate Kp (≤ 3). It takes less time to reach equilibrium during disturbed geomagnetic conditions (Kp = 3), when the system evolves faster. Restricting our interest to the slot region, below L = 4, we find that only small regions in (L, E) space can reach the equilibrium value: E ≈ 200-300 keV for L = 3.7-4 at Kp = 1, E ≈ 0.6-1 MeV for L = 3-4 at Kp = 3, and E ≈ 300 keV for L = 3.5-4 at Kp = 6, assuming no new incoming electrons.
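For orientation, the 1-D radial diffusion equation with a loss term underlying such equilibrium computations can be written as below; the notation is the standard one in radiation belt modelling rather than taken verbatim from the paper, with the Biot number defined as in the abstract.

    \frac{\partial f}{\partial t}
      = L^{2}\,\frac{\partial}{\partial L}\!\left(\frac{D_{LL}}{L^{2}}\,
        \frac{\partial f}{\partial L}\right)
      - \frac{f}{\tau_{\mathrm{loss}}},
    \qquad
    \mathrm{Bi} = \frac{\tau_{\mathrm{diffusion}}}{\tau_{\mathrm{loss}}}

The equilibrium states discussed above correspond to setting ∂f/∂t = 0 in this equation for Kp-dependent diffusion coefficients and loss times.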
For more than 15 years, the Cluster mission has passed through Earth's radiation belts at least once every 2 days for several hours, measuring the electron intensity at energies from 30 to 400 keV. These data have previously been considered unusable due to contamination caused by penetrating energetic particles (protons at >100 keV and electrons at >400 keV). In this study, we assess the level of distortion of energetic electron spectra from the Research with Adaptive Particle Imaging Detector (RAPID)/Imaging Electron Spectrometer (IES) detector, determining the efficiency of its shielding. We base our assessment on the analysis of experimental data and on a radiation transport code (Geant4). In the simulations, we use the incident particle energy distribution of the AE9/AP9 radiation belt models. We identify the Roederer L values, L⋆, and energy channels that should be used with caution: at 3 ≤ L⋆ ≤ 4, all energy channels (40-400 keV) are contaminated by protons (≃230-630 keV and >600 MeV); at L⋆ ≃ 1 and 4-6, the energy channels at 95-400 keV are contaminated by high-energy electrons (>400 keV). Comparison of the data with electron and proton observations from RBSP/MagEIS indicates that subtracting proton fluxes at energies ≃230-630 keV from the IES electron data adequately removes the proton contamination. We demonstrate the usefulness of the corrected data for scientific applications.
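A schematic of the proton-subtraction correction mentioned above is sketched below in Python; the channel names and the single response factor are simplifying assumptions, not the RAPID/IES calibration itself.

    import numpy as np

    def decontaminate_electron_channel(electron_flux, proton_flux_230_630keV,
                                       response_factor=1.0):
        """Subtract the estimated proton contribution (measured in an overlapping
        energy band) from an IES-like electron channel on a common time grid."""
        corrected = (np.asarray(electron_flux)
                     - response_factor * np.asarray(proton_flux_230_630keV))
        return np.clip(corrected, 0.0, None)   # physical fluxes cannot be negative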
Low Earth orbiting geomagnetic satellite missions, such as the Swarm satellite mission, are the only means to monitor and investigate ionospheric currents on a global scale and to make in situ measurements of F region currents. High-precision geomagnetic satellite missions are also able to detect ionospheric currents during geomagnetically quiet conditions, which produce amplitudes of only a few nanotesla in the magnetic field. An efficient method to isolate the ionospheric signals from satellite magnetic field measurements has been the use of residuals between the observations and predictions from empirical geomagnetic models for the other geomagnetic sources, such as the core and lithospheric fields or signals from the quiet-time magnetospheric currents. This study highlights the importance of high-resolution magnetic field models that are able to predict the lithospheric field and that account for the quiet-time magnetosphere for reliably isolating signatures of ionospheric currents during geomagnetically quiet times. The effects on the detection of ionospheric currents arising from neglecting the lithospheric and magnetospheric sources are discussed using the example of four Swarm orbits during very quiet times. The respective orbits show a broad range of typical scenarios, such as strong and weak ionospheric signals (during day- and nighttime, respectively) superimposed on strong and weak lithospheric signals. If predictions for the lithosphere or magnetosphere are not properly considered, the amplitude of the ionospheric currents, such as the midlatitude Sq currents or the equatorial electrojet (EEJ), is modulated by 10-15 % in the examples shown. An analysis of several orbits above the African sector, where the lithospheric field is significant, showed that the peak value of the EEJ signature is in error by 5 % on average when lithospheric contributions are not considered, which is within the range of uncertainties of present empirical models of the EEJ.
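The residual approach described above amounts to subtracting model predictions for all non-ionospheric sources from the along-track observations; the sketch below assumes hypothetical model objects exposing a predict() method and is not an actual Swarm processing interface.

    import numpy as np

    def ionospheric_residual(b_observed, core_model, litho_model, magneto_model,
                             positions, times):
        """Isolate ionospheric signatures (in nT) by removing core, lithospheric,
        and quiet-time magnetospheric field predictions from the observations."""
        b_core = core_model.predict(positions, times)
        b_litho = litho_model.predict(positions, times)      # neglecting this term biases Sq/EEJ amplitudes
        b_magneto = magneto_model.predict(positions, times)
        return np.asarray(b_observed) - b_core - b_litho - b_magneto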
Generalizing the notion of a linear expression over a vector space, we call a term of an arbitrary type τ linear if each of its variables occurs only once. For linear terms, instead of the usual superposition of terms and the total many-sorted clone of all terms, we define a partial many-sorted superposition operation and the partial many-sorted clone that satisfies the superassociative law as a weak identity. The extensions of linear hypersubstitutions are weak endomorphisms of this partial clone. For a variety V of one-sorted total algebras of type τ, we define the partial many-sorted linear clone of V as the partial quotient algebra of the partial many-sorted clone of all linear terms by the set of all linear identities of V. We then prove that weak identities of this clone correspond to linear hyperidentities of V.
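As a toy illustration of the linearity condition only (not of the partial clone construction), a term can be represented as a nested tuple and checked for repeated variables:

    from collections import Counter

    def variables(term):
        """Yield the variables of a term given as a string (a variable) or as
        a tuple ('operation symbol', subterm_1, ..., subterm_n)."""
        if isinstance(term, str):
            yield term
        else:
            for sub in term[1:]:
                yield from variables(sub)

    def is_linear(term):
        """A term is linear if every variable occurring in it occurs only once."""
        return all(count == 1 for count in Counter(variables(term)).values())

    # is_linear(('f', 'x', ('g', 'y')))  -> True
    # is_linear(('f', 'x', ('g', 'x')))  -> False, since x occurs twice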
What shapes peace, and how can peace be successfully built in countries affected by armed conflict? This paper examines peacebuilding in the aftermath of civil wars in order to identify the conditions for post-conflict peace. The field of civil war research is characterised by case studies, comparative analyses and quantitative research, which relate relatively little to each other. Furthermore, the complex dynamics of peacebuilding have hardly been investigated so far. Thus, the question remains of how best to enhance the prospects of a stable peace in post-conflict societies. Therefore, it is necessary to capture the dynamics of post-conflict peace. This paper aims to help narrow these research gaps by 1) presenting the benefits of set-theoretic methods for peace and conflict studies; 2) identifying remote conflict-environment factors and proximate peacebuilding factors which influence the peacebuilding process; and 3) proposing a set-theoretic multi-method research approach in order to identify the causal structures and mechanisms underlying the complex realm of post-conflict peacebuilding. By implementing this transparent and systematic comparative approach, it will become possible to uncover the dynamics of post-conflict peace.
Answer Set Programming (ASP) is a powerful declarative programming paradigm that has been successfully applied to many different domains. Recently, ASP has also proved successful for hard optimization problems like course timetabling and travel allotment. In this paper, we approach another important task, namely, the shift design problem, aiming at an alignment of a minimum number of shifts in order to meet required numbers of employees (which typically vary for different time periods) in such a way that over- and understaffing is minimized. We provide an ASP encoding of the shift design problem, which, to the best of our knowledge, has not been addressed by ASP yet. Our experimental results demonstrate that ASP is capable of improving the best known solutions to some benchmark problems. Other instances remain challenging and make the shift design problem an interesting benchmark for ASP-based optimization methods.
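To make the optimization objective concrete, the small Python helper below computes the over- and understaffing of a candidate set of shifts against per-slot requirements; the shift representation is a simplification for exposition and does not reflect the paper's ASP encoding.

    def staffing_deviation(requirements, shifts):
        """requirements : list of required employee counts per time slot
        shifts       : list of (start_slot, length, workers) tuples
        Returns the total over- and understaffing summed over all slots."""
        cover = [0] * len(requirements)
        for start, length, workers in shifts:
            for slot in range(start, min(start + length, len(requirements))):
                cover[slot] += workers
        over = sum(max(c - r, 0) for c, r in zip(cover, requirements))
        under = sum(max(r - c, 0) for c, r in zip(cover, requirements))
        return over, under

    # A shift design solver would then search for a set of shifts minimizing the
    # number of distinct shifts together with over + under.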
For Vibrio cholerae, the coordinated import and export of Na+ is crucial for adaptation to habitats with different osmolarities. We investigated the Na+-extruding branch of the sodium cycle in this human pathogen by in vivo Na-23-NMR spectroscopy. The Na+ extrusion activity of cells was monitored after adding glucose, which stimulated respiration via the Na+-translocating NADH:quinone oxidoreductase (Na+-NQR). In a V. cholerae deletion mutant devoid of the Na+-NQR encoding genes (nqrA-F), rates of respiratory Na+ extrusion were decreased by a factor of four, but the cytoplasmic Na+ concentration was essentially unchanged. Furthermore, the mutant was impaired in the formation of a transmembrane voltage (Δψ, inside negative) and did not grow under hypoosmotic conditions at pH 8.2 or above. This growth defect could be complemented by transformation with the plasmid-encoded nqr operon. In an alkaline environment, Na+/H+ antiporters acidify the cytoplasm at the expense of the transmembrane voltage. It is proposed that, at alkaline pH and limiting Na+ concentrations, the Na+-NQR is crucial for the generation of a transmembrane voltage to drive the import of H+ by electrogenic Na+/H+ antiporters. Our study provides the basis for understanding the role of the Na+-NQR in the pathogenicity of V. cholerae and other pathogens relying on this primary Na+ pump for respiration.
Based on theories of scientific discovery learning (SDL) and conceptual change, this study explores students' preconceptions in the domain of torques in physics and the development of these conceptions while learning with a computer-based SDL task. As a framework we used a three-space theory of SDL and focused on the model space, which is supposed to contain the current conceptualization/model of the learning domain, and on its change through hypothesis testing and experimenting. Three questions were addressed: (1) What are students' preconceptions of torques before learning about this domain? To address this question, a multiple-choice test for assessing students' models of torques was developed and given to secondary school students (N = 47) who learned about torques using computer simulations. (2) How do students' models of torques develop during SDL? Working with the simulations led to the replacement of some misconceptions with physically correct conceptions. (3) Are there differential patterns of model development and, if so, how do they relate to students' use of the simulations? By analyzing individual differences in model development, we found that intensive use of the simulations was associated with the acquisition of correct conceptions. Thus, the three-space theory provided a useful framework for understanding conceptual change in SDL.
This survey on the theme of Geometry Education (including new technologies) focuses chiefly on the time span since 2008. Based on our review of the research literature published during this time span (in refereed journal articles, conference proceedings and edited books), we have jointly identified seven major threads of contributions that span from the early years of learning (pre-school and primary school) through to post-compulsory education and to the issue of mathematics teacher education for geometry. These threads are as follows: developments and trends in the use of theories; advances in the understanding of visuo-spatial reasoning; the use and role of diagrams and gestures; advances in the understanding of the role of digital technologies; advances in the understanding of the teaching and learning of definitions; advances in the understanding of the teaching and learning of the proving process; and moving beyond traditional Euclidean approaches. Within each thread, we identify relevant research and also offer commentary on future directions.
The three-space theory of problem solving predicts that the quality of a learner's model and the goal specificity of a task interact on knowledge acquisition. In Experiment 1 participants used a computer simulation of a lever system to learn about torques. They either had to test hypotheses (nonspecific goal), or to produce given values for variables (specific goal). In the good- but not in the poor-model condition they saw torque depicted as an area. Results revealed the predicted interaction. A nonspecific goal only resulted in better learning when a good model of torques was provided. In Experiment 2 participants learned to manipulate the inputs of a system to control its outputs. A nonspecific goal to explore the system helped performance when compared to a specific goal to reach certain values when participants were given a good model, but not when given a poor model that suggested the wrong hypothesis space. Our findings support the three-space theory. They emphasize the importance of understanding for problem solving and stress the need to study underlying processes.
Recent studies of short-term serial order memory have suggested that the maintenance of order information does not involve domain-specific processes. We carried out two dual-task experiments aimed at resolving several ambiguities in those studies. In our experiments, the encoding and response phases of one serial reconstruction task were embedded within the encoding and response phases of a concurrent serial reconstruction task. Order demands in both tasks were independently varied so as to find revealing patterns of interference between the two tasks. In Experiment 1, participants were to maintain and reconstruct the order of a list of verbal materials while maintaining a list of spatial materials, or vice versa. Increasing the order demands in the outer reconstruction task resulted in small or non-reliable performance decrements in the embedded reconstruction task. Experiment 2 sought to compare these results against two same-domain baseline conditions (two verbal lists or two spatial lists). In all conditions, increasing order demands in the outer task resulted in small or non-reliable performance decrements in the embedded task. However, performance in the embedded tasks was generally lower in the same-domain baseline conditions than in the cross-domain conditions. We argue that the main effect of domain in Experiment 2 indicates the contribution of domain-specific processes to short-term serial order maintenance. In addition, we interpret the failure to find consistent cross-list interference irrespective of domain as indicating the involvement of grouping mechanisms in concurrently performed serial order tasks.
In a series of experiments, we tested a recently proposed hypothesis stating that the degree of alignment between the form of a mental representation resulting from learning with a particular visualization format and the specific requirements of a learning task determines learning performance (task-appropriateness). Groups of participants were required to learn the stroke configuration, the stroke order, or the stroke directions of a set of Chinese pseudocharacters. For each learning task, participants were divided into groups receiving dynamic, static-sequential, or static visualizations. An old/new character recognition task was given at test. The results showed that learning both stroke configuration and stroke order was best with static pictures (Experiments 1 and 2), while there was no reliable difference between the groups for learning stroke direction (Experiment 3). An additional experiment, however, revealed that learning with sequential pictures was superior when testing was carried out with sequential pictures, irrespective of the learning task (Experiment 4). The combined evidence from all experiments speaks against task requirements playing a role in determining the effectiveness of a visualization format. Furthermore, the evidence supports the view that a high degree of congruence between information presented during learning and information presented at test results in better learning (study-test congruence). Implications for instructional design are discussed.