The author considers the heat equation in dimension one with singular drift and inhomogeneous space-time white noise. In particular, the quadratic variation measure of the white noise is not required to be absolutely continuous w.r.t. the Lebesgue measure, neither in space nor in time. Under some assumptions the author gives statements on strong and weak existence as well as strong and weak uniqueness of continuous solutions.
Three cDNAs encoding purple acid phosphatase (PAP) were cloned from potato (Solanum tuberosum L. cv. Desiree) and expression of the corresponding genes was characterised. StPAP1 encodes a low-molecular-weight PAP clustering with mammalian, cyanobacterial, and other plant PAPs. It was highly expressed in stem and root and its expression did not change in response to phosphorus (P) deprivation. StPAP2 and StPAP3 code for high-molecular-weight PAPs typical for plants. Corresponding gene expression was shown to be responsive to the level of P supply, with transcripts of StPAP2 and StPAP3 being most abundant in P-deprived roots or in both stem and roots, respectively. Root colonisation by arbuscular mycorrhizal fungi had no effect on the expression of any of the three PAP genes. StPAP1 mRNA is easily detectable along the root axis, including root hairs, but is barely detectable in root tips. In contrast, both StPAP2 and StPAP3 transcripts are abundant along the root axis but absent in root hairs, and are most abundant in the root tip. All three PAPs described contain a predicted N-terminal secretion signal and could play a role in extracellular P scavenging, P mobilisation from the rhizosphere, or cell wall regeneration.
A new efficient algorithm is presented for joint diagonalization of several matrices. The algorithm is based on the Frobenius-norm formulation of the joint diagonalization problem and addresses diagonalization with a general, non-orthogonal transformation. The iterative scheme of the algorithm is based on a multiplicative update which ensures the invertibility of the diagonalizer. The algorithm's efficiency stems from a special approximation of the cost function resulting in a sparse, block-diagonal Hessian to be used in the computation of the quasi-Newton update step. Extensive numerical simulations illustrate the performance of the algorithm and provide a comparison to other leading diagonalization methods. The results of this comparison demonstrate that the proposed algorithm is a viable alternative to existing state-of-the-art joint diagonalization algorithms. The practical use of our algorithm is shown for blind source separation problems.
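The Frobenius-norm cost and the multiplicative-update idea can be made concrete with a toy sketch. This is not the paper's quasi-Newton scheme (no block-diagonal Hessian approximation); it substitutes plain normalized gradient descent on the off-diagonal cost, and all names, step sizes, and iteration counts are illustrative:

```python
import numpy as np

def off_cost(W, Cs):
    """Sum of squared off-diagonal entries of W C W^T over all matrices."""
    total = 0.0
    for C in Cs:
        M = W @ C @ W.T
        total += np.sum(M ** 2) - np.sum(np.diag(M) ** 2)
    return total

def joint_diagonalize(Cs, n_iter=500, step=0.05):
    """Gradient sketch of non-orthogonal joint diagonalization.

    Multiplicative updates W <- (I - step * E) W keep W invertible for
    small steps; rows are renormalized to exclude the trivial minimum
    W -> 0 of the off-diagonal cost.
    """
    n = Cs[0].shape[0]
    W = np.eye(n)
    best_W, best_c = W.copy(), off_cost(W, Cs)
    for _ in range(n_iter):
        G = np.zeros((n, n))
        for C in Cs:
            M = W @ C @ W.T                 # symmetric for symmetric C
            O = M - np.diag(np.diag(M))     # off-diagonal part
            G += 4.0 * O @ M                # gradient w.r.t. the factor E
        W = (np.eye(n) - step * G / np.linalg.norm(G)) @ W
        W /= np.linalg.norm(W, axis=1, keepdims=True)
        c = off_cost(W, Cs)
        if c < best_c:
            best_W, best_c = W.copy(), c
    return best_W
```

Row renormalization handles the scale degeneracy that any non-orthogonal formulation must address in some way; the paper's actual algorithm resolves it within its quasi-Newton parametrization.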
We propose two methods that reduce the post-nonlinear blind source separation problem (PNL-BSS) to a linear BSS problem. The first method is based on the concept of maximal correlation: we apply the alternating conditional expectation (ACE) algorithm, a powerful technique from nonparametric statistics, to approximately invert the componentwise nonlinear functions. The second method is a Gaussianizing transformation, which is motivated by the fact that linearly mixed signals before the nonlinear transformation are approximately Gaussian distributed. This heuristic, but simple and efficient, procedure works as well as the ACE method. Using the framework provided by ACE, convergence can be proven. The optimal transformations obtained by ACE coincide with the sought-after inverse functions of the nonlinearities. After equalizing the nonlinearities, temporal decorrelation separation (TDSEP) allows us to recover the source signals. Numerical simulations testing "ACE-TD" and "Gauss-TD" on realistic examples are performed with excellent results.
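The Gaussianizing step has a particularly compact reading: pass each observed component through its empirical CDF, then through the inverse Gaussian CDF. The sketch below is an illustrative implementation of that idea, not the authors' code:

```python
import numpy as np
from statistics import NormalDist

def gaussianize(x):
    """Map a 1-D signal to an approximately standard-normal signal by
    composing its empirical CDF with the inverse Gaussian CDF."""
    x = np.asarray(x, dtype=float)
    ranks = np.argsort(np.argsort(x))   # rank of each sample, 0..n-1
    u = (ranks + 0.5) / x.size          # empirical CDF values in (0, 1)
    inv = NormalDist().inv_cdf
    return np.array([inv(p) for p in u])
```

Ranks are invariant under any strictly increasing map, so an unknown monotone post-nonlinearity leaves the output unchanged; this is what reduces the PNL problem to a linear one, after which a method such as TDSEP can separate the sources.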
We discuss the role of gravitational excitons/radions in different cosmological scenarios. Gravitational excitons are massive moduli fields which describe conformal excitations of the internal spaces and which, due to their Planck-scale suppressed coupling to matter fields, are WIMPs. It is demonstrated that, depending on the concrete scenario, observational cosmological data set strong restrictions on the allowed masses and initial oscillation amplitudes of these particles.
We investigate noise-controlled resonant response of active media to weak periodic forcing, both in excitable and oscillatory regimes. In the excitable regime, we find that noise-induced irregular wave structures can be reorganized into frequency-locked resonant patterns by weak signals with suitable frequencies. The resonance occurs due to a matching condition between the signal frequency and the noise-induced inherent time scale of the media. m:1 resonant regions similar to the Arnold tongues in frequency locking of self-sustained oscillatory media are observed. In the self-sustained oscillatory regime, noise also controls the oscillation frequency and significantly reshapes the Arnold tongues. The combination of noise and weak signal thus could provide an efficient tool to manipulate active extended systems in experiments.
The optical, structural, and electrical properties of thin layers made from poly(3-hexylthiophene) (P3HT) samples of different molecular weights are presented. As reported in a previous paper by Kline et al., Adv. Mater. 2003, 15, 1519, the mobilities of these layers are a strong function of the molecular weight, with the largest mobility found for the largest molecular weight. Atomic force microscopy studies reveal a complex polycrystalline morphology which changes considerably upon annealing. X-ray studies show the occurrence of a layered phase for all P3HT fractions, especially after annealing at 150 °C. However, there is no clear correlation between the differences in the transport properties and the data from structural investigations. In order to reveal the processes limiting the mobility in these layers, the transistor properties were investigated as a function of temperature. The mobility decreases continuously with increasing temperature; with the same trend, pronounced thermochromic effects of the P3HT films occur. Apparently, the polymer chains adopt a more twisted, disordered conformation at higher temperatures, leading to interchain transport barriers. We conclude that the backbone conformation of the majority of the bulk material, rather than the crystallinity of the layer, is the most crucial parameter controlling the charge transport in these P3HT layers. This interpretation is supported by the significant blue-shift of the solid-state absorption spectra with decreasing molecular weight, which is indicative of a larger distortion of the P3HT backbone in the low-molecular-weight P3HT layers.
Solute transport in a loess catchment: experimental evidence and numerical modelling.
(2004)
This paper examines the effect of uncertain initial soil moisture on hydrologic response at the plot scale (1 m^2) and the catchment scale (3.6 km^2) in the presence of threshold transitions between matrix and preferential flow. We adopt the concepts of microstates and macrostates from statistical mechanics. The microstates are the detailed patterns of initial soil moisture that are inherently unknown, while the macrostates are specified by the statistical distributions of initial soil moisture that can be derived from the measurements typically available in field experiments. We use a physically based model and ensure that it closely represents the processes in the Weiherbach catchment, Germany. We then use the model to generate hydrologic response to hypothetical irrigation events and rainfall events for multiple realizations of initial soil moisture microstates that are all consistent with the same macrostate. As the measures of uncertainty at the plot scale we use the coefficient of variation and the scaled range of simulated vertical bromide transport distances between realizations. At the catchment scale we use similar statistics derived from simulated flood peak discharges. The simulations indicate that at both scales the predictability depends on the average initial soil moisture state and is at a minimum around the soil moisture value where the transition from matrix to macropore flow occurs. The predictability increases with rainfall intensity. The predictability increases with scale, with maximum absolute errors of 90 and 32% at the plot scale and the catchment scale, respectively. It is argued that even if we assume perfect knowledge of the processes, the level of detail with which one can measure the initial conditions along with the nonlinearity of the system will set limits to the repeatability of experiments and limits to the predictability of models at the plot and catchment scales.
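The macrostate/microstate distinction and the predictability minimum at the flow threshold can be caricatured in a toy Monte Carlo, far simpler than the physically based catchment model used in the study. All numbers (threshold, moisture statistics, flow magnitudes) are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def response(theta, threshold=0.3):
    # toy threshold process: macropore flow (fast, large) switches on
    # where local soil moisture exceeds the threshold, else matrix flow
    return np.where(theta > threshold, 10.0, 1.0)

def cv_of_response(mean_theta, std=0.05, n_real=500, n_cells=100):
    """Spread of the plot-scale response across microstates that share
    one macrostate (a mean and standard deviation of soil moisture)."""
    outs = []
    for _ in range(n_real):
        theta = rng.normal(mean_theta, std, n_cells)  # one microstate
        outs.append(response(theta).mean())           # plot-scale response
    outs = np.array(outs)
    return outs.std() / outs.mean()                   # coefficient of variation
```

With these toy numbers the coefficient of variation is largest when the mean moisture sits at the matrix-to-macropore threshold and small far from it, mirroring the predictability minimum described above.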
A statistical model describing the propensity for protein aggregation is presented. Only amino-acid hydrophobicity values and calculated net charge are used for the model. The combined effects of hydrophobic patterns, as computed by the signal analysis technique recurrence quantification, plus calculated net charge were included in a function emphasizing the effect of singular hydrophobic patches, which were found to be statistically significant for predicting aggregation propensity as quantified by fluorescence studies obtained from the literature. These results suggest preliminary evidence for a mesoscopic principle for protein folding/aggregation.
The presence of partially folded intermediates along the folding funnel of proteins has been suggested to be a signature of potentially aggregating systems. Many studies have concluded that metastable, highly flexible intermediates are the basic elements of the aggregation process. In a previous paper, we demonstrated how the choice between aggregation and folding behavior was influenced by the patterning of the hydrophobicity distribution along the sequence, as quantified by recurrence quantification analysis (RQA) of the Miyazawa-Jernigan coded primary structures. In the present paper, we try to unify the "partially folded intermediate" and "hydrophobicity/charge" models of protein aggregation by verifying the ability of an empirical relation, developed for rationalizing the effect of different mutations on the aggregation propensity of acyl-phosphatase and based on the combination of hydrophobicity RQA and charge descriptors, to discriminate in a statistically significant way between two different protein populations: (a) proteins that fold by a process passing through partially folded intermediates and (b) proteins that do not present partially folded intermediates.
We investigate the effects of rotation on the behavior of the helium-burning shell source in accreting carbon-oxygen white dwarfs, in the context of the single degenerate Chandrasekhar mass progenitor scenario for type Ia supernovae (SNe Ia). We model the evolution of helium-accreting white dwarfs of initially 1 M☉, assuming four different constant accretion rates (2, 3, 5 and 10 × 10^-7 M☉/yr). In a one-dimensional approximation, we compute the mass accretion and subsequent nuclear fusion of helium into carbon and oxygen, as well as angular momentum accretion, angular momentum transport inside the white dwarf, and rotationally induced chemical mixing. Our models show two major effects of rotation: a) The helium-burning nuclear shell source in the rotating models is much more stable than in corresponding non-rotating models, which increases the likelihood that accreting white dwarfs reach the stage of central carbon ignition. This effect is mainly due to rotationally induced mixing at the CO/He interface which widens the shell source, and due to the centrifugal force lowering the density and degeneracy at the shell source location. b) The C/O ratio in the layers which experience helium shell burning, which may affect the energy of an SN Ia explosion, is strongly decreased by the rotationally induced mixing of α-particles into the carbon-rich layers. We discuss implications of our results for the evolution of SNe Ia progenitors.
We study the global singularity structure of solutions to 3-D semilinear wave equations with discontinuous initial data. More precisely, using Strichartz' inequality we show that the solutions stay conormal after nonlinear interaction if the Cauchy data are conormal along a circle.
The ethyl acetate extract of the stem bark of Erythrina abyssinica showed anti-plasmodial activity against the chloroquine-sensitive (D6) and chloroquine-resistant (W2) strains of Plasmodium falciparum with IC50 values of 7.9 +/- 1.1 and 5.3 +/- 0.7 μg/ml, respectively. From this extract, a new chalcone, 2,3,4,4'-tetrahydroxy-5-prenylchalcone (trivial name 5-prenylbutein), and a new flavanone, 4',7-dihydroxy-3'-methoxy-5'-prenylflavanone (trivial name 5-deoxyabyssinin II), along with known flavonoids, have been isolated as the anti-plasmodial principles. The structures were determined on the basis of spectroscopic evidence.
Suppression of the keto-emission in polyfluorene light-emitting diodes : Experiments and models
(2004)
The spectral characteristics of polyfluorene (PF)-based light-emitting diodes (LEDs) containing a defined low concentration of either keto-defects or of the polymer poly(9,9-dioctylfluorene-co-benzothiadiazole) (F8BT) are presented. Both types of blend layers were tested in different device configurations with respect to the relative and absolute intensities of the green and blue emission components. It is shown that blending hole-transporting molecules into the emission layer at low concentration or incorporation of a suitable hole-transport layer reduces the green emission contribution in the electroluminescence (EL) spectrum of the PF:F8BT blend, which is similar to what is observed for the keto-containing PF layer. We conclude that the keto-defects in PF homopolymer layers mainly constitute weakly emissive electron traps, in agreement with the results of quantum-mechanical calculations.
A commercially available Ir complex has been employed for the preparation of highly efficient single-layer phosphorescent polymer light-emitting diodes by use of appropriate thermal treatment and proper adjustment of the layer composition. These devices exhibit essentially no dependence of the driving field on the concentration of the Ir complex, suggesting that the build-up of space charge in the layer is insignificant.
We demonstrate efficient single-layer polymer phosphorescent light-emitting devices based on a green-emitting iridium complex and a polymer host co-doped with electron-transporting and hole-transporting molecules. These devices can be operated at relatively low voltages, resulting in a power conversion efficiency of up to 24 lm/W at luminous efficiencies exceeding 30 cd/A. The overall performance of these devices suggests that efficient electrophosphorescent devices with acceptable operating voltages can be achieved in very simple device structures fabricated by spin coating.
It has been found in recent measurements that the singlet-to-triplet exciton ratio in organic light-emitting diodes (OLEDs) is larger than expected from spin degeneracy, and that singlet excitons form at a larger rate than triplets. We employed the technique of optically detected magnetic resonance to measure the spin-dependent exciton formation rates in films of a polymer and corresponding monomer, and explore the relation between the formation rates and the actual singlet-to-triplet ratio measured previously in OLEDs. We found that the spin-dependent exciton formation rates can indeed quantitatively explain the observed exciton yields, and that singlet formation rates and yields are significantly enhanced only in polymer OLEDs, but not in OLEDs made from the corresponding monomer.
The roots of evil : the foundation years of anti-semitism : from the time of Bismarck to Hitler
(2004)
Various authors have investigated the problem of light deflection by radially moving gravitational lenses, but the results presented so far do not appear to agree on the expected deflection angles. Some publications claim a scaling of deflection angles with 1-v to first order in the radial lens velocity v, while others obtained a scaling with 1-2v. In this paper we generalize the calculations for arbitrary lens velocities and show that the first result is the correct one. We discuss the seeming inconsistency of relativistic light deflection with the classical picture of moving test particles by generalizing the lens effect to test particles of arbitrary velocity, including light as a limiting case. We show that the effect of radial motion of the lens is very different for slowly moving test particles and light, and that a critical test particle velocity exists for which the motion of the lens has no effect on the deflection angle to first order. An interesting and not immediately intuitive result is obtained in the limit of a highly relativistic motion of the lens towards the observer, where the deflection angle of light reduces to zero. This phenomenon is elucidated in terms of moving refractive media. Furthermore, we discuss the dragging of inertial frames in the field of a moving lens and the corresponding Lense-Thirring precession, in order to shed more light on the geometrical effects in the surroundings of a moving mass. In a second part we discuss the effect of transversal motion on the observed redshift of lensed sources. We demonstrate how a simple kinematic calculation explains the effects for arbitrary velocities of the lens and test particles. Additionally, we include the transversal motion of the source and observer to show that all three velocities can be combined into an effective relative transversal velocity similar to the approach used in microlensing studies.
B0218 + 357 is one of the most promising systems to determine the Hubble constant from time-delays in gravitational lenses. Consisting of two bright images, which are well resolved in very long baseline interferometry (VLBI) observations, plus one of the most richly structured Einstein rings, it potentially provides better constraints for the mass model than most other systems. The main problem left until now was the very poorly determined position of the lensing galaxy. After presenting detailed results from classical lens modelling, we apply our improved version of the LENSCLEAN algorithm which for the first time utilizes the beautiful Einstein ring for lens modelling purposes. The primary result using isothermal lens models is a now very well defined lens position of (255 +/- 6, 119 +/- 4) mas relative to the A image, which allows the first reliable measurement of the Hubble constant from the time-delay of this system. The result of H_0 = (78 +/- 6) km s^-1 Mpc^-1 (2σ) is very high compared with other lenses. It is, however, compatible with local estimates from the Hubble Space Telescope (HST) key project and with WMAP results, but less prone to systematic errors. We furthermore discuss possible changes of these results for different radial mass profiles and find that the final values cannot be very different from the isothermal expectations. The power-law exponent of the potential is constrained by VLBI data of the compact images and the inner jet to be β = 1.04 +/- 0.02, which confirms that the mass distribution is approximately isothermal (corresponding to β = 1), but slightly shallower. The effect on H_0 is reduced from the expected 4 per cent decrease by an estimated shift of the best galaxy position of circa 4 mas to at most 2 per cent. Maps of the unlensed source plane produced from the best LENSCLEAN brightness model show a typical jet structure and allow us to identify the parts which are distorted by the lens to produce the radio ring.
We also present a composite map which for the first time shows the rich structure of B0218 + 357 on scales ranging from mas to arcsec, both in the image plane and in the reconstructed source plane. Finally, we use a comparison of observations at different frequencies to investigate the question of possible weakening of one of the images by propagation effects and/or source shifts with frequency. The data clearly favour the model of significant 'extinction' without noticeable source position shifts. The technical details of our variant of the LENSCLEAN method are presented in the accompanying Paper I.
LensClean revisited
(2004)
We discuss the LENSCLEAN algorithm which for a given gravitational lens model fits a source brightness distribution to interferometric radio data in a similar way as standard CLEAN does in the unlensed case. The lens model parameters can then be varied in order to minimize the residuals and determine the best model for the lens mass distribution. Our variant of this method is improved in order to be useful and stable even for high dynamic range systems with nearly degenerate lens model parameters. Our test case B0218 + 357 is dominated by two bright images but the information needed to constrain the unknown parameters is provided only by the relatively smooth and weak Einstein ring. The new variant of LENSCLEAN is able to fit lens models even in this difficult case. In order to allow the use of general mass models with LENSCLEAN, we develop the new method LENTIL which inverts the lens equation much more reliably than any other method. This high reliability is essential for the use as part of LENSCLEAN. Finally a new method is developed to produce source plane maps of the unlensed source from the best LENSCLEAN brightness models. This method is based on the new concept of 'dirty beams' in the source plane. The application to the lens B0218 + 357 leads to the first useful constraints for the lens position and thus to a result for the Hubble constant. These results are presented in the accompanying Paper II, together with a discussion of classical lens modelling for this system.
"Es ist mir ein innerer Parteitag, dass das 'Muttiheft' lebt" ("It is a real inner party congress to me that the 'Muttiheft' lives on"): dictionaries of GDR vocabulary
(2004)
Speech comprehension with a cochlear implant : ERP studies with postlingually deafened adult CI users
(2004)
Local asymptotic types
(2004)
The quasar HE 0047-1756, at z = 1.67, is found to be split into two images 1.144 arcsec apart by an intervening galaxy acting as a gravitational lens. The flux ratio for the two components is roughly 3.5:1, depending slightly upon wavelength. The lensing galaxy is seen on images obtained in the i (800 nm) and K_s (2.1 μm) bands; there is also a nearby faint object which may be responsible for some shear. The spectra of the two quasar images are nearly identical, but the emission line ratios between the two components scale differently from the continuum. Moreover, the fainter component has a bluer continuum slope than the brighter one. We argue that these small differences are probably due to microlensing. There is evidence for a partial Einstein ring emanating from the brighter image toward the fainter one.
We present spatially resolved spectrophotometric observations of multiply imaged QSOs, using the Potsdam Multi-Aperture Spectrophotometer (PMAS), with the intention to search for spectral differences between components indicative of either microlensing or dust extinction. For the quadruple QSO HE 0435-1223 we find that the continuum shapes are indistinguishable, therefore differential extinction is negligible. The equivalent widths of the broad emission lines are however significantly different, and we argue that this is most likely due to microlensing. Contrariwise, the two components of the well-known object UM 673 have virtually identical emission line properties, but the continuum slopes differ significantly and indicate different dust extinction along both lines of sight.
English and German, though genetically closely related, have undergone different developments with regard to the verbal category of aspect in its interaction with aktionsart. English has grammaticalized a periphrastic construction to mark the progressive, whereas German, if at all, uses word formation to mark the perfective. This study deals with verbal prefixes, especially ge-/gi-, in the earliest attestable stages of the two languages, i.e. in Old English (King Alfred's Orosius) and Old High German (Tatian). These elements have often been considered markers of perfective aspect or aktionsart and can be compared to perfectives, which, according to Bybee/Perkins/Pagliuca (1994), have developed from "bounders", i.e. adverbial particles that denote situation boundaries. Our analyses suggest that although there are basic similarities in the use of the various verbal constructions, the diverging paths of development with regard to aspect seem to begin already in these early stages.
English grammar writing in the 18th century is ostensibly prescriptive and rests on the traditional theoretical foundations developed for the classical languages. Grammatical categories such as person and number, tense, mood, and voice are distinguished, and inflectional paradigms are set up for them. Compared with the classical languages, however, English has undergone a far-reaching restructuring of its entire system of verbal categories: analytic means (have, be, do, will, etc., in combination with non-finite forms of the verb) are used to express various shades of pastness, futurity, simultaneity, anteriority, progressivity, etc. The mood system has collapsed. To compensate for this and to take over some of the functions of the former subjunctive, the modal verbs, for example, were grammaticalized. In addition, an entirely new category has emerged: aspect. In the early grammars of the 17th century, the construction be + V-ing, which expresses the progressive aspect, was not even mentioned (e.g. John Wallis 1653, Jeremiah Wharton 1654, Joseph Aickin 1693). Interestingly, it first received attention from a foreigner: Guy Miege lists this construction in his English grammar of 1688. A detailed and systematic description, however, only appears towards the end of the 18th century (James Pickbourne 1789). He integrates the progressive form into the tense system and thus distinguishes a total of 18 tenses in English. Other grammarians posit 3, 5, or 7 tenses. This paper describes various approaches to the description of the newly emerged English tense and aspect system in 18th-century grammar writing. A central point is the integration of the aspectual distinction between the simple and the progressive form, which had only just established itself in the language at that time.
Fe K-edge X-ray absorption near edge structure (XANES) and Mössbauer spectra were collected on synthetic glasses of basaltic composition and of glasses on the sodium oxide-silica binary to establish a relation between the pre-edge of the XANES at the K-edge and the Fe oxidation state of depolymerised glasses. Charges of sample material were equilibrated at ambient pressure, superliquidus temperatures and oxygen fugacities that were varied over a range of about 15 orders of magnitude. Most experiments were carried out in gas-flow furnaces, either with pure oxygen, air, or different CO/CO2 mixtures. For the most reduced conditions, the sample charges were enclosed together with a pellet of the IQF oxygen buffer in an evacuated silica glass ampoule. Fe3+/ΣFe × 100 of the samples determined by Mössbauer spectroscopy ranges between 0% and 100%. Position and intensity of the pre-edge centroid vary strongly depending on the Fe oxidation state. The pre-edge centroid position and the Fe oxidation state determined by Mössbauer spectroscopy are nonlinearly related and have been fitted by a quadratic polynomial. Alternatively, the ratio of intensities measured at positions sensitive to Fe2+ and Fe3+, respectively, provides an even more sensitive method. Pre-edge intensities of the sample suite indicate average Fe co-ordination between 4 and 6 for all samples regardless of oxidation state. A potential application of the calibration given here opens the possibility of determining Fe oxidation state in glasses of similar compositions with high spatial resolution by use of a Micro-XANES setup (e.g., glass inclusions in natural minerals).
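The quadratic relation between pre-edge centroid position and oxidation state amounts to a simple polynomial calibration. The centroid energies and Fe3+/ΣFe values below are invented placeholders, not the paper's calibration data; the sketch only shows the shape of such a calibration:

```python
import numpy as np

# hypothetical calibration points: pre-edge centroid position (eV)
# versus Fe3+/sum(Fe) x 100 from Moessbauer spectroscopy
centroid = np.array([7112.05, 7112.30, 7112.55, 7112.80, 7113.05])
fe3_pct = np.array([0.0, 20.0, 50.0, 80.0, 100.0])

# quadratic polynomial, fitted on energies relative to a reference
# value to keep the fit numerically well conditioned
coeffs = np.polyfit(centroid - 7112.0, fe3_pct, deg=2)

def fe3_from_centroid(energy_ev):
    """Predict Fe3+/sum(Fe) x 100 from a measured centroid position."""
    return np.polyval(coeffs, energy_ev - 7112.0)
```

In practice the inverse use, reading oxidation state off a measured centroid, is what enables Micro-XANES mapping of glasses with high spatial resolution.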
There is concern about the lack of recruitment of Acacia trees in the Negev desert of Israel. We have developed three models to estimate the frequency of recruitment necessary for long-term population survival (i.e. positive average population growth for 1,000 years and <10% probability of extinction). Two models assume purely episodic recruitment based on the general notion that recruitment in and environments is highly episodic. They differ in that the deterministic model investigates average dynamics while the stochastic model does not. Studies indicating that recruitment episodes in and environments have been overemphasized motivated the development of the third model. This semi-stochastic model simulates a mixture of continuous and episodic recruitment. Model analysis was done analytically for the deterministic model and via running model simulations for the stochastic and semi-stochastic models. The deterministic and stochastic models predict that, on average, 2.2 and 3.7 recruitment events per century, respectively, are necessary to sustain the population. According to the semi-stochastic model, 1.6 large recruitment events per century and an annual probability of 50% that a small recruitment event occurs are needed. A consequence of purely episodic recruitment is that all recruitment episodes produce extremely large numbers of recruits (i.e. at odds with field observations), an evaluation that holds even when considering that rare events must be large. Thus, the semi- stochastic model appears to be the most realistic model. Comparing the prediction of the semi-stochastic model to field observations in the Negev desert shows that the absence of observations of extremely large recruitment events is no reason for concern. However, the almost complete absence of small recruitment events is a serious reason for concern. 
The lack of recruitment may be due to decreased densities of large mammalian herbivores and might be further exacerbated by possible changes in climate, both in terms of average precipitation and the temporal distribution of rainfall.
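A minimal Monte Carlo sketch of the semi-stochastic idea (continuous small recruitment plus rare large episodes) might look like the following. The 1.6 large events per century and the 50% annual small-event probability come from the abstract; all other parameter values (initial population, mortality, recruits per event) are invented for illustration:

```python
import random

def simulate(years=1000, n0=100, annual_mortality=0.02,
             p_large=0.016, large_recruits=50,
             p_small=0.5, small_recruits=2, seed=None):
    """Toy semi-stochastic recruitment model: each year individuals die
    with a fixed probability, a large recruitment episode occurs with
    probability p_large (1.6 events/century) and a small one with
    probability p_small (50%/year). Returns the final population size,
    or 0 if the population went extinct."""
    rng = random.Random(seed)
    n = n0
    for _ in range(years):
        # survival step: binomial thinning of the current population
        n = sum(1 for _ in range(n) if rng.random() > annual_mortality)
        if rng.random() < p_large:
            n += large_recruits
        if rng.random() < p_small:
            n += small_recruits
        if n == 0:
            return 0
    return n

def extinction_probability(runs=200, **kw):
    """Fraction of replicate runs that end in extinction."""
    return sum(simulate(seed=i, **kw) == 0 for i in range(runs)) / runs
```

The published models additionally track age structure and event-size distributions; this sketch only illustrates how the mixture of continuous and episodic recruitment enters the simulation.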
How a nation calculates itself poor: some statistical remarks on the concept of relative poverty
(2004)
In the last decade, there has been increasing interest in compensating thermally induced errors to improve the manufacturing accuracy of modular tool systems. These tool systems are interfaces between spindle and workpiece and consist of several parts with complicated geometries. Their thermal behavior is dominated by nonlinearities, delay and hysteresis effects, even in tools with simple geometry, and is difficult to describe theoretically. Because of this dominant nonlinearity, the linear regression between temperatures and displacements used so far is insufficient. In this study we therefore test the hypothesis that such thermal displacements can be reliably predicted via nonlinear temperature-displacement regression functions. These functions are first estimated from learning measurements using the alternating conditional expectation (ACE) algorithm and then tested on independent data sets. First, we analyze data generated by a finite element spindle model and find that our approach is a powerful tool to describe the relation between temperatures and displacements for simulated data. Next, we analyze the temperature-displacement relationship in a silent real experimental setup, where the tool system is thermally forced; again, the ACE algorithm estimates the deformation with high precision. The corresponding errors of the nonlinear regression approach are 10-fold lower than those of multiple linear regression analysis. Finally, we investigate the thermal behavior of a modular tool system in a working milling machine and again obtain promising results: the thermally induced errors can be estimated with 1-2 μm accuracy using this nonlinear regression analysis. This approach therefore seems very useful for the development of new modular tool systems.
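The ACE algorithm estimates optimal transformations nonparametrically and is not reimplemented here. As a stand-in, the sketch below contrasts a plain linear temperature-displacement regression with a simple nonlinear (quadratic-feature) least-squares fit on synthetic data, to illustrate why a nonlinear regression function outperforms the linear one when the underlying thermal response is nonlinear. All data and coefficients are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: displacement responds nonlinearly (here quadratically)
# to temperature. Entirely illustrative, not the paper's measurements.
temp = rng.uniform(20.0, 60.0, size=200)
disp = 0.5 * (temp - 20.0) ** 2 / 40.0 + rng.normal(0.0, 0.5, size=200)

def fit_ls(features, y):
    """Ordinary least squares fit of a feature matrix to targets."""
    coef, *_ = np.linalg.lstsq(features, y, rcond=None)
    return coef

def rmse(features):
    """Root-mean-square residual of the least-squares fit."""
    resid = features @ fit_ls(features, disp) - disp
    return float(np.sqrt(np.mean(resid ** 2)))

# Linear model: columns [1, T].
X_lin = np.column_stack([np.ones_like(temp), temp])
# Nonlinear model: columns [1, T, T^2] -- a crude stand-in for the
# ACE-estimated transformation, which is found nonparametrically.
X_quad = np.column_stack([np.ones_like(temp), temp, temp ** 2])

print(rmse(X_lin), rmse(X_quad))
```

On this synthetic data the quadratic-feature fit recovers the noise floor while the linear fit carries a large systematic residual, mirroring the error reduction the abstract reports for the nonlinear approach.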