The Smoothing Spline ANOVA (SS-ANOVA) requires a specialized construction of basis and penalty terms in order to incorporate prior knowledge about the data to be fitted. Typically, one resorts to the most general approach using tensor product splines. This implies severe constraints on the correlation structure: in particular, the assumption of isotropy of smoothness cannot be incorporated in general. This may increase the variance of the spline fit, especially if only a relatively small set of observations is given. In this article, we propose an alternative method that incorporates prior knowledge without the need to construct specialized bases and penalties, allowing the researcher to choose the spline basis and penalty according to prior knowledge of the observations rather than according to the analysis to be performed. The two approaches are compared on an artificial example and on analyses of fixation durations during reading.
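The "basis plus penalty" construction discussed above can be illustrated with a generic penalized regression spline. The following is a minimal sketch with NumPy; the truncated-line basis, the knot placement, and the penalty are illustrative choices, not the SS-ANOVA decomposition itself:

```python
import numpy as np

def penalized_spline_fit(x, y, knots, lam):
    """Penalized regression spline: a truncated-line basis with a ridge
    penalty on the knot coefficients.  Generic 'basis + penalty'
    illustration, not the SS-ANOVA construction itself."""
    # Design matrix: intercept, linear term, one truncated line per knot
    B = np.column_stack([np.ones_like(x), x] +
                        [np.maximum(x - k, 0.0) for k in knots])
    # Penalize only the knot coefficients (controls wiggliness)
    D = np.diag([0.0, 0.0] + [1.0] * len(knots))
    coef = np.linalg.solve(B.T @ B + lam * D, B.T @ y)
    return B @ coef

# A straight line is reproduced exactly: the penalty shrinks only the
# knot terms, which a linear trend does not need.
x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x
fit = penalized_spline_fit(x, y, knots=[0.25, 0.5, 0.75], lam=1.0)
```

Increasing `lam` pulls the fit toward the unpenalized (here, linear) part of the basis, which is how the penalty encodes a smoothness prior.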

Context. The theoretically studied impact of rapid rotation on stellar evolution needs to be compared with the results of high-resolution spectroscopy-velocimetry observations. Early-type stars are a perfect laboratory for these studies. The prototype A0 star Vega has been extensively monitored in recent years in spectropolarimetry. A weak surface magnetic field was detected, implying that there might be a (still undetected) structured surface. First indications of small-amplitude stellar radial velocity variations have been reported recently, but confirmation and an in-depth study with the highly stabilized spectrograph SOPHIE/OHP were required.
Aims. The goal of this article is to present a thorough analysis of the line profile variations and associated estimators in the early-type standard star Vega (A0) in order to reveal potential activity tracers, exoplanet companions, and stellar oscillations.
Methods. Vega was monitored in quasi-continuous high-resolution echelle spectroscopy with the highly stabilized velocimeter SOPHIE/OHP. A total of 2588 high signal-to-noise spectra were obtained during 34.7 h on five nights (2 to 6 August 2012) in high-resolution mode at R = 75 000, covering the visible domain from 3895 to 6270 angstrom. For each reduced spectrum, least-squares deconvolved equivalent photospheric profiles were calculated with a T_eff = 9500 K, log g = 4.0 spectral line mask. Several methods were applied to study the dynamic behaviour of the profile variations (evolution of radial velocity, bisectors, vspan, 2D profiles, amongst others).
Results. We present the discovery of a spotted stellar surface on an A-type standard star (Vega) with very faint spot amplitudes, Delta F/F_c ~ 5 x 10^-4. A rotational modulation of spectral lines with a rotation period P = 0.68 d has been clearly exhibited, unambiguously confirming the results of previous spectropolarimetric studies. Most of these brightness inhomogeneities seem to be located at lower equatorial latitudes. Either a very thin convective layer is responsible for magnetic field generation at small amplitudes, or a new mechanism has to be invoked to explain the existence of activity-tracing starspots. At this stage it is difficult to disentangle a rotational from a stellar pulsational origin for the existing higher-frequency periodic variations.
Conclusions. This first strong evidence that standard A-type stars can show surface structures opens a new field of research and raises the question of a potential link with the weak magnetic fields recently discovered in this category of stars.

Earthquake catalogs are probably the most informative data source about spatiotemporal seismicity evolution. The catalog quality in one of the most active seismogenic zones in the world, Japan, is excellent, although changes in quality arising, for example, from an evolving network are clearly present. Here, we seek the best estimate for the largest expected earthquake in a given future time interval from a combination of historic and instrumental earthquake catalogs. We extend the technique introduced by Zoller et al. (2013) to estimate the maximum magnitude in a time window of length T-f for earthquake catalogs with varying levels of completeness. In particular, we consider the case in which two types of catalogs are available: a historic catalog and an instrumental catalog. This leads to competing interests with respect to the estimation of the two parameters of the Gutenberg-Richter law, the b-value and the event rate lambda above a given lower-magnitude threshold (the a-value). The b-value is estimated most precisely from the frequently occurring small earthquakes; however, the tendency of small events to cluster in aftershocks, swarms, etc. violates the assumption of a Poisson process that is used for the estimation of lambda. We suggest addressing this conflict by estimating b solely from instrumental seismicity and using large-magnitude events from historic catalogs for the earthquake rate estimation. Applying the method to Japan, there is a probability of about 20% that the maximum expected magnitude during any future time interval of length T-f = 30 years is m >= 9.0. Studies of different subregions in Japan indicate high probabilities for M 8 earthquakes along the Tohoku arc and relatively low probabilities in the Tokai, Tonankai, and Nankai regions. Finally, for scenarios related to long time horizons and high confidence levels, the maximum expected magnitude will be around 10.
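Under the two stated assumptions (a Gutenberg-Richter magnitude distribution and a Poisson event rate), the probability that the maximum magnitude in a future window exceeds a level m can be sketched in a few lines. The parameter values below are illustrative, not the values fitted for Japan, and the sketch omits the paper's treatment of parameter uncertainty and catalog completeness:

```python
import math

def prob_max_magnitude_exceeds(m, b, lam, m0, t_f):
    """Probability that the maximum magnitude within the next t_f years
    exceeds m, given a Poisson rate lam of events with magnitude >= m0
    per year and a Gutenberg-Richter distribution with slope b."""
    # Gutenberg-Richter: rate of events with magnitude >= m
    rate_above_m = lam * 10.0 ** (-b * (m - m0))
    # Poisson probability of at least one such event in t_f years
    return 1.0 - math.exp(-rate_above_m * t_f)

# Illustrative parameters (not those estimated in the study)
p_m9 = prob_max_magnitude_exceeds(9.0, b=1.0, lam=10.0, m0=5.0, t_f=30.0)
```

The competing-interests point in the text maps directly onto the two inputs: b comes from the instrumental catalog, lam from the large historic events.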

The inverse problem of determining the flow at the Earth's core-mantle boundary from an outer core magnetic field and secular variation model has been investigated through a Bayesian formalism. To circumvent the issue arising from the truncated nature of the available fields, we combined two modeling methods. In the first step, we applied a filter to the magnetic field to isolate its large scales by reducing the energy contained in its small scales; we then derived the dynamical equation, referred to as the filtered frozen flux equation, describing the spatiotemporal evolution of the filtered part of the field. In the second step, we proposed a statistical parametrization of the filtered magnetic field in order to account for both its remaining unresolved scales and its large-scale uncertainties. These two modeling techniques were then included in the Bayesian formulation of the inverse problem. To explore the complex posterior distribution of the velocity field resulting from this development, we numerically implemented an algorithm based on Markov chain Monte Carlo methods. After evaluating our approach on synthetic data and comparing it to previously introduced methods, we applied it to a magnetic field model derived from satellite data for the single epoch 2005.0. We could confirm the existence of specific features already observed in previous studies. In particular, we retrieved the planetary-scale eccentric gyre characteristic of flows evaluated under the compressible quasi-geostrophic assumption, although this hypothesis was not considered in our study. In addition, through the sampling of the velocity field posterior distribution, we could evaluate the reliability, at any spatial location and at any scale, of the flow we calculated. The flow uncertainties we determined are nevertheless conditioned by the choice of the prior constraints we applied to the velocity field.
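The posterior exploration mentioned above rests on Markov chain Monte Carlo. As a toy illustration of the building block involved, here is a one-dimensional random-walk Metropolis sampler; this is a generic sketch only, not the problem-specific, high-dimensional scheme used for the core-flow posterior:

```python
import math
import random

def metropolis(log_post, x0, n_samples, step=1.0, seed=1):
    """Random-walk Metropolis sampler: propose a Gaussian step and
    accept it with probability min(1, post(x_prop) / post(x))."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        x_prop = x + rng.gauss(0.0, step)   # symmetric proposal
        lp_prop = log_post(x_prop)
        # Accept uphill moves always, downhill moves with prob. exp(dlp)
        if lp_prop >= lp or rng.random() < math.exp(lp_prop - lp):
            x, lp = x_prop, lp_prop
        samples.append(x)
    return samples

# Sampling a standard normal "posterior": the draws should have
# mean near 0 and variance near 1.
draws = metropolis(lambda x: -0.5 * x * x, 0.0, 20000)
```

The same idea scales up: only the (unnormalized) posterior density needs to be evaluable, which is what makes the approach attractive for the complex posterior described in the abstract.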

We describe an iterative method to combine seismicity forecasts. With this method, we produce the next generation of a starting forecast by incorporating predictive skill from one or more input forecasts. For a single iteration, we use the differential probability gain of an input forecast relative to the starting forecast. At each point in space and time, the rate in the next-generation forecast is the product of the starting rate and the local differential probability gain. The main advantage of this method is that it can produce high forecast rates using all types of numerical forecast models, even those that are not rate-based. Naturally, a limitation of this method is that the input forecast must have some information not already contained in the starting forecast. We illustrate this method using the Every Earthquake a Precursor According to Scale (EEPAS) and Early Aftershocks Statistics (EAST) models, which are currently being evaluated at the US testing center of the Collaboratory for the Study of Earthquake Predictability. During a testing period from July 2009 to December 2011 (with 19 target earthquakes), the combined model we produce has better predictive performance - in terms of Molchan diagrams and likelihood - than the starting model (EEPAS) and the input model (EAST). Many of the target earthquakes occur in regions where the combined model has high forecast rates. Most importantly, the rates in these regions are substantially higher than if we had simply averaged the models.
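The combination rule described above (next-generation rate = starting rate times local differential probability gain) reduces to a per-cell product. A minimal sketch, with hypothetical rates and gains rather than EEPAS/EAST values:

```python
def combine_forecasts(starting_rates, gains):
    """One iteration of the combination scheme: at each space-time
    cell, the next-generation rate is the starting model's rate times
    the local differential probability gain of the input forecast
    relative to the starting forecast."""
    return [rate * gain for rate, gain in zip(starting_rates, gains)]

# Hypothetical per-cell rates and gains (illustration only)
starting = [0.02, 0.05, 0.01]
gains = [1.5, 0.8, 3.0]
combined = combine_forecasts(starting, gains)  # approx. [0.03, 0.04, 0.03]
```

Note the contrast with averaging: a cell where the input forecast carries strong extra information (gain 3.0 above) ends up with a rate well above what a simple mean of the two models would give.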

The magnetosphere-ionosphere-thermosphere (MIT) dynamic system depends significantly on the highly variable solar wind conditions, in particular on changes of the strength and orientation of the interplanetary magnetic field (IMF). The solar wind and IMF interactions with the magnetosphere drive the MIT system via the magnetospheric field-aligned currents (FACs). Global modeling helps us to understand the physical background of this complex system. With the present study, we test the recently developed high-resolution empirical model of field-aligned currents MFACE (a high-resolution Model of Field-Aligned Currents through Empirical orthogonal function analysis). These FAC distributions were used as input to the time-dependent, fully self-consistent global Upper Atmosphere Model (UAM) for different seasons and various solar wind and IMF conditions. The modeling results for neutral mass density and thermospheric wind are directly compared with CHAMP satellite measurements. In addition, we perform comparisons with the global empirical models: the thermospheric wind model (HWM07) and the atmosphere density model (Naval Research Laboratory Mass Spectrometer and Incoherent Scatter Extended 2000). The theoretical model shows good agreement with the satellite observations and improved behavior compared with the empirical models at high latitudes. Using the MFACE model as input to the UAM, we obtain a realistic distribution of the upper atmosphere parameters for the Northern and Southern Hemispheres during stable IMF orientation as well as during dynamic situations. This variant of the UAM can therefore be used for modeling the MIT system and for space weather predictions.

Amoebae explore their environment in a random way, unless external cues, such as nutrients, bias their motion. Even in the absence of cues, however, experimental cell tracks show some degree of persistence. In this paper, we analyzed individual cell tracks in the framework of a linear mixed effects model, where each track is modeled by a fractional Brownian motion, i.e., a Gaussian process exhibiting a long-term correlation structure, superposed on a linear trend. The degree of persistence was quantified by the Hurst exponent of the fractional Brownian motion. Our analysis of experimental cell tracks of the amoeba Dictyostelium discoideum showed persistent movement for the majority of tracks. Employing a sliding-window approach, we estimated the variations of the Hurst exponent over time, which allowed us to identify points in time where the correlation structure was distorted ("outliers"). Coarse-graining of the track data via down-sampling allowed us to identify the dependence of persistence on the spatial scale. While one would expect the (mode of the) Hurst exponent to be constant on different temporal scales, due to the self-similarity property of fractional Brownian motion, we observed a trend towards stronger persistence for the down-sampled cell tracks, indicating stronger persistence on larger time scales.
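The Hurst exponent that quantifies persistence can be estimated, for instance, from the scaling of the mean squared displacement, MSD(lag) ~ lag^(2H). The sketch below is a simple log-log regression point estimator under that scaling assumption, not the linear mixed effects model used in the analysis above:

```python
import math
import random

def hurst_exponent(track, max_lag=20):
    """Estimate the Hurst exponent H of a one-dimensional track from
    the scaling MSD(lag) ~ lag**(2H) via a log-log least-squares fit."""
    log_lag, log_msd = [], []
    for lag in range(1, max_lag + 1):
        sq = [(track[i + lag] - track[i]) ** 2
              for i in range(len(track) - lag)]
        log_lag.append(math.log(lag))
        log_msd.append(math.log(sum(sq) / len(sq)))
    n = len(log_lag)
    mx = sum(log_lag) / n
    my = sum(log_msd) / n
    slope = (sum((lx - mx) * (ly - my) for lx, ly in zip(log_lag, log_msd))
             / sum((lx - mx) ** 2 for lx in log_lag))
    return slope / 2.0  # slope of log MSD vs. log lag equals 2H

# Ordinary Brownian motion (uncorrelated increments) should give H ~ 0.5;
# persistent motion gives H > 0.5, anti-persistent motion H < 0.5.
rng = random.Random(0)
track = [0.0]
for _ in range(5000):
    track.append(track[-1] + rng.gauss(0.0, 1.0))
h = hurst_exponent(track)
```

Applying such an estimator inside a sliding window over the track is what yields the time-resolved Hurst exponent used to flag "outliers" in the correlation structure.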

The injection of fluids is a well-known origin for the triggering of earthquake sequences. The growing number of projects related to enhanced geothermal systems, fracking, and others has raised the question of which maximum earthquake magnitude can be expected as a consequence of fluid injection. This question is addressed from the perspective of statistical analysis. Using basic empirical laws of earthquake statistics, we estimate the magnitude M-T of the maximum expected earthquake in a predefined future time window T-f. A case study of the fluid injection site at Paradox Valley, Colorado, demonstrates that the magnitude m = 4.3 of the largest observed earthquake, on 27 May 2000, lies well within the expectation from past seismicity without adjusting any parameters. Conversely, for a given maximum tolerable earthquake at an injection site, we can constrain the corresponding amount of injected fluids that must not be exceeded within predefined confidence bounds.
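The inverse direction mentioned at the end can be sketched by solving the Gutenberg-Richter/Poisson exceedance probability for the event rate. All parameter values below are illustrative (not fitted to Paradox Valley), and mapping the resulting rate to an injected fluid volume would need a site-specific relation not modeled here:

```python
import math

def max_tolerable_rate(m_max, p_tol, b, m0, t_f):
    """Largest Poisson rate of induced events (magnitude >= m0, per
    year) such that the probability of exceeding magnitude m_max
    within t_f years stays below p_tol, under a Gutenberg-Richter
    distribution with slope b."""
    # Solve 1 - exp(-lam * 10**(-b*(m_max - m0)) * t_f) = p_tol for lam
    return -math.log(1.0 - p_tol) / (t_f * 10.0 ** (-b * (m_max - m0)))

# Illustrative target: keep the chance of exceeding m 4.3 within
# 10 years below 5%, for b = 1.0 and a completeness threshold m0 = 1.0
lam_star = max_tolerable_rate(4.3, p_tol=0.05, b=1.0, m0=1.0, t_f=10.0)
```

This is the same exceedance formula run backwards: instead of predicting the largest event from a given injection-driven seismicity rate, it bounds the rate that keeps a prescribed magnitude below a chosen confidence level.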