Discovery of starspots on Vega
Bohm, T; Holschneider, M; Lignieres, F ...
Astronomy and Astrophysics (Berlin), 5/2015, Volume 577
Journal Article, Peer-reviewed
The theoretically studied impact of rapid rotation on stellar evolution needs to be compared with the results of high-resolution spectroscopy-velocimetry observations. Early-type stars present a perfect laboratory for these studies. The prototype A0 star Vega has been extensively monitored in recent years in spectro-polarimetry. The goal of this article is to present a thorough analysis of the line-profile variations and associated estimators in the early-type standard star Vega (A0) in order to reveal potential activity tracers, exoplanet companions, and stellar oscillations. Vega was monitored in quasi-continuous high-resolution echelle spectroscopy with the highly stabilized velocimeter SOPHIE/OHP. A total of 2588 high signal-to-noise spectra were obtained during 34.7 h on five nights in high-resolution mode at R = 75 000, covering the visible domain from 3895 to 6270 Å. This first strong evidence that standard A-type stars can show surface structures opens a new field of research and raises the question of a potential link with the weak magnetic fields recently discovered in this category of stars.
In a previous study, a new snapshot modeling concept for the archeomagnetic field was introduced (Mauerberger et al., 2020, https://doi.org/10.1093/gji/ggaa336). By assuming a Gaussian process for the geomagnetic potential, a correlation-based algorithm was presented, which incorporates a closed-form spatial correlation function. This work extends the suggested modeling strategy to the temporal domain. A space-time correlation kernel is constructed from the tensor product of the closed-form spatial correlation kernel with a squared exponential kernel in time. Dating uncertainties are incorporated into the modeling concept using a noisy-input Gaussian process. All but one of the modeling hyperparameters are marginalized, to reduce their influence on the outcome and to translate their variability into the posterior variance. The resulting distribution incorporates uncertainties related to the dating, measurement, and modeling processes. Results from application to archeomagnetic data show less variation in the dipole than comparable models, but are in general agreement with previous findings.
Plain Language Summary
Global reconstructions of the past geomagnetic field are useful tools to study the geodynamo process that generates the Earth's magnetic field in the outer core. Data‐based field reconstructions are traditionally represented by a fixed number of coefficients in space and time. In a previous study, a new modeling concept for individual epochs of the magnetic field was introduced, which is better adapted to inhomogeneous data distributions as found in archeomagnetic data, and which provides more realistic uncertainty estimates. This new modeling concept effectively has one coefficient per data point. Here, the new method is expanded to also consider the time evolution and build continuous models of the past geomagnetic field. Uncertainties in archeomagnetic input data and in their ages are taken into account and contribute to estimating reasonable uncertainties for the resulting model. The application of the new method to archeomagnetic data over the past 1,200 years gives general agreement with previous findings with less variation in the dipole field contribution than seen in comparable models.
Key Points
Extension of a previous study on spatial correlation based archeomagnetic modeling to the temporal domain
Dating uncertainties are incorporated by the noisy input Gaussian process formalism
Results show general agreement with comparable models with less variation in the dipole field contribution
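The tensor-product space-time kernel described in the abstract can be sketched in a few lines. Note that the spatial factor below is a generic squared-exponential stand-in, not the closed-form spherical kernel of Mauerberger et al. (2020), and all length/time scales are illustrative choices, not values from the paper:

```python
import numpy as np

def temporal_kernel(t1, t2, tau=100.0):
    # Squared-exponential kernel in time; tau plays the role of a
    # correlation time (illustrative value, in years)
    return np.exp(-0.5 * ((t1 - t2) / tau) ** 2)

def spatial_kernel(x1, x2, length=1.0):
    # Stand-in for the closed-form spatial correlation kernel of
    # Mauerberger et al. (2020): a plain squared exponential on
    # Euclidean distance, used here for illustration only
    d = np.linalg.norm(np.asarray(x1) - np.asarray(x2))
    return np.exp(-0.5 * (d / length) ** 2)

def space_time_kernel(x1, t1, x2, t2):
    # Tensor-product construction:
    # k((x1, t1), (x2, t2)) = k_space(x1, x2) * k_time(t1, t2)
    return spatial_kernel(x1, x2) * temporal_kernel(t1, t2)
```

Because the kernel factorizes, correlations decay independently with spatial separation and with time lag, which is exactly what the tensor-product construction is meant to encode.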
We consider a model based on fractional Brownian motion under the influence of noise. We implement a Bayesian approach to estimate the Hurst exponent of the model. The robustness of the method to the noise intensity is tested using artificial data from fractional Brownian motion. We show that improved estimation of the parameters is achieved when the noise is considered explicitly in the model. Moreover, we identify the noise-amplitude levels that allow the Hurst exponent to be estimated correctly in various cases.
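The approach above can be sketched with a grid posterior over the Hurst exponent, using the exact Gaussian likelihood of noisy fBm observations. This is a minimal illustration under a flat prior, not the authors' implementation; the sample size, noise level, and grid spacing are arbitrary choices:

```python
import numpy as np

def fbm_covariance(times, H):
    # Exact fBm covariance: cov(B_H(s), B_H(t)) = (s^2H + t^2H - |s-t|^2H) / 2
    s, t = times[:, None], times[None, :]
    return 0.5 * (s ** (2 * H) + t ** (2 * H) - np.abs(s - t) ** (2 * H))

def log_likelihood(y, times, H, sigma):
    # Gaussian log-likelihood (up to an additive constant) of
    # observations y = fBm + white noise of standard deviation sigma
    C = fbm_covariance(times, H) + sigma ** 2 * np.eye(len(y))
    L = np.linalg.cholesky(C)
    alpha = np.linalg.solve(L, y)
    return -0.5 * alpha @ alpha - np.log(np.diag(L)).sum()

def map_hurst(y, times, sigma, grid=np.arange(0.1, 0.91, 0.05)):
    # Flat prior on H, so the MAP estimate maximises the likelihood on the grid
    return grid[int(np.argmax([log_likelihood(y, times, H, sigma) for H in grid]))]

# Simulate noisy fBm with H = 0.7, then recover the exponent
rng = np.random.default_rng(0)
times = np.arange(1.0, 201.0)
H_true, sigma = 0.7, 0.5
L = np.linalg.cholesky(fbm_covariance(times, H_true))
y = L @ rng.standard_normal(times.size) + sigma * rng.standard_normal(times.size)
H_est = map_hurst(y, times, sigma)
```

Including the `sigma ** 2` noise term in the covariance is the point of the abstract: dropping it misattributes the noise variance to the fBm roughness and biases the estimate of H.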
Intestinal Dysganglionoses (IDs) represent a heterogeneous group of Enteric Nervous System anomalies including Hirschsprung's disease (HD), Intestinal Neuronal Dysplasia (IND), Internal Anal Sphincter Neurogenic Achalasia (IASNA) and Hypoganglionosis. At present HD is the only recognised clinico-pathological entity, whereas the others are not yet universally accepted and diagnosed. This report describes the areas of agreement and disagreement regarding the definition, diagnosis, and management of IDs as discussed at the workshop of the fourth International Meeting on “Hirschsprung's disease and related neurochristopathies.”
The gold standards in the preoperative diagnosis of IDs are described, highlighting the importance of rectal suction biopsy in the diagnostic workup. The most important diagnostic feature of HD is the combination of hypertrophic nerve trunks and aganglionosis in adequate specimens. Acetylcholinesterase staining is the best diagnostic technique to demonstrate hypertrophic nerve trunks in the lamina propria mucosae, but many pathologists from different centers still use H&E staining effectively. Moreover, the importance of an adequate intraoperative pathological evaluation of the extent of IDs to avoid postoperative complications is stressed. Although it is not clear whether IND is a separate entity or some sort of secondary acquired condition, it is concluded that both IND and IASNA do exist. Other interesting conclusions are provided, as well as detailed results of the discussion. Further investigation is needed to resolve the many controversies concerning IDs. The fourth International Conference in Sestri Levante stimulated discussion regarding these entities and led to international guidelines serving the best interests of our patients.
We compare the aftershock decay rate in natural data with predictions from a stochastic analytical model based on a Markov process with stationary transition rates. These transition rates vary according to the magnitude of a scalar representing the state of stress and defined as the overload. Thus, the aftershock decay rate in the model is a sum of independent exponential decay functions with different characteristic times. From different shapes of the overload distribution and different expressions of the transition rates, we discuss the magnitude of the exponent of the power law aftershock decay rate and the time interval over which we can expect to observe this regime. Before and after this time interval, we show that the decay is linear and exponential, respectively. From our analytical solutions, we deduce a model of aftershock decay rate in which a power law scaling exponent and two characteristic rates emerge. One rate is a short-term linear decrease before the onset of the power law decay to account for a finite number of events at zero time, and the other can be interpreted as an inverse correlation time, after which aftershocks no longer occur. Then, we interpret the empirical modified Omori law (MOL) and its parameters in the framework of our theoretical model. We suggest a technique to systematically estimate and interpret the temporal limits of the power law aftershock decay rate in real sequences. We approximate these temporal limits from data available for several well-known aftershock sequences and show, using the Akaike Information Criterion (AIC), that in almost all cases examined here our model fits the aftershock decay rate better than the MOL, despite a quantitative penalty for the extra parameter required. From this work, we conclude that the time delay before the onset of the power law decay may be related to the recurrence time of an earthquake.
Finally, we suggest that the power law decay rates extend over longer times according to the concentration of the deformation along dominant major faults.
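The abstract above contrasts a sum of independent exponential decays with the empirical modified Omori law n(t) = K / (c + t)^p, comparing fits via the AIC. A minimal sketch of these three ingredients (parameter values in the examples are illustrative, not from the paper):

```python
import numpy as np

def mol_rate(t, K, c, p):
    # Modified Omori law aftershock rate: n(t) = K / (c + t)^p
    return K / (c + t) ** p

def sum_of_exponentials_rate(t, amplitudes, rates):
    # The stochastic model's rate: a sum of independent exponential decays
    # with different characteristic times (1 / rates); works for scalar or
    # array t via broadcasting over the last axis
    t = np.asarray(t)[..., None]
    return np.sum(amplitudes * np.exp(-rates * t), axis=-1)

def aic(log_likelihood_max, n_params):
    # Akaike Information Criterion: lower is better; each extra free
    # parameter costs 2, which is the penalty mentioned in the abstract
    return 2 * n_params - 2 * log_likelihood_max
```

A broad distribution of exponential rates can reproduce a power law over an intermediate time window (since an integral of rate-weighted exponentials yields t^{-p}), which is why the model recovers Omori-like decay between its linear short-time and exponential long-time regimes.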
We use a dynamic scanning electron microscope (DySEM) to map the spatial distribution of the vibration of a cantilever beam. The DySEM measurements are based on variations of the local secondary electron signal within the imaging electron beam diameter during an oscillation period of the cantilever. For this reason, the surface of a cantilever without topography or material variation does not allow any conclusions about the spatial distribution of vibration, due to a lack of dynamic contrast. In order to overcome this limitation, artificial structures were added at defined positions on the cantilever surface using focused ion beam lithography patterning. The DySEM signal of such high-contrast structures is strongly improved, hence information about the surface vibration becomes accessible. Simulations of images of the vibrating cantilever have also been performed. The results of the simulation are in good agreement with the experimental images.
Summary
In this study we analyse the error distribution in regional models of the geomagnetic field. Our main focus is to investigate the distribution of errors when combining two regional patches to obtain a global field from regional ones. To simulate errors in overlapping patches we choose two different data region shapes that resemble that scenario. First, we investigate the errors in elliptical regions, and secondly we choose a region obtained from two overlapping circular spherical caps. We conduct a Monte Carlo simulation using synthetic data to obtain the expected mean errors. For the elliptical regions the results are similar to the ones obtained for circular spherical caps: the maximum error at the boundary decreases towards the centre of the region. A new result is that errors at the boundary vary with azimuth, being largest in the major-axis direction and smallest in the minor-axis direction. Inside the region the error decays towards a minimum at the centre, at a rate similar to the one in circular regions. In the case of two combined circular regions there is also an error decay from the boundary towards the centre. The minimum error occurs at the centre of the combined regions. The maximum error at the boundary occurs on the line containing the two cap centres, the minimum in the perpendicular direction where the two circular cap boundaries meet. The large errors at the boundary are eliminated by combining regional patches. We propose an algorithm for finding the boundary region that is applicable to irregularly shaped model regions.
The magnetosphere-ionosphere-thermosphere (MIT) dynamic system depends significantly on the highly variable solar wind conditions, in particular on changes of the strength and orientation of the interplanetary magnetic field (IMF). The solar wind and IMF interactions with the magnetosphere drive the MIT system via the magnetospheric field-aligned currents (FACs). Global modeling helps us to understand the physical background of this complex system. With the present study, we test the recently developed high-resolution empirical model of field-aligned currents MFACE (a high-resolution Model of Field-Aligned Currents through Empirical orthogonal functions analysis). These FAC distributions were used as input to the time-dependent, fully self-consistent global Upper Atmosphere Model (UAM) for different seasons and various solar wind and IMF conditions. The modeling results for neutral mass density and thermospheric wind are directly compared with the CHAMP satellite measurements. In addition, we perform comparisons with the global empirical models: the thermospheric wind model (HWM07) and the atmosphere density model (Naval Research Laboratory Mass Spectrometer and Incoherent Scatter Extended 2000). The theoretical model shows good agreement with the satellite observations and improved behavior compared with the empirical models at high latitudes. Using the MFACE model as an input parameter of the UAM, we obtain a realistic distribution of the upper atmosphere parameters for the Northern and Southern Hemispheres during stable IMF orientation as well as during dynamic situations. This variant of the UAM can therefore be used for modeling the MIT system and for space weather predictions.
Key Points
The MIT dynamic system is modeled using the MFACE as input for the UAM
The modeling results show good agreement with CHAMP data at high latitudes
The UAM model better reproduces mesoscale structures than empirical models
Potential fields are classically represented on the sphere using spherical harmonics. However, this decomposition leads to numerical difficulties when the data to be modelled are irregularly distributed or cover a regional zone. To overcome this drawback, we develop a new representation of the magnetic and the gravity fields based on wavelet frames. In this paper, we first describe how to build wavelet frames on the sphere. The chosen frames are based on the Poisson multipole wavelets, which are of special interest for geophysical modelling, since their scaling parameter is linked to the multipole depth (Holschneider et al.). The implementation of wavelet frames results from a discretization of the continuous wavelet transform in space and scale. We also build different frames using two kinds of spherical meshes and various scale sequences. We then validate the mathematical method through simple fits of scalar functions on the sphere, named ‘scalar models’. Moreover, we propose magnetic and gravity models, referred to as ‘vectorial models’, taking into account geophysical constraints. We then discuss the representation of the Earth’s magnetic and gravity fields from data regularly or irregularly distributed. Comparisons of the obtained wavelet models with the initial spherical harmonic models point out the advantages of wavelet modelling when the magnetic or gravity data used are sparsely distributed or cover just a very local zone.