Technological advances have led to an increase in intensive longitudinal data and the statistical literature on modeling such data is rapidly expanding, as are software capabilities. Common methods in this area are related to time-series analysis, a framework that historically has received little exposure in psychology. There is a scarcity of psychology-based resources introducing the basic ideas of time-series analysis, especially for data sets featuring multiple people. We begin with basics of N = 1 time-series analysis and build up to complex dynamic structural equation models available in the newest release of Mplus Version 8. The goal is to provide readers with a basic conceptual understanding of common models, template code, and result interpretation. We provide short descriptions of some advanced issues, but our main priority is to supply readers with a solid knowledge base so that the more advanced literature on the topic is more readily digestible to a larger group of researchers.
Translational Abstract
The way that researchers collect data has been transformed by technological advances. New types of data are being collected, accompanied by the new statistical methods necessary to analyze them. Specifically, intensive longitudinal data, in which each person provides a large amount of data over a relatively short interval, have become more widespread in psychology. The statistical literature for modeling this type of data has moved more quickly than its uptake by the researchers who collect and analyze such data. Therefore, the goal of this article is to walk through the basic ideas of these models for intensive longitudinal data and show how they can be fit in the popular Mplus software program, to which specific routines facilitating these types of models were recently added.
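To make the starting point concrete, here is a minimal sketch, in Python rather than the Mplus code the tutorial itself provides, of the N = 1 first-order autoregressive model that the article builds from; the simulated series, the true autoregressive effect `phi`, and all parameter values are illustrative assumptions.

```python
# Minimal sketch (not the authors' Mplus code): an N = 1 AR(1) model,
# the building block of single-subject time-series analysis.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)
phi, n = 0.5, 200                      # assumed autoregressive effect, series length
y = np.zeros(n)
for t in range(1, n):                  # simulate y_t = phi * y_{t-1} + noise
    y[t] = phi * y[t - 1] + rng.normal()

fit = AutoReg(y, lags=1).fit()         # AR(1): regress y_t on y_{t-1}
print(fit.params)                      # intercept and estimated phi
```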
Distributed photovoltaic systems can cause adverse distribution system impacts, including voltage violations at customer locations and thermal overload of lines, transformers, and other equipment resulting from high current. The installed capacity at which violations first occur, above which system upgrades would be required, is called the hosting capacity. Current static methods for determining hosting capacity tend to consider infrequent worst-case snapshots in time and/or capture coarse time and spatial resolution. Because the duration of violations cannot be captured with these traditional methods, the metric thresholds used in these studies conservatively use the strictest constraints given in operating standards, even though both worse voltage performance and higher overloads may be temporarily acceptable. Assessing the full details, however, requires accurately capturing time dependence, voltage-regulating equipment operations, and the performance of advanced controls-based mitigation techniques. In this paper, we propose a dynamic distributed photovoltaic hosting capacity methodology that addresses these issues by conducting power flow analysis for a full year. A key contribution is the formulation of time-aware metrics that take these annual results and identify the hosting capacity. Through a case study, we show that this approach captures grid impacts of distributed photovoltaics more fully than traditional methods; in this case, the dynamic hosting capacity was 60%–200% higher than the static hosting capacity.
• Proposed dynamic study can more accurately capture a feeder's hosting capacity.
• Proposed time-aware metrics comply more closely with interconnection standards.
• Traditional static methods cannot capture grid impacts like annual energy losses.
• Static studies cannot assess the efficacy of smart inverter-based control schemes.
• Case study and sensitivity analysis show the scalability of the proposed approach.
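As a rough illustration of what a time-aware metric might look like, the sketch below checks how long voltage violations persist in a year of power-flow results; the 1.05 p.u. limit, the 15-minute resolution, and the one-hour tolerance are hypothetical stand-ins, not the paper's actual thresholds.

```python
# Illustrative only: a duration-aware overvoltage check on annual
# power-flow results. Limit and tolerance are hypothetical.
import numpy as np

def longest_violation_hours(v_pu, limit=1.05, step_h=0.25):
    """Longest contiguous overvoltage spell in a year of 15-min results."""
    longest = run = 0
    for over in v_pu > limit:
        run = run + 1 if over else 0
        longest = max(longest, run)
    return longest * step_h

v = 1.0 + 0.06 * np.random.rand(35040)      # fake year of 15-min voltages (p.u.)
if longest_violation_hours(v) > 1.0:        # tolerate only short excursions
    print("overvoltage persists beyond the assumed allowable duration")
```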
This article investigates the presence of a new interferometric signal in multilooked synthetic aperture radar (SAR) interferograms that cannot be attributed to atmospheric or Earth-surface topography changes. The observed signal is short-lived and decays with the temporal baseline; however, it is distinct from the stochastic noise attributed to temporal decorrelation. The presence of such a fading signal introduces a systematic phase component, particularly in short temporal baseline interferograms. If unaccounted for, it biases the estimation of Earth surface deformation from SAR time series. Here, the contribution of the mentioned phase component is quantitatively assessed. The biasing impact on the deformation-signal retrieval is further evaluated. A quality measure is introduced to allow prediction of the error associated with the fading signals. Moreover, a practical solution for the mitigation of this physical signal is discussed; special attention is paid to the efficient processing of Big Data from modern SAR missions such as Sentinel-1 and NISAR. Adopting the proposed solution, the deformation bias is shown to decrease significantly. Based on these analyses, we put forward our recommendations for efficient and accurate deformation-signal retrieval from large stacks of multilooked interferograms.
SUMMARY
Distinguishing between different types of seismic events is a task typically performed manually by expert analysts and can thus be both time and resource expensive. Analysts at the Swedish National Seismic Network (SNSN) use four different event types in the routine analysis: natural (tectonic) earthquakes, blasts (e.g. from mines, quarries and construction) and two different types of mining-induced events associated with large, underground mines. In order to aid manual event classification and to classify automatic event definitions, we have used fully connected neural networks to implement classification models which distinguish between the four event types. For each event, we bandpass filter the waveform data in 20 narrow-frequency bands before dividing each component into four non-overlapping time windows, corresponding to the P phase, P coda, S phase and S coda. In each window, we compute the root-mean-square amplitude and the resulting array of amplitudes is then used as the neural network inputs. We compare results achieved using a station-specific approach, where individual models are trained for each seismic station, to a regional approach where a single model is trained for the whole study area. An extension of the models, which distinguishes spurious phase associations from real seismic events in automatic event definitions, has also been implemented. When applying our models to evaluation data distinguishing between earthquakes and blasts, we achieve an accuracy of about 98 per cent for automatic events and 99 per cent for manually analysed events. In areas located close to large underground mines, where all four event types are observed, the corresponding accuracy is about 90 and 96 per cent, respectively. The accuracy when distinguishing spurious events from real seismic events is about 95 per cent. We find that the majority of erroneous classifications can be traced back to uncertainties in automatic phase picks and location estimates. The models are already in use at the SNSN, both for preliminary type predictions of automatic events and for reviewing manually analysed events.
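A minimal sketch of the feature-extraction step described above is given below, assuming the phase picks are already known; the band edges, sampling rate, and window indices are placeholders rather than values from the study.

```python
# Sketch of the described feature pipeline: bandpass the waveform into
# narrow bands, cut each band at the (assumed known) phase picks, and
# take RMS amplitudes as network inputs. All numbers are placeholders.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def rms_features(trace, fs, bands, windows):
    """bands: (low, high) Hz pairs; windows: (start, end) sample pairs
    for the P, P-coda, S and S-coda segments."""
    feats = []
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        filt = sosfiltfilt(sos, trace)
        for a, b in windows:
            feats.append(np.sqrt(np.mean(filt[a:b] ** 2)))
    return np.array(feats)              # length = n_bands * n_windows

fs = 100.0                              # Hz (placeholder)
trace = np.random.randn(6000)           # stand-in for one component
bands = [(1 + 2 * k, 3 + 2 * k) for k in range(20)]          # 20 narrow bands
windows = [(1000, 1500), (1500, 2500), (2500, 3000), (3000, 4500)]
x = rms_features(trace, fs, bands, windows)                  # 80 inputs
```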
In response to public concerns and campaigns, some United Kingdom supermarkets have implemented policies to reduce less-healthy food at checkouts. We explored the effects of these policies on purchases of less-healthy foods commonly displayed at checkouts.
We used a natural experimental design and two data sources providing complementary and unique information. We analysed data on purchases of small packages of common, less-healthy, checkout foods (sugary confectionery, chocolate, and potato crisps) from 2013 to 2017 from nine UK supermarkets (Aldi, Asda, Co-op, Lidl, M&S, Morrisons, Sainsbury's, Tesco, and Waitrose). Six supermarkets implemented a checkout food policy between 2013 and 2017 and were considered intervention stores; the remainder were comparators. Firstly, we studied the longitudinal association between implementation of checkout policies and purchases taken home. We used data from a large (n ≈ 30,000) household purchase panel of food brought home to conduct controlled interrupted time series analyses of purchases of less-healthy common checkout foods from 12 months before to 12 months after implementation. We conducted separate analyses for each intervention supermarket, using others as comparators. We synthesised results across supermarkets using random effects meta-analyses. Implementation of a checkout food policy was associated with an immediate reduction in four-weekly purchases of common checkout foods of 157,000 (72,700–242,800) packages per percentage market share, equivalent to a 17.3% reduction. This decrease was sustained at 1 year with 185,100 (121,700–248,500) fewer packages purchased per 4 weeks per percentage market share, equivalent to a 15.5% reduction. The immediate, but not sustained, effect was robust to sensitivity analysis. Secondly, we studied the cross-sectional association between checkout food policies and purchases eaten without being taken home. We used data from a smaller (n ≈ 7,500) individual purchase panel of food bought and eaten 'on the go'. We conducted cross-sectional analyses comparing purchases of common checkout foods in 2016-2017 from supermarkets with and without checkout food policies. There were 76.4% (95% confidence interval 48.6%–89.1%) fewer annual purchases of less-healthy common checkout foods from supermarkets with versus without checkout food policies. The main limitations of the study are that we do not know where in the store purchases were selected and cannot determine the effect of changes in purchases on consumption. Other interventions may also have been responsible for the results seen.
There is a potential impact of checkout food policies on purchases. Voluntary supermarket-led activities may have public health benefits.
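The per-supermarket effects above were pooled with random-effects meta-analyses; below is a minimal sketch of the classic DerSimonian-Laird pooling step, with made-up effect estimates and standard errors rather than the study's actual data.

```python
# Minimal sketch of random-effects pooling (DerSimonian-Laird).
# Effect estimates and standard errors below are invented.
import numpy as np

def random_effects_pool(effects, ses):
    w = 1.0 / ses**2                                  # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)            # heterogeneity statistic
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)     # between-study variance
    w_re = 1.0 / (ses**2 + tau2)                      # random-effects weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    return pooled, np.sqrt(1.0 / np.sum(w_re))        # estimate and its SE

effects = np.array([-0.19, -0.15, -0.21, -0.12, -0.18, -0.16])  # 6 stores
ses = np.array([0.05, 0.04, 0.06, 0.05, 0.07, 0.04])
print(random_effects_pool(effects, ses))
```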
The availability of time series of the evolution of the properties of physical systems is increasing, stimulating the development of many novel methods for the extraction of information about their behaviour over time, including whether or not they arise from deterministic or stochastic dynamical systems. Surrogate data testing is an essential part of many of these methods, as it enables robust statistical evaluations to ensure that the results observed are not obtained by chance, but are a true characteristic of the underlying system.
The surrogate data technique is based on the comparison of a particular property of the data (a discriminating statistic) with the distribution of the same property calculated in a set of constructed signals (surrogates) which match the original data set but do not possess the property that is being tested. Fourier-transform-based surrogates remain the most popular, yet many more options have since been developed to test increasingly varied null hypotheses while characterizing the dynamics of complex systems, including uncorrelated and correlated noise, coupling between systems, and synchronization.
Here, we provide a detailed overview of a wide range of surrogate types, discuss their practical applications and demonstrate their use in both numerically simulated and real experimental systems. We also compare the performance of various surrogate types for the detection of nonlinearity, synchronization and coherence, coupling strength between systems, and the nature of coupling. A MATLAB toolbox for many of the surrogate methods is provided.
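As a Python analogue of the basic Fourier-transform surrogate (the toolbox accompanying the paper is in MATLAB), the sketch below randomizes phases while preserving the amplitude spectrum, so that linear correlations survive but any nonlinear structure is destroyed.

```python
# Python analogue (not the authors' MATLAB toolbox) of the basic
# Fourier-transform surrogate: keep the amplitude spectrum, randomize phases.
import numpy as np

def ft_surrogate(x, rng=np.random.default_rng()):
    spec = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, spec.size)
    phases[0] = 0.0                       # keep the mean (zero frequency)
    if x.size % 2 == 0:
        phases[-1] = 0.0                  # keep the Nyquist bin real
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=x.size)

x = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.1 * np.random.randn(1000)
s = ft_surrogate(x)                       # same spectrum, randomized phases
```

In practice, the discriminating statistic is computed on the original series and on many such surrogates, and the null hypothesis is rejected only if the original value lies outside the surrogate distribution.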
Interrupted time series (ITS) analysis is a valuable study design for evaluating the effectiveness of population-level health interventions that have been implemented at a clearly defined point in time. It is increasingly being used to evaluate the effectiveness of interventions ranging from clinical therapy to national public health legislation. Whereas the design shares many properties of regression-based approaches in other epidemiological studies, there are a range of unique features of time series data that require additional methodological considerations. In this tutorial, we use a worked example to demonstrate a robust approach to ITS analysis using segmented regression. We begin by describing the design and considering when ITS is an appropriate design choice. We then discuss the essential, yet often omitted, step of proposing the impact model a priori. Subsequently, we demonstrate the approach to statistical analysis including the main segmented regression model. Finally, we describe the main methodological issues associated with ITS analysis: over-dispersion of time series data, autocorrelation, adjusting for seasonal trends and controlling for time-varying confounders, and we also outline some of the more complex design adaptations that can be used to strengthen the basic ITS design.
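Below is a minimal sketch of the basic segmented regression model, with simulated monthly data and an interruption halfway through; a real analysis would also address the autocorrelation, seasonality, and over-dispersion issues outlined above.

```python
# Hedged sketch of basic segmented regression for an ITS design:
# outcome regressed on time, a post-intervention indicator (level change)
# and time-since-intervention (slope change). Data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, t0 = 48, 24                             # 48 monthly points, interruption at month 24
time = np.arange(n)
post = (time >= t0).astype(float)          # level-change indicator
since = np.where(post == 1, time - t0, 0)  # slope-change term
y = 50 + 0.2 * time - 5 * post - 0.3 * since + rng.normal(0, 1.5, n)

X = sm.add_constant(np.column_stack([time, post, since]))
fit = sm.OLS(y, X).fit()
print(fit.params)   # baseline level, pre-trend, level change, slope change
```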
Fault detection and classification are among the most essential techniques in modern industrial monitoring, because early detection can prevent high-cost maintenance. Given the complexity of wind turbines and the considerable amount of data available via SCADA systems, machine learning methods, and specifically deep learning approaches, are powerful means of solving the problem of fault detection in wind turbines. In this article, a novel deep learning fault detection and classification method is presented based on time-series analysis and convolutional neural networks (CNNs) in order to deal with some classes of faults in wind turbine machines. To validate this approach, challenging scenarios, which consist of less than 5% performance reduction (which is hard to identify) in the two actuators or four sensors of the wind turbine along with sensor noise, are investigated, and appropriate CNN structures are suggested. Finally, these algorithms are evaluated in simulation on the data of a 4.8 MW wind turbine benchmark, and their accuracy confirms the convincing performance of the proposed methods. The proposed algorithms are applicable to both on-shore and off-shore wind turbine machines.
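To indicate the shape of such a model, the sketch below defines a small 1-D CNN in PyTorch that maps multichannel time-series windows to fault classes; the channel counts, layer sizes, and number of classes are placeholders, not the architecture used in the article.

```python
# Placeholder architecture, not the paper's exact network: a minimal
# 1-D CNN mapping multichannel SCADA time-series windows to fault classes.
import torch
import torch.nn as nn

class FaultCNN(nn.Module):
    def __init__(self, n_channels=8, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),       # pool over the time axis
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                  # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

model = FaultCNN()
scores = model(torch.randn(4, 8, 256))     # 4 windows, 8 sensors, 256 samples
print(scores.shape)                        # torch.Size([4, 5])
```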
SUMMARY
Adaptive noise cancelling of multichannel magnetic resonance sounding (MRS) signals is investigated. An analysis of the noise sources affecting MRS signals shows that the applicability of adaptive noise cancelling is primarily limited to cancelling powerline harmonics. The problems of handling spikes in MRS signals are discussed, and an efficient algorithm for spike detection is presented. The optimum parameters for multichannel adaptive noise cancelling are identified through simulations with synthetic signals added to noise-only recordings from an MRS instrument. We discuss the design and efficiency of different stacking methods. The results from multichannel adaptive noise cancelling are compared to time-domain multichannel Wiener filtering. Our results show that, within the experimental uncertainty, the two methods give identical results.
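For readers unfamiliar with the technique, below is a conceptual sketch of single-reference adaptive noise cancelling with a least-mean-squares (LMS) filter, where a reference channel that records the powerline harmonics but not the MRS signal is used to predict and subtract the interference; all signals, the filter length, and the step size are made up for illustration.

```python
# Conceptual LMS adaptive noise canceller: the error signal is the
# primary channel with reference-predicted interference removed.
import numpy as np

def lms_cancel(primary, reference, n_taps=32, mu=0.005):
    w = np.zeros(n_taps)
    cleaned = np.zeros_like(primary)
    for i in range(n_taps, len(primary)):
        x = reference[i - n_taps:i]          # recent reference samples
        e = primary[i] - w @ x               # error = primary minus prediction
        w += 2 * mu * e * x                  # LMS weight update
        cleaned[i] = e
    return cleaned

fs = 1000.0
t = np.arange(5000) / fs
harmonic = np.sin(2 * np.pi * 50 * t)                       # powerline interference
mrs = 0.2 * np.exp(-2 * t) * np.sin(2 * np.pi * 100 * t)    # toy decaying signal
primary = mrs + harmonic + 0.05 * np.random.randn(t.size)
reference = np.roll(harmonic, 3) + 0.05 * np.random.randn(t.size)
cleaned = lms_cancel(primary, reference)
```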