Many studies have found a link between time spent using social media and mental health issues, such as depression and anxiety. However, the existing research is dominated by cross-sectional designs and lacks analytic techniques that examine individual change over time. The current research involves an 8-year longitudinal study examining the association between time spent using social media and depression and anxiety at the intra-individual level. Participants included 500 adolescents who completed once-yearly questionnaires between the ages of 13 and 20. Results revealed that increased time spent on social media was not associated with increased mental health issues across development when examined at the individual level. These results may help move the field beyond its past focus on screen time.
• Time spent using social media was not related to individual changes in depression or anxiety over 8 years.
• This lack of a relationship held even across the transition from adolescence to emerging adulthood.
• Results were no stronger for either girls or boys.
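The intra-individual analysis described above rests on separating each adolescent's year-to-year fluctuations from stable between-person differences. A minimal sketch of that decomposition via person-mean centering, using hypothetical column names and toy data (not the study's actual variables):

```python
import pandas as pd

# Hypothetical long-format data: one row per participant-year.
df = pd.DataFrame({
    "id":         [1, 1, 1, 2, 2, 2],
    "sm_hours":   [2.0, 3.0, 4.0, 1.0, 1.0, 1.0],
    "depression": [5.0, 5.0, 5.0, 9.0, 8.0, 10.0],
})

# Between-person component: each participant's own mean across waves.
person_means = df.groupby("id")["sm_hours"].transform("mean")
df["sm_between"] = person_means
# Within-person component: yearly deviation from the person's own mean.
# Intra-individual models relate this deviation (not raw hours) to outcomes.
df["sm_within"] = df["sm_hours"] - person_means

print(df[["id", "sm_between", "sm_within"]])
```

By construction, the within-person component sums to zero for each participant, so any association it carries reflects change over time rather than stable individual differences.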
Classifying breast cancer subtypes is crucial for clinical diagnosis and treatment. However, the early symptoms of breast cancer may not be apparent. Rapid advances in high-throughput sequencing technology have generated large amounts of multi-omics biological data. Leveraging and integrating the available multi-omics data can effectively enhance the accuracy of identifying breast cancer subtypes. However, few efforts have focused on identifying the associations among different omics data types to predict breast cancer subtypes.
In this paper, we propose a differential sparse canonical correlation analysis network (DSCCN) for classifying breast cancer subtypes. DSCCN performs differential analysis on multi-omics expression data to identify differentially expressed (DE) genes and adopts sparse canonical correlation analysis (SCCA) to mine highly correlated features among the multi-omics DE-genes. Meanwhile, DSCCN trains a separate multi-task deep neural network on the correlated DE-genes to predict breast cancer subtypes, which naturally addresses the data-heterogeneity problem of integrating multi-omics data.
The experimental results show that, by mining the associations among multi-omics data, DSCCN classifies breast cancer subtypes more accurately than existing methods.
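The SCCA step in DSCCN looks for linear combinations of features in two omics views that are maximally correlated. A minimal sketch of plain (non-sparse) canonical correlation on synthetic two-view data, as a stand-in for the sparse variant used in the paper:

```python
import numpy as np

def cca(X, Y):
    """Largest canonical correlation between two feature views (rows = samples)."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    # Whiten each view via SVD; the singular values of the cross-product of
    # the whitened views are the canonical correlations.
    Ux = np.linalg.svd(X, full_matrices=False)[0]
    Uy = np.linalg.svd(Y, full_matrices=False)[0]
    s = np.linalg.svd(Ux.T @ Uy, compute_uv=False)
    return float(np.clip(s[0], 0.0, 1.0))

rng = np.random.default_rng(0)
z = rng.normal(size=(200, 1))  # shared latent signal across both "omics" views
gene_expr = z @ rng.normal(size=(1, 5)) + 0.1 * rng.normal(size=(200, 5))
methylation = z @ rng.normal(size=(1, 4)) + 0.1 * rng.normal(size=(200, 4))
print(cca(gene_expr, methylation))  # close to 1: the views share a latent factor
```

The sparse variant additionally penalizes the canonical weight vectors (e.g., with an L1 term) so that only a small subset of DE-genes per view carries the correlation.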
Capturing the complex interaction between node attributes and network structure is important for attributed network embedding and anomaly detection. However, few methods explicitly model the correlation between these two views, the node attributes and the network structure. In this paper, we propose a canonical-correlation-based attributed network anomaly detection method (CaCo), which assumes that in attributed networks the attribute and structure features of normal nodes are strongly correlated, whereas those of abnormal nodes are only weakly correlated. Consequently, a joint learning mechanism is designed in CaCo to explicitly measure the correlation between the two views in the latent space. Specifically, a weight-sharing graph convolutional network backbone encodes the node features of the attribute and structure views in the latent space. A Kullback-Leibler (KL) divergence regularization is then used to align the distributions of the two views. Finally, the parameters of CaCo are optimized by maximizing the correlation between the attribute and structure features of normal nodes during training, and anomalies are detected by measuring the correlation between the two views during testing. Extensive experiments on six real-world datasets demonstrate the effectiveness of the proposed method compared to state-of-the-art techniques.
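The scoring rule at the heart of this approach (low cross-view correlation implies anomaly) can be sketched independently of the GCN encoder. The embeddings below are synthetic stand-ins for the learned latent representations, and the per-node correlation is computed as a centered cosine between the two views:

```python
import numpy as np

def anomaly_scores(H_attr, H_struct, eps=1e-12):
    """Score each node by the lack of agreement between its attribute-view and
    structure-view embeddings: low cross-view correlation -> high score."""
    Ha = H_attr - H_attr.mean(1, keepdims=True)
    Hs = H_struct - H_struct.mean(1, keepdims=True)
    corr = (Ha * Hs).sum(1) / (
        np.linalg.norm(Ha, axis=1) * np.linalg.norm(Hs, axis=1) + eps)
    return 1.0 - corr  # in [0, 2]; higher = more anomalous

rng = np.random.default_rng(1)
H = rng.normal(size=(100, 16))
H_struct = H.copy()
H_attr = H + 0.05 * rng.normal(size=H.shape)  # normal nodes: views agree
H_attr[:5] = rng.normal(size=(5, 16))         # 5 injected anomalies: views disagree
scores = anomaly_scores(H_attr, H_struct)
print(np.argsort(scores)[-5:])  # the top-5 scores are the injected nodes
```

In the actual method the two embeddings come from a trained encoder that maximizes this correlation on normal nodes, so disagreement at test time is evidence of abnormality.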
In this study, the random finite element method, a finite element method with random field generation techniques, was applied to investigate the cross correlations between the observed head and hydraulic conductivity and specific storage at different locations and times in pumping tests. The results show that both cross correlations between the pumping well and the observation well reach their maximums before pumping reaches a steady state. Specifically, the cross correlation between the observed head and hydraulic conductivity is greatest when the temporal derivative of the observed head no longer changes significantly, and that between the observed head and specific storage is greatest when the temporal derivative (the rate of change) of the observed head is at its maximum. Based on the cross-correlation analysis, a short-term pumping strategy for hydraulic tomography is proposed to obtain the spatial distributions of hydraulic conductivity and specific storage using the successive linear estimator. This strategy was validated by Monte Carlo simulations. The paper points out that sensitivity and cross-correlation analyses describe the ensemble (averaged) behavior of heterogeneous aquifers, which is not necessarily representative of any single realization. Monte Carlo simulation is therefore suggested for validating any groundwater inverse modeling result.
Key Points
The cross correlations between the observed head and hydraulic parameters are investigated by a random finite element method
The cross correlations reach their maximums before flow reaches a steady state in a pumping (or injection) test
The successive linear estimator can yield good estimates of the distributions of hydraulic parameters from short-term pumping tests
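The ensemble cross-correlation underlying these key points can be illustrated with a toy Monte Carlo experiment. The response model below is a hypothetical stand-in, not a groundwater flow solver; it only shows how the correlation between a random parameter (here log-conductivity) and the observed head is computed across realizations at each observation time:

```python
import numpy as np

rng = np.random.default_rng(2)
n_real, n_t = 500, 50
t = np.linspace(0.1, 5.0, n_t)

# Hypothetical ensemble: each realization draws a log-conductivity (lnK), and
# the observed drawdown responds through a toy exponential transient model.
lnK = rng.normal(0.0, 0.5, size=n_real)
rate = np.exp(-lnK)                      # toy rate: higher lnK -> slower response
s = 1.0 - np.exp(-np.outer(rate, t))     # drawdown, shape (n_real, n_t)
s += 0.02 * rng.normal(size=s.shape)     # observation noise

# Ensemble cross-correlation between lnK and the observed head at each time.
rho = np.array([np.corrcoef(lnK, s[:, j])[0, 1] for j in range(n_t)])
j_max = int(np.argmax(np.abs(rho)))
print(t[j_max], rho[j_max])  # time and value of the strongest correlation
```

In the paper, such correlation maps (computed from the random finite element ensemble rather than a toy model) identify the observation times that are most informative about each parameter, motivating the short-term pumping strategy.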
• We re-analyse our previous data to address Mazaheri et al.'s (2022) criticisms.
• The re-analysis partly supports the relationship between ΔPAF and pain-free state PAF.
• It provides weak support for the relationship between pain-free state PAF and pain perception.
• The new analysis reinstates the validity of our previous conclusions.
• Methodological differences may account for the lack of conceptual replication.
In response to Mazaheri et al.'s critique, we revisited our study (Valentini et al., 2022) on the relationship between peak alpha frequency (PAF) and pain. Their commentary prompted us to reassess our data to address the independence between slow and slowing alpha brain oscillations, as well as the predictivity of slow alpha oscillations in pain perception. Bayesian correlation analyses revealed mixed support for independence. Investigating predictivity, we found inconsistent associations between pre-PAF and unpleasantness ratings. We critically reflected on methodological and theoretical issues on the path to PAF validation as a pain biomarker. We emphasized the need for diversified methodology and analytical approaches as well as robust findings across research groups.
Recently, canonical correlation analysis (CCA) has been widely used in steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) due to its high efficiency, robustness, and simple implementation. However, a method that exploits harmonic SSVEP components to enhance CCA-based frequency detection has not been well established.
This study proposed a filter bank canonical correlation analysis (FBCCA) method to incorporate fundamental and harmonic frequency components to improve the detection of SSVEPs. A 40-target BCI speller based on frequency coding (frequency range: 8-15.8 Hz, frequency interval: 0.2 Hz) was used for performance evaluation. To optimize the filter bank design, three methods (M1: sub-bands with equally spaced bandwidths; M2: sub-bands corresponding to individual harmonic frequency bands; M3: sub-bands covering multiple harmonic frequency bands) were proposed for comparison. Classification accuracy and information transfer rate (ITR) of the three FBCCA methods and the standard CCA method were estimated using an offline dataset from 12 subjects. Furthermore, an online BCI speller adopting the optimal FBCCA method was tested with a group of 10 subjects.
The FBCCA methods significantly outperformed the standard CCA method. Method M3 achieved the highest classification performance. At a spelling rate of ∼33.3 characters/min, the online BCI speller obtained an average ITR of 151.18 ± 20.34 bits/min.
By incorporating the fundamental and harmonic SSVEP components in target identification, the proposed FBCCA method significantly improves the performance of the SSVEP-based BCI, and thereby facilitates its practical applications such as high-speed spelling.
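The FBCCA idea can be sketched compactly on synthetic data: band-pass the EEG into sub-bands, compute the canonical correlation against sine-cosine reference signals at each candidate frequency, and combine the squared correlations with the weights w(n) = n^-1.25 + 0.25 commonly used in the FBCCA literature. The filter edges and parameters here are illustrative, not the paper's exact settings:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def canon_corr(X, Y):
    """Largest canonical correlation between two multichannel signals (rows = samples)."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    Qx, Qy = np.linalg.qr(X)[0], np.linalg.qr(Y)[0]
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def reference(f, fs, n_samples, n_harm=3):
    """Sine-cosine reference signals at f and its harmonics."""
    t = np.arange(n_samples) / fs
    return np.column_stack([fn(2 * np.pi * h * f * t)
                            for h in range(1, n_harm + 1)
                            for fn in (np.sin, np.cos)])

def fbcca(eeg, freqs, fs, n_bands=5):
    """Filter-bank CCA: per-sub-band CCA scores combined with w(n) = n^-1.25 + 0.25."""
    w = np.arange(1, n_bands + 1) ** -1.25 + 0.25
    scores = np.zeros(len(freqs))
    for n in range(1, n_bands + 1):
        # Sub-band n spans from the n-th harmonic band up to a fixed upper edge.
        b, a = butter(4, [8 * n / (fs / 2), 88 / (fs / 2)], btype="band")
        xb = filtfilt(b, a, eeg, axis=0)
        for k, f in enumerate(freqs):
            scores[k] += w[n - 1] * canon_corr(xb, reference(f, fs, len(eeg))) ** 2
    return freqs[int(np.argmax(scores))]

# Synthetic single-trial test: a 10 Hz SSVEP in noise, 8 channels at 250 Hz.
fs, n_samp = 250, 1000
t = np.arange(n_samp) / fs
rng = np.random.default_rng(3)
eeg = (np.sin(2 * np.pi * 10 * t)[:, None] * rng.uniform(0.5, 1.0, 8)
       + 0.5 * rng.normal(size=(n_samp, 8)))
print(fbcca(eeg, freqs=np.arange(8, 16, 0.2), fs=fs))  # recovers the 10 Hz target
```

Standard CCA corresponds to a single broad band; the filter bank lets higher sub-bands re-weight the harmonic energy that a single CCA pass dilutes.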
Self-assembled nanocrystal superlattices have attracted considerable scientific attention due to their potential technological applications. However, the nucleation and growth mechanisms of superlattice assemblies remain largely unresolved because of the experimental difficulty of monitoring intermediate states. Here, the self-assembly of colloidal PbS nanocrystals is studied in real time by a combination of controlled solvent evaporation from the bulk solution and in situ small-angle X-ray scattering (SAXS) in transmission geometry. For the first time for this system, a hexagonal close-packed (hcp) superlattice formed in a solvent-vapor-saturated atmosphere is observed during slow solvent evaporation from a colloidal suspension. The highly ordered hcp superlattice then transitions into the final body-centered cubic superlattice upon complete drying. Additionally, X-ray cross-correlation analysis of Bragg reflections is applied to access information on precursor structures in the assembly process that is not evident from conventional SAXS analysis. The detailed evolution of the crystal structure with time provides key results for understanding the assembly mechanism and the role of ligand–solvent interactions, which is important both for fundamental research and for the fabrication of superlattices with desired properties.
Time-resolved self-assembly of colloidal PbS nanocrystals upon controlled solvent evaporation is studied using in situ synchrotron small-angle X-ray scattering and X-ray cross-correlation analysis. PbS nanocrystals first form a highly ordered hexagonal close-packed superlattice in a solvent-vapor-saturated atmosphere, followed by a transition into the final body-centered cubic superlattice upon complete evaporation of the solvent.
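X-ray cross-correlation analysis extracts local symmetry information from angular intensity fluctuations that azimuthally averaged SAXS discards. A minimal 1D sketch on a synthetic intensity ring (the study applies the method to Bragg reflections in 2D detector images):

```python
import numpy as np

def angular_ccf(I_ring):
    """Angular cross-correlation C(delta) = <I(phi) I(phi+delta)>_phi, computed
    via the Wiener-Khinchin shortcut: FFT -> power spectrum -> inverse FFT."""
    I = I_ring - I_ring.mean()
    power = np.abs(np.fft.fft(I)) ** 2
    return np.fft.ifft(power).real / len(I)

# Synthetic ring intensity with 6-fold symmetry plus noise.
phi = np.linspace(0, 2 * np.pi, 360, endpoint=False)
rng = np.random.default_rng(4)
I_ring = 1.0 + 0.5 * np.cos(6 * phi) + 0.1 * rng.normal(size=phi.size)

C = angular_ccf(I_ring)
# The dominant Fourier component of C(delta) reveals the symmetry order.
n_fold = int(np.argmax(np.abs(np.fft.fft(C))[1:20])) + 1
print(n_fold)  # → 6
```

Because C(Δ) survives orientational averaging over many randomly oriented domains, its Fourier decomposition can flag precursor order (e.g., hexagonal motifs) before sharp Bragg peaks appear.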
Identifying causal mechanisms has become a cornerstone of experimental social psychology, and editors of top social psychology journals champion the use of mediation methods, particularly innovative ones when possible (e.g., Halberstadt, 2010; Smith, 2012). Commonly, studies in experimental social psychology randomly assign participants to levels of the independent variable, measure the mediating and dependent variables, and assume that the mediator causally affects the dependent variable. However, participants are not randomly assigned to levels of the mediating variable(s); that is, the relationship between the mediating and dependent variables is correlational. Although researchers likely know that correlational studies pose a risk of confounding, this problem is often overlooked in experimental designs that randomly assign participants to levels of the independent variable and merely measure the mediator (i.e., "measurement-of-mediation" designs). Experimentally manipulating the mediator offers a solution, yet these methods come with their own challenges (e.g., Bullock, Green, & Ha, 2010). We describe types of experimental manipulations targeting the mediator (manipulations demonstrating a causal effect of the mediator on the dependent variable and manipulations targeting the strength of that causal effect) and types of experimental designs (double randomization, concurrent double randomization, and parallel), provide published examples of each design, and discuss their strengths and challenges. This paper thus provides a practical guide to manipulation-of-mediator designs in light of their challenges and encourages researchers to adopt more rigorous approaches to mediation, because manipulation-of-mediator designs strengthen the ability to infer a causal effect of the mediating variable on the dependent variable.
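The confounding risk in measurement-of-mediation designs is easy to demonstrate by simulation: when an unobserved variable drives both the mediator and the outcome, the estimated "b path" looks substantial even though the mediator has no causal effect at all. A hypothetical illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10_000
X = rng.integers(0, 2, n).astype(float)      # randomized treatment
U = rng.normal(size=n)                       # unobserved confounder
M = 0.5 * X + U + 0.5 * rng.normal(size=n)   # mediator (also driven by U)
Y = 0.5 * X + U + 0.5 * rng.normal(size=n)   # outcome; TRUE M -> Y effect is zero

# Naive measurement-of-mediation estimate: regress Y on X and M jointly.
Z = np.column_stack([np.ones(n), X, M])
beta = np.linalg.lstsq(Z, Y, rcond=None)[0]
print(round(beta[2], 2))  # the estimated "b path" is far from its true value of 0
```

Randomly assigning participants to levels of the mediator (as in double-randomization designs) breaks the M-U link, which is exactly why the manipulation-of-mediator designs discussed above license the causal claim that this regression cannot.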
Multiview Privileged Support Vector Machines. Tang, Jingjing; Tian, Yingjie; Zhang, Peng. IEEE Transactions on Neural Networks and Learning Systems, Vol. 29, No. 8, August 2018. Journal Article.
Multiview learning (MVL), by exploiting the complementary information among multiple feature sets, can improve the performance of many existing learning tasks. Support vector machine (SVM)-based models have frequently been used for MVL. A typical SVM-based MVL model is SVM-2K, which extends SVM for MVL by using the distance-minimization version of kernel canonical correlation analysis. However, SVM-2K cannot fully unleash the power of the complementary information among different feature views. Recently, the framework of learning using privileged information (LUPI) has been proposed to model data with complementary information. Motivated by LUPI, we propose a new multiview privileged SVM model, PSVM-2V, for MVL. This brings a new perspective: extending LUPI to MVL. The optimization of PSVM-2V can be solved by a classical quadratic programming solver. We theoretically analyze the performance of PSVM-2V from the viewpoints of the consensus principle, the generalization error bound, and the SVM-2K learning model. Experimental results on 95 binary datasets demonstrate the effectiveness of the proposed method.
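SVM-2K's key ingredient is a constraint forcing the two views' decision scores to agree on each training point. A simplified linear analogue (ridge loss in place of the hinge loss, so the solution is a closed-form linear system rather than a QP) conveys the co-regularization idea on synthetic data:

```python
import numpy as np

def twoview_ridge(XA, XB, y, lam=1.0, gamma=1.0):
    """Two-view ridge classifier with a view-agreement penalty
    gamma * ||XA wA - XB wB||^2, a linear sketch of SVM-2K's similarity
    constraint (ridge loss instead of hinge, solved in closed form)."""
    dA, dB = XA.shape[1], XB.shape[1]
    A = np.zeros((dA + dB, dA + dB))
    A[:dA, :dA] = (1 + gamma) * XA.T @ XA + lam * np.eye(dA)
    A[dA:, dA:] = (1 + gamma) * XB.T @ XB + lam * np.eye(dB)
    A[:dA, dA:] = -gamma * XA.T @ XB
    A[dA:, :dA] = -gamma * XB.T @ XA
    b = np.concatenate([XA.T @ y, XB.T @ y])
    w = np.linalg.solve(A, b)
    return w[:dA], w[dA:]

# Synthetic two-view data: both views carry the label plus independent noise.
rng = np.random.default_rng(6)
n = 400
y = np.where(rng.random(n) < 0.5, 1.0, -1.0)
XA = y[:, None] * rng.uniform(0.5, 1.5, (n, 3)) + rng.normal(size=(n, 3))
XB = y[:, None] * rng.uniform(0.5, 1.5, (n, 4)) + rng.normal(size=(n, 4))

wA, wB = twoview_ridge(XA, XB, y)
pred = np.sign(XA @ wA + XB @ wB)  # consensus: average the two views' scores
print((pred == y).mean())          # training accuracy, well above chance
```

PSVM-2V goes further by letting each view act as privileged information for the other during training, but the agreement term above is the shared structural idea.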