An introduction to the topic of front matter, with remarks addressing three papers that resulted from a lively round table and audience discussion on dictionary front matter at the May 2019 conference of the Dictionary Society of North America, held in Bloomington, Indiana, USA. There, round table participants interacted with a large group of lexicographers in the audience, who expanded upon the participants’ remarks as they discussed dictionary front matter in different languages, from different cultural traditions, and from varying time periods. Specialists already recognize that the online environment, which is often a “front-matter free zone,” represents a significant loss of the lexicographic information traditionally provided in print dictionaries. It is hoped that this introduction (as well as the three papers included with it) will spur still more conversation in the field of lexicography.
•Dominating event patterns are discovered and used as reference events.
•Relationships among video events are described with a smoothness regularization.
•A sparse representation (SR) framework is constructed from the reference events and the smoothness regularization.
•Abnormal events become easier to identify.
Abnormal event detection has become a widely studied research topic, especially for crowded scenes. In recent years, many dictionary learning algorithms have been developed to learn the regularities of normal events, and they have shown promising performance for abnormal event detection. However, they seldom consider structural information, which plays an important role in many computer vision tasks, such as image denoising and segmentation. In this paper, structural information is explored within a sparse representation framework. On the one hand, we introduce a new concept, the reference event, which captures the potential event patterns in normal video events; compared with abnormal events, normal ones are more likely to approximate these reference events. On the other hand, a smoothness regularization is constructed to describe the relationships among video events, covering both similarities in the feature space and relative positions in the video sequence. Video events that are related to each other are therefore more likely to receive similar representations. The structured dictionary and the sparse representation coefficients are optimized through an iterative updating strategy. In the testing phase, abnormal events are identified as samples that cannot be well represented by the learned dictionary. Extensive experiments and comparisons with state-of-the-art algorithms demonstrate the effectiveness of the proposed algorithm.
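The testing-phase idea lends itself to a short sketch: an event descriptor that the learned dictionary cannot reconstruct sparsely yields a high residual and is flagged as abnormal. Below is a minimal sparse-coding baseline in Python, with ISTA and a random unit-norm dictionary standing in for the paper's learned structured dictionary; the reference events and smoothness regularization are not modeled here.

```python
import numpy as np

def ista_sparse_code(D, x, lam=0.1, n_iter=100):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 via ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        a = a - grad / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

def abnormality_score(D, x, lam=0.1):
    """Reconstruction residual of an event descriptor under dictionary D."""
    a = ista_sparse_code(D, x, lam)
    return np.linalg.norm(x - D @ a)

# Toy usage: in practice D would come from dictionary learning on normal events.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)                      # unit-norm atoms
normal = D[:, 3] + 0.01 * rng.standard_normal(64)   # lies near the dictionary span
odd = rng.standard_normal(64)                       # arbitrary event descriptor
print(abnormality_score(D, normal), abnormality_score(D, odd))
```

Thresholding this score (e.g., on a validation set of normal events) then yields the abnormal/normal decision.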
Due to the rapid development of DNA microarray technology, large volumes of microarray data have become available, and classifying these data has proven useful for cancer diagnosis, treatment and prevention. However, microarray data classification is still a challenging task, since gene expression data typically contain a huge number of genes but only a small number of samples. As a result, a computational method for reducing the dimension of microarray data is necessary. In this paper, we introduce a computational gene selection model for microarray data classification via adaptive hypergraph embedded dictionary learning (AHEDL). Specifically, a dictionary is learned from the feature space of the original high-dimensional microarray data, and this learned dictionary is used to represent the original genes with a reconstruction coefficient matrix. We then use an l2,1-norm regularization to impose row sparsity on the coefficient matrix for selecting discriminative genes. Meanwhile, in order to capture the local manifold geometrical structure of the original microarray data in a high-order manner, a hypergraph is adaptively learned and embedded into the model. An iterative updating algorithm is designed for solving the optimization problem. To validate the efficacy of the proposed model, we have conducted experiments on six publicly available microarray data sets, and the results demonstrate that AHEDL outperforms other state-of-the-art methods in terms of microarray data classification. (A minimal sketch of the l2,1-based selection step follows the highlights below.)
Abbreviations: AHEDL, Adaptive Hypergraph Embedded Dictionary Learning; ADMM, Alternating Direction Method of Multipliers; SVM, Support Vector Machine; RF, Random Forest; k-NN, k-Nearest Neighbor; CV, cross validation; MSVM-RFE, Multiclass Support Vector Machine-Recursive Feature Elimination; KernelPLS, Kernel Partial Least Squares; WLMGS, Weight Local Modularity based Gene Selection; GRSL-GS, Gene Selection via Subspace Learning and Manifold Regularization; LNNFW, Local-Nearest-Neighbors-based Feature Weighting for Gene Selection; RLR, Regularized Logistic Regression; ACC, accuracy; SD, standard deviation; ANOVA, Analysis of Variance; DF, Degrees of Freedom; SS, Sum-of-Square; MS, Mean Sum-of-Square; F, F-value; Sig, statistical significance; SRBCT, Small Round Blue Cell Tumors; GCM, Global Cancer Map; CLL_SUB_111, B-cell chronic lymphocytic leukemia.
•We introduce a new gene selection model for microarray data classification.
•A dictionary is learned to reconstruct the original genes.
•A hypergraph is adaptively learned and embedded into the model.
•The hypergraph is used to capture the high-order locality of microarray data.
•Experiments on six data sets validate the efficacy of the proposed model.
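To make the selection step concrete, here is a minimal sketch of l2,1-regularized feature selection in Python. It is not AHEDL itself (the dictionary and the adaptive hypergraph term are omitted, and a plain label-fitting objective stands in for the reconstruction term); it only illustrates how the l2,1 penalty drives whole rows of the coefficient matrix toward zero, so genes can be ranked by row norm. The iteratively reweighted least-squares update is the standard device for this penalty.

```python
import numpy as np

def l21_feature_select(X, Y, lam=1.0, n_iter=30, eps=1e-8):
    """min_W ||X W - Y||_F^2 + lam * ||W||_{2,1} via iterative reweighting.

    Rows of W with large l2 norm mark informative features (genes).
    X: (n_samples, n_genes); Y: (n_samples, n_classes) one-hot labels.
    """
    n, d = X.shape
    g = np.ones(d)                              # reweighting diagonal
    for _ in range(n_iter):
        A = X.T @ X + lam * np.diag(g)
        W = np.linalg.solve(A, X.T @ Y)
        row_norms = np.linalg.norm(W, axis=1)
        g = 1.0 / (2.0 * np.maximum(row_norms, eps))
    return np.linalg.norm(W, axis=1)            # per-gene relevance scores

# Toy usage: 40 samples, 200 genes, only the first 5 genes carry signal.
rng = np.random.default_rng(1)
X = rng.standard_normal((40, 200))
y = (X[:, :5].sum(axis=1) > 0).astype(int)
Y = np.eye(2)[y]                                # one-hot labels
scores = l21_feature_select(X, Y, lam=5.0)
print(np.argsort(scores)[::-1][:10])            # top-10 ranked genes
```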
An increasing number of studies in political communication focus on the "sentiment" or "tone" of news content, political speeches, or advertisements. This growing interest in measuring sentiment coincides with a dramatic increase in the volume of digitized information. Computer automation has a great deal of potential in this new media environment. The objective here is to outline and validate a new automated measurement instrument for sentiment analysis in political texts. Our instrument uses a dictionary-based approach consisting of a simple word count of the frequency of keywords in a text from a predefined dictionary. The design of the freely available Lexicoder Sentiment Dictionary (LSD) is discussed in detail here. The dictionary is tested against a body of human-coded news content, and the resulting codes are also compared to results from nine existing content-analytic dictionaries. Analyses suggest that the LSD produces results that are more systematically related to human coding than are results based on the other available dictionaries. The LSD is thus a useful starting point for a revived discussion about dictionary construction and validation in sentiment analysis for political communication.
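Since the dictionary-based approach is, at its core, a keyword frequency count against predefined word lists, it reduces to a few lines of Python. The word lists below are hypothetical stand-ins: the actual LSD contains thousands of entries and additional machinery (such as negation handling) not shown here.

```python
import re
from collections import Counter

# Hypothetical mini-dictionary; the real Lexicoder Sentiment Dictionary
# is far larger and also handles negated phrases.
POSITIVE = {"good", "progress", "support", "win", "strong"}
NEGATIVE = {"bad", "crisis", "attack", "lose", "weak"}

def tone(text):
    """Net tone = (positive hits - negative hits) / total tokens."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    pos = sum(counts[w] for w in POSITIVE)
    neg = sum(counts[w] for w in NEGATIVE)
    return (pos - neg) / max(len(tokens), 1)

print(tone("The strong economy shows good progress despite the crisis."))
```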
This paper aims to provide an overview of anglophone literature on historical lexicography. It begins by defining history and lexicography in order to explore possible relationships between them. What follows is a critical discussion of two analytical perspectives: "history in lexicography" and "lexicography in history." The former seeks to explain what historical information is, how history has permeated dictionaries, particularly those compiled on historical principles, and why the historical dictionary needs to be re-interpreted along new lines. The latter, by contrast, attempts to identify the main elements involved in the writing of a history of lexicography. Since no historical account may be regarded as complete, further research is essential; it provides an opportunity to shed light on the dictionaries that have long been neglected, correct previous errors of judgment, and propose a new reading of the factors behind dictionary-making practices.
Online Social Networks (OSNs) are an ideal place for spreading rumor events, since they make information easy to produce and disseminate. Automatically debunking these rumor events is important for pursuing and restoring the truth. However, it is challenging to apply traditional classification approaches to rumor event detection, since they rely on hand-crafted features that require daunting manual effort. Moreover, we observe that the various posts of each rumor event debate its veracity over time, and that different individuals have different emotional reactions to events, which in turn affect others’ judgments. Thus, this paper first employs an automatic construction method to develop a Sentiment Dictionary (SD) that captures fine-grained human emotional reactions to different events. Second, a Two-steps Dynamic Time Series (TsDTS) algorithm, which incorporates the sentiment information into the division process, is elaborated to retain the time-span distribution of microblog events in a natural manner. Finally, a novel two-layer Cascaded Gated Recurrent Unit (CGRU) model based on the SD and the TsDTS algorithm, named SD-TsDTS-CGRU, is proposed for rumor event detection. Experimental results on real datasets from OSNs demonstrate that the proposed SD-TsDTS-CGRU model outperforms the latest rumor event detection algorithms.
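The classification backbone of such a pipeline can be sketched briefly. Below is a minimal two-layer stacked GRU classifier in PyTorch that consumes one feature vector per time interval and emits rumor/non-rumor logits. The upstream stages (sentiment-dictionary features and the TsDTS interval division) are assumed to produce those per-interval vectors, and the dimensions are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class CascadedGRU(nn.Module):
    """Two stacked GRU layers over per-interval event features, followed
    by a binary rumor/non-rumor head. Feature extraction (text plus
    sentiment scores per time interval) is assumed to happen upstream."""
    def __init__(self, feat_dim=100, hidden=64):
        super().__init__()
        self.gru1 = nn.GRU(feat_dim, hidden, batch_first=True)
        self.gru2 = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, x):            # x: (batch, n_intervals, feat_dim)
        h1, _ = self.gru1(x)         # per-step outputs of the first layer
        _, h2 = self.gru2(h1)        # final hidden state of the second layer
        return self.head(h2[-1])     # (batch, 2) logits

model = CascadedGRU()
dummy = torch.randn(8, 20, 100)      # 8 events, 20 time intervals each
print(model(dummy).shape)            # torch.Size([8, 2])
```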
Multi-echo magnetic resonance (MR) images are acquired by changing the echo times (for T2-weighted imaging) or the relaxation times (for T1-weighted imaging) of scans. The resulting multi-echo images are usually used for quantitative MR imaging. Acquiring MR images is a slow process, and acquiring multiple scans of the same cross section for multi-echo imaging is even slower. To accelerate the scan, compressed sensing (CS) based techniques advocate partial k-space (Fourier domain) scans, with the resulting images reconstructed via structured CS algorithms. More recently, it has been shown that better results can be obtained by replacing off-the-shelf CS with adaptive reconstruction algorithms based on structured dictionary learning. In this work, we show that the reconstruction results can be further improved by using structured deep dictionaries. Experimental results on real datasets show that with our proposed technique the scan time can be cut in half compared to the state of the art.
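For intuition about what "partial k-space scans plus reconstruction" involves, here is a classic CS baseline in Python: alternate soft thresholding in a DCT sparsity domain with re-imposition of the measured k-space samples. This is deliberately the off-the-shelf style of reconstruction that the paper improves upon, not the structured deep-dictionary method; the sampling rate, threshold, and phantom are illustrative.

```python
import numpy as np
from scipy.fft import fft2, ifft2, dctn, idctn

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def cs_recon(kspace, mask, lam=0.02, n_iter=100):
    """Recover an image from partially sampled k-space by alternating
    soft thresholding in a DCT sparsity domain with k-space data
    consistency. A classic CS baseline, not the deep-dictionary model."""
    x = np.real(ifft2(kspace))               # zero-filled starting image
    for _ in range(n_iter):
        x = idctn(soft(dctn(x, norm='ortho'), lam), norm='ortho')
        k = fft2(x)
        k[mask] = kspace[mask]               # re-impose measured samples
        x = np.real(ifft2(k))
    return x

# Toy usage: keep ~50% of the k-space of a synthetic square phantom.
rng = np.random.default_rng(2)
img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0
mask = rng.random((64, 64)) < 0.5
kspace = fft2(img) * mask
print(np.linalg.norm(cs_recon(kspace, mask) - img))  # reconstruction error
```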
In this paper, we propose a remote sensing image fusion method that combines the wavelet transform and sparse representation to obtain fused images with high spectral and high spatial resolution. First, the intensity-hue-saturation (IHS) transform is applied to the Multi-Spectral (MS) images. Then, the wavelet transform is applied to the intensity component of the MS images and to the Panchromatic (Pan) image to construct a multi-scale representation of each. Within this multi-scale representation, different fusion strategies are applied to the low-frequency and high-frequency sub-images. Sparse representation with a trained dictionary is introduced for the low-frequency sub-image fusion, where the fusion rule for the sparse representation coefficients is defined by the spatial frequency maximum. For the high-frequency sub-images, which carry rich detail information, the fusion rule is established by an image-information measurement indicator. Finally, the fused results are obtained through the inverse wavelet transform and the inverse IHS transform. The wavelet transform extracts the spectral information and the global spatial details from the original image pair, while sparse representation effectively extracts the local structures of images. Our proposed fusion method can therefore preserve both the spectral information and the spatial detail of the original images. Experimental results on remote sensing images demonstrate that the proposed method maintains the spectral characteristics of the fused images at high spatial resolution.
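The wavelet side of this pipeline can be sketched compactly with PyWavelets. The snippet below fuses the MS intensity band with the Pan image at a single decomposition level: low-frequency bands are simply averaged (the paper instead fuses them via sparse coding with the spatial-frequency-maximum rule), and high-frequency bands take the larger-magnitude coefficient (standing in for the paper's information measurement indicator). It is a simplified stand-in, not the proposed method, and the inputs are random placeholders.

```python
import numpy as np
import pywt

def fuse(intensity, pan):
    """Single-level wavelet fusion of the MS intensity band and the Pan
    image. Low-frequency bands are averaged here; high-frequency bands
    take the coefficient with the larger magnitude."""
    cA1, (cH1, cV1, cD1) = pywt.dwt2(intensity, 'db2')
    cA2, (cH2, cV2, cD2) = pywt.dwt2(pan, 'db2')
    cA = 0.5 * (cA1 + cA2)                       # low-frequency fusion
    hi = [np.where(np.abs(a) >= np.abs(b), a, b) # high-frequency fusion
          for a, b in ((cH1, cH2), (cV1, cV2), (cD1, cD2))]
    return pywt.idwt2((cA, tuple(hi)), 'db2')

rng = np.random.default_rng(3)
ms_intensity = rng.random((128, 128))
pan = rng.random((128, 128))
print(fuse(ms_intensity, pan).shape)             # (128, 128)
```

In the full method, the fused intensity band would then replace the original intensity component before the inverse IHS transform.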
Zero-Shot Learning (ZSL) aims to recognize unseen classes that are absent during the training stage. Unlike existing approaches that learn a visual-semantic embedding model to bridge the low-level visual space and the high-level class prototype space, we propose a novel synthesis-based approach that addresses ZSL within a dictionary learning framework. Specifically, it learns both a dictionary matrix and a class-specific encoding matrix for each seen class, and synthesizes pseudo instances for unseen classes with the aid of seen-class prototypes. This allows us to train classifiers for the unseen classes on these pseudo instances, so that ZSL can be treated as a traditional classification task, making the method applicable to the traditional and generalized ZSL settings simultaneously. Extensive experimental results on four benchmark datasets (AwA, CUB, aPY, and SUN) demonstrate that our method yields competitive performance compared to state-of-the-art methods in both settings.
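The synthesize-then-classify idea can be illustrated with a drastically simplified sketch: fit a linear map from class prototypes to seen-class feature means (ridge regression here stands in for the paper's dictionary plus class-specific encoding matrices), generate noisy pseudo instances for the unseen classes, and classify by nearest synthesized centroid. All data, dimensions, and the noise model below are hypothetical.

```python
import numpy as np

def synthesize_and_classify(X_seen, y_seen, P_seen, P_unseen,
                            n_pseudo=50, ridge=1.0, noise=0.1):
    """Simplified synthesis-based ZSL: map prototypes to visual feature
    space, synthesize noisy pseudo instances for unseen classes, and
    return a nearest-centroid classifier over them."""
    classes = np.unique(y_seen)
    means = np.stack([X_seen[y_seen == c].mean(axis=0) for c in classes])
    # Ridge fit: prototype @ M ~ class mean in visual feature space.
    A = P_seen.T @ P_seen + ridge * np.eye(P_seen.shape[1])
    M = np.linalg.solve(A, P_seen.T @ means)
    rng = np.random.default_rng(0)
    pseudo = P_unseen @ M                        # synthesized class centres
    samples = pseudo[:, None, :] + noise * rng.standard_normal(
        (pseudo.shape[0], n_pseudo, pseudo.shape[1]))
    centroids = samples.mean(axis=1)             # one centroid per unseen class

    def predict(X):                              # nearest synthesized centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None], axis=2)
        return d.argmin(axis=1)
    return predict

# Toy usage with random data: 5 seen / 2 unseen classes, 20-d features.
rng = np.random.default_rng(4)
P_seen, P_unseen = rng.random((5, 8)), rng.random((2, 8))
X_seen = rng.standard_normal((100, 20))
y_seen = rng.integers(0, 5, 100)
clf = synthesize_and_classify(X_seen, y_seen, P_seen, P_unseen)
print(clf(rng.standard_normal((3, 20))))         # predicted unseen-class ids
```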