Matching pursuit is a greedy algorithm for computing a decomposition of a given signal as a linear combination of elements from an over-complete set (a ‘dictionary’). This study generalises the construction of an optimal dictionary to non-Gaussian envelope functions, and demonstrates that optimising the dictionary with regard to the envelope function has a significant effect on the decomposition accuracy. The dictionary construction is then evaluated and compared for a range of possible envelope functions. Based on the results, a novel ‘meta-exponential’ envelope function is proposed and it is shown that for any given decomposition accuracy, it allows one to reduce the dictionary size (hence the computational time) by ∼10% compared to the most common case of decomposition with Gabor dictionaries. More importantly, the results of a decomposition study on high-quality audio files are presented, confirming that both the choice of the envelope and adjusting the dictionary to a given envelope have a significant effect on the overall performance of matching pursuit.
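The greedy selection loop behind matching pursuit can be illustrated with a minimal sketch. A random unit-norm dictionary stands in here for the envelope-optimised dictionaries the study compares; this is an illustrative implementation of the textbook algorithm, not the authors' code:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedy sparse decomposition: at each step, pick the dictionary
    atom (unit-norm column) most correlated with the current residual,
    add its contribution to the coefficients, and subtract it."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        corr = dictionary.T @ residual       # correlation with every atom
        k = np.argmax(np.abs(corr))          # best-matching atom
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[:, k]
    return coeffs, residual

# Toy usage: a signal built from two atoms of a random over-complete dictionary
np.random.seed(0)
D = np.random.randn(32, 64)
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
x = 2.0 * D[:, 3] - 1.5 * D[:, 10]
coeffs, residual = matching_pursuit(x, D, n_iter=20)
```

The invariant `signal = dictionary @ coeffs + residual` holds throughout, and the residual norm is non-increasing, which is why a dictionary better matched to the signal's envelope reduces the number of atoms (and iterations) needed for a given accuracy.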
This article focuses on the work of Professor Lev Ivanovich Skvortsov as a lexicographer, explores the main dictionaries he was involved in as a creator and an editor, as well as new dictionaries composed by Professor Skvortsov himself, characterizes them and establishes their practical value. Standard dictionaries, including spelling dictionaries, dictionaries of grammatical Russian speech and explanatory dictionaries, are the most in demand in society. The Large Spelling Dictionary of the Russian Language (BOS), containing more than 106,000 words, under the editorship of Stepan Barkhudarov, Ivan Protchenko and Lev Skvortsov, continues the heritage of the academic Spelling Dictionary of the Russian Language, published in 30 editions. The new dictionary included many recent terms from various areas of knowledge as well as data from modern everyday speech. Many years of language observation resulted in the Large Explanatory Dictionary of Grammatical Russian Speech (8,000 words), published in 2005, 2006 and 2016. It became popular among experts and all people interested in issues of correct speech. This is the first explanatory dictionary of a prescriptive and stylistic kind in Russian lexicography. It served as a basis for a more compact School Dictionary on the Culture of Spoken Russian. For the general reader, a revised and reissued edition of the “Culture of Spoken Russian” dictionary by Professor Lev Skvortsov was introduced. This is an all-purpose dictionary covering pronunciation and stress, formation of grammatical forms and structures, verbal and nominal government, word building and phraseology, containing 3,000 entries. Speaking of the main explanatory dictionaries, we cannot overlook the Explanatory Dictionary of the Russian Language by Sergey Ozhegov, which has run through numerous editions. One of the recent editions, the 25th, was issued in 2006 under the editorship of Professor Skvortsov; it contains around 65,000 words and idioms.
This edition of the dictionary includes a fair number of new words and expressions (more than 3,000) reflecting changes in the social and political, scientific, and cultural life of Russia. Moreover, modern linguistic processes were also taken into account. Professor Skvortsov worked on bringing Ozhegov’s dictionary closer to the present day. The last and most significant work of Professor Skvortsov was the Large Explanatory and Expository Dictionary of the Russian Language. Although it was not finished, the materials collected were sufficient to publish the first volume. It represents a new kind of explanatory dictionary of the Russian language. The author noted this himself, emphasizing that this was a new type of dictionary which “together with proper linguistic data also includes general historical and terminological knowledge, details from everyday life and extralinguistic information”. The article ends with the conclusion that Professor Skvortsov performed a tremendous task, leaving us a vast linguistic heritage. His remarkable diligence, wealth of knowledge and love for his native Russian language made it possible. His dictionaries listed above are of great importance: all of them undoubtedly help their readers to raise their level of Russian proficiency and to speak and write correctly; these dictionaries can be immensely useful in classes on Modern Russian, Culture of Speech, and Practical Stylistics. Everybody - especially students and experts, in both the humanities and engineering - would benefit from using these dictionaries.
Deep dictionary learning (DDL) shows good performance in visual classification tasks. However, almost all existing DDL methods ignore the locality relationships between the input data representations and the learned dictionary atoms, and learn sub-optimal representations in the feature coding stage, which are less conducive to classification. To this end, we propose a hierarchical locality-aware deep dictionary learning (HILADLE) framework for classification, which can learn locality-constrained dictionaries at different abstract levels through hierarchical dictionary learning. The locality constraints play an important role in learning informative dictionary atoms while preserving the data structure in the original input feature space. Moreover, instead of using an identity activation function like existing DDL methods, we further boost the generalization performance of our HILADLE method with a ReLU activation function to deal with the overfitting issue caused by over-parameterization, inspired by its effectiveness in deep neural networks. Finally, the concatenation of all feature representations learned at different layers is used as input to the final classifier. We demonstrate, through an extensive set of experiments on several benchmark face recognition, image classification, and age estimation datasets, that our method is able to surpass several dictionary learning, deep dictionary learning and deep learning methods.
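The two ingredients the abstract highlights, locality-constrained coding and a ReLU nonlinearity on the codes, can be sketched together. This is a simplified illustration in the spirit of locality-constrained linear coding, not the HILADLE method itself; the function name and the ridge regulariser are assumptions for the sketch:

```python
import numpy as np

def locality_coding(x, D, k=5, lam=1e-4):
    """Code x using only its k nearest dictionary atoms (the locality
    constraint), then pass the code through ReLU, mirroring the
    nonlinearity the paper adds between dictionary layers."""
    dists = np.linalg.norm(D - x[:, None], axis=0)  # distance to each atom
    idx = np.argsort(dists)[:k]                     # k nearest atoms
    Dk = D[:, idx]
    # least-squares fit on the local atoms (ridge term for stability)
    c = np.linalg.solve(Dk.T @ Dk + lam * np.eye(k), Dk.T @ x)
    code = np.zeros(D.shape[1])
    code[idx] = np.maximum(c, 0.0)                  # ReLU activation
    return code

# Toy usage: code one sample against a random dictionary of 40 atoms
np.random.seed(2)
D = np.random.randn(10, 40)
x = np.random.randn(10)
code = locality_coding(x, D, k=5)
```

Restricting the code to nearby atoms keeps the representation faithful to the local geometry of the input space, which is the role the locality constraints play in the full hierarchical framework.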
In academia, dictionaries have become a daily commodity beyond the common yet relevant uses of spelling and meaning checks. Relying on data collected from 107 EFL learners through an opinion poll in the Saudi context, this paper investigated how such learners utilize dictionaries in their English program during their university studies. Findings showed that learners use electronic and paper-based dictionaries for limited purposes beyond the spelling and meaning check. Besides surveying dictionary type (online and paper-based), the study argues for a broader approach to dictionary use than checking the spelling and meaning of words. It construed dictionaries as special tutors that help second language learners develop a multitude of skills, including spelling, vocabulary, grammatical usage, pronunciation, and semantic features of the target language, e.g., synonyms, antonyms, polysemy, collocations, and the like. The study theorizes the dictionary not as an add-on but as an essential language learning resource for English language programs, tailored to dictionary-based tasks across the curriculum in support of sustainable language education.
A cornerstone of Oxford's language reference list, Garner's Modern English Usage now includes revisions to more than 2,000 entries to reflect the nuances of English usage not only in the United States, but also in Australia and New Zealand, the United Kingdom, Canada, and South Africa.
•The dictionary has a deep architecture, where each deeper dictionary layer is learned from a few atoms of the previous dictionary layer.
•A shared sub-dictionary is added into dictionary learning to learn and remove the common features from different classes.
•The dictionary is extended by a shift-invariant strategy with a circulant matrix to overcome the time-shift problem of vibration signals.
•DSDL is more accurate than deep learning methods in time-varying fault diagnosis with small training samples.
As the core of the Sparseland model, dictionary learning has demonstrated excellent performance in many fields, such as pattern recognition, fault diagnosis, noise reduction, image recognition and so on. Its key idea is that data can have a good sparse representation on a specific dictionary consisting of a few basis atoms, so this dictionary must be accurate and suitable enough to make the data sparse. Learning a good dictionary requires sufficient and comprehensive training data, and an efficient dictionary learning algorithm is also essential. However, in many application fields, especially fault diagnosis, training data is often scarce due to the cost of experimentation, time constraints, or other reasons. Thus, it is not guaranteed that the data can have a good sparse representation on a single learned dictionary. To solve this problem, we propose a novel dictionary learning method named deep and shared dictionary learning (DSDL), which combines a deep structure, inspired by deep learning, with a shared structure. In DSDL, the data is decomposed into several dictionary layers, where each deeper dictionary layer is learned from a few atoms of the previous layer. The shared structure, in turn, aims to learn the common features across different classes and remove them to highlight the class-specific features. We apply DSDL in two experimental cases of fault diagnosis under time-varying conditions, and the results show that our proposed method consistently outperforms six other state-of-the-art sparse representation methods. Compared to two popular deep learning methods, namely the convolutional neural network (CNN) and the deep belief network (DBN), DSDL is more accurate with small training samples.
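The layered decomposition described above can be illustrated with a minimal two-layer coding sketch. Ridge-regularised least squares stands in for a proper sparse coder, and in the full DSDL method the deeper dictionary `D2` would be learned from a subset of `D1`'s atoms; all names here are illustrative:

```python
import numpy as np

def layered_codes(X, D1, D2, lam=0.1):
    """Two-layer coding: code the data X on the layer-1 dictionary D1,
    then code the resulting coefficients on a deeper dictionary D2
    (ridge regression stands in for a sparse solver)."""
    A1 = np.linalg.solve(D1.T @ D1 + lam * np.eye(D1.shape[1]), D1.T @ X)
    A2 = np.linalg.solve(D2.T @ D2 + lam * np.eye(D2.shape[1]), D2.T @ A1)
    return A1, A2

# Toy usage: 50 signals of dimension 20, 30 layer-1 atoms, 15 layer-2 atoms
np.random.seed(1)
X = np.random.randn(20, 50)
D1 = np.random.randn(20, 30)
D2 = np.random.randn(30, 15)   # operates on layer-1 codes
A1, A2 = layered_codes(X, D1, D2)
X_hat = D1 @ D2 @ A2           # two-layer reconstruction
```

The composed reconstruction `D1 @ D2 @ A2` shows why the scheme helps with scarce data: the effective dictionary `D1 @ D2` is built from re-used atoms rather than learned from scratch at full size.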
Reconstructing high-resolution computed tomography (CT) medical images is of great value to clinicians analysing diseases. This study proposes an improved super-resolution method for CT medical images in the sparse representation domain with dictionary learning. The sparse coupled K-singular value decomposition (KSVD) algorithm is employed for dictionary learning. Images are divided into two sets, low resolution (LR) and high resolution (HR); to improve the quality of low-resolution images, the authors train dictionaries over LR and HR image patches using the KSVD algorithm. The main idea behind the proposed method is that the coupled dictionaries learn from each patch and establish a relationship between the sparse coefficients of LR and HR image patches, so that the HR patch can be recovered from the LR image. The proposed method is compared to conventional algorithms in terms of mean peak signal-to-noise ratio and structural similarity index measurements using three different datasets, including CT chest, CT dental and CT brain images. The authors also analysed the proposed method for different dictionary sizes and patch sizes; these parameters play an essential role in the reconstruction of the HR images.
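The core idea, coding an LR patch on the LR dictionary and reusing the same coefficients with the HR dictionary, can be sketched per patch. Ridge-regularised coding stands in for KSVD sparse coding, and the function name and dimensions are assumptions for the sketch:

```python
import numpy as np

def sr_patch(lr_patch, D_lr, D_hr, lam=0.1):
    """Coupled-dictionary super-resolution for one patch: sparse-code
    the LR patch on the LR dictionary, then apply the same coefficients
    to the coupled HR dictionary to synthesise the HR patch."""
    # ridge-regularised coding as a simple stand-in for KSVD sparse coding
    a = np.linalg.solve(D_lr.T @ D_lr + lam * np.eye(D_lr.shape[1]),
                        D_lr.T @ lr_patch)
    return D_hr @ a   # coupled dictionaries share the coefficient vector

# Toy usage: 4x4 LR patches (16-dim) mapped to 8x8 HR patches (64-dim)
np.random.seed(4)
D_lr = np.random.randn(16, 32)
D_hr = np.random.randn(64, 32)   # same 32 atoms, coupled to D_lr
y = np.random.randn(16)
hr = sr_patch(y, D_lr, D_hr)
```

The coupling works because both dictionaries are trained so that corresponding LR/HR patch pairs share one sparse code; at test time only the LR coding step is needed.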
Multi-spectral face recognition has been attracting increasing interest, and in the last decade several multi-spectral face recognition methods have been presented. However, it has not been well studied how to jointly learn effective features with favorable discriminability from multiple spectra, even when multi-spectral face images are severely contaminated by noise. Multi-view dictionary learning is an effective feature learning technique, which learns dictionaries from multiple views of the same object and has achieved state-of-the-art classification results. In this paper, we introduce, for the first time, the multi-view dictionary learning technique into the field of multi-spectral face recognition and propose a multi-spectral low-rank structured dictionary learning (MLSDL) approach. It learns multiple structured dictionaries, including a spectrum-common dictionary and multiple spectrum-specific dictionaries, which can fully explore both the correlated information and the complementary information among multiple spectra. Each dictionary contains a set of class-specific sub-dictionaries. Based on low-rank matrix recovery theory, we apply low-rank regularization in the multi-spectral dictionary learning procedure so that MLSDL can handle multi-spectral face recognition with high levels of noise. We also design a low-rank structural incoherence term for multi-spectral dictionary learning, so as to reduce the redundancy among the spectrum-specific dictionaries. In addition, to enhance the efficiency of the classification procedure, we design a low-rank structured collaborative representation classification scheme for MLSDL. Experimental results on the HK PolyU, CMU and UWA hyper-spectral face databases demonstrate the effectiveness of the proposed approach.
•We propose a multi-spectral low-rank structured dictionary learning approach.
•We learn a spectrum-common dictionary and spectrum-specific dictionaries.
•Low-rank structured regularization and incoherence terms are designed.
•Low-rank structured collaborative representation classification is provided.
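The low-rank regularization used in approaches like the one above typically reduces, in optimisation, to singular-value thresholding, the proximal operator of the nuclear norm. A minimal sketch of that building block (illustrative, not the MLSDL solver):

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: shrink every singular value of M
    by tau (clipping at zero). This is the proximal operator of the
    nuclear norm, the basic step behind low-rank regularisation."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Toy usage: thresholding a random matrix
np.random.seed(3)
M = np.random.randn(8, 8)
L = svt(M, tau=2.0)
```

Because thresholding can only shrink singular values, the output's rank never exceeds the input's, which is how repeated application inside an iterative solver drives dictionaries and representations toward low-rank, noise-robust structure.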