Reconstructing high-resolution computed tomography (CT) medical images is of considerable interest, as such images help clinicians analyse disease. This study proposes an improved super-resolution method for CT medical images in the sparse representation domain with dictionary learning. The sparse coupled K-singular value decomposition (KSVD) algorithm is employed for dictionary learning. Images are divided into two sets, low resolution (LR) and high resolution (HR); to improve the quality of the LR images, the authors train dictionaries over LR and HR image patches using the KSVD algorithm. The main idea behind the proposed method is that the coupled dictionaries are learned patch by patch and establish a relationship between the sparse coefficients of LR and HR image patches, so that an HR patch can be recovered for each LR patch. The proposed method is compared with conventional algorithms in terms of mean peak signal-to-noise ratio and structural similarity index on three data sets: CT chest, CT dental and CT brain images. The authors also analyse the proposed method for different dictionary sizes and patch sizes, as these parameters play an essential role in the reconstruction of the HR images.
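The coupled-dictionary idea above can be sketched as follows: sparse-code an LR patch over the LR dictionary, then apply the same coefficients to the HR dictionary. This is a minimal illustration, not the authors' implementation; the dictionaries here are random toy matrices rather than KSVD-trained ones, and `omp` is a bare-bones orthogonal matching pursuit.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily sparse-code y over dictionary D with k atoms."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # atom most correlated with residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
D_hr = rng.standard_normal((16, 32))   # toy HR dictionary: 4x4 patches as columns
D_hr /= np.linalg.norm(D_hr, axis=0)
D_lr = D_hr[::2, :]                    # toy LR dictionary: downsampled HR atoms

x_true = np.zeros(32)
x_true[[3, 10]] = [1.0, -0.5]          # a 2-sparse code for illustration
y_lr = D_lr @ x_true                   # observed LR patch

x_hat = omp(D_lr, y_lr, k=2)           # code the LR patch over D_lr ...
hr_patch = D_hr @ x_hat                # ... and reuse the coefficients with D_hr
```

The transferable step is the last line: the sparse code found for the LR patch is applied unchanged to the HR dictionary, which is the relationship the learned coupled dictionaries are meant to enforce.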
Multi-spectral face recognition has been attracting increasing interest, and several multi-spectral face recognition methods have been presented in the last decade. However, it has not been well studied how to jointly learn effective features with favorable discriminability from multiple spectra, especially when the multi-spectral face images are severely contaminated by noise. Multi-view dictionary learning is an effective feature learning technique that learns dictionaries from multiple views of the same object and has achieved state-of-the-art classification results. In this paper, we introduce the multi-view dictionary learning technique into the field of multi-spectral face recognition for the first time and propose a multi-spectral low-rank structured dictionary learning (MLSDL) approach. It learns multiple structured dictionaries, including a spectrum-common dictionary and multiple spectrum-specific dictionaries, which can fully explore both the correlated and the complementary information among multiple spectra. Each dictionary contains a set of class-specific sub-dictionaries. Based on low-rank matrix recovery theory, we apply low-rank regularization in the multi-spectral dictionary learning procedure so that MLSDL can handle multi-spectral face recognition under high levels of noise. We also design a low-rank structural incoherence term for multi-spectral dictionary learning to reduce the redundancy among the spectrum-specific dictionaries. In addition, to enhance the efficiency of the classification procedure, we design a low-rank structured collaborative representation classification scheme for MLSDL. Experimental results on the HK PolyU, CMU and UWA hyper-spectral face databases demonstrate the effectiveness of the proposed approach.
•We propose a multi-spectral low-rank structured dictionary learning approach.
•We learn spectrum-common dictionary and spectrum-specific dictionaries.
•Low-rank structured regularization and incoherence terms are designed.
•Low-rank structured collaborative representation classification is provided.
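The low-rank regularization mentioned above is commonly enforced through the nuclear norm, whose proximal operator is singular value thresholding. The sketch below is a generic illustration of that building block on toy random data, not the MLSDL algorithm itself:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
L = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))  # rank-2 "clean" part
N = 0.01 * rng.standard_normal((20, 20))                         # dense small noise
L_hat = svt(L + N, tau=0.5)  # thresholding suppresses the noise-level singular values
```

Because the noise contributes only small singular values, thresholding them to zero returns a matrix whose rank matches the clean low-rank part, which is the mechanism that lets low-rank regularized dictionaries absorb structured signal while rejecting noise.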
This study investigates the usefulness of a bilingualized dictionary (BLD) compared with a monolingual one (MD) in a fill-in-the-blank test requiring the ability to discriminate between a set of near-synonyms. A total of 156 participants, all English-major undergraduates studying in Poland, were divided into two groups based on the type of dictionary they used. The study compared the two groups with respect to their test scores, prior receptive vocabulary knowledge, and task completion time. The experiment revealed the limitations of the BLD in this type of task, demonstrating that it was no more useful than the MD. While the scores were unaffected by dictionary type, they were significantly influenced by the learners’ receptive vocabulary size: the larger the size, the better the performance. Task completion time was not significantly affected by dictionary type, vocabulary size, or scores. The paper discusses learners’ errors and possible reasons for the limitations of the BLD.
The paper examines the entries of 36 words for common animal sounds in two online English dictionaries, Cambridge Dictionaries and The Merriam-Webster.com Dictionary, to see what information is provided in the entries. Previous work on English dictionaries has focused on meaning explanations of referential, arbitrary words. By examining onomatopoeic words for animal sounds, this paper expands the field beyond the study of arbitrary words to words whose forms contribute to their meaning.
The results show that there are no consistent policies for how words for animal sounds are handled, or for how the animal readings are separated from other possible meanings of the words. There is variation in how explicitly the animals are mentioned in the definitions, and in whether they are mentioned at all. The entries for the verb and noun uses of a word may not contain the same information. The examples, in cases where the animal readings are exemplified at all, may appear in a different entry from the one where the relevant definition is provided. One of the dictionaries also relies heavily on circular definitions, in which the word’s meaning is explained in terms of itself. The use of synonyms in the meaning explanations of onomatopoeic words is also found to be problematic: as these words do not merely identify a referent, but describe or imitate what the referent sounds like, exchanging one word for another does not carry over the same imagery of sound.
This paper investigates the security of partial fingerprint-based authentication systems, especially when multiple fingerprints of a user are enrolled. A number of consumer electronic devices, such as smartphones, are beginning to incorporate fingerprint sensors for user authentication. The sensors embedded in these devices are generally small, and the resulting images are therefore limited in size. To compensate for the limited size, these devices often acquire multiple partial impressions of a single finger during enrollment to ensure that at least one of them will successfully match the image obtained from the user during authentication. Furthermore, in some cases the user is allowed to enroll multiple fingers, and the impressions pertaining to multiple partial fingers are associated with the same identity (i.e., one user). A user is said to be successfully authenticated if the partial fingerprint obtained during authentication matches any one of the stored templates. This paper investigates the possibility of generating a "MasterPrint," a synthetic or real partial fingerprint that serendipitously matches one or more of the stored templates for a significant number of users. Our preliminary results on an optical fingerprint data set and a capacitive fingerprint data set indicate that it is indeed possible to locate or generate partial fingerprints that can be used to impersonate a large number of users. In this regard, we expose a potential vulnerability of partial fingerprint-based authentication systems, especially when multiple impressions are enrolled per finger.
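The vulnerability can be given a back-of-the-envelope quantification: if the system accepts a probe that matches any of k enrolled templates, and each comparison has false match rate fmr, the effective false match rate compounds. The independence assumption and the function below are illustrative, not part of the paper's methodology.

```python
def effective_fmr(fmr: float, k: int) -> float:
    """Effective false match rate when matching ANY of k templates suffices,
    assuming (hypothetically) independent comparisons."""
    return 1.0 - (1.0 - fmr) ** k

# e.g., 3 enrolled fingers x 4 partial impressions each = 12 chances to match
print(effective_fmr(0.001, 1))    # baseline: single template
print(effective_fmr(0.001, 12))   # roughly an order of magnitude larger
```

Even under this simplistic model, enrolling many partial impressions inflates the attack surface roughly linearly in k for small fmr, which is why the MasterPrint search becomes feasible.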
•A novel low-rank double dictionary learning (LRD2L) approach is proposed for robust image classification.
•It integrates the low-rank matrix recovery technique with class-specific and class-shared dictionary learning.
•It can effectively handle the image corruptions in both training and testing samples, which are inevitable in real-world applications.
•The experimental results on three datasets demonstrate the effectiveness and superiority of the proposed approach.
In this paper, we propose a novel low-rank double dictionary learning (LRD2L) method for robust image classification tasks in which both the training and testing samples are corrupted. Unlike traditional dictionary learning methods, LRD2L simultaneously learns three components from corrupted training data: 1) a low-rank class-specific sub-dictionary for each class, which captures that class's most discriminative features; 2) a low-rank class-shared dictionary, which models the common patterns shared across classes; and 3) a sparse error term, which models the noise in the data. Through the low-rank class-shared dictionary and the sparse error term, the proposed method can effectively keep the corruptions and noise in the training samples out of the learned low-rank class-specific sub-dictionaries, which are then employed to reconstruct and classify testing images correctly. Comparative experiments are conducted on three publicly available databases. The experimental results are encouraging, demonstrating the effectiveness of the proposed method and its superiority over state-of-the-art dictionary learning methods.
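In decompositions of this kind, the sparse error term is typically updated with entrywise soft thresholding, the proximal operator of the l1 norm. The snippet below sketches that single generic step (illustrative; it is not the full LRD2L optimization):

```python
import numpy as np

def soft_threshold(M, lam):
    """Entrywise proximal operator of lam * ||.||_1, the standard update
    for a sparse error term E in low-rank-plus-sparse decompositions."""
    return np.sign(M) * np.maximum(np.abs(M) - lam, 0.0)

# small entries (likely clean pixels) are zeroed; large ones (likely
# corruptions) survive, shrunk toward zero by lam
E = soft_threshold(np.array([1.5, -0.2, 0.7, -3.0]), 0.5)
```

Iterating updates of this form alongside low-rank updates of the dictionaries is what allows the noise to be separated from the class-specific and class-shared components.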
This paper presents the treatment of determinologized lexemes in the most recent, still-growing monolingual general explanatory dictionary of Slovenian—Slovar slovenskega knjižnega jezika, 3. izdaja (Dictionary of the Slovenian Standard Language, 3rd Edition), or eSSKJ—while also drawing attention to conceptual differences in the understanding of the status of this vocabulary compared with previous editions of the dictionary (SSKJ and SSKJ2) and with the treatment of terminology in the terminological dictionaries of the ZRC SAZU Fran Ramovš Institute of the Slovenian Language. It focuses on specific lexicographic issues that arise from determinologization when dealing with this relatively extensive and hybrid segment of vocabulary in eSSKJ, addressing it from two points of view: it draws attention to the issues that editors face due to lexicographic requirements, and at the same time it presents the issues and reservations that external terminology consultants, as experts in individual subject fields, have when reviewing dictionary entries for determinologized vocabulary. Due to the specific nature of the work, the two types of issues sometimes overlap.
This paper presents the treatment of determinologized lexemes in the most recent monolingual general explanatory dictionary of Slovenian, Slovar slovenskega knjižnega jezika, 3rd edition, better known as eSSKJ. It highlights differences in the editorial concept compared with previous editions of the dictionary (SSKJ and SSKJ2), as well as differences from the treatment of terminology in the terminological dictionaries of the Fran Ramovš Institute of the Slovenian Language. Attention is directed to specific lexicographic issues that arise from determinologization, and the selected lexemes are considered from two points of view: the problems editors face due to lexicographic requirements, and the problems and reservations that external terminology consultants, as experts in individual subject fields, have when reviewing dictionary entries for determinologized vocabulary. Owing to the particularities of lexicographic work, these two kinds of problems sometimes overlap.
The classical function expansion method, based on minimizing the l2-norm of the response residual, employs various basis functions to represent the unknown force; its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of the force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of the basis functions. The number of basis functions is determined adaptively by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary of basis functions is chosen. Second, a sparse convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is applied to solve the sparse regularization problem of force identification. Finally, experiments on the identification of impact and harmonic forces are conducted on a cantilever thin-plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, including Db6 wavelets, Sym4 wavelets and cubic B-spline functions, can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation framework. The discrete cosine functions can also successfully reconstruct harmonic forces, including sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
•Sparse representation is extended to the field of force identification.
•SpaRSA is developed to solve the l1 regularization problem in force identification.
•The Dirac, Db6, Sym4 and B-spline dictionaries are used to represent impact force.
•The discrete cosine dictionary is used to represent three types of harmonic force.
•Compared with Tikhonov regularization, SpaRSA is highly accurate and efficient.
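As a rough stand-in for SpaRSA, the same l1-regularized problem min_f 0.5 ||H f − y||² + λ ||f||₁ can be solved with the closely related ISTA iteration. The toy transfer matrix, spike locations, and λ below are invented for illustration and do not correspond to the paper's experimental setup.

```python
import numpy as np

def ista(H, y, lam, n_iter=3000):
    """ISTA for min_f 0.5*||H f - y||^2 + lam*||f||_1 (a simple stand-in for SpaRSA)."""
    L = np.linalg.norm(H, 2) ** 2            # Lipschitz constant of the smooth part
    f = np.zeros(H.shape[1])
    for _ in range(n_iter):
        g = f - (H.T @ (H @ f - y)) / L      # gradient step on the data-fit term
        f = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold step
    return f

rng = np.random.default_rng(1)
H = rng.standard_normal((60, 120))           # hypothetical transfer matrix
f_true = np.zeros(120)
f_true[[20, 75]] = [2.0, -1.5]               # two impact-like spikes in the Dirac basis
y = H @ f_true + 0.01 * rng.standard_normal(60)  # noisy operational response
f_hat = ista(H, y, lam=0.1)                  # sparse force estimate
```

The two spikes are recovered at the correct locations even though the system is underdetermined (60 response samples for 120 unknowns), which is the core advantage of the l1 formulation over l2-based Tikhonov regularization for impact-like forces.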