Pruning is an effective way to ease the deployment of neural networks on resource-constrained devices. However, most existing methods focus on the inherent parameters of the network itself and rarely consider the contribution of the output feature maps. In this paper, we propose FPC, a novel filter pruning method based on the contribution of the output feature map, which exploits the diverse information carried by different output feature maps. Based on this characteristic, FPC evaluates the contribution of each output feature map and effectively removes the low-contribution part without reducing the performance of the model. We first use singular value decomposition (SVD) to decompose the output feature maps, then analyze each map's contribution to model performance, and finally delete the filters whose output feature maps contribute least. Extensive experimental results show that FPC produces excellent compression results. For example, with VGG-16 we can reduce FLOPs by 65.62% and increase accuracy by 0.25% on CIFAR-10; with ResNet-110 we can reduce FLOPs by 50.66% and increase accuracy by 0.09% on CIFAR-100.
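The SVD-based contribution scoring described above can be sketched as follows. This is a minimal illustration, not the authors' exact criterion: it scores each filter's output feature map by the total energy of its singular values (the nuclear norm) and marks the lowest-scoring filters for pruning; the function names and the pruning ratio are illustrative.

```python
import numpy as np

def feature_map_contribution(feature_maps):
    """Score each output feature map by the energy of its singular values.

    feature_maps: array of shape (num_filters, H, W) -- one 2-D map per filter.
    Maps whose singular values carry little energy are pruning candidates.
    """
    scores = []
    for fmap in feature_maps:
        s = np.linalg.svd(fmap, compute_uv=False)
        scores.append(s.sum())  # nuclear norm as a contribution proxy
    return np.array(scores)

def select_filters_to_prune(feature_maps, prune_ratio=0.5):
    """Return indices of the lowest-contribution filters."""
    scores = feature_map_contribution(feature_maps)
    num_prune = int(len(scores) * prune_ratio)
    return np.argsort(scores)[:num_prune]
```

In a real pipeline the feature maps would be collected from a forward pass over a calibration batch, and the selected filters (plus their downstream channels) removed from the network.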
Full text
Available for:
GEOZS, IJS, IMTLJ, KILJ, KISLJ, NLZOH, NUK, OILJ, PNG, SAZU, SBCE, SBJE, UILJ, UL, UM, UPCLJ, UPUK, ZAGLJ, ZRSKP
Broad Learning System (BLS), which aims to offer an alternative way of learning in deep structure, is proposed in this paper. Deep structures and their learning suffer from a time-consuming training process because of the large number of connecting parameters in filters and layers. Moreover, they require a complete retraining process if the structure is not sufficient to model the system. The BLS is established as a flat network, where the original inputs are transferred and placed as "mapped features" in feature nodes and the structure is expanded in the wide sense through the "enhancement nodes." Incremental learning algorithms are developed for fast remodeling in broad expansion, without retraining, when the network needs to be expanded. Two incremental learning algorithms are given: one for the increment of the feature nodes (or filters in deep structure) and one for the increment of the enhancement nodes. The designed model and algorithms are very versatile for selecting a model rapidly. In addition, another incremental learning algorithm is developed for the case in which an already-modeled system encounters a new incoming input; the system can then be remodeled incrementally without retraining from the beginning. Model reduction using singular value decomposition is conducted to simplify the final structure, with satisfactory results. Compared with existing deep neural networks, experimental results on the Modified National Institute of Standards and Technology (MNIST) database and the NYU NORB object recognition benchmark demonstrate the effectiveness of the proposed BLS.
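The flat structure described above can be sketched in a few lines. This is a minimal BLS-style model under assumed layer sizes and a ridge-regularized pseudo-inverse readout; the random mappings, node counts, and regularization constant are illustrative, and the incremental-expansion algorithms are omitted.

```python
import numpy as np

rng = np.random.default_rng(42)

def train_bls(X, Y, n_feature_nodes=20, n_enhance_nodes=40, reg=1e-3):
    """Minimal BLS-style sketch: random feature nodes, nonlinear enhancement
    nodes, and a closed-form regularized readout -- no iterative training."""
    We = rng.standard_normal((X.shape[1], n_feature_nodes))
    Z = X @ We                                   # "mapped features"
    Wh = rng.standard_normal((n_feature_nodes, n_enhance_nodes))
    H = np.tanh(Z @ Wh)                          # "enhancement nodes"
    A = np.hstack([Z, H])                        # flat broad layer
    # Ridge-regularized pseudo-inverse readout: W = (A^T A + reg I)^-1 A^T Y
    W = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ Y)
    return We, Wh, W

def predict_bls(X, We, Wh, W):
    Z = X @ We
    H = np.tanh(Z @ Wh)
    return np.hstack([Z, H]) @ W
```

Because the readout is a closed-form solve, adding feature or enhancement nodes only requires updating this last step, which is what makes the incremental algorithms in the paper cheap.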
This paper develops the concept of the singular value decomposition (SVD) of a class of interval matrices. The methodology relies on tighter outer estimations of eigenvalues and their corresponding eigenvectors of an interval matrix. The interval enclosure of every eigenvalue of an interval matrix is determined using an iterative procedure, and an algorithm is proposed for determining the corresponding eigenvector enclosure. Using these concepts, the SVD enclosure of an interval matrix is obtained. Numerical examples are provided to demonstrate the principles of the proposed methodologies at different stages.
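To make the interval-SVD idea concrete, here is a brute-force vertex sketch: it evaluates the SVD at every vertex of the interval matrix (each entry at its lower or upper bound). Note this gives only an *inner* estimate of the singular-value ranges, not the guaranteed outer enclosure the paper derives; it is included purely to illustrate what an enclosure bounds.

```python
import numpy as np
from itertools import product

def singular_value_ranges_vertices(A_lo, A_hi):
    """Inner estimate of the singular-value ranges of an interval matrix
    [A_lo, A_hi] by enumerating all 2^(m*n) vertex matrices.

    Feasible only for tiny matrices; a true enclosure method must also
    account for interior points and guarantee outer bounds.
    """
    A_lo, A_hi = np.asarray(A_lo, float), np.asarray(A_hi, float)
    m, n = A_lo.shape
    lo = np.full(min(m, n), np.inf)
    hi = np.full(min(m, n), -np.inf)
    for choice in product([0, 1], repeat=m * n):
        mask = np.array(choice, float).reshape(m, n)
        A = A_lo * (1 - mask) + A_hi * mask
        s = np.linalg.svd(A, compute_uv=False)
        lo = np.minimum(lo, s)
        hi = np.maximum(hi, s)
    return lo, hi
```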
The key problem for image denoising methods is to smooth noise while retaining the details of the original image. The human visual system is more sensitive to the details (the high frequency components) of the original image, so restoring image details is essential to the quality of the denoised image. Rather than denoising the image as a whole, this paper proposes a novel denoising method that reconstructs the high and low frequency components separately. Sparse representation using patch-based structure similarity is proposed to reconstruct the high frequency parts, and the low frequency parts are reconstructed by singular value decomposition (SVD). Finally, an energy minimization function that combines the high and low frequency parts is presented. Experimental results illustrate that the proposed method is outstanding in both numerical precision and visual performance.
•The low and high frequency components are restored respectively.
•A novel image denoising framework with SVD and sparse representation is proposed.
•An energy function is proposed to aggregate the low and high frequency components.
•Experiments show the competitiveness of the proposed method.
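The SVD step for the low frequency part can be sketched with a rank truncation: the leading singular triplets capture the smooth, large-scale structure of an image, while the residual holds details plus noise. This is a generic illustration of the decomposition, not the paper's full framework (which adds sparse coding of the residual and an energy function); the rank is an illustrative parameter.

```python
import numpy as np

def low_frequency_svd(image, rank):
    """Split an image into a rank-`rank` SVD truncation (low-frequency proxy)
    and the residual (high-frequency proxy)."""
    U, s, Vt = np.linalg.svd(image, full_matrices=False)
    low = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]
    high = image - low
    return low, high
```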
The singular value decomposition (SVD) is a crucial method with successful practical applications. However, quantifying the uncertainties in SVD perturbation is challenging. To address this issue, this study introduces an interval perturbation method for SVD with unknown-but-bounded (UBB) parameters. Unlike probabilistic approaches that require statistical data on uncertain parameters, this method only requires the bounds of the uncertain parameters. Using non-probabilistic theory, the proposed method provides accurate and fast estimation of singular values and vectors. The paper gives a detailed derivation of the interval bounds of singular values and vectors via the interval perturbation method, and applies the subinterval method to improve estimation precision. The effectiveness of the proposed method is demonstrated through four numerical examples and an application example, and its robustness is verified across various levels of uncertainty, different subinterval cases, different matrix scales, close singular values, rectangular matrices, and small but nonzero singular values. The method can be used in engineering inverse-problem fields that require SVD with small uncertainties, including dynamic identification, image processing, and signal processing, where its ability to estimate singular values and vectors accurately and quickly under uncertainty is particularly valuable.
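A first-order version of the perturbation idea can be sketched as follows. Using the classical sensitivity result dσᵢ ≈ uᵢᵀ dA vᵢ, an entrywise perturbation radius R (|dA_ij| ≤ R_ij) bounds each singular value's deviation by |uᵢ|ᵀ R |vᵢ|. This is only the first-order sketch, not the paper's full derivation or its subinterval refinement.

```python
import numpy as np

def singular_value_interval_first_order(A0, radius):
    """First-order interval estimate of singular values under a UBB
    perturbation with entrywise radius `radius` around the nominal A0.

    Uses dsigma_i ~= u_i^T dA v_i, so |dsigma_i| <= |u_i|^T radius |v_i|.
    """
    U, s, Vt = np.linalg.svd(np.asarray(A0, float), full_matrices=False)
    R = np.asarray(radius, float)
    # (i, i) entry of this product is |u_i|^T R |v_i|
    half = np.diag(np.abs(U).T @ R @ np.abs(Vt.T))
    return s - half, s + half
```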
Image watermarking has emerged as a useful method for solving security issues like authenticity, copyright protection and rightful ownership of digital data. Existing watermarking schemes use either a binary or grayscale image as a watermark. This paper proposes a new robust and adaptive watermarking scheme in which both the host and watermark are color images of the same size and dimension. The security of the proposed scheme is enhanced by scrambling both the color host and watermark images using the Arnold chaotic map. The host image is decomposed by redundant discrete wavelet transform (RDWT) into four sub-bands of the same dimension, and the approximate sub-band then undergoes singular value decomposition (SVD) to obtain the principal component (PC). The scrambled watermark is inserted directly into the principal component of the scrambled host image, using an artificial bee colony (ABC) optimized adaptive multi-scaling factor obtained by considering the perceptual quality of both the host and watermark images, to overcome the tradeoff between imperceptibility and robustness of the watermarked image. The RDWT-SVD hybridization provides the advantage of shift invariance, achieving higher embedding capacity in the host image while preserving imperceptibility and robustness by exploiting SVD properties. To measure imperceptibility and robustness, both qualitative and quantitative evaluation parameters are used: peak signal to noise ratio (PSNR), structural similarity index metric (SSIM) and normalized cross-correlation (NC). Experiments are performed against several image processing attacks, and the results are analyzed and compared with other related existing watermarking schemes, clearly demonstrating the usefulness of the proposed scheme. At the same time, the proposed scheme overcomes the major security problem of false positive error (FPE) that commonly occurs in existing SVD-based watermarking schemes.
•Image adaptive watermarking using dynamic embedding strength factor optimized by ABC.
•Improving embedding capacity using redundant discrete wavelet transform.
•Embedding in principal component to remove the problem of false positive error.
•Arnold chaotic map is used to provide extra security.
•Proposed algorithm can resist multiple attacks and extract the correct watermark.
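The core SVD embedding step can be sketched in isolation. This toy version embeds the watermark's singular values into the host's singular values with a fixed scaling factor; it omits the RDWT stage, Arnold scrambling, the principal-component embedding, and the ABC-optimized adaptive factor that the scheme actually uses, and the constant `ALPHA` is an assumed value.

```python
import numpy as np

ALPHA = 0.05  # fixed scaling factor (the paper optimizes this adaptively)

def embed_svd(host, watermark, alpha=ALPHA):
    """Embed the watermark's singular values into the host's (sketch only)."""
    U, s, Vt = np.linalg.svd(host, full_matrices=False)
    s_w = np.linalg.svd(watermark, compute_uv=False)
    s_marked = s + alpha * s_w
    # Return the watermarked image and the original spectrum as the key
    return U @ np.diag(s_marked) @ Vt, s

def extract_svd(watermarked, s_key, alpha=ALPHA):
    """Recover the watermark's singular values using the stored key."""
    s_now = np.linalg.svd(watermarked, compute_uv=False)
    return (s_now - s_key) / alpha
```

Keeping the original spectrum as a key is exactly what opens the door to the false-positive problem in naive SVD schemes; embedding into the principal component, as the paper does, avoids it.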
With the rapid development of Internet applications and social networks, we have entered an era of big data, and it is hard for people to find the information they want effectively. Therefore, many recommendation algorithms have been proposed to help users select useful and beneficial information and save their time. Moreover, context-aware recommendation methods are becoming more and more popular, since they can provide more accurate and personalized recommendations than traditional recommendation methods. Singular value decomposition (SVD) has been successfully integrated into some traditional recommendation algorithms. However, basic SVD can only extract the feature vectors of users and items, which may result in lower recommendation precision. To improve recommendation performance, we propose a novel context-aware recommendation algorithm with two-level SVD, named CTLSVD. First, CTLSVD applies SVD to divide the rating matrix into a user matrix and an item matrix. Second, to extract more refined factor vectors, CTLSVD further applies SVD to divide the user matrix and item matrix into two matrices each. Finally, CTLSVD uses time as the contextual information to filter out unsuitable initial recommendations, improving the effectiveness of the final recommendation results. To compare with some well-known recommendation methods, we evaluate CTLSVD on two real datasets, MovieLens and EachMovie. The experimental results demonstrate that CTLSVD outperforms the traditional recommendation methods in terms of precision, recall and F1-measure.
•A novel context-aware recommendation algorithm (CSVD) is proposed based on the combination of the SVD algorithm and post-context filtering.
•A new two-level SVD algorithm (TLSVD) is introduced to decompose the user factor matrix and item factor matrix separately in SVD for extracting more hidden factor vectors.
•A novel context-based recommendation algorithm with two-level SVD is designed by combining CSVD and TLSVD.
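The two-level decomposition can be sketched as follows: level one factors the rating matrix into user and item factor matrices, and level two decomposes each factor matrix again to expose finer latent vectors. The ranks `k1`/`k2` and the square-root split of the singular values are illustrative choices, not values from the paper, and the time-based context filtering stage is omitted.

```python
import numpy as np

def two_level_svd(R, k1=4, k2=2):
    """Two-level SVD sketch: factor the rating matrix, then factor the factors."""
    U1, s1, Vt1 = np.linalg.svd(R, full_matrices=False)
    # Level 1: split singular values between user and item factors
    user = U1[:, :k1] * np.sqrt(s1[:k1])   # (n_users, k1)
    item = Vt1[:k1].T * np.sqrt(s1[:k1])   # (n_items, k1)
    # Level 2: decompose each factor matrix again for refined vectors
    Uu, su, _ = np.linalg.svd(user, full_matrices=False)
    Ui, si, _ = np.linalg.svd(item, full_matrices=False)
    refined_user = Uu[:, :k2] * np.sqrt(su[:k2])
    refined_item = Ui[:, :k2] * np.sqrt(si[:k2])
    return user, item, refined_user, refined_item
```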
In this paper, we propose a very simple and elegant patch-based, machine learning technique for image denoising using the higher order singular value decomposition (HOSVD). The technique simply groups together similar patches from a noisy image (with similarity defined by a statistically motivated criterion) into a 3D stack, computes the HOSVD coefficients of this stack, manipulates these coefficients by hard thresholding, and inverts the HOSVD transform to produce the final filtered image. Our technique chooses all required parameters in a principled way, relating them to the noise model. We also discuss our motivation for adopting the HOSVD as an appropriate transform for image denoising. We experimentally demonstrate the excellent performance of the technique on grayscale as well as color images. On color images, our method produces state-of-the-art results, outperforming other color image denoising algorithms at moderately high noise levels. A criterion for optimal patch-size selection and noise variance estimation from the residual images (after denoising) is also presented.
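The transform–threshold–invert pipeline above can be sketched with a truncated-free HOSVD of a 3-D patch stack. This is a minimal sketch: the patch grouping, the statistically motivated similarity criterion, and the noise-model-derived threshold are omitted, and `threshold` stands in for the paper's principled choice.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    Tm = np.moveaxis(T, mode, 0)
    return np.moveaxis(np.tensordot(M, Tm, axes=(1, 0)), 0, mode)

def hosvd_denoise(stack, threshold):
    """HOSVD hard-thresholding sketch for a 3-D stack of similar patches:
    transform into the HOSVD basis, zero small coefficients, invert."""
    # Factor matrices from the SVD of each mode unfolding
    Us = [np.linalg.svd(unfold(stack, m), full_matrices=False)[0]
          for m in range(3)]
    core = stack
    for m, U in enumerate(Us):
        core = mode_dot(core, U.T, m)          # forward HOSVD transform
    core = np.where(np.abs(core) >= threshold, core, 0.0)  # hard threshold
    out = core
    for m, U in enumerate(Us):
        out = mode_dot(out, U, m)              # inverse transform
    return out
```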
Realizing multisensor signal fusion and weak feature adaptive extraction is a challenging task. Therefore, a new algorithm called tensor singular spectrum decomposition (SSD) is proposed in this study for the adaptive decomposition of multisensor time series. Traditional tensor decomposition algorithms, such as CANDECOMP/PARAFAC (CP), high-order singular value decomposition (HOSVD), and Tucker decomposition, are derived from the n-mode product. The n-mode product essentially applies matrix ideas to tensors, since it defines multiplication between a matrix and a higher-order tensor; this creates the problems of a non-pseudodiagonal core tensor and non-unique decomposition results in traditional tensor decomposition algorithms. To this end, the decomposition of the original tensor signal and the reconstruction of multisensor component signals are realized in this study by combining trajectory tensor construction, superposition of the Gaussian function spectral model, adaptive iterative optimization of the embedding dimension, and the diagonal averaging method, on the basis of the principle of tensor–tensor order-preserving multiplication. The proposed algorithm inherits the well-developed mathematical theory and excellent properties of matrix SVD in processing single-sensor signals, while retaining the inherent structure and coupling relationships between multisensor data and realizing the organic fusion and adaptive decomposition of multisensor signals. The analysis results of simulation, experimental, and engineering signals showed that the proposed method can effectively extract weak fault quantification features hidden in original multisensor signals compared with the existing methods.
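For context, the single-sensor matrix analogue that tensor SSD generalizes can be sketched as classical singular spectrum analysis: build the Hankel trajectory matrix, take its SVD, and reconstruct each component by diagonal (anti-diagonal) averaging. This is the matrix-SVD baseline only, not the paper's tensor algorithm; the window length is an illustrative parameter.

```python
import numpy as np

def ssa_components(x, window, n_components):
    """Single-channel SSA sketch: trajectory matrix -> SVD -> diagonal
    averaging of each rank-1 term back to a length-N series."""
    N = len(x)
    K = N - window + 1
    # Hankel trajectory matrix: column j is x[j : j+window]
    X = np.column_stack([x[j:j + window] for j in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for i in range(n_components):
        Xi = s[i] * np.outer(U[:, i], Vt[i])
        # Average each anti-diagonal (entries with row+col = k) back to x[k]
        comp = np.array([np.mean(Xi[::-1].diagonal(k - (window - 1)))
                         for k in range(N)])
        comps.append(comp)
    return np.array(comps)
```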