Copy-move forgery is one of the most commonly used manipulations for tampering with digital images. Keypoint-based detection methods have been reported to be very effective in revealing copy-move evidence due to their robustness against various attacks, such as large-scale geometric transformations. However, these methods fail to handle cases in which copy-move forgeries involve only small or smooth regions, where the number of keypoints is very limited. To tackle this challenge, we propose a fast and effective copy-move forgery detection algorithm through hierarchical feature point matching. We first show that it is possible to generate a sufficient number of keypoints, even in small or smooth regions, by lowering the contrast threshold and rescaling the input image. We then develop a novel hierarchical matching strategy to solve the keypoint matching problem over a massive number of keypoints. To reduce the false alarm rate and accurately localize the tampered regions, we further propose a novel iterative localization technique that exploits the robustness properties (including the dominant orientation and scale information) and the color information of each keypoint. Extensive experimental results are provided to demonstrate the superior performance of the proposed scheme in terms of both efficiency and accuracy.
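The matching step described above can be sketched in a few lines. The following NumPy snippet applies a nearest-/second-nearest-neighbor distance ratio test within a single image's descriptor set, a common building block of keypoint-based copy-move detectors; the function name, threshold value, and synthetic descriptors are illustrative assumptions, not the paper's exact hierarchical procedure.

```python
import numpy as np

def match_keypoints_ratio(desc, ratio=0.6):
    """Toy self-matching of keypoint descriptors within one image:
    a keypoint matches another when the ratio of its nearest to its
    second-nearest descriptor distance falls below `ratio`.
    (Illustrative sketch; the paper's hierarchical strategy is richer.)"""
    n = len(desc)
    # pairwise Euclidean distances between all descriptors
    d = np.linalg.norm(desc[:, None, :] - desc[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)  # a keypoint never matches itself
    matches = []
    for i in range(n):
        order = np.argsort(d[i])
        nearest, second = d[i, order[0]], d[i, order[1]]
        if second > 0 and nearest / second < ratio:
            matches.append((i, int(order[0])))
    return matches

# two nearly identical descriptor clusters simulate a copied region
rng = np.random.default_rng(0)
src = rng.normal(size=(5, 32))
desc = np.vstack([src, src + 1e-3, rng.normal(size=(10, 32))])
m = match_keypoints_ratio(desc)
```

On this synthetic input, the five duplicated descriptors pair up with their copies, while the unrelated random descriptors mostly fail the ratio test.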
By exploiting the image nonlocal self-similarity (NSS) prior through clustering similar patches into patch groups, recent studies have revealed that structural sparse representation (SSR) models can achieve promising performance in various image restoration tasks. However, most existing SSR methods exploit the NSS prior only from the input degraded (internal) image, and few methods utilize the NSS prior from an external corpus of clean images; how to jointly exploit the NSS priors of the internal image and an external clean image corpus remains an open problem. In this paper, we propose a novel approach for image restoration that simultaneously considers internal and external nonlocal self-similarity (SNSS) priors, which offer mutually complementary information. Specifically, we first group nonlocal similar patches from the images of a training corpus. A group-based Gaussian mixture model (GMM) learning algorithm is then applied to learn an external NSS prior. We exploit the SSR model by integrating the NSS priors of both internal and external image data. An alternating minimization algorithm with an adaptive parameter adjustment strategy is developed to solve the proposed SNSS-based image restoration problems, which makes the entire algorithm more stable and practical. Experimental results on three image restoration applications, namely image denoising, deblocking, and deblurring, demonstrate that the proposed SNSS produces superior results compared with many popular and state-of-the-art methods in both objective and perceptual quality measurements.
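The patch-grouping step that both internal and external NSS priors rely on can be illustrated with a toy sketch. The snippet below collects the k patches most similar to a reference patch by exhaustive Euclidean search; the function name, patch size, and random image are our assumptions, and real SSR/GMM pipelines restrict the search window and vectorize this step.

```python
import numpy as np

def group_similar_patches(img, ref_xy, psize=4, k=6, stride=2):
    """Toy nonlocal patch grouping: gather the k patches closest
    (in Euclidean distance) to the reference patch and stack them
    as the columns of a "patch group" matrix."""
    ry, rx = ref_xy
    ref = img[ry:ry + psize, rx:rx + psize].ravel()
    H, W = img.shape
    cands = []
    for y in range(0, H - psize + 1, stride):
        for x in range(0, W - psize + 1, stride):
            p = img[y:y + psize, x:x + psize].ravel()
            cands.append((np.sum((p - ref) ** 2), p))
    cands.sort(key=lambda t: t[0])          # most similar first
    return np.stack([p for _, p in cands[:k]], axis=1)

rng = np.random.default_rng(1)
img = rng.normal(size=(16, 16))
G = group_similar_patches(img, (0, 0))      # 16-dim patches, 6 columns
```

The resulting group matrix (one vectorized patch per column) is exactly the object on which group-based priors such as a GMM are learned.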
Transformer-based methods have shown impressive performance in low-level vision tasks such as image super-resolution. However, through attribution analysis we find that these networks can only utilize a limited spatial range of input information. This implies that the potential of the Transformer is still not fully exploited in existing networks. In order to activate more input pixels for better reconstruction, we propose a novel Hybrid Attention Transformer (HAT). It combines both channel attention and window-based self-attention schemes, thus making use of their complementary advantages: the ability to utilize global statistics and strong local fitting capability, respectively. Moreover, to better aggregate cross-window information, we introduce an overlapping cross-attention module to enhance the interaction between neighboring window features. In the training stage, we additionally adopt a same-task pre-training strategy to exploit the potential of the model for further improvement. Extensive experiments show the effectiveness of the proposed modules, and we further scale up the model to demonstrate that the performance of this task can be greatly improved. Our overall method significantly outperforms the state-of-the-art methods by more than 1 dB.
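To make the window-based branch concrete, here is a minimal, weight-free NumPy sketch of self-attention computed independently inside non-overlapping windows. This is only a didactic skeleton under our own assumptions (single head, no learned projections); HAT additionally uses channel attention and overlapping cross-attention, which are omitted here.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def window_attention(feat, win=4):
    """Single-head self-attention inside each non-overlapping
    win x win window of an (H, W, C) feature map; every output
    token is a convex combination of the tokens in its window."""
    H, W, C = feat.shape
    out = np.empty_like(feat)
    for y in range(0, H, win):
        for x in range(0, W, win):
            t = feat[y:y + win, x:x + win].reshape(-1, C)  # window tokens
            attn = softmax(t @ t.T / np.sqrt(C))           # token affinities
            out[y:y + win, x:x + win] = (attn @ t).reshape(win, win, C)
    return out

rng = np.random.default_rng(2)
f = rng.normal(size=(8, 8, 16))
o = window_attention(f)
```

Because each window is processed in isolation, information cannot cross window borders, which is precisely the limitation that an overlapping cross-attention module is designed to relax.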
By recording the whole scene around the capturer, virtual reality (VR) techniques can provide viewers with a sense of presence. To provide a satisfactory quality of experience, there should be at least 60 pixels per degree, so the resolution of panoramas should reach 21600 × 10800. This huge amount of data places great demands on data processing and transmission. However, when exploring a virtual environment, viewers only perceive the content in the current field of view (FOV). Therefore, if we can predict head and eye movements, which are important viewer behaviors, more processing resources can be allocated to the active FOV. However, conventional saliency prediction methods are not fully adequate for panoramic images. In this paper, a new panorama-oriented model to predict head and eye movements is proposed. Owing to the superiority of computation in the spherical domain, spherical harmonics are employed to extract features at different frequency bands and orientations. Related low- and high-level features, including the rare components in the frequency and color domains, the difference between central and peripheral vision, visual equilibrium, person and car detection, and equator bias, are extracted to estimate the saliency. To predict head movements, visual mechanisms including visual uncertainty and equilibrium are incorporated, and a graphical model and functional representation for the switching of head orientation are established. Extensive experimental results on a publicly available database demonstrate the effectiveness of our methods.
Passive crossbar resistive random access memory (RRAM) arrays require select devices with nonlinear I-V characteristics to address the sneak-path problem. Here, we present a systematic analysis to evaluate the performance requirements of select devices during the read operation of RRAM arrays for the proposed one-selector-one-resistor (1S1R) configuration with serially connected selector/storage elements. We found that a high selector current density is critical and that the selector nonlinearity (ON/OFF ratio) requirement can be relaxed at present. Different read schemes were analyzed to achieve a high read margin and low power consumption. Design optimizations of the sense resistance and the storage elements are also discussed.
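The effect of selector nonlinearity on the read margin can be illustrated with a heavily simplified lumped model. The snippet below is our own toy construction, not the paper's circuit simulation: the selected branch is selector(on) + cell, the worst-case sneak path lumps three layers of half-biased (off-state) 1S1R cells all assumed to be in the low-resistance state, and the output is sensed across a load resistor.

```python
def read_margin(n, r_lrs, r_hrs, r_sel_on, nonlinearity, r_load, v_read=1.0):
    """Toy worst-case read margin for one bit in an n x n 1S1R crossbar.
    All component values and the three-layer sneak-path lumping are
    illustrative assumptions."""
    r_sel_off = r_sel_on * nonlinearity          # half-biased selectors
    # worst case: every unselected cell stores the low-resistance state
    sneak = ((r_sel_off + r_lrs) / (n - 1)
             + (r_sel_off + r_lrs) / ((n - 1) ** 2)
             + (r_sel_off + r_lrs) / (n - 1))

    def v_out(r_cell):
        branch = r_sel_on + r_cell               # selected 1S1R branch
        r_eq = 1.0 / (1.0 / branch + 1.0 / sneak)
        return v_read * r_load / (r_load + r_eq)

    return v_out(r_lrs) - v_out(r_hrs)           # LRS vs HRS sense voltage

# higher selector ON/OFF ratio -> larger read margin in a 128 x 128 array
m_hi = read_margin(128, 1e4, 1e6, 1e3, 1e4, 1e4)
m_lo = read_margin(128, 1e4, 1e6, 1e3, 1e1, 1e4)
```

Even this crude model reproduces the qualitative trend analyzed in the paper: with a weakly nonlinear selector, the sneak conductance swamps the stored state and the sense margin collapses.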
•During the layer-by-layer laser remelting process, the cooling rate is increased.
•The surface quality is improved after layer-by-layer laser remelting.
•The microhardness and tensile performance are enhanced due to grain refinement.
Laser Powder Bed Fusion (LPBF) is an innovative additive manufacturing technology, but it is limited by defects and poor surface quality. In this work, the layer-by-layer laser remelting (LR) method is applied to LPBF AlSi10Mg to improve the surface quality and mechanical performance. To account for the physical mechanism of laser remelting, a three-dimensional multi-physics coupled transient model is established. The numerical results indicate that the molten pool is significantly expanded during the LR process. The larger molten pool plays a significant role in removing defects. Moreover, the temperature gradient and cooling rate are simultaneously increased during the LR process, which has a considerable impact on the microstructure transformation. The densification, surface quality (including roughness and wettability), residual stress, microstructure, and mechanical properties after LR treatment are investigated experimentally. The experimental results show that after LR treatment, the densification can reach up to 99.4%. The surface hydrophilicity is limited due to the roughness reduction. The average grain sizes of the top and side surfaces are decreased by 6.74% and 28.79%, respectively, owing to the increased cooling rate. The average microhardness and ductility are improved due to grain refinement and defect elimination.
Nanoscale resistive switching devices (memristive devices or memristors) have been studied for a number of applications ranging from non-volatile memory and logic to neuromorphic systems. However, a major challenge is to address the potentially large variations in space and time in these nanoscale devices. Here we show that in metal-filament-based memristive devices the switching can be fully stochastic. While individual switching events are random, the distribution and probability of switching can be well predicted and controlled. Rather than trying to force high switching probabilities using excess voltage or time, the inherent stochastic nature of resistive switching allows these binary devices to be used as building blocks for novel error-tolerant computing schemes such as stochastic computing, and provides the analog behavior needed for neuromorphic applications. To verify this potential, we demonstrate memristor-based stochastic bitstreams in both the time and space domains, and show that an array of binary memristors can act as a multi-level analog device for neuromorphic applications.
Resistive switching in memristive devices can be stochastic and can lead to novel applications including stochastic and neuromorphic computing.
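A common way to model such stochastic switching is with an exponential waiting-time distribution whose mean time shortens with applied voltage; a binary device pulsed repeatedly then emits a Bernoulli bitstream whose mean encodes an analog value. The sketch below uses this generic model with made-up parameter values, purely to illustrate the idea, not the measured device physics of the paper.

```python
import numpy as np

def switch_probability(v, v0=1.0, tau0=1.0, t_pulse=0.5):
    """Toy exponential-waiting-time model of filament formation:
    P(switch) = 1 - exp(-t_pulse / tau(V)), tau(V) = tau0 * exp(-V / v0).
    All parameter names and values are illustrative assumptions."""
    tau = tau0 * np.exp(-v / v0)
    return 1.0 - np.exp(-t_pulse / tau)

def stochastic_bitstream(v, length, rng):
    """Apply `length` identical pulses to a (resettable) binary device;
    each pulse switches independently with probability P(switch)."""
    return (rng.random(length) < switch_probability(v)).astype(int)

rng = np.random.default_rng(3)
bits = stochastic_bitstream(1.5, 100_000, rng)
# the bitstream mean recovers the analog quantity P(switch)
```

Averaging such bitstreams, over time for one device or over space for an array, is exactly how an ensemble of binary memristors behaves as a multi-level analog element.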
Sparse coding has achieved great success in various image processing tasks. However, a benchmark to measure the sparsity of an image patch/group is missing, since sparse coding is essentially an NP-hard problem. This work attempts to fill this gap from the perspective of rank minimization. We first design an adaptive dictionary to bridge the gap between group-based sparse coding (GSC) and rank minimization. Then, we show that under the designed dictionary, GSC and the rank minimization problem are equivalent, and therefore the sparse coefficients of each patch group can be measured by estimating its singular values. We thus obtain a benchmark to measure the sparsity of each patch group, because the singular values of the original image patch groups can be easily computed by the singular value decomposition (SVD). This benchmark can be used to evaluate the performance of any norm minimization method in sparse coding by analyzing its corresponding rank minimization counterpart. To this end, we exploit four well-known rank minimization methods to study the sparsity of each patch group, and the weighted Schatten p-norm minimization (WSNM) is found to be the closest to the real singular values of each patch group. Inspired by the aforementioned equivalence between rank minimization and GSC, WSNM can be translated into a non-convex weighted ℓp-norm minimization problem in GSC. Using the obtained benchmark, the weighted ℓp-norm minimization is expected to achieve better performance than the three other norm minimization methods, i.e., ℓ1-norm, ℓp-norm, and weighted ℓ1-norm. To verify the feasibility of the proposed benchmark, we compare the weighted ℓp-norm minimization against the three aforementioned norm minimization methods in sparse coding.
Experimental results on image restoration applications, namely image inpainting and image compressive sensing recovery, demonstrate that the proposed scheme is feasible and outperforms many state-of-the-art methods.
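The SVD-based benchmark itself is simple to sketch: stack a patch group as a matrix, read its sparsity off the singular values, and shrink them with per-value weights. The snippet below is our illustration under stated assumptions: the synthetic rank-2 "patch group" stands in for a group of similar patches, and the shrinkage step is a generic weighted soft-threshold (the p = 1 surrogate of weighted Schatten p-norm minimization), not the paper's full WSNM solver.

```python
import numpy as np

def group_singular_values(patch_group):
    """Under the adaptive dictionary of the paper, the sparse coefficients
    of a patch group equal its singular values, so SVD yields the benchmark."""
    return np.linalg.svd(patch_group, compute_uv=False)

def weighted_soft_threshold(sv, weights):
    """One weighted singular-value shrinkage step (p = 1 surrogate of
    weighted Schatten p-norm minimization; illustrative only)."""
    return np.maximum(sv - weights, 0.0)

rng = np.random.default_rng(4)
# a rank-2 "patch group" plus small noise: few large singular
# values means the group is sparse under the adaptive dictionary
G = rng.normal(size=(16, 2)) @ rng.normal(size=(2, 8)) \
    + 0.01 * rng.normal(size=(16, 8))
sv = group_singular_values(G)
```

For a group of genuinely similar patches, only the leading singular values are large, which is exactly the sparsity that the benchmark measures and that norm-minimization methods try to approximate.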
Sparse representation has achieved great success in various image processing and computer vision tasks. For image processing, typical patch-based sparse representation (PSR) models tend to generate undesirable visual artifacts, while group-based sparse representation (GSR) models tend to produce over-smoothed results. In this paper, we propose a new sparse representation model, termed joint patch-group based sparse representation (JPG-SR). Compared with existing sparse representation models, the proposed JPG-SR provides an effective mechanism to integrate the local sparsity and nonlocal self-similarity of images. We then apply the proposed JPG-SR to image restoration tasks, including image inpainting and image deblocking. An iterative algorithm based on the alternating direction method of multipliers (ADMM) framework is developed to solve the proposed JPG-SR based image restoration problems. Experimental results demonstrate that the proposed JPG-SR is effective and outperforms many state-of-the-art methods in both objective and perceptual quality.
To enhance the visibility and usability of images captured in hazy conditions, many image dehazing algorithms (DHAs) have been proposed. With so many DHAs available, there is a need to evaluate and compare them. Due to the lack of reference haze-free images, DHAs are generally evaluated qualitatively using real hazy images. However, quantitative evaluation is possible with synthetic hazy images, since the reference haze-free images are available and full-reference (FR) image quality assessment (IQA) measures can be utilized. In this paper, we follow this strategy and systematically study DHA evaluation using synthetic hazy images. We first build a synthetic haze removing quality (SHRQ) database. It consists of two subsets: a regular and an aerial image subset, which include 360 and 240 dehazed images created from 45 and 30 synthetic hazy images using 8 DHAs, respectively. Since aerial imaging is an important application area of dehazing, we create the aerial image subset specifically. We then carry out a subjective quality evaluation study on these two subsets. We observe that treating DHA evaluation as an exact FR IQA process is questionable, and that state-of-the-art FR IQA measures are not effective for DHA evaluation. Thus, we propose a DHA quality evaluation method that integrates several dehazing-relevant features, including image structure recovery, color rendition, and over-enhancement of low-contrast areas. The proposed method works for both types of images, but we further improve it for aerial images by incorporating their specific characteristics. Experimental results on the two subsets of the SHRQ database validate the effectiveness of the proposed measures.