Studies combining visual communication courses with image style transfer are rare. Nevertheless, such a combination can help students understand more vividly the differences in perception produced by image styles. Therefore, a collaborative application combining visual communication courses and image style transfer is reported here. First, the visual communication courses are reviewed to establish their relationship with image style transfer. Then, a style transfer method based on deep learning is designed, and a fast transfer network is introduced. Moreover, image rendering is accelerated by separating training from execution. In addition, a fast style transfer network is constructed in TensorFlow, and a style model is obtained after training. Finally, six types of images are selected from the Google Gallery for style conversion: landscape, architectural, character, animal, cartoon, and hand-painted images. The style transfer method achieves excellent results over the whole image, except for regions that are hard to render. Furthermore, increasing the number of iterations of the style transfer network alleviates the loss of image content and style. The image style transfer method reported here can transfer image style in under 1 s, enabling real-time style transfer, and it effectively improves the stylization effect and image quality during style conversion. The proposed style transfer system can deepen students' understanding of different artistic styles in visual communication courses, thereby improving their learning efficiency.
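The speed claim above rests on separating training from execution: a feed-forward transform network is optimized offline against a perceptual loss for one style, and run-time stylization is a single forward pass. The sketch below illustrates that split in TensorFlow, which the abstract names; the layer sizes and stack are illustrative assumptions, not the paper's actual network.

```python
# Minimal sketch of "train offline, execute in one pass" fast style transfer.
# Layer widths/depths are hypothetical stand-ins, not the paper's architecture.
import tensorflow as tf

def transform_net():
    # A small feed-forward encoder-decoder; real fast-style-transfer nets
    # typically add residual blocks, but this toy stack shows the shape.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 9, strides=1, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(64, 3, strides=2, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(128, 3, strides=2, padding="same", activation="relu"),
        tf.keras.layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu"),
        tf.keras.layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(3, 9, strides=1, padding="same", activation="sigmoid"),
    ])

# Training happens offline: the network is fit against a content + style
# (perceptual) loss for ONE style image.  At run time, stylization is a
# single forward pass, which is why it finishes in well under a second.
net = transform_net()
content = tf.random.uniform((1, 256, 256, 3))   # stand-in for a real photo
stylized = net(content)                         # one forward pass = "execution"
print(stylized.shape)                           # (1, 256, 256, 3)
```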
Photo style transfer aims to change the style of a given photo to that of a reference style image while broadly and faithfully preserving the content of the input image. Most previous algorithms still struggle to extract and represent the style of an image accurately without disrupting human visual perception. In this paper, we present a texture-preserving photo style transfer algorithm that separates the input image into texture and structure and then applies a deep structure style transfer network to effectively change the extracted style characteristics of the structure. The texture-preserving photo style transfer overcomes the main drawbacks of previous approaches, such as distortion and saturation at object boundaries. Quantitative and qualitative experimental results, including a user study, show that the proposed photo style transfer is more universally applicable than notable previous approaches.
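The texture-preserving idea can be sketched minimally: split the photo into a smooth structure layer and a high-frequency texture residual, stylize only the structure, then add the texture back so fine detail bypasses the stylizer. In the sketch below, a Gaussian filter and a simple statistics shift are assumed stand-ins for the paper's decomposition and its deep structure style transfer network.

```python
# Hedged sketch of texture-preserving style transfer:
# structure gets stylized, the texture residual is restored afterwards.
import numpy as np
from scipy.ndimage import gaussian_filter

def split_structure_texture(img, sigma=3.0):
    # Smooth spatially only (sigma 0 on the channel axis).
    structure = gaussian_filter(img, sigma=(sigma, sigma, 0))
    texture = img - structure          # high-frequency residual
    return structure, texture

def stylize_structure(structure):
    # Placeholder for the deep structure style transfer network:
    # a crude global statistics shift so the sketch runs end to end.
    return (structure - structure.mean()) / (structure.std() + 1e-8) * 0.2 + 0.5

img = np.random.rand(128, 128, 3).astype(np.float32)   # stand-in photo
structure, texture = split_structure_texture(img)
out = np.clip(stylize_structure(structure) + texture, 0.0, 1.0)
print(out.shape)   # texture detail survives: it never entered the stylizer
```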
State-of-the-art image style transfer methods have achieved impressive results using neural networks. However, neural style transfer (NST) methods either ignore the local details of the style image by modeling style with global statistics or fail to fully exploit the shallow features of neural networks, so the synthesized image lacks detail. In this study, we propose a new patch-based style transfer method that operates directly in the image pixel domain without any neural networks, achieving compelling style transfer results with rich image detail. The proposed method derives from classic texture synthesis methods. Most previous methods rely on nearest neighbor search (NNS) for patch matching. However, this greedy strategy cannot guarantee the similarity of patch distributions between the synthesized image and the style image, which limits the expressiveness of textures. We solve this problem with an optimal patch matching algorithm based on Optimal Transport (OT) theory, which theoretically guarantees the similarity of the patch distributions and yields a flexible style modeling method. Extensive qualitative and quantitative experiments demonstrate that the proposed method achieves better synthesized results than state-of-the-art style transfer methods, including NST and classic texture-synthesis-based methods.
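The gap between greedy NNS matching and OT matching is easy to show concretely: with equal patch counts and uniform weights, discrete OT reduces to a linear assignment problem, whose solution is a bijection that uses every style patch exactly once and so matches the two patch distributions. The sketch below uses SciPy's Hungarian solver and toy patch sizes as assumed stand-ins for the paper's algorithm.

```python
# Sketch: greedy nearest-neighbour patch matching vs. OT (assignment) matching.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
content_patches = rng.random((64, 27))   # 64 flattened 3x3x3 patches (toy sizes)
style_patches = rng.random((64, 27))

# Pairwise squared-distance cost matrix between the two patch sets.
cost = ((content_patches[:, None, :] - style_patches[None, :, :]) ** 2).sum(-1)

# Greedy NNS: several content patches may grab the same style patch,
# so the synthesized patch distribution can collapse onto a few patches.
nns = cost.argmin(axis=1)

# Discrete OT with uniform marginals = linear assignment: an exact
# bijection minimizing total transport cost, so all style patches are used.
rows, cols = linear_sum_assignment(cost)
print(len(set(nns)), len(set(cols)))   # OT covers all 64; NNS usually does not
```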
Recent image style transfer methods allow the quick transformation of an input content image into an arbitrary style. However, these methods share a limitation: the scale-across style pattern of a style image cannot be fully transferred to a content image. In this paper, we propose a new style transfer method, named total style transfer, that resolves this limitation by utilizing intra/inter-scale statistics of multi-scaled feature maps without losing the merits of existing methods. First, we use a more general feature transform layer that employs intra/inter-scale statistics of multi-scaled feature maps and transforms the multi-scaled style of a content image into that of a style image. Second, we generate a multi-scaled stylized image using only a single decoder network with skip-connections, in which the multi-scaled features are merged. Finally, we optimize the style loss for the decoder network using the intra/inter-scale statistics of image style. Our total style transfer generates a stylized image with a scale-across style pattern from a pair of content and style images in a single forward pass, achieving lower memory consumption and faster feed-forward speed than the recent cascade scheme, and the lowest style loss among recent style transfer methods.
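A rough numpy sketch of the statistics idea follows: per-channel mean/std alignment at each scale (intra-scale, in the spirit of AdaIN) plus a crude correction over the scale-concatenated features (inter-scale). Both steps are assumptions for illustration and deliberately simplified; they are not the paper's actual feature transform layer.

```python
# Hedged sketch of intra-scale + inter-scale feature statistics matching.
import numpy as np

def adain(content_feat, style_feat, eps=1e-5):
    # Align per-channel mean/std over spatial positions (axis 0).
    cm, cs = content_feat.mean(0), content_feat.std(0) + eps
    sm, ss = style_feat.mean(0), style_feat.std(0) + eps
    return (content_feat - cm) / cs * ss + sm

rng = np.random.default_rng(1)
# Toy multi-scale features: (positions, channels) at two scales.
content_scales = [rng.random((256, 64)), rng.random((64, 128))]
style_scales = [rng.random((256, 64)), rng.random((64, 128))]

# Intra-scale: match statistics independently at every scale...
transformed = [adain(c, s) for c, s in zip(content_scales, style_scales)]

# ...inter-scale: a flattened global correction across all scales, a
# crude approximation of statistics that span scales.
joint_c = np.concatenate([t.reshape(-1) for t in transformed])
joint_s = np.concatenate([s.reshape(-1) for s in style_scales])
joint = (joint_c - joint_c.mean()) / (joint_c.std() + 1e-5) * joint_s.std() + joint_s.mean()
print(joint.shape)
```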
Interpreting style transfer methods and generating high-quality stylized images are two challenging computer vision tasks. However, most current image style transfer methods are not interpretable, and their image cartoonization performance is unsatisfactory owing to the complex lines and rich abstract features of cartoon style. To alleviate these two issues, in this paper we propose a novel two-stage interpretable learning method, the two-stage generative adversarial network (TSGAN), for image cartoonization. Specifically, we divide the generative model into a content learning stage and a stylization stage. The advantages are twofold: first, the finely differentiated two-stage image generation model is more interpretable and easier to understand; second, TSGAN can adjust the content and style details of the generated image separately. We further propose a Cartoon Image Enhance (CIE) module that dynamically samples salient cartoon texture details from the training data to generate cartoon images of higher quality. Experimental results show that TSGAN is effective compared with four representative methods in terms of visual, qualitative, and quantitative comparisons and a user study.
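The two-stage split can be sketched as two small generator stages chained together, with the intermediate content image exposed for inspection, which is where the interpretability claim comes from. The Keras layers below are illustrative assumptions, not TSGAN's actual architecture, and the adversarial training loop is omitted.

```python
# Skeleton of the two-stage generator idea: a content stage reconstructs
# structure, a separate stylization stage adds cartoon texture.
import tensorflow as tf

def stage(name, out_channels):
    # Hypothetical tiny stage; TSGAN's real stages are far deeper.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(out_channels, 3, padding="same", activation="sigmoid"),
    ], name=name)

content_stage = stage("content_learning", 3)   # stage 1: content structure
stylize_stage = stage("stylization", 3)        # stage 2: cartoon style

photo = tf.random.uniform((1, 128, 128, 3))
intermediate = content_stage(photo)    # inspectable output aids interpretability
cartoon = stylize_stage(intermediate)  # style details adjustable independently
print(cartoon.shape)                   # (1, 128, 128, 3)
```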