  • Style-A-Video: Agile Diffus...
    Huang, Nisha; Zhang, Yuxin; Dong, Weiming

    IEEE Signal Processing Letters, 2024, Volume 31
    Journal Article

    Large-scale text-to-video diffusion models have shown outstanding capabilities. However, their direct application to video stylization is hindered by the limited availability of text-to-video datasets and computational resources. Moreover, meeting content-preservation standards in style transfer is challenging because the noise-addition process is stochastic and destructive. This letter introduces a succinct video stylization approach, Style-A-Video, which leverages a generative pre-trained transformer and an image latent diffusion model for text-controlled video stylization. We improve the guidance conditions in the denoising process to balance artistic expression against structural preservation. Additionally, by integrating sampling-optimization and temporal-consistency modules, we reduce inter-frame flickering and suppress additional artifacts. Comprehensive experimental results demonstrate superior content preservation and stylistic performance while minimizing resource consumption.
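
    The abstract's two technical ideas, blending multiple guidance signals during denoising and enforcing temporal consistency across frames, can be sketched generically. The PyTorch toy below is not the authors' implementation: the denoiser, the guidance weights (w_style, w_content), and the temporal_blend factor are all illustrative assumptions about how such a pipeline could be wired.

    # Hedged sketch, NOT the paper's code. Shows a classifier-free-guidance-style
    # blend of style and content conditions, plus a simple temporal blend to
    # damp inter-frame flicker. All names and weights here are hypothetical.
    import torch
    import torch.nn as nn

    class ToyDenoiser(nn.Module):
        """Stand-in for a latent diffusion U-Net; predicts noise from (latent, cond)."""
        def __init__(self, dim=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim * 2, 128), nn.SiLU(), nn.Linear(128, dim))

        def forward(self, z, cond):
            return self.net(torch.cat([z, cond], dim=-1))

    def guided_step(model, z, text_cond, content_cond, null_cond,
                    w_style=5.0, w_content=1.5):
        # Push toward the text (style) condition while pulling back toward the
        # content condition so the frame's structure survives denoising.
        eps_uncond = model(z, null_cond)
        eps_style = model(z, text_cond)
        eps_content = model(z, content_cond)
        eps = (eps_uncond
               + w_style * (eps_style - eps_uncond)
               + w_content * (eps_content - eps_uncond))
        return z - 0.1 * eps  # toy update; a real sampler follows a noise schedule

    def stylize_video(model, frame_latents, text_cond, content_conds,
                      null_cond, steps=4, temporal_blend=0.2):
        # Per-frame denoising plus a temporal-consistency pass that blends
        # each denoised latent with its predecessor to damp flicker.
        out, prev = [], None
        for z, c in zip(frame_latents, content_conds):
            for _ in range(steps):
                z = guided_step(model, z, text_cond, c, null_cond)
            if prev is not None:
                z = (1 - temporal_blend) * z + temporal_blend * prev
            prev = z
            out.append(z)
        return torch.stack(out)

    if __name__ == "__main__":
        torch.manual_seed(0)
        dim, n_frames = 64, 8
        model = ToyDenoiser(dim)
        frames = [torch.randn(1, dim) for _ in range(n_frames)]
        conds = [torch.randn(1, dim) for _ in range(n_frames)]
        text, null = torch.randn(1, dim), torch.zeros(1, dim)
        styled = stylize_video(model, frames, text, conds, null)
        print(styled.shape)  # torch.Size([8, 1, 64])

    In a real latent-diffusion setting the content condition would come from the source frame's latent and the text condition from a text encoder; the point of the sketch is only that the guidance weights trade stylization strength against structural fidelity, and the temporal blend trades per-frame sharpness against cross-frame stability.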