Strand-based hair simulation has recently become increasingly popular for a range of real-time applications. However, accurately simulating the full number of hair strands remains challenging. A commonly employed technique involves simulating a subset of guide hairs to capture the overall behavior of the hairstyle; details are then enriched by interpolation using linear skinning. Hair interpolation enables fast real-time simulations but frequently introduces artifacts at runtime. Because the skinning weights are often pre-computed, substantial variations between the initial and deformed shapes of the hair can cause severe deviations in fine hair geometry: straight hairs may become kinked, and curly hairs may collapse into zigzags. This work introduces a novel physics-driven hair interpolation scheme that utilizes existing simulated guide hair data. Instead of operating directly on positions, we interpolate the internal forces from the guide hairs before efficiently reconstructing the rendered hairs based on their material model. We formulate our problem as a constraint satisfaction problem for which we present an efficient solution. Further practical considerations are addressed using regularization terms for penetration avoidance and drift correction. We have tested various hairstyles to illustrate that our approach can generate visually plausible rendered hairs from only a few guide hairs with minimal computational overhead, amounting to only about 20% of the cost of conventional linear hair interpolation. This efficiency underscores the practical viability of our method for real-time applications.
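As a rough illustration of the idea of interpolating forces instead of positions, the sketch below blends per-node spring forces from guide strands and relaxes a rendered strand under them. The mass-spring material model, function names, and constants are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def internal_forces(strand, rest_len, k=50.0):
    """Per-node spring forces of a strand of shape (n, 3) with uniform rest length.
    (Illustrative material model; the paper's is more general.)"""
    f = np.zeros_like(strand)
    seg = strand[1:] - strand[:-1]
    length = np.linalg.norm(seg, axis=1, keepdims=True)
    dirs = seg / np.maximum(length, 1e-9)
    fs = k * (length - rest_len) * dirs     # pulls node pairs together when stretched
    f[:-1] += fs
    f[1:] -= fs
    return f

def interpolate_forces(guides, weights, rest_len):
    """Blend internal forces of guide strands with (pre-computed) skinning weights."""
    forces = [internal_forces(g, rest_len) for g in guides]
    return sum(w * f for w, f in zip(weights, forces))

def reconstruct(init, blended, rest_len, steps=100, dt=0.01):
    """Relax a rendered strand under blended guide forces plus its own elasticity."""
    x = init.copy()
    for _ in range(steps):
        x += dt * (blended + internal_forces(x, rest_len))
        x[0] = init[0]                      # root stays attached to the scalp
    return x
```

A rendered strand whose guides are at rest receives zero blended force and keeps its groomed shape, which is the behavior position-based skinning cannot guarantee under large deformations.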
Procedural animation has seen widespread use in the design of expressive walking gaits for virtual characters. While similar tools could breathe life into robotic characters, existing techniques are largely unaware of the kinematic and dynamic constraints imposed by physical robots. In this paper, we propose a system for the artist-directed authoring of stylized bipedal walking gaits, tailored for execution on robotic characters. The artist interfaces with an interactive editing tool that generates the desired character motion in real time, either on the physical or simulated robot, using a model-based control stack. Each walking style is encoded as a set of sample parameters which are translated into whole-body reference trajectories using the proposed procedural animation technique. In order to generalize the stylized gait over a continuous range of input velocities, we employ a phase-space blending strategy that interpolates a set of example walk cycles authored by the animator while preserving contact constraints. To demonstrate the utility of our approach, we animate gaits for a custom, free-walking robotic character, and show, with two additional in-simulation examples, how our procedural animation technique generalizes to bipeds with different degrees of freedom, proportions, and mass distributions.
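The phase-space blending idea (sampling example walk cycles at a shared gait phase and interpolating by commanded velocity) can be sketched as follows. The keyframe representation and names are hypothetical; the key property is that both cycles are sampled at the same phase, so footfall events tied to that phase stay aligned across the blend.

```python
import numpy as np

def sample_cycle(keyframes, phase):
    """Linearly interpolate joint angles from a dict {phase in [0,1): angles},
    wrapping around the end of the cycle."""
    phases = np.array(sorted(keyframes))
    idx = np.searchsorted(phases, phase, side="right") - 1
    p0, p1 = phases[idx], phases[(idx + 1) % len(phases)]
    span = (p1 - p0) % 1.0 or 1.0
    t = ((phase - p0) % 1.0) / span
    a0, a1 = np.array(keyframes[p0]), np.array(keyframes[p1])
    return (1 - t) * a0 + t * a1

def blend_gaits(cycle_slow, cycle_fast, v, v_min, v_max, phase):
    """Phase-space blend: sample both example cycles at the SAME phase,
    then interpolate by the normalized commanded velocity. Contact events
    keyed to the phase therefore remain synchronized."""
    w = np.clip((v - v_min) / (v_max - v_min), 0.0, 1.0)
    return (1 - w) * sample_cycle(cycle_slow, phase) + w * sample_cycle(cycle_fast, phase)
```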
Creating realistic animations of human faces is still a challenging task in computer graphics. While computer graphics (CG) models capture much variability in a small parameter vector, they usually do not meet the necessary visual quality. This is because geometry-based animation often does not allow fine-grained deformations and fails to produce realistic renderings in difficult areas (mouth, eyes). Image-based animation techniques avoid these problems by using dynamic textures that capture details and small movements not explained by geometry. This comes at the cost of high memory requirements and limited flexibility in animation, because dynamic texture sequences need to be concatenated seamlessly, which is not always possible and is prone to visual artefacts. In this study, the authors present a new hybrid animation framework that exploits recent advances in deep learning to provide an interactive animation engine, accessible via a simple and intuitive visualisation for facial expression editing. The authors describe an automatic pipeline to generate training sequences that consist of dynamic textures plus sequences of consistent three-dimensional face models. Based on this data, they train a variational autoencoder to learn a low-dimensional latent space of facial expressions that is used for interactive facial animation.
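As a toy illustration of the VAE machinery mentioned above, the sketch below shows the reparameterization trick and latent-space interpolation, with linear stand-in encoder/decoder maps. All names and shapes are assumptions for illustration; the actual system uses deep networks trained on dynamic textures and consistent 3D face models.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    """Stand-in linear encoder: map a flattened face parameter vector to the
    mean and log-variance of a latent Gaussian (real model: a deep VAE encoder)."""
    return W_mu @ x, W_logvar @ x

def reparameterize(mu, logvar):
    """Sampling trick that keeps the encoder differentiable during training:
    z = mu + sigma * eps, with eps drawn from a standard normal."""
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

def decode(z, W_dec):
    """Stand-in linear decoder back to face parameters / texture coefficients."""
    return W_dec @ z

def edit_expression(z_a, z_b, t):
    """Interactive editing: walk the learned latent space between two expressions."""
    return (1 - t) * z_a + t * z_b
```

Interactive editing then amounts to dragging a point in this low-dimensional latent space and decoding it each frame.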
The animation of Three Body has attracted much anticipation from audiences since its launch: it exceeded 100 million views within 10 hours on its first day of broadcast, setting a record for the fastest animated series to pass that mark. However, as the episodes continued to air, word of mouth became polarised and then dropped sharply, with the Douban score falling to 3.9 as of March 13. This article analyses the Three Body animation in terms of airplay, reviews, animation production, script logic and originality. It analyses the animation's overly fast, thrill-seeking narrative pace; its flat, schematic characterisation that lacks depth; the extensive rewriting of the original text to impose conflict; and the major strategic miscalculations in its audience positioning (such as losing fans of the original novel through that drastic rewriting). Furthermore, this essay gives recommendations for well-known IP adaptations, anime adaptations in the Chinese market, and the future of the animation market. It also advises on the need to accurately target audience groups, reduce over-adaptation of the original, avoid deconstructing the original's undertone, and enrich the characters.
The advent of the era of artificial intelligence has brought unprecedented technological changes and breakthroughs to animation creation, gradually moving animation art towards intelligent production, freeing creators from cumbersome, labour-intensive workflows, and thereby letting them focus on creative innovation. This paper discusses the relationship between artificial intelligence and animation creation in depth, and clarifies the advantages of artificial intelligence in improving the efficiency of animation creation. At the same time, it argues that the root of animation creation in this period is still human nature, so creators should ground their animation in human nature itself.
In this paper we present a new paradigm for the generation and retargeting of facial animation. Like the vast majority of approaches that have addressed these topics, our formalism is built on blendshapes. However, where prior works have generally encoded facial geometry using a low-dimensional basis of these blendshapes, we propose to encode facial dynamics by treating blendshapes as a basis of forces rather than a basis of shapes. We develop this idea into a dynamic model that naturally combines the blendshapes paradigm with physics-based techniques for the simulation of deforming meshes. Because it escapes the linear span of the shape basis through time integration and physics-inspired simulation, this approach has a wider expressive range than previous blendshape-based methods. Its inherently physics-based formulation also enables the simulation of more advanced physical interactions, such as collision responses on lip contacts.
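The contrast between shapes-as-positions and shapes-as-forces can be sketched in a few lines. The spring-and-damper material model and all constants below are illustrative assumptions, not the paper's formulation; the point is that time integration lets the mesh overshoot and settle outside the linear span of the basis.

```python
import numpy as np

def blendshape_pose(neutral, basis, w):
    """Classic blendshapes: vertex positions are a linear combination of shapes."""
    return neutral + basis @ w

def blendshape_force_step(x, v, neutral, basis, w, k=10.0, damping=2.0, dt=0.01):
    """Blendshapes as forces: the activated target shape pulls the mesh like a
    spring, and explicit time integration produces dynamics (overshoot, settling)
    that a purely linear shape combination cannot express."""
    target = neutral + basis @ w          # same target the linear model would snap to
    force = k * (target - x) - damping * v
    v = v + dt * force                    # semi-implicit Euler
    x = x + dt * v
    return x, v
```

With the spring damped, the simulated mesh converges to the classic blendshape pose at steady state, while transient frames exhibit physically plausible lag and overshoot.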
Mickey Mouse, Betty Boop, Donald Duck, Bugs Bunny, Felix the Cat, and other beloved cartoon characters have entertained media audiences for almost a century, outliving the human stars who were once their contemporaries in studio-era Hollywood. In Animated Personalities, David McGowan asserts that iconic American theatrical short cartoon characters should be legitimately regarded as stars, equal to their live-action counterparts, not only because they have enjoyed long careers, but also because their star personas have been created and marketed in ways also used for cinematic celebrities. Drawing on detailed archival research, McGowan analyzes how Hollywood studios constructed and manipulated the star personas of the animated characters they owned. He shows how cartoon actors frequently kept pace with their human counterparts, granting “interviews,” allowing “candid” photographs, endorsing products, and generally behaving as actual actors did: for example, Donald Duck served his country during World War II, and Mickey Mouse was even embroiled in scandal. Challenging the notion that studios needed actors with physical bodies and real off-screen lives to create stars, McGowan demonstrates that media texts have successfully articulated an off-screen existence for animated characters. Following cartoon stars from silent movies to contemporary film and television, this groundbreaking book broadens the scope of star studies to include animation, concluding with provocative questions about the nature of stardom in an age of digitally enhanced filmmaking technologies.
Flower blooming is a beautiful phenomenon in nature: flowers open in an intricate and complex manner as petals bend, stretch and twist under various deformations. Flower petals are typically thin structures arranged in tight configurations with heavy self-occlusions. Thus, capturing and reconstructing spatially and temporally coherent sequences of blooming flowers is highly challenging. Early in the process only exterior petals are visible, so interior parts will be completely missing in the captured data. Utilizing commercially available 3D scanners, we capture the visible parts of blooming flowers into a sequence of 3D point clouds. We reconstruct the flower geometry and deformation over time using a template-based dynamic tracking algorithm. To track and model interior petals hidden in early stages of the blooming process, we employ an adaptively constrained optimization. Flower characteristics are exploited to track petals both forward and backward in time. Our methods allow us to faithfully reconstruct the flower blooming process of different species. In addition, we provide comparisons with state-of-the-art physical simulation-based approaches and evaluate our approach by using photos of captured real flowers.
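The template-based tracking described above rests on repeatedly aligning a template to the captured point cloud. Below is a minimal rigid variant of one such step: nearest-neighbor matching followed by a Kabsch best-fit transform. The actual method is non-rigid and adds petal-specific constraints, so this is only a structural sketch with assumed names.

```python
import numpy as np

def track_step(template, scan):
    """One rigid alignment step of template-based tracking: match each template
    point to its nearest scan point, then solve for the best-fit rotation and
    translation (Kabsch algorithm) and apply them to the template."""
    d = np.linalg.norm(template[:, None] - scan[None, :], axis=2)
    matched = scan[d.argmin(axis=1)]            # nearest-neighbor correspondences
    mu_t, mu_m = template.mean(0), matched.mean(0)
    H = (template - mu_t).T @ (matched - mu_m)  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_t
    return template @ R.T + t
```

Iterating this step (ICP-style) converges when correspondences stabilize; the non-rigid version replaces the single rigid transform with a constrained per-petal deformation.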
Human character animation is often critical in entertainment content production, including video games, virtual reality or fiction films. To this end, deep neural networks drive most recent advances ...through deep learning (DL) and deep reinforcement learning (DRL). In this article, we propose a comprehensive survey on the state‐of‐the‐art approaches based on either DL or DRL in skeleton‐based human character animation. First, we introduce motion data representations, most common human motion datasets and how basic deep models can be enhanced to foster learning of spatial and temporal patterns in motion data. Second, we cover state‐of‐the‐art approaches divided into three large families of applications in human animation pipelines: motion synthesis, character control and motion editing. Finally, we discuss the limitations of the current state‐of‐the‐art methods based on DL and/or DRL in skeletal human character animation and possible directions of future research to alleviate current limitations and meet animators' needs.
Synthesizing graceful and life-like behaviors for physically simulated characters has been a fundamental challenge in computer animation. Data-driven methods that leverage motion tracking are a ...prominent class of techniques for producing high fidelity motions for a wide range of behaviors. However, the effectiveness of these tracking-based methods often hinges on carefully designed objective functions, and when applied to large and diverse motion datasets, these methods require significant additional machinery to select the appropriate motion for the character to track in a given scenario. In this work, we propose to obviate the need to manually design imitation objectives and mechanisms for motion selection by utilizing a fully automated approach based on adversarial imitation learning. High-level task objectives that the character should perform can be specified by relatively simple reward functions, while the low-level style of the character's behaviors can be specified by a dataset of unstructured motion clips, without any explicit clip selection or sequencing. For example, a character traversing an obstacle course might utilize a task-reward that only considers forward progress, while the dataset contains clips of relevant behaviors such as running, jumping, and rolling. These motion clips are used to train an adversarial motion prior, which specifies style-rewards for training the character through reinforcement learning (RL). The adversarial RL procedure automatically selects which motion to perform, dynamically interpolating and generalizing from the dataset. Our system produces high-quality motions that are comparable to those achieved by state-of-the-art tracking-based techniques, while also being able to easily accommodate large datasets of unstructured motion clips. Composition of disparate skills emerges automatically from the motion prior, without requiring a high-level motion planner or other task-specific annotations of the motion clips. 
We demonstrate the effectiveness of our framework on a diverse cast of complex simulated characters and a challenging suite of motor control tasks.
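The reward structure described above, a simple task reward combined with a discriminator-driven style reward, can be written compactly. The quadratic style-reward form below follows the standard AMP formulation; the combination weights are illustrative.

```python
import numpy as np

def style_reward(disc_score):
    """AMP-style reward from the discriminator's score on a state transition:
    r = max(0, 1 - 0.25 * (d - 1)^2), highest when the transition is
    indistinguishable from the motion dataset (d near 1)."""
    return np.maximum(0.0, 1.0 - 0.25 * (disc_score - 1.0) ** 2)

def combined_reward(task_r, disc_score, w_task=0.5, w_style=0.5):
    """Total RL reward: a simple task term (e.g. forward progress) plus the
    learned style term; clip selection and sequencing emerge from training."""
    return w_task * task_r + w_style * style_reward(disc_score)
```

The character's policy is then trained with ordinary RL on this combined signal, so no per-clip annotation or explicit motion selection machinery is needed.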