
By J. Doe

Animating the Future – Developer Interview

Animation is a vital ingredient in building a realistic, immersive, and modern videogame world. As computing power and graphical quality increase, so too does the complexity involved in bringing characters, animals, objects, and effects to life with animation. Teams at Ubisoft La Forge and the Ubisoft China AI & Data Lab are always looking to the future for ways to make games more realistic, more immersive, and simpler to build. Recently, these teams presented two projects of interest internally at Ubisoft, with one also featured at the Symposium on Computer Animation (SCA).

One of these projects, ZooBuilder, allows animations to be built from reference materials, meaning animators spend much less time building them from scratch, and do not need to rely on motion capture or the painstaking work of reproducing movements by sight alone.

What are your current roles and backgrounds?

Daniel Holden: I work as a research scientist at Ubisoft La Forge, working mainly on machine learning and animation. My job is essentially to lead the research and development of new animation technology, with the aim of helping Ubisoft create better animation systems more easily. Before joining Ubisoft La Forge, I studied for my Ph.D. at The University of Edinburgh, and before that I worked as a technical artist and graphics programmer for a couple of small game studios and indie developers.

Shahin Rabbani: I am a physics programmer on the research and development team at La Forge. I have 10 years of experience working on robotics and character animation, but since 2018 I have been working on real-time fluid simulation, which is the topic of my SCA presentation and of La Torch, the tool we are working on.

What are the challenges animators face that can be helped by tools like the ones you have built?

ASF: Motion-capture with animals can be quite difficult for a few reasons. It may be possible to bring pets or domesticated animals into a motion-capture room if they are well trained, but our games include a wide range of species such as mountain lions, elephants, deer, wolverines, and much more. Traditionally, our animators manually craft keyframe animations frame-by-frame, which is extremely time-consuming.

The current common workaround is to “bake” or pre-make animation clips in physics-simulator software and place those in the scene, which is often a tedious and very iterative task. Moreover, the most interesting part of the physics is completely lost, and the animation does not react in real-time.

What is the current process for an animator to go from a still model to a moving, lifelike creature? How long does that usually take?

ASF: The process usually starts by collecting images and video references of the animal being worked on. During this process, animators go back and forth between the reference materials and the animation they are creating in tools such as Maya, 3DS Max, MotionBuilder, or Blender. It is possible to interpolate poses between consecutive keyframes to speed up the process, but they still have to do a lot of manual and time-consuming work.
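The in-betweening step mentioned above can be sketched as a simple blend between two keyframed poses. This is an illustrative example only: the joint names, angles, and frame numbers are invented, and real animation tools blend full transforms (typically quaternions), not single angles.

```python
# Hypothetical sketch: linearly interpolating a pose between two keyframes.
# Joint names and angle values are made up for illustration.

def interpolate_pose(pose_a, pose_b, t):
    """Blend two poses (dicts of joint name -> angle in degrees) at fraction t."""
    return {joint: (1.0 - t) * pose_a[joint] + t * pose_b[joint]
            for joint in pose_a}

# Keyframes at frame 0 and frame 10; generate the in-between at frame 5 (t = 0.5).
key_a = {"hip": 10.0, "knee": 45.0, "ankle": -5.0}
key_b = {"hip": 30.0, "knee": 15.0, "ankle": 5.0}
mid = interpolate_pose(key_a, key_b, 0.5)
print(mid)  # {'hip': 20.0, 'knee': 30.0, 'ankle': 0.0}
```

Automatic interpolation only helps between hand-made keyframes; the keyframes themselves are still the time-consuming part that the research described here aims to reduce.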

How do you teach a computer this same process with only 2D videos as a reference?

ASF: In our case, our pipeline involves three neural networks: an object-detection network locates the animal in each frame of the video, then a second network generates the 2D coordinates of the animal's skeleton. Finally, a third network converts the 2D skeleton coordinates to 3D.

We train the second network with data generated from existing keyframe animations. This data includes images of the animal and the 2D coordinates of its skeleton. During the training process, the neural network learns features from the images and tries to guess the coordinates of its skeleton. The same technology can be applied to different species of animals with different skeletons, such as birds or fish, as long as we have existing keyframe animations to generate the synthetic training data.

What about La Torch? How is it different from the more traditional process of creating fire and smoke effects?

SR: Unlike the technique of baking animations I mentioned, Torch 2.5D runs directly inside games in real time. Our system has two elements that pave the way for practical use in console games. The first is a novel technique for speeding up the computation and equation-solving, which can achieve up to four times the speed. The second creates a 3D illusion of the fire and smoke with a 2D simulation under the hood, which keeps computational costs significantly lower while taking advantage of several rendering techniques that provide a realistic 3D appearance.
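To give a flavor of the kind of 2D grid computation such a solver performs, here is a minimal semi-Lagrangian advection step, a standard building block of real-time smoke simulation. This is a generic textbook-style sketch with an invented grid, velocity field, and time step; it is not Torch 2.5D's solver, whose specific speed-up technique is not described in detail here.

```python
# Illustrative sketch: one semi-Lagrangian advection step on a 2D density
# grid, the kind of operation a real-time 2D smoke solver runs every frame.

def advect(density, vel_x, vel_y, dt, n):
    """Move density backwards along the velocity field (semi-Lagrangian)."""
    new = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # Trace the particle that ends up at cell (i, j) back in time.
            x = min(max(i - dt * vel_x[i][j], 0.0), n - 1.0)
            y = min(max(j - dt * vel_y[i][j], 0.0), n - 1.0)
            i0, j0 = int(x), int(y)
            i1, j1 = min(i0 + 1, n - 1), min(j0 + 1, n - 1)
            s, t = x - i0, y - j0
            # Bilinearly interpolate the source density at the traced point.
            new[i][j] = ((1 - s) * ((1 - t) * density[i0][j0] + t * density[i0][j1])
                         + s * ((1 - t) * density[i1][j0] + t * density[i1][j1]))
    return new

n = 8
density = [[0.0] * n for _ in range(n)]
density[4][4] = 1.0                     # a puff of smoke in the middle
vx = [[1.0] * n for _ in range(n)]      # uniform wind along +x
vy = [[0.0] * n for _ in range(n)]
stepped = advect(density, vx, vy, 1.0, n)
print(stepped[5][4])  # the puff has moved one cell along x -> 1.0
```

Because the velocity field is an input to every step, effects like wind or a character moving through the smoke change the result immediately, which is exactly the real-time reactivity that pre-baked clips cannot provide.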

Ideally, we would like to completely cut the iterative process of baking the simulation in external software and do it all on the fly, directly in the game. Torch also provides the baking option for cases where it’s needed, saving computation budget and giving options based on scene requirements.

Is it usual for fire in games to feature physics and to be so reactive to things like movement or flow dynamics?

SR: Fire and smoke are great examples of an immersive gameplay experience, as the physics is always rich in detail and unpredictable, but in a predictable way. Part of the realism of smoke and fire depends on how they react to their surrounding environment. As long as we use animated, baked clips of smoke and fire instead of actually computing their state in real time, there is little we can do to simulate the reactive qualities that make them feel so real, things like colliding with objects or changing shape and direction with the wind.

Original article published Oct 10, 2020 at 03:25.


Video provided by Ubisoft.


Ubisoft China AI & Data Lab | ZooBuilder: 2D and 3D Pose Estimation for Quadrupeds (Sep 10, 2020, by Ubisoft La Forge). It has been accepted to the showcase track of SCA 2020 (http://computeranimation.org/).
