Video game characters inspired by real people


In recent years, video game developers and computer scientists have tried to design techniques that can make gaming experiences more and more immersive, engaging and realistic. These include methods for automatically creating video game characters based on real people.

Most existing methods for creating and customizing video game characters require players to manually adjust their character's facial features in order to recreate their own face or someone else's. More recently, some developers have tried to build methods that automatically personalize a character's face by analyzing images of real people's faces. However, these methods are not always effective and do not always reproduce the faces they analyze realistically.

Researchers recently created MeInGame, a deep learning technique that can automatically generate character faces by analyzing a single portrait of a person’s face.

They came up with an automatic character face creation method that predicts both face shape and texture from a single portrait and can be incorporated into most existing 3D games.

Some of the automatic character customization systems presented in previous works are based on computational techniques known as 3D Morphable Face Models (3DMM). While some of these methods reproduce a person's facial features with a good level of precision, the way they represent geometric properties and spatial relationships (i.e., topology) often differs from the meshes used in most 3D video games.
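At its core, a 3DMM represents any face as a mean shape plus a linear combination of learned basis vectors. A minimal sketch of that idea is below; the dimensions and random "learned" bases are illustrative placeholders, not those of any real 3DMM such as the Basel Face Model.

```python
import numpy as np

# Illustrative sizes only; real 3DMMs use tens of thousands of vertices
# and basis sets learned from 3D face scans.
N_VERTS = 5   # number of mesh vertices
N_ID = 4      # size of the identity (shape) basis
N_EXP = 3     # size of the expression basis

rng = np.random.default_rng(0)
mean_shape = rng.normal(size=(N_VERTS * 3,))       # average face geometry
id_basis = rng.normal(size=(N_VERTS * 3, N_ID))    # identity variation directions
exp_basis = rng.normal(size=(N_VERTS * 3, N_EXP))  # expression variation directions

def reconstruct_face(id_coeffs, exp_coeffs):
    """Return per-vertex 3D positions for the given 3DMM coefficients.

    A fitting method (e.g., a CNN regressor) would predict these
    coefficients from a photo; here we just evaluate the linear model.
    """
    flat = mean_shape + id_basis @ id_coeffs + exp_basis @ exp_coeffs
    return flat.reshape(N_VERTS, 3)

# With all-zero coefficients, the reconstruction is exactly the mean face.
face = reconstruct_face(np.zeros(N_ID), np.zeros(N_EXP))
```

The topology mismatch the article mentions arises because the vertex layout of such a model is fixed by its training scans, which rarely matches the mesh layout a game engine expects.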

In order for 3DMMs to reliably reproduce the texture of a person’s face, they typically need to be trained on large sets of image data and associated texture data. Compiling these datasets can be time consuming. Additionally, these datasets don’t always contain real images collected from the wild, which can prevent models trained on them from performing consistently well when presented with new data.

From an input face photo, they first reconstruct a 3D face based on a 3D morphable face model (3DMM) and convolutional neural networks (CNNs), then transfer the face shape to the game's 3D mesh model. The proposed network takes the facial photo and the unwrapped coarse UV texture map as input, then predicts the lighting coefficients and the refined texture maps.
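The two-stage flow described above can be outlined as follows. This is a hedged structural sketch, not MeInGame's actual implementation: every function name, tensor shape, and the spherical-harmonics lighting size are assumptions made for illustration.

```python
import numpy as np

def reconstruct_coarse_face(photo):
    """Stage 1 (stubbed): regress 3DMM coefficients with a CNN, build a
    coarse 3D face mesh, and unwrap a coarse UV texture from the photo."""
    mesh = np.zeros((1000, 3))           # placeholder vertex positions
    coarse_uv = np.zeros((256, 256, 3))  # placeholder coarse texture map
    return mesh, coarse_uv

def refine_texture(photo, coarse_uv):
    """Stage 2 (stubbed): a network takes the photo plus the coarse UV map
    and predicts lighting coefficients and a refined texture map."""
    lighting = np.zeros(27)         # e.g., 9 spherical-harmonic bands x RGB
    refined_uv = coarse_uv.copy()   # stand-in for the network's output
    return lighting, refined_uv

photo = np.zeros((256, 256, 3))  # a single input portrait
mesh, coarse_uv = reconstruct_coarse_face(photo)
lighting, refined_uv = refine_texture(photo, coarse_uv)
```

Separating lighting from texture in the second stage is what lets the predicted texture stay usable under a game engine's own lighting, rather than baking in the illumination of the original photo.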

They evaluated their deep learning technique in a series of experiments, comparing the quality of the game characters it generated with that of the character faces produced by other existing cutting-edge methods for automatic character customization. Their method worked remarkably well, generating character faces that closely resembled those in the input images.

The proposed method not only produces detailed and vivid game characters similar to the input portrait, but it can also eliminate the influence of lighting and occlusions. Experiments show that this method outperforms state-of-the-art methods used in games.

In the future, the character face generation method devised by this team of researchers could be incorporated into a number of 3D video games, allowing the automatic creation of characters that closely resemble real people.
