We spoke with video game characters thanks to Nvidia’s AI, and the potential is enormous


Nvidia intends to revolutionize the world of video games once again with ACE. This technology uses artificial intelligence to bring the characters you meet in your virtual adventures to life, letting you converse fluidly with them. We were able to try the demo.


Imagine. You’re relaxing in front of the latest big video game. Dropped into a vast open world, you’re a little lost. Your current quest asks you to kill a creature, but you have no idea where it is hiding. The natural reflex is to go and ask the local villagers. You come across the first farmer on your route and strike up a conversation. Instead of a classic dialog box opening on your screen, you speak to him directly through your microphone and he responds naturally thanks to AI… This is the kind of scenario that Nvidia’s ACE (Avatar Cloud Engine) wants to offer: a new technology that brings video game characters to life with artificial intelligence.

We were able to try the demo and we were impressed. However, we still have many uncertainties and questions.

ACE, how does it work?

To create this system, Nvidia relies on the AI hardware (Tensor Cores) built into its RTX cards (regardless of generation), but also on the cloud. The American firm collaborated with Convai, a company that creates characters for various publishers such as Ubisoft, MiHoYo and Tencent. It designs NPCs for games, defining a predetermined appearance, backstory, lines, voice and behavior for each.
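To make that idea concrete, here is a minimal sketch, in Python, of the kind of persona sheet such a tool might let designers fill in. The class, field names and the example values (including the bartender’s name) are hypothetical illustrations, not Convai’s or Nvidia’s actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an NPC "persona sheet"; names and fields are
# illustrative, not the actual Convai or Nvidia ACE schema.
@dataclass
class NPCPersona:
    name: str
    appearance: str          # which character model / art assets to use
    backstory: str           # fixed background the AI must stay consistent with
    voice_profile: str       # identifier of the synthetic voice
    personality: str         # tone and behavioral guardrails
    knowledge: list[str] = field(default_factory=list)  # facts the NPC may share

# Example loosely based on the demo's ramen-bar owner (the name is made up).
bartender = NPCPersona(
    name="Jin",
    appearance="ramen cook in a dystopian city district",
    backstory="Runs a small ramen bar; dislikes violence.",
    voice_profile="calm_male_en",
    personality="friendly and unflappable, never breaks character",
    knowledge=[
        "only ramen is on the menu, no hamburgers",
        "the fluorescent jug on the bar contains water",
    ],
)
```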


With ACE, when players approach a character, they talk to it through their microphone. Their voice is transcribed to text by the GPU, then the text is sent to Nvidia’s servers. ACE generates a response using AI, which is converted to speech and sent back to your PC. Facial animation (Audio2Face) is handled by the GeForce RTX card. Finally, the character answers with a synthetic but credible voice.
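As a rough illustration of that round trip, here is a minimal Python sketch. Every function is a stand-in for the real component (local speech-to-text, the cloud language model, text-to-speech, Audio2Face); none of this is Nvidia’s actual ACE API.

```python
# Minimal sketch of the ACE round trip described above. Every helper is a
# stand-in for the real component; this is not Nvidia's actual ACE API.

def transcribe_on_gpu(mic_audio: bytes) -> str:
    """Stand-in for speech-to-text running locally on the RTX GPU."""
    return "Hello, do you serve hamburgers?"

def query_ace_cloud(player_text: str, persona: str) -> str:
    """Stand-in for the cloud-hosted model that writes the NPC's reply."""
    return "Sorry, hamburgers aren't on the menu. Can I get you a ramen?"

def synthesize_voice(reply_text: str) -> bytes:
    """Stand-in for text-to-speech; the real system returns audio."""
    return reply_text.encode("utf-8")

def animate_face(reply_audio: bytes) -> None:
    """Stand-in for Audio2Face driving lip-sync on the local GeForce RTX card."""
    print(f"[animating {len(reply_audio)} bytes of reply audio]")

def talk_to_npc(mic_audio: bytes, persona: str) -> bytes:
    player_text = transcribe_on_gpu(mic_audio)          # 1. transcribed locally
    reply_text = query_ace_cloud(player_text, persona)  # 2. answered in the cloud
    reply_audio = synthesize_voice(reply_text)          # 3. converted to voice
    animate_face(reply_audio)                           # 4. animated on the GPU
    return reply_audio                                  # 5. played back to the player

if __name__ == "__main__":
    talk_to_npc(b"<microphone capture>", persona="calm ramen-bar owner")
```

In the real pipeline, the transcription and facial animation stay on the player’s machine while the response is generated on Nvidia’s servers, which is why a network round trip (and the short pause we noticed in the demo) is unavoidable.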


We threatened a bartender, he took it pretty well

We were able to try ACE for an hour through a demo created with Unreal Engine 5. We entered a ramen bar in a dystopian city, the kind you might find in Cyberpunk 2077. Two characters were there: the owner and a customer, a cybersecurity expert by trade.


By pointing the cursor at one of the characters, we could speak to them with our own voice through the microphone. We approached the expert first. Admittedly, we were a little short on inspiration for the opening lines, which were limited to “Hello, what’s your name? What do you do for a living? Where do you live?” But the answers flowed: the young woman spoke to us coherently. More relaxed after these first cordial exchanges, we took the experience a little further, asking her what her favorite film was, whether she wanted to go to Disneyland with us, and whether she liked reading Phonandroid. Again, the responses were coherent, even amusing, though sometimes vague. The icing on the cake: we spoke in French, with Nvidia’s AI translating automatically on its remote server.


We then conversed with the bartender (in English), and again the responses were coherent. What’s more, he reacted correctly to his environment. When we asked him nicely to turn off the bar light, he did it. When we ordered a ramen, he prepared it for us. When we asked if he served hamburgers, he told us it wasn’t on the menu. When we showed interest in the fluorescent water jug on the bar, he knew what it was…

Artificial intelligence still has its limits

However, it was with him that we saw the limits of this technology. We decided to threaten him with “I have a gun, give me the money from the register”, and he answered in a mournful tone, “I don’t like violence, stop”, instead of panicking. On this point, Nvidia specifies that not every NPC reacts the same way, since each has a well-defined character and never breaks out of that straitjacket. Faced with an absurd situation, they do not improvise.


Likewise, it should be noted that the conversations are still quite mechanical. Into the microphone, you have to speak deliberately and articulate clearly. Then you have to wait a second or so for the character to respond. None of this makes for a smooth conversation, but remember that this is a demo of a still-young technology. Over the course of the conversation, you also quickly grasp how your interlocutor is structured and what you can ask to get a precise answer rather than a vague one. Last point to improve: the NPCs’ voices are certainly credible, but monotonous and always at the same rhythm. When we tried to annoy them, they remained calm, even though their dialogue conveyed annoyance at our antics. The specter of the uncanny valley is very present.

Will AI revolutionize video games?

With this demo, lots of questions come to mind. How could this technology be used in a real video game? Speaking to an NPC is fun in the moment, but is it sustainable in a 100-hour Witcher-style adventure? More importantly, what does it mean for dialogue writers, actors and screenwriters? It is hard to imagine an AI answering us in a GTA, where every hand-written line is crafted to the extreme. We will be able to judge very soon: ACE is not a distant dream, since developers are already working to integrate it into their games. The first to take the plunge will be STALKER 2, which comes out next September.

The fact is that we tested the raw technology; it will then be up to developers to use it cleverly. We can imagine a whole host of applications. For example, what makes the worlds of the Elder Scrolls games (Oblivion, Skyrim) feel alive is the routine of the NPCs, who live their lives when you are not interacting with them. That is scripted today, but it could be used brilliantly with artificial intelligence. Likewise, we can imagine this process applied to the environment of a virtual universe, which would change dynamically according to your actions. All uses are possible.

ACE is therefore a promising technology, and we had a lot of fun with the demo despite its obvious limitations. It remains to be seen how it will be used in the future.


