Innovations in AI



AI plays a huge role for developers, with many innovations in the space happening right under our noses.

We spoke with Emily Short and Tommy Thompson of AI and Games to get the latest on all that AI has to offer, from game design improvements and NPC voice generation to how gamers of the future will play.

It may be a terrifying thought to the budding developer, but AI is playing an increasingly important role in the making of video games. And it’s not really about what gamers traditionally think of when you mention the phrase “video game AI”, either.

Ask a gamer what their favourite video game AI is, and they may mention Alyx from Valve’s Half-Life series, the enemies from Monolith Productions’ shooter F.E.A.R., or the Alien from Creative Assembly’s terrifying horror Alien: Isolation.

But these days the truly impressive, revolutionary video game AI has nothing to do with companion characters, enemies or horrid monsters. AI’s real impact on video games is rather more under the hood.

Emily Short

Emily Short, creative director at British studio Failbetter Games and an expert in interactive narrative and AI dialogue, says one of the most promising areas of AI advancement is support for dynamically generated NPC performances, alongside improvements in voice generation.

“Over the past few years, speech generation based on deep learning models has gotten much better and faster to train,” she says. “We’re also seeing improvements in how easily those models can copy specific voices and accents, and achieve plausible emotional inflection.”

“That in turn opens up a lot of possibilities for dialogue that changes in response to what the player has done so far in a game. Previously, audio recording and storage issues tended to constrain how much dialogue variation was possible from NPCs.”

Anyone who’s played Bethesda’s sprawling open-world role-playing game The Elder Scrolls V: Skyrim will know it can get pretty annoying to hear the town guards say “then I took an arrow in the knee” every time you pass by. Perhaps with the help of AI, those town guards will finally offer a different excuse for their lack of adventuring.

There are still some significant challenges in this space, though. For example, developers currently can’t get the same dramatic performance from AI that they would from a great voice actor. And voice models for minority languages tend to be of lower quality than those developed for well-represented ones, such as English and various forms of Chinese.

“However, the boundary of what’s possible here continues to move really quickly, and the state of the art is already good enough to allow for some interesting possibilities in dynamic barks, for instance,” Short says.
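As an illustration only (none of this is from a shipped game), a dynamic bark system at its simplest picks a line based on what the player has actually done, with the chosen text then handed to a speech-generation model for audio. A minimal sketch, with invented event names and lines:

```python
# Hypothetical sketch of dynamic bark selection: the guard's line depends
# on the player's recent history rather than being a single canned phrase.
# In a real pipeline the chosen text would then be fed to a speech model.
BARKS = {
    "cleared_dungeon": "Heard you cleared out that dungeon. Impressive.",
    "stole_item":      "Keep your hands where I can see them, thief.",
    "default":         "Then I took an arrow in the knee...",
}

def pick_bark(player_events):
    """Return the bark keyed to the most recent recognised player event."""
    for event in reversed(player_events):
        if event in BARKS:
            return BARKS[event]
    return BARKS["default"]
```

A real system would also vary the generated voice performance per guard, which is where the speech models Short describes come in.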

And, along the same lines, there has already been some promising work done around driving emotionally expressive animation, using AI models to control lip-syncing and a detailed facial rig.

In fact, animation is one aspect of video game development benefiting significantly from deep learning. The idea here is to use AI to build animation controllers, rather than having an animator craft them by hand. The AI figures out the optimal blend points between handcrafted animations and interpolates between them based on different environmental conditions.

Let’s say you have a character sitting down on a chair. The AI will customise the sitting down animation within a range that is dependent on the chair. This process would include, for example, figuring out where the arms are going to be and interpolating correctly. 
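As a rough sketch of that interpolation step, with invented joint angles and chair heights, a controller might map the chair’s seat height to a blend weight between two handcrafted sit poses (a learned controller would predict the weight instead of deriving it linearly):

```python
def blend_pose(pose_low, pose_high, weight):
    """Linearly interpolate two poses (lists of joint angles, in degrees)."""
    return [(1 - weight) * a + weight * b for a, b in zip(pose_low, pose_high)]

def sit_pose_for_chair(seat_height, low=0.40, high=0.55):
    """Map a chair's seat height (metres) to a clamped blend weight."""
    weight = (seat_height - low) / (high - low)
    weight = max(0.0, min(1.0, weight))
    # Handcrafted 'sit' poses for the lowest and highest supported chairs:
    # hypothetical hip, knee and arm angles.
    pose_low_chair  = [95.0, 80.0, 10.0]
    pose_high_chair = [80.0, 70.0, 25.0]
    return blend_pose(pose_low_chair, pose_high_chair, weight)
```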

Assassin’s Creed maker Ubisoft is already using AI to help with character animation via what it calls Learned Motion Matching. Here, the company enlists the help of machine learning to reduce the memory usage of Motion Matching-based animation systems, which can be considerable as motion capture data piles up.

AI has a virtual finger in many video game pies. It is being used for physics system approximations, cheat detection and even automated testing. Baldur’s Gate 3 developer Larian recently hit the headlines when it emerged it had built a testing AI for the RPG – and then tried to defeat it. This “super gamer,” as Larian calls it, works with QA teams to help stress-test player actions and combat, among other things. 

Tommy Thompson

“Effectively, we're looking at the idea of working smarter and not harder,” says Tommy Thompson, director of AI and Games.

Intelligent Design

But how can AI be used to improve the design of a game, rather than just making development more efficient?

There are individual instances of designers using reinforcement learning to train arena combat opponents, for instance, or imitation learning to create a character who can perform in a game environment in a way that’s similar to a human player. And, Thompson says, those machine-learning based bots, such as Larian’s “super gamer,” can be extremely valuable for identifying problems in a design.
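The reinforcement-learning idea can be shown at toy scale. This sketch, with an invented two-action “duel” and made-up win rates, keeps a running average reward per action so the bot learns which attack to favour:

```python
import random

def train_arena_bot(episodes=500, seed=0):
    """Learn which attack wins a toy duel via sample-average value estimates."""
    rng = random.Random(seed)
    q = {"light_attack": 0.0, "heavy_attack": 0.0}  # estimated value per action
    n = {"light_attack": 0, "heavy_attack": 0}      # times each action was tried
    for _ in range(episodes):
        action = rng.choice(sorted(q))              # explore uniformly at random
        # Toy reward model: heavy attacks win this matchup more often.
        win_prob = 0.8 if action == "heavy_attack" else 0.3
        reward = 1.0 if rng.random() < win_prob else -1.0
        n[action] += 1
        q[action] += (reward - q[action]) / n[action]   # incremental mean update
    return max(q, key=q.get)                        # greedy learned policy
```

Real arena opponents trained this way face far larger state and action spaces, but the learn-from-reward loop is the same shape.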

“This is a really good way to see whether specific parts of the level's design aren't working. It could be that the geometry is preventing them from moving from one part to another – or a button that should work isn't working. And so it's allowing them to validate and fact-check that these things are actually working as intended.”

But what’s exciting AI enthusiasts – and terrifying game designers – is the realm of computational creativity. Here, we hand over the tools for making games to the AI and see what it comes up with. Much of this work is being done in the academic space and isn’t actively being used by the wider game development community. But, according to Thompson, “exciting” research is coming out of machine learning-driven procedural content generation.

“Can we build levels that are evoking a specific design ethos?” Thompson wonders. “Can we build levels that look like other levels we've seen before? Generative adversarial networks being used to generate Mario levels, for example. There's stuff coming out in Minecraft research – it's a bit bonkers. It’s not quite ready for the production side, but this is something that's slowly emerging – the idea of using machine learning and data that we have of existing levels to then help us create new levels that embody those design principles.”
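The GAN and Minecraft research Thompson mentions is well beyond a short example, but the underlying idea, learning statistics from existing levels and then sampling new ones, can be sketched with a simple Markov chain over level tiles. The training levels here are invented:

```python
import random

def build_model(levels):
    """Count which tile tends to follow which across the example levels."""
    transitions = {}
    for level in levels:
        for a, b in zip(level, level[1:]):
            transitions.setdefault(a, []).append(b)
    return transitions

def generate_level(model, start, length, seed=0):
    """Sample a new tile sequence that mimics the training levels."""
    rng = random.Random(seed)
    level = [start]
    while len(level) < length:
        level.append(rng.choice(model[level[-1]]))
    return level

# Two hand-invented 'levels' to learn from.
training = [["ground", "ground", "gap", "ground", "pipe", "ground"],
            ["ground", "pipe", "ground", "gap", "ground", "ground"]]
model = build_model(training)
```

A GAN-based generator replaces the hand-counted transition table with a learned network, but the goal is the same: new levels that embody the design principles of the old ones.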

Thinking beyond levels and missions, can AI build virtual worlds? Former Naughty Dog technical art director Andrew Maximov says his next venture, Promethean AI, is "the world's first AI that works with artists [and] assists them in the process of building virtual worlds.” The technology is designed to take on some of the more mundane and non-creative tasks associated with the process, but is also capable of creative problem solving thanks to its ability to learn and adapt to individual tastes.

So if an AI can build a virtual world, could it eventually put developers out of a job?

“I do worry somewhere down the line, the odd triple-A publisher is going to think, ‘oh, right, this will be a good mechanism to cut a certain department out – we can save ourselves an awful lot of money down the line by not paying for these people’,” Thompson predicts. “If there's a budgetary benefit to having an AI come in and do this job, they'll probably try and go for it.”

“We're going to hit this point in the next few years, particularly as a lot of this technology is improving, that certain roles within game development pipelines may change. And we've already actually had this in some very limited spaces. But it might become a little bit more drastic in certain situations.”

Food for thought. For now, though, it seems human developers are still very much a part of the mix.

“You probably wouldn’t want to let the model behind Artbreeder loose on creating all the faces in your 2D game, for instance, because some of those images look terrible and are deep in the uncanny valley,” Short says. “But as a tool for non-artistic people to guide a computer tool and make artistic output? It offers some surprising new possibilities.”

Similarly, generative text models such as Generative Pre-trained Transformer 3 (GPT-3) probably can’t be effectively deployed directly in a game. “Currently, they’re too big, too slow, too expensive, too likely to produce unpredictable or ungrounded output, too likely to say something that’s just deeply unacceptable or biased or off,” Short says. “So hooking up such a model to drive your NPC conversation is, for most games, a non-starter.”

However, generative text models can be used to suggest content for humans to check and, in the right circumstances, can be a “useful assist” for a content pipeline, Short explains. “Even on fairly small games, I’ve sometimes built text generators to produce options and then selected between them.”
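That generate-then-select workflow can be approximated with something as simple as a template expander that produces every candidate line for a writer to choose from. The template and word lists below are purely illustrative, not from any of Short’s games:

```python
import itertools

def generate_options(template, slots):
    """Expand every combination of slot fillers into a candidate line."""
    keys = sorted(slots)
    options = []
    for values in itertools.product(*(slots[k] for k in keys)):
        options.append(template.format(**dict(zip(keys, values))))
    return options

candidates = generate_options(
    "The {adj} merchant {verb} at your approach.",
    {"adj": ["weary", "cheerful"], "verb": ["nods", "scowls"]},
)
# A writer then hand-picks the best lines from `candidates`.
```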

The practical benefit of a “useful assist” from AI is it allows developers to focus less on fiddly, time-consuming tasks and more on the game design itself. Take the aforementioned animation controllers. If an AI can handle those, the animation team is free to focus on simply making lovely animations. 

“Let's face it, nobody wants to make an animation controller,” Thompson laughs.

Deep Impact

Along the same lines, Nvidia believes its Deep Learning Super Sampling (DLSS) AI rendering technology can have a positive impact on video game design because it takes away that most restrictive of considerations: power.

DLSS, which is currently used in over 40 games including Fortnite, Cyberpunk 2077 and Minecraft, increases graphics performance by tapping into the power of a deep learning neural network. 

“Nowadays gamers are quite rightfully demanding more than just beautiful graphics or great performance,” Nvidia’s Ben Berraondo says. “They want both, and I think DLSS is a great way for us to enable that.”

Taking Fortnite as an example, DLSS means the player can turn up the settings and use advanced graphical features such as ray-tracing while still achieving 120fps at a low latency. Cyberpunk 2077, which demands a powerful PC to run well, can use DLSS to output at 60fps with all the graphics effects turned on.

It’s a fascinating piece of technology, and it’s improving all the time. DLSS aims to output visuals as close as possible to what Nvidia calls “ground truth”: reference images of various games rendered on Nvidia’s supercomputers at an eye-watering 16K resolution. DLSS takes the low-resolution input image – often 1080p or lower – uses motion vectors to work out how the game is drawing a particular scene, and compares its outputted high-resolution version against the ground-truth image. These comparisons “train” the neural network over time. DLSS, it seems, sets itself high standards.
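That comparison step can be sketched conceptually. In this toy version the neural network is replaced by plain nearest-neighbour upscaling so the code runs as-is, and motion vectors are omitted; only the “compare upscaled output against ground truth” shape of the training loop survives:

```python
def upscale_2x(frame):
    """Nearest-neighbour 2x upscale of a 2D list of pixel values.
    Stands in for the neural network in this illustration."""
    out = []
    for row in frame:
        wide = [p for p in row for _ in (0, 1)]   # duplicate each column
        out.extend([wide, list(wide)])            # duplicate each row
    return out

def mse(pred, truth):
    """Mean squared error between two equal-sized 2D frames: the training
    signal that would drive network updates in a real setup."""
    diffs = [(p - t) ** 2 for pr, tr in zip(pred, truth) for p, t in zip(pr, tr)]
    return sum(diffs) / len(diffs)

# Tiny invented example: a 2x2 'rendered frame' and its 4x4 ground truth.
low_res   = [[0.0, 1.0], [1.0, 0.0]]
truth_4x4 = [[0.0, 0.0, 1.0, 1.0],
             [0.0, 0.0, 1.0, 1.0],
             [1.0, 1.0, 0.0, 0.0],
             [1.0, 1.0, 0.0, 0.0]]
loss = mse(upscale_2x(low_res), truth_4x4)
```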

Traditionally, developers design their games within the constraints of the technology they run on. Consoles, for example, can only offer so much computational power. DLSS offers increased performance alongside fancy graphical features. And simply knowing DLSS is available encourages developers to shoot for their creative vision, Berraondo says.

“Beforehand, the developer might have said, ‘Oh, this is a really great idea, but being honest, when we've tried this before it hasn't really worked because it tanks performance too much.’ Now they're in a position to be able to say, ‘Well, actually, let's give it a go and see what happens.’ And we know that actually, people will have the performance to spare.”

Expect DLSS to become pervasive, too. The popular Unreal Engine 4 now has a DLSS plugin, which makes integration much quicker. Smaller studios are also picking up the tech. CyberPunch, which released a game called The Fabled Woods, is using DLSS, as is Nightdive Studios, which is making the new System Shock. In both those examples, DLSS was integrated within a few days thanks to the Unreal Engine 4 plugin. Berraondo teased that DLSS will be integrated into more engines in the near future, with the goal that any game in development can easily make use of the technology.

It seems natural to wonder about the future of video game development when we think about AI, but in truth AI is already part of the process. Pretty much every triple-A video game company has an AI research unit of some description. EA, for example, has its Search for Extraordinary Experiences Division (SEED), which hit the headlines in 2018 after it built a self-learning AI agent that taught itself how to play Battlefield 1 multiplayer from scratch. Ubisoft La Forge, which is based in Montreal, has investigated using Deep Reinforcement Learning to improve NPC navigation. And back in 2017, DeepMind and Blizzard opened real-time strategy game StarCraft 2 as an AI research environment.

“This is becoming big business,” Thompson says, “and it's really beginning to shape these games in a way that you will never see.”
