
[Image caption: The Muse AI was trained on the video game Bleeding Edge. Credit: Microsoft]
An artificial intelligence model from Microsoft can recreate realistic video game footage that the company says could help designers make games, but experts are unconvinced that the tool will be useful for most game developers.
Neural networks that can produce coherent and accurate footage from video games are not new. A recent Google-created AI generated a fully playable version of the classic computer game Doom without access to the underlying game engine. The original Doom, however, was released in 1993; more modern games are far more complex, with sophisticated physics and computationally intensive graphics, which have proved trickier for AIs to faithfully recreate.
Now, Katja Hofmann at Microsoft Research and her colleagues have developed an AI model called Muse, which can recreate full sequences of the multiplayer online battle game Bleeding Edge. These sequences appear to obey the game’s underlying physics and keep players and in-game objects consistent over time, which suggests the model has developed a deep understanding of the game, says Hofmann.
Muse is trained on seven years of human gameplay data, including both controller inputs and video footage, provided by Bleeding Edge’s Microsoft-owned developer, Ninja Theory. It works similarly to large language models like ChatGPT; given an input, in the form of a video game frame and its associated controller actions, it is tasked with predicting the gameplay that might come next. “It’s really quite mind-boggling, even to me now, that purely from training models to predict what’s going to appear next… it learns a sophisticated, deep understanding of this complex 3D environment,” says Hofmann.
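As a rough illustration of this next-frame objective, the sketch below trains a toy causal sequence model on encoded frames paired with controller inputs, so that each step predicts the frame that follows. The architecture, dimensions and names are assumptions chosen for demonstration, not details of Muse itself.

```python
# Minimal sketch of next-frame prediction from (frame, controller-action) pairs.
# NOT Microsoft's Muse architecture: the toy transformer, sizes and names below
# are illustrative assumptions showing only the training objective.
import torch
import torch.nn as nn

FRAME_DIM, ACTION_DIM, HIDDEN = 256, 16, 512  # assumed toy sizes


class NextFrameModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(FRAME_DIM + ACTION_DIM, HIDDEN)
        layer = nn.TransformerEncoderLayer(d_model=HIDDEN, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(HIDDEN, FRAME_DIM)  # predicts the next frame encoding

    def forward(self, frames, actions):
        # Combine each frame with the controller input recorded alongside it.
        x = self.embed(torch.cat([frames, actions], dim=-1))  # (batch, time, hidden)
        # Causal mask: each position may only attend to earlier gameplay.
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1))
        h = self.backbone(x, mask=mask)
        return self.head(h)


model = NextFrameModel()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)

# Stand-ins for encoded gameplay frames and controller inputs.
frames = torch.randn(4, 32, FRAME_DIM)
actions = torch.randn(4, 32, ACTION_DIM)

# Training objective: from history up to step t, predict the frame at t+1.
pred = model(frames[:, :-1], actions[:, :-1])
loss = nn.functional.mse_loss(pred, frames[:, 1:])
loss.backward()
optimiser.step()
```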
To understand how people might use an AI tool like Muse, the team also surveyed game developers to learn what features they would find useful. As a result, the researchers added the capability to iteratively adjust to changes made on the fly, such as a player’s character changing or new objects entering a scene. This could be useful for coming up with new ideas and trying out what-if scenarios for developers, says Hofmann.
But Muse is still limited to generating sequences within the bounds of the original Bleeding Edge game — it can’t come up with new concepts or designs. And it is unclear if this is an inherent limitation of the model, or something that could be overcome with more training data from other games, says Mike Cook at King’s College London. “This is a long, long way away from the idea that AI systems can design games on their own.”
While the ability to generate consistent gameplay sequences is impressive, developers might prefer to have greater control, says Cook. “If you build a tool that is actually testing your game, running the game code itself, you don’t need to worry about persistency or consistency, because it’s running the actual game. So these are solving problems that generative AI has itself introduced.”
It’s promising that the model is designed with developers in mind, says Georgios Yannakakis at the Institute of Digital Games at the University of Malta, but it might not be feasible for most developers, who don’t have access to that much training data. “It comes down to the question of is it worth doing?” says Yannakakis. “Microsoft spent seven years collecting data and training these models to demonstrate that you can actually do it. But would an actual game studio afford [to do] this?”
Even Microsoft itself is equivocal over whether AI-designed games could be on the horizon: when asked if developers in its Xbox gaming division might use the tool, the company declined to comment.
While Hofmann and her team are hopeful that future versions of Muse will be able to generalise beyond their training data – coming up with new scenarios and levels for games on which they are trained, as well as working for different games – this will be a significant challenge, says Cook, because modern games are so complex.
“One of the ways a game distinguishes itself is by changing systems and introducing new conceptual level ideas. That makes it very hard for machine learning systems to get outside of their training data and innovate and invent beyond what they’ve seen,” he says.