
Can AI Run Doom? Google’s GameNGen Shows It Can

The world of AI continues to surprise us, pushing the boundaries of what was once thought impossible. One of the latest groundbreaking developments in AI comes from Google’s GameNGen project, which has successfully recreated the classic video game Doom using a neural network, without relying on the game’s original code or engine. This milestone is not just a technical marvel but also a glimpse into the future of game development and interactive simulations, showcasing the growing capabilities of generative AI models.

The Birth of GameNGen: A Neural Network-Powered Game Engine

At its core, Google’s GameNGen is a diffusion-based AI model, an adaptation of the Stable Diffusion model, capable of simulating Doom in real time. This is achieved without a traditional game engine, marking a pivotal shift in how games may be developed in the future. Instead of executing game logic, GameNGen generates each gameplay frame from patterns it has learned in recorded gameplay, conditioned on the frames and player actions that came before.
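To make the idea concrete, here is a minimal, hypothetical sketch of that generation loop: a diffusion-style model repeatedly turns noise into the next frame, conditioned on a short window of recent frames and player inputs. The function names, context length, and frame size are illustrative assumptions, not GameNGen’s actual implementation.

```python
# Hypothetical sketch of GameNGen-style autoregressive frame generation.
# The real system conditions a Stable Diffusion backbone on past frames and
# player actions; here a dummy "model" stands in so the loop structure runs.

import numpy as np
from collections import deque

FRAME_SHAPE = (64, 64, 3)   # toy resolution; the real model works on larger frames
CONTEXT_LEN = 8             # how many past frames/actions the model sees (assumed)

def dummy_diffusion_model(past_frames, past_actions, noise):
    """Stand-in for a diffusion model that denoises `noise` into the next frame,
    conditioned on the recent frame/action history."""
    context = np.mean(past_frames, axis=0)          # crude "memory" of the scene
    action_bias = 0.01 * past_actions[-1]           # latest input nudges the image
    return np.clip(0.9 * context + 0.1 * noise + action_bias, 0.0, 1.0)

def play(num_steps, get_player_action):
    """Autoregressive loop: each generated frame becomes part of the context
    used to generate the next one, driven by live player input."""
    frames = deque([np.zeros(FRAME_SHAPE)] * CONTEXT_LEN, maxlen=CONTEXT_LEN)
    actions = deque([0] * CONTEXT_LEN, maxlen=CONTEXT_LEN)
    for _ in range(num_steps):
        actions.append(get_player_action())         # read the player's input
        noise = np.random.rand(*FRAME_SHAPE)        # start from noise each step
        next_frame = dummy_diffusion_model(np.stack(frames), np.array(actions), noise)
        frames.append(next_frame)                   # feed the new frame back in
        yield next_frame                            # display at ~20 FPS in practice

# Example: run 100 steps with a random "player".
for frame in play(100, get_player_action=lambda: np.random.randint(0, 8)):
    pass
```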

In the past, games like Doom were painstakingly programmed to manage player input, environment changes, and game physics. GameNGen turns this process on its head with a two-phase training process. First, an AI agent learns to play the game, and its gameplay sessions are recorded. Then, a generative model is trained on those recordings, learning how the game environment responds to player actions and what the resulting visuals look like, so that it can generate new game frames in real time. This generative approach lets the AI simulate interactions within the game world without any pre-written game logic.

In short, the approach has three stages: train an RL agent to play the game and record its sessions as training data, fine-tune a Stable Diffusion model for stable next-frame prediction conditioned on that data, and fine-tune the latent decoder to reduce artifacts and improve image quality.
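The sketch below outlines those stages in plain Python, with toy stand-ins for the environment, the agent policy, and the model updates. Everything here (function names, the stub environment, the update callbacks) is an assumption made for illustration rather than GameNGen’s real training code.

```python
# Hypothetical outline of the training pipeline summarized above.
# All names and stubs are illustrative; they are not GameNGen's actual API.

import random

def collect_gameplay(env_step, agent_policy, episodes, horizon):
    """Stage 1: an RL agent plays the game and its (frame, action, next_frame)
    transitions are logged as training data for the generative model."""
    dataset = []
    for _ in range(episodes):
        frame = env_step(reset=True)
        for _ in range(horizon):
            action = agent_policy(frame)
            next_frame = env_step(action=action)
            dataset.append((frame, action, next_frame))
            frame = next_frame
    return dataset

def finetune_diffusion(dataset, model_update):
    """Stage 2: fine-tune a diffusion model to predict the next frame,
    conditioned on the previous frame(s) and the player's action."""
    for frame, action, next_frame in dataset:
        model_update(condition=(frame, action), target=next_frame)

def finetune_decoder(dataset, decoder_update):
    """Stage 3: fine-tune the latent decoder on real frames to reduce artifacts."""
    for _, _, next_frame in dataset:
        decoder_update(target=next_frame)

# Toy stand-ins so the outline executes end to end.
toy_env = lambda reset=False, action=None: random.random()   # fake "frame"
toy_policy = lambda frame: random.randint(0, 7)               # fake agent
logged = collect_gameplay(toy_env, toy_policy, episodes=2, horizon=10)
finetune_diffusion(logged, model_update=lambda condition, target: None)
finetune_decoder(logged, decoder_update=lambda target: None)
```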

A New Milestone for AI: Interactive Simulations

The implications of this achievement are significant. Traditionally, game engines are the backbone of video game development, defining the rules, rendering visuals, and managing input. GameNGen, however, bypasses these engines, simulating the environment directly through learned behaviors. It can run the game at 20 frames per second on a single Tensor Processing Unit (TPU), a specialized chip designed for AI workloads.

Perhaps most impressively, human raters could only distinguish the AI-generated gameplay from the original about half the time, demonstrating how convincingly the neural network can simulate the game’s visuals and mechanics. While some graphical glitches and limitations exist, such as frame degradation over time, this breakthrough represents the first step toward a future where AI could be tasked with generating entire virtual environments and games.

A human player playing Doom on GameNGen at 20 FPS.

The Future of Game Development: From Code to AI Creativity

Generative AI’s potential in game development extends far beyond simple recreations. The ability for AI to “imagine” and generate games opens doors to a new paradigm where developers may no longer need to manually write code for every action, interaction, or visual. Instead, AI could generate game mechanics from descriptions, text, or even concept art, drastically reducing development time and cost.

This could democratize game development, allowing smaller studios or even solo creators to produce complex, interactive experiences without the need for large teams or extensive coding knowledge. Beyond game creation, this AI-driven approach could lead to more dynamic and evolving game worlds, where environments and narratives change based on player interactions in real time.

Broader Applications: From Gaming to Real-World Simulations

The implications of AI-driven simulations like GameNGen extend beyond gaming. Industries such as autonomous vehicles, virtual reality (VR), and smart city management could benefit from similar AI-generated environments. For instance, autonomous vehicles require simulations of countless driving scenarios for safety and performance testing. AI models like GameNGen could generate these environments dynamically, offering more realistic and complex training grounds for AI systems.

Similarly, in VR and augmented reality (AR), real-time AI-driven environments could create highly immersive worlds that adapt to user actions, further blurring the lines between virtual and real-world experiences.

Challenges and Next Steps

While the potential is vast, AI-powered game engines like GameNGen face several challenges. Scaling this technology to handle more graphically intensive modern games will require significant advancements in computational power. Moreover, developing a general-purpose AI engine that can simulate a wide range of games, not just specific titles like Doom, is a daunting task.

However, the progress made by GameNGen offers a tantalizing glimpse into the future of AI in gaming and beyond. As AI continues to evolve, we may soon enter an era where games are not just played by AI but created and powered by it.

A Leap Forward

Google’s GameNGen project represents a significant leap forward for generative AI. By successfully simulating Doom without traditional game code, this neural model opens the door to a new era of AI-driven game development. As this technology matures, it has the potential to revolutionize not only the gaming industry but also various fields that rely on real-time simulations. We are just beginning to see the possibilities of what AI can create—and the future looks incredibly exciting.

As AI and gaming continue to intersect, we may soon find ourselves asking not just “Can it run Doom?” but rather, “What will AI dream up next?”