Idea Simulation - Creating Playback/Replay and more.


I am testing methods and ideas for implementing a live-combat RPG system with nested AI functions for 10-30 objects that battle as two teams in a two-dimensional space using various skills, abilities, and items.

Instead of actually playing this out with objects as one would typically do, I am working on virtualizing the whole thing in pure code so that I can process what would normally be 20-30 steps in a single step — essentially simulating a two-minute battle scene in a couple of seconds.

As the scene is processed, it will be captured as a script of the battle that can then be played back at normal speed — but only visually, since all the calculations and processing have already been mapped out.
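A minimal sketch of that idea (Python used here as pseudocode — all names are mine, not from any engine): run the simulation at full speed while recording one event per tick, then replay the recorded script by mapping simulation ticks to wall-clock playback time.

```python
# Hypothetical sketch: fast-forward simulation that records a "script"
# of events, which a renderer can later play back at normal speed.

def simulate(ticks):
    """Run the whole battle instantly, recording one event per tick."""
    script = []
    x = 0
    for t in range(ticks):
        x += 1                          # stand-in for real combat logic
        script.append((t, "move", x))   # (tick, action, payload)
    return script

def replay(script, fps=60):
    """Yield (wall_clock_seconds, action, payload) for the renderer.
    The renderer only animates; no game logic runs here."""
    for tick, action, payload in script:
        yield (tick / fps, action, payload)

script = simulate(120)        # a 2-second battle, computed instantly
frames = list(replay(script)) # frames[0] == (0.0, "move", 1)
```

The key property is that `replay` does no simulation at all — it just translates ticks into presentation timestamps.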

I don't have a working model yet but am working on it and hope to have some basic examples soon.

Has anyone else attempted this? Do you have any ideas or pitfalls you think I should avoid right off the bat?

Looking forward to hearing of others' endeavors or ideas, and I will post my own experiences as I have them!



🍋 *lemon noises*
GMC Elder
The most foolproof way to do this: save a complete snapshot of all game data every step. Reversing / replaying it later, you'd just load the snapshot to update the state of all objects.

  • Random number state. You might want it to reset each time you rewind / rewatch (depending on how much randomness you have, and if it affects gameplay or just effects), or you might want to save the state so you always get the same random numbers. In either case, you might need to deal with it a bit.
  • Saving too much information (leading to memory issues). In particular a concern if you have no upper limit for how long a replay can go on, or how many objects will interact in a scene.
  • Saving too little information, and having to guess when reconstructing (leading to issues like interpolated objects missing collisions, or randomness divergence).
  • Destroying objects makes them untrackable in the state machine after that point. Creating new ones makes them untrackable in past points. Make sure to figure out a way to deal with this.
It's fairly simple to store data (just put it in an expanding array). The hard part is figuring out a data format that lets you save just what you need, and everything you need, and then convert a game state to/from the data storage format.
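As a rough illustration of the snapshot approach (again Python as pseudocode; the structure is hypothetical), a per-step snapshot can bundle the object state together with the random-number generator's internals, so a rewind reproduces both positions and future rolls:

```python
import copy
import random

def take_snapshot(objects):
    """Deep-copy every object's state plus the RNG internals."""
    return {
        "objects": copy.deepcopy(objects),
        "rng": random.getstate(),  # so replays roll the same numbers
    }

def restore_snapshot(snapshot):
    """Restore RNG state, return a fresh copy of the object state."""
    random.setstate(snapshot["rng"])
    return copy.deepcopy(snapshot["objects"])

# One snapshot per step; a plain list plays the "expanding array" role.
history = []
state = [{"id": 1, "hp": 20, "x": 0.0}]
for step in range(3):
    history.append(take_snapshot(state))
    state[0]["x"] += random.random()  # simulate some movement

# Rewind to step 0: same position AND the same future random rolls.
state = restore_snapshot(history[0])
```

Storing `random.getstate()` alongside the objects is one answer to the RNG pitfall above: restoring any snapshot replays the exact same sequence of rolls from that point on.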
All valid points, Yal.

For me, I am having to figure out some creative ways to manage this. For one, I want to run through the simulation very quickly, so I believe using actual objects would eat up processing power I could otherwise use to speed up the number crunching when creating my "script" for playback. Secondly, and more importantly: when I play back the simulation, I want it to be just a handful of objects moving around, "acting" out the script the simulation created, without running anything other than animations on the screen. I will design the playback to use as little processing as possible, and to be pretty compact, because I am going to be sending the script to clients over the network for playback.
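To keep a script compact for network transfer, one option (the wire format below is purely an invented example) is to pack each event into a fixed-size binary record instead of sending verbose text:

```python
import struct

# Hypothetical wire format: one 12-byte record per event.
# uint16 tick, uint8 actor id, uint8 action id, float32 x, float32 y
EVENT = struct.Struct("<HBBff")

def pack_script(events):
    """Serialize a list of (tick, actor, action, x, y) tuples to bytes."""
    return b"".join(EVENT.pack(*e) for e in events)

def unpack_script(blob):
    """Inverse of pack_script: bytes back to a list of event tuples."""
    return list(EVENT.iter_unpack(blob))

events = [(0, 3, 1, 10.0, 5.0), (30, 7, 2, 12.5, 5.0)]
blob = pack_script(events)  # 2 events * 12 bytes = 24 bytes total
```

Fixed-size records also make it trivial to seek to any point in the playback file, since event N always starts at byte N * 12.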

I have done a good bit of object-less programming, including creating AI battles that reside entirely in code without using any objects other than a controller. This is taking it up a notch for me, though, because before I was doing it live; now I want to crunch the whole scene as quickly as possible and package it in such a way that it can be played back at normal speed and not look amazingly terrible xD

Agreed though — trimming the data to the minimum, and then being able to expand it on playback so it does not look choppy or sloppy, is going to be the challenge :D
I am thinking about trying to handle this in three different ways. Each has its own benefits and drawbacks, but to some degree it's going to come down to how much I can fake it and how much power it takes not to fake it.


  1. Really run through the AI trees and map out an action sequence for 30-50 virtual objects, then export the vital pieces of information (when/where/how) to be visually played out on the client side.
  2. Scrap the AI trees and just run the numbers, outputting the when/where/how to the client for playback.
  3. Scrap the AI and just run the most basic numbers, outputting only the when/how and letting the client throw the where together itself.
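For option 3, the client has to invent the "where" itself. One simple approach (a sketch with my own naming, not a prescribed solution) is to send coarse waypoints and let the client linearly interpolate every in-between frame:

```python
def lerp(a, b, t):
    """Linear interpolation between a and b at fraction t in [0, 1]."""
    return a + (b - a) * t

def client_position(waypoints, tick):
    """waypoints: list of (tick, x, y) keyframes from the server.
    The client fills in every in-between frame by interpolation."""
    for (t0, x0, y0), (t1, x1, y1) in zip(waypoints, waypoints[1:]):
        if t0 <= tick <= t1:
            f = (tick - t0) / (t1 - t0)
            return (lerp(x0, x1, f), lerp(y0, y1, f))
    return waypoints[-1][1:]  # hold the last known position

path = [(0, 0.0, 0.0), (60, 6.0, 0.0)]
client_position(path, 30)  # halfway along the segment: (3.0, 0.0)
```

This is where Yal's "saving too little information" pitfall bites: interpolated objects can glide through walls or miss collisions, so the server may still need to send waypoints at every direction change.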
I know the more I simplify it, the more I am going to have to step up my game to fake it, to make it seem like there is more going on. In a loot-and-slow-grind game, I don't want an Anthem situation where people realize that the loot and the grind don't affect the game at all. That would be bad...

I would like the first option to work, as I believe it creates a legitimate experience and will look more real because it would be more real. That being said, it's going to take a lot more processing for the scene, plus some really great design on the sending/receiving side to make sure it all lines up and plays back smoothly.

My specific purpose for this simulation is a multiplayer dungeon-delver game where players on the client click to join the dungeon delve, and the server then crunches it all out and spits a playback file back to the client. Each delve will have up to 10 players and up to 20 enemies on the screen at the same time. The players will collect gear and advance their hero alongside the other players, but the actual dungeon delve is all AI-controlled — the player just gets to sit back and see what happens.

Anyhow I will have to see what I can manage to work and make it run "amazing" xD


🍋 *lemon noises*
GMC Elder
I just lost a pretty long post to a BSOD, so I'll recap this really quickly.
  • Split code up into "decisions" and "execution" phases. Decisions do most of the heavy lifting, then you switch over to a pretty simple execution state like "walk here" or "attack in this direction" which could be just a few lines of code. This spreads out the CPU power needed to do the AI. Usually you don't need to run all the AI logic every step.
  • Execution states should be 100% predictable, so the same decision always has the same result. This makes them easier to replay. (For more unpredictability, the decision-making process can use randomness - as long as you store the results in a static way)
  • You just need to store all the decisions that were taken and in what order for the playback, not any of the logic used to make them.
  • You can optimize the decision phase even more if you have separate states like "explore", "loot everything that's not nailed down", "attack" or "run for your life", so you only run certain checks depending on your mood. You check whether it's time to change the current state every 60 iterations or so, and can have extra "rethink" events for cases like being attacked, or spotting loot nearby but off the path you're currently moving along.
  • A simple "priority list" style AI decision tree would probably be enough for most cases if there's enough objects around (since the players can't focus on all of them at once). Go through actions in descending priority order, run a check to see if you should take them, and if so, do that. For instance "run away" might have a high priority, but only be taken if you're at low HP, there's an enemy nearby, and you're out of healing items.
  • During the simulation phase, you can store all sorts of information to help you make better decisions (e.g. a list of visited areas, so that the "explore" state minimizes pointless backtracking and the "escape" state knows which areas are safe) but you don't need it anymore once the simulation is complete, only the final list of decisions.
  • For more stability (just in case), you could store the current position / orientation along with each decision (or at regular timestamps), and just move the object there before playing out the execution phase again. This solves issues with the simulation getting out of sync if there's rounding errors or the like.
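The decision/execution split plus the priority-list idea might look something like this (Python as a sketch; every name here is illustrative, not from any engine):

```python
# Decision phase: walk a priority-ordered list of (action, check) pairs;
# the first check that passes wins. Execution is a dumb, deterministic
# state the unit stays in until the next "rethink".

def decide(unit):
    """Priority-list AI: highest-priority action whose check passes."""
    priorities = [
        ("flee",    lambda u: u["hp"] < 5 and u["enemy_near"]),
        ("attack",  lambda u: u["enemy_near"]),
        ("explore", lambda u: True),   # default fallback
    ]
    for action, check in priorities:
        if check(unit):
            return action

def run_battle(units, ticks, rethink_every=60):
    """Log only the decisions -- playback needs the choices, not the logic."""
    decision_log = []
    for t in range(ticks):
        for u in units:
            if t % rethink_every == 0:      # cheap: full AI runs rarely
                u["state"] = decide(u)
                decision_log.append((t, u["id"], u["state"]))
            # the execution phase would be a few deterministic lines
            # per state ("walk here", "attack in this direction")
    return decision_log

units = [{"id": 1, "hp": 3,  "enemy_near": True,  "state": None},
         {"id": 2, "hp": 20, "enemy_near": False, "state": None}]
log = run_battle(units, 120)
# log == [(0, 1, 'flee'), (0, 2, 'explore'), (60, 1, 'flee'), (60, 2, 'explore')]
```

Because the execution states are deterministic, replaying `decision_log` on the client reproduces the whole battle — which is exactly why only the decisions (and perhaps periodic position corrections) need to go over the wire.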