
 Integration Options

GMWolf

aka fel666
GM's built-in speed, direction, v/hspeed, gravity, etc. variables are great for prototyping something quick. But quite often you find you would like to add delta timing, or change the integration method.

I suggest two new functions should be added to change the way the GM engine deals with v/hspeed, and gravity:

integration_enable_delta(enable, [target speed]): This function would allow you to specify if you want GM to use delta timing when calculating new x, y, vspeed, and hspeed values.
The second argument could be optional, allowing the user to define the target room speed they are looking for (so if the target is 60fps, but the game runs at 120fps, then the delta value used would be 0.5). If not set, the default value would be the current room/game speed.

integration_set_method(method): This function would allow you to change between Euler, midpoint, Verlet, or RK4 integration. Granted, the last two are a little overkill for most uses, but being able to select midpoint would be very useful when dealing with projectiles.


Including these changes would allow developers to much more easily make the change from fixed timestep to delta timing. It would also make the use of hspeed and vspeed a little more relevant.
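To make the integration-method part of the proposal concrete, here is a rough Python sketch (illustrative names only, not a real GM API) showing why midpoint beats explicit Euler for projectile motion at a coarse timestep:

```python
# Simulates 1 second of free fall (g = 10 px/s^2) at a coarse 5 steps/s,
# comparing explicit Euler against the midpoint method.

G = 10.0      # gravity, px/s^2
DT = 0.2      # coarse timestep, s
STEPS = 5     # 1 second total

def euler():
    y, vy = 0.0, 0.0
    for _ in range(STEPS):
        y += vy * DT       # position uses start-of-step velocity
        vy += G * DT
    return y

def midpoint():
    y, vy = 0.0, 0.0
    for _ in range(STEPS):
        vy_mid = vy + G * (DT / 2)   # velocity at the middle of the step
        y += vy_mid * DT
        vy += G * DT
    return y

exact = 0.5 * G * 1.0**2             # analytic: 5.0 px
print(euler(), midpoint(), exact)    # Euler undershoots; midpoint recovers
                                     # the analytic 5.0 (within float error)
```

Under constant acceleration, midpoint is exact regardless of step size, which is why it matters for gravity-driven projectiles.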
 

csanyk

Member
I agree, it would be a major improvement if GMS's engine defaulted to having a built-in delta time that adjusted everything dependent on framerate/speed so that the developer never has to even think about it.

For backward compatibility's sake, I can accept making this "integration", as you call it, an option that you have to turn on in your project's options. I would set it to default "on" for any new projects (you can disable it if you really want), "off" for projects imported from 1.4, and for projects imported from 2.0 leave it set as is.

I would love love love love love this if they did it. Delta timing is tricky to do well, and very hard to retrofit on an existing project that doesn't yet use it, but if it were built in and available "for free" it would be a huge win.
 

Yal

šŸ§ *penguin noises*
GMC Elder
IMO the problem is that there's no real "one size fits all" delta timing... effects and stuff could ignore collision checks halfway-through, but bullets might not, and there's no way for a built-in system to KNOW whether it can just move something several steps at once or must do extra collision checking to ensure nothing messes up.
 

GMWolf

aka fel666
IMO the problem is that there's no real "one size fits all" delta timing... effects and stuff could ignore collision checks halfway-through, but bullets might not, and there's no way for a built-in system to KNOW whether it can just move something several steps at once or must do extra collision checking to ensure nothing messes up.
The current system will already clip through objects when stepping over large distances...
Perhaps we should have more collision options too, then: like the bullet option in Box2D?

Perhaps the option to choose a number of substeps would be nice too
 

Mike

nobody important
GMC Elder
Unless a game is written with this in mind, it's impossible to just "bolt on". All it would take is a delta movement that steps a bullet over a player or baddie, and unless the game was checking vectors it would just miss. There are countless other cases like this. On top of this developers are constantly putting processing code into the rendering/draw events, and this also breaks this kind of thing.

Simply put... a simple "on" button is never going to appear.
 

GMWolf

aka fel666
Unless a game is written with this in mind, it's impossible to just "bolt on". All it would take is a delta movement that steps a bullet over a player or baddie, and unless the game was checking vectors it would just miss. There are countless other cases like this. On top of this developers are constantly putting processing code into the rendering/draw events, and this also breaks this kind of thing.

Simply put... a simple "on" button is never going to appear.
If I'm not mistaken, the current built-in movement code in GM will already step over objects, without checking vectors. And developers are already breaking the current system by updating variables in the draw event.

By delta timing, all I mean is that instead of GM doing y += vspeed; vspeed += gravity, it would do y += vspeed * adjusted_delta_time and vspeed += gravity * adjusted_delta_time.
Yes, it will not work for all situations, just like the current system does not work for all situations.

I'm not suggesting a simple 'on' button, but a set of functions to tell GM which methods to use.
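A rough Python sketch of the update being described, with the "adjusted" delta expressed in steps rather than seconds (dt_adj = frame_time * target_fps; all names are illustrative, not a real GM API):

```python
# One step of the proposed delta-scaled movement update.
# dt_adj is 1.0 when running at the target fps, 0.5 at double the target.

TARGET_FPS = 60

def step(y, vspeed, gravity, frame_time):
    dt_adj = frame_time * TARGET_FPS
    y += vspeed * dt_adj
    vspeed += gravity * dt_adj
    return y, vspeed

# Two 120 fps frames cover roughly the same ground as one 60 fps frame:
y1, v1 = step(0.0, 4.0, 0.5, 1 / 60)    # one frame at target speed
y2, v2 = step(0.0, 4.0, 0.5, 1 / 120)
y2, v2 = step(y2, v2, 0.5, 1 / 120)
print(y1, y2)   # 4.0 vs 4.125 - close, but not identical (Euler error)
```

Note the two results differ slightly: that residual integration error is exactly the kind of inconsistency discussed later in the thread.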
 

Jarmar Games

Guest
Simply put... a simple "on" button is never going to appear.
Why can't you just offer support for frame-based timing? I've been asking for this since I started using GMS. I've tried to code it properly, but like many have stated, it's tricky to do. So why not offer it? Many people have requested it. I was told by support to post in the forums to see if there was enough support for it to be considered. I think it is safe to assume that many would welcome it. Just curious why it's not offered already?
 

Mike

nobody important
GMC Elder
If I'm not mistaken, the current built-in movement code in GM will already step over objects, without checking vectors. And developers are already breaking the current system by updating variables in the draw event.
By delta timing, all I mean is that instead of GM doing y += vspeed; vspeed += gravity, it would do y += vspeed * adjusted_delta_time and vspeed += gravity * adjusted_delta_time.
Yes, it will not work for all situations, just like the current system does not work for all situations.
I'm not suggesting a simple 'on' button, but a set of functions to tell GM which methods to use.
This is my point... currently someone codes normal collision checking, and that will always work. For example: tiles of 16x16, but the player will only ever move 10 pixels max. They will never go through.
But with delta time, there might be a case where the game slows down and "jumps" more than a frame's worth, skipping right through.

You have to write your code so it'll work like that. You can't just switch it on.
 

GMWolf

aka fel666
This is my point... currently someone codes normal collision checking, and that will always work. For example: tiles of 16x16, but the player will only ever move 10 pixels max. They will never go through.
But with delta time, there might be a case where the game slows down and "jumps" more than a frame's worth, skipping right through.

You have to write your code so it'll work like that. You can't just switch it on.
I do see now how delta timing can lead to inconsistencies.
But given how it is a standard in games now, I guess I'll just have to live with half of the instance variables being a little useless :)
 

GMWolf

aka fel666
feel free to work out a 100% consistent method, and I'm all ears :)
Well, there is a way:
We could supply a substep length: not in time, but in pixels.
If it is set to 20, and an object has to move 100 pixels in one step, then it will break it up into 5 substeps.
OK, not a superb solution, but it works :)

I just want to ask: the current hspeed and vspeed system is so rarely used now, is it just here for legacy reasons? Because I can't think of many times you would want to use it, either because of delta timing, other kinds of collisions (like tilemaps), etc.
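The pixel-substep idea sketched in Python (the helper and its signature are hypothetical, not a real GM function):

```python
# Cap how far an instance may travel per collision check by breaking a
# large move into substeps no longer than max_step pixels.
import math

def move_with_substeps(x, y, dx, dy, max_step, collides):
    """Move by (dx, dy) in substeps of at most max_step pixels,
    checking for a collision after each substep."""
    dist = math.hypot(dx, dy)
    n = max(1, math.ceil(dist / max_step))
    for _ in range(n):
        x += dx / n
        y += dy / n
        if collides(x, y):
            return x, y, True   # stop at the substep that hit something
    return x, y, False

# A 100 px move with max_step=20 becomes 5 checks of 20 px each, so a
# 16 px wall at x=50..66 is no longer skipped over:
wall = lambda x, y: 50 <= x <= 66
print(move_with_substeps(0, 0, 100, 0, 20, wall))   # stops at x=60
```

Without substepping, the same 100 px move would jump from x=0 straight to x=100 and tunnel through the wall.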
 

csanyk

Member
Well, there is a way:
We could supply a substep length: not in time, but in pixels.
If it is set to 20, and an object has to move 100 pixels in one step, then it will break it up into 5 substeps.
OK, not a superb solution, but it works :)

I just want to ask: the current hspeed and vspeed system is so rarely used now, is it just here for legacy reasons? Because I can't think of many times you would want to use it, either because of delta timing, other kinds of collisions (like tilemaps), etc.

My "technique" for this has been to crank the game speed way up, and make everything move very slowly per-step.

So, rather than having a 16px object moving 30px/step in a 30fps room, which would result in skipping over objects, I have it move 15px/step in a 60fps room. If I needed it to move 60px/step (which is VERY fast) I would need a 120fps room... and not all monitors support 120Hz refresh... not to mention it's more challenging to code a game to stay above 120fps, so there are definitely limits to this approach, but it's very reliable if you can keep the performance at the level needed.

Edit: Hmm, now that I think about it... I guess another way to accomplish the same thing would be to scale up all my sprites x2, so that that 16px object would be 32px. Then I could handle moving it at 30px/step in a 30fps room. And then I could use views to scale the game back down to its original size... assuming I double up the sprites and halve the view, and disable anti-aliasing, it might scale without artifacting, and be just fine! I will need to experiment!
 

Mike

nobody important
GMC Elder
Well, there is a way:
We could supply a substep length: not in time, but in pixels.
If it is set to 20, and an object has to move 100 pixels in one step, then it will break it up into 5 substeps.
OK, not a superb solution, but it works :)
See the 100% part :D

If it doesn't work 100%, then we will get hounded with issues that folk think are bugs. Has to be a complete solution.

I just want to ask: the current hspeed and vspeed system is so rarely used now, is it just here for legacy reasons? Because I can't think of many times you would want to use it, either because of delta timing, other kinds of collisions (like tilemaps), etc.
Mainly for beginners, and for DnD.... Although I do know a load of folk who love to use them....
 

csanyk

Member
I just want to ask: the current hspeed and vspeed system is so rarely used now, is it just here for legacy reasons? Because I can't think of many times you would want to use it, either because of delta timing, other kinds of collisions (like tilemaps), etc.
Mainly for beginners, and for DnD.... Although I do know a load of folk who love to use them....
I use them, when I can. Especially for Marketplace assets; I prefer to do things in a way that doesn't force people who use my assets to do everything my custom way. I like them to integrate with the existing built-in engine, and require as little special knowledge as possible. Of course, wherever possible I try to make my assets independent of the built-in variables, so you can use the asset whether you use them or not.
 

GMWolf

aka fel666
feel free to work out a 100% consistent method, and I'm all ears :)
Another option is the way Construct (and I believe a couple of other systems) do it:
Limit dt so it is never larger than one frame at the base fps.
So essentially, if you set the target fps to 30, but the game runs at 60, then no problem: dt gets smaller and everything works as expected.

If, however, the game runs at 15fps, dt stays at what it would be at 30fps, slowing the whole game down.

Probably a good balance between all the different systems.
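The clamp described above is tiny, but a sketch makes the asymmetry obvious (assumed Python, illustrative names):

```python
# Construct-style dt clamp: dt may shrink below one target frame (game
# running faster than target) but never grow above it (the game slows
# down instead of objects teleporting through things).

TARGET_FPS = 30
MAX_DT = 1 / TARGET_FPS

def clamped_dt(frame_time):
    return min(frame_time, MAX_DT)

print(clamped_dt(1 / 60))   # 1/60 - faster than target, dt shrinks
print(clamped_dt(1 / 15))   # 1/30 - slower than target, dt is clamped
```

The upper bound is what prevents the "jumps more than a frame's worth" tunnelling problem Mike raised, at the cost of visible slowdown on weak machines.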
 

csanyk

Member
Another option is the way Construct (and I believe a couple of other systems) do it:
Limit dt so it is never larger than one frame at the base fps.
So essentially, if you set the target fps to 30, but the game runs at 60, then no problem: dt gets smaller and everything works as expected.

If, however, the game runs at 15fps, dt stays at what it would be at 30fps, slowing the whole game down.

Probably a good balance between all the different systems.
I think that the problem is when fps goes so low that the adjusted speeds result in skipped collisions. There shouldn't be a problem when fps exceeds target frame rate, unless I'm mistaken.
 

GMWolf

aka fel666
I think that the problem is when fps goes so low that the adjusted speeds result in skipped collisions. There shouldn't be a problem when fps exceeds target frame rate, unless I'm mistaken.
That's why I suggest that when fps drops below the target fps, the delta time is fixed to the target fps dt.
That's how many other systems do it.
 

Arjailer

Guest
That's why I suggest that when fps drops below the target fps, the delta time is fixed to the target fps dt.
That's how many other systems do it.
That's certainly how I did it in the game I wrote years ago (in Blitz3D) to avoid exactly the problems Mike mentioned. Not sure it's a good universal solution though.
 

csanyk

Member
That's why I suggest that when fps drops below the target fps, the delta time is fixed to the target fps dt.
That's how many other systems do it.
I'm not following... when fps dips below target, that's when delta time needs to start scaling speeds in order to smooth out the lag. If it is fixed to the target fps dt, that defeats the entire point of using delta timing, which is to correct the apparent speed when fps goes below target. I'm pretty sure other engines also use delta timing to keep a game from running too fast, but GM has always fixed performance by capping it to room speed even if fps_real is faster. It might help clear things up if we had an example project that we could both reference.
 

GMWolf

aka fel666
I'm not following... when fps dips below target, that's when delta time needs to start scaling speeds in order to smooth out the lag. If it is fixed to the target fps dt, that defeats the entire point of using delta timing, which is to correct the apparent speed when fps goes below target. I'm pretty sure other engines also use delta timing to keep a game from running too fast, but GM has always fixed performance by capping it to room speed even if fps_real is faster. It might help clear things up if we had an example project that we could both reference.
Fps_real is just a metric of how fast the game theoretically *could* run, right?

Also, delta timing for games running faster than target speed is quite nice to have:
If the game was written for 30fps, delta timing means the game could be played at 45, 60, or even 120fps if the player has a good PC.
Sure, it wouldn't fix games running lower than target - those would just have to run slower, as if no dt was used - but that's what minimum requirements are for ^^


I think this strikes a great balance: better fps/accuracy for higher end PCs. Slower running game for underpowered machines.
 

csanyk

Member
Fps_real is just a metric of how fast the game theoretically *could* run, right?
https://docs.yoyogames.com/source/dadiospice/002_reference/debugging/fps.html

In GameMaker: Studio there are two main ways that can be used to tell the speed at which your game runs. The room_speed (as specified in the room editor) and the fps (frames per second). These values are often confused, but basically one is the number of game steps that GameMaker: Studio is supposed to be completing in a second, while the other (the fps) is the number of CPU steps that GameMaker: Studio is actually completing in a second up to a maximum value of the room speed itself. To get the true fps, ie. the actual number of cpu steps per game step, use the fps_real variable.

This read-only variable returns the current fps as an integer value. Please note that the function will only update once every step of your game and so may appear to "jump" from one value to another, but this is quite normal.
https://docs.yoyogames.com/source/dadiospice/002_reference/debugging/fps_real.html

In GameMaker: Studio there are two main ways that can be used to tell the speed at which your game runs. The room_speed (as specified in the room editor) and the fps (frames per second). These values are often confused, but basically one is the number of game steps that GameMaker: Studio is supposed to be completing in a second (room speed), while the other is the number of CPU steps that GameMaker: Studio is actually completing in a second (the real fps), and this value is generally much higher than the room speed, but will drop as your game gets more complex and uses more processing power to maintain the set room speed.

This read-only variable returns the current fps as an integer value. Please note that the function will only update once every step of your game and so may appear to "jump" from one value to another, but this is quite normal.
This is from GMS1's manual, so still references room_speed... and could be a bit clearer, I think, but as I understand it, room_speed (or game speed) is the target fps the engine tries to run at. fps is the actual fps achieved by the engine, capped to not exceed room_speed/game speed, and fps_real is the actual fps the engine is running at, unbound by room_speed/game speed.

What I still don't get about that is, if the engine really is running at 3000fps (or whatever) what is it doing with all those extra frame calculations that are happening between steps in the engine? It's not running the game faster. I think it seems to be saying "I calculated the last step of the engine so quickly that I could have done it 3000 times in a second. I didn't, because I only need to hit 30. But based on the time it took me to run the last step, I could have done it 2999 more times."

I'd love to know from someone in the know if that's correct.

Also, delta timing for games running faster than target speed is quite nice to have:
If the game was written for 30fps, delta timing means the game could be played at 45, 60, or even 120fps if the player has a good PC.
Sure, it wouldn't fix games running lower than target - those would just have to run slower, as if no dt was used - but that's what minimum requirements are for ^^

I think this strikes a great balance: better fps/accuracy for higher end PCs. Slower running game for underpowered machines.
Yes, but GMS caps performance to room_speed, and always has. Unless you want to set room_speed to 9999, and use delta time for everything, in GMS there's no such thing as delta timing to smooth out frames happening faster than room_speed. Capping performance to room_speed already does that for you.

It's true that back in the DOS days, games were coded so close to the metal that a higher clock speed CPU meant faster performance, but in a way that wasn't intended, and it made games unplayable when you upgraded from an 8086 to an 80286 or 486. That's why really old PCs used to have "turbo buttons" on the front, so you could use the full speed of the CPU when needed, but could downclock to play old DOS games at their intended framerate, because it was keyed to the speed at which the CPU ran. Later games were programmed in such a way that the game wouldn't run faster than its intended frame rate even if the hardware could handle doing so. Whether they framerate-capped or delta-timed above target fps, I have no idea -- probably it varied with the implementation.
 

GMWolf

aka fel666
Yes, but GMS caps performance to room_speed, and always has. Unless you want to set room_speed to 9999, and use delta time for everything, in GMS there's no such thing as delta timing to smooth out frames happening faster than room_speed. Capping performance to room_speed already does that for you.
That's why I suggested that when setting up delta timing with the functions I proposed, you could set the target fps.
So you could have the room speed set at 9999, but set the target fps at 30.
 

csanyk

Member
That's why I suggested that when setting up delta timing with the functions I proposed, you could set the target fps.
So you could have the room speed set at 9999, but set the target fps at 30.
That's what room_speed is!
 

GMWolf

aka fel666
That's what room_speed is!
No. Room speed is the fps cap.
Target fps would affect the way delta time is calculated.
As I stated above: if your target fps is 60, but the current fps is 120, then the dt would be 0.5: move half the distance, but twice as often.
Target fps is used not as a cap, but to keep your velocities and accelerations in pixels per step, rather than pixels per second.
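The distinction is easier to see as code: this step-scaled delta keeps speeds in pixels per (target) step and rescales them for the actual frame rate (a sketch in Python, illustrative names only):

```python
# dt expressed as a ratio of target to actual fps, so velocities stay
# in pixels-per-step units written against the target frame rate.

def step_dt(target_fps, current_fps):
    return target_fps / current_fps

print(step_dt(60, 120))   # 0.5: move half the distance, twice as often
print(step_dt(60, 60))    # 1.0: identical to fixed-step behaviour
print(step_dt(60, 45))    # ~1.33: longer frames, bigger moves
```

A cap only limits how often steps happen; this ratio is what keeps on-screen speed constant as the step rate varies.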
 

Jarmar Games

Guest
Back in the 2001-2002 time frame (back when you had to just do it yourself, hehe), the solution to timing for my game library came down to this:


Code:
{ --- TPGTimer -------------------------------------------------------------- }
TPGTimer = class(TPGObject)
  private
    function GetTickCountPriv: Int64;
  protected
    FPCAvail: Boolean;
    FPCFreq: Int64;
    FCurTime: Int64;
    FLastTime: Int64;
    FDesiredFPS: Single;
    FMinElapsedTime: Single;
    FMaxElapsedTime: Single;
    FElapsedTimeScale: Single;
    FFPSTimeScale: Single;
    FFPSElapsedTime: Single;
    FElapsedTime: Single;
    FFrameRate: Cardinal;
    FFrameCount: Cardinal;
    //FTimer: Single;
  public
    constructor Create; override;
    destructor  Destroy; override;

    procedure Init(aDesiredFPS, aMaxElapsedTime: Single); virtual;
    procedure Clear;
    procedure Update;
    function  FrameElapsed(var aTimer: Single; aFrames: Single): Boolean;
    procedure ResetFrameElapsed(var aTimer: Single; aFrame: Single);
    function  FrameSpeed(var aTimer: Single; aSpeedFPS: Single): Boolean;
    function  ResetFrameSpeed(var aTimer: Single; aSpeedFPS: Single): Single;
  public
    function  GetTickCount: Cardinal;
    property  TickCount: Cardinal read GetTickCount;

    function  GetElapsedTime: Single;
    property  ElapsedTime: Single read GetElapsedTime;

    function  GetFrameRate: Cardinal;
    property  FrameRate: Cardinal read GetFrameRate;

    function  GetDesiredFPS: Single;
    property  DesiredFPS: Single read GetDesiredFPS;
  end;

{ --- TPGTimer -------------------------------------------------------------- }
function TPGTimer.GetTickCountPriv: Int64;
var
  Ticks: Int64;
begin
  if FPCAvail then
    begin
      QueryPerformanceCounter(Ticks);
    end
  else
    begin
      timeBeginPeriod(1);
      Ticks := timeGetTime();
      timeEndPeriod(1);
    end;

  Result := Ticks;
end;

constructor TPGTimer.Create;
begin
  inherited;
  Init(35.0, 2.0);
end;

destructor TPGTimer.Destroy;
begin
  inherited;
end;

procedure TPGTimer.Clear;
begin

  FFPSElapsedTime := 1;
  FLastTime := 1000;
  Update;
end;

function TPGTimer.GetTickCount: Cardinal;
var
  Ticks: Int64;
  Ms   : Cardinal;
begin

  if FPCAvail then
    begin
      QueryPerformanceCounter(Ticks);
      Ms := (Ticks * 1000) div FPCFreq;
    end
  else
    begin
      timeBeginPeriod(1);
      Ms := timeGetTime();
      timeEndPeriod(1);
    end;

  Result := Ms;
end;

procedure TPGTimer.Init(aDesiredFPS, aMaxElapsedTime: Single);
begin

  // check for performance counter
  FPCAvail := QueryPerformanceFrequency(FPCFreq);
  if FPCAvail then
    begin
      QueryPerformanceFrequency(FLastTime);
    end
  else
    begin
      FLastTime := 1000;
    end;
  FDesiredFPS := aDesiredFPS;
  FMinElapsedTime := 0.0;
  FMaxElapsedTime := aMaxElapsedTime;
  if FMaxElapsedTime < 1 then
    FMaxElapsedTime := 1
  else if FMaxElapsedTime > 1000 then
    FMaxElapsedTime := 1000;

  FElapsedTimeScale := FDesiredFPS / FLastTime;
  FFPSTimeScale := 1.0 / FLastTime;
end;

procedure TPGTimer.Update;
begin

  // calc elapsed time
  FCurTime := GetTickCountPriv;
  FElapsedTime := (FCurTime - FLastTime) * FElapsedTimeScale;
  if FElapsedTime < FMinElapsedTime then
    FElapsedTime := FMinElapsedTime
  else if FElapsedTime > FMaxElapsedTime
    then FElapsedTime := FMaxElapsedTime;

  // calc frame rate
  Inc(FFrameCount);
  FFPSElapsedTime := FFPSElapsedTime + ((FCurTime - FLastTime) * FFPSTimeScale);

  if FFPSElapsedTime >= 1 then
  begin
    FFPSElapsedTime := 0;
    FFrameRate := FFrameCount;
    FFrameCount := 0;
  end;

  FLastTime := FCurTime;
end;

function TPGTimer.GetElapsedTime: Single;
begin

  Result := FElapsedTime;
end;

function TPGTimer.GetFrameRate: Cardinal;
begin

  Result := FFrameRate;
end;

function TPGTimer.GetDesiredFPS: Single;
begin

  Result := FDesiredFPS;
end;

function TPGTimer.FrameElapsed(var aTimer: Single; aFrames: Single): Boolean;
begin

  Result := False;
  aTimer := aTimer + (1.0*FElapsedTime);
  if aTimer > aFrames then
  begin
    aTimer := 0;
    Result := True;
  end;
end;

procedure TPGTimer.ResetFrameElapsed(var aTimer: Single; aFrame: Single);
begin

  aTimer := aFrame + 1;
end;

function TPGTimer.FrameSpeed(var aTimer: Single; aSpeedFPS: Single): Boolean;
var
  ScaleTime: Single;
begin

  Result := False;
  aTimer := aTimer + FElapsedTime;
  ScaleTime := (FDesiredFPS / aSpeedFPS);
  if aTimer > ScaleTime then
  begin
    aTimer := aTimer - ScaleTime;
    Result := True;
  end;
end;

function TPGTimer.ResetFrameSpeed(var aTimer: Single; aSpeedFPS: Single): Single;
begin

  Result := (FDesiredFPS / aSpeedFPS);
  aTimer := Result;
end;

Here is my game loop:

Code:
procedure TPGGame.UpdateSimulation;
begin
  PG.ProcessMessages;

  if not PG.RenderDevice.Ready then
  begin
    if FRenderOnLostFocus = False then
    begin
      Sleep(55);
      Exit;
    end;

    if PG.RenderDevice.GetWindowed() = False then
    begin
      Sleep(55);
      Exit;
    end;

  end;

  PG.Input.Update;
  PG.Input.GetMousePosAbs(FMousePos, False);
  PG.Timer.Update;
  FElapsedTime := PG.Timer.GetElapsedTime;
  UpdateInput(FElapsedTime);
  ProcessTerminate;
  UpdateFrame(FElapsedTime);
  ClearFrame;
  if PG.RenderDevice.StartFrame() then
  begin
    RenderFrame;
    RenderHud;
    RenderCursor;
    ProcessScreenshot;
    PG.RenderDevice.EndFrame;
  end;
  PG.RenderDevice.ShowFrame;
end;

UpdateFrame would be passed the delta time in FElapsedTime. This value is based on the DesiredFPS value (30, 60 fps). The timer code has support for when the delta value is too high. I just ran a 14-year-old game using DirectX and it still runs OK on modern hardware using this frame-based timing method. I posted the code because I think you can see what's going on better than trying to explain it.

Fps would be your desired frame rate, with another variable, max_frame_elapsed, to control high delta spikes. For physics, maybe add a Fixed Step event so your physics code can go in that event, maybe?

I just recorded a short gameplay video of my old game (I hope to redo it soon in GMS2). Notice at the start it's real slow (the hard drive was grinding for a moment, grrr), but the timing code kept the simulation updating. Then it gets fast. The simulation is locked to 30 fps while the game loop is allowed to run as fast as possible. They are decoupled. Notice that even at the start, when it's running slower, the simulation is still silky smooth. Thanks to frame-based timing.

 

csanyk

Member
No. Room speed is the fps cap.
Target fps would affect the way delta time is calculated.
In GM, isn't room_speed both the target AND the cap? As I've always understood it, if room_speed is 30, then the runtime tries to run the game at 30fps. If it can run higher than 30, it will stay capped to 30, because it is hitting its target, and is trying to do that. If it can't hit 30, then it will run as fast as it can, albeit lower than target.

As I stated above: if your target fps is 60, but the current fps is 120, then the dt would be 0.5: move half the distance, but twice as often.
Target fps is used not as a cap, but to keep your velocities and accelerations in pixels per step, rather than pixels per second.
I don't get what you're saying here at all. I've had plenty of games where my room_speed was set to 30 or 60, and fps was several hundred. I've never had to use delta timing to keep my instances moving at the correct pixels per step to reflect the speed desired at 30/60 fps because the game was running at 600 fps. I've always seen my games run at room_speed, and no higher than that. room_speed is the target, and if the engine achieves target, it caps itself so it doesn't run faster than desired automatically. No special solution needed on my part involving delta time.
 

GMWolf

aka fel666
GM does not have a target fps, just a cap. It will run as fast as possible, up until it reaches the cap.
The idea is to introduce the notion of a target fps such that you can code your game assuming it will run at 30fps, but remove the cap and have delta timing keep objects moving at the right speed.

I think you may have to read up on how and when delta timing is used: it seems like you are saying it is a useless notion, when all modern games use it.

When your room speed is set at 30fps, but fps_real shows 600, the step event and draw event still only runs 30 times a second. Delta timing would allow the step and draw event to run uncapped, but still have your game objects moving at the correct speed.

So if you have an object that should move at 1 meter per second, it doesn't matter if your computer runs the game at 30, 45 or even 60 fps, delta timing will ensure your object moves at 1 meter per second.

For reference, open a commercial game (like CS:GO) and tweak the settings so that it will run at ~30 fps. Then tweak them so that it will run at a higher fps, like 45 or 60. Notice how everything moves at the same speed. You could even run that game at 200fps on very low graphics. Still, you move at the same speed.
Notice how those fps are true fps, where both rendering and game logic is done.

In GM, fps_real = 600 does NOT mean your step event code runs 600 times a second.

From what I gather, fps_real looks at the CPU time spent in a frame, and extrapolates it to find how fast the game could run uncapped.
Fps is the actual frequency at which your game is calculated and rendered.
 

csanyk

Member
GM does not have a target fps, just a cap. It will run as fast as possible, up until it reaches the cap.
The idea is to introduce the notion of a target fps such that you can code your game assuming it will run at 30fps, but remove the cap and have delta timing keep objects moving at the right speed.
And WHY does it stop when it reaches the cap? Because that is the DESIRED frame rate, the target fps that it is intended to achieve. We could say cap and target are synonyms. The engine tries to run at room_speed fps, and will go as fast as it can go, but no faster than that speed, because that is the target speed that it is capped at.

I think you may have to read up on how and when delta timing is used: it seems like you are saying it is a useless notion, when all modern games use it.
Not at all. Delta timing is very useful, and I was saying earlier that it'd be nice if YYG could have it be built in to the engine so that developers don't have to implement it on their own. @Mike said that's basically impossible, because introducing adjustments via delta-timing can result in problems and they don't want a delta-time solution unless it's a 100% solution.
When your room speed is set at 30fps, but fps_real shows 600, the step event and draw event still only runs 30 times a second. Delta timing would allow the step and draw event to run uncapped, but still have your game objects moving at the correct speed.
You're talking about a game done in GMS, right? How does one "uncap" fps in GMS? Any game has a room_speed/game_speed. You can set that to a high integer, so that effectively there's no hardware that can achieve that fps, but there's still a cap. And yes, delta-timing is used to adjust distances when the game is running below the target fps. If you want to move an object 10 pixels in one step, and one step to happen every 1/30th of a second, but the game is running slow for a step, so that one step only happens every 1/15th of a second, delta time calculates the difference and adjusts the movement to 20 pixels in that step. And then the next step, if the computer can calculate everything so that this step takes 1/20th of a second, delta time adjusts and it moves 16 pixels. The player perceives that the game runs smoothly even though fps is varying and lower than target.

So if you have an object that should move at 1 meter per second, it doesn't matter if your computer runs the game at 30, 45 or even 60 fps, delta timing will ensure your object moves at 1 meter per second.
Right, that would be true if you were talking about an engine that ran as fast as possible, with no cap. But GMS is not such an engine.

For reference, open a commercial game (like CS:GO) and tweak the setting so that it will run at ~30 fps. Then tweak it so that it will run at a higher fps, like 45 or 60. Notice how everything moves at the same speed. You could even run that game at 200fps on very low graphics. Still, you move at the same speed.
Notice how those fps are true fps, where both rendering and game logic is done.
Right, and CS:GO wasn't developed in GMS, but another engine.

In GM, fps_real = 600 does NOT mean your step event code runs 600 times a second.

From what I gather, fps_real looks at the CPU time spent on a frame, and extrapolates from it to find how fast the game would run uncapped.
That's basically what I said. fps_real is a readonly variable that returns the actual fps that the engine is running at. The engine does not actually compute "extra" frames when running faster than the target (or capped as you prefer to call it) speed; but the CPU calculates how much time the last frame took to generate, and then reports how many times that time fits into one second.
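In other words, fps_real is essentially the reciprocal of the last frame's duration. A minimal sketch of that idea (the function name is made up for illustration):

```python
# fps_real idea: how many times the last frame's duration fits into one second.
def fps_real(last_frame_seconds):
    return 1.0 / last_frame_seconds

# A frame that took 1/600 of a second extrapolates to ~600 fps,
# even though the engine never actually computed 600 frames.
print(fps_real(1 / 600))
```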

Fps is the actual frequency at which your game is calculated and rendered.
I haven't used fps in GMS2 yet, but in GMS1, it is a readonly variable that returns the actual fps that the game is running at, clamped to room_speed on the top end. So if you're running a game in a room_speed=30 room, and the game is running slower, then fps will report lower than room_speed, and if you're not having performance problems, it will report the capped value, which is room_speed, which is the target you told the engine to try to hit.

I don't think we're disagreeing, but it does seem like you've been misunderstanding what I've been saying. I'm not sure why, but hopefully this clarifies.
 

GMWolf

aka fel666
And WHY does it stop when it reaches the cap? Because that is the DESIRED frame rate, the target fps that it is intended to achieve. We could say cap and target are synonyms. The engine tries to run at room_speed fps, and will go as fast as it can go, but no faster than that speed, because that is the target speed that it is capped at.
The idea of a target fps is not that it's the ideal fps (in theory, more fps is better); the target fps is the rate your values have been tuned to work with.
For example, you may want to set your target fps to 30.
That means you assume the game will run at 30 frames per second.
If you set hspeed to 10, you are saying you want x to increase by 10 every expected frame -> it increases by 300 every second.
However, if the game happens to run at 60 fps at any given moment, x will be increased by 5 during each frame -> still 300 every second.

As you can see, the target fps is a value you use to determine the time scale of your values. It does not influence how fast the game runs.
It is used to calculate the dt value, using the formula dt = target_fps / (1 / delta_time), where delta_time is the time since the last frame in seconds.
This allows the user to keep using familiar values in pixels per frame, rather than pixels per second.
Think of those 'frames' as virtual frames.
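The formula above simplifies to dt = target_fps * delta_time. A minimal Python sketch of the 'virtual frames' idea (variable names are illustrative):

```python
# dt = target_fps / (1 / delta_time), i.e. target_fps * delta_time:
# how many "virtual frames" of the target rate the last real frame spans.
TARGET_FPS = 30

def dt_multiplier(delta_time):
    return TARGET_FPS * delta_time

x, hspeed = 0.0, 10.0                  # 10 px per virtual frame = 300 px/s
x += hspeed * dt_multiplier(1 / 30)    # running at target: moves 10 px
x += hspeed * dt_multiplier(1 / 60)    # running at 60 fps: moves 5 px
print(x)                               # ~15.0 px after the two frames
```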

It could also be used to determine the maximum time step. If fps dips below the target fps, the time step stays capped so that objects don't step over collisions. The game no longer benefits from delta timing at that point, but it stays stable. A few games do this, though it is not common, since a game tends to become unplayable at very low fps anyway.
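Capping the time step as described could look like this (a sketch under the same assumptions; MAX_DT is a hypothetical tuning value):

```python
# Clamp the delta multiplier so a slow frame can't make objects
# skip over collisions: below target fps the simulation slows down
# instead of taking one huge integration step.
TARGET_FPS = 30
MAX_DT = 1.0   # never integrate more than one target-rate frame at once

def clamped_dt(delta_time):
    return min(TARGET_FPS * delta_time, MAX_DT)

print(clamped_dt(1 / 60))   # fast frame: ~0.5
print(clamped_dt(1 / 10))   # slow frame: clamped to 1.0 (raw value ~3.0)
```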

Right, that would be true if you were talking about an engine that ran as fast as possible, with no cap. But GMS is not such an engine.
So, GM could change? Also, a room speed of 9999 essentially removes the fps cap.
Right, and CS:GO wasn't developed in GMS, but another engine.
And this is why I am suggesting a change be made in GM.
Yes, I understand your point: this is not how GM works. I say: this is how GM could work, let's consider it :)
 
F

foxtails

Guest
Out of curiosity: why not use the timing tools built in to common 2D animation and 3D software? Would a keyed timeline work in this situation, with the usual advanced tools to flatten and ease in and out, for video games as well?
 
J

Jarmar Games

Guest
Your game loop can run as fast as the cpu can push it, yet your simulation will run at a known and predictable rate. They are decoupled.
 
J

Jarmar Games

Guest
I just had a thought... what if the current GMS2 render loop was modified this way: a) no delay on the Draw events, b) control the Step events based on delta_time. So rather than a pause at the end, fire them once enough delta_time has elapsed to match the desired room speed. Would this give us what we want with minimal changes internally?
 

Juju

Member
Hi, so I did the whole converting HLD to 60FPS thing, right?

There's no 100% consistent way of applying delta time automatically for the simple reason that different algorithms integrate (w.r.t. time) in different ways. If the programmer is doing something unusual with any speed value, which they will, then the integration changes. You can get it close most of the time with a naive method but that isn't good enough for a feature that's going to be in all games. You have to take responsibility for it yourself as a programmer.

Other engines support delta time because a lot of the machinery is held at arm's length from the developers. This is not an option in GameMaker in its current configuration.
 
Last edited:
P

Pudsy

Guest
Very good thread! I've come to similar conclusions as the OP in the past, namely that many of the built-in variables (and any processing power GMS expends maintaining them) go to waste if you wish to do anything more complex than fixed-rate, 1 update, 1 draw, repeat.

I think this user has succinctly described a similar solution to the one I am about to suggest & explain in greater detail (AKA "blather on about for much longer than I intended!!!"TM)
I just had a thought... what if the current GMS2 render loop was modified this way: a) no delay on the Draw events, b) control the Step events based on delta_time. So rather than a pause at the end, fire them once enough delta_time has elapsed to match the desired room speed. Would this give us what we want with minimal changes internally?
So, please feel free to try to shoot holes in this suggestion, or help make this work "100%" as requested! :)
The intention is that this would create minimal work on the GMS/YYG side of things, for maximum benefit to us devs, while at the same time maintaining backwards compatibility (so we don't break anybody's old code!) and allowing us to use the built-in variables/logic without having to recreate our own duplicates. Hopefully then we can all benefit from supporting both higher & lower frame rates (than room_speed), with smooth motion, and without any issues around collisions etc.


The basic idea... (take this as your TLDR, as this post may get lengthy!)

This is "render tweening" rather than "delta time", however in many cases the results can turn out to be close enough that it doesn't matter.

Here we stop thinking of using "delta_time" to have "Steps" which vary in duration (affecting speeds & accelerations in all sorts of annoying ways, as alluded to with the mention of integration etc. above!)

Instead, GMS continues to process fixed-length/fixed-rate "Steps" at the defined "room_speed". And we continue to design games with fixed update rates (eg. 30 or 60) which are friendly to all devices in terms processing power, and consistent in terms of collisions/animations & similar.

However, GMS would allow us to decouple the render rate from the update rate.

GMS could (optionally!) 'tween its "Draw" events, by interpolating the positions, speeds, etc. between each game "Step". It would do this based on the current "delta_time" each time it decides to trigger another "Draw" event to render a new frame.

This would allow the game to run at any frame rate (draw_speed), be it slower or faster than the defined room_speed, without affecting the updates/Step logic.

One approach might be if GMS auto-tweens the x,y,hspeed,vspeed,etc. variables (& potentially even Spine animations) appropriately for each "Draw" event. And/or we could write our own Draw event code to override the defaults (as usual) to change any GMS default behaviour if it doesn't suit any particular project, eg. for instances we don't want tweened


The details...

NOTE: Assume that when I mention "Step" or "Draw" events, that also includes all the associated Pre/Post/Begin/End- events, as I think that makes the most sense with this approach.
  1. All (Begin/End-) Step events are still triggered at the defined room_speed, as usual.
  2. Allow us to decouple all the "Step" and "Draw" events by introducing a new GMS variable eg. "draw_speed"
    • By default "draw_speed" would be zero (0) = backwards-compatible behaviour, simply alternate Step & Draw events as usual, limiting everything to the current room_speed, and exhibiting the usual slow down if the engine can't keep up.
    • Any negative value (-1) for "draw_speed" would allow the GMS "Draw" events to be triggered as fast as possible (suitable for eg. vsync disabled, high refresh-rate displays). In between each "Draw" event, GMS checks the timer & processes the required number of "Step" events to make sure we stay aligned (catch up, or avoid getting ahead) with the current "room_speed". Note that this might sometimes require zero Step events to be processed if we're rendering much faster than room_speed, or sometimes require multiple Step events if rendering slower than room_speed.
    • If "draw_speed" is set to any positive value, this would act as a fixed upper limit on the rate that "Draw" events are triggered (suitable for eg. vsync enabled, mobile devices, or capped 30/60/100/120/144Hz to match display). As above, monitor the timer & process the required number of "Step" events between each "Draw" event. But this time, also monitor the timer & rate that "Draw" events are being triggered... and don't allow it to go above the current "draw_speed".
  3. If "draw_speed" is set to anything other than the default (0), then GMS modifies its behaviour during "Draw" events...
    • Before each "Draw" event, save a copy of the current value of all appropriate built-in variables (eg. x, y, hspeed, vspeed, image_angle, etc.)
    • Tween/interpolate the x/y position of every instance based on its current velocity & the "delta time" (how far we are between the previous & next Step events)
      ie. {draw position} = {current position} - ( {current speed} * (1.0 - {delta_time}) )
      [assuming {delta time} = 0.0 to 1.0]
    • Similarly tween all other appropriate built-in variables (hspeed, vspeed, image_angle, image_xscale, image_yscale, etc.)
    • Trigger the GMS "Draw" event as normal, which should hopefully require no/little modification to render each instance at the tweened/interpolated position & orientation.
      Presumably Spine animations would support similar tweening based on interpolating their position within the animations' timelines.
    • After each "Draw" event, restore all the built-in variables to their previously-saved values (so they can be used by the upcoming Step events)
  4. We can override the "Draw" events as usual by defining our own, which would allow us to define more complex/custom behaviour on a per-Object/Instance basis, as required for each project
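The interpolation in step 3 can be sketched as follows (Python for illustration; delta_step is the 0.0-1.0 fraction of the way from the previous Step to the current one):

```python
# Render-tween sketch: the drawn position lags the just-computed Step
# position by (1.0 - delta_step) of the per-step velocity.
def tweened(current_pos, current_speed, delta_step):
    # {draw position} = {current position} - {current speed} * (1.0 - delta_step)
    return current_pos - current_speed * (1.0 - delta_step)

# An instance at x=100 that moved 4 px in the last Step:
print(tweened(100.0, 4.0, 0.0))   # 96.0  - where the previous Step left it
print(tweened(100.0, 4.0, 0.5))   # 98.0  - halfway between Steps
print(tweened(100.0, 4.0, 1.0))   # 100.0 - the current Step's position
```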

Other Notes...
  • There will no doubt be some components of GMS which I've overlooked that require slight modification to support decoupled Step & Draw events. Any help here pointing them out...? Hopefully most things are independent anyway, especially as GMS/YYG quite rightly encourage avoidance of "update"/Step logic during Draw events! ;)
  • There may be a need for some sort of "safety valve", in case of sudden background activity on the device which could cause a "hiccup" in the timing. Perhaps something like: don't process more than "x" Steps in between each Draw event. So if the game suddenly doesn't have the performance it requires to keep up, it doesn't get stuck endlessly processing Steps that it can never catch up with! Instead it would suffer one or two minor stutters (not unlike GMS might currently exhibit in a similar situation with the fixed-rate system).
  • Saving/restoring all those built-in variables may well introduce undesired overhead, especially for projects with a large number of instances. I guess that algorithm doesn't have to be the actual implementation, there may be a better approach? I was simply aiming for simple & backwards-compatibility, while keeping GMS runner/engine changes to a minimum.
  • Alternatively, perhaps the default behaviour should be to NOT auto-tween every instance, and we should call a function to "instance_enable_tweening()", or something along those lines for each instance we want to be auto-tweened.
  • One massive bonus of this entire approach might be that you could instantly pause any(?) game, by temporarily setting its room_speed to zero! The "Draw" events would still be called, and you could still have a responsive GUI or pop-up pause menu. Might need a little more thought, but I'm sure it could work somehow.

OK, I've written enough. Probably missed a few obvious things, but go ahead & point them out & help make this workable for all of our benefits! :)
 
Last edited by a moderator:

Mike

nobody important
GMC Elder
So..the biggest thing you should know about most normal GameMaker developers - that has ALWAYS been true, and that blocks all this kind of thing; is that they never... EVER keep processing out of drawing code. This means you can't call the (custom) rendering code of events as they will inevitably do some kind of processing and mess with game speeds...variables etc.

You also can't just take that last frame and tween it - well, I guess you could, but you'd get some odd fringe results for new frames and end frames. And processing raw vertex buffers would be complicated in the extreme. If you were dealing with one instance and one sprite you'd probably manage. But when you're dealing with custom drawing code - you have no idea what they're drawing, or how to tween it, so they have to.

after all... if I build a dynamic vertex buffer and pass it through a shader to a surface that I then use, how can you automagically tween that? You can't I'm afraid.


Just doesn't work..... We've been through all this ourselves when trying to think of a way to do it. We "settled" on the delta time for a reason, even if that reasons sucks. :)
 
P

Pudsy

Guest
Thanks Mike, for giving the proposal a look & responding :)

Reading my post again, I perhaps got carried away with suggesting GMS should auto-tween for us, especially towards the end of the "Other Notes" section! I originally intended the main thrust of the suggestion to be for GMS to allow us to decouple Step & Draw events/timings, along the lines of what I quoted from Jarmar Games.

I mentioned early on that the auto-tweening on GMS' behalf was an "optional" part of the suggestion, "and/or we could write our own Draw event code" to do the tweening, but you're a busy guy, so apologies if that didn't come across amongst all the text! :)

I'll have another stab...


UPDATED PROPOSAL IN BRIEF:
Currently, we need to artificially ramp up the room_speed to get the granularity we need to operate an independent Draw rate (for tweening, delta time, or any other similar solution), based on our own custom "game tick" rate. But by doing that, many of the instance variables become meaningless, and we have to reinvent the wheel to get back all those positional/motion variables & that nice functionality.


If GMS could provide us with a way to decouple the rate that Step & Draw events are triggered, it means that we can use all those lovely built-in positional variables to calculate our own tweening/interpolation in each Draw event (eg. x/y, xprevious/yprevious), since they would no longer be mangled by Step events being called too often.


I'll strip back the approach here, rather than make messy edits to my previous post...
  • Ideally there would be 3 modes of operation:-
    • Default = as things are now, 1 Step, followed by 1 Draw, repeat, never exceed room_speed
    • Draw events triggered as fast as possible, process appropriate number of Step events in between (suitable for eg. vsync disabled, high refresh-rate displays)
    • Draw events triggered at specified fixed rate, process appropriate number of Step events in between (suitable for eg. vsync enabled, mobile devices, or capped 30/60/100/120/144Hz to match display)
  • So basically, ignore section "3" of the "details" list in my previous proposal. The rest would stand, and the default behaviour would be to execute the Draw events as normal. However, this happens at a rate (draw_speed) that is independent of the Step event rate (room_speed).
  • It is then up to us to write custom Draw events & do our own tweening (or whatever other method you may choose to implement) for each element of the game, and to suit each individual project. If we do nothing, then everything would still be rendered in its current position/state; it's just that we'd get more (or fewer) than 1 frame rendered per Step, some of them being identical.
  • We could do our tweening based on delta_time. Or even better, perhaps GMS could provide a new "delta_step" value which could represent the current position between the previous & next Step events, as a value from 0.0 to 1.0. It's obviously easily calculated, but might aid writing custom Draw events to tween things.

So..the biggest thing you should know about most normal GameMaker developers - that has ALWAYS been true, and that blocks all this kind of thing; is that they never... EVER keep processing out of drawing code. This means you can't call the (custom) rendering code of events as they will inevitably do some kind of processing and mess with game speeds...variables etc.
Ha! I can certainly appreciate that! Although as I suggested, if we had an independent Draw speed, the default behaviour would definitely have to operate exactly the same as things are now. This way we certainly wouldn't be breaking any code. And then if someone writes GML to use the new functionality to decouple Step/Draw events, it would have to be documented & assumed that any code in the Draw event will be executed at a different rate to the Step events. After all, that would be the singular reason for its existence! ;)
( EDIT: and since this modified proposal no longer requires GMS to auto-tween or auto-call custom drawing code, it should no longer be an issue as it would all be in the hands of those devs to use or ignore it )

You also can't just take that last frame and tween it - well, I guess you could, but you'd get some odd fringe results for new frames, and end frames. And processing raw vertex buffers would be complicated in the extreme. If you were dealing with one instance, one sprite you'd probably manage. But when your dealing with custom drawing code - you have no idea what they're drawing, or how to tween it, so they have to.

after all... if I build a dynamic vertex buffer and pass it through a shader to a surface that I then use, how can you automagically tween that? You can't I'm afraid.
Yep, good examples of some of the issues I've already experienced myself before. Certainly there's no way I'd expect GMS to handle all cases correctly with auto-tweening. I perhaps went a bit far there!

But I'd still like to think there's a seed of something in the general approach...
Allow us to separate Step/Draw rates, and then we can handle everything beyond that ourselves.

Regardless, thanks again for taking the time to consider this stuff!
 
Last edited by a moderator:

gnysek

Member
I just want to ask: the current hspeed and vspeed system is so rarely used now - is it just here for legacy reasons?
From what I remember, if you set speed and direction, hspeed/vspeed end up equal to lengthdir_x/lengthdir_y(speed, direction) - so knowing this, they may still come in useful :)

Also, GM does this in the background every step: x += hspeed; y += vspeed; etc. Knowing this, the system you are all talking about can easily be created:

Code:
// creation code: shadow copies of the built-in motion variables
_speed = 0;
_direction = 0;
_vspeed = 0;
_hspeed = 0;
_gravity = 0;
_friction = 0;
Then you can easily create a script to put in the "Step" event which will emulate the default GM movement engine on the x and y variables - and you can add delta timing there as well. I think this kind of thing should be on the Marketplace as a "must-have" asset.
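A sketch of what such a Step script could do, written in Python for illustration of the GML logic (GM applies gravity along gravity_direction and friction against the motion; the dt parameter is where delta timing would hook in, and all names are illustrative):

```python
import math

# Hand-rolled version of GM's built-in per-step motion, with a delta
# multiplier folded in (dt = 1.0 means "running exactly at room_speed").
def step_motion(inst, dt=1.0):
    # gravity accelerates along gravity_direction (270 = straight down,
    # since GM's y axis points downward on screen)
    gdir = math.radians(inst["gravity_direction"])
    inst["hspeed"] += inst["gravity"] * math.cos(gdir) * dt
    inst["vspeed"] -= inst["gravity"] * math.sin(gdir) * dt
    # friction shortens the velocity vector toward zero
    spd = math.hypot(inst["hspeed"], inst["vspeed"])
    if spd > 0.0:
        scale = max(spd - inst["friction"] * dt, 0.0) / spd
        inst["hspeed"] *= scale
        inst["vspeed"] *= scale
    # integrate position
    inst["x"] += inst["hspeed"] * dt
    inst["y"] += inst["vspeed"] * dt

obj = {"x": 0.0, "y": 0.0, "hspeed": 4.0, "vspeed": 0.0,
       "gravity": 0.5, "gravity_direction": 270, "friction": 0.0}
step_motion(obj)
print(obj["x"], obj["vspeed"])  # moves ~4 px right, gains ~0.5 downward speed
```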
 

Mike

nobody important
GMC Elder
I quite like the idea of a different FPS for the draw event; that's not actually that hard. I suspect it would have to be a multiple of the step rate (or the other way round - whatever). We do get requests for more steps per draw, so things like physics can be run at a higher rate. There's no point in running graphics faster than 60fps (currently): although there are the new GSync monitors, they are rare and aren't really catching on.

The problem is that if it weren't, then eventually one of the events would suffer a delay due to the other one overrunning. For example, an FPS of 23 for the step and 60 for rendering. This would mean that the drawing would at some point be delayed whenever a step needed to run just before it, and this could cause stuttering anyway.

When they are tied together (as they are now), devs take this into account - even when they don't realise they're doing it, but with a more free form speed, that's a harder thing to grasp and we'll get constant reports of stuttering and tearing.

If I were doing this in native code, I'd run each process on a separate thread, then pass over "new" drawing locations each step event. Then the drawing code could tween nicely itself, and the process event would never hamper the drawing, as it would be on a separate thread (assuming at least dual core). But that's currently not possible in GMS - and a WHOLE different conversation.
 

GMWolf

aka fel666
@gnysek I know how hspeed and vspeed work, thanks.
I was more alluding to the fact that people don't tend to use them much anymore, in favour of their own dx and dy variables.
 
P

Pudsy

Guest
I quite like the idea of a different FPS for the draw event, that's not actually that hard.
Awesome! Ready by Monday? :D
Sorry! But yeah, what I was trying to say near the end of my first post was that I'm sure GMS is designed in such a way that its internals are not too dependent on the Step & Draw events being coupled (rather than the haphazard way some of us users may allow a stray update in our 'Draw' code!). So I'm hopeful that it wouldn't be too much of a mammoth task to split the events & basically have 2 separate timing loops which trigger them.

I suspect it would have to be a multiplier of the step (or other way round - whatever). ...(snip)...
Yeah, I sort of agree that if we (as GMS users) want to define a fixed "draw_speed" (one of the 3 modes I suggested), we might prefer to set both it & the room_speed so that one divides nicely into the other where possible. As mentioned, I think the main use of that facility would be to allow a game to set its draw_speed to match the display refresh rate. That works easily for, say, room_speed=30, and then draw_speed=30 (for low-spec devices) or 60 (for better specs). With vsync enabled, we'd then be guaranteed a whole number of Step events between each Draw, and no tearing.

( EDIT: But, it wouldn't actually matter if the two rates weren't nice factors/multipliers! See the bit after the next quote for why. )

Bear in mind here (in this mode, where we've defined a fixed draw_speed which differs to the room_speed, so we're ok to break backwards compatibility!), if the Draw events fall behind this specified fixed rate (device too busy or whatever), then they would be skipped as needed. Step events however would always be processed: if they fall behind, Draw events would be skipped until Step gets "in step" again!

( EDIT: And of course, it could work equally well the other way around, for high-update-rate physics, as you mention. )

The problem would be that if it wasn't, then eventually one of the events would suffer a delay due to the other one over running. For example, an FPS of 23 for step and 60 for rendering. This would mean than the drawing would at some point be delayed if the step needed to be run just before, and this could cause stuttering anyway.
OK, let's follow through on this thought, as it nicely highlights the main advantage of the approach (and indeed the reason I've used it elsewhere before)...

I think rather than designing a game with a room_speed of 23 (unusual, but it made a point!), this situation would be more typical of the draw_speed mode I mentioned where it would perform Draw events as fast as possible, so the render rate actually varies & definitely does not divide nicely into room_speed chunks! This would usually be for where you only care about maximum performance, with eg. room_speed=30 or 60 or whatever, and possibly vsync disabled (though could be enabled to cap it to display refresh rate). But it would equally apply for eg. room_speed=60 & draw_speed=144 (capped with vsync for a 144Hz monitor)

So GMS triggers Draw events as fast as possible. Now, like you say, sometimes in between Draw events, the game has to process another Step just at the boundary before it would be due to process the next one. In doing so, it slightly overruns the timer into what would be the next Step. Well, no problem... as described in my first proposal, we just process another Step immediately, before we trigger the next Draw event. Remember that the idea here is NOT that Draw events occur at regular intervals. Instead we (GMS users) can implement whatever tweening or similar we want in our Draw code (which is the intention by enabling this feature in the first place), in order to smooth out all motion. That next Draw event will be called at the very beginning of the tweened time period following that next Step, so there won't be much motion to tween yet, so it's not like everything on screen has moved 2 whole Steps at once. The important thing is that Step events are never skipped, and happen as regularly as possible, at the expense of Draw events when needed (since the tweening will smooth those out).

Picture this: delta_step = 0.0 through 1.0
- at 0.001 Draw would render everything as it was just after the previous Step, step(n-1), at a point in time immediately after the current step(n) has been processed
- at 0.999 Draw would render everything tweened to just before it reaches the positions set by the current step(n)
- if delta_step >= 1.0, we process another Step, step(n+1), and subtract 1.0 from delta_step

In this way, on faster devices, you could have eg. 10 Draw events triggered in between each Step (at eg, delta_step = 0.1, 0.2, 0.3, etc.)

Or on slower devices, you could have multiple Step events processed in between each Draw event.
So those Draw events might be rendered at eg. Step times = 0.3, 1.6, 2.9, 4.2, 5.5, etc
So there it skips over 2 Step events (there's no 3.x in that example), but of course everything tweened on-screen only moves the same 1.3 Steps as for every other frame in that sequence, so the motion remains fluid throughout, no jerks/stutters.
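That sequence falls out of the bookkeeping naturally; here's a small Python sketch of it (the 1.3 step-units of real time per Draw mirrors the example above, and the function name is made up for illustration):

```python
import math

# delta_step falls out of simple bookkeeping: real time advances by some
# amount per Draw; Steps are processed whenever a whole step-interval has
# elapsed, and the leftover fraction is the delta_step used for tweening.
def render_times(start, per_draw, draws):
    times = []
    t = start
    for _ in range(draws):
        steps_done = math.floor(t)        # Step events processed so far
        delta_step = t - steps_done       # 0.0..1.0 fraction for tweening
        times.append((steps_done, round(delta_step, 2)))
        t += per_draw
    return times

# Slow device: 1.3 step-units of real time per Draw, as in the example.
print(render_times(0.3, 1.3, 5))
# -> [(0, 0.3), (1, 0.6), (2, 0.9), (4, 0.2), (5, 0.5)] - no 3.x frame,
#    yet each Draw advances the same 1.3 step-units, so motion stays fluid.
```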

Note, this explanation does highlight one slight drawback of tweening: that what you are rendering is typically somewhere between 0.0 and 1.0 Steps "behind" what the game has just processed. But in practice that's so slight, the added fluidity outweighs it, especially as the draw_speed increases.

If I were doing this in native code, I'd run each process on a separate thread, then pass over "new" drawing locations each step event. then the drawing code could tween nicely itself, and the process event would never hamper the drawing on as it was on a separate thread (assuming at least dual core). But that's currently not possible in GMS - and a WHOLE different conversation.
Absolutely, in an ideal world. But I'd never dream of suggesting GMS should support multi-threading for this kind of stuff, and it'd be complete overkill for most of us and our projects anyway!

There's no point in running graphics faster than 60fps (currently) as although there are the new GSync monitors, they are rare and aren't really catching on.
I'd actually be interested in figures on that if anyone has a reference. Steam's hardware survey doesn't seem to include Refresh Rate (any more, I believe it may have done in the past), unless I've overlooked it. Which could point to a very low take up of high refresh rate monitors since CRTs bit the dust.

I agree the GSync/Freesync displays are rare (& eye-wateringly expensive!), but there are certainly a lot of higher-than-60Hz displays out there for not much more $$$ than your standard 60Hz ones. Gamers who've adopted them seem to find it hard to go back to 60Hz, even perhaps at the expense of resolution or colour reproduction?

Anyway, I don't see any reason for game devs to exclude supporting higher refresh rates if it's not much trouble to do so. And it's certainly good practice to test on them.

If we could get something in place that supports independent Step/Draw rates, I don't suppose it matters what refresh rate it's targeting. It would just be more tweened/interpolated Draw events in between each Step. It should be flexible enough to support whatever. Whereas it's much trickier to support higher rates with Step/Draw events locked together.


Hopefully that helps iron out a few more wrinkles :)
Happy to discuss/iterate further on it if anyone thinks there are any other gotchas / showstoppers?
 
Last edited by a moderator:
Top