GameMaker: Bizarre performance decrease, random-number-related issue

Fluury

Member

I had posted a thread earlier about my issues, but unfortunately to no avail.

Given there was a lot of information to write, I just put it in a video.

The gist is that, in some contexts, using random number generation just seems to cause some internal, permanent (until you close the game, at least) slowdown. I don't know if there is something I can do to fix this, given the earlier thread didn't really resolve it. It ultimately came down to me trying to replicate the "general area" of the issue until I noticed that after I set the "lines" to be a random length, the performance started to go down the drain the more times I'd reset the level (or well, barcode in this case).

PROJECT DOWNLOAD TO TINKER WITH: https://www.dropbox.com/s/lm7zycbwzl104vk/GenPerfProblem.rar?dl=0

As I said in the video, I am 100% clueless as to what is going on - if this is a bug, if I forgot something obvious - I do not know what to think of this.

If you have any ideas or thoughts, please post 'em.
 

Nidoking

Member
May I guess that the non-random length you set for your testing was closer to 100 than 200? I saw the same results you described in the video using random (dropping FPS) and a constant max of 100 (steady FPS), but when I set the constant max to 200, I see the same performance hit. It's not the randomization, but the increased number of instances. Even though they have no code, they're still drawing to the screen and still calculating movement, and that's what's taking up all that processing (according to the Profiler, which I recommend checking out). Why it increases over time even when you stop generating the lines, I don't know, but it's likely some internal Game Maker thing, like garbage collection that's way behind the destruction of instances, or something to do with exceeding a number of active instances that spills over a threshold from which there is no return.
 

obscene

Member
It's definitely related to instance_create_layer(). Commenting out that line seemed to stabilize the FPS for me. The random functions seemed to have no impact.

I made a script to average the FPS over 10,000 frames at a time so it was easier to monitor since the FPS will be running wild if you comment out the instance creation. I saw no downward trend even after a few minutes.
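For anyone curious, a monitor like that is only a few lines in a controller's Step event. This is just a minimal sketch of the idea (the variable names are mine, not from obscene's actual script), using the built-in fps_real:

GML:
/// Step event of a monitor object -- sketch of an FPS-averaging counter.
/// Assumes Create event initialised: fps_sum = 0; fps_frames = 0; fps_avg = 0;
fps_sum += fps_real;
fps_frames += 1;

if (fps_frames >= 10000)
{
    fps_avg = fps_sum / fps_frames; // draw this instead of the raw fps_real
    fps_sum = 0;
    fps_frames = 0;
}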

If you agree, it's probably unfortunately a bug in GMS2.
 

Fluury

Member
May I guess that the non-random length you set for your testing was closer to 100 than 200? I saw the same results you described in the video using random (dropping FPS) and a constant max of 100 (steady FPS), but when I set the constant max to 200, I see the same performance hit. It's not the randomization, but the increased number of instances. Even though they have no code, they're still drawing to the screen and still calculating movement, and that's what's taking up all that processing (according to the Profiler, which I recommend checking out). Why it increases over time even when you stop generating the lines, I don't know, but it's likely some internal Game Maker thing, like garbage collection that's way behind the destruction of instances, or something to do with exceeding a number of active instances that spills over a threshold from which there is no return.
The non-random length I used to test was 150 afaik, but as I said in the video I was very unsure about it. Of course, if you set it to 200, the FPS will be lower by default. The point of the video is that despite using seed 0, coming back to seed 0 you end up with less performance than before if you generate a bunch of instances over the timeframe and then wipe them.

I believe I had tested the interaction with a set amount of max instances for each walker and a random amount, and the FPS would not decrease in the case of a set max, but *would* decrease in the case of a random max. I will test this again later.

The big issue at hand is mostly the permanent performance loss as you had stated, which is quite a big deal to me.

It's definitely related to instance_create_layer(). Commenting out that line seemed to stabilize the FPS for me. The random functions seemed to have no impact.

I made a script to average the FPS over 10,000 frames at a time so it was easier to monitor since the FPS will be running wild if you comment out the instance creation. I saw no downward trend even after a few minutes.

If you agree, it's probably unfortunately a bug in GMS2.
I am personally speculating that it is a combination of both - and to be quite frank, if it really is 100% related to the creation of instances, I... don't exactly know how to circumvent that issue? Doesn't this imply that any project that creates a bunch of instances will, past some threshold, accumulate a permanent burden - if only gradually? Where even is that threshold?
 

Fluury

Member
Just tested it again. It has to be related to the random number generation at the start of oWalker that determines how many tiles each walker gets to create. Doing this with a set number for the max amount of walkers does not result in a permanent performance loss, it only seems to happen if you use random number generation for the max amount of instances each walker spawns.

If you want to test this yourself, do the following:
- Replace the random-generation with a set number. I personally used 200.
- Generate a barcode for seed 0, note down your FPS. Mine was 390~.
- Now, generate a bunch of barcodes using different seeds. The max length will be the same for every walker.
- Stop. Go back to seed 0. Generate the seed 0 barcode, and check your FPS. Mine was still at around 390, despite having generated barcodes for 10 minutes or so.

In comparison, when I set it back to the random number max-length, even after 3 minutes of generating the FPS tanked by 100, as demonstrated in the video.
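To be clear, the swap in step one is a single line in oWalker's Create event. Something along these lines - the variable name and range here are illustrative, not necessarily what the project uses:

GML:
/// Create event of oWalker (sketch)

// Random version -- the one that shows the permanent slowdown:
// max_tiles = irandom_range(100, 200);

// Fixed version -- FPS stays steady even after many regenerations:
max_tiles = 200;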

@obscene
 

Yal

🐧 *penguin noises*
GMC Elder
This is starting to sound like it belongs in a bug report. Creating instances dynamically is essential to basically every project, it shouldn't cause slowdown. I'd recommend reporting this to Yoyo, including the example project file and a link to this topic.

One final straw to grasp at first, though: does instance_create_depth() also have the slowdown? Maybe the get-layer-by-string functionality leaks the string data or something while the "just check depths" version doesn't?
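The swap Yal suggests is a one-line change; something like this, where the coordinates are whatever the project already passes and the depth value is arbitrary:

GML:
// Layer version (current):
// instance_create_layer(xx, yy, "Instances", oTile);

// Depth version, to test whether the slowdown follows the layer lookup:
instance_create_depth(xx, yy, 0, oTile);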
 

Fluury

Member
This is starting to sound like it belongs in a bug report. Creating instances dynamically is essential to basically every project, it shouldn't cause slowdown. I'd recommend reporting this to Yoyo, including the example project file and a link to this topic.

One final straw to grasp at first, though: does instance_create_depth() also have the slowdown? Maybe the get-layer-by-string functionality leaks the string data or something while the "just check depths" version doesn't?
I plan to tinker around with this program a bit more later today and will report on the findings. The reason why I am so confused is that this seems like a massive deal - why... did no one else seemingly report this already? As you said, this is something that should happen in quite literally every project if said project creates a bunch of instances (using random number generation(?))

As for your suggestion, I have not tried instance_create_depth() yet. I can however confirm that if, instead of the layer-by-string lookup, you use a variable that stores the layer id from layer_get_id(), this issue still occurs.

If all fails, how would I go about reporting this issue to Yoyo? Is there a specific bug-report forum/form, or is it just the classic "Submit a request" over at the support forum? From googling it seems to be the latter, but I'd like to be sure.
 

Yal

🐧 *penguin noises*
GMC Elder
If all fails, how would I go about reporting this issue to Yoyo? Is there a specific bug-report forum/form, or is it just the classic "Submit a request" over at the support forum? From googling it seems to be the latter, but I'd like to be sure.
Inside Game Maker, Help --> Report A Bug should open the correct page in your default browser.
 

Nocturne

Friendly Tyrant
Forum Staff
Admin
I would report this as a bug. I have spent a good hour studying your example and testing various different things and the only conclusion I can come to is that there is an issue with the random functions. When you file the bug, please link to this topic, link to the video you made, and also include the link to the project you've made and explain how it works, mentioning specifically that it looks like something is wrong with the random functions, as using a fixed value instead of using the random ones does not show the performance decrease.
 

Nocturne

Friendly Tyrant
Forum Staff
Admin
Okay, so, after a chat with one of the devs, we have a solution to the issue.

It appears that reusing a layer like this is the issue. If you move the controller onto a different layer then modify the code so it is this:

GML:
if(auto_generate and !instance_exists(oWalker)){
    layer_destroy(layer_get_id("Instances"));
    layer_create(0, "Instances");
    instance_destroy(oTile);
    if(!set_seed)randomize();
    else random_set_seed(0);
    alarm[0] = 1;
}
The issue is resolved. It seems that there is some underlying issue with layers not cleaning up internal structures correctly, possibly exacerbated by the use of the random functions... So, still file the bug, but now you can at least point to precisely what the issue is (and work around it in your games).
 

saffeine

Member
does this also apply to instances created with depth, or just layers?
i'm not sure if instances added with depth are added to the same layer as the instance that called it, so clarification would be great.
 

Fluury

Member
Okay, so, after a chat with one of the devs, we have a solution to the issue.

It appears that reusing a layer like this is the issue. If you move the controller onto a different layer then modify the code so it is this:

GML:
if(auto_generate and !instance_exists(oWalker)){
    layer_destroy(layer_get_id("Instances"));
    layer_create(0, "Instances");
    instance_destroy(oTile);
    if(!set_seed)randomize();
    else random_set_seed(0);
    alarm[0] = 1;
}
The issue is resolved. It seems that there is some underlying issue with layers not cleaning up internal structures correctly, possibly exacerbated by the use of the random functions... So, still file the bug, but now you can at least point to precisely what the issue is (and work around it in your games).
Ah shucks, I submitted the bug like 10 minutes ago. Well, I did link the thread.

Still, an absolutely massive thank you. After submitting the bug I was just kinda sitting there wondering how I should circumvent this issue, but it being related to layers and the random functions somehow making the problem even *worse* is interesting to me. More importantly, I just wonder how many projects are affected by this, without the developers even knowing what's going on. This seems like quite the significant bug!

Thank you again. I will implement this later, which should hopefully deal with the performance problems my original project was facing.

does this also apply to instances created with depth, or just layers?
i'm not sure if instances added with depth are added to the same layer as the instance that called it, so clarification would be great.
In all the projects I worked on (including the original project) I was working with layers exclusively, so I don't know if creating instances via depth would circumvent this issue. It's worth trying out for sure.
 

Fluury

Member
Okay, so, after a chat with one of the devs, we have a solution to the issue.

It appears that reusing a layer like this is the issue. If you move the controller onto a different layer then modify the code so it is this:

GML:
if(auto_generate and !instance_exists(oWalker)){
    layer_destroy(layer_get_id("Instances"));
    layer_create(0, "Instances");
    instance_destroy(oTile);
    if(!set_seed)randomize();
    else random_set_seed(0);
    alarm[0] = 1;
}
The issue is resolved. It seems that there is some underlying issue with layers not cleaning up internal structures correctly, possibly exacerbated by the use of the random functions... So, still file the bug, but now you can at least point to precisely what the issue is (and work around it in your games).
Welp, I got around to actually trying your suggestion and unfortunately it did not fix anything for me. I made a new layer over the old Instances layer, placed the Controller object in there and then modified the code as you suggested.

The performance still goes down the drain after generating a bunch of walkers over and over again :( I triple-checked that I did everything correctly, given there isn't exactly a lot of room for error when it comes to just adding two lines, a new layer, and moving an object from one layer to a different one.

Can someone else in the thread test this? Because my performance goes from 500 to 350 after like 2 minutes of generating.
 

Nocturne

Friendly Tyrant
Forum Staff
Admin
That's interesting... I was getting fairly steady 560fps after applying this... Oh well, let's just wait and see what the guys in tech support have to say!!!
 

Fluury

Member
I'm mostly interested in whether that means the problem is on my side somehow. This kinda halts the development of the project I am working on in a lot of ways, given I wouldn't want to keep working on it while this issue persists and I am still iffy on what exactly causes it.
 

Nocturne

Friendly Tyrant
Forum Staff
Admin
Can't you work on other parts, like UI or something for a day or two until you get a reply back?
 

Fluury

Member
I feel like something worth mentioning is that this issue also happens if, instead of a random function, you use (for example) 100 + get_timer() mod 100;.

Moving the random function over to oController and using with() on the new walker instance doesn't fix the issue either.

I suppose it's finally time for me to pass the torch to the tech guys, wait until then, and stop wasting my time trying to dance around this.
 

Yal

🐧 *penguin noises*
GMC Elder
I feel like something worth mentioning is that this issue also happens if, instead of a random function, you use (for example) 100 + get_timer() mod 100;.
Does it happen if you don't use mod specifically? Basic random number generators are usually modulo-based (add a prime number to the current seed, modulo by a different prime number, return the remainder) so maybe the random issue is caused by mod leaking memory?
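For illustration, a toy generator of the kind Yal describes leans on mod on every call. This is a sketch only - the constants are from the old ZX81 generator, chosen to stay small enough for GML reals, and are not GameMaker's actual internals:

GML:
/// scr_toy_random(n) -- toy modulo-based PRNG, illustrative constants only.
/// Assumes a variable toy_seed was initialised somewhere (e.g. toy_seed = 1;)
toy_seed = (toy_seed * 75 + 74) mod 65537;
return toy_seed mod argument0;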
 

Fluury

Member
Does it happen if you don't use mod specifically? Basic random number generators are usually modulo-based (add a prime number to the current seed, modulo by a different prime number, return the remainder) so maybe the random issue is caused by mod leaking memory?
As said earlier, the "bug" doesn't happen if you use a fixed number. Unless you are implying I should try out writing something up that would simulate a random function without using mod, where I'll be frank with you I wouldn't really know where to begin with :V
 

Yal

🐧 *penguin noises*
GMC Elder
As said earlier, the "bug" doesn't happen if you use a fixed number. Unless you are implying I should try out writing something up that would simulate a random function without using mod, where I'll be frank with you I wouldn't really know where to begin with :V
I'm not suggesting that, I'm just saying that if both GM mod and random functions exhibit the same issue, it might be because the leak is in mod specifically (which could help solve the bug faster).
 

Fluury

Member
I'm not suggesting that, I'm just saying that if both GM mod and random functions exhibit the same issue, it might be because the leak is in mod specifically (which could help solve the bug faster).
I see! That sounds very plausible! That would actually make even more sense; I have a lot of functions in the level gen which use mod over and over in some steps. That might be why the performance goes down even after only generating 5 levels, because of the sheer frequency of mod being used.
 

Yal

🐧 *penguin noises*
GMC Elder
We could actually replace mod with a script like this, now that you mention it...
GML:
///mood(num,divisor)
return argument0 - (argument0 div argument1)*argument1;
It's going to be a pretty messy transition since you need to change the calling syntax from a mod b ---> mood(a,b), but it could be an interesting check to test whether mod specifically has the issue.
(Fixing the random numbers is gonna be a bit more tedious, though, and I don't have any quick-and-dirty answers for that)
 

Fluury

Member
We could actually replace mod with a script like this, now that you mention it...
GML:
///mood(num,divisor)
return argument0 - (argument0 div argument1)*argument1;
It's going to be a pretty messy transition since you need to change the calling syntax from a mod b ---> mood(a,b), but it could be an interesting check to test whether mod specifically has the issue.
(Fixing the random numbers is gonna be a bit more tedious, though, and I don't have any quick-and-dirty answers for that)
Gave it a shot. Performance still goes down the drain interestingly enough. Sure hope the tech lads leave a reply this week because man this is weird.
 

Fluury

Member
I wouldn't want to be awfully pushy about this, especially considering the current situation, but uuh, how long does it usually take for the tech lads to reply to a bug report..?

EDIT: Aaand caught, cheers!
 

TheForeman847

Guest
I am having a similar problem with my game. Did you ever figure out a solution?

I have a level generator as well. I think it has something to do with the instances being created. In the task manager, the CPU usage and memory usage slowly creep up as I generate more and more rooms, to the point where my FPS drops below 60 after about 20 or so rooms. Each room has roughly 1000 instances in it (walls, floors etc.)

If I disable the script that creates the instances, I don't get any performance drops. Even though the level generator is still using a lot of random numbers.

I have a feeling that when instances are created and destroyed, it still reserves a spot in the memory for them. For example, I think the first room uses instances 1-1000, the next room uses instances 1001-2000, next room 2001-3000 and so on. I hope that makes sense.
 

Roldy

Member
Maybe the OP's problem got fixed. I just ran his project for over an hour and didn't see any significant fps variance or drop.
 

Fluury

Member
I am having a similar problem with my game. Did you ever figure out a solution?

I have a level generator as well. I think it has something to do with the instances being created. In the task manager, the CPU usage and memory usage slowly creep up as I generate more and more rooms, to the point where my FPS drops below 60 after about 20 or so rooms. Each room has roughly 1000 instances in it (walls, floors etc.)

If I disable the script that creates the instances, I don't get any performance drops. Even though the level generator is still using a lot of random numbers.

I have a feeling that when instances are created and destroyed, it still reserves a spot in the memory for them. For example, I think the first room uses instances 1-1000, the next room uses instances 1001-2000, next room 2001-3000 and so on. I hope that makes sense.
Are you reusing the same layers via persistence? My project never restarts the room and just keeps reusing the same layers. Scroll up a bit, and read up on the suggestion of Nocturne. The solution didn't work for me (even on the test project if I recall correctly) but maybe it does the job for you?

The bug is currently sitting in the bug tracker for some time now without any update made to it, and my own project still suffers from the issue. I desperately hope they'll fix it soon-ish, as I really want to release a public demo of my project but loudly announcing that you need to restart the project every 6 runs to not end up in low-fps hell is... not very attractive.

Maybe the OP's problem got fixed. I just ran his project for over an hour and didn't see any significant fps variance or drop.
This is very strange to hear. The bug tracker page even mentions that the issue can be reproduced 100% - are you sure you followed the steps 1:1, aren't on 2.3, are on Windows VM etc.?
 

Roldy

Member
This is very strange to hear. The bug tracker page even mentions that the issue can be reproduced 100% - are you sure you followed the steps 1:1, aren't on 2.3, are on Windows VM etc.?
I originally ran the project from this thread. Didn't see any significant drop. I'm running 2.2.5.

I just ran the project from the bug report following the steps in the report:

1. Run the project for Windows VM.
2. Press and release the spacebar. This will cause one "loop" of tiles using a random seed, then a second loop using seed of 0.
3. When the second loop has happened, wait a couple of seconds for your avg fps value to stabilise.
4. Press and release Ctrl. This will cause 999 loops using a random seed each time, then a final loop using seed of 0 again. (It will also remember the old FPS value for you....)
5. Come back to your PC after a few minutes and see that the game is now consistently running at a considerably lower avg FPS than the value which was recorded at the time you pressed Ctrl.

Here is the screen shot of that run. 410 FPS to start. 410 FPS at the end of 1000 loops.



I was intrigued by your test. I thought it was very well set up. But I didn't get the same results.

Unless you consider 0.07 FPS significant after creating and destroying 10 million instances.

It's an interesting bug for sure.
 

Fluury

Member
I originally ran the project from this thread. Didn't see any significant drop. I'm running 2.2.5.

I just ran the project from the bug report following the steps in the report:

1. Run the project for Windows VM.
2. Press and release the spacebar. This will cause one "loop" of tiles using a random seed, then a second loop using seed of 0.
3. When the second loop has happened, wait a couple of seconds for your avg fps value to stabilise.
4. Press and release Ctrl. This will cause 999 loops using a random seed each time, then a final loop using seed of 0 again. (It will also remember the old FPS value for you....)
5. Come back to your PC after a few minutes and see that the game is now consistently running at a considerably lower avg FPS than the value which was recorded at the time you pressed Ctrl.

Here is the screen shot of that run. 410 FPS to start. 410 FPS at the end of 1000 loops.



I was intrigued by your test. I thought it was very well set up. But I didn't get the same results.

Unless you consider 0.07 FPS significant after creating and destroying 10 million instances.

It's an interesting bug for sure.
That's incredibly interesting, and probably makes the bug an even bigger headache to deal with for the devs :V

Thanks for sharing!
 

TheForeman847

Guest
I am running 2.2.5. Windows VM is selected.
I create my instances at depth 0, instead of using layers.
To generate the next level, I restart the room.
I would like to add that I am using 3d in my project. I found that the memory would stack rapidly as I generated new levels. I discovered that you need to destroy the vertex buffers when finished with them, as restarting the rooms does not destroy them. After destroying them, the memory didn't stack rapidly anymore, just very slowly. However the FPS rate did not improve. It is still just as bad as when I wasn't destroying the vertex buffers.
When I start my game and generate the first level, my CPU uses 10%, and RAM uses 47mb. After generating 10 levels, the CPU goes up to about 20% usage and 53mb. After 20 levels, 30% and 60mb. As I create more and more levels, the RAM usage keeps climbing slowly, but the CPU usage remains at 30%. The FPS in game goes from 300FPS to about 50FPS after 20 levels. And from there on keeps dropping to unplayable frame rates.
I wish there was a way to clear all memory in the game with the exception of global variables.
 

Fluury

Member
I am running 2.2.5. Windows VM is selected.
I create my instances at depth 0, instead of using layers.
To generate the next level, I restart the room.
I would like to add that I am using 3d in my project. I found that the memory would stack rapidly as I generated new levels. I discovered that you need to destroy the vertex buffers when finished with them, as restarting the rooms does not destroy them. After destroying them, the memory didn't stack rapidly anymore, just very slowly. However the FPS rate did not improve. It is still just as bad as when I wasn't destroying the vertex buffers.
When I start my game and generate the first level, my CPU uses 10%, and RAM uses 47mb. After generating 10 levels, the CPU goes up to about 20% usage and 53mb. After 20 levels, 30% and 60mb. As I create more and more levels, the RAM usage keeps climbing slowly, but the CPU usage remains at 30%. The FPS in game goes from 300FPS to about 50FPS after 20 levels. And from there on keeps dropping to unplayable frame rates.
I wish there was a way to clear all memory in the game with the exception of global variables.
The issue I described doesn't really match what you're describing - in my case, CPU and memory usage wouldn't climb.

I recommend making a separate thread, as this appears to be a different issue.
 

Fluury

Member
Given 2 months passed without the bug getting any kind of attention within the Bugtracker and the bug still preventing me from doing any kind of demo release to get more data on gameplay and the like... let's dig a bit more I suppose?

Nocturne initially reporting that they were able to circumvent the issue entirely with the code they attached, while deleting the layer didn't fix the issue for me, has made me suspicious that this issue could potentially be hardware-related... in some way or fashion. This is just blind guessing, but I don't see any other reason why the same code would produce entirely different results on different machines.

So I went ahead and made two versions of the GenPerfProblem Executable. One as is, and the other with it deleting and recreating the layer between wipes. So the block now looks like this:

GML:
if (go and !instance_exists(oWalker))
{
    counter++;
    
    layer_destroy(layer_get_id("Instances"));
    layer_create(0,"Instances");
    
    instance_destroy(oTile);
    
    if (counter >= maxLoops-1) { random_set_seed(0); }
    else { randomize(); }
    
    if (counter >= maxLoops) { go = false; }
    
    alarm[0] = 1;
}
For Nocturne and the developer they were talking with, the latter version must've fixed it. However for me, it didn't fix it.

I went ahead and sent the two versions to a few people and tasked them to run both of them just like in the bug tracker. The results were consistent; Both versions would result in a performance loss.

@Roldy however your case is a very special one, given you didn't experience any slowdown at all. Would it be OK if you could share your specs?

If anyone else wants to try this out, I invite you to.
 

Roldy

Member
@Roldy however your case is a very special one, given you didn't experience any slowdown at all. Would it be OK if you could share your specs?

If anyone else wants to try this out, I invite you to.

So I ran the project from the bug report again on two machines. I exported the build as a standalone executable, so it's not running with GMS open. But the exe is still interpreted, not YYC.

The first machine is the one I used previously and the results were about the same (actually, FPS increased over the 1000 loops). This is from an Asus laptop with a 1060 dedicated graphics card (interestingly enough, this laptop has no ability to run from integrated graphics even though the chipset supports it).

run1.png


The second run is from an old Alienware laptop. This laptop's dedicated graphics card has burned out, so it runs from Intel integrated graphics. And I did see a steady gradual drop in FPS as it ran the loops:

run2.png

I tried to attach the dxdiag.txt for these machines but the forum doesn't like it. So I'll just post the relevant snippets

Asus:

Code:
------------------
System Information
------------------
      Time of this report: 7/24/2020, 10:57:25
             Machine name: LAPTOP-0CVNFN0C
               Machine Id: {29C12EEB-E1D2-4E1B-897B-8FDA01CC4231}
         Operating System: Windows 10 Home 64-bit (10.0, Build 18362) (18362.19h1_release.190318-1202)
                 Language: English (Regional Setting: English)
      System Manufacturer: ASUSTeK COMPUTER INC.
             System Model: Strix GL703GM_GL703GM
                     BIOS: GL703GM.310 (type: UEFI)
                Processor: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz (12 CPUs), ~2.2GHz
                   Memory: 16384MB RAM
      Available OS Memory: 16306MB RAM
                Page File: 10789MB used, 7949MB available
              Windows Dir: C:\WINDOWS
          DirectX Version: DirectX 12
      DX Setup Parameters: Not found
         User DPI Setting: 96 DPI (100 percent)
       System DPI Setting: 96 DPI (100 percent)
          DWM DPI Scaling: Disabled
                 Miracast: Available, with HDCP
Microsoft Graphics Hybrid: Not Supported
DirectX Database Version: Unknown
           DxDiag Version: 10.00.18362.0387 64bit Unicode

------------
DxDiag Notes
------------
      Display Tab 1: No problems found.
      Display Tab 2: No problems found.
        Sound Tab 1: No problems found.
        Sound Tab 2: No problems found.
        Sound Tab 3: No problems found.
          Input Tab: No problems found.

--------------------
DirectX Debug Levels
--------------------
Direct3D:    0/4 (retail)
DirectDraw:  0/4 (retail)
DirectInput: 0/5 (retail)
DirectMusic: 0/5 (retail)
DirectPlay:  0/9 (retail)
DirectSound: 0/5 (retail)
DirectShow:  0/6 (retail)

---------------
Display Devices
---------------
           Card name: NVIDIA GeForce GTX 1060
        Manufacturer: NVIDIA
           Chip type: GeForce GTX 1060
            DAC type: Integrated RAMDAC
         Device Type: Full Device (POST)
          Device Key: Enum\PCI\VEN_10DE&DEV_1C20&SUBSYS_10111043&REV_A1
       Device Status: 0180200A [DN_DRIVER_LOADED|DN_STARTED|DN_DISABLEABLE|DN_NT_ENUMERATOR|DN_NT_DRIVER]
Device Problem Code: No Problem
Driver Problem Code: Unknown
      Display Memory: 14205 MB
    Dedicated Memory: 6052 MB
       Shared Memory: 8153 MB
        Current Mode: 1920 x 1080 (32 bit) (120Hz)
         HDR Support: Not Supported
    Display Topology: Extend
Display Color Space: DXGI_COLOR_SPACE_RGB_FULL_G22_NONE_P709
     Color Primaries: Red(0.677734,0.308594), Green(0.263672,0.677734), Blue(0.151367,0.059570), White Point(0.313477,0.329102)
   Display Luminance: Min Luminance = 0.500000, Max Luminance = 270.000000, MaxFullFrameLuminance = 270.000000
        Monitor Name: Generic PnP Monitor
       Monitor Model: unknown
          Monitor Id: CMN1747
         Native Mode: 1920 x 1080(p) (120.000Hz)
         Output Type: Displayport Embedded
Monitor Capabilities: HDR Not Supported
Display Pixel Format: DISPLAYCONFIG_PIXELFORMAT_32BPP
      Advanced Color: Not Supported
Driver Name: C:\WINDOWS\System32\DriverStore\FileRepository\nvam.inf_amd64_9bb0c60e9ce433c0\nvldumdx.dll,C:\WINDOWS\System32\DriverStore\FileRepository\nvam.inf_amd64_9bb0c60e9ce433c0\nvldumdx.dll,C:\WINDOWS\System32\DriverStore\FileRepository\nvam.inf_amd64_9bb0c60e9ce433c0\nvldumdx.dll,C:\WINDOWS\System32\DriverStore\FileRepository\nvam.inf_amd64_9bb0c60e9ce433c0\nvldumdx.dll
Driver File Version: 26.21.0014.4294 (English)
      Driver Version: 26.21.14.4294
         DDI Version: 12
      Feature Levels: 12_1,12_0,11_1,11_0,10_1,10_0,9_3,9_2,9_1
        Driver Model: WDDM 2.6
Graphics Preemption: Pixel
  Compute Preemption: Dispatch
            Miracast: Not Supported by Graphics driver
      Detachable GPU: No
Hybrid Graphics GPU: Not Supported
      Power P-states: Not Supported
      Virtualization: Paravirtualization
          Block List: No Blocks
  Catalog Attributes: Universal:False Declarative:False
   Driver Attributes: Final Retail
    Driver Date/Size: 4/7/2020 2:00:00 PM, 963816 bytes
         WHQL Logo'd: n/a
     WHQL Date Stamp: n/a
   Device Identifier: {D7B71E3E-5F60-11CF-3563-1F301BC2D735}
           Vendor ID: 0x10DE
           Device ID: 0x1C20
           SubSys ID: 0x10111043
         Revision ID: 0x00A1
  Driver Strong Name: oem64.inf:0f066de39deb88fe:Section110:26.21.14.4294:pci\ven_10de&dev_1c20&subsys_10111043
      Rank Of Driver: 00D10001
         Video Accel:
Alienware:

Code:
------------------
System Information
------------------
      Time of this report: 7/24/2020, 11:28:19
             Machine name: HAM-PC
               Machine Id: {690E0DF3-D0EB-499B-9097-CCC4C19A5D55}
         Operating System: Windows 10 Pro 64-bit (10.0, Build 18362) (18362.19h1_release.190318-1202)
                 Language: English (Regional Setting: English)
      System Manufacturer: Alienware
             System Model: M14xR2
                     BIOS: InsydeH2O Version 03.72.07A03 (type: BIOS)
                Processor: Intel(R) Core(TM) i7-3610QM CPU @ 2.30GHz (8 CPUs), ~2.3GHz
                   Memory: 8192MB RAM
      Available OS Memory: 8094MB RAM
                Page File: 5318MB used, 10967MB available
              Windows Dir: C:\WINDOWS
          DirectX Version: DirectX 12
      DX Setup Parameters: Not found
         User DPI Setting: 96 DPI (100 percent)
       System DPI Setting: 96 DPI (100 percent)
          DWM DPI Scaling: Disabled
                 Miracast: Available, with HDCP
Microsoft Graphics Hybrid: Supported
DirectX Database Version: 1.1.5
           DxDiag Version: 10.00.18362.0387 64bit Unicode

------------
DxDiag Notes
------------
      Display Tab 1: No problems found.
      Display Tab 2: No problems found.
      Display Tab 3: No problems found.
        Sound Tab 1: No problems found.
          Input Tab: No problems found.

--------------------
DirectX Debug Levels
--------------------
Direct3D:    0/4 (retail)
DirectDraw:  0/4 (retail)
DirectInput: 0/5 (retail)
DirectMusic: 0/5 (retail)
DirectPlay:  0/9 (retail)
DirectSound: 0/5 (retail)
DirectShow:  0/6 (retail)

---------------
Display Devices
---------------
           Card name: Intel(R) HD Graphics 4000
        Manufacturer: Intel Corporation
           Chip type: Intel(R) HD Graphics Family
            DAC type: Internal
         Device Type: Full Device (POST)
          Device Key: Enum\PCI\VEN_8086&DEV_0166&SUBSYS_05521028&REV_09
       Device Status: 0180200A [DN_DRIVER_LOADED|DN_STARTED|DN_DISABLEABLE|DN_NT_ENUMERATOR|DN_NT_DRIVER]
Device Problem Code: No Problem
Driver Problem Code: Unknown
      Display Memory: 1792 MB
    Dedicated Memory: 32 MB
       Shared Memory: 1760 MB
        Current Mode: 1600 x 900 (32 bit) (60Hz)
         HDR Support: Not Supported
    Display Topology: Extend
Display Color Space: DXGI_COLOR_SPACE_RGB_FULL_G22_NONE_P709
     Color Primaries: Red(0.610352,0.349609), Green(0.320313,0.559570), Blue(0.150391,0.129883), White Point(0.313477,0.329102)
   Display Luminance: Min Luminance = 0.500000, Max Luminance = 270.000000, MaxFullFrameLuminance = 270.000000
        Monitor Name: Generic PnP Monitor
       Monitor Model: unknown
          Monitor Id: AUO213E
         Native Mode: 1600 x 900(p) (60.048Hz)
         Output Type: Internal
Monitor Capabilities: HDR Not Supported
Display Pixel Format: DISPLAYCONFIG_PIXELFORMAT_32BPP
      Advanced Color: Not Supported
         Driver Name: igdumdim64.dll,igd10iumd64.dll,igd10iumd64.dll
Driver File Version: 10.18.0010.4358 (English)
      Driver Version: 10.18.10.4358
         DDI Version: 11.2
      Feature Levels: 11_0,10_1,10_0,9_3,9_2,9_1
        Driver Model: WDDM 1.3
Graphics Preemption: DMA
  Compute Preemption: Thread group
            Miracast: Supported
      Detachable GPU: No
Hybrid Graphics GPU: Integrated
      Power P-states: Not Supported
      Virtualization: Not Supported
          Block List: No Blocks
  Catalog Attributes: N/A
   Driver Attributes: Final Retail
    Driver Date/Size: 12/20/2015 2:00:00 PM, 11157656 bytes
         WHQL Logo'd: Yes
     WHQL Date Stamp: Unknown
   Device Identifier: {D7B78E66-4226-11CF-9E62-5825B4C2C735}
           Vendor ID: 0x8086
           Device ID: 0x0166
           SubSys ID: 0x05521028
         Revision ID: 0x0009
  Driver Strong Name: oem69.inf:5f63e5341859ec8c:iIVBM_w10:10.18.10.4358:pci\ven_8086&dev_0166
      Rank Of Driver: 00D12001
         Video Accel: ModeMPEG2_A ModeMPEG2_C ModeWMV9_C ModeVC1_C
Hope that helps.
 

Fluury

Member
Well, that should be solid proof that something hardware-related is causing this. Unfortunately, even after comparing a few specs, I am unable to find anything consistent, partly because you are still the only one so far who saw no FPS loss. It would take more people trying it out to find a pattern.

Thank you for taking the time and testing it out!
 

Roldy

Member
I'd be curious what the reported Display Memory and Dedicated Video Memory was on the devices you have so far tested on.

When I get a chance I will test the layer deletion 'fix' on the machine that shows a slowdown.
 

Fluury

Member
Here it is from my current machine:

Code:
Display Memory: 6042 MB
Dedicated Memory: 1977 MB
Here they are from a friend's machine that also suffered from the issue - the issue wasn't fixed with either program.

Code:
Display Memory: 20390 MB
Dedicated Memory: 8171 MB
For the sake of completion, here is my entire dxdiag:

Code:
------------------
System Information
------------------
      Time of this report: 7/25/2020, 10:45:21
             Machine name: DESKTOP-5NNF3H8
               Machine Id: {665614C4-170C-46AE-838F-76CFC94EEA42}
         Operating System: Windows 10 Pro 64-bit (10.0, Build 18362) (18362.19h1_release.190318-1202)
                 Language: English (Regional Setting: English)
      System Manufacturer: ASUS
             System Model: All Series
                     BIOS: BIOS Date: 07/22/15 14:30:56 Ver: 25.01 (type: BIOS)
                Processor: Intel(R) Core(TM) i5-4590 CPU @ 3.30GHz (4 CPUs), ~3.3GHz
                   Memory: 8192MB RAM
      Available OS Memory: 8130MB RAM
                Page File: 6687MB used, 6562MB available
              Windows Dir: C:\Windows
          DirectX Version: DirectX 12
      DX Setup Parameters: Not found
         User DPI Setting: 96 DPI (100 percent)
       System DPI Setting: 96 DPI (100 percent)
          DWM DPI Scaling: Disabled
                 Miracast: Available, with HDCP
Microsoft Graphics Hybrid: Not Supported
 DirectX Database Version: Unknown
           DxDiag Version: 10.00.18362.0387 64bit Unicode

------------
DxDiag Notes
------------
      Display Tab 1: No problems found.
      Display Tab 2: No problems found.
        Sound Tab 1: No problems found.
        Sound Tab 2: No problems found.
        Sound Tab 3: No problems found.
        Sound Tab 4: No problems found.
          Input Tab: No problems found.

--------------------
DirectX Debug Levels
--------------------
Direct3D:    0/4 (retail)
DirectDraw:  0/4 (retail)
DirectInput: 0/5 (retail)
DirectMusic: 0/5 (retail)
DirectPlay:  0/9 (retail)
DirectSound: 0/5 (retail)
DirectShow:  0/6 (retail)

---------------
Display Devices
---------------
           Card name: NVIDIA GeForce GTX 1050
        Manufacturer: NVIDIA
           Chip type: GeForce GTX 1050
            DAC type: Integrated RAMDAC
         Device Type: Full Device (POST)
          Device Key: Enum\PCI\VEN_10DE&DEV_1C81&SUBSYS_8C971462&REV_A1
       Device Status: 0180200A [DN_DRIVER_LOADED|DN_STARTED|DN_DISABLEABLE|DN_NT_ENUMERATOR|DN_NT_DRIVER]
 Device Problem Code: No Problem
 Driver Problem Code: Unknown
      Display Memory: 6042 MB
    Dedicated Memory: 1977 MB
       Shared Memory: 4064 MB
        Current Mode: 1920 x 1080 (32 bit) (60Hz)
         HDR Support: Not Supported
    Display Topology: Extend
 Display Color Space: DXGI_COLOR_SPACE_RGB_FULL_G22_NONE_P709
     Color Primaries: Red(0.654297,0.333008), Green(0.324219,0.625000), Blue(0.157227,0.075195), White Point(0.313477,0.329102)
   Display Luminance: Min Luminance = 0.500000, Max Luminance = 270.000000, MaxFullFrameLuminance = 270.000000
        Monitor Name: Generic PnP Monitor
       Monitor Model: BenQ GL2580
          Monitor Id: BNQ78E5
         Native Mode: 1920 x 1080(p) (60.000Hz)
         Output Type: HDMI
Monitor Capabilities: HDR Not Supported
Display Pixel Format: DISPLAYCONFIG_PIXELFORMAT_32BPP
      Advanced Color: Not Supported
         Driver Name: C:\Windows\System32\DriverStore\FileRepository\nvmd.inf_amd64_82063bd87f0dc443\nvldumdx.dll,C:\Windows\System32\DriverStore\FileRepository\nvmd.inf_amd64_82063bd87f0dc443\nvldumdx.dll,C:\Windows\System32\DriverStore\FileRepository\nvmd.inf_amd64_82063bd87f0dc443\nvldumdx.dll,C:\Windows\System32\DriverStore\FileRepository\nvmd.inf_amd64_82063bd87f0dc443\nvldumdx.dll
 Driver File Version: 26.21.0014.4166 (English)
      Driver Version: 26.21.14.4166
         DDI Version: 12
      Feature Levels: 12_1,12_0,11_1,11_0,10_1,10_0,9_3,9_2,9_1
        Driver Model: WDDM 2.6
 Graphics Preemption: Pixel
  Compute Preemption: Dispatch
            Miracast: Not Supported
      Detachable GPU: No
 Hybrid Graphics GPU: Not Supported
      Power P-states: Not Supported
      Virtualization: Paravirtualization
          Block List: No Blocks
  Catalog Attributes: Universal:False Declarative:True
   Driver Attributes: Final Retail
    Driver Date/Size: 06/12/2019 02:00:00, 962168 bytes
         WHQL Logo'd: Yes
     WHQL Date Stamp: Unknown
   Device Identifier: {D7B71E3E-5FC1-11CF-9467-99AC1BC2D735}
           Vendor ID: 0x10DE
           Device ID: 0x1C81
           SubSys ID: 0x8C971462
         Revision ID: 0x00A1
  Driver Strong Name: oem34.inf:0f066de3d325ba4c:Section008:26.21.14.4166:pci\ven_10de&dev_1c81&subsys_8c971462
      Rank Of Driver: 00CF0001
         Video Accel:
 

Fluury

Member
And I'm back - this time with some new info regarding this bug.

Recently on the Discord someone suggested checking whether 2.3 magically fixes it (it didn't). As a workaround, someone else suggested pooling: instead of creating and destroying the instances over and over, you create them all once, store them in an array for example, and deactivate them when they are no longer needed instead of destroying them. Then, when you need one again, you take one from the pool and reset it instead of creating a new one.

That method should heavily reduce the number of tiles being created/destroyed, and thus slow the performance drain massively if the bug is related to instance creation and deletion.
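For anyone unfamiliar with the pattern, here is a rough sketch of what that pooling setup could look like in GML. The pool size, the object (obj_line), and the layer name are placeholders, not taken from the actual project:

```gml
// Create event of a controller object: build the pool once.
pool = ds_list_create();
for (var i = 0; i < 200; i++)
{
    var inst = instance_create_layer(0, 0, "Instances", obj_line);
    instance_deactivate_object(inst); // parked, not destroyed
    ds_list_add(pool, inst);
}

// Fetching a "new" line: reactivate one from the pool instead of creating it.
var inst = pool[| 0];
ds_list_delete(pool, 0);
instance_activate_object(inst);
inst.x = xx; // xx/yy: wherever the line should appear
inst.y = yy;

// Returning a line: deactivate it and push it back instead of destroying it.
instance_deactivate_object(inst);
ds_list_add(pool, inst);
```

The point is that instance_create_layer/instance_destroy are only ever called during setup, so if the slowdown is tied to creation/destruction churn, the pool should sidestep it.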

Given the project I am working on uses layers very often in two ways, for instances and for dynamic asset creation, I was wondering if I had to set up pooling for both of them. So the big question was: does the performance drop also happen if you create/destroy assets over and over? Some suggested this is specifically about layer usage rather than the instances being created, so I went ahead and modified the barcode-maker project to use sprite assets instead of instances, and to wipe them by destroying the layer they are created on.

No performance loss! From what I have seen, it comes entirely from the creation/deletion of instances, not from direct usage of the layer with anything else.
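The sprite-asset variant I tested looks roughly like this; the layer depth, the sprite (spr_line), and the bar spacing are stand-ins for whatever the project actually uses:

```gml
// Build the bars as sprite assets on a dedicated layer
// instead of spawning instances.
bar_layer = layer_create(0, "bars");
for (var i = 0; i < count; i++)
{
    layer_sprite_create(bar_layer, xx + i * 4, yy, spr_line);
}

// Resetting the barcode: destroy the whole layer (which wipes
// every sprite asset on it in one call), then recreate it.
layer_destroy(bar_layer);
bar_layer = layer_create(0, "bars");
```

Since no instances are created or destroyed here, this path avoids the slowdown entirely in my tests.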

So if anyone else is wrestling with this issue, I'd check out pooling and see if that helps you out in case YYG doesn't fix the bug in your required timeframe.

EDIT: Tried out pooling and... it doesn't seem to do anything? I'll update this post later with more info once I've figured this out. Maybe this is another puzzle piece towards the source of this bug.

EDIT2: After more testing I can confirm it isn't related to instance creation. The system I set up just creates 10,000 instances at the start, deactivates them, and then reuses them across all loops. The issue still persists - as confirmed earlier, it is 100% related to the random number functions in combination with them being used to terminate the "instance activation/instance creation" process, as both cause issues.


Looks like there is quite literally no way to circumvent this issue.
 