
 Bug with instance_destroy ?! [BUG]

I was testing instance creation and destruction speed and performance, and found myself using this code:

Code:
#macro times 100000

var i, time;
var create = 0, destroy = 0;

repeat (times) {
    time = get_timer();
    i = instance_create_depth(0,0,0,obj_system);
    create += get_timer()-time;

    time = get_timer();
    instance_destroy(i);
    destroy += get_timer()-time;
}

var total = "(x" + string(times) + ")";

show_debug_message("Instance create " + total);
show_debug_message(create/times);

show_debug_message("Instance destroy " + total);
show_debug_message(destroy/times);
PS: "obj_system" is just an empty object!

The problem here is that after the big repeat chunk (which happens ONE time, in the creation code of a single room, "room_test") I find the memory usage at around 33,933.42K... and it's an empty project: no other rooms, no other objects, just this test room!! The value doesn't decay over time.

Is instance_destroy not freeing memory?! (It is at least freeing SOME memory, as during the repeat the memory goes up to roughly 80,000K and then drops to around 34,000K.) Is this supposed to happen?

At the end of the process the frame rate is stable and very high (6,000 FPS, read in the debugger), so I guess the instances are correctly being removed from the internal update list... but the memory doesn't get totally freed.

Do you have the same problem? What do you think?!
 

FrostyCat

Member
It is freeing memory, just not immediately back into the heap. There are always delays in situations like this with deallocation and GC.

I've noticed a similar behaviour while working on my MCTS demos, which had a whole load of nested arrays being constantly created and unlinked. The memory goes up when the AI makes its first move of the session, but on subsequent moves and rounds the memory doesn't go up again. It looks like the freed amount is simply marked as "available" without eager removal, similar to what happens on an average file system.

As long as it doesn't continue to go up uncontrollably, there's nothing to worry about.
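A quick way to check this for yourself (a sketch, assuming the same empty obj_system object from the original post) is to run the same burst twice and watch the debugger: if the memory is only being pooled rather than leaked, the second burst should not push the peak any higher, because the slots left behind by the first burst get reused.

Code:
#macro times 100000

// first burst: grows the internal pools
repeat (times) {
    instance_destroy(instance_create_depth(0, 0, 0, obj_system));
}
show_debug_message("first burst done - note the memory figure");

// second burst: should reuse the freed slots,
// so the memory figure should stay roughly unchanged
repeat (times) {
    instance_destroy(instance_create_depth(0, 0, 0, obj_system));
}
show_debug_message("second burst done - compare the memory figure");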
 
I tested the same thing using arrays:

Code:
#macro times 100000

var i, time;
var create = 0, destroy = 0;

repeat (times) {
    time = get_timer();
    i = array_create(200);
    create += get_timer()-time;

    time = get_timer();
    i = 0;
    destroy += get_timer()-time;
}

var total = "(x" + string(times) + ")";

show_debug_message("array create " + total);
show_debug_message(create/times);

show_debug_message("array destroy " + total);
show_debug_message(destroy/times);
Here the memory always stays around 5,000K. No memory problems!
 

rwkay

YoYo Games Staff
YYG Staff
If you are using the Task Manager memory figure then that will be very inaccurate, as it is showing the amount of memory we have allocated (or committed) from the system; it does not show how much memory is allocated or free within the program itself, which is quite different.

When we free any memory internally, it is freed back into our internal pools (which are still allocated from the system), but that freeing will not necessarily be reflected in the Task Manager (only when a significant amount of memory is released will it show up there).

Please don't use Task Manager to decide if we are managing memory correctly, though it is useful for deciding if we are leaking memory like a sieve.

Russell

EDIT: changed accurate to inaccurate as that changed the whole meaning of what I was saying
 
I used the value that appears in the debug screen, not the Task Manager one :)
Does this mean it may be a bug?

You could give it a try for yourself!
 

gnysek

Member
Remember that Windows gives RAM to applications and manages it, trying to predict how much a program may need, so sometimes it doesn't free it ASAP even if the program is really using less memory. For bigger changes it does, but for smaller arrays etc. it may keep small blocks "reserved" for reuse. If there is a visible memory leak of 100-500MB, then for sure something is wrong, but 1-2MB may just be Windows trying to play clairvoyant.
 
Thank you for the response :D
To test this I wrote code that repeats the creation and destruction of one object 100,000 times!!
The memory use after that (shown in the GM debug screen) is 34MB... during the process it reaches 80MB, but as soon as the repeat loop ends it drops to 34MB and stays there forever (I tested it for 30 minutes) and it doesn't drop further! So again... is this normal?!
Running an empty project uses roughly 3MB...

If it needs 77MB (80MB - 3MB) of memory for 100,000 instances, that gives roughly 0.77KB per instance,
and then it drops to 30MB above the baseline (33MB - 3MB), which means it is "not freeing" the memory equivalent of about 38,961 instances!! (The maths may not be that linear; I'm just trying to prove a point.)
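The arithmetic above can be written out as a quick sanity check (a sketch using the rounded figures reported, in KB):

Code:
var baseline_kb = 3000;                       // empty project
var peak_kb     = 80000 - baseline_kb;        // memory used at the peak
var per_inst_kb = peak_kb / 100000;           // ~0.77 KB per instance
var kept_kb     = 33000 - baseline_kb;        // still held after the loop

show_debug_message(per_inst_kb);              // ~0.77
show_debug_message(kept_kb / per_inst_kb);    // ~38,961 instances' worth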

Notes:
1) everything is done in the room creation code
2) no more objects are in this room
3) no more code is in the project
4) the object created is empty... it doesn't have events, code, variables or sprites (completely empty)
5) the code used is presented in the original post (test it and see it for yourselves)

I'm trying to help here, because if it is a bug YoYo should address it... and if it isn't, tell me. This is not a feature I want implemented; this is me trying to contribute to the correction of bugs :)
 

gnysek

Member
If it's in the create event, it may be that the RAM indicator is updating lazily relative to real usage :)

Try repeating the test so that the same code executes in the same game at a 2-3 second interval, for example 100 times, and check whether the memory rises again. It will probably remain at the level it reached the first time.
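One way to set that repetition up (a sketch, assuming a controller object and the empty obj_system from the original post) is to drive the burst from an alarm, so each run happens on a later frame and the debugger's memory figure has time to refresh between runs:

Code:
/// Create event of a controller object
runs = 0;
alarm[0] = game_get_speed(gamespeed_fps) * 2;   // first burst in ~2 seconds

/// Alarm 0 event
repeat (100000) {
    instance_destroy(instance_create_depth(0, 0, 0, obj_system));
}
runs += 1;
show_debug_message("burst " + string(runs) + " done - check the memory figure");
if (runs < 100) {
    alarm[0] = game_get_speed(gamespeed_fps) * 2;   // re-arm for the next burst
}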
 

rwkay

YoYo Games Staff
YYG Staff
I suspect that what has happened is that internal buffers have been allocated to accommodate 100,000 items at once, and these are not reduced in size once you have destroyed them all; the slots in those buffers are technically free, but the resize has happened, so they are still allocated from the system.

It's not a bug, as the memory is still technically freed as far as the engine is concerned (though it will not be returned to the system until exit) - the space is just still allocated.

Russell
 
Okay, nice to read this ;) Thank you! (Can't wait for the Mac IDE the day after tomorrow, BTW.)
 