Performance testing (Including structs)

AllCrimes

Member
I put together another performance test for my own sanity. DISCLAIMER: There's no perfect way to test structs (that I can think of) without using variable_struct_get and variable_struct_set from within the for loop, and the performance cost of these functions compared to dot notation is unknown. This also requires calling string() on the iteration variable (i). The best workaround I could come up with was to loop through __numLoops calling string(i), measure how long that took on its own, and subtract it from the final time for the struct test. If anyone can think of a better way to test this, please let me know. This particular test was run with the YYC and 100,000 iterations.
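For clarity, the compensation step described above can be sketched roughly like this (this is an illustrative sketch, not the actual test code; __numLoops and all other names are assumptions):

GML:
// Sketch of the overhead-compensation idea: measure the cost of the
// string(i) conversions alone, then subtract it from the struct result.
var __numLoops = 100000;

// 1. Time the string(i) conversions by themselves.
var __start = get_timer();
for (var i = 0; i < __numLoops; i++) {
    var __key = string(i);
}
var __stringOverhead = get_timer() - __start;

// 2. Time the struct test, which also has to call string(i) for its keys.
var __s = {};
__start = get_timer();
for (var i = 0; i < __numLoops; i++) {
    variable_struct_set(__s, string(i), i);
}
var __structTime = get_timer() - __start;

// 3. Remove the measured string() overhead from the struct timing.
var __corrected = __structTime - __stringOverhead;
show_debug_message("Corrected struct time: " + string(__corrected));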

perf-test.png

Some observations...

Array with an enum:
+ By far the fastest speed
+ Intellisense in the IDE from the enum itself
- You must continue to fake inheritance

Struct:
+ Proper inheritance
- No intellisense (Cannot be faked)
- Very slow (Cannot be faked)
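For anyone unfamiliar with the array-with-enum pattern being compared here, it looks roughly like this (a sketch for illustration, not the actual test code):

GML:
// Array-with-enum: the enum provides named, auto-completed indices
// into a plain array, which is why the IDE can offer intellisense.
enum ENTITY {
    x,
    y,
    hp,
    SIZE   // total number of fields; handy for array_create
}

var __ent = array_create(ENTITY.SIZE, 0);
__ent[ENTITY.x]  = 100;
__ent[ENTITY.y]  = 50;
__ent[ENTITY.hp] = 10;

// "Fake inheritance" means a child type must start its own enum after
// the parent's last entry so the parent's indices stay valid.
show_debug_message(string(__ent[ENTITY.hp]));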

Personally, without further evidence, the convenience of structs themselves is not worth the performance cost if you intend to have them represent a large number of entities, but they might still be useful as wrappers for sets of functions and things of that nature. Also bear in mind that you lose intellisense, which means you have to remember the properties down through all your inheritance chains. Please, please correct me if you see otherwise.

You can find the code here:

EDIT: Important further reading:
 

Nocturne

Friendly Tyrant
Forum Staff
Admin
These results are to be expected, tbh. The goal with this update was to introduce new features and get them as stable as possible, and then the next few updates will introduce optimisations once YYG can see that they work as planned, see how people are using them, and see where the bottlenecks are. Things will only improve from here on in! Feel free to file a bug report with your findings and a link to the test project so YYG can check it out. These kinds of things are very helpful!
 

kburkhart84

Firehammer Games
It is good to know what is going on. Maybe I'm wrong, but I thought auto-complete was working for structs?!

As far as performance goes, unless what you are doing is performance intensive, the organizational benefits that structs provide seem worth the cost. And soon enough, performance should get better (I don't count on it until I see it, though, as with everything).
 

chamaeleon

Member
Regarding the test itself, creating massive numbers of struct members seems like a bit of an abuse. I realize it is said upfront that there's not exactly a good way of doing it, but perhaps it shouldn't be done in the first place. If you need massive amounts of data in that fashion, odds are arrays or ds_lists, etc. will work just fine, quite possibly as members of a struct rather than as standalone units.

If I really needed a huge number of values accessed by different names, the structure I'd reach for would more than likely be a ds_map (or an implementation based on structs, I suppose, if it seemed appropriate, in which case arrays are probably not a suitable alternative and speed comparisons become meaningless). Essentially, I won't worry too much about the efficiency of variable_struct_set and variable_struct_get, as I'll personally try to use structs in such a way that I don't need to reach for those functions.
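As a rough sketch of that suggestion (names are illustrative, not from the test project): keep one ds_map as a member of a struct instead of thousands of individually named struct members.

GML:
// Sketch: a struct wrapping a ds_map for name-keyed bulk data.
var __store = {
    data : ds_map_create()
};

ds_map_add(__store.data, "item_1042", 99);
var __value = __store.data[? "item_1042"];   // accessor lookup

show_debug_message(string(__value));

// ds_maps are not garbage collected, so clean up when done.
ds_map_destroy(__store.data);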

When using the dot notation for a small number of elements, the difference between an array and a struct is not so bad.
GML:
function benchmark_array(loops){
    var a = [];
    a[0] = 10; a[1] = 20; a[2] = 30; a[3] = 40; a[4] = 50;
    var sum = 0;
  
    for (var i = 0; i < loops; i++) {
        sum += a[0]; sum += a[1]; sum += a[2]; sum += a[3]; sum += a[4];
    }
  
    show_debug_message("Array sum = " + string(sum));
}

function benchmark_struct(loops) {
    var s = { };
    s.a = 10; s.b = 20; s.c = 30; s.d = 40; s.e = 50;
    var sum = 0;
  
    for (var i = 0; i < loops; i++) {
        sum += s.a; sum += s.b; sum += s.c; sum += s.d; sum += s.e;
    }
  
    show_debug_message("Struct sum = " + string(sum));
}
GML:
var n = 10000000;

var t1 = get_timer();
benchmark_array(n);
var t2 = get_timer();
show_debug_message("Array time = " + string((t2-t1)/1000000));

var t1 = get_timer();
benchmark_struct(n);
var t2 = get_timer();
show_debug_message("Struct time = " + string((t2-t1)/1000000));
Windows VM
Code:
Array sum = 1500000000
Array time = 8.48
Struct sum = 1500000000
Struct time = 10.31
Windows YYC
Code:
Array sum = 1500000000
Array time = 0.67
Struct sum = 1500000000
Struct time = 1.43
 

AllCrimes

Member
You raise a good point, which is that you would not use structs to store such large numbers of properties. Interestingly, the functions for getting and setting struct variables do seem to come with considerable overhead. An unexpected amount, too, compared to using, for example, ds_list_find_value or ds_map_find_value versus an accessor, where the difference is basically non-existent.
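For reference, the ds_list comparison mentioned above looks like this (a minimal sketch; the claim that both forms perform about the same is the observation above, not something this snippet proves on its own):

GML:
// Function call vs accessor on a ds_list.
var __list = ds_list_create();
ds_list_add(__list, 1, 2, 3);

var __a = ds_list_find_value(__list, 0);   // function form
var __b = __list[| 0];                     // accessor form

show_debug_message(string(__a) + " " + string(__b));
ds_list_destroy(__list);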

GML:
    // __numLoops, __strOut and __dummy declared here so the snippet runs standalone.
    var __numLoops = 100000;
    var __strOut   = "";
    var __dummy    = 0;

    var __testStruct = {
        a: 1,
        b: 2,
        c: 3,
    };

    ////////////////////////////////////////////
    // Struct get with the function
    var __label = "READING STRUCT (FUNCTION)";

    var __start = get_timer();
    repeat(__numLoops) {
        __dummy = variable_struct_get(__testStruct, "a");
    }
    var __end = get_timer();

    var __result = (__end - __start);
    __strOut += __label + ": " + string(__result) + "\n";

    ////////////////////////////////////////////
    // Struct get with dot notation
    var __label = "READING STRUCT (DOT NOTATION)";

    var __start = get_timer();
    repeat(__numLoops) {
        __dummy = __testStruct.a;
    }
    var __end = get_timer();

    var __result = (__end - __start);
    __strOut += __label + ": " + string(__result) + "\n";
Here's the result:

differences.png

There might be a way to apply this time difference to the original test in the same way I did with the string conversion. In any case, the picture is still a blurry one, because we don't know what kind of optimizations the compiler is capable of or what might be getting cached (at least I don't).
 

GMWolf

aka fel666
A struct is basically a map right?
And since you would expect them to store a dozen properties rather than thousands, they would be optimised for small sizes.
My guess is a flat map with linear search, maybe sorted with binary search for larger structs.
Having a hash map per instance would be very costly in memory.
Although, it's also entirely possible they are just implemented as hash maps for now.

Whatever the case may be, structs will always be slower than arrays. Numeric indices are a lot easier to work with than arbitrary identifiers. That's just one of the many costs of dynamic languages. (Bad perf, no compile-time checking... Suffice it to say I'm not a fan :) )
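The flat-map guess above can itself be sketched in GML as a pair of parallel arrays with linear search (purely illustrative; this is not how YYG's actual implementation is known to work):

GML:
// Illustrative flat map: parallel key/value arrays with linear search.
// Cheap in memory and fast for a dozen entries, poor for thousands.
function flat_map_get(__keys, __values, __name) {
    var __n = array_length(__keys);
    for (var i = 0; i < __n; i++) {
        if (__keys[i] == __name) return __values[i];
    }
    return undefined;
}

var __keys   = ["a", "b", "c"];
var __values = [10, 20, 30];
show_debug_message(string(flat_map_get(__keys, __values, "b")));   // 20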
 