[SOLVED] experiencing weird things while using get_timer

  • Thread starter electronic_entertainments

electronic_entertainments

Guest
Hi there
I just executed this piece of code for research purposes:
GML:
// Create event
max_timer = 0;
variable  = 0;

// Step event
var timer = get_timer();
variable  = 2500;
timer     = get_timer() - timer;

if (timer > max_timer) max_timer = timer;
Here is where the weirdness starts: when I run the game at 10 FPS, the maximum value of max_timer is 1, but when I run the game at 5K FPS, the maximum value of max_timer is 145. As far as my basic understanding goes, 1 microsecond is not equal to 145 microseconds, so what causes the code to take more time to assign the variable a new value? They are not proportional, no way.

thanks for reading
 

curato

Member
I never used that function much. If you are trying to use it as more than an experiment, I have always managed timers with an alarm event and it seems to keep good time.
 

electronic_entertainments

Guest
curato said:
I never used that function much. If you are trying to use it as more than an experiment, I have always managed timers with an alarm event and it seems to keep good time.
Thanks for the reply, but that is not the point. The point is measuring time complexity: I am just curious how much time it takes my CPU to assign a value to a variable, but the result is not the same when the FPS changes and I don't know why.
 

Mnementh

Member
GMC Elder
Just measuring the maximum doesn't give very much information. I'd suggest measuring how often it takes more than 1, 2, 4, 8, and so on microseconds (I'm presuming those are the units here based on what you said). It could be that when you run your game at 5K it always takes 145us, or maybe it usually takes 1us but occasionally takes more. Knowing which would be interesting. It would also be interesting to know what happens at other speeds.
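Something like this rough, untested sketch would do it (the bucket cutoffs and variable names are just examples I made up):
GML:
// Create event
buckets = array_create(6, 0); // counts for <=1, <=2, <=4, <=8, <=16 and >16 microseconds

// Step event
var t = get_timer();
variable = 2500;
t = get_timer() - t;

if      (t <= 1)  buckets[0] += 1;
else if (t <= 2)  buckets[1] += 1;
else if (t <= 4)  buckets[2] += 1;
else if (t <= 8)  buckets[3] += 1;
else if (t <= 16) buckets[4] += 1;
else              buckets[5] += 1;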

Aside, this probably isn't telling you very much about how long it takes to assign a value to a variable. A microsecond is likely far too long a time. You might want to check how often both calls to get_timer() return the same value. Also, the get_timer() calls aren't instantaneous, so you're also measuring the second half of the first call and the first half of the second call. I'd bet that much more time is spent in get_timer() than assigning the variable. That doesn't mean this isn't interesting, but it will take some more thought to figure out how long it takes to assign a variable.
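A quick way to check that (again an untested sketch; the counter names are just examples) is to count how often the two calls return the same value:
GML:
// Create event
same_count  = 0;
total_count = 0;

// Step event
var t1 = get_timer();
variable = 2500;
var t2 = get_timer();

total_count += 1;
if (t1 == t2) same_count += 1;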

Edit: Powers of two are probably not as good as something simpler like multiples of 10 since you already know the range you're looking for.

Edit 2: Sometimes the journey is more important than the destination. ;)
 
Last edited:

Roldy

Member
Whatever you are doing, it doesn't need to be done. If you want to profile, then use a profiler. If you want to analyze runtime complexity, then do that. Not sure what you are trying to do.

However, what you are most likely seeing is the OS switching thread context away from your app. The more often you poll (the higher the FPS), the more often you will see a long context switch. It can happen at any time. But if you're burning CPU cycles for no reason, you are going to see it more often.

Your CPU only has so many cores, but at any point the OS and other apps have hundreds/thousands of threads running. It has to switch between them.
 
Last edited:

rytan451

Member
If you're attempting to measure how long it takes to set a variable for optimisation purposes, you may rest assured that it is effectively free. In your code, you're spending significantly more time getting the current timer and calculating the time elapsed than you are actually setting the variable.

As for why the timings are weird when you're at 5K FPS, I suspect it's because the computer is allocating much more time attempting to draw to the screen at 5kHz, and so execution order at the CPU level is becoming a tiny bit nondeterministic.
 

TheouAegis

Member
get_timer() measurements should be done over many iterations. The typical speed test runs an operation inside a loop THOUSANDS of times. As Roldy said, the CPU is going to slow things down at seemingly random times, meaning you will never get consistent numbers, so each test needs to be run multiple times as well to find the average. Then, when all is said and done, if you are comparing two or more operations and the average discrepancy is less than 800 or so microseconds, it is safe to say the two operations are comparable.
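Roughly, such a loop test could look like this (untested sketch; the iteration count and the operation being timed are just examples):
GML:
var iterations = 100000;
var t = get_timer();

repeat (iterations)
{
    variable = 2500; // operation under test
}

var elapsed = get_timer() - t;
show_debug_message("total: " + string(elapsed) + "us, average: " + string(elapsed / iterations) + "us per iteration");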

Another issue with speed tests like these (regardless of how you time it) is you need to be certain when comparing results that both tests are functionally identical, and rationalize any discrepancies. For example, "object.variable=1" is a lot faster than "with object variable=1", but the two operations are not functionally identical. The correct test would be to compare "if instance_exists(object) object.variable=1", and then the speeds level off. Thus the rationalization is that when it is safe to assume an instance will exist, one is faster than the other. And even then, this was a flawed argument, because with can precede blocks of code, so then you'd have to test multiple operations, at which point the scale tips and "with object variable=1" becomes faster. At that point, it's safe to just say "screw it" and use whichever operation you want.
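To illustrate what a functionally identical comparison looks like (obj_thing and the variable names are just placeholders):
GML:
// Not functionally identical on its own: dot access throws an error
// if no instance of obj_thing exists.
obj_thing.variable = 1;

// Functionally identical comparison: guard the dot access first.
if (instance_exists(obj_thing)) obj_thing.variable = 1;

// with already skips its block when no instance exists,
// and one with block can cover several assignments at once.
with (obj_thing)
{
    variable       = 1;
    other_variable = 2;
}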

Incomplete speed tests also entice people to jump to erroneous conclusions. A simple speed test will show you that reading and writing grids is hella slow, lists are much faster than grids, and arrays are much faster than lists. So the obvious conclusion of course is that you should always use an array instead of the other data structures because they are so much faster. Except, if that is all you're going to be doing with the data, then you should use an array, but that is not all grids and lists are used for. Both data structures can be shuffled and sorted; lists will even automatically sort themselves; both can be dumped to a file easily enough. Even though arrays are much faster than the other data structures, the data structures are easier to work with in situations beyond just reading and writing.
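A bare-bones version of that kind of test might look like this (untested sketch; the size is arbitrary and it only times writes):
GML:
var n = 100000;

// ds_grid writes (one row of n cells)
var grid = ds_grid_create(n, 1);
var t = get_timer();
for (var i = 0; i < n; i += 1) grid[# i, 0] = i;
var grid_time = get_timer() - t;
ds_grid_destroy(grid);

// ds_list writes
var list = ds_list_create();
t = get_timer();
for (var i = 0; i < n; i += 1) ds_list_add(list, i);
var list_time = get_timer() - t;
ds_list_destroy(list);

// Array writes
var arr = array_create(n, 0);
t = get_timer();
for (var i = 0; i < n; i += 1) arr[i] = i;
var array_time = get_timer() - t;

show_debug_message("grid: " + string(grid_time) + "us, list: " + string(list_time) + "us, array: " + string(array_time) + "us");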

One last example of jumping to erroneous conclusions: people tested reading and writing to a specific index in an array against reading and writing to a specific position in a buffer. It's a simple enough test - make an array and a buffer of the same size, pick a position, then either read or write to that position - and it showed buffers are slow because you have to use the buffer_seek() function to move around inside the buffer. However, a simple loop reading or writing every single value in the array and buffer - which allows you to not use buffer_seek() - is the same speed for both arrays and buffers. Coupled with the fact that buffers can be easily resized, easily dumped to a file, easily shared across a network, take up significantly less memory, and could be read 4x faster (I don't know why you would, but you could), it becomes clear that conclusions drawn from early speed tests were erroneous and buffers are indeed comparable to arrays.
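For the sequential case, a sketch along these lines shows why buffer_seek() drops out of the picture (again untested; sizes and types are just examples):
GML:
var n = 100000;

// Array: sequential writes
var arr = array_create(n, 0);
var t = get_timer();
for (var i = 0; i < n; i += 1) arr[i] = i;
var array_time = get_timer() - t;

// Buffer: sequential writes; buffer_write() advances the position
// on its own, so no buffer_seek() is needed inside the loop.
var buf = buffer_create(n * 4, buffer_fixed, 4);
t = get_timer();
for (var i = 0; i < n; i += 1) buffer_write(buf, buffer_s32, i);
var buffer_time = get_timer() - t;
buffer_delete(buf);

show_debug_message("array: " + string(array_time) + "us, buffer: " + string(buffer_time) + "us");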

You're going to run a speed test, you're going to come to a conclusion, and you're going to stop doing whatever you were doing before because you will conclude that what you were doing before was horribly slow. You are not going to stop and think about whether your test is incomplete, whether your test is wrong, or whether what you were doing before has greater merit in some scenarios than the alternative. And in some cases, a localized speed test such as this is more inaccurate than a global speed test. What I mean by that is: suppose you write a program to generate a room using one form of data structure because certain aspects of that data structure are known to be faster than another data structure; then for giggles you rewrite the room generation algorithm using the slower data structure just to see how much slower it is -- except, surprise surprise, the slower data structure actually yielded a faster algorithm. "The whole is greater than the sum of its parts."
 
Last edited: