get_timer() measurements should be done over many iterations. The typical speed test runs the operation inside a loop THOUSANDS of times. As Roldy said, the CPU is going to slow things down at seemingly random times, meaning you will never get consistent numbers, so each test also needs to be run multiple times to find the average. Then when all is said and done, if you are comparing two or more operations and the average discrepancy is less than 800 or so microseconds, it is safe to say the operations are comparable.
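Something like this sketch is the usual shape of such a test (the trial counts and the operation under test are placeholders, not a recommendation):

```gml
// Minimal speed test harness sketch: many iterations per trial,
// several trials averaged to smooth out random CPU slowdowns.
var trials = 10;
var iterations = 10000;
var total = 0;

for (var t = 0; t < trials; t++) {
    var start = get_timer(); // microseconds since game start
    for (var i = 0; i < iterations; i++) {
        // Operation under test goes here; sqrt() is just a placeholder
        var n = sqrt(i);
    }
    total += get_timer() - start;
}

// Average microseconds per trial of 10000 iterations
show_debug_message("Average: " + string(total / trials) + " us");
```

Run the whole harness once per operation you want to compare, and only trust differences well above the noise floor.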
Another issue with speed tests like these (regardless of how you time them) is you need to be certain when comparing results that both tests are functionally identical, and you need to rationalize any discrepancies. For example, "object.variable=1" is a lot faster than "with object variable=1", but the two operations are not functionally identical: the direct assignment throws an error if no instance of the object exists, while with simply does nothing. The correct test is to compare "if instance_exists(object) object.variable=1" against the with version, and then the speeds level off. Thus the rationalization is that when it is safe to assume an instance will exist, one is faster than the other. And even then, this was a flawed argument, because with can execute whole blocks of code, so then you'd have to test multiple operations, at which point the scale tips and "with object variable=1" becomes faster. At that point, it's safe to just say "screw it" and use whichever operation you want.
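As a sketch, assuming a hypothetical object named obj_enemy, the "fair" versions of the two tests look like this:

```gml
// Test A: direct assignment, guarded so it cannot error
// when no instance of obj_enemy exists
if (instance_exists(obj_enemy)) obj_enemy.variable = 1;

// Test B: with() silently skips its body when no instance
// exists, so it needs no guard to be functionally identical
with (obj_enemy) variable = 1;

// Once several assignments are involved, with() resolves the
// scope once for the whole block, which is where it pulls ahead
with (obj_enemy) {
    variable = 1;
    other_variable = 2;
}
```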
Incomplete speed tests also entice people to jump to erroneous conclusions. A simple speed test will show you that reading and writing grids is hella slow, lists are much faster than grids, and arrays are much faster than lists. So the obvious conclusion, of course, is that you should always use an array instead of the other data structures because it is so much faster. Except, if reading and writing is all you're going to be doing with the data, then yes, you should use an array, but that is not all grids and lists are used for. Both data structures can be shuffled and sorted; lists even have a built-in sort; both can be dumped to a file easily enough. Even though arrays are much faster than the other data structures, the data structures are easier to work with in situations beyond just reading and writing.
One last example of jumping to erroneous conclusions: people tested reading and writing a specific index in an array against reading and writing a specific position in a buffer. It's a simple enough test (make an array and a buffer of the same size, pick a position, then read or write at that position) and it showed buffers are slow, because you have to use the buffer_seek() function to move around inside the buffer. However, a simple loop reading or writing every single value in the array and the buffer, which lets you avoid buffer_seek() entirely, runs at the same speed for both. Coupled with the fact that buffers can be easily resized, easily dumped to a file, easily shared across a network, take up significantly less memory, and could be read 4x faster (I don't know why you would, but you could), it becomes clear that the conclusions drawn from the early speed test were erroneous and buffers are indeed comparable to arrays.
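A sketch of that loop-based comparison, with illustrative sizes and types: buffer_read() and buffer_write() advance the seek position automatically, so no per-element buffer_seek() is needed.

```gml
// Sequential access sketch: 10000 u32 values in an array vs a buffer
var count = 10000;
var arr = array_create(count, 0);
var buf = buffer_create(count * 4, buffer_fixed, 4);

// Sequential writes: one seek to the start, then buffer_write()
// moves the position forward on its own
buffer_seek(buf, buffer_seek_start, 0);
for (var i = 0; i < count; i++) {
    arr[i] = i;
    buffer_write(buf, buffer_u32, i);
}

// Sequential reads: again only one seek, then read straight through
buffer_seek(buf, buffer_seek_start, 0);
for (var i = 0; i < count; i++) {
    var a = arr[i];
    var b = buffer_read(buf, buffer_u32);
}

buffer_delete(buf); // buffers are not garbage collected
```

Wrap each of the two loops in the timing harness separately if you want to reproduce the comparison.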
You're going to run a speed test, you're going to come to a conclusion, and you're going to stop doing whatever you were doing before because you will conclude that it was horribly slow. You are not going to stop and think about whether your test is incomplete, whether your test is wrong, or whether what you were doing before has greater merit than the alternative in some scenarios. And in some cases, a localized speed test like this is more inaccurate than a global one. What I mean by that is: suppose you write a program to generate a room using one data structure because certain operations on it are known to be faster than on another data structure; then for giggles you rewrite the room generation algorithm using the slower data structure just to see how much slower it is. Except, surprise surprise, the slower data structure actually yields a faster algorithm. "The whole is greater than the sum of its parts."