I don't know whether anybody has published speed tests comparing the two operations, but I tested this myself, because I wanted to know whether it would actually matter for something I want to do.
In my test, I set up a child object parented through five layers of inheritance. I built 10,000 of them and performed a single instance-variable operation on them, using three different methods of iteration.
I figured asset_has_any_tag() would be reasonably competitive: it would pull the struct for the object_index, pull the array of tags, and if it hit a winner via an equivalence test, it would be done.
But it wasn't that close a race. It looks like with() must be working from pre-built stored lists; anyhow, it's fast.
Presuming that my with(all) might be slowing things down, I wrote all the ids to an array at creation time and tested again. That was actually worse, on average. The gap would probably tighten with a YYC build, but I suspect with() would still win every time.
Testing results:
- with() took 0-1 milliseconds.
- with(all) plus a tag test took 3-4 milliseconds.
- A manually-created array of ids took 5-6 milliseconds.
(The timings come from current_time, which has millisecond resolution.)
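If you want to reproduce these numbers with finer resolution, GameMaker also provides get_timer(), which returns microseconds since the game started. Here's a minimal sketch of the same measurement using it (variable names are mine):

GML:
// Time the with() pass in microseconds instead of milliseconds.
// get_timer() returns microseconds elapsed since the game started.
var _start = get_timer();
with(obj_ParentHead){
    hasSetVar = true;
}
show_debug_message("WITH took " + string(get_timer() - _start) + " microseconds");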
I then tested a theory: perhaps the performance of with(ancestor) would fall apart if there were lots of ultimate children, broadening the search. I duplicated the ultimate child 100 times. Answer: no. The performance of with() probably degrades a little as searches broaden: even if the ultimate ancestor stores every child, it takes some time to reach the bottom of the tree, even if it's just a switch statement. But it's not N^2 degradation or anything like that; I'm presuming it's linear. I didn't test what happens if all those duplicated children are instantiated, but I'm guessing that doesn't affect with().
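As a quick sanity check on that theory (not part of the original timings), instance_number() on a parent object also counts instances of its children, so you can confirm the ancestor really "sees" every instance before timing with():

GML:
// Hypothetical sanity check, assuming the same objects as the test below:
// instance_number() on a parent counts instances of its child objects too,
// so after building the 10,000 obj_Child instances this should report 10000.
show_debug_message("Under obj_ParentHead: " + string(instance_number(obj_ParentHead)));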
Of course, this usually doesn't matter at all; what goes on inside a with() is usually far more important than the search itself. In the case of what I'm messing with today, the search speed difference really doesn't matter; what happens afterwards is where all the real cost is. But if you've been beating yourself up trying to get rid of a nested with(), or if you just like going to extreme lengths to optimize something, these results suggest that with() beats the other methods by a considerable margin.
If anybody is interested in testing this out, here's the source I used:
GML:
/// @description Test (Step event; flags, counters and listOfIDS are initialised elsewhere)

// Build 10,000 instances on the first step and record their ids.
if(hasBuiltThings == false){
    for(var i = 0; i < 10000; i++){
        var theThing = instance_create_depth(x, y, 100, obj_Child);
        listOfIDS[arrayPos] = theThing.id;
        arrayPos += 1;
    }
    hasBuiltThings = true;
    return;
}

// Wait 30 frames to let things settle.
if(global.counterOne < 30){
    global.counterOne += 1;
    return;
}

// Test 1: with() on the ultimate ancestor.
if(global.hasCheckedWith == false){
    var timeStamp = current_time;
    with(obj_ParentHead){
        hasSetVar = true;
    }
    global.hasCheckedWith = true;
    show_debug_message("Time taken, WITH: " + string(current_time - timeStamp));
    return;
}

// Wait another 30 frames.
if(global.counterOne < 60){
    global.counterOne += 1;
    return;
}

// Test 2: with(all) plus a tag check.
if(global.hasCheckedTags == false){
    var timeStamp = current_time;
    with(all){
        if(asset_has_any_tag(object_index, "check_tag", asset_object)){
            hasSetVar2 = true;
        }
    }
    show_debug_message("Time taken, TAGS, using WITH ALL: " + string(current_time - timeStamp));
    global.hasCheckedTags = true;
    return;
}

// Wait another 30 frames.
if(global.counterOne < 90){
    global.counterOne += 1;
    return;
}

// Test 3: iterate a manually-built array of ids.
if(global.hasCheckedTagsARRAY == false){
    var timeStamp = current_time;
    for(var i = 0; i < 10000; i++){ // was 9999, which skipped the last instance
        if(asset_has_any_tag(listOfIDS[i].object_index, "check_tag", asset_object)){
            hasSetVar2 = true;
        }
    }
    show_debug_message("Time taken, TAGS, using ARRAY: " + string(current_time - timeStamp));
    global.hasCheckedTagsARRAY = true;
    return;
}