Mouse press management for GUI sprites and in-game objects

MajesticThe

Guest
First of all, I would like to say hello to the community, as I am slowly spending more and more time on my GMS2 project and I've been reading this place a lot lately :)

The problem I am currently having is that I don't think I understand how the GUI is supposed to work in GMS2. I did some research a few weeks ago about the GUI layer vs clickable buttons. From what I understand, putting an object in the GUI is a no-go, and people suggest workarounds where you draw_sprite in the Draw GUI event and then handle clicks in the Step event with mouse_check_button plus point_in_rectangle/point_in_circle.

On the other hand, I just ran into an issue where overlapping objects all respond to a single mouse click. Here most of the solutions I've found are based on some sort of mouse input manager that goes over all the instances in scope and picks the best one based on depth.

Now those two systems don't cooperate when it comes to clicking a GUI button over an object, since the GUI "object" simply isn't there. This in general made me question my whole approach to layers and mouse management. I have a strong feeling this must be a very common problem and that there is a best-practice solution I'm missing, so I'm open to suggestions!
 

samspade

Member
First of all, I would like to say hello to the community, as I am slowly spending more and more time on my GMS2 project and I've been reading this place a lot lately :)

The problem I am currently having is that I don't think I understand how the GUI is supposed to work in GMS2. I did some research a few weeks ago about the GUI layer vs clickable buttons. From what I understand, putting an object in the GUI is a no-go, and people suggest workarounds where you draw_sprite in the Draw GUI event and then handle clicks in the Step event with mouse_check_button plus point_in_rectangle/point_in_circle.

On the other hand, I just ran into an issue where overlapping objects all respond to a single mouse click. Here most of the solutions I've found are based on some sort of mouse input manager that goes over all the instances in scope and picks the best one based on depth.

Now those two systems don't cooperate when it comes to clicking a GUI button over an object, since the GUI "object" simply isn't there. This in general made me question my whole approach to layers and mouse management. I have a strong feeling this must be a very common problem and that there is a best-practice solution I'm missing, so I'm open to suggestions!
It's neither correct nor incorrect to say that you can't put an object on the GUI layer. GML just defaults most things (like mouse_x and mouse_y) to room space because that's the most common use. However, it is easy to switch over so that you can use all of the normal functions. You really only need to do three things: 1) use device_mouse_x_to_gui and device_mouse_y_to_gui to get the mouse's position in GUI space, 2) disable the normal drawing by adding a blank Draw event, and 3) draw in the Draw GUI event instead.

Code:
///step event
if (mouse_check_button_pressed(mb_left)) && (position_meeting(device_mouse_x_to_gui(0), device_mouse_y_to_gui(0), id)) {
    //you've clicked on the object in gui coordinates
}

///draw event
//leave it blank

///draw gui event
draw_self();
 
MajesticThe

Guest
Hi.
As much as this is the case, the conflict between the two approaches arises not because the object is drawn in the GUI, but because once it's drawn in the GUI the mouse press is no longer based on the object's mask or id.
It's just a mathematical check of coordinates. This means that if I were to introduce a mouse press controller, I would have to manually take into account all of the unresponsive sprites, including a manual check of whether they are currently drawn (e.g. an inventory menu that should not be clicked through).
That seems like an awful lot of work for something that seems obvious to have in almost any game:
1) You want the GUI to be interactive.
2) You don't want your GUI to let mouse clicks through when windows are open.
 
I think you have a fundamental misunderstanding of the Draw GUI event. It makes no sense to talk of "placing objects on the GUI", because the Draw GUI event is simply the last place that is checked for draw commands. To put it another way, I think using the word GUI in the Draw GUI event is a little misleading. A better name might be the "Draw On Top" event, as it's simply the last event that draws anything (and the last thing to be drawn is drawn on top of everything else). It has nothing to do with any inherent GUI properties that you associate with other programs.

This is also why clicking somewhere can trigger click events in multiple instances. The GUI isn't "separate", and it doesn't let you place "buttons" or anything else that is any different from any other instance. If you want depth ordering when clicking on something, you have to program it in yourself, regardless of whether the object draws itself in the normal Draw event or the Draw GUI event.
 
1) The GUI -is- interactive, it just has a different coordinate space than normal object positioning (and this is completely as it should be: the GUI always starts at x=0, y=0 and goes to x=display_get_gui_width(), y=display_get_gui_height(), because it does not scroll as the view moves).
2) The Draw GUI event is SIMPLY a draw event. It's not a separate system beyond the coordinate change. You are only ever drawing something in the GUI; that thing does not "exist" in a "GUI layer". A "GUI layer" itself does not exist. It is simply a way to make something the last thing to be drawn, hence it is drawn over everything else (useful for a GUI, but not specific to a GUI). What happens if you have two buttons that are both drawn in the Draw GUI event and overlap each other? You would have to program an object depth sorting system yourself. So the problem you are facing exists regardless of Draw GUI or not.
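If it helps, a rough sketch of that kind of depth sort, done in a single controller object's Global Left Pressed event, might look like this (obj_clickable is just an assumed parent object for everything you want to be clickable; remember that a lower depth value means it is drawn on top):

Code:
/// Global Left Pressed event of a controller object - rough sketch
var top_inst = noone;
var top_depth = infinity;   // anything under the mouse will beat this
with (obj_clickable) {
    if (position_meeting(mouse_x, mouse_y, id) && depth < top_depth) {
        top_depth = depth;
        top_inst = id;
    }
}
if (top_inst != noone) {
    with (top_inst) {
        // only the topmost clicked instance runs its "clicked" code
    }
}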
 
MajesticThe

Guest
@RefresherTowel you are all correct, but you are forgetting that once you draw your object in the GUI it no longer responds to mouse press events. It becomes unclickable, and thus the mouse press has to be handled with mouse_check_button and a shape check. That's what causes most of the issues and, in my opinion, makes the GUI not interactive by nature.
 
The Draw GUI event is simply a Draw event. It's not a special layer or anything like that. Since it uses a different coordinate system, you have to check against that coordinate system. This is why I said its name is a bit misleading. It's not there JUST for GUI events, and they probably -should- add mouse events for the GUI coordinate system, but since they haven't, it's just as easy to do if (point_in_rectangle(device_mouse_x_to_gui(0), device_mouse_y_to_gui(0), gui_button_x1, gui_button_y1, gui_button_x2, gui_button_y2)) { to check for mouse collision. If you place the object in the room at the position you want it to be in the GUI, then you can simply use its x and y positions plus width and height for the gui_button_x1, etc., variables.
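For example, something like this in the button object's Step event (a rough sketch that assumes the sprite's origin is top-left and that the instance's room x/y is also where it is drawn on the GUI):

Code:
/// Step event of the button object - rough sketch
var gx = device_mouse_x_to_gui(0);
var gy = device_mouse_y_to_gui(0);
if (mouse_check_button_pressed(mb_left)
&& point_in_rectangle(gx, gy, x, y, x + sprite_width, y + sprite_height)) {
    // the button was pressed, checked in GUI space
}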
 

samspade

Member
@RefresherTowel you are all correct, but you are forgetting that once you draw your object in the GUI it no longer responds to mouse press events. It becomes unclickable, and thus the mouse press has to be handled with mouse_check_button and a shape check. That's what causes most of the issues and, in my opinion, makes the GUI not interactive by nature.
Again, this is wrong, or at least partially wrong. You are right that the specific GameMaker Mouse Pressed event (not Global Mouse Pressed) will not work as intended, as the object 'occupies' a different coordinate space and Mouse Pressed is (as far as I know) hard-coded to use room coordinates. However, that is pretty much the only thing that doesn't work. You do not need to use the Mouse Pressed event, and in fact many of us don't. You can still use the Global Mouse Pressed event and simply use the mouse's GUI coordinates rather than room coordinates (which GML tracks for you), or, as I posted above, you can simply do a position_meeting check in the Step event in order to use the sprite and mask. This is what I do in my projects.

Your other issue is more about how to avoid clicking on things below the topmost object, but these issues aren't connected. Without knowing more about what you want to do (e.g. a pause menu, or just not interacting with the world when you click on your HUD) it's hard to say what you should do. For the second, one relatively simple solution is to treat certain areas of GUI space as unclickable, so that clicks there never reach objects in the room. Like the following:

Code:
///mouse pressed event for other objects
gui_x = device_mouse_x_to_gui(0)
gui_y = device_mouse_y_to_gui(0)

if (point_in_rectangle(gui_x, gui_y, 0, 0, 100, 100)) {
    exit;
}

// the rest of your mouse pressed code
The above would make it so that if an object that 'exists' in room space is clicked on (using the Mouse Pressed event), but the click falls within the box (0, 0, 100, 100) in GUI space, the rest of the Mouse Pressed event does not run.

Personally, I would say that a better solution is to ditch the Mouse Pressed event altogether, have an input object that tracks all inputs the way you want, and then have other objects reference the input object.
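A bare-bones version of such an input object might look something like this (obj_input and the variable names are just placeholders for whatever you decide to track):

Code:
/// obj_input - Create event
left_pressed = false;
left_held = false;
gui_x = 0;
gui_y = 0;

/// obj_input - Begin Step event
left_pressed = mouse_check_button_pressed(mb_left);
left_held = mouse_check_button(mb_left);
gui_x = device_mouse_x_to_gui(0);
gui_y = device_mouse_y_to_gui(0);

/// any other object - Step event
if (obj_input.left_pressed && position_meeting(obj_input.gui_x, obj_input.gui_y, id)) {
    // this instance was clicked, checked in GUI coordinates
}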
 
MajesticThe

Guest
Ok, to clarify, I agree with everything you guys are saying. I understand it too (I think).
To pinpoint the issue even more:
I found a few simple solutions for resolving topmost mouse presses on objects and their depth. In most cases the mouse press returns the id of the clicked object, and then I can compare depths.
However, this solution fails utterly because I can click through my GUI, which is not made of object instances on the screen; it uses draw commands within the Draw GUI event. This means extra effort to build a mouse semaphore between the GUI and the objects in the room.
If I have to I will, of course, but given how many aspects of the Draw GUI event have been unintuitive so far, I thought I was missing something. I mean seriously, who wants to click through GUI items? Especially when you open something like a pause menu or an inventory window that covers the entire camera.
 

samspade

Member
Ok, to clarify, I agree with everything you guys are saying. I understand it too (I think).
To pinpoint the issue even more:
I found a few simple solutions for resolving topmost mouse presses on objects and their depth. In most cases the mouse press returns the id of the clicked object, and then I can compare depths.
However, this solution fails utterly because I can click through my GUI, which is not made of object instances on the screen; it uses draw commands within the Draw GUI event. This means extra effort to build a mouse semaphore between the GUI and the objects in the room.
If I have to I will, of course, but given how many aspects of the Draw GUI event have been unintuitive so far, I thought I was missing something. I mean seriously, who wants to click through GUI items? Especially when you open something like a pause menu or an inventory window that covers the entire camera.
It doesn't seem like you fully understand, because the suggestion I'm making would return the instance id you clicked on in the GUI layer using the standard collision functions, if you wanted it for some reason.

However, if it is a pause menu you're making, then you're better off simply deactivating all instances that aren't needed. If you need them to stay active, you can accomplish the same thing with more work by having a paused variable and, while it's true, not letting their clicked-on code run.
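As a rough sketch of the deactivation route (obj_pause_controller is just a stand-in name for whichever instance opens and closes the menu, and global.paused is a variable you would set yourself):

Code:
/// obj_pause_controller - when the pause menu opens
instance_deactivate_all(true);   // true = keep the calling instance active

/// obj_pause_controller - when the pause menu closes
instance_activate_all();

/// alternative: keep everything active, but gate each object's click code
if (global.paused) exit;   // set global.paused yourself when the menu opens/closes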
 
MajesticThe

Guest
So are you saying that if I draw the sprites within the Draw GUI event (multiple sprites drawn for one object), they will also intercept the mouse press and return the instance id of the object and its depth?
 

Nocturne

Friendly Tyrant
Forum Staff
Admin
Use the device_mouse_x/y_to_gui functions to translate the mouse position to the GUI space, then call point_in_rectangle or something to detect if the press is within the area of the instance being drawn to the GUI.
 

samspade

Member
So are you saying that if I draw the sprites within the Draw GUI event (multiple sprites drawn for one object), they will also intercept the mouse press and return the instance id of the object and its depth?
You can only detect the sprite's mask, regardless of whether you're using the Draw GUI or Draw event, so I'm not sure what you mean by multiple sprites, as that wouldn't work in either case, but otherwise yes.


Neither the Draw event nor the Draw GUI event has anything to do with an object's location. They only determine how the game interprets that location.

An object has a location: its x and y. These are just numbers. Importantly, an object also has a bounding box, determined by its mask: bbox_top, bbox_left, bbox_right, bbox_bottom. These are also just numbers.

There is coordinate system A - Room Space
There is coordinate system B - GUI space

Your mouse has a position in room space - mouse_x and mouse_y
Your mouse has a position in GUI space - device_mouse_x_to_gui(0) and device_mouse_y_to_gui(0)

The events, functions, and numbers you use simply determine how all of the above is interpreted.

So if you use a draw event, the game will draw the object at the object's x and y relative to room space.
If you use the draw gui event, the game will draw the object at the object's x and y relative to gui space.

In neither case has the object's x, y, or bounding box been affected.

If you check for a collision with position_meeting or instance_position (the two collision functions you can use to check for a collision with a mask using an x and y), you are asking whether the x and y you give the function falls within the bounding box of the object. If you say:

Code:
position_meeting(mouse_x, mouse_y, id);
instance_position(mouse_x, mouse_y, id);
You are using the mouse's x and y position in room space to check against the bounding box.

If you say:

Code:
position_meeting(device_mouse_x_to_gui(0), device_mouse_y_to_gui(0), id);
instance_position(device_mouse_x_to_gui(0), device_mouse_y_to_gui(0), id);
You are using the mouse's x and y position in GUI space to check against the bounding box.

Neither of these methods affect either the x and y or the bounding box.

The only thing you can't do (as far as I know) is use the built-in GM Mouse Pressed events, as those will only use mouse_x and mouse_y (the mouse's position in room space) to check against the bounding box.

If you want to use an instance's mask to check for collision in GUI space all you need to do is:

Code:
///step event
if (mouse_check_button_pressed(mb_left)) && (position_meeting(device_mouse_x_to_gui(0), device_mouse_y_to_gui(0), id)) {
   //you've clicked on the object in gui coordinates
}

//global mouse pressed event
if (position_meeting(device_mouse_x_to_gui(0), device_mouse_y_to_gui(0), id)) {
   //you've clicked on the object in gui coordinates
}
If you want to return the specific instance id, do the same using instance_position.
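For example, in a controller's Global Left Pressed event (obj_button here is a stand-in for whatever parent object your clickable instances use):

Code:
/// Global Left Pressed event - rough sketch
var inst = instance_position(device_mouse_x_to_gui(0), device_mouse_y_to_gui(0), obj_button);
if (inst != noone) {
    show_debug_message("clicked on instance " + string(inst));
}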

Again, an object's x, y, and (more importantly) bounding box are only numbers. You can use them in room space or GUI space as you want. It is slightly easier to use them in room space because GM defaults to that, giving you built-in variables for mouse_x and mouse_y, a built-in event that checks the room-space mouse position against the bounding box, and automatic drawing of the object in room space. But all of these things are only defaults and can be changed without much effort (or, in the case of Mouse Pressed, replaced with an alternative).
 
MajesticThe

Guest
@samspade First of all, great post. I actually waited until I had a few free hours to make sure I'd soak it up.
The only question that remains is: what defines the relationship of an object with a sprite?
All of my GUI objects, due to the lack of ability to use the Mouse Pressed event, have been created sprite-less, and they call multiple draw_sprites within the Draw GUI event.
Is it expected that those sprites and their masks will be used to define collisions with the object that calls the draw?
 

samspade

Member
@samspade First of all, great post. I actually waited until I had a few free hours to make sure I'd soak it up.
The only question that remains is: what defines the relationship of an object with a sprite?
All of my GUI objects, due to the lack of ability to use the Mouse Pressed event, have been created sprite-less, and they call multiple draw_sprites within the Draw GUI event.
Is it expected that those sprites and their masks will be used to define collisions with the object that calls the draw?
In one sense, the answer to what defines the relationship of an object (if by object you mean location and mask for the purpose of collisions) with a sprite is: nothing. There is no relationship. But that isn't entirely correct. The better answer is probably that GML defaults a number of things to be connected, all of which can be changed (at least all the ones I can think of right now).

By default, GM will draw the sprite you have given an object at the x and y location of an instance of that object in room space, where x and y correspond to the origin point of the sprite assigned to the object. It will use all the built-in variables (sprite_index, image_index, x, y, image_xscale, image_yscale, image_angle, image_blend, and image_alpha), along with variables that affect those (such as image_speed), to draw and animate it. An object's mask for collision purposes is, by default, also that of the sprite.

All of this can be changed, with the exception of the Mouse Pressed events, which must use mouse_x and mouse_y to check for collisions as far as I know. In other words, those events always check in room space, so you can't use them to check GUI space.

So you can assign sprites to objects and treat everything as normal, then disable the normal drawing by creating a Draw event (which is the signal to GM that you want to control the drawing) and leaving it blank. Instead, use the Draw GUI event. Assuming your GUI layer isn't at a different resolution, you can just put draw_self() in there. That takes care of drawing it on the GUI. To do a collision check with the mouse using GUI coordinates you could use either of the following:

Code:
///step event
if (mouse_check_button_pressed(mb_left)) && (position_meeting(device_mouse_x_to_gui(0), device_mouse_y_to_gui(0), id)) {
   //you've clicked on the object in gui coordinates
}

//global mouse pressed event
if (position_meeting(device_mouse_x_to_gui(0), device_mouse_y_to_gui(0), id)) {
   //you've clicked on the object in gui coordinates
}
I'm not sure I understand this question: "Is it expected that those sprites and their masks will be used to define collision with the object that calls the draw?"
 
Yeah, things are less connected than you are thinking. You could make a single object that has no sprite assigned and code your entire game within that object: sprites, collisions and everything, if you wanted. All you ever need to do for "collisions" is to mathematically define the boundaries of the collision and check whether position variables are inside those boundaries.

If you simply ignore the Draw events, GM will default to drawing the assigned sprite at the object's x and y position within the room, and its Collision Mask will define the collision boundaries. You can then use GM's built-in Collision events and Mouse events.

However, it's entirely possible to ignore all of that completely, define your own positional variables:
Code:
my_pos_x = 100;
my_pos_y = 100;
Draw the sprite at those position variables:
Code:
draw_sprite(my_sprite,0,my_pos_x,my_pos_y);
And check for collisions using those position variables combined with the drawn sprite's dimensions:
Code:
var inst = instance_position(my_pos_x,my_pos_y,obj_collider);
if (instance_exists(inst)) {
   // Collision code here
}
// Or
var spr_width = sprite_get_width(my_sprite);
var spr_height = sprite_get_height(my_sprite);
var x1 = my_pos_x;
var y1 = my_pos_y;
var x2 = x1+spr_width;
var y2 = y1+spr_height;
if (point_in_rectangle(mouse_x,mouse_y,x1,y1,x2,y2)) {
   // Mouse is inside of the sprite
}
// Of course, the above examples are in the Draw event, if you did it in the Draw GUI you would have to alter the position checks to account for the different coordinate system, but it is exactly the same thing in essence
And you've effectively decoupled everything from everything else. If you were to program in some other language that is not a game engine (C++ for example), you would actually be doing all this by hand anyway. GMS is really just useful as a sprite renderer, with some nice functions thrown in on the side.

As for your multiple sprites being drawn in the Draw GUI event, all you need to do is keep track of the positions you draw each sprite at in some variables, and run a positional check against those variables.
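A rough sketch of that bookkeeping, with made-up names (spr_slot, slot_count, slot_x, slot_y) and assuming the sprite's origin is top-left:

Code:
/// Create event - store where each sprite will be drawn in GUI space
slot_count = 3;
for (var i = 0; i < slot_count; i++) {
    slot_x[i] = 32 + i * 72;
    slot_y[i] = 32;
}

/// Draw GUI event
for (var i = 0; i < slot_count; i++) {
    draw_sprite(spr_slot, 0, slot_x[i], slot_y[i]);
}

/// Step event - check each drawn sprite against the mouse in GUI space
if (mouse_check_button_pressed(mb_left)) {
    var gx = device_mouse_x_to_gui(0);
    var gy = device_mouse_y_to_gui(0);
    for (var i = 0; i < slot_count; i++) {
        if (point_in_rectangle(gx, gy, slot_x[i], slot_y[i],
            slot_x[i] + sprite_get_width(spr_slot), slot_y[i] + sprite_get_height(spr_slot))) {
            // the mouse pressed on slot i
        }
    }
}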

It's often recommended NOT to do this in the Draw events themselves, but in my humble opinion it is fine to do positional checks for collisions in the Draw events (the reason it's not recommended is that if an instance is not visible, none of its Draw event code will run at all, and this can lead to some odd bugs that require you to properly understand what is going on in order to squash).

So if you have multiple sprites being drawn, just run a mouse position check for each one, using the code @samspade has given you, and you will be able to check whether the mouse has entered or exited each sprite. Since you are manually drawing the sprites and probably not using the instance's position for them (i.e. you are drawing them at positions that do not correspond to the x and y of the instance), the bounding box and sprite dimensions for each sprite are completely ignored, so you have to account for each one like I have in the second chunk of code above.
 

samspade

Member
It's often recommended NOT to do this in the Draw events themselves, but in my humble opinion it is fine to do positional checks for collisions in the Draw events (the reason it's not recommended is that if an instance is not visible, none of its Draw event code will run at all, and this can lead to some odd bugs that require you to properly understand what is going on in order to squash).
Not to sidetrack the conversation, but the other reason it is not recommended to run code besides draw code in Draw events is that Draw events run once per active view. That's generally not a concern, but with multiple views you would be doubling (or more) your function calls, which is both slower and could have other unexpected results (e.g. incrementing a variable twice in a single step).
 
MajesticThe

Guest
Thank you guys for all of this information, I really appreciate it.
The more I read, the more I understand that there is no way around the issue.
Like @RefresherTowel mentioned, I am indeed detaching all my GUI objects from sprites, while my interactive in-game objects remain true to their sprites, locations and masks.
The idea was to make use of the following code:
Code:
if(mouse_check_button_pressed(mb_left)){
    var click_id = instance_position(mouse_x,mouse_y,all);
    if (click_id != noone) {
        show_debug_message(click_id);
    }
}
Based on that I would compare depths and lock mouse clicks, but since my GUI is "hand drawn" from multiple sprites, it doesn't return any object id.
I'm afraid that I will have to manage x and y positions in the GUI manually: check which windows are open and which areas are immune to mouse presses.

As I don't see any alternatives, I am even considering externalizing all of my mouse press management into one object.
 

samspade

Member
Thank you guys for all of this information, I really appreciate it.
The more I read, the more I understand that there is no way around the issue.
Like @RefresherTowel mentioned, I am indeed detaching all my GUI objects from sprites, while my interactive in-game objects remain true to their sprites, locations and masks.
The idea was to make use of the following code:
Code:
if(mouse_check_button_pressed(mb_left)){
    var click_id = instance_position(mouse_x,mouse_y,all);
    if (click_id != noone) {
        show_debug_message(click_id);
    }
}
Based on that I would compare depths and lock mouse clicks, but since my GUI is "hand drawn" from multiple sprites, it doesn't return any object id.
I'm afraid that I will have to manage x and y positions in the GUI manually: check which windows are open and which areas are immune to mouse presses.

As I don't see any alternatives, I am even considering externalizing all of my mouse press management into one object.
Having an input object is generally a good idea. Some games won't need it, but if you have any kind of complex interaction between objects it is generally easier. You can do all the checks you need once, store them in different variables (e.g. left_pressed, left_held), and then have instances reference them (e.g. input.left_pressed, input.left_held). These checks can be as complicated as you want. For example, your input object could check whether you clicked on a GUI button and, if so, set left_pressed to false.
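A rough sketch of that last idea, where the input object "consumes" a click that lands on the HUD (hud_x1, hud_y1, hud_x2, hud_y2 are placeholder bounds you would set to match whatever you draw in the Draw GUI event):

Code:
/// obj_input - Begin Step event - rough sketch
left_pressed = mouse_check_button_pressed(mb_left);
var gx = device_mouse_x_to_gui(0);
var gy = device_mouse_y_to_gui(0);

// if the press landed inside the HUD area (GUI space), swallow it so that
// world objects reading obj_input.left_pressed ignore this click
if (left_pressed && point_in_rectangle(gx, gy, hud_x1, hud_y1, hud_x2, hud_y2)) {
    left_pressed = false;
}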

That said, if this was working for you:

Code:
if(mouse_check_button_pressed(mb_left)){
   var click_id = instance_position(mouse_x,mouse_y,all);
   if (click_id != noone) {
       show_debug_message(click_id);
   }
}
Then this is all it needs to be changed to:

Code:
if(mouse_check_button_pressed(mb_left)){
   var click_id = instance_position(device_mouse_x_to_gui(0), device_mouse_y_to_gui(0), all);
   if (click_id != noone) {
       show_debug_message(click_id);
   }
}
as it will do exactly the same thing, but in GUI space rather than room space.
 
MajesticThe

Guest
Just to let you know, I double-checked your suggestion of device_mouse_x_to_gui(0), device_mouse_y_to_gui(0) and unfortunately it still doesn't return the id of the sprite-less objects that use draw_sprite in the Draw GUI event.
 

samspade

Member
Just to let you know, I double-checked your suggestion of device_mouse_x_to_gui(0), device_mouse_y_to_gui(0) and unfortunately it still doesn't return the id of the sprite-less objects that use draw_sprite in the Draw GUI event.
Yes. An object needs a mask in order for it to use the collision functions. However, my point is that there is no difference between the code you posted and the code I posted other than the position in space they check. If the device_mouse_x_to_gui version doesn't work, then neither would mouse_x/y. In other words, the code you wanted to use won't work in your setup as is, regardless of whether the object is being drawn in (or thought of as in) GUI or room space.

If you want to be able to use any collision function, you need to give the object a mask. Otherwise the question of 'where' it is is meaningless for collision functions. My point is simply that if you want to use objects for your GUI layer and use the collision functions, you can, and you can do it exactly like you would in room space. Your current method (sprite-less) will not work with collision functions in any layer or coordinate space. This isn't a problem, it's just different.
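In practice that can be as simple as the following (spr_button is an assumed sprite asset; its mask defines the clickable area):

Code:
/// Create event - give the otherwise sprite-less object a sprite, so it has a mask
sprite_index = spr_button;

/// Draw event - left intentionally blank, so nothing is drawn in room space

/// Draw GUI event
draw_self();   // drawn on the GUI, while the mask still sits at the instance's x/y

/// Step event
if (mouse_check_button_pressed(mb_left)
&& position_meeting(device_mouse_x_to_gui(0), device_mouse_y_to_gui(0), id)) {
    // clicked, using the sprite's actual mask in GUI coordinates
}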
 
I feel like a lot of this thread is literally "If you want to make an apple pie from scratch, first you must invent the universe."

It's totally true that GMS lacks in the UI department. Trying to make a text box or a scroll bar or anything of that ilk means that you have to code the entire thing. This can be a genuine pain in the goddamn arse. However, after having gone through all the pain, we all come out as better programmers, having had to learn the exquisite pain that is programming a UI from scratch. Beyond that, I don't know what to say. It sucks, GMS could probably do better with a generic version of a textbox, button, scrollbar, etc, that we could then customise but hey, we all learn more through having to do it on our own.

I can personally say that the first time I coded a scroll bar it seemed like black magic to me, but having to repeatedly code things like that in order to fit them into my games has given me -way- more breadth of knowledge than I would've had if I could just type a function/D'n'D something in there to make it work. Does that make a game engine more or less useful? I'd err on the side of more, but that's just me.
 