Just wanted to chip in a few thoughts:
- Generally, the primary reason a game should have resolution change support is so that the game can look "correct" at multiple different playable resolutions, rather than for a straight-up performance scaling feature.
Unless we are talking crazy-high resolutions, the render resolution itself is unlikely to be the main source of slow-down in your game. What tends to happen instead is that as the resolution grows, the already-existing inefficiencies in your draw process get compounded.
To go over a few common issues that nuke the performance of games (far more than resolution itself does):
- Fillrate saturation:
When you render any object to the screen, it has to draw pixels. If you draw multiple objects on top of each other, you can end up rendering the same pixel multiple times. Even at quite a small resolution, if every pixel is drawn 4 times but you only ever see the colour of the final one, you are wasting a lot of processing power.
Solutions:
- Re-structure your draw order so that the top-most object is rendered first. (The reason for this will become clear in a moment).
- For non-transparent sprites (we cannot apply this optimisation to transparent objects, as they rely on compounding sequences of colours and are thus the biggest offenders for fillrate saturation), we can enable alpha testing with draw_set_alpha_test(true); This discards pixels whose alpha falls below a threshold (set with draw_set_alpha_test_ref_value), so fully transparent pixels cost nothing to process.
- We can also call draw_enable_alphablend(false); to turn alpha blending off entirely for that pass.
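Putting those two toggles together, here is a rough sketch of an opaque-sprite pass (GMS 1.4 function names; obj_opaque_parent is a hypothetical parent object for your non-transparent sprites):

```gml
// Draw event of a controller object: render all opaque sprites first,
// with blending off and alpha testing on, so empty pixels are discarded cheaply.
draw_enable_alphablend(false);      // opaque pixels don't need blending
draw_set_alpha_test(true);          // skip pixels whose alpha is below the reference
draw_set_alpha_test_ref_value(254); // treat anything not fully opaque as discardable

with (obj_opaque_parent)            // hypothetical parent for opaque sprites
{
    draw_self();
}

// Restore defaults before drawing anything semi-transparent
draw_set_alpha_test(false);
draw_enable_alphablend(true);
```

The exact reference value is down to your art: 254 keeps only fully opaque pixels, while something like 128 tolerates slightly feathered edges.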
- Drawing batch breaks:
This one is a little more complicated. When it comes to rendering, one of the most common bottlenecks is the exchange of data between the CPU and the GPU. While data is being sent, neither the CPU nor the GPU is doing anything; both sit there waiting for this slow exchange to finish. The exchange happens whenever you want to draw anything to the screen, or change any property (such as colour) associated with what is being drawn. For example: draw_set_colour, draw_set_alpha, draw_sprite (a big one is when the actively used texture changes).
In order to optimise this, GameMaker uses a process called batching, where it groups together draw calls of sprites stored on the same texture page, because it then only needs to submit the information to the GPU in one go. However, every time you change the rendering colour, use a sprite from a different texture page in the middle, or switch render targets (anything like this), the batch breaks and gets split up. The more fragmented the batches are, the more idle time you get where neither the CPU nor the GPU is doing a whole lot. (Note: even an integrated GPU like the Intel HD 4000 still behaves like a separate processor here, so it is affected just the same.)
An easy way to see if this is happening is to use the profiler to evaluate how long is spent in each draw function. More telling still, a GPU monitoring tool can reveal GPU usage: if your GPU usage is low but your game is still running poorly, that is indicative of an inefficient rendering pipeline (the GPU is being starved, not overloaded).
Solutions:
- Optimise your texture pages manually to group together objects that are often rendered at the same time.
- Try to remove unnecessary colour swaps / shader uniform changes; do them once at the start of each group of rendering, rather than individually for each object.
- Create one central object that is responsible for coordinating all rendering (rather than having it distributed amongst all objects). What this can mean is that rather than having each object run its own draw event in sequence, have one object invoke the draw event of each object at a specific time. This gives you more control over the order in which things render, allowing you to optimise more heavily.
- Random render settings:
Make sure you haven't got anything like vsync enabled while testing. Also try your game with fast vertex mode enabled (found in Global Game Settings).
I hope these suggestions help you look at your rendering pipeline with a little more depth, beyond just what the resolution itself is doing. What you will often find is that higher resolutions simply make you hit the "crash" faster (the point where the GPU suddenly can't keep up with what you are telling it to do, and performance drops off very quickly).
This is not to say downscaling won't help; it's just that downscaling and other things like frame-skipping often just hide the true cause of major slow-down. The annoying thing about downscaling is that once you've done it, there isn't much more you can do. If you then decide to add more fancy effects, well, you've lost your headroom for further optimisation. So yeah, still downscale if it helps, but try some other solutions as well.
The final thing to say is that you mentioned downscaling not working well in GMS 1.4+; this may be because you are still rendering the same number of objects. Offscreen rendering can also be expensive if you are submitting draw calls for things that can't be seen. A better approach to downscaling would be to disable the application surface and use your own back-end surface that you have more control over. The application surface can get goofy when it comes to things like the GUI layer.
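For what it's worth, a rough sketch of that surface setup in GMS 1.4 terms (backbuffer, render_w and render_h are made-up names, and the internal resolution is just an example; exact event placement may need adjusting for your project):

```gml
// Create event of a controller object:
application_surface_enable(false);   // bypass GM's built-in application surface
render_w = 480;                      // example internal resolution
render_h = 270;
backbuffer = surface_create(render_w, render_h);

// Pre Draw event: redirect all normal drawing into our small surface
if (!surface_exists(backbuffer)) backbuffer = surface_create(render_w, render_h);
surface_set_target(backbuffer);
draw_clear(c_black);

// Post Draw event: stop capturing and stretch the result up to the window
surface_reset_target();
draw_surface_stretched(backbuffer, 0, 0, window_get_width(), window_get_height());
```

Because you own the surface, you decide exactly what renders into it and at what size, and the GUI can still be drawn on top at native resolution rather than being caught up in the scaling.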