GML Trying to improve the performance of drawing Isometric tiles

Hycrow

Member
Hello! I am very new to GML. I am trying to create an isometric world using a ds_grid, but if I create a very big room filled with isometric tiles I get major performance issues.
Since I am so new to GML, I don't know all the options for improving the performance of drawing a static isometric floor.
I already draw only the tiles that are in view, which helped a little. I have seen suggestions to work with chunks, vertex buffers and buffers, but I have no idea how to implement them correctly.

I would love some advice and ideas to improve the current code.

Create event:
GML:
// Set the layer to not visible
layer_set_visible("Tile_layer_1", false);

// Create a grid based on the map width and height in tiles (1 tile = 32 x 32 pixels)
global.map_grid = ds_grid_create(MAP_W_IN_TILES, MAP_H_IN_TILES); // Grid starts at 0, 0

// Get the tilemap that needs to be converted
var tile_map = layer_tilemap_get_id("Tile_layer_1");

// For each cell in the grid, store the tile data
for (var tX = 0; tX < MAP_W_IN_TILES; tX++){
    for (var tY = 0; tY < MAP_H_IN_TILES; tY++){
        var tilemap_data = tilemap_get(tile_map, tX, tY);

        // Format: [Tile, Z]
        var this_tile = array_create(2);
        this_tile[TILE.SPRITE] = tilemap_data;
        this_tile[TILE.Z] = 0;

        global.map_grid[# tX, tY] = this_tile;
    }
}
Draw event:
GML:
/// Render Isometric Game

var tilemap_data, roomX, roomY, tile_index, tile_z;

// Convert map_data from the grid to isometric tiles

// Go through each tile in the grid
for (var tX = 0; tX < MAP_W_IN_TILES; tX++){
    for (var tY = 0; tY < MAP_H_IN_TILES; tY++){

        tilemap_data = global.map_grid[# tX, tY];
        roomX = Iso_TileToSceenX(tX, tY);
        roomY = Iso_TileToSceenY(tX, tY);

        // Only draw when the tile is in view
        if (tile_inview(roomX, roomY)) {

            tile_index = tilemap_data[TILE.SPRITE];
            tile_z = tilemap_data[TILE.Z];

            // If the tile index is not empty, draw the designated tile
            if (tile_index != EMPTY_TILE) {

                // Apply shaders for specific tiles

                // Depth shader
                shader_set(depth_shader);
                var tileposval = tX + tY;
                shader_set_uniform_f(shader_get_uniform(depth_shader, "tilepos"), tileposval * 0.02);

                // Water shader (replaces the depth shader for water tiles)
                if (tile_index == WATER_TILE) shader_set(water_shader);

                // Draw the tile
                draw_tile(TileSetIso, tile_index, 0, roomX, roomY + tile_z);

                // Cleanup
                shader_reset();
            }

            // If the mouse is hovering over this tile, draw the selection tile
            if (Iso_ScreenToTileX(mouse_x, mouse_y) == tX && Iso_ScreenToTileY(mouse_x, mouse_y) == tY){
                draw_tile(TileSetIso, SELECTION_TILE, 0, roomX, roomY + tile_z);
            }
        }
    }
}
In-game screenshot: [attached image]

PS: Sorry for the bad English.
 

O.Stogden

Member
I'm not too familiar with the tile system in GMS2 either. However, I have made a game with around 90,000 tiles (a similar map size to yours) and it ran at over 1500 FPS on a laptop, so there is definitely an issue here.

One thing I would say is to check that you're using the correct units everywhere, either cells or room coordinates, as the tile functions switch between the two a lot.

For instance, in layer_tilemap_create the width and height arguments are in cells, not pixels, which threw me off at first and caused large performance hits. So they should probably be room_width/32 and room_height/32 in your game, if they aren't already.
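As a hedged sketch of what that call could look like (the layer name, the TileSetIso tileset, and the 32-pixel cell size are assumptions taken from this thread):
GML:
// Width and height are given in cells (tiles), not pixels
var tm = layer_tilemap_create("Tile_layer_1", 0, 0, TileSetIso, room_width div 32, room_height div 32);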

You should run the game in debug mode and see which scripts take the most time to compute. That can help you identify particularly intensive scripts/functions and narrow down exactly what the issue is.
 

Fern

Member
You definitely want to check the mouse position in the Step event rather than while looping over the grid in the Draw event; the Step event is a far cheaper place for that kind of logic. A good rule of thumb: drawing goes in the Draw event, and everything else goes in the Step event.

shader_set() and shader_reset() also cost the GPU a fair amount. Reduce how often you break your draw batches and you'll see huge performance improvements!
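As an illustrative sketch (assuming the thread's Iso_ScreenToTileX/Iso_ScreenToTileY helpers, and mouse_tile_x/mouse_tile_y as new instance variables), the mouse lookup can run once per frame in the Step event, leaving only a cheap comparison in the Draw event:
GML:
// Step event: compute the hovered tile once per frame
mouse_tile_x = Iso_ScreenToTileX(mouse_x, mouse_y);
mouse_tile_y = Iso_ScreenToTileY(mouse_x, mouse_y);

// Draw event, inside the tile loop: a cheap comparison
// instead of two function calls per tile
if (tX == mouse_tile_x && tY == mouse_tile_y) {
    draw_tile(TileSetIso, SELECTION_TILE, 0, roomX, roomY + tile_z);
}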
 

Hycrow

Member
When I use profiling in the debug tool I get this:

[attached profiler screenshot]
Does this help pinpoint the source of the problem? I already suspected oIsoRender_Draw; it takes the most time (in ms) and an absurd share of the Step %.
 

O.Stogden

Member
Yes, tile_inview is particularly intensive; you might want to see if there's a way of optimizing it.

It also might not be strictly necessary to run it every single step of the game.

Like Seabass said, do as much logic and computing as possible in the Step event, and use the Draw event only for the end results. Try to compute your data structures and arrays in the Step event, and then just read them in the Draw event.
 

Roldy

Member
The two things to avoid that stand out are:
  1. Looping through your entire grid
  2. Changing the shader for every tile
Looping through your entire grid:

The view is only going to contain about 1,000 tiles, so determine roughly the region of the grid that is in view and loop only through that region instead of the entire grid. In other words, instead of testing whether each individual tile is in view, first determine the range of rows and columns that can be in view: rather than looping from 0 to MAP_W_IN_TILES, find the lowest and highest columns that can be visible and loop only through that range, then do the same for the rows. This greatly reduces how many tiles you try to process.

Essentially you just need to determine the grid coordinate of the center tile and then add/subtract half the view size in tiles. e.g.
GML:
// pseudo-code

var view_center_column = calculate_center_tile_column();
var view_center_row = calculate_center_tile_row();

var view_width_in_tiles = 32;
var view_height_in_tiles = 32;

// Clamp the region to the grid so we never index outside it
var view_min_column = max(0, view_center_column - view_width_in_tiles / 2);
var view_max_column = min(MAP_W_IN_TILES - 1, view_center_column + view_width_in_tiles / 2);
var view_min_row = max(0, view_center_row - view_height_in_tiles / 2);
var view_max_row = min(MAP_H_IN_TILES - 1, view_center_row + view_height_in_tiles / 2);

for (var tX = view_min_column; tX <= view_max_column; tX++) {
    for (var tY = view_min_row; tY <= view_max_row; tY++) {
        var tileData = myGrid[# tX, tY];
        if (tile_inview(tileData)) {
            // Do stuff
        }
    }
}
Think of the clipping in two phases:
  • broad phase: determine a tight region of the grid that is in view
  • narrow phase: call tile_inview on each tile from the broad phase
Changing the shader for every tile:

Don't change the shader constantly; batch your tiles per shader instead. Rather than drawing each tile as you loop, first loop through them, determine which tiles use which shader, and put them in a list per shader. Once you have a list of tiles for each shader, switch to the first shader and render all the tiles in its list, then switch to the next shader and render all the tiles in that list, and so on.

Effectively, instead of changing the shader once per tile (1,000 times per frame), you only change it once per shader (a couple of times per frame).

Batching the tiles into a list per shader could occur in the step event, instead of the draw event.
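A rough sketch of that batching, with the caveat that the list names are made up and the TILE/WATER_TILE/EMPTY_TILE constants and view bounds are borrowed from this thread; the lists would be created once in the Create event:
GML:
// Step event: bucket visible tiles per shader
ds_list_clear(depth_tiles);
ds_list_clear(water_tiles);
for (var tX = view_min_column; tX <= view_max_column; tX++) {
    for (var tY = view_min_row; tY <= view_max_row; tY++) {
        var tile = global.map_grid[# tX, tY];
        if (tile[TILE.SPRITE] == EMPTY_TILE) continue;
        if (tile[TILE.SPRITE] == WATER_TILE) ds_list_add(water_tiles, [tX, tY]);
        else ds_list_add(depth_tiles, [tX, tY]);
    }
}

// Draw event: one shader switch per batch instead of per tile
shader_set(depth_shader);
for (var i = 0; i < ds_list_size(depth_tiles); i++) {
    // ... draw each tile in depth_tiles ...
}
shader_set(water_shader);
for (var i = 0; i < ds_list_size(water_tiles); i++) {
    // ... draw each tile in water_tiles ...
}
shader_reset();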

Once those larger optimizations are made, if you still have low performance, then look at optimizing your tile_inview function, as the profile indicates it may be taking too much time.
 

Hycrow

Member
The two things that stand out are:
  1. Looping through your entire grid
  2. Changing the shader for every tile
The view is only going to contain about 1000 tiles. So determine roughly the region of the grid that is in view and then only loop through that region instead of over the entire grid. So instead of determining if each tile is in view, determine the region of tiles that are in view and only loop over that region. e.g. instead of looping from 0 to MAP_W_IN_TILES, determine a smaller range that is in view.

Don't change the shader constantly; instead batch your tiles per shader. Instead of looping through each tile and drawing them, first loop through them and determine which tiles use which shaders and put them in a list. Then switch to the first shader and render all the tiles in the list for that shader, then switch to the next shader and render all the tiles in that list etc...

Effectively, instead of changing the shader 1 time per tile (1000 times per frame) you will only change it 1 time per shader (couple times per frame).

Batching the tiles into a list per shader could occur in the step event, instead of the draw event.
Thanks for the tip! I will look into it :). I guess I will have to make some changes to the double for loop. I really appreciate all the help, guys; I'm learning a lot!
 
You should really draw all of that to a surface, and then only draw the surface. You're recalculating every single tile every single game step, which is obviously not going to be efficient.
Save your tiles to a surface, draw the surface, and only rebuild it when something actually changes on the map. Except for the frame where it is redrawn, you will get a massive performance gain.
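A minimal sketch of that surface approach (floor_surface is a made-up instance variable; note that surfaces in GameMaker are volatile and can be destroyed, e.g. on window focus loss, so the surface_exists check doubles as the rebuild trigger):
GML:
// Draw event
if (!surface_exists(floor_surface)) {
    floor_surface = surface_create(room_width, room_height);
    surface_set_target(floor_surface);
    draw_clear_alpha(c_black, 0);
    // ... draw every floor tile once, as in the original loop ...
    surface_reset_target();
}
draw_surface(floor_surface, 0, 0);

// When the map changes, force a rebuild on the next frame:
// surface_free(floor_surface);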
 