Static Meshes: Vertex Buffer Combining?


I'm curious if there's an existing solution for combining several static meshes, each with its own position, scale, and rotation, into a single vertex buffer to reduce the number of individual buffer submissions?


As long as you know how to make a mesh with vertex buffers, you know how to do this.

You can define some shapes in arrays following the vertex format that you're using.

Then make a single vertex buffer and iterate through the shapes' vertices and translate/rotate/scale them as you see fit.
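As a rough sketch of that idea (assuming a position-only 3D format; the shape arrays, transform values, and the loop structure here are just made up for illustration):

```gml
// Assumed format: 3D position only
vertex_format_begin();
vertex_format_add_position_3d();
var fmt = vertex_format_end();

// Two hypothetical "shapes" as flat arrays of x, y, z triples
var shapes = [
    [0, 0, 0,   1, 0, 0,   0, 1, 0],    // Triangle A
    [0, 0, 1,   1, 0, 1,   0, 1, 1]     // Triangle B
];

var vb = vertex_create_buffer();
vertex_begin(vb, fmt);

for (var s = 0; s < array_length(shapes); s++) {
    var verts = shapes[s];
    // Per-shape transform (example values)
    var m = matrix_build(s * 4, 0, 0,  0, 0, 45 * s,  1, 1, 1);
    for (var v = 0; v < array_length(verts); v += 3) {
        // Transform the local-space vertex into world space
        var p = matrix_transform_vertex(m, verts[v], verts[v + 1], verts[v + 2]);
        vertex_position_3d(vb, p[0], p[1], p[2]);
    }
}

vertex_end(vb);
vertex_freeze(vb);    // Static geometry: freeze for faster drawing
```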

The tricky part is if you plan on using different textures/colors for each shape. That's also doable by stitching the textures together into one, though I never tried that part when I did this a while ago.

Unfortunately I don't have the source for this project since it's like 4 years old. Hopefully this is enough but if you need a quick code example I might be able to help.


There are two ways that I can think of to combine static meshes into a single vertex buffer.

The first is to copy the buffer's contents once for each transformed "instance" you need, apply that instance's transform to the copied data (putting it into world space), then combine the copies and create a new vertex buffer from the result.
In this case, the vertex format stays the same and only the positions change.
I don't have any gml code for this unfortunately since I export all vertex buffer data directly from Blender.
But it should look something like this:
// A couple of assumptions:
// vertex_format is the vertex format created by using the vertex format functions
// buff_model1 is a buffer containing vertex data in a known format - can be other models too, of course
// apply_transform(vertex_buffer, transform_matrix) applies a transform to all positions in a buffer
// arr_transforms contains a transformation matrix for each instance (created using matrix_build)
instance_number = 5;
var model_size_bytes = buffer_get_size(buff_model1);
buff_combined = buffer_create(instance_number * model_size_bytes, buffer_fixed, 1);
var buff_temp;
var i = 0;
repeat (instance_number) {
    buff_temp = buffer_create(model_size_bytes, buffer_fixed, 1);
    buffer_copy(buff_model1, 0, model_size_bytes, buff_temp, 0);    // Keep the original model intact
    apply_transform(buff_temp, arr_transforms[i]);
    buffer_copy(buff_temp, 0, model_size_bytes, buff_combined, i * model_size_bytes);
    buffer_delete(buff_temp);    // Don't leak the temporary buffer
    i += 1;
}

vertex_buffer = vertex_create_buffer_from_buffer(buff_combined, vertex_format);
vertex_freeze(vertex_buffer);    // Always a good thing to do for static geometry
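For reference, a sketch of what the assumed apply_transform helper could look like, assuming the vertex layout is a 3D position followed by a normal (6 f32 values, 24 bytes per vertex - adjust the stride and offsets to your actual format):

```gml
/// @func apply_transform(buff, m)
/// @desc Transforms every position in a raw vertex buffer in place.
///       Assumes each vertex is [x, y, z, nx, ny, nz] as f32 (24 bytes).
function apply_transform(buff, m) {
    var stride = 24;    // 6 floats * 4 bytes
    var count = buffer_get_size(buff) div stride;
    for (var v = 0; v < count; v++) {
        var off = v * stride;
        var x = buffer_peek(buff, off,     buffer_f32);
        var y = buffer_peek(buff, off + 4, buffer_f32);
        var z = buffer_peek(buff, off + 8, buffer_f32);
        var p = matrix_transform_vertex(m, x, y, z);
        buffer_poke(buff, off,     buffer_f32, p[0]);
        buffer_poke(buff, off + 4, buffer_f32, p[1]);
        buffer_poke(buff, off + 8, buffer_f32, p[2]);
        // Note: the normals are left untouched here; if the transform
        // rotates the mesh, you'd want to rotate the normals as well
        // (rotation only, no translation).
    }
}
```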

The second is to keep the vertex positions the same but add a model index to the format (so to each vertex):
// Vertex format definition
vertex_format_begin();
vertex_format_add_position_3d();                                        // Position
vertex_format_add_normal();                                             // Normal
vertex_format_add_custom(vertex_type_float1, vertex_usage_texcoord);    // Object index (tells us which object/mesh the vertex belongs to)
fmt = vertex_format_end();
In this case, you keep the vertex data in local space.
Once again, after adding the index to each vertex, combine all vertex buffers into a single vertex buffer.
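One way to write that index while building each mesh (a sketch, assuming you build the meshes with the vertex functions, fmt is the format defined above, and mesh_a/mesh_b are hypothetical arrays of [x, y, z, nx, ny, nz] vertices):

```gml
// Append one mesh's local-space vertices to the combined buffer,
// tagging each vertex with the mesh's index.
function append_mesh(vb, mesh, mesh_index) {
    for (var v = 0; v < array_length(mesh); v++) {
        var vert = mesh[v];
        vertex_position_3d(vb, vert[0], vert[1], vert[2]);
        vertex_normal(vb, vert[3], vert[4], vert[5]);
        vertex_float1(vb, mesh_index);    // Matches vertex_type_float1 in the format
    }
}

// Usage: one combined buffer for all meshes
var vb = vertex_create_buffer();
vertex_begin(vb, fmt);
append_mesh(vb, mesh_a, 0);
append_mesh(vb, mesh_b, 1);
vertex_end(vb);
vertex_freeze(vb);
```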
You then have a way to distinguish between different models inside the shader and you can do this:
// Simple vertex shader
attribute vec3 in_Position;
attribute vec3 in_Normal;
attribute float in_TextureCoord;    // Model/Mesh index

varying vec3 v_vNormal;

uniform mat4 u_mTransforms[16];

void main()
{
    int index = int(in_TextureCoord);
    vec4 object_space_pos = vec4(in_Position, 1.0);
    gl_Position = gm_Matrices[MATRIX_WORLD_VIEW_PROJECTION] * u_mTransforms[index] * object_space_pos;
    //gl_Position = gm_Matrices[MATRIX_WORLD_VIEW_PROJECTION] * object_space_pos;    // Untransformed, for debugging
    v_vNormal = in_Normal;
}
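On the GML side, the per-instance matrices then go into that uniform array; a sketch, assuming arr_transforms holds up to 16 matrices built with matrix_build and sh_instanced is your shader asset name:

```gml
// Once, after the shader is created
u_transforms = shader_get_uniform(sh_instanced, "u_mTransforms");

// Flatten the 4x4 matrices into one float array (16 floats per matrix)
var flat = [];
for (var i = 0; i < array_length(arr_transforms); i++) {
    var m = arr_transforms[i];
    for (var j = 0; j < 16; j++) {
        array_push(flat, m[j]);
    }
}

// Draw the combined buffer with all transforms uploaded at once
shader_set(sh_instanced);
shader_set_uniform_f_array(u_transforms, flat);
vertex_submit(vertex_buffer, pr_trianglelist, -1);
shader_reset();
```

Keep in mind the uniform array caps how many instances one buffer can hold (16 here); more instances means splitting into multiple buffers or raising the array size within the platform's uniform limits.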

It's possible to get this to work with multiple textures as well if you make sure in advance that all textures are on the same texture page/group.

Hope this gives a bit of an idea of what's possible :)


I'm also exporting buffers from blender, using the mmk scripts. I'm actually thinking of still doing that but dissecting the buffer to get all of the point locations, transforming them, saving the buffer, then merging. I've got a lot to think about so far. Thanks guys!