
[Shaders] Lighting shader using normal maps from multiple objects


Shifty

Guest
I've been experimenting with lighting using shaders and normal maps. I eventually want to have a scene where I have multiple lights reacting with objects and their normal maps. However, I'm unsure how to efficiently get multiple normal/diffuse maps into the shader.

Here's a picture of what my current test scene looks like. It's just a brick wall diffuse map with its corresponding normal map.



Here are my vertex and fragment shaders:

Code:
attribute vec3 in_Position;                  // (x,y,z)
attribute vec4 in_Colour;                    // (r,g,b,a)
attribute vec2 in_TextureCoord;              // (u,v)

varying vec2 v_vTexcoord;
varying vec4 v_vColour;
varying vec2 Fragcoord;

void main()
{
    vec4 object_space_pos = vec4( in_Position.x, in_Position.y, in_Position.z, 1.0);
    gl_Position = gm_Matrices[MATRIX_WORLD_VIEW_PROJECTION] * object_space_pos;
    
    Fragcoord = in_Position.xy;
    v_vColour = in_Colour;
    v_vTexcoord = in_TextureCoord;
    
    
}

Code:
varying vec2 v_vTexcoord;
varying vec4 v_vColour;
varying vec2 Fragcoord;

//texture samplers
uniform sampler2D u_texture; //diffuse map
uniform sampler2D u_normals; //normal map

//shader algorithm values
uniform vec2 Resolution;    //screen resolution
uniform vec3 LightPos;      //light position, normalized
uniform vec4 LightColor;    //light RGBA -- alpha is intensity
uniform vec4 AmbientColor;  //ambient RGBA -- alpha is intensity
uniform vec3 Falloff;       //attenuation coefficients


void main()
{
    //vec3 LightPos = vec3(0.1, 0.8, 0.5);
    //RGBA of diffuse color
    vec4 DiffuseColor = texture2D(u_texture, v_vTexcoord);
    
    //RGB of normal map
    vec3 NormalMap = texture2D(u_normals, v_vTexcoord).rgb;
    NormalMap.g = 1.0 - NormalMap.g; //flip green channel (the map's Y convention is inverted relative to the shader's)
    
    //delta position of light
    //res and light poss need to be passed in
    vec3 LightDir = vec3(LightPos.xy - (gl_FragCoord.xy / Resolution.xy), LightPos.z);
    
    //Correct for aspect ratio
    LightDir.x *= Resolution.x / Resolution.y;
    
    //Determine distance (for attenuation) BEFORE normalizing LightDir vector
    float D = length(LightDir);
    
    //Normalize vectors
    vec3 N = normalize(NormalMap * 2.0 - 1.0);
    vec3 L = normalize(LightDir);
    
    //Pre-multiply light color with intensity,
    //then perform "N dot L" to determine our diffuse term
    vec3 Diffuse = (LightColor.rgb * LightColor.a) * max(dot(N, L), 0.0);
    
    //pre-multiply ambient color with intensity
    vec3 Ambient = AmbientColor.rgb * AmbientColor.a;
    
    //Calculate attenuation
    float Attenuation = 1.0 / (Falloff.x + (Falloff.y*D) + (Falloff.z*D*D) );
    
    //Final color calculation
    vec3 Intensity = Ambient + Diffuse * Attenuation;
    vec3 FinalColor = DiffuseColor.rgb * Intensity;
        
    gl_FragColor = v_vColour * vec4(FinalColor, DiffuseColor.a);
}

And my light controller object

Code:
LIGHT_COLOR = [1.0, 0.8, 0.6, 1.0];
AMBIENT_COLOR = [0.7, 0.7, 1.0, 0.2];
FALLOFF = [0.5, 0.5, 10.0];

//shader uniforms
res = shader_get_uniform(normal_map_lighting, "Resolution");
lightP = shader_get_uniform(normal_map_lighting, "LightPos");
lightC = shader_get_uniform(normal_map_lighting, "LightColor");
ambientC = shader_get_uniform(normal_map_lighting, "AmbientColor");
falloff = shader_get_uniform(normal_map_lighting, "Falloff");

//shader sampler textures
diffuseSampler = shader_get_sampler_index(normal_map_lighting, "u_texture");
normalSampler = shader_get_sampler_index(normal_map_lighting, "u_normals");
spr_diffuse = sprite_get_texture(bg, 0);
spr_normals = sprite_get_texture(bg_n, 0);

//create surface
shader_surf = -1;

Code:
var RESOLUTION = [window_get_width(), window_get_height()];
var LIGHT_POS = [mx, my, 0.06]; //mx , my are just normalized mouse coords

shader_set(normal_map_lighting);
//set shader algorithm values
shader_set_uniform_f_array(res, RESOLUTION);
shader_set_uniform_f_array(lightP, LIGHT_POS);
shader_set_uniform_f_array(lightC, LIGHT_COLOR);
shader_set_uniform_f_array(ambientC, AMBIENT_COLOR);
shader_set_uniform_f_array(falloff, FALLOFF);

//set shader texture samplers
texture_set_stage(diffuseSampler, spr_diffuse);
texture_set_stage(normalSampler, spr_normals);

if !surface_exists(shader_surf) {
    shader_surf = surface_create(room_width, room_height);
} else {
    if view_current = 0 {
        draw_surface(shader_surf, 0, 0);
    }
}
shader_reset();

Now, earlier I said "efficiently" because I have another object in the scene, a simple "player" triangle with a normal map. I can get the light to work properly with the player object as I want, but I have to do the same fetching of uniforms in the create event and then set the shader in the draw event. I feel like having to do a "set uniforms and set shader" script call for every object I want interacting with this light source is going to become extraordinarily inefficient.

So how can I dynamically pass multiple sampler2D textures into this lighting shader at once?

Add-on question: is there a GMS manual entry that covers some of the GM-specific shader functions/built-in vars, e.g. "noise1()", far, and the long list of "gl_" vars?
 
EDIT: I should probably mention that it's a good idea to set a texture (or textures), draw everything that uses that texture, and only then move on to the next one. That way you avoid a situation where you are jumping back and forth between the same textures multiple times.

Now moving on to the more general problem of getting data into shaders...

I hope someday I'll get better answers about this as well. As far as I know, this is one major area where GameMaker could use some improvement. Setting a shader uniform (or a matrix, for that matter) breaks the current vertex batch, and the more that happens, the worse your performance becomes. You can probably set shader uniforms several hundred times per frame, maybe 1500 or more depending on your computer, before you start seeing performance problems. That sounds like a lot, but it really isn't; there are techniques that allow drawing hundreds of thousands of instances with the same performance impact.

Anyway, the best thing I know how to do is this. When you have situations where you can combine multiple geometries into a single vertex buffer, you can give individual vertices a new attribute that represents the index of which instance it belongs to. Then you can pass into a shader an array that contains data for each instance. For each vertex, the shader reads the index attribute and uses it to look up the data pertaining to the instance that vertex belongs to. So with this technique, you can set uniforms once and then draw several instances in one call.
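To sketch the idea, here is roughly what the vertex shader side looks like. The attribute and uniform names (in_InstanceIndex, u_InstanceData) and the per-instance data layout are my own invention for illustration, not GameMaker built-ins:

Code:
//vertex shader sketch: each vertex carries the index of the instance it belongs to
attribute vec3 in_Position;
attribute vec2 in_TextureCoord;
attribute float in_InstanceIndex;    //custom attribute: which instance this vertex is part of

//one vec4 per instance: xy = position offset, w = alpha (example layout)
uniform vec4 u_InstanceData[50];

varying vec2 v_vTexcoord;
varying float v_vAlpha;

void main()
{
    vec4 data = u_InstanceData[int(in_InstanceIndex)];
    vec4 object_space_pos = vec4(in_Position.xy + data.xy, in_Position.z, 1.0);
    gl_Position = gm_Matrices[MATRIX_WORLD_VIEW_PROJECTION] * object_space_pos;
    v_vTexcoord = in_TextureCoord;
    v_vAlpha = data.w;    //lets unused instances be drawn with zero alpha
}

On the GML side you'd build one flat array of four floats per instance and upload it with a single shader_set_uniform_f_array call before one vertex_submit.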

The problem is that the number of shader uniform components you have to work with is not very large, and I'm pretty sure it varies from computer to computer; so far I know of no way of polling the device to find out how many components are available. And if you try to use more than are available, the shader will stop working. That's a pretty major drawback, but in the project I'm currently working on, I've been able to use this technique to draw 20 to 50 different instances with a single draw call, setting the uniforms only once for all of the instances in that batch.

Actually, there are two more drawbacks.

If the number of instances varies, it can be a pain in the ass to sort out which ones / how many to draw. What I end up doing is drawing the max number of instances always, and the ones that don't correspond to actual game objects are just drawn with zero alpha or else the vertices are moved way out of view.

The other drawback is that you will be duplicating geometry, and thus your vertex buffers will take up a lot more memory. This could, I suppose, be a problem, depending on the complexity of your game.

I've also read about a technique which, as far as I know, can't be done in GameMaker: GPU instancing. It works similarly to what I described above, except that a) you don't need to duplicate data in vertex buffers, and b) the size of the data you pass into the shader is limited only by the amount of memory, not by the number of uniform components available. That means you can draw a hundred thousand instances in one go. If there is any way this could be added to GameMaker, it would make a HUGE difference in drawing performance in a lot of different situations.

I've never found a comprehensive list of all the functions/variables available with GLSL ES in GameMaker, but I can tell you that some things you find on the net will not work in GameMaker.
 

Shifty

Guest
Thanks for your reply. I've been working for a couple of days on how to implement what you describe, but because I'm unfamiliar with vertex buffers, I'm a little lost in the sauce on this one.


This is my current understanding of how these vertices can work with what I'm trying to accomplish, so please correct my logic.
> Say the vertex buffer contains info for one object, my player sprite, which is 64x64:
I use the vertex buffer to define a quad made up of two triangles as a triangle strip, containing four vertices total. Each vertex holds its position on screen and its placement on the texture coordinate plane (0-1). I don't believe I need to define normal data within the vertex itself, since that data comes from one of the textures to be sent. Finally, the diffuse texture is applied to the quad with texcoords, resulting in something like what you get when you link a sprite to an object.

If that's more or less correct, I have some questions about how vertex attributes should be formatted to hold all the data I need from a texture page/sprite, i.e.:
-how do I reference the correct diffuse/normal images (and other maps, perhaps?) within vertex data?
-what do I need to watch for to make sure these vertex buffers can be updated frequently (vertex data for moving geometry such as a player object)?
-what custom vertex format are you talking about to reference the instance index? From the manual it seems that vertex data is pretty limited: one float, a vec of 2-4 floats, rgba, or an unsigned 4-byte int. The "ubyte4" seems to be what you meant, as in mapping my instance indices to some number between 0-255.
-what needs to change within the fragment shader to handle multiple texture samples?

Any code/examples that could point me in the right direction would be greatly appreciated. And thanks for your help!
 
I evidently misunderstood your original question. Is the question basically how to get colors from multiple textures at once?

You can sample other textures the same way you sample the base texture; you just need to add a uniform sampler2D in your fragment shader. Then in GML, once you've got a handle for that sampler using shader_get_sampler_index, you can set your shader and put a texture into that sampler index with texture_set_stage.
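Condensed into GML, those steps look roughly like this, reusing the uniform and sprite names from the shader earlier in the thread:

Code:
//create event: look up the sampler handles once
diffuse_idx = shader_get_sampler_index(normal_map_lighting, "u_texture");
normal_idx  = shader_get_sampler_index(normal_map_lighting, "u_normals");

//draw event: set the shader, bind both textures, then draw
shader_set(normal_map_lighting);
texture_set_stage(diffuse_idx, sprite_get_texture(bg, 0));
texture_set_stage(normal_idx,  sprite_get_texture(bg_n, 0));
draw_sprite(bg, 0, 0, 0);
shader_reset();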

If you need to have multiple texture coordinates per vertex, then see what is suggested on this page:

https://forum.yoyogames.com/index.p...st-of-glsl-es-shader-attribute-constants.669/

Now, I haven't actually tried this yet, although I've had plenty of reason to, so I have no real excuse. But supposedly you can have up to 10 texture coordinate attributes in your vertex format, and you'd access those with in_TextureCoord[0..9]. Like I said, I haven't tried it, so I'm not 100% sure how to set it up; you'll need to play around with it, I think.

All that stuff I said about instancing is a little difficult to explain. In fact you might find it easier just to set / reset a matrix or uniform to indicate position of your characters. It does come with a performance hit, but that doesn't really start to matter until you are doing it a lot of times every frame.
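For example, the simple per-object version might look like this (obj_lit_thing and u_Offset are illustrative names; each shader_set_uniform_f call breaks the batch, which is fine for modest instance counts):

Code:
//look up the uniform once, e.g. in a create event
u_offset = shader_get_uniform(normal_map_lighting, "u_Offset");

//draw event: one uniform update per instance, all inside one shader_set
shader_set(normal_map_lighting);
with (obj_lit_thing)
{
    shader_set_uniform_f(other.u_offset, x, y);
    draw_self();
}
shader_reset();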
 