
Shaders: How do I apply a normal map value to a model's surface normals?

Misu

Guest
This might be an easy question for some of you, but it's new to me. I'm using a shader to do lighting and I got it working fine. Now I want to apply the lighting details using a normal map texture as well, but the results don't come out the same due to the different normal directions across the model.

What is the correct way to actually apply a normal map value onto a model with normals?
 
There are two basic approaches that I know of.

If the normal map is in tangent space, then you rotate the normal and tangent vectors according to the model-to-world transformation, use those two rotated vectors to construct a matrix, and use that matrix to rotate the value you read out of the normal map.

The alternative is to have the normal map written in model space, in which case (I believe) you can skip rotating the normal and tangent vectors, and instead just rotate the normal map value with the world-to-model transformation.

Note: I say world-to-model, but you cannot just use the world matrix if it contains scaling. In that case you'll need the inverse transpose of that matrix (which you should calculate in GML, not in the shader). Also, you don't want translation, so the w component of the vector should be zero.
 
Misu

Guest
I find your explanation a bit complicated... I tried using the world matrix in the fragment shader to rotate the normals I sum up, but it doesn't work. It leaves a big white stain over half of my model.

:/
 
Okay... let me see if I can talk you through the first approach.

First, you'll need to get a 3x3 matrix: the inverse transpose of the upper-left 3x3 corner of your model's world matrix.
This matrix will be used to rotate your vertex normal and tangent vectors.
Why can't you just use the model matrix? Because any scaling in your model matrix will skew your normal and tangent vectors. This doesn't apply if your model matrix contains no scaling.
The inverse transpose of the upper-left 3x3 removes the scaling.
If you don't want to look up how to compute an inverse transpose, here's a script that will do it for you:
Code:
    var _m = argument0; // 4x4 model (world) matrix, column-major
    // cofactors of the first row of the upper-left 3x3
    var _s0 = _m[5] * _m[10] - _m[6] * _m[9];
    var _s1 = _m[1] * _m[10] - _m[2] * _m[9];
    var _s2 = _m[1] * _m[6]  - _m[2] * _m[5];
    // determinant via cofactor expansion along the first row
    var _d = _m[0] * _s0 - _m[4] * _s1 + _m[8] * _s2;
    _d = 1 / _d;
    // inverse transpose = cofactor matrix / determinant, stored column-major
    var _i;
    _i[0] =  _s0 * _d;
    _i[3] = -_s1 * _d;
    _i[6] =  _s2 * _d;
    _i[1] = -(_m[4] * _m[10] - _m[6] * _m[8]) * _d;
    _i[4] =  (_m[0] * _m[10] - _m[2] * _m[8]) * _d;
    _i[7] = -(_m[0] * _m[6]  - _m[2] * _m[4]) * _d;
    _i[2] =  (_m[4] * _m[9]  - _m[5] * _m[8]) * _d;
    _i[5] = -(_m[0] * _m[9]  - _m[1] * _m[8]) * _d;
    _i[8] =  (_m[0] * _m[5]  - _m[1] * _m[4]) * _d;
    return _i;

Alright, now you need to pass that in as a uniform mat3:

shader_set_uniform_f_array( uniform_name_goes_here, name_of_inverse_transpose_3x3_goes_here );
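Put together in a draw event, that setup might look something like this. This is a sketch, assuming the shader is called shd_normalmap, the mat3 uniform is called u_invTrans, the sampler is called normalmap, and the inverse transpose script above is named scr_inverse_transpose_3x3 — all of those names are placeholders for whatever you use in your project:
Code:
    /// Draw event (GML) — adapt names to your project
    var _inv = scr_inverse_transpose_3x3(matrix_get(matrix_world));
    shader_set(shd_normalmap);
    // look up the uniform handle and pass the 9 floats (column-major)
    var _u = shader_get_uniform(shd_normalmap, "u_invTrans");
    shader_set_uniform_f_array(_u, _inv);
    // bind the normal map texture to the sampler
    texture_set_stage(shader_get_sampler_index(shd_normalmap, "normalmap"), tex_normal);
    // ...draw your model here (e.g. vertex_submit)...
    shader_reset();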

In your vertex shader you want to transform the normal and tangent vectors by that matrix, and then pass them as varyings to the fragment shader.
I'm using one of the texture vertex attributes to carry tangent information, by the way, hence in_TextureCoord0.
The tangent vector, for each vertex, should be the direction, in object space, that the positive x axis of your texture points. This can be calculated from the vertex positions and texture coordinates if the tangent vector isn't already contained in the model.

v_vNormal = inverse_transpose_3x3 * in_Normal;
v_vTangent = inverse_transpose_3x3 * in_TextureCoord0;
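As an aside, if your model doesn't already store tangents, a per-triangle tangent can be derived from the triangle's edge vectors and UV deltas. Here's a GML sketch, where p0..p2 and uv0..uv2 are placeholder arrays holding one triangle's positions and texture coordinates:
Code:
    // edge vectors: e1 = p1 - p0, e2 = p2 - p0
    var _e1x = p1[0] - p0[0], _e1y = p1[1] - p0[1], _e1z = p1[2] - p0[2];
    var _e2x = p2[0] - p0[0], _e2y = p2[1] - p0[1], _e2z = p2[2] - p0[2];
    // UV deltas along those edges
    var _du1 = uv1[0] - uv0[0], _dv1 = uv1[1] - uv0[1];
    var _du2 = uv2[0] - uv0[0], _dv2 = uv2[1] - uv0[1];
    // degenerate UVs make this denominator zero; guard against that in practice
    var _r = 1 / (_du1 * _dv2 - _du2 * _dv1);
    // tangent = (e1 * dv2 - e2 * dv1) * r — points along the texture's +x axis
    var _tx = (_e1x * _dv2 - _e2x * _dv1) * _r;
    var _ty = (_e1y * _dv2 - _e2y * _dv1) * _r;
    var _tz = (_e1z * _dv2 - _e2z * _dv1) * _r;
    // normalize (_tx, _ty, _tz) and store it as that triangle's tangent attribute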

In the fragment shader, the normal and tangent vectors should be re-normalized (otherwise the angles will be slightly off). It would be no use normalizing in the vertex shader, because the error is introduced by interpolation between vertices.
The bitangent vector is (normal cross tangent).
A rotation matrix can be composed from the tangent, bitangent, and normal vectors, and that matrix is then used to rotate the value read out of the normal map.

vec3 normal_n = normalize(v_vNormal);
vec3 tangent_n = normalize(v_vTangent);
vec3 bitangent_n = cross( normal_n, tangent_n );
mat3 norm_mat = mat3( tangent_n, bitangent_n, normal_n );
vec3 norm_dir = norm_mat * (texture2D( tex_normal, v_vTexcoord ).rgb * 2.0 - 1.0);

The alternative approach, where the normal map is in object space rather than tangent space, will require fewer GPU calculations, although I can't help you with that for now because I don't have any object-space normal maps to test with. Google, I'm sure, has good results somewhere.
 
Misu

Guest
Sorry for the late reply; I've been busy for a while. Anyway, your explanation really makes things much clearer, and I managed to prepare a practice shader and a script based on your explanation. The only thing I still don't find clear is what kind of matrix values I'm inputting into the inverse transpose script. Am I supposed to use the model transformation? The camera transformation? I don't find that specified in your explanation. I use matrix_get(matrix_world), but I'm assuming that's not the matrix I need to pass into the inverse transpose script.
 
Yes, matrix_world for whatever is presently being drawn.

But remember what I said earlier: if there is no scaling, you can rotate the normal and tangent vectors with the regular world matrix, but make sure the w component is zero so translation isn't added, i.e., (matrix * vec4(in_Normal, 0.0)).xyz. (There is no w component if you're using the 3x3 matrix.)
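In shader terms, that no-scaling shortcut would look like this in the vertex shader (gm_Matrices[MATRIX_WORLD] is GameMaker's built-in world matrix):
Code:
    // valid only when the world matrix contains no scaling;
    // w = 0.0 drops the translation part of the 4x4 matrix
    v_vNormal  = (gm_Matrices[MATRIX_WORLD] * vec4(in_Normal, 0.0)).xyz;
    v_vTangent = (gm_Matrices[MATRIX_WORLD] * vec4(in_TextureCoord0, 0.0)).xyz;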
 
M

Misu

Guest
I think I'm doing it wrong, because I'm getting only red, green, and black colours around the room. I believe the shader code is wrong.
Vertex:
Code:
uniform mat3 transpose;
attribute vec3 in_Position;                  // (x,y,z)
attribute vec3 in_Normal;                  // (x,y,z)
attribute vec2 in_TexCoord0;              // (u,v)
attribute vec3 in_TexCoord1;

varying vec2 v_vTexcoord;
varying vec3 v_vTangent;
varying vec3 v_vNormal;
void main(){
    gl_Position = gm_Matrices[MATRIX_WORLD_VIEW_PROJECTION] * vec4( in_Position, 1.0);
    v_vTexcoord = in_TexCoord0;
    v_vNormal = transpose * in_Normal;
    v_vTangent = transpose * in_TexCoord1;
}
Fragment:
Code:
uniform sampler2D normalmap;

varying vec2 v_vTexcoord;
varying vec3 v_vTangent;
varying vec3 v_vNormal;

void main()
{
    vec3 normal_n = normalize(v_vNormal);
    vec3 tangent_n = normalize(v_vTangent);
    vec3 bitangent_n = cross( normal_n, tangent_n );
    mat3 norm_mat = mat3( tangent_n, bitangent_n, normal_n );
    vec3 norm_dir = norm_mat * (texture2D( normalmap, v_vTexcoord ).rgb * 2.0 - 1.0);
    gl_FragColor = vec4(norm_dir,1.);//texture2D( gm_BaseTexture, v_vTexcoord );
}
 
Hi. You are drawing the normal vector? It will look strange for sure, especially after it's remapped from [0, 1] to [-1, 1] with the * 2.0 - 1.0.

The lighting amount is usually:

max(0.0, dot( norm_dir, light_dir ));

where light_dir is the direction from the fragment to the light.

Then you would multiply the result of that by the texture colour's rgb.
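Plugged into the end of the fragment shader above, that might look like the sketch below. Here u_lightPos is an assumed uniform holding the light's world position, and v_vPosition is assumed to be the fragment's world-space position passed in from the vertex shader — both are placeholders you'd have to wire up yourself:
Code:
    // direction from the fragment towards the light
    vec3 light_dir = normalize(u_lightPos - v_vPosition);
    // Lambert diffuse term, clamped so back-facing surfaces go to zero
    float diffuse = max(0.0, dot(norm_dir, light_dir));
    vec4 base = texture2D(gm_BaseTexture, v_vTexcoord);
    gl_FragColor = vec4(base.rgb * diffuse, base.a);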
 