#### Misu


What is the correct way to actually apply a normal map value onto a model?


If the normal map is in tangent space, then you rotate the normal and tangent vectors by the model-to-world transformation, use those two rotated vectors to construct a matrix, and use that matrix to rotate the value you read out of the normal map.

The alternative is to have the normal map written in model space, in which case (I believe) you can skip rotating the normal and tangent vectors and instead just rotate the normal map value with the world-to-model transformation.

Note: I say world-to-model, but you can't just use the world matrix if it contains scaling. In that case you'll need the inverse transpose of that matrix (which you should calculate in GML, not in the shader). Also, you don't want translation, so the w component of the vector should be zero.

Okay... let me see if I can talk you through the first approach.


First, you'll need a 3x3 matrix: the inverse transpose of the upper-left 3x3 corner of your model's world matrix.

This matrix will be used to rotate your vertex normal and tangent vectors.

Why can't you just use the model matrix? Because the scaling of your model will mess up your normal and tangent vectors. This doesn't apply if your model matrix contains no scaling.

The inverse transpose of the upper-left 3x3 removes the scaling.

If you don't want to look up how to do inverse transpose, here's a script that will do it for you:

Code:

```
// Returns the inverse transpose of the upper-left 3x3 of a 4x4 matrix,
// as a 9-element, column-major array (ready for a mat3 uniform).
var _m = argument0; // 4x4 model matrix, column-major
// Minors along the first row, reused for the determinant below.
var _s0 = _m[5] * _m[10] - _m[6] * _m[9];
var _s1 = _m[1] * _m[10] - _m[2] * _m[9];
var _s2 = _m[1] * _m[6] - _m[2] * _m[5];
// Determinant of the upper-left 3x3, by expansion along the first row.
var _d = _m[0] * _s0 - _m[4] * _s1 + _m[8] * _s2;
_d = 1 / _d;
// Inverse transpose = cofactor matrix divided by the determinant.
var _i;
_i[0] = _s0 * _d;
_i[3] = -_s1 * _d;
_i[6] = _s2 * _d;
_i[1] = -(_m[4] * _m[10] - _m[6] * _m[8]) * _d;
_i[4] = (_m[0] * _m[10] - _m[2] * _m[8]) * _d;
_i[7] = -(_m[0] * _m[6] - _m[2] * _m[4]) * _d;
_i[2] = (_m[4] * _m[9] - _m[5] * _m[8]) * _d;
_i[5] = -(_m[0] * _m[9] - _m[1] * _m[8]) * _d;
_i[8] = (_m[0] * _m[5] - _m[1] * _m[4]) * _d;
return _i;
```

Alright, you need to pass that in as a uniform mat3:

shader_set_uniform_f_array( uniform_name_goes_here, name_of_inverse_transpose_3x3_goes_here );
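For context, a minimal GML sketch of that setup might look like this. The names `sh_normalmap`, `u_inv_transpose`, `scr_inverse_transpose_3x3`, `vb_model`, and `spr_diffuse` are all placeholders for whatever yours are called:

```gml
// Assumed names: sh_normalmap (the shader), "u_inv_transpose" (the mat3
// uniform in the vertex shader), scr_inverse_transpose_3x3 (the script above).
var _u_inv = shader_get_uniform(sh_normalmap, "u_inv_transpose");

// Build the model's world matrix, then derive the normal matrix from it.
var _world = matrix_build(x, y, z, 0, 0, zrotation, xscale, yscale, zscale);
var _inv_t = scr_inverse_transpose_3x3(_world);

shader_set(sh_normalmap);
shader_set_uniform_f_array(_u_inv, _inv_t); // 9 floats -> mat3
matrix_set(matrix_world, _world);
vertex_submit(vb_model, pr_trianglelist, sprite_get_texture(spr_diffuse, 0));
matrix_set(matrix_world, matrix_build_identity());
shader_reset();
```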

In your vertex shader, you want to transform the normal and tangent vectors by that matrix, and then pass them as varyings to the fragment shader.

I'm using one of the texture vertex attributes to carry tangent information by the way, hence in_TextureCoord0.

The tangent vector, for each vertex, should be the direction, in object space, that the positive x axis of your texture points. This can be calculated based on the vertex positions and texture coordinates if the tangent vector isn't already contained in the model.
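For reference, here's a sketch of that calculation for a single triangle, which you'd run per triangle while building your vertex buffer. The inputs `_p0`..`_p2` (positions as [x, y, z] arrays) and `_uv0`..`_uv2` (texcoords as [u, v] arrays) are assumed names:

```gml
// Triangle edges in object space, and the matching UV deltas.
var _e1x = _p1[0] - _p0[0], _e1y = _p1[1] - _p0[1], _e1z = _p1[2] - _p0[2];
var _e2x = _p2[0] - _p0[0], _e2y = _p2[1] - _p0[1], _e2z = _p2[2] - _p0[2];
var _du1 = _uv1[0] - _uv0[0], _dv1 = _uv1[1] - _uv0[1];
var _du2 = _uv2[0] - _uv0[0], _dv2 = _uv2[1] - _uv0[1];

// Solve for the object-space direction of the texture's +x axis.
var _r = 1 / (_du1 * _dv2 - _du2 * _dv1);
var _tx = (_e1x * _dv2 - _e2x * _dv1) * _r;
var _ty = (_e1y * _dv2 - _e2y * _dv1) * _r;
var _tz = (_e1z * _dv2 - _e2z * _dv1) * _r;

// Normalize before writing it into the vertex buffer.
var _len = sqrt(_tx * _tx + _ty * _ty + _tz * _tz);
_tx /= _len; _ty /= _len; _tz /= _len;
```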

Code:

```
v_vNormal = inverse_transpose_3x3 * in_Normal;
v_vTangent = inverse_transpose_3x3 * in_TextureCoord0;
```

In the fragment shader, the normal and tangent vectors should be normalized (otherwise the angles will be slightly off). It would be no use normalizing in the vertex shader because the error is caused by interpolation between vertices.

The bitangent vector is (normal cross tangent).

A rotation matrix can be composed of the tangent, bitangent, and normal vectors, and then that matrix is used to rotate the value read out of the normal map.

Code:

```
vec3 normal_n = normalize(v_vNormal);
vec3 tangent_n = normalize(v_vTangent);
vec3 bitangent_n = cross( normal_n, tangent_n );
mat3 norm_mat = mat3( tangent_n, bitangent_n, normal_n );
vec3 norm_dir = norm_mat * (texture2D( tex_normal, v_vTexcoord ).rgb * 2.0 - 1.0);
```

The alternative approach, where the normal map is in object space rather than tangent space, requires fewer GPU calculations, although I can't help you with that for now because I don't have any object-space normal maps to test with. Google, I'm sure, has good results somewhere.
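For what it's worth, here's a rough, untested sketch of that variant. With an object-space map there is no tangent or bitangent at all: you unpack the map value and rotate it straight into world space with the same inverse-transpose matrix (here passed to the fragment shader as an assumed uniform `u_inv_transpose`):

```glsl
// Fragment shader sketch for an OBJECT-SPACE normal map (untested).
uniform sampler2D normalmap;
uniform mat3 u_inv_transpose; // same inverse-transpose 3x3, set from GML
varying vec2 v_vTexcoord;
void main()
{
    // Unpack from [0,1] to [-1,1], then rotate into world space.
    vec3 obj_normal = texture2D( normalmap, v_vTexcoord ).rgb * 2.0 - 1.0;
    vec3 norm_dir = normalize( u_inv_transpose * obj_normal );
    gl_FragColor = vec4( norm_dir, 1.0 );
}
```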


But remember what I said earlier: if there is no scaling, you can rotate the normal and tangent vectors with the regular world matrix, but make sure the w component is zero so translation isn't added, i.e. (matrix * vec4(in_Normal, 0.0)).xyz. (There is no w component to worry about if you're using the 3x3 matrix.)
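As a sketch, the no-scaling version of those two vertex shader lines would look something like this, using GameMaker's built-in world matrix instead of the uniform:

```glsl
// Only valid if the world matrix contains no scaling.
// w = 0.0 keeps the matrix's translation out of the result.
v_vNormal  = (gm_Matrices[MATRIX_WORLD] * vec4( in_Normal,    0.0 )).xyz;
v_vTangent = (gm_Matrices[MATRIX_WORLD] * vec4( in_TexCoord1, 0.0 )).xyz;
```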

Vertex:

Code:

```
uniform mat3 transpose; // inverse transpose of the world matrix's upper-left 3x3
attribute vec3 in_Position;  // (x,y,z)
attribute vec3 in_Normal;    // (x,y,z)
attribute vec2 in_TexCoord0; // (u,v)
attribute vec3 in_TexCoord1; // (x,y,z) object-space tangent
varying vec2 v_vTexcoord;
varying vec3 v_vTangent;
varying vec3 v_vNormal;
void main(){
    gl_Position = gm_Matrices[MATRIX_WORLD_VIEW_PROJECTION] * vec4( in_Position, 1.0 );
    v_vTexcoord = in_TexCoord0;
    v_vNormal = transpose * in_Normal;
    v_vTangent = transpose * in_TexCoord1;
}
```

Fragment:

Code:

```
uniform sampler2D normalmap;
varying vec2 v_vTexcoord;
varying vec3 v_vTangent;
varying vec3 v_vNormal;
void main()
{
    vec3 normal_n = normalize(v_vNormal);
    vec3 tangent_n = normalize(v_vTangent);
    vec3 bitangent_n = cross( normal_n, tangent_n );
    mat3 norm_mat = mat3( tangent_n, bitangent_n, normal_n );
    // Unpack from [0,1] to [-1,1], then rotate into world space.
    vec3 norm_dir = norm_mat * (texture2D( normalmap, v_vTexcoord ).rgb * 2.0 - 1.0);
    gl_FragColor = vec4( norm_dir, 1.0 ); // visualizing the normal for debugging
}
```

The diffuse lighting amount is usually:

max(0.0, dot( norm_dir, light_dir) );

where light_dir is the direction from the fragment to the light.

Then you would multiply the result of that by the texture color's RGB.
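Putting those two steps into the fragment shader above, its last line could be replaced with something like this (`u_light_dir` is an assumed uniform holding a normalized fragment-to-light direction; for a directional light it can just be a constant direction):

```glsl
uniform vec3 u_light_dir; // normalized direction from fragment to light (assumed)
// ... after computing norm_dir as above:
float diffuse = max( 0.0, dot( norm_dir, u_light_dir ) );
vec4 base = texture2D( gm_BaseTexture, v_vTexcoord );
gl_FragColor = vec4( base.rgb * diffuse, base.a );
```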