So lately I've started working with shaders (as you probably saw with my light and shadow engine) and I've found that the "attribute vec3 in_Position;" part only gives the vertex positions as found in the model editor: when drawing the model at any location other than (0,0,0), you still get the vertex positions returned as if it were drawn at (0,0,0). Does somebody know how to help me with this? P.S. Where did the shader prefix go though?
attribute vec3 in_Position is the local-space coordinate of the model. When you use a shader, you will see that object_space_pos gets multiplied by a matrix. This matrix passes each vertex coordinate through a chain of transformations, working its way through each space: we go from local (model/object) space, to world space, to view space.

If you want the position of the model, it cannot be recovered from the combined matrix (it is computationally cheaper to multiply each coordinate by one combined matrix rather than by several in turn, but combining them mixes the information together). We also cannot always read the position directly out of the world matrix (which holds the world-space x, y, z we want), as it also encodes information about rotations. IF, however, the translation is applied as the final transformation (which is the most common case), we can extract it from the 4th column.

Fun fact: the reason we have a w component (and use 4D coordinates) is that translations are not directly possible with 3D matrix transformations; however, due to the way matrix-vector multiplication works, we can achieve a 3-dimensional translation by performing a 4-dimensional shear.

The values you want can be accessed through gm_Matrices[MATRIX_WORLD], by reading the appropriate cells of the matrix (sorry, I can't remember offhand whether the matrices in GM are row-major or column-major). You can access a matrix using 2D array notation:

Code:
mat4 myMatrix = gm_Matrices[MATRIX_WORLD];
float x = myMatrix[3][0];
float y = myMatrix[3][1];
float z = myMatrix[3][2];

Though, to be honest, it is far easier to just define a uniform and pass the position in using that. If you are planning on working with shadows/lighting, it would be a good idea to try to get an understanding of the different 3D transformation spaces.
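To make the "4D shear" idea concrete, here is a small plain-Python sketch (not GML or GLSL, and the helper names are my own) that stores a 4x4 matrix column-major the way GLSL does, so that m[3] is the fourth column holding the translation:

```python
def translation_matrix(tx, ty, tz):
    """Identity matrix with the translation placed in the fourth column."""
    return [
        [1.0, 0.0, 0.0, 0.0],   # column 0
        [0.0, 1.0, 0.0, 0.0],   # column 1
        [0.0, 0.0, 1.0, 0.0],   # column 2
        [tx,  ty,  tz,  1.0],   # column 3: the translation
    ]

def transform(m, v):
    """Multiply a 4-component vector v by a column-major 4x4 matrix m."""
    return [sum(m[col][row] * v[col] for col in range(4)) for row in range(4)]

world = translation_matrix(10.0, 20.0, 30.0)

# A local-space vertex with w = 1 picks up the translation (the 4D shear):
print(transform(world, [1.0, 2.0, 3.0, 1.0]))   # [11.0, 22.0, 33.0, 1.0]

# With w = 0 (a direction rather than a point) the translation is ignored:
print(transform(world, [1.0, 2.0, 3.0, 0.0]))   # [1.0, 2.0, 3.0, 0.0]

# Reading the position back out: the same cells as myMatrix[3][0..2] in GLSL.
x, y, z = world[3][0], world[3][1], world[3][2]
print(x, y, z)   # 10.0 20.0 30.0
```

This is also why the w-component trick matters: the same matrix moves points (w = 1) but leaves directions such as normals (w = 0) untouched.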
One of the things that makes shader code difficult is that you need to be aware of which space each of your variables is in at any point in your program. As an overview:

Local space (or object space): the space relative to the original model file. This is un-transformed, and is where things end up if you draw without applying any transformations.

World space: after transformations such as rotations and translations have been applied, our object exists in world space. This is the actual coordinate of the object in the game space or 3D room.

View space (or eye space): in order to have a camera, we transform everything into view space. This both rotates and moves everything relative to the camera. In view space, the point (0,0,0) corresponds to the position of the camera, and the z component represents distance from the camera.

Homogeneous clip space: we now scale things using a view frustum so that objects further away appear smaller. The relative scaling corresponds to the size of the viewing planes (both near and far), and these depend on the field of view.

Normalised device coordinates: before we can use on-screen positions, we first scale our coordinate system so that the centre of the screen corresponds to (0,0), with the upper-left corner being (-1,-1).

Screen space: after all of this, we simply scale and bias things back into screen space. Depending on your target system, this is either a value between 0 and 1 (with 0 being the upper-left corner and 1 the bottom-right), or between (0,0) and (view_wview, view_hview).

Hope this answers your question, and gives you a little more insight into the 3D transformation pipeline.
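The first few steps of that chain can be sketched in plain Python (again, not GML; the camera transform here is a made-up example, and real view matrices also rotate). It shows that applying world and view matrices one after another gives the same result as pre-multiplying them into a single combined matrix, which is what the shader does for efficiency:

```python
def translation_matrix(tx, ty, tz):
    """Column-major 4x4 identity with the translation in the fourth column."""
    m = [[float(r == c) for r in range(4)] for c in range(4)]
    m[3][0], m[3][1], m[3][2] = tx, ty, tz
    return m

def transform(m, v):
    """Multiply a 4-component vector v by a column-major 4x4 matrix m."""
    return [sum(m[c][r] * v[c] for c in range(4)) for r in range(4)]

def matmul(a, b):
    """a * b, column-major: applying the result == applying b, then a."""
    return [transform(a, col) for col in b]

local_vertex = [1.0, 0.0, 0.0, 1.0]          # local/object space
world = translation_matrix(5.0, 0.0, 0.0)    # place the object at x = 5
view  = translation_matrix(0.0, 0.0, -10.0)  # toy camera transform (no rotation)

# Step by step: local -> world -> view
world_pos = transform(world, local_vertex)   # [6.0, 0.0, 0.0, 1.0]
view_pos  = transform(view, world_pos)       # [6.0, 0.0, -10.0, 1.0]

# Or one combined matrix, multiplied once per vertex:
world_view = matmul(view, world)
print(transform(world_view, local_vertex))   # [6.0, 0.0, -10.0, 1.0]
```

Notice that once the matrices are combined, the original world-space translation (5, 0, 0) is no longer sitting cleanly in the fourth column, which is exactly why the model position can't be recovered from the full combined matrix.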