#### the_dude_abides

##### Member
I'm a noob, so bear that in mind. I'm looking at using vertex buffers and shaders for animating objects, and was wondering whether my current approach gains anything from this method of "drawing". I have my shape defined and can draw it, but am totally flummoxed by how you then re-position the vertices using a shader.

Some objects rotate around a specific point at a fixed distance. Their offsets from the origin for each angle have already been calculated, with the x difference / y difference stored in an array. The maths is only done once, and hopefully accessing one array position is cheaper than redoing multiple lengthdir_x/y calculations each time.
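For reference, the one-off table described above might be built something like this (a minimal sketch — `joint_len` and the Create-event placement are assumptions, not from the original post):

```gml
// Create event: precompute the x/y offset of a joint sitting joint_len
// pixels from the origin, for every whole degree (0..359).
var joint_len = 64; // assumed example length
for (var a = 0; a < 360; a += 1)
{
    is_array_joint[a, 0] = lengthdir_x(joint_len, a);
    is_array_joint[a, 1] = lengthdir_y(joint_len, a);
}
```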

1) If the shader is just saying "this corner was here before, and now I want it over there", is there anything to still be gained from handing it to the GPU in a vertex buffer? This is my code in a step event, from when I was just drawing textured primitives:
Code:
```
var is_x = x;
var is_y = y;
var is_ang = round(point_direction(is_x, is_y, mouse_x, mouse_y));
// wrap back into the 0..359 range the array covers (round() can return 360)
if (is_ang >= 360)
{
    is_ang -= 360;
}
image_angle = is_ang;
first_x = is_x + is_array_joint[is_ang, 0];
first_y = is_y + is_array_joint[is_ang, 1];
```
Could I pass the origin point (is_x, is_y) to the shader, along with 'is_ang' and the array, and then have it perform the basic maths of:

is_x + is_array_joint[is_ang, 0];

to get the positions? Which I guess adds up to slightly less for the CPU to do?
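If it helps, the CPU-side half of that idea might look roughly like this (a hedged sketch — `sh_joint`, the uniform names and the event placement are all made up for illustration):

```gml
// Create event: look the uniform handles up once, not every step.
u_origin = shader_get_uniform(sh_joint, "u_origin");
u_offset = shader_get_uniform(sh_joint, "u_offset");

// Draw event: the array lookup stays on the CPU; only the result is sent.
shader_set(sh_joint);
shader_set_uniform_f(u_origin, is_x, is_y);
shader_set_uniform_f(u_offset, is_array_joint[is_ang, 0], is_array_joint[is_ang, 1]);
vertex_submit(joint_buffer, pr_trianglelist, sprite_get_texture(sprite_index, 0));
shader_reset();
```

Note the whole 360-entry array isn't uploaded — only the two floats for the current angle — which keeps the uniform traffic tiny.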

2) If there is still some benefit to be had from using a vertex buffer and shader: how would I go about doing it?

GML:
```
vertex_format_begin();
format = vertex_format_end();
joint_buffer = vertex_create_buffer();
vertex_begin(joint_buffer, format);

vertex_position(joint_buffer, second_x, second_y);
vertex_texcoord(joint_buffer, tex_left, tex_bottom);
vertex_colour(joint_buffer, c_white, 1.0);
vertex_float2(joint_buffer, /* what goes here? */, /* what goes here? */); // <<< ??

vertex_end(joint_buffer);
```
This is trimmed down for readability. It's 2d so I am only using x and y positions, then the texture coordinates and colour.

vertex_format_add_custom etc. is my guess at how I tell it to change the position, but what do I put for the x / y values when defining the buffer? It seems like I don't have anything fixed to add in at this point, and it's in the shader where I'd define the value of the floats. They need to be included in the format here, so do I just put a fixed value in for now and then update it in the shader?
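For comparison, one possible shape for a format with an extra attribute (a sketch, not a definitive answer — whether the custom slot is needed at all depends on the approach, and note the values written with vertex_float2 are real per-vertex data baked into the buffer, not placeholders the shader later overwrites):

```gml
// Declaration order must match the order values are written per vertex.
vertex_format_begin();
vertex_format_add_position();   // x, y -> in_Position
vertex_format_add_texcoord();   // -> in_TextureCoord
vertex_format_add_colour();     // -> in_Colour
vertex_format_add_custom(vertex_type_float2, vertex_usage_texcoord); // the extra floats
format = vertex_format_end();
```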

Presumably the shader needs telling what 'is_x' / 'is_y' / 'is_ang' / 'is_array_joint' are as values, but I have no idea what changes the passthrough vertex shader, or fragment shader, would require to alter the vertices' positions.
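For what it's worth, the vertex-shader side of that idea might look something like this (a sketch only — `u_origin` / `u_offset` are invented uniform names, and the fragment shader would need no changes at all):

```glsl
attribute vec3 in_Position;
attribute vec4 in_Colour;
attribute vec2 in_TextureCoord;

varying vec2 v_vTexcoord;
varying vec4 v_vColour;

uniform vec2 u_origin;  // is_x, is_y, set from GML
uniform vec2 u_offset;  // is_array_joint[is_ang, 0..1], set from GML

void main()
{
    vec4 pos = vec4(in_Position.xyz, 1.0);
    pos.xy += u_origin + u_offset; // mirrors: first_x = is_x + is_array_joint[is_ang, 0]
    gl_Position = gm_Matrices[MATRIX_WORLD_VIEW_PROJECTION] * pos;
    v_vColour = in_Colour;
    v_vTexcoord = in_TextureCoord;
}
```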

Any help with this would be appreciated.

#### the_dude_abides

##### Member
Well, I made some progress on this:

GML:
```
attribute vec3 in_Position;
attribute vec4 in_Colour;
attribute vec2 in_TextureCoord;
//attribute float in_Weight;
varying vec2 v_vTexcoord;
varying vec4 v_vColour;

//uniform float u_time;
void main()
{
    vec4 pos = vec4(in_Position.xyz, 1.0);
    pos.x = pos.x + 600.;

    gl_Position = gm_Matrices[MATRIX_WORLD_VIEW_PROJECTION] * pos;

    v_vColour = in_Colour;
    v_vTexcoord = in_TextureCoord;
}
```
pos.x = pos.x + 600.;

This part ^^ made the difference, and actually moved what was being drawn. However, assigning to it directly wouldn't work:

pos.x = 600;

'assign' : cannot convert from 'const mediump int' to 'float'
syntax error

So, if '600' were set as a 'const mediump int', this assignment would work? I guess another look at the shader instructions is required.

#### sp202

##### Member
pos.x = 600.; should work, seems like you were missing the decimal point.
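For anyone landing here later, GLSL ES is strict about literal types — a bare integer literal won't implicitly convert to a float in an assignment:

```glsl
pos.x = 600;        // error: 600 is an int literal, pos.x is a float
pos.x = 600.;       // OK: float literal (600.0 also works)
pos.x = float(600); // OK: explicit conversion
```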


#### the_dude_abides

##### Member
pos.x = 600.; should work, seems like you were missing the decimal point.
Just saw this when I was looking through old comments to see if a current problem was ever mentioned.

This is a bit daft as it's rather late to reply now, but thanks for responding. For some reason I wasn't aware of anyone posting, and I don't want to be rude.

#### Yal

##### 🐧 *penguin noises*
GMC Elder
There's a function to transform a vertex (which works just fine if you treat the vertex data as a 3-point vector) using a transformation matrix... I think it's called matrix_transform_vertex - it's listed on the "Matrix" page in the manual. So you could do all this GML-side if the shaders get too difficult... shaders are usually faster, of course.
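A rough sketch of that GML-side route (assuming the function is matrix_transform_vertex and that `joint_len` is the joint's distance from the origin — both assumptions, so check the manual page mentioned above):

```gml
// Build a matrix that rotates by is_ang degrees about z, then translates
// to (x, y), and push the joint's rest position through it.
var m = matrix_build(x, y, 0,       // translation
                     0, 0, is_ang,  // rotation (degrees, about z)
                     1, 1, 1);      // scale
var p = matrix_transform_vertex(m, joint_len, 0, 0);
first_x = p[0];
first_y = p[1];
```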


#### the_dude_abides

##### Member
@Yal
Thanks, I did come across someone else's example and can see how it works. At least in the sense of manipulating points linearly in a shader.

The problem seems to be that I wanted to use it for sequential maths, like:

Give the shader a start point, and an end point, and then use a script for arcing trajectory to compute a curve between them.

I can do this on the CPU side, but hoped to pass the computational expense off to the GPU. In the end, IIRC, I was told that's not what you can do with shaders.
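For the record, the CPU-side version of that arc is cheap enough that it rarely needs offloading; a minimal sketch with made-up names (`step_count`, `total_steps`, `arc_height`):

```gml
// t runs 0..1 along the path; 4*t*(1-t) peaks at 1 halfway along,
// giving a simple parabolic arc between the two points.
var t  = step_count / total_steps;
var px = lerp(start_x, end_x, t);
var py = lerp(start_y, end_y, t) - arc_height * 4 * t * (1 - t);
```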

So... I then thought to myself: "oooh! C++ works with the shader language, and seemingly has more access to it than GMS does, so why not learn an entirely new programming language and code my first DLL as a means of dealing with these things" (I guess using C++ to make a compute shader possible, or some such fancy thinking way beyond my pay grade and comprehension)

and then I thought "yeahhhh - no! I'll stick to GML for now"

#### Yal

##### 🐧 *penguin noises*
GMC Elder
Compute shaders are a thing, but a GPU is much better at doing the same (very basic and inaccurate) computation 100,000 times in parallel than at doing one complicated computation once... if you're just going to offload a single computation to a shader, you'll most likely get a slower result than letting the CPU handle it, thanks to the communication overhead between the CPU and GPU.