the_dude_abides
Member
I know that using a shader allows mathematical work to be offloaded onto the GPU, so that vertices can be translated, scaled, rotated and so on. The point of doing this is that it takes the graphics-side strain off the CPU.
But what I'm wondering is whether that information, once calculated by the GPU, can be returned in any way to the CPU, for processes that are not graphical or that can't necessarily be implemented on the GPU.
As an example:
I am filling an mp_grid where each cell is one pixel, and it stores the path of an object, including the area the object's size will cover as it travels. Some trigonometry is used, then some basic maths. Nothing super advanced, but applied to every object it adds up to being moderately costly.
Having experimented with other approaches (like drawing the shapes that all the paths make to a surface, putting that onto an object, and then putting the object into the grid), this is the least costly way I've found. It seems like the kind of thing the GPU could handle, since it's just maths calculations, but I can't mix and match the two sides in GLSL (?)
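For context, the surface-based variant I mean looks roughly like the sketch below. This is only a sketch: `path_surf`, `grid`, `grid_w` and `grid_h` are placeholder names I've made up for this example, and per-pixel readback with `draw_getpixel` is notoriously slow, which is part of why I'm asking whether there's a better way to get data back from the GPU.

```gml
// Render the swept-path shapes to a surface (this part could use a shader),
// then read the pixels back one by one and mark the matching mp_grid cells.
surface_set_target(path_surf);   // path_surf: a surface created earlier (placeholder)
draw_clear_alpha(c_black, 0);
// ... draw the shapes that the paths sweep out here ...
surface_reset_target();

// Readback: one grid cell per pixel, as described above.
// draw_getpixel is very slow per call, so a full-surface loop like this
// can easily cost more than the CPU maths it replaces - profile it first.
surface_set_target(path_surf);
for (var i = 0; i < grid_w; i += 1)
{
    for (var j = 0; j < grid_h; j += 1)
    {
        if (draw_getpixel(i, j) != c_black)
        {
            mp_grid_add_cell(grid, i, j); // mark this cell as occupied
        }
    }
}
surface_reset_target();
```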
1) Could a shader actually set cells on the grid? I'm not aware of any GML-specific commands that work within Studio's implementation of GLSL.
2) If the grid isn't directly accessible, whether because the two sides are different languages or because the GPU simply can't communicate with the CPU in that fashion, can the data calculated by the GPU be returned to the CPU in any other way? Maybe as an array holding the positions of all the cells to be marked as occupied?
The cost of my method is heavy enough that if I can get any part of it handled elsewhere, the result would be very useful. Can it be done?