GM Version: GM:S 2 (2.2.1.357 IDE, 2.2.1.287 Runtime). Target Platform: Any (as long as it supports GLSL ES shaders). Difficulty: Hard. Download Example: Google Drive.

Summary: This tutorial will cover the difficulties of lighting and my specific solution using my GPU-based 2D ray-tracing. The goal here is to get shadow-casting lights working in the order of hundreds or even thousands, at least for pixel-art based games. I think I've accomplished that here! If you've been following me on Twitter, you'll have seen that I reached almost 3,000 ray-traced lights with a radius of 128 px before dipping below 60 FPS. Obviously the larger the radius, the fewer the lights. Note, however, that I actually hit a GameMaker limit on the bandwidth I can push between the CPU and GPU, leaving the maximum number of lights under all circumstances around 3,000 before GameMaker bottlenecks itself. I ran this test on a 1080 Ti and never reached max load on either the CPU or GPU, which in my opinion points to some bandwidth limitation. Please follow my Twitter for more updates and features!

TL;DR: If you'd just like to use the system, download the example from the link above and skip to the bottom of this tutorial, which explains the basic API.

Tutorial: Shadow-casting is quite a complex problem for 2D games, and the usual approach is built on polygonal shadow casters using something like SDFs (signed distance fields). My approach, however, allows shadows to be cast from a surface texture representing the game environment--even from moving shadow casters! This is accomplished using a multi-pass shader with a decent amount of optimization, and the result was rather astounding! So let's go over the complications and see how it works. I'll break the shader itself down into two parts, then we'll go over usage and modification as well as the simple API.
* Note: I'll be using images to represent any code for syntax purposes, but see the links above for text versions.

Complications: GLSL, and shaders in general, have a lot of restrictions, but I'd say the biggest hurdle is understanding texture sizes as well as UV coordinate translations.

SURFACES: When you create a surface, GameMaker will actually upscale the surface width and height to the nearest power of two for optimization purposes. For example, if you create a surface of size 312 x 180, it'll be upscaled to 512 x 256. If you query the size of the surface, however, it'll return the expected 312 x 180. So you'll need to make sure the sizes you pass to the shaders are likewise upscaled to the nearest power of 2.

UV COORDINATES: The UV coordinate system represents XY coordinates in 0...1 space. This is done by simply dividing the XY coordinate by the width (W) and height (H) of the texture. This gives us the ratio of XY to WH as a floating-point number, which GPUs handle very well. If it's easier to conceptualize: the XY-to-WH ratio is basically what percentage XY is of WH. So for X = 55, Y = 120 and a texture size of W = 320, H = 180, we can do the following: Code: uv = xy / wh; u = 55 / 320 = 0.171875 = 17.1875%; v = 120 / 180 = 0.666667 = 66.6667%;

RELATIVE UV SURFACE SPACE: When dealing with multiple surfaces, each surface has its own coordinate space in the range 0...1. This means that the UV for an XY position on one texture will not translate to another texture; you have to do that manually. Code: XY = 55 x 120; SURF_A = 320 x 180; SURF_B = 640 x 360; UV_A = 55/320 x 120/180 = 0.171875 x 0.666667; UV_B = 55/640 x 120/360 = 0.085937 x 0.333333; See the difference? This tripped me up a number of times when I wasn't conscious of the problem.
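The UV math above can be sanity-checked on the CPU side. Here's an illustrative Python sketch (not GM or GLSL code, just the arithmetic; the function name `xy_to_uv` is mine) showing how the same XY point lands on different UVs for differently sized surfaces:

```python
def xy_to_uv(x, y, w, h):
    # UV is simply the XY position expressed as a fraction of the texture size.
    return (x / w, y / h)

# Same point, two surfaces of different sizes -> two different UVs.
uv_a = xy_to_uv(55, 120, 320, 180)  # SURF_A: (0.171875, 0.666667)
uv_b = xy_to_uv(55, 120, 640, 360)  # SURF_B: (0.085937, 0.333333)
```

This makes the relative-surface-space trap concrete: a UV computed against SURF_A points at an entirely different pixel on SURF_B.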
When converting from XY to UV, we can achieve the same result by multiplying XY by the reciprocal of WH--which is what I use in my code: Code: UV = XY * (1/WH); Inversely, we can convert a UV coordinate back to XY space simply by multiplying it by the WH of the texture: Code: UV = 0.171875 x 0.666667; XY = UV * WH; XY = 0.171875 * 320 x 0.666667 * 180 = 55 x 120;

PARALLEL PROCESSING: Shaders work by sending one program to every pixel, and every pixel is processed using that exact same program. This means shaders need to be written localized to each pixel, rather than as a single program operating on a set of pixels.

Technique: The idea here is to ray-trace every pixel, but using only the minimum number of rays, which is optimally the circumference of the light circle: 2*PI*R. Notice that any one ray covers EVERY pixel along its length--even pixels the ray doesn't reach. This is because any pixel at a distance from the center of the light will be lit up if that distance is shorter than the ray, or cast in shadow if that distance is longer than the ray. The way we do this is with a multi-pass shader--or two shaders that work together. The first shader traces every ray around the circle and writes the lengths of the rays onto a grid for lookup by the second shader. The second shader then, for each fragment/pixel, looks up the appropriate ray and checks whether the pixel falls inside or outside the bounds of that ray. This illustration shows the basic idea behind the first shader. Of course, the shader traces the entire circle, unlike my illustration. Because of this method we're running a shader over two different textures of different sizes. The ray-tracer uses a texture size optimized for the number of rays being traced. For example, if the number of rays is 2*PI*R, then for a light radius of 128 px we get 804 total rays.
The texture size is a square whose side is the nearest higher power of two of the sqrt of the number of rays: Code: rays = 2 * pi * r; size = 1 << ceil(log2(sqrt(rays))); log2(x) gives us the (fractional) power of 2 needed to reach x, so we ceil/round up that result to get the next highest whole power of 2. The second texture size is based on the radius of the light, using a similar method: the nearest higher power of 2 of 2 * radius. Code: 1 << ceil(log2(radius * 2)); Lastly, the light rendered by the second shader, along with all the other lights, is pushed to a final surface covering the screen.

Vertex Shader: Spoiler This vertex shader is exactly the same for both the Ray-Tracer and the Light-Sampler; all it does is the basics, passing each fragment its coordinate as well as the texture being rendered. This is my compacted and broken-down version of the default GM:S 2 vertex shader.

Ray-Tracer + Light Sampler Constants: Spoiler Ray-Tracer: Light Sampler: Let me first go over the shader constants, as they're pretty necessary to understand, especially if you want to modify the shaders: MAXRADI_SIZE is the maximum radius this shader can handle. PI is, well, pi. RAYTEXT_SIZE is the texture size of the ray-tracing texture. TEXTURE_SIZE is the size of the light texture we're rendering the light with. in_TexData is a vec3 holding some important sizing data based on the light texture size. X: the texture size. Y: 1/2 the texture size (also MAXRADI_SIZE). Z: the reciprocal of the texture size. These are used for converting between XY and UV texture space. in_TexCenter is a vec2 representing the relative middle point of the light as the center of the texture. PI2, or 2 * pi (also known as tau), simplifies calculations requiring 2 * pi. You can modify MAXRADI_SIZE, RAYTEXT_SIZE and TEXTURE_SIZE to change the maximum size of the lights you'd like to render.
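The two sizing formulas above can be mirrored in Python as a sanity check (illustrative only; the names `next_pow2`, `ray_texture_size` and `light_texture_size` are mine, not part of the GM project):

```python
import math

def next_pow2(x):
    # 1 << ceil(log2(x)): the smallest power of two >= x.
    return 1 << math.ceil(math.log2(x))

def ray_texture_size(radius):
    # One ray per circumference pixel; the square ray texture only needs
    # a side of the next power of two above sqrt(ray count).
    rays = 2.0 * math.pi * radius
    return next_pow2(math.sqrt(rays))

def light_texture_size(radius):
    # The square light texture must hold the full light diameter.
    return next_pow2(radius * 2)
```

For a 128 px radius this gives ~804 rays, a 32 x 32 ray texture and a 256 x 256 light texture, matching the sizes used in this tutorial.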
Ray-Tracer: Spoiler The ray-tracer shader does a compact ray-trace test, tracing a number of rays from the center of the light outward. The results of these rays are mapped onto a grid based on their position. The result looks something like this for the light below in the spoiler: Looking at the image in the spoiler, you can see that the block sits at an angle to the light between 180 and 270 degrees. If we look at our red/black ray texture, you can think of it broken down into four horizontal bands: top (0 to 90), top-middle (90 to 180), middle-bottom (180 to 270) and bottom (270 to 359). So you can see that the block falls in the middle-bottom of the texture, which, as I said, correlates roughly to angles 180 through 270. You can tell because the "red" rays are longer and more red, while "black" rays are shorter, less red and more black: the color of a ray from black to red encodes the length of the ray. Spoiler: Light Output This may look confusing, but it's a nice, simple method for easy processing of rays. If you didn't know, a 2D grid is basically a curved 1D line; this means all XY coordinates can be mapped to a 1D index. Each fragment represents a ray, and the XY position of each fragment represents the index and, simultaneously, the angle of the ray. This means that if we define the total number of rays, we can convert a fragment's XY position into a 1D index and find the ratio of that index to the total number of rays. We then check whether the fragment's 1D index is between 0 and the ray count; if it is, we can trace this ray. Tracing a ray is done by converting the fragment's ratio of index/ray_count to an angle `theta` and finding the X,Y offset direction to step in. We then step over every pixel until we hit the radius of the circle OR a collision. We then store the length of the ray as a color by converting the length to UV space as `length / radius`.
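Before the line-by-line breakdown, the whole tracing step just described can be sketched on the CPU. This is an illustrative Python version, not the shader itself: `is_solid` stands in for sampling the collision texture's alpha, and the function name is mine.

```python
import math

def trace_ray(theta, radius, light_x, light_y, is_solid):
    # Step one pixel at a time along the ray until we leave the light
    # radius or hit a solid pixel, then normalize the length to 0..1.
    step_x, step_y = math.cos(theta), -math.sin(theta)
    length = radius
    for d in range(radius + 1):
        rx = math.floor(step_x * d + 0.5)  # GLSL-style round()
        ry = math.floor(step_y * d + 0.5)
        if is_solid(light_x + rx, light_y + ry):
            length = math.hypot(rx, ry)
            break
    return length / radius  # stored as the ray texture's red channel
```

For example, with a wall at x >= 10, a rightward ray from the origin stops about 10 px out and stores 10/radius; with no wall it stores the full 1.0.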
LINES 15 TO 19: To find this ratio, though, we first have to decide how many rays need to be cast. I have found that the optimal number of rays is directly proportional--within a small degree of error I'll go over later--to the circumference of the circle: Code: c = 2 * pi * r; rays = 2 * pi * r; We now know that the only pixels we need to process as rays are those whose XY coordinates map to an index between 0 ... 2*PI*R. To decide which pixels those are, we convert the pixel's UV coordinate to XY: Code: XY = UV * WH; Then we convert this XY coordinate to a 1D index: Code: INDEX = (Y * W) + X; Like I said, all grids are just curved lines. Each row is a line of W pixels, and the specific line we want is the Y-th one, so we skip Y * W pixels; the offset along that line is then X. See the following link for further explanation. That's the basic gist of lines 15 to 19: we take the pixel's coordinate, convert it to XY space, convert that XY coordinate to 1D space, then check whether this pixel has an index between 0 and 2*PI*R; if not, this pixel is NOT a ray.

LINES 20 & 21: Now that we've determined whether or not a pixel is a ray, we need to find the angle of that ray. As I mentioned previously, the angle is directly equivalent to the ratio of the pixel's 1D index to the total number of rays: Code: ratio = index / ray_count; Then we need the angle--except we need it in radians. Radians express angles in the range 0 to 2*PI, or in some cases -PI to PI; here in GLSL we use 0 to 2*PI. So instead of converting degrees to radians, we skip degrees entirely and convert our ratio directly to radians, simply by multiplying the ratio by 2*PI: Code: theta = ratio * 2 * PI; We then take the angle and find the step direction. This uses the same method as `lengthdir_x` and `lengthdir_y` in GameMaker, except we exclude the "length" part.
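Lines 15 to 21 boil down to two small conversions, sketched here in Python (illustrative; the function names are mine):

```python
import math

def fragment_index(u, v, w, h):
    # UV -> XY -> 1D index: row Y contributes Y * W pixels, plus the X offset.
    x = int(u * w)
    y = int(v * h)
    return y * w + x

def ray_theta(index, ray_count):
    # The ray's angle in radians is its fraction of the full circle times 2*PI.
    return (index / ray_count) * 2.0 * math.pi
```

So the fragment dead-center of a 32 x 32 ray texture has index 16 * 32 + 16 = 528, and the ray halfway through the ray count points at angle PI.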
Lengthdir_xy is this: Code: lengthdir_x(theta, length) = cos(theta) * length; lengthdir_y(theta, length) = -sin(theta) * length; Note that we use `-sin()` instead of `sin()` because in most game engines, including GameMaker, the y-axis of the Cartesian coordinate plane is flipped. So for the step direction, we just exclude the length and use an implicit length of 1: Code: stepx = cos(theta); stepy = -sin(theta);

LINES 23 TO 27: Alright, finally we can trace our ray, now that we know the ray angle and step direction. We do this with a for() loop whose length is the maximum radius allowed by the shader. This is because in GLSL the loop bounds cannot change between pixels--the process MUST be exactly the same for every pixel. So we set the loop size to the maximum allowed radius and leave it. However, we only want to ray-trace within the loop as long as the ray hasn't reached the light radius OR hit a collision. This is a bit of a complex check, but on line 26 the variable `rad` basically says: IF the loop iterator `d` exceeds the light radius, STOP; or if we've hit a collision on the ray's path, STOP. The first thing we need to do is define the ray itself. We multiply the step by the current iterator--giving us the full `lengthdir_xy` by multiplying the directional step by the length the ray has traveled--and round to the nearest whole pixel position. We store this in a variable because we want to know the final length of the ray later on: Code: ray = round(step * d); Round() doesn't exist in GLSL ES, so we use its equivalent, `floor(x + 0.5)`. Next we need to actually check for collisions under the ray. This is done by taking the ray itself, translating it onto the light's actual XY position in the game world, then converting that XY position to the texture space of the collision/world texture.
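The step direction and the round() workaround can both be shown in a couple of lines of Python (illustrative; the function names are mine):

```python
import math

def glsl_round(x):
    # GLSL ES has no round(); floor(x + 0.5) is the standard stand-in.
    return math.floor(x + 0.5)

def step_direction(theta):
    # lengthdir_x/lengthdir_y with an implicit length of 1;
    # -sin because GameMaker's y-axis points downward.
    return (math.cos(theta), -math.sin(theta))
```

Note how `step_direction(PI / 2)` gives (0, -1): an angle of 90 degrees points "up" on screen, which is negative y in GameMaker.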
Code: XY = LIGHT_XY + RAY_XY; UV = XY * WORLD_WH; In this case WORLD_WH is actually passed to the shader in reciprocal form (1/WH), as mentioned before, so the multiply is really a divide. Now we can take the UV position, sample the world/collision texture, and check whether there is a collision under the ray. We ONLY sample the pixel's ALPHA value, not RGB, because the color isn't relevant but the alpha is: 0 means no collision, 1 means collision. Finally we do our multi-layer ray check on line 26. We set `rad` to the iterator `d`; if `d` is greater than the light radius, we stop tracing. Then we add the light collision result to that. The collision result is only 0 or 1, so we multiply it by the light radius to get 0 or R, which means that if a collision occurs, `rad` exceeds the light radius and tracing stops.

LINES 29 TO 32: Lastly we take our final ray length and convert it to 0...1 simply by taking the ratio of the ray length to the light radius--the length of the ray will NEVER exceed the light radius. Note that the ray itself is in the form of a vec2(x,y), because it's the step we accumulated in the for() loop. So we first convert that vec2 to a length using GLSL's built-in length() function, which is in essence the distance of vec2(x,y) from (0,0). Finally we pass the result to the fragment. Ugh, so long--done.

Light Sampler: Spoiler Alright, bear with me--the light sampler is much easier to understand. Simply put, the light sampler does just that: it builds the light from the ray lengths previously calculated by the other shader.

LINES 26 TO 30: Again we convert this fragment's XY position to a UV coordinate, then get the distance of the fragment to the center of the light, relative to the texture--not the game world. If that distance is smaller than the radius of the light, we can process this pixel as within the light circle.
LINES 31 TO 34: Like in the first shader, we first determine the number of rays, which we've optimally decided is 2 * PI * R. Then we want the angle from the light center to this pixel; this angle correlates directly to the ray we want to look up. We find it by taking the difference between the pixel's XY position and the light center and converting that to an angle using ATAN2--or in GLSL, atan(y, x). Yes, Y then X, not X then Y. So first get the delta position: Code: DELTA = XY - CENTER; Then, thanks to @XOR, who optimized my indexing function: to compute the ray index we take the ratio of the fragment-to-center angle over the full circle, then multiply that by the total number of rays. The key here is pulling the ratio from `atan(y,x)/PI2` using fract()--which gives us the fractional/decimal part of the result, which is our ratio. I'll admit I don't understand 100% how this works; my original idea was to take the angle, convert it to degrees, get the ratio of the angle over 360 degrees, then multiply by the ray count to get the index. XOR's solution is more efficient, though. Now, for each pixel we actually need to check two rays to determine whether the pixel is lit. This is because the directly correlated ray might leave the pixel unlit while the next nearest ray lights it up. So in this case we check both rays on either side of the pixel's angle. This is done by computing an offset of either -1 or +1 to the nearest index: Code: // I know you can change this to '1 - (2 * floor(fract(index) + 0.5));' but it results in visual errors. 1 + (-2 * (1. - floor(fract(index) + 0.5))); How does this work? When we computed our first index, it wasn't rounded, leaving a fractional part in either the upper half (0.5 to 0.99) or the lower half (0.0 to 0.49). This means the index we computed is fractionally closer to either the next or the previous index.
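XOR's fract() trick can be checked numerically. Here's an illustrative Python version (the function name is mine, and `% 1.0` plays the role of GLSL's fract()):

```python
import math

def ray_index(dx, dy, ray_count):
    # atan2 returns -PI..PI; taking the fractional part of angle / (2*PI)
    # wraps negative angles into the 0..1 range with no special-casing,
    # so no degree conversion or negative-angle fixup is needed.
    ratio = (math.atan2(dy, dx) / (2.0 * math.pi)) % 1.0
    return ratio * ray_count
```

For 804 rays, a pixel directly right of the center maps to index 0, directly "above" (positive y) to 201, and directly "below" (negative y, where atan2 is negative) wraps cleanly to 603.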
We round off this fractional part, giving us 1 if the fractional part is >= 0.5, or 0 if it is < 0.5. My goal here is to get this 0-or-1 value into the form -1 or 1, so I do a little trick to drop down to -1 when we're in the lower half: 1 stays as the upper bound, and the lower bound becomes 1 - 2 = -1.

LINES 35 TO 42: Now that we have both our ray index and the nearest ray index, we need to check whether this pixel is within range of the ray. This is done by comparing the distance of the pixel from the light center to the length of the ray. If the ray is longer, the pixel is lit; if not, the pixel is in shadow. If the pixel does not pass the first ray test, we check the nearest ray using the same process. This covers both rays on either side of the pixel and makes sure the pixel is properly lit or not. So how do we find the ray? We take our computed ray indices and reverse what we did in the ray-tracing shader: convert the 1D index back to a 2D XY coordinate. Code: X = mod(index, RAYTEXT_SIZE); Y = floor(index / RAYTEXT_SIZE); We then convert the ray's XY coordinate in the ray-tracing texture to a UV coordinate so we can sample the pixel, and convert the sampled ray length from UV space back to XY space by multiplying it by the light radius--this is done in the getRayFromIndex(index) function: Code: length = texture2D(ray_tex, UV).x * radius; Now that we have the ray length, we compare it to the distance of the fragment/pixel from the light to either light up or shadow the pixel. Then we multiply the lit/shadow result (1 or 0) by the ratio of the pixel's distance from the light to the light radius, which performs a linear gradient tonemap on the final light output. Then we repeat the process with the nearest ray if the pixel is not lit.
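The index-to-XY lookup and the ±1 neighbor trick can both be verified in Python (illustrative; fract() is emulated and the names are mine):

```python
import math

def fract(x):
    # GLSL fract(): the fractional part of x.
    return x - math.floor(x)

def index_to_xy(index, tex_size):
    # Reverse of INDEX = Y * W + X: column is the remainder, row the quotient.
    i = int(index)
    return (i % tex_size, i // tex_size)

def neighbor_offset(index):
    # 1 + (-2 * (1. - floor(fract(index) + 0.5))):
    # +1 when the fractional part is >= 0.5, -1 when it is below.
    return int(1 + (-2 * (1.0 - math.floor(fract(index) + 0.5))))
```

So index 70 on a 32-wide ray texture lands at column 6, row 2, and an unrounded index of 5.3 checks its previous neighbor while 5.7 checks its next.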
LINES 44 TO 48: Line 44 does a final check of the pixel against the world/collision texture to see whether the pixel is over a collision. This handles an edge case where the math doesn't quite work out for some pixels over collision points. Finally, multiply the resulting intensity `result` by the light color, then pass the result out to the fragment. Hopefully that was a comprehensive enough breakdown--I wrote this all in a single sitting.

Example + API: Finally, let me go over the API; it's very simple! Six scripts in total--four main ones plus two helpers--and a bit of excess to get it running. The last two scripts are used by the other four, so you typically won't need them. I'll create an example here to show you how to set it up. First off, the scripts themselves, in order of usage: Code: // This will create the ray-traced lights subsystem. system = Scr_QRT_Create(raytracer, lightsampler, tracertexture, samplertexture, max_radius); // This will create a new light with the attributes in arrays as [x,y,radius] and [r,g,b]. light = Scr_QRT_Light([x,y,r], [r,g,b]); // This will ray-trace a single light; you'd then need to render the result to the light's surface. Scr_QRT_Tracer(system, light, collision_surface); // This will render the actual light output, which should then be rendered to the game's light surface. Scr_QRT_Sampler(system, light, light_surface, collision_surface); // Returns the nearest higher power of 2 for the specified radius (useful for getting texture sizes). Scr_QRT_TexLightPow2Size(radius) // Returns the nearest higher power of 2 for the specified length (useful for getting texture sizes). Scr_QRT_TexPow2Size(length) Before we write any code we need to set up our resources: sprites, shaders, scripts. Import the scripts and shaders. Name the shaders and append the shader's max light radius to the end of the name for clarity. E.g.
we have two shaders, `Shd_RayTracer` and `Shd_LightSampler`, and in this example we're creating lights with a max radius of 64 px, so set the shaders up as `Shd_RayTracer64` and `Shd_LightSampler64`. This is important because the maximum light radius MUST be hard-coded into the shaders, so we keep the radius in the names to make that clear. Now create 3 sprites: Spr_Collision (size: 320 x 180), Spr_RayTracer64 (size: 32 x 32) and Spr_LightSampler64 (size: 128 x 128). In Spr_Collision, feel free to experiment and hand-draw your own environment! Just note that these shaders will NOT properly handle anything thinner than 2 px diagonally when drawing angled images. The sizes here are important; they're optimized for the radius of the light. If our light radius is 64 px, then we need a square texture of 128 x 128 px to draw the light with in Shd_LightSampler--this texture size MUST be a power of 2. For Shd_RayTracer, the texture is a square whose size equals the nearest power of 2 of the sqrt of the number of rays being traced, and the number of rays is 2*PI*R. As an example, if our max light radius is 64, we'll need 402 rays to trace; we sqrt this for a square size of ~20 and bring that up to the nearest power of 2: 32. All texture sizes must be the nearest power of 2 of the desired texture size. First create a new object, call it Obj_Light, and add the events: Create and Clean Up. Create will initialize variables; Clean Up will free the light's dynamic data. Obj_Light - Create Event: Initializes the light's system and surface variables. Code: light_surface = -1; light = -1; Obj_Light - Clean Up Event: Frees the light's dynamic resources to avoid memory leaks.
Code: if (surface_exists(light_surface)) surface_free(light_surface); if (ds_exists(light, ds_type_list)) ds_list_destroy(light); Now we'll create an object, name it something like `Obj_RayTracer`, and add these four events: Create, End Step, Clean Up, Draw End. Create will set up our lighting system, End Step will create our lights on mouse click, Clean Up will free our system, and Draw End will draw the light system. Obj_RayTracer - Create Event: This creates our system and defines which shaders, sprites and maximum light radius we're using. Code: // Holds the surface all the lights will be rendered to. light_surface = -1; // Holds the surface our collision map will be generated onto. collision_surface = -1; // Creates our light subsystem. system = Scr_QRT_Create(Shd_RayTracer64, Shd_LightSampler64, Spr_RayTracer64, Spr_LightSampler64, 64); Obj_RayTracer - End Step Event: This spawns a new light at the mouse position when we left-click. Note here that the light's RGB color is in the range 0...1, not 0...255; you can convert easily by dividing an RGB 0...255 value by 255. Code: // On left mouse button press, create a new light object. if (mouse_check_button_pressed(mb_left)) with(instance_create_depth(0, 0, 0, Obj_Light)) { // Creates a new light with position = mouse_xy, radius = random(16 to 63), color = r(0.1), g(1.0), b(0.5). light = Scr_QRT_Light([mouse_x, mouse_y, random_range(16, 63)], [0.1, 1., 0.5]); } Obj_RayTracer - Clean Up: All this does is free our dynamic resources to avoid memory leaks. Code: if (surface_exists(light_surface)) surface_free(light_surface); if (surface_exists(collision_surface)) surface_free(collision_surface); if (ds_exists(system[0], ds_type_list)) ds_list_destroy(system[0]); Obj_RayTracer - Draw End: Okay, this is where it gets a bit more complicated, so I'll do it in parts. This first part creates any surfaces before using them, then renders our collision surface.
Code: // Create any surface(s) that don't exist. Surfaces can be destroyed at any time, so we always need to check them before use! Here we create the light render surface and the collision surface. if (!surface_exists(light_surface)) light_surface = surface_create(512, 256); if (!surface_exists(collision_surface)) collision_surface = surface_create(512, 256); // Render our collision map to the collision surface! Here, if you want, you can add other shadow-casting objects as well--hence we're rendering to a surface. surface_set_target(collision_surface); draw_clear_alpha(c_black, 0); draw_sprite(Spr_Collision, 0, 0, 0); with(Obj_Sample){draw_self();} surface_reset_target(); Next we need to ray-trace all of the lights! Code: // For all instances of Obj_Light, render the rays to their own light surfaces. with(Obj_Light) { if (!surface_exists(light_surface)) light_surface = surface_create(32, 32); surface_set_target(light_surface); draw_clear_alpha(c_black, 0); Scr_QRT_Tracer(other.system, light, other.collision_surface); surface_reset_target(); } Then we want to render the lights to the game's output light surface. Here I'm using the blend mode `bm_add` to add up the light intensities where lights overlap, for a very basic glow effect. Code: // For all instances of Obj_Light, render the lights to the game light surface. surface_set_target(light_surface); draw_clear_alpha(c_black, 0); gpu_set_blendmode(bm_add); with(Obj_Light) Scr_QRT_Sampler(other.system, light, light_surface, other.collision_surface); gpu_set_blendmode(bm_normal); surface_reset_target(); Finally, we draw the collision surface to show our collisions, then draw our light surface to show our lights: Code: draw_surface(collision_surface, 0, 0); draw_surface(light_surface, 0, 0);
Very nice, especially with the approach of finding the smallest texture size. I will be taking a proper look at this later. I'm not experienced with HLSL, but I think this might make good use of MRTs. That would put this all into one shader, further optimizing it. I could be wrong. Edit: Never mind, I remembered it wrong. MRTs are used for a different purpose.
@Bingdom thanks! I noticed that if I didn't cut down the texture size it would destroy performance, so finding that necessary size was pretty crucial.
@Bingdom unfortunately that's not possible. When the GPU runs a shader, the original render target in RAM isn't updated until the shader terminates. So even if I used multiple render targets, it wouldn't help, since you couldn't reference between them. It's also highly likely that, if they were processed in parallel, a ray wouldn't yet be computed when a pixel looks it up.
Yeah, I know, lol. What I meant by "different purpose" was saving geometric computation (since MRT ultimately generates multiple outputs from a single input), rather than my initial thought (which wouldn't have worked either way). It's the result of me not checking my knowledge, haha. I haven't found a use for MRTs yet, but I think we're getting a bit off topic. I haven't looked into too much detail yet, but looking at your raycast maps, it looks like you have a limit of 255? Anyway, I do believe it's possible to make use of the other colour channels and extend the range upwards to 2^32-1 pixels long (possibly further, but that's maybe a bit much). Would be nice if we didn't have to work around it like that. Edit: hmm, actually that might not work if the raycast is dependent on a loop.
@Bingdom there is no limit--I believe you're talking about the ray length? If so, that's not the case. The range is interpolated between 0 and 1, so it doesn't matter what length you use. For example, if the ray length is 859 px with a radius of 1024, that'll be interpolated as 859/1024 = 0.8388671875. The color channels use floats, not bytes, so the range is much larger than you would think, and we don't actually need multiple color channels. Try it yourself! You can modify the shader constants to support larger light sizes. It'll work accurately on larger radii, even up to 1024 px.
It is true that the value in the shader typically ranges from 0 to 1 in GM. However, the precision within the shader doesn't reflect the precision on the surface. The surface is 32-bit, and each channel uses 8 bits of memory; if you were to convert that surface to a buffer, you'd get four 8-bit integers. That's why I say the radius would range 0-255. I'd be surprised if the surface is any more precise than that internally. You also have a limit on the size of for loops, which varies from GPU to GPU. I believe complexity plays a factor; I remember being unable to run some shaders from Shadertoy because of this. The loop gets unrolled.
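Bingdom's precision point can be illustrated numerically: writing length/radius into one 8-bit channel snaps it to one of 256 levels, so the recovered length can be off by up to radius/510 px. A hypothetical Python sketch (the function name is mine):

```python
def stored_length(length, radius):
    # Simulate writing length/radius into an 8-bit color channel and
    # reading it back: the ratio snaps to the nearest of 256 levels.
    byte = round((length / radius) * 255)
    return (byte / 255) * radius
```

For the 859 px ray with a 1024 px radius from the earlier post, the round-trip comes back as roughly 859.36 px: within the radius/510 bound, but no longer exact.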
@Bingdom good point... I'll have to make some adjustments to the shaders. Thanks for the recommendation! I forgot that the surfaces reduce the precision...
@Creasion no problem! I hope this helps, feel free to message me if you're using it in a game or anything of that nature!