3D Translate 3D world space to 2D screen space?

Hello,



I'm working on a Mode 7-style game. I'm using a 3D projection for most things and it's working fine. However, I want the object sprites (rings, springs, etc.) to keep their absolute sprite size, i.e. completely ignoring the deformation/transform caused by the 3D matrix. This is because I want to handle the scaling with hand-drawn frames, like old-school games did. Like this:



So, I need to figure out how to translate the xyz coordinates of an object in 3D world space to where it is in 2D screen space, and then draw the sprite there. (I can probably figure out the frame scaling after that, so don't worry about that part.)

Does anyone have any idea how to do this?


For reference, this is how I'm projecting the 3D, in my objCamera object.

Code:
//Create
global.ViewWidth = 398
global.ViewHeight = 224

fov = 60;
xto = 0;
yto = 0;
zto = 0;
Code:
//draw
d3d_set_projection_ortho(0, 0, 320, 240, 0);
d3d_set_projection_ext(x, y, z, xto, yto, zto, 0, 0, 1, fov, global.ViewWidth  / global.ViewHeight, 1, 32000);


Note: yes, I've searched the forums and found a few topics on this. Unfortunately, most of them relate to shaders, which I don't think are a good solution for this. I did try them, though, and could not get them working; I'm not 100% sure they were even trying to achieve the same thing. Others were examples from GM8 that don't seem to translate to my application in GMS2. Trust me, I've tried.
 

Binsk

Member
The only difference between a shader and the rest of your game is that a shader is a program for the GPU while everything else is a program for your CPU. The math is the same regardless of which piece of hardware performs it.

Take the math and whatnot done in those shader examples you looked at and translate it into regular GML. Now, I noticed your code is using the GMS1.4 d3d functions, meaning you aren't actually familiar with the matrix math going on behind the scenes so this may seem a bit more of a puzzle than it really is.

A matrix, in terms of rendering, basically holds transformation data: things like scale, translation, and rotation. When you render your model, each vertex in the model gets multiplied by several matrices that, step by step, work out where the vertex needs to be on your actual 2D screen. For example, you will have a matrix that changes coordinates from being relative to the model to being relative to the game world. You will then have a matrix that makes them relative to the camera, and another that makes them relative to the screen.

When you call things like d3d_transform_* you are setting up the model-to-world matrix. Using d3d_set_projection_* is setting up the world-to-camera and camera-to-screen matrices. These are then passed into your shader and it does all the math to render things out correctly for each pixel.

So, understanding this, if you want to manually calculate a position from the world to screen coordinates it would be logical that you need to multiply that position by the appropriate matrices. Since you already have the world coordinates all you need is the world-to-camera and camera-to-screen matrices. These are called the view matrix and the projection matrix, respectively.

As said, using d3d_set_projection will set up the view matrix and the projection matrix for you in the background. You can then retrieve these matrices by using matrix_get along with the appropriate argument. Next you'd need to multiply the view matrix by the projection matrix to get your final transform matrix that will convert any 3D point in your game into 2D screen space. Then all you have to do is multiply your 3D point (in the form of a 4D vector [x, y, z, 1]) by this matrix, which gives you a resulting vector such as [cx, cy, cz, cw]. Divide cx and cy by cw (the perspective divide), remap them from the -1..1 range to your port size, and those x/y values are the position on your screen.

The one hiccup here is that GMS1.4 doesn't seem to have a matrix-vector multiplication function. It only has matrix-matrix multiplication. As such, you will have to implement the function yourself. The math is not that hard and you can find the formula online fairly easily.
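In case it helps, here is a minimal GML sketch of such a function (untested; the script name world_to_screen is made up, and it assumes the matrix argument is the 16-element array you get from matrix_multiply(matrix_get(matrix_view), matrix_get(matrix_projection))).

Code:
///world_to_screen(x, y, z, view_proj_matrix, port_w, port_h)
// Hypothetical helper: transforms a world-space point by a combined view * projection
// matrix, then remaps the result to port coordinates. Returns false if the point is
// behind the camera.
var _x = argument0, _y = argument1, _z = argument2;
var _m = argument3;   // matrix_multiply(matrix_get(matrix_view), matrix_get(matrix_projection))
var _pw = argument4, _ph = argument5;

// multiply the point (x, y, z, 1) by the 16-element matrix
var _cx = _m[0]*_x + _m[4]*_y + _m[8]*_z  + _m[12];
var _cy = _m[1]*_x + _m[5]*_y + _m[9]*_z  + _m[13];
var _cw = _m[3]*_x + _m[7]*_y + _m[11]*_z + _m[15];

if (_cw <= 0) return false;   // the point is behind the camera

// perspective divide, then remap from -1..1 to port coordinates (y is flipped)
var _pos;
_pos[0] = (_cx / _cw * 0.5 + 0.5) * _pw;
_pos[1] = (0.5 - _cy / _cw * 0.5) * _ph;
return _pos;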

That should be all you need. I realize it is a long-winded post but I'm hoping to help you understand how this actually works.

One last note that I almost forgot: your GPU is specifically designed to perform all this matrix math extremely quickly; your CPU is not. Matrix multiplication takes a LOT of additions and multiplications, so I would recommend you don't flood your level with objects that require you to do this (or at least have some optimization in place to avoid calculating positions for hidden objects). CPUs nowadays are really powerful, so you should be fine unless you have a ton of these going on in a level, but keep it in mind.
 
I want to show you a complete example of translating a 3d point in the world to a 2d point on your view port, because I believe some information was missing from the other reply.

The following is for translating a world space point. If you are starting with a model space point, then you first need to transform from model to world space. If that becomes necessary, I can elaborate.

Code:
var _w = _v[2]*_x + _v[6]*_y + _v[10]*_z + _v[14];
if (_w > 0) {
    var _x1 = (_v[0]*_x + _v[4]*_y + _v[8]*_z + _v[12]) * _p[0] / _w *  _pw + _pw;
    var _y1 = (_v[1]*_x + _v[5]*_y + _v[9]*_z + _v[13]) * _p[5] / _w * -_ph + _ph;
}
definitions...
_v = view matrix
_p = projection matrix
_pw = port_width / 2.
_ph = port_height / 2.
_x,_y,_z = a 3d point in world space
_x1,_y1 = a 2d point within your view port.

There could be some confusion about the definition of the port width and height. If it becomes necessary to elaborate on that, I can.

A quick explanation of a couple of things.

_w is actually the depth of the point in view space. If _w is less than or equal to zero, then the point is not in front of the camera.

The view matrix is used to transform the point from world to view space.

Then the projection matrix is used to scale the point along the x and y axes according to the field of view and aspect ratio. A division by the point's depth (_w) is done to give perspective to the projection.

At that point, the _x1,_y1 coordinates will be in the range -1 to 1, for any point within the view port. Multiplying by _pw and -_ph and adding _pw and _ph, will remap those coordinates to the range 0 to port_width, and 0 to port_height.
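For context, here is roughly where those inputs might come from in GMS2, sampled while the 3D projection is still current (an untested sketch; it reuses the global.ViewWidth / global.ViewHeight values from the first post as the port size).

Code:
// read these right after the 3D projection has been set, not in the Draw GUI event
var _v  = matrix_get(matrix_view);        // view matrix used for the 3D scene
var _p  = matrix_get(matrix_projection);  // projection matrix used for the 3D scene
var _pw = global.ViewWidth / 2;           // half the port / render target width
var _ph = global.ViewHeight / 2;          // half the port / render target height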

-----------------------------------------------------

It's actually pretty straightforward to write a shader that can draw sprites in 3d, but such that the sprites are always at 1:1 scale with the view port.

However, the major drawback is that you have to get the sprite's position as well as the vertex offsets into the shader as separate values. This can be done either through a shader uniform or as an additional vertex attribute (if you are using vertex buffers). It is also technically possible to encode either the sprite position or the vertex offsets into the image_blend colour (if you aren't using it for anything else), which would let you draw sprites with this shader without using a shader uniform or writing a vertex buffer.

Here's an example of a vertex shader that will draw sprites in 3d at 1:1 scale with the view port.

Code:
attribute vec3 in_Position;
attribute vec4 in_Colour;
attribute vec2 in_TextureCoord;
varying vec2 v_vTexcoord;
varying vec4 v_vColour;
uniform vec3 sprite_pos;
uniform vec2 cam_size;
void main() {
    gl_Position = gm_Matrices[MATRIX_WORLD_VIEW_PROJECTION] * vec4( sprite_pos, 1.0 );
    gl_Position.xy += in_Position.xy * cam_size * gl_Position.w;
    v_vColour = in_Colour;
    v_vTexcoord = in_TextureCoord;
}
In that example, in_Position carries the vertex offsets relative to the sprite_pos, which is the position of the sprite in world space.

cam_size = (2 / port_width, 2 / port_height)
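For what it's worth, usage might look something like this (an untested sketch; the shader asset name shd_billboard and the uniform handle names are assumptions, and it relies on the sprite's origin being set so that in_Position carries only the corner offsets).

Code:
// Create event: fetch the uniform handles once
u_sprite_pos = shader_get_uniform(shd_billboard, "sprite_pos");
u_cam_size   = shader_get_uniform(shd_billboard, "cam_size");

// Draw event: draw the sprite at (0, 0) so in_Position holds the per-vertex offsets
shader_set(shd_billboard);
shader_set_uniform_f(u_cam_size, 2 / global.ViewWidth, 2 / global.ViewHeight);
shader_set_uniform_f(u_sprite_pos, x, y, z);   // the object's 3D world position
draw_sprite(sprite_index, image_index, 0, 0);
shader_reset();
Keep in mind that setting uniforms per sprite breaks the vertex batch, which is the performance caveat discussed further down.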
 


@Binsk Yes, of course I'm only going to run this on objects that are visible. XD

Ok, if I follow you correctly, I got this so far...

Code:
var ViewMatrix = matrix_get(matrix_view)
var ProjMatrix = matrix_get(matrix_projection)
var TransformMatrix = matrix_multiply(ViewMatrix, ProjMatrix)
What I don't understand is how I would express the xyz point as a 4D vector in GML?
Something like this?

Code:
vec_4d = [1, 2, 3, 4];
You have to understand this is all new to me from a GM perspective. I've mostly been making 2D games before this, which didn't require this stuff. Haven't run into this kind of math since high school XD
 
There's a problem you're going to encounter with the approach you are attempting in this thread. And that problem is one of depth sorting.

If you draw your sprites in 2d, you will have to depth sort them in gml. Since calculating the positions of these sprites in the view port already requires calculating their depth in view space, what you could do is put those values into a priority queue and then draw the sprites from greatest to least depth. However, even this solution will not solve the problem of how the sprites' depth integrates with the depth of 3d geometry.

You could potentially get far better performance by using a shader to draw your sprites billboarded, and scaled 1:1 with your view port, in 3d. And drawing them in actual 3d will allow you to solve depth problems more easily.
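As a rough illustration of the priority-queue idea (untested sketch; objWorldSprite, visible_on_screen, x1, y1 and w are placeholder names for values each instance would already have computed):

Code:
var _q = ds_priority_create();
with (objWorldSprite)
{
    if (visible_on_screen) ds_priority_add(_q, id, w);   // priority = view-space depth
}
// draw the farthest sprites first so nearer ones overdraw them
while (!ds_priority_empty(_q))
{
    var _inst = ds_priority_delete_max(_q);
    with (_inst) draw_sprite(sprite_index, image_index, x1, y1);
}
ds_priority_destroy(_q);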
 

So you are telling me that this shader can ignore the projection deformation with no artifacts whatsoever, but still place the sprite in the right position? OoO Nice!
I guess that makes sense, since shaders go around the way GM draws things...

I am not great with shaders, because of the stricter language. Could you walk me through this?

-Do I need to do anything with the Fragment side of the shader? I don't think so, since the colors are staying the same.
-What is port_width in this example? Like the old view_hport / wport variables? If so, I'm not using those, but I have global.ViewWidth and global.ViewHeight as an alternative.
Another alternative I use from time to time:
Code:
var c = view_camera[0];
var cx = camera_get_view_x(c);
var cw = camera_get_view_width(c)
var cy = camera_get_view_y(c);
var ch = camera_get_view_height(c)
with cw being the width.

-What sort of code do I need in the draw event to set up this shader?
Code:
shader_set(myShader);
//something with the uniform here, not sure
draw_self();
shader_reset();
-Any other variables I need to define elsewhere in create event or step event?
This at least, but I think there are more...
shader_sprite_pos = shader_get_uniform(shader_draw_billboard,"sprite_pos");

-No, I'm not using image_blend for anything. (Side note: this whole project's goal is to stay close to looking like a 16-bit Sega Megadrive game. The kind of stuff you normally do with image_blend won't fit within the color palette I'm using.)
 
In my examples, port_width and height should be the size of the render target (i.e., the surface being drawn onto). However, that assumes your render target is then drawn to the window at 1:1 scale. As long as your render target is at 1:1 scale, port_width and height should equal the size of your render target.

If you plan on using a shader, you first have to decide how you are going to solve the problem of getting sprite positions and vertex offsets in as separate values. Like I said before, there are a number of different ways this could be done. If you just draw a sprite, then the in_Position vertex attribute has to carry the vertex offsets, which means you have to figure out a way to get the sprite position value in separately.

You can get it in as a shader uniform, BUT, every time you set a shader uniform, it will break the current vertex batch. So you don't want to do that if you have a lot of sprites to draw or your performance will tank.

So let's look at alternatives...

Writing your sprites to a vertex buffer can be relatively fast, especially if the vertex offsets are going to be constant, meaning you don't have to perform any rotations, translations, or scaling. I think I'm leaning toward this option, because it is relatively straightforward and should be pretty fast. Its speed should compare pretty well to drawing sprites normally with the draw_sprite functions.

Alternatively, you might be able to figure out a way to encode the sprite position into the colour attribute, but it would be tricky: you only have 4 bytes (r, g, b, a) to work with, which means limited precision. And this would only be a viable option if you didn't intend to use the colour attribute for anything else.

Another alternative, which I'm hesitant to mention because it is not straightforward to set up, but should be lightning fast, is to use a kind of instance pooling in conjunction with a static vertex buffer. With this idea, you write to the vertex buffer only once, and you put several instances of a prototype sprite into that vertex buffer. Later, when drawing with that vertex buffer, you pass in the variable properties (such as position) for each instance as a uniform array. The main drawback with this approach is that the number of uniform components available (floats you can send in as part of a uniform variable) is quite small, and I know of no way to be sure precisely how many are available, or whether it varies from machine to machine. It is a major shortcoming of GameMaker that there is no documentation on this (that I am aware of).

TL;DR: I would probably choose to write the sprites to a dynamic vertex buffer. There would need to be separate vertex attributes for the sprite position and the vertex offsets. A modified version of the shader I posted before could then be used to billboard the sprite and make its scale 1:1.
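To make that concrete, one possible vertex format for the dynamic-buffer idea might look like this (an untested sketch; the attribute order is an assumption and has to match whatever the modified shader declares):

Code:
vertex_format_begin();
vertex_format_add_position_3d();                                      // sprite position in world space
vertex_format_add_custom(vertex_type_float2, vertex_usage_texcoord);  // corner offset in pixels
vertex_format_add_texcoord();                                         // sprite UVs
global.vb_sprite_format = vertex_format_end();
Each sprite would then contribute six vertices (two triangles), written with vertex_position_3d, vertex_float2 and vertex_texcoord each frame.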



EDIT:

There's something I forgot to mention.

If any of your sprites are partially transparent (opacity other than 0 or 1), then you won't be able to use the depth buffer to sort those sprites, in which case you're back to having to sort them in gml before drawing them.
 

This stuff is a little out of my comfort zone, so I'm having trouble keeping up. Could you help me get this set up? Or at least point me toward how I can understand the basics.
(For the record, there will be no semi-transparent pixels in this game; those weren't possible on the Sega Genesis/Megadrive. I'm not using image_blend either, or any rotation or real-time scaling, which is the point of doing this in the first place. Any change in an object sprite's size, rotation, or orientation should be done by changing the sprite or image index, as every transformation will be hand drawn.)
 
It just occurred to me that you could draw the sprites in normal 2d, using this to compute the screen position...
Code:
var _w = _v[2]*_x + _v[6]*_y + _v[10]*_z + _v[14];
if (_w > 0) {
   var _x1 = (_v[0]*_x + _v[4]*_y + _v[8]*_z + _v[12]) * _p[0] / _w *  _pw + _pw;
   var _y1 = (_v[1]*_x + _v[5]*_y + _v[9]*_z + _v[13]) * _p[5] / _w * -_ph + _ph;
}
...while still being able to set the sprites to the correct depth.

If you encoded the depth (_w) into the colour argument sent to draw_sprite_ext, you could then use that value in the vertex shader so that the sprites appear occluded at the correct depth.

I think this would produce better performance than writing the sprites to a dynamic custom vertex buffer.
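The GML side of that idea might look roughly like this (an untested sketch, using the _w, _x1 and _y1 values computed above; packing the depth into a single 8-bit channel is an assumption and gives limited precision, and 32000 is just the far clip distance used earlier in this thread):

Code:
var _depth = clamp(_w / 32000, 0, 1);              // normalize view-space depth to 0..1
var _col = make_colour_rgb(_depth * 255, 0, 0);    // pack it into the red channel
draw_sprite_ext(sprite_index, image_index, _x1, _y1, 1, 1, 0, _col, 1);
A small companion vertex shader would then read in_Colour.r back out and write it into gl_Position.z, so the depth buffer can occlude the sprite at the right distance.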
 

Hm, why does this code give me _x and _y as values that are bigger than the screen/view size? It should be smaller, right?

Also, the _x and _y values should change as the camera moves. Yet, the values stay static.



For full disclosure, here's the code I'm testing right now in the Draw GUI event for an object
Code:
var _x = x
var _y = y
var _z = z

var _v = matrix_get(matrix_view)
var _p = matrix_get(matrix_projection)
var _pw = display_get_width() / 2 
var _ph = display_get_height() / 2 


var _w = _v[2]*_x + _v[6]*_y + _v[10]*_z + _v[14];
if (_w > 0) {
   var _x1 = (_v[0]*_x + _v[4]*_y + _v[8]*_z + _v[12]) * _p[0] / _w *  _pw + _pw;
   var _y1 = (_v[1]*_x + _v[5]*_y + _v[9]*_z + _v[13]) * _p[5] / _w * -_ph + _ph;
}

        draw_text(200,70, string(_x)) 
        draw_text(200,80, string(_y))
 
The view and projection matrices used with that code need to be the view and projection matrices used when drawing 3d. In the gui event, unless you've set a perspective projection there, you will be using an orthographic projection and a 2d view.

By the way, _x1 and _y1 will only produce values within the view port if the 3d point is actually within the view port.
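One way to get at the 3D matrices from the Draw GUI event is to cache them at the moment the perspective projection is set (an untested sketch; the global variable names are assumptions):

Code:
// objCamera draw event, right after setting the 3D projection
d3d_set_projection_ext(x, y, z, xto, yto, zto, 0, 0, 1, fov, global.ViewWidth / global.ViewHeight, 1, 32000);
global.view_mat_3d = matrix_get(matrix_view);
global.proj_mat_3d = matrix_get(matrix_projection);

// Draw GUI event of an object: use the cached 3D matrices instead of the GUI ones
var _v  = global.view_mat_3d;
var _p  = global.proj_mat_3d;
var _pw = global.ViewWidth / 2;
var _ph = global.ViewHeight / 2;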
 

Joe Ellis

Member
This is the basic script I use for this:

Code:
///pos3d_to_2d(x, y, z)

var vx,vy, vz, l, pos,
matrix_pos = matrix_get(matrix_world),
matrix_view_projection = matrix_multiply(matrix_get(matrix_view), matrix_get(matrix_projection));

matrix_pos[12] = argument0
matrix_pos[13] = argument1
matrix_pos[14] = argument2

var matrix_screen_pos = matrix_multiply(matrix_pos, matrix_view_projection);

l = 1 / matrix_screen_pos[15]

vx = matrix_screen_pos[12] * l
vy = matrix_screen_pos[13] * l
vz = matrix_screen_pos[14] * l

if abs(vx) > 1 || abs(vy) > 1 || vz > 1.00002
{return false}

pos[0] = floor(((vx + 1) * 0.5) * global.screen_width)
pos[1] = floor(((1 - vy) * 0.5) * global.screen_height)

return pos
You call pos3d_to_2d(x, y, z) and it returns an array: [0] = x, [1] = y
You need to check that it doesn't return false (0) when the position is outside the screen area, and you also need to define global.screen_width/height by using display_get_width/height() or window_get_width/height().

I'm going to avoid explaining the maths and specifics of the matrices, because you don't need to understand them to use this. All this script is doing is a matrix multiplication with the view matrix, projection matrix and a position matrix (which is the position you've inputted); it results in the same screen coordinates that the shader calculates for each vertex position.

The part at the end just remaps it from -1..1 space to 0..1 space, then multiplies by the screen size to put it into screen coordinates, and then floors it. The flooring isn't necessary, but it's usually useful if, say, you're drawing a 2d sprite over the screen of the 3d game, so it doesn't look blurry from being blended between pixels.
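Typical usage in a Draw event might look like this (untested sketch, assuming a 2d view/projection is active when the sprite is drawn):

Code:
var _pos = pos3d_to_2d(x, y, z);
if (is_array(_pos))   // the script returns false when the point is off screen
{
    draw_sprite(sprite_index, image_index, _pos[0], _pos[1]);
}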
 
So I've managed to get both FrostyCat's and flyingsaucerinvasion's scripts working, with equal results. :) So there are two options there, and I can compare performance if needed. Nice!

Here's a rough gif of the first test in action. The code that determines which frame should be displayed based on distance is janky right now, so I'll smooth it out later. But you can see it has that old-school Mario Kart pop-in. Love it.



The only issue I have right now is depth over the player.
I was able to fix the depth of the objects themselves by putting "depth = min(2, distance_to_object(objSS_Camera) / 20)" in the Step event of the object.

However, the objects still draw in front of the player because I'm using the Draw GUI event to draw the blue sphere sprites. Like so:
draw_sprite(sprite_index, ScaleFactor, x1, y1)
(with x1 and y1 being the values returned by the conversion script)

So I tried putting it in the Draw event like this:
Code:
d3d_set_projection_ortho(0, 0, global.ViewWidth, global.ViewHeight, 0);

var c = view_camera[0];
var cx = camera_get_view_x(c);
var cw = camera_get_view_width(c)
var cy = camera_get_view_y(c);
var ch = camera_get_view_height(c)

draw_sprite(sprite_index,ScaleFactor,cx+x1,cy+y1)
//reset to 3d
d3d_set_projection_ext(objSS_Camera.x, objSS_Camera.y, objSS_Camera.z, objSS_Camera.xto, objSS_Camera.yto, objSS_Camera.zto, 0, 0, 1, objSS_Camera.fov, global.ViewWidth  / global.ViewHeight, 1, 32000);
But then the spheres are nowhere to be found.
 
I did find a way to draw sprites in an orthographic projection, but in a way that allows them to occlude correctly with geometry drawn in a perspective projection. However, it's kind of a pain in the ass. The upside is it doesn't require any shader uniform sets per instance, which avoids unnecessary vertex batch breaks. The downside is it is not very elegant.

I still don't know what to recommend to you, because every possible solution has some kind of downside. If I were developing a solution for my own project, I would probably aim for a really complicated solution that is designed to minimize performance impact. However, if performance is not an issue (for example because the scene is not complicated), I would aim for the simplest solution.
 

Christopher Rosa

Guest
flyingsaucerinvasion, how did you add the 2d background in 3d? I can't figure it out.
 

Christopher Rosa

Guest
Can you tell me the code for how to do it? When I draw a sprite, nothing pops up.
 

Christopher Rosa

Guest
And when I move, the background moves as well. I want it to stay static, since it's far away.

```
d3d_set_projection_ortho(0, 0, 640, 480, 0);
draw_sprite(spr_Background1,0,0,0);


var zto = 32;

var xcos = x + cos(degtorad(direction));
var ysin = y - sin(degtorad(direction));
var ztan = z + zto + tan(degtorad(pitch));



d3d_set_projection_ext(x,y,z + zto, xcos, ysin, ztan, 0, 0, 1, 60, 1.78, 1, 2048);

var fl = background_get_texture(texFloor0);
rh = (room_width/48);
rv = (room_height/48);
```

I tried making the projection orthographic, but that made the background draw over everything.
 

Christopher Rosa

Guest
OK, I got it to work, but the 3D ground looks a bit weird because I took out d3d_start. Here's my code:
```
d3d_set_projection_perspective(0, 0, 640, 480, direction);
var zto = 32;

var xcos = x + cos(degtorad(direction));
var ysin = y - sin(degtorad(direction));
var ztan = z + zto + tan(degtorad(pitch));

d3d_set_projection_ext(x,y,z + zto, xcos, ysin, ztan, 0, 0, 1, 60, 1.78, 1, 2048);
```
 

Joe Ellis

Member
@Noah Copeland, have you got the thing you were doing working yet? If not, I've got this script here:
Code:
///pos3d_get_screen_pos(x, y, z)

var pos = matrix_transform_vertex(
    matrix_multiply(matrix_get(matrix_world), matrix_multiply(matrix_get(matrix_view), matrix_get(matrix_projection))),
    argument0, argument1, argument2);

pos[0] = (1 + (pos[0] / (pos[2] + 1))) * 0.5 * global.screen_width
pos[1] = (1 - (pos[1] / (pos[2] + 1))) * 0.5 * global.screen_height

return pos
It takes the xyz of a certain object in world space, then performs the same steps that the shader does, then returns the position. You can also use the z coordinate to set the depth while drawing the sprite, which should solve any depth sorting problems. It's intended that you draw the sprites in a 2d view/projection, basically filling the screen, like:
d3d_set_projection_ortho(0, 0, global.ViewWidth, global.ViewHeight, 0);

Are you using GMS1.4? This is the version of the script for that:
Code:
///pos3d_get_screen_pos(x, y, z)

matrix_set(matrix_world, matrix_multiply(matrix_get(matrix_world), matrix_multiply(matrix_get(matrix_view), matrix_get(matrix_projection))))

var pos = d3d_transform_vertex(argument0, argument1, argument2);

pos[0] = (1 + (pos[0] / (pos[2] + 1))) * 0.5 * global.screen_width
pos[1] = (1 - (pos[1] / (pos[2] + 1))) * 0.5 * global.screen_height

d3d_transform_set_identity()

return pos
If you use this several times, which you will be, it's always better to precalculate the matrix and save it to a global variable, just after the 3d view and projection are set:
Code:
global.mat_world_view_projection = matrix_multiply(matrix_get(matrix_world), matrix_multiply(matrix_get(matrix_view), matrix_get(matrix_projection)))
Then you can set that at the start of the script instead, or even better, set it before the script, then execute the script several times. And always remember to d3d_transform_set_identity() afterwards ;)
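Putting that together, a sketch of how it might be used for many objects per frame (untested; objRing is a placeholder object, and it assumes the script has been changed to read global.mat_world_view_projection instead of recomputing it):

Code:
// once per frame, right after the 3D view and projection are set
global.mat_world_view_projection = matrix_multiply(matrix_get(matrix_world),
    matrix_multiply(matrix_get(matrix_view), matrix_get(matrix_projection)));

// then for each object that needs its sprite drawn in 2D
with (objRing)
{
    var _pos = pos3d_get_screen_pos(x, y, z);
    draw_sprite(sprite_index, image_index, _pos[0], _pos[1]);
}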
 