That's pretty cool.
That's cool! For performance reasons I decided to use the Ear Clipping technique, but I find your script very useful, especially for texture handling. Like your project, I have decided to do everything in GML; I cannot resist the temptation to run the editor on a tablet and other compatible platforms. For the 3D side, a function to import models in JSON format is on my list; it could be very good for easily sending information. Thank you very much for your comments, and the same goes for your excellent work.

Thanks, it's all in GML. That sounds good, I'm sure you'll have no trouble implementing 3d models, the stuff that you've done so far is more complicated than that.
One of the methods of 'soft selection' used in most major 3d programs is topological-based -- the distance is calculated by following edges from the source point (or calculated backwards, from the end point, along edges to the vert nearest the source point) instead of just the distance between the source and end points. I mention this because it sounds like exactly what you're trying to achieve with smart-weighting bones.

More progress on the skeletal system:
Now I've got auto vertex rigging\weighting set up and working properly
I've also been experimenting with setting a bone's "capture radius"
To help stop parts of the model following the wrong bone (like the side torso following the arm bone if it's too close)
This'll make it easier to rig models without having to model them in a T pose.
In the video the rig starts off not looking right, but then I adjust the bones' radii and the amount of pinch around the joints, and it ends up looking pretty good.
Thanks, yeah it already does that; rather than measuring the distance from vertex to bone, each vertex gets the nearest point along each bone's vector from the parent to itself, basically a "nearest point on line" function.
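The "nearest point on line" step can be sketched like this. This is a Python illustration, not the engine's GML, and the linear falloff against the capture radius is an assumption based on the description above:

```python
import math

def nearest_point_on_bone(v, a, b):
    """Project vertex v onto the segment a->b (bone parent to bone tip),
    clamping so the result stays on the bone."""
    ax, ay, az = a
    bx, by, bz = b
    vx, vy, vz = v
    dx, dy, dz = bx - ax, by - ay, bz - az
    length_sq = dx * dx + dy * dy + dz * dz
    if length_sq == 0:
        return a  # degenerate bone: parent and tip coincide
    t = ((vx - ax) * dx + (vy - ay) * dy + (vz - az) * dz) / length_sq
    t = max(0.0, min(1.0, t))  # clamp so the point stays on the segment
    return (ax + t * dx, ay + t * dy, az + t * dz)

def bone_weight(v, a, b, radius):
    """Hypothetical linear falloff: weight 1 on the bone, 0 at the capture radius."""
    if radius <= 0:
        return 0.0
    p = nearest_point_on_bone(v, a, b)
    return max(0.0, 1.0 - math.dist(v, p) / radius)
```

A vertex beyond the end of a bone gets measured against the bone's endpoint rather than the infinite line, which is what stops a vertex "behind" a bone from following it.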
Bones

Bones are used to animate a model. They are ordered hierarchically, starting at the root bone:

[Image: Bones_Hierarchy]

Each bone has the following variables:
- X
- Y
- Z
- Yaw - horizontal rotation
- Pitch - vertical rotation
- Twist - rotation around its vector
- Length - the distance from its parent bone
- Radius - (see Rigging for more details)

Creating a Skeleton

To begin making the skeleton, you must first create the root bone in the model's hierarchy. After that, you can right click on a bone to create other bones which stem off from it. Use the arrow keys & PageUp\PageDn to move a bone horizontally\vertically, or double click on it to edit its coordinates in the properties window.
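The variable list above could be modelled roughly like this. A Python sketch only (the engine itself is GML), and the field names and defaults are illustrative, not the engine's actual data layout:

```python
from dataclasses import dataclass, field

@dataclass
class Bone:
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    yaw: float = 0.0     # horizontal rotation
    pitch: float = 0.0   # vertical rotation
    twist: float = 0.0   # rotation around the bone's own vector
    length: float = 0.0  # distance from the parent bone
    radius: float = 0.0  # capture radius used when rigging
    children: list = field(default_factory=list)

def add_bone(parent, bone):
    """Every bone except the root stems off an existing bone."""
    parent.children.append(bone)
    return bone

# Build a tiny hierarchy: a root with one child bone.
root = Bone()
arm = add_bone(root, Bone(length=1.5, radius=0.5))
```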
Very cool! You're welcome, man! It's a very good project!! *hand clapping emoticon*

I've been getting the 1st person shooter template ready,
so far I've got basic movement, looking up & down, strafing and swimming.
I've also made a Quake-style water effect, with a fullscreen underwater effect
that turns on whenever the camera is under a water plane:
(The effects are fully adjustable in real time as well.)
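The "camera under a water plane" check could look something like this sketch. The volume layout (a surface height plus a horizontal extent) and the z-up convention are assumptions, not the engine's actual representation:

```python
def camera_underwater(cam, water_planes):
    """Return True if the camera is inside any water volume.
    Each plane is (z, x0, y0, x1, y1): the water surface height and its
    horizontal extent. Assumes +z is up; flip the comparison otherwise."""
    cx, cy, cz = cam
    for z, x0, y0, x1, y1 in water_planes:
        if cz < z and x0 <= cx <= x1 and y0 <= cy <= y1:
            return True
    return False
```

Run once per frame, the result decides whether the fullscreen underwater shader is enabled for that frame.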
Next I've gotta get weapon handling sorted, health, ammo, armour etc.
and climbing up & down ladders. Then the 1st person template will be sorted.
I've also done a really good QOL improvement:
Meshes & instances can now use either materials or just single textures.
I did this mainly cus it was really annoying having to make a material for every single texture,
I made a shortcut, press M over a texture to make a material using it, but it was still awkward how there's a folder full of textures, and a folder full of materials which use those textures.
So now if you choose a texture instead of a material, it renders it using the default shader with the global directional light, sun color, ambient color and standard phong shading (smooth or flat depending on the mesh settings).
So this has made quick prototyping a lot easier: instead of having to create a material for each texture that all use the same default shader,
you can now just choose a texture, and the engine handles either in the same way.
Then later when you want more custom effects or multiple textures like normalmapping & specular masks you can make materials for those and swap them.
So now materials are optional, if your game doesn't need them, or you just wanna make the prototype without any fancy materials, you can.
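The fallback described above amounts to a small dispatch at render time. A hedged Python sketch; the shader name and the material representation here are invented for illustration, not the engine's API:

```python
# If an instance was assigned a bare texture instead of a material, wrap it
# in a default material: default shader, global directional light, sun and
# ambient colors, standard phong shading.
DEFAULT_SHADER = "default_phong"  # hypothetical name

def resolve_material(assigned):
    """Accept either a full material (dict with a shader) or a bare texture."""
    if isinstance(assigned, dict) and "shader" in assigned:
        return assigned  # a real material: use as-is
    return {
        "shader": DEFAULT_SHADER,
        "texture": assigned,
    }
```

Because both paths return the same shape, the renderer downstream never needs to know whether the user made a material or just picked a texture.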
@JaimitoEs Thanks for the support! I agree with you totally, I'll be releasing it once my current to do list is done, I'm not gonna keep adding things to it now, cus they'll be extra things that could be released in an update.
You basically get that for free in OpenGL... before you actually write to video memory, you transform the input coordinates using the world / view / projection matrices to get coordinates on the screen; if the screen coordinate is outside the visible region (e.g. negative z ---> it's in front of the screen), you can just stop there and not actually render anything. You find this result out really early on in the vertex step, before most of the heavy per-pixel computations are done in the fragment step.

Not sure if you're considering working on something like this, but I know some of the more modern tools for 3D have optimized things so they only render what's in the player's line of sight.
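The early-out described there boils down to transforming a point into clip space and testing it against the clip volume. A Python sketch of the idea; the z range check here is D3D-style [0, w], an assumption (OpenGL uses [-w, w]):

```python
def transform_point(m, p):
    """Multiply a row-major 4x4 matrix (flat list of 16 floats) by point p,
    returning homogeneous clip-space coordinates (x, y, z, w)."""
    x, y, z = p
    v = (x, y, z, 1.0)
    return [sum(m[r * 4 + c] * v[c] for c in range(4)) for r in range(4)]

def inside_clip_volume(clip):
    """True when the clip-space point lands inside the view frustum."""
    x, y, z, w = clip
    if w <= 0:
        return False  # at or behind the camera plane
    return -w <= x <= w and -w <= y <= w and 0 <= z <= w

IDENTITY = [1.0, 0.0, 0.0, 0.0,
            0.0, 1.0, 0.0, 0.0,
            0.0, 0.0, 1.0, 0.0,
            0.0, 0.0, 0.0, 1.0]
```

In a real pipeline this happens per-vertex on the GPU, which is why the rejection is cheap: it runs before any fragment work.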
That's not the same though, the method in the video blocks out vertices that are obstructed even if they are in front of the camera.
Ah, maybe I should've actually watched the video before assuming it was just vanilla optimization. Doing research before I claim things isn't my style, though.
Thanks for sharing the idea, but I think it's done entirely in the gpu, so it'd need a dll, and I've got hardly any experience in c++. Although, if it is possible, someone could write the dll and integrating it into a gm project would probably be really simple, just putting the dll call in the draw event. But it might only be possible with dx12.
starts at about 1:40
That's a visual demo of what I mean. It would help you optimize things to get a bit more performance. Pretty incredible work so far, btw. I'd love to mess around with that car demo you showed.
Quoting a fairly old post here, but I forgot to mention way back that this is a feature I'd definitely use, since I have a project that needs a sort of tile-based 3D world.

I've improved the block tool to work in a Minecraft voxel kind of way, but it has the advantage of not storing empty blocks and isn't confined to a grid. It just checks if a vertex already exists at a position where it's about to create one, and if it does it uses that vertex instead, so the blocks share the same vertices and get joined together. It also automatically deletes hidden faces, so the meshes end up hollow, which is most of the time what you want - there is the option to disable this if you need the blocks to not be joined together.
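The vertex-sharing part can be sketched with a position-keyed lookup. A Python illustration only; rounding to a few decimals stands in for the "does a vertex already exist here" check, and the tolerance is an assumption:

```python
def make_vertex_pool(decimals=4):
    """Return (get_or_add, verts): get_or_add(x, y, z) reuses an existing
    vertex at that position instead of creating a duplicate, so adjacent
    blocks end up sharing vertices and get joined together."""
    pool = {}   # rounded position -> index into verts
    verts = []  # the shared vertex list
    def get_or_add(x, y, z):
        key = (round(x, decimals), round(y, decimals), round(z, decimals))
        if key not in pool:
            pool[key] = len(verts)
            verts.append((x, y, z))
        return pool[key]
    return get_or_add, verts
```

Because faces then reference shared indices, a face whose twin exists on a neighbouring block can be detected and deleted, which is what leaves the merged mesh hollow.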
Awesome! I'm very happy to hear that.

About importing vertex buffers, it can't currently, but I was planning on adding it.
It should be quite easy to do cus I've already got a set of "vbuild" scripts for building vbuffers in every format combination, so I'd just have to make a matching set of "vb_load" scripts.
Then you'd just have to choose the correct format for it to load correctly.
Interesting. I always assumed Model Creator just used d3d_model_save, but I just looked at the source (found here) and that's not the case.

I'll have to study the file format, and the same with Model Creator's gmmod format - unless Model Creator has a d3d model export, in which case I won't really need to.
All you need is a script @xygthop3 made. It's the "vertex_buffer_d3d_model_all" script found within this asset project:

If you have the code for the d3d to vbuffer tool and could share it, that'd be really helpful cus I could make it a lot faster that way.
That would be absolutely fantastic. But - after reading your response to HTML5 support - I have another question on this topic:

About it being modular, I hadn't really thought about it until you mentioned it, but I'll try to make the skeletal system modular.
You should in theory be able to just load the frame data and load a vbuffer from a file with the bone rigging\weights and bypass the node structure of the bones.
I'll try to make this possible, maybe by making an export script which exports the rigged vbuffer and frame data split per animation,
then an import script which sets it all up for an instance to use independently from the engine. It shouldn't be too hard to do.
Awesome. That sounds like something I could work with once I got used to it. It sounds like I could accomplish the same texturing style that I like to do in Model Creator, and more.

About the modelling things:
For uv mapping there are a few different options: