Yes I am confused as to how floating point numbers work, it was never made clear to me that floating point values are not stored exactly as we read them or think of them in math. That was the reality check that stopped me in my tracks.
A computer has a memory to work with.
This memory is just a huge fixed-length sequence of bits.
A bit can be in 1 of two states: it can be 1 or it can be 0.
As a result, at any point in time, the computer memory is just a huge sequence of 0s and 1s.
If a computer wants to remember an integer, it needs to reserve some bits in its memory to represent this number.
For small numbers, reserving 8 bits might suffice.
We now need to represent a number as a sequence of 8 bits.
One way to do this is to use the binary notation of the number as the representation, padded with 0s on the left.
For example, 105 would be represented as 0110 1001.
Using this representation, we can represent any integer in the range 0-255, but only these numbers.
Now, you might be interested in using numbers with decimals.
You'll again need to reserve some bits in memory to represent this number.
Let's say we reserve 32 bits for it, then how can we use these 32 bits to represent such a number?
One way to do so, is to use a "fixed-point" representation for the numbers.
You would again use a binary representation for your number.
For example, 20.3 in binary notation is 10100.0100 1100 1100 1100 ...
The idea of a fixed-point notation is that we only remember a fixed amount of binary digits in front of and behind the dot.
For example, we can decide to use 16 of the 32 bits to represent the integer part and the other 16 bits to represent the decimals.
The problem is however, that we can't represent 20.3 in this way, as we would need more than 16 bits (we need infinitely many bits) to represent it.
We solve this by not using 20.3, but the closest number to 20.3 that we can represent using our "fixed-point" notation.
In other words, we round 20.3, in its binary representation, to the nearest number we can represent.
So, we use the representation (already padded) 0000 0000 0001 0100 . 0100 1100 1100 1101
which corresponds to the number 20.3000030517578125.
There is still one issue with this representation: we use a dot to separate the integer part from the decimals, but memory can only hold 1s and 0s.
Now, because we have a fixed amount of bits to represent the integer part and a fixed amount of bits to represent the decimals,
the dot will always appear at the same location (after the 16 bits used for the integer part).
As a result, we can just leave the dot away to get the following representation: 0000 0000 0001 0100 0100 1100 1100 1101.
Note that this is the reason why it is called "fixed-point" representation, because the dot always appears at the same location.
Using a fixed-point representation for numbers often leads to problems however.
For example, if you're a physicist, you might be working with both huge and tiny numbers at the same time.
For example, when working with gravity, you got a tiny gravitational constant, but huge masses (of planets).
You could use a different fixed-point representation for every number in your program,
but that'd end up becoming hugely impractical and it would make development of highly performant hardware more difficult.
You could also use a single fixed-point representation for all numbers,
but then you'd need a huge amount of bits to represent a single number and you'd be hugely wasting your memory.
Another example of a problem with fixed-point representations is that you're working with absolute errors instead of relative errors.
This means that if you design an algorithm, it would be highly inaccurate (you could say worthless) for small numbers and way more precise for large numbers than you needed it to be.
Clearly, another representation is needed and this is where "floating-point" representations come in.
They're based on the scientific notation for numbers where you write a significand (also known as mantissa) with a fixed precision, followed by a multiplication with a base to the power of an exponent.
For example: 20.3000 * 10^6 to represent 20300000 with an error of at most 50.
If we know which base is used from context (most of the time 10), then we also write this as 20.3000e6 (where e stands for "exponent").
We now use this same principle to represent numbers by a sequence of bits.
We first decide to use a fixed base and we choose as base 2 instead of 10.
This is because a base of 2 is much easier for a computer to work with.
We then represent the significand (20.3000 in the above example) using a fixed-point representation.
This means we choose a fixed amount of bits to represent the integer part and a fixed amount of bits to represent the decimals and no bits to represent the dot.
We then also represent the exponent (which is an integer) in the remaining bits.
For example, trying to represent 20.3e6 with 8 bits for the integer part, 8 bits for the decimals
and 16 bits for the exponent could look like this: 0001 0100 . 0100 1101 e 0000 0000 0000 0110
and we just leave away the dot and e for the final representation to obtain: 0001 0100 0100 1101 0000 0000 0000 0110.
The reason this is called a "floating-point" representation is that the exponent determines where the dot of the non-scientific notation occurs with respect to the significand.
As a result, the dot "floats", meaning it moves.
There are still some issues with this representation however.
For example, we currently don't support negative numbers or negative exponents.
There are multiple ways to fix this, but usually the following approach is taken.
We reserve 1 bit to represent the sign of the number.
This bit is 0 for positive numbers and 1 for negative numbers.
We don't represent the sign of the exponent, but we use something called a "biased representation".
In a biased representation with bias B, you don't use the binary representation of your number to encode it in bits,
but you use the binary representation of your number + B to encode it in bits.
For example, to encode -11 in 8 bits with a bias of 128, it would look like this: 0111 0101, because this is the binary representation of 117.
Now, there exist infinitely many floating-point representations,
but when people refer to floating-point representations, they almost always refer to the IEEE standard.
This standard defines 2 main "floating-point" formats: single precision and double precision.
Single precision uses 32 bits in total, 1 to represent a sign, 23 to represent the significand (excluding sign) and 8 to represent the exponent.
Double precision uses 64 bits in total, 1 to represent a sign, 52 to represent the significand (excluding sign) and 11 to represent the exponent.
In both cases, the stored significand uses 0 bits for the integer part and all the bits for the decimals; for normalized numbers, the integer part is an implicit leading 1 that doesn't need to be stored.
In C, the type "float" almost always corresponds to single precision IEEE floating-point numbers
and the type "double" almost always corresponds to double precision IEEE floating-point numbers.
The IEEE standard actually does a little bit more in their floating-point representation than I have explained above.
They make sure to optimize the precision, the number of representable values, and the efficiency with which a computer can work with these numbers in their exact formats.
They also include 3 special values in their format that floating-point numbers can represent: +Inf, -Inf and NaN.
The value +Inf indicates the occurrence of a positive overflow, meaning you're trying to represent a positive number which is too large to represent with the given amount of bits.
The value -Inf is analogous for a negative overflow (it can also result from operations like -1 / 0).
The value NaN means "Not a Number" and occurs if you perform an operation whose result is undefined, like 0 / 0.
For more information on how the IEEE standard works exactly, have a look at:
https://en.wikipedia.org/wiki/IEEE_754
So, knowing all of this, the reason they're not stored exactly as we read them is that they're stored in binary notation, and many decimal fractions (like 0.3) have no finite binary representation.
The reason they don't work exactly like real numbers in math is because of memory limitations.
We've only got a finite amount of bits to represent them (and to optimize hardware, we only use a fixed amount of bits),
which doesn't suffice to represent all real numbers.
Because I program in C (as to avoid C++, for a reason related to this topic), I've noticed that GML does not use the streams that are used in C or C++. What does GML use for the standard output, input, and error streams?
The philosophy of GameMaker programs is more oriented towards having a game run in a self contained window,
reading inputs (loading settings and savegames) and writing outputs (saving settings and savegames) to local files
that are either hard coded in the program or are provided through an interface inside the program itself as opposed to being passed to it through a terminal.
You can still use parameter_string and parameter_count to get parameter input from the terminal.
You can use extensions that deal with input and output streams.
The function show_debug_message might be writing strings to the error or output stream (I'm not sure).
However, this isn't really something GameMaker deals with.
GameMaker deals with using the screen and audio for output and keyboard presses and mouse movement as input.
I feel like file handling comes closest to what you're looking for:
GM:S 1.4:
https://docs.yoyogames.com/source/dadiospice/002_reference/file handling/index.html
GM:S 2:
https://docs2.yoyogames.com/source/_build/3_scripting/4_gml_reference/file handling/index.html
And one of the reasons that I do not program in C++ is that printing floating-point values in the format you want takes more work with the cout stream than it does in C. In C, the convenience is using printf with the % specifiers, which let you determine how many digits are displayed and the format of the decimals. But in C, the same stream is called stdout.
printf from the cstdio should work in C++ as well.
I just tested this:
Code:
#include <cstdio>
#include <iostream>
#include <string>
using namespace std;
int main() {
    string s = "Did you know?";
    printf("10 / 3 is approximately %.5f. %s\n", 10.0 / 3.0, s.c_str());
    cout << "I think you did.\n";
    return 0;
}
and it works perfectly fine for me.
Secondly, in regards to C++, I didn't understand how to build a dynamically linked node list. It's something I will have to relearn, again. I was going to try building a dynamically linked node list in C, but malloc() and free() are totally different in their syntax from new and delete in C++. It's possible to do it, but it's easier said than done using C. Can you do dynamically linked node lists in GML (unless something better exists)?
The way you created linked lists in C will probably be almost exactly the same as in C++.
How do you create linked lists in C?
As for linked lists in GML, you could do that (using some special array or object construction), but it is rarely done.
In GML you would probably end up using a ds_list instead.
So because GMS exists for Linux (but not for Red Hat's Fedora Core versions, which is what I was using), can you use a game executable (made by GMS) with redirection ("<<", "<", ">", ">>") and pipes ("|") via bash shell scripting?
Redirection and pipes are dealt with by the terminal / OS itself.
Whether this works in GML is equivalent to your earlier question about whether input and output streams work in GML.
You can make it work in GML, but it's not quite something GML deals with.