To answer this question, let's look at the example you called out. Let's compare `#macro orange 5 * 2` to using `orange = 5 * 2` inside `enum rainbowcolors`.
In the first case, the compiler will replace the token "orange" with the characters "5 * 2" (quotes added for clarity) anywhere it appears. In the second case, the compiler will evaluate 5 * 2 to a number (10) at compile time and store 10 as the value of orange in the enumeration.
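Concretely, the two definitions side by side might look like this (a sketch using the names from the question):

```
// Macro: the compiler substitutes the raw text "5 * 2"
// wherever the token orange appears.
#macro orange 5 * 2

// Enum: 5 * 2 is evaluated once, at compile time,
// and orange stores the value 10.
enum rainbowcolors {
    orange = 5 * 2
}
```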
Now, it might be silly to do this, but suppose that you did something like `mycolor = 100 / orange`. In the first case, this would expand to `mycolor = 100 / 5 * 2`, which would evaluate left to right and give you 40 rather than the expected 10. For this reason, you would want to define the macro as `#macro orange (5 * 2)` if you wanted to use it this way. But the point is that the macro is just a substitution of characters - there is no processing of them until they're put into place as part of another statement. The enumeration, on the other hand, is evaluated to a value at compile time, so using mathematical expressions is valid here.
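To see the precedence problem side by side (a sketch; the macro names here are hypothetical, invented to contrast the two definitions):

```
#macro orange_unsafe 5 * 2      // substituted as raw text
#macro orange_safe   (5 * 2)    // parentheses protect the expression

mycolor = 100 / orange_unsafe;  // expands to 100 / 5 * 2, giving 40
mycolor = 100 / orange_safe;    // expands to 100 / (5 * 2), giving 10
```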
`100 / rainbowcolors.orange` immediately becomes `100 / 10`, as desired. Using `5 * 2` in a macro would mean that every time you use the macro, it has to multiply 5 * 2; in the enumeration, that happens only once, when it's defined.
In terms of use, I like using enumerations to make it clear that a particular set of values is the complete set of allowed values for a type. I could do a switch on a variable holding a rainbowcolors value and know what all of the possible values are. If I'm using an integer and the values are macros, there might be other values. Since GML is not strictly typed, that's not as useful as it would be in other languages, but it's likely more comfortable for people who have worked with those languages. In practice, there's no difference between integer-valued macros and an enumeration other than notation.
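For example, a switch over a rainbowcolors value makes the set of expected cases obvious (a sketch; `mycolor` is a hypothetical variable):

```
// Every valid case is a member of the enum, so the reader
// knows the full range of possibilities at a glance.
switch (mycolor) {
    case rainbowcolors.orange:
        show_debug_message("orange");
        break;
    default:
        show_debug_message("not a rainbow color");
        break;
}
```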