Games made in GMS are generally flat because most of them are aiming for an "old-school-retro-pixel-art" style, so a bunch of beeps, clicks and hums is all they possibly need; more would spoil the mood.
And yet you can see no reason at all why the ability to program dynamic sound in the way it used to be done on 8-bit systems would be desirable beyond a tiny audience.
Note, I wasn't asking for anyone to provide a chip emulator for the NES or C64 or some specific system; I just expressed that it's desirable to me (and I assume many others) to be able to generate audio programmatically from audio primitives, analogous to the way we often have occasion to procedurally draw video from drawing primitives.
Currently, the capability offered by GMS is to store a sound file and play it back. I use some external tool, such as sfxr/bfxr/jfxr to generate sound effects, export them from that tool into .wav format, add the wave files to my project as sound resources, and then play them at run time when needed.
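For concreteness, the playback step at the end of that workflow is about one line of GML (`snd_explosion` here is a hypothetical resource name standing in for whatever you exported from *fxr):

```gml
// Play an imported .wav resource once, at default priority, no looping.
// snd_explosion is a placeholder name for a sound resource in the project.
audio_play_sound(snd_explosion, 1, false);
```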
For years, I've thought that it would be much cooler if the code inside of *fxr could be brought into GMS, and I could then make calls to it through its API at runtime, setting up instruments and adjusting their settings procedurally, to create dynamic sounds that are unique to a specific set of conditions that emerge during play. That capability would yield essentially infinite flexibility at runtime, and enable compelling audio-based game genres that aren't feasible within the current limitations of GMS.
Here's something to consider: "the way most gamemaker games are" is a chicken-egg situation. The games are the way they are because that's what GMS is good at doing. If GMS added [new feature] then over time you might expect to see more games made using [new feature] than were made before when implementing [similar feature] took a lot of work/skill and specialized knowledge.
But as for the flatness in audio, in my opinion it has nothing to do with a lack of functions. Simply creating good music and sfx is a very hard job.
Indeed, it is not easy to make good sound effects. But the only thing you can do with them in GMS is store a sound file as a resource in your project, and then play it. You can do some limited stuff like adjusting the gain, the position, and pitch shift, but that's about it, at least with the standard audio functions. I don't know what can be done with audio buffers, though, apart from syncing tracks to play together so you can mix gain levels at runtime. Which, don't get me wrong, is pretty cool, but I'd love to know what other capabilities are available via audio buffers.
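For anyone else wondering, here's about the simplest "cool stuff" you can do with audio buffers: synthesize a tone from scratch at runtime instead of loading a file. This is my own sketch, not official sample code; the parameters (440 Hz, half a second, a linear fade-out) are arbitrary choices:

```gml
// Sketch: synthesize a fading 440 Hz sine beep into a buffer and play it.
var sample_rate = 44100;
var seconds     = 0.5;
var num_samples = sample_rate * seconds;
var buff = buffer_create(num_samples * 2, buffer_fixed, 2); // 16-bit mono

for (var i = 0; i < num_samples; i++)
{
    var t   = i / sample_rate;
    var amp = 0.25 * (1 - t / seconds);      // linear fade-out envelope
    var s   = amp * sin(2 * pi * 440 * t);   // 440 Hz sine, range -0.25..0.25
    buffer_write(buff, buffer_s16, floor(s * 32767));
}

// Wrap the raw samples as a playable sound (length is in bytes).
var snd = audio_create_buffer_sound(buff, buffer_s16, sample_rate,
                                    0, num_samples * 2, audio_mono);
audio_play_sound(snd, 1, false);
```

Since all the parameters are just variables, you could drive the frequency, envelope, or duration from gameplay state, which is exactly the kind of dynamic sound being asked for above.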
Generally GMS has the main tool (audio buffers) needed to create sound fx. When I have a little bit of time I'll see whether implementing some basic tools like EQ, compression/limiting, and reverb using only GML code is possible and/or efficient enough.
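As a taste of what "only GML code" might look like, here's a one-pole low-pass filter applied in place to a 16-bit mono buffer — the simplest building block of an EQ. This is a sketch under my own assumptions (a buffer `buff` filled as above, 44.1 kHz, a 1 kHz cutoff), not a tested effects unit:

```gml
// Sketch: one-pole low-pass filter, in place, on a 16-bit mono buffer.
// y[n] = y[n-1] + a * (x[n] - y[n-1])
var cutoff      = 1000;   // cutoff frequency in Hz (arbitrary choice)
var sample_rate = 44100;
var a    = 1 - exp(-2 * pi * cutoff / sample_rate); // smoothing coefficient
var prev = 0;

var num_samples = buffer_get_size(buff) div 2;      // 2 bytes per s16 sample
for (var i = 0; i < num_samples; i++)
{
    var pos = i * 2;
    var s = buffer_peek(buff, pos, buffer_s16);     // read sample
    prev += a * (s - prev);                         // filter step
    buffer_poke(buff, pos, buffer_s16, floor(prev)); // write filtered sample
}
```

Whether a per-sample GML loop like this is fast enough for real-time use is exactly the open question — it may only be practical for pre-rendering sounds at load time.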
I've asked in the past (elsewhere in these forums) for any kind of tutorial showing how to make use of audio buffers to do cool stuff, and still haven't seen anything.
I get that audio processing is hard and requires some technical/scientific knowledge. But really, no different from programming shaders, I think. If YYG can support shader language, then it'd be nice to give our ears similar treatment.
And I also get that there may be legal constraints to using some existing audio engine/library/framework, but that in itself isn't reason not to do something like what I'm envisioning. Work out the licensing and do it, if that's what it takes. Are there no technologies available under a liberal license such as the BSD license or MIT license? The sfxr tool I mentioned above is widely used, is free/open source, and has been extended by various developers to create other projects (such as bfxr and jfxr) which are also free, so seemingly isn't IP-encumbered.