Music sequencer with audio buffers

Hello everyone,

Hope you're all keeping safe in these troubling times.

I'm currently working on a music sequencer that is playing samples. Much like this: https://808303.studio/

I've gotten it to work very well with almost no lag. The sequencer is sample-based, so it plays WAV files using
the audio_play_sound() function. On web it works surprisingly well - with little to no lag even on low-end devices.
You can try it here: https://gamejolt.com/claim/YcHZVdpx

The idea is to make this into an app.
But as soon as I export the project as an Android app I get crazy lag and the sequencer basically becomes useless.

I've tried using delta time, which basically made it worse. And I've optimised the project to the peak of my abilities.
It runs as fast as can be.

So to fix this, I've decided on using audio buffers. I've searched the entire web and back, but I can't really wrap my head around how they work.
I already have audio files imported into the project - how do I create audio buffers from those files, is that even possible?
Has anyone attempted anything similar? Is there a way to fix this without audio buffers?

Any and all help would be very appreciated. Need to wrap my head around this before I pull out my hair!

Thanks in advance.
 

rytan451

Member
Sounds, in computers, are represented as a stream of numbers describing the sound wave. Audio buffers are, in short, a way to work with those numbers directly. You can create a sound from a buffer using audio_create_buffer_sound(bufferId, bufferFormat, bufferRate, bufferOffset, bufferLength, bufferChannels).

- bufferId is a buffer holding the sound data. This could have been loaded from a file, or made programmatically.
- bufferFormat is the number of bits used per sample (sound is split into samples; each sample is a number represented with either 8 or 16 bits).
- bufferRate is the sample rate in hertz: this many samples take up one second. A typical value is 44100, though this is determined per sound file.
- bufferOffset is how many bytes from the start of the buffer the sound data starts.
- bufferLength is the length of the audio data in bytes (including the header, which I'll discuss later).
- bufferChannels is how many audio channels are in the audio data. A mono track has a single channel (a single sound), stereo has two channels (one for the left ear and one for the right), and 3D audio (apparently) works with the 5.1 standard and has 6 channels (for surround sound).
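To make the parameters concrete, here is a minimal sketch (not from the thread, just an illustration): create one second of a 16-bit mono buffer at 44100 Hz and wrap it in a sound. A freshly created buffer is zero-filled, so this particular sound is silence, but it shows where each argument comes from.

```gml
// Hypothetical minimal example: one second of 16-bit mono at 44100 Hz.
var rate  = 44100;
var bytes = rate * 2;                               // 2 bytes per buffer_s16 sample
var buff  = buffer_create(bytes, buffer_fixed, 2);  // zero-filled = silence
var snd   = audio_create_buffer_sound(buff, buffer_s16, rate, 0, bytes, audio_mono);
audio_play_sound(snd, 1, false);
```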

What is header data?

Header data is metadata, that is, data about data. So, you might have a music file, and inside the file there's metadata stating the title of the music, the bitrate of the sound, or a number of other things. This header data is sometimes essential for making the sound work (it may describe the compression method of the following data), and sometimes unnecessary (if the sound is uncompressed and the attributes of the sound are already known).

How to create audio buffers from these files?

In short, you load a buffer from the file, and then you create a sound from the buffer.
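A sketch of that two-step process, assuming the simplest case of a canonical PCM WAV with a 44-byte header (the filename and format values are placeholders; check your file's actual sample rate, bit depth, and channel count):

```gml
// Sketch: load a WAV file straight into a buffer, then wrap the PCM data
// in a playable sound. Assumes a canonical 44-byte header and that the
// file is 16-bit mono PCM at 44100 Hz.
var buff         = buffer_load(working_directory + "kick.wav");  // hypothetical file
var header_bytes = 44;                                           // canonical PCM WAV header
var data_bytes   = buffer_get_size(buff) - header_bytes;
var snd = audio_create_buffer_sound(buff, buffer_s16, 44100,
                                    header_bytes, data_bytes, audio_mono);
audio_play_sound(snd, 1, false);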

(Warning: these are all from my layman understanding, and I may be wrong. Hopefully, I'm correct enough to be helpful)
 

obscene

Member
About a month ago I set out on this exact venture and realized there was no good info anywhere about it. No tutorials, no examples, just people giving theoretical advice. What I learned for sure is that you cannot put a normal sound asset into a buffer. An external file is required, meaning every one of your samples will have to be external files. Beyond that I have no solid advice to give.
 
Is that so? Crap.
Is using external files possible in the same sense when publishing for Android?
 
Is it even possible to create a music sequencer in GameMaker correctly? I feel like I'm running out of options. I can find literally no examples of how to actually use an audio buffer. So you can't load a normal sound asset into a buffer, but I can't seem to get an external audio file to load into the buffer either.
 

GMWolf

aka fel666
Audio buffers play raw PCM data.
That means if you want to play a WAV file, you need to first extract the samples and write them to the buffer.

You can totally write a sequencer in GM.
You will be limited, however:
- GML is slow, so you can only process so many tracks.
- Audio buffer events are synchronous, so you need large buffers to keep the queue fed. This means real-time audio is hard, if not impossible. You can get pretty close, though.
- Lack of libraries: loading data from audio files will require you to write your own decoding functions (WAV is fairly simple, thankfully).
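For the simple PCM case, "decoding" a WAV mostly means reading a few format fields out of the header instead of hard-coding them. A sketch, assuming a canonical file where the fmt chunk sits at a fixed offset (real files can contain extra chunks, so a robust decoder would scan for the "fmt " and "data" chunk IDs; the filename is a placeholder):

```gml
// Sketch of a minimal WAV "decoder": read the format fields from the header.
// Canonical PCM layout: channels at byte 22, sample rate at 24, bit depth at 34.
var buff        = buffer_load(working_directory + "kick.wav");  // hypothetical file
var channels    = buffer_peek(buff, 22, buffer_u16);            // 1 = mono, 2 = stereo
var sample_rate = buffer_peek(buff, 24, buffer_u32);            // e.g. 44100
var bit_depth   = buffer_peek(buff, 34, buffer_u16);            // 8 or 16
var fmt  = (bit_depth == 16) ? buffer_s16 : buffer_u8;
var chan = (channels == 2)   ? audio_stereo : audio_mono;
var snd  = audio_create_buffer_sound(buff, fmt, sample_rate,
                                     44, buffer_get_size(buff) - 44, chan);
```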
 
That's a relief at least.
I did extract them and am trying to load them into the project, without any success. The sound isn't being played and I get this error: Error: no sound exists for soundid 258082910. Code below.

Writing my own decoding functions? That seems advanced. Do you have an example of that?


create event
GML:
buffstart = 1;
buffaudio = 0;
buffsound = working_directory + "sfx_12_aud1.wav";
audio_buff = buffer_create(1024, buffer_grow, 2);
and this is the step event:
Code:
if buffstart = 1
{
    if(file_exists(buffsound))
    {
        bufffile = file_bin_open(buffsound,0);
        buffsize = file_bin_size(bufffile);
      
        for (var i = 0; i < buffsize; i++)
        {
            buffer_write(audio_buff, buffer_s16, file_bin_read_byte(bufffile));
        }
      
        file_bin_close(bufffile);
    }
  
    buffaudio = audio_create_buffer_sound(audio_buff, buffer_s16, 48000, 0, 384000, audio_mono);
    buffstart = 2;
}

if keyboard_check_pressed(vk_shift)
{
    audio_play_sound(buffaudio, 1, false);
}
 

GMWolf

aka fel666
Before playing around with files I would start simple and just try to write a sine wave to your audio buffer (try getting it at 440 Hz, concert A). This way you eliminate any potential errors to do with files and can just focus on getting audio buffers working.

The error you get seems odd to me; it should have created a sound. Make sure that your length is correct.
Also try a lower sample rate, like 44100.
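A sketch of that file-free test: fill a buffer with one second of a 440 Hz sine wave at 44100 Hz, 16-bit mono, and play it. If this beeps, the buffer-to-sound path works and any remaining problem is in the file loading.

```gml
// One second of a 440 Hz sine wave, 16-bit mono at 44100 Hz.
var rate = 44100;
var buff = buffer_create(rate * 2, buffer_fixed, 2);   // 2 bytes per sample
for (var i = 0; i < rate; i++)
{
    // amplitude 12000 stays well inside the signed 16-bit range (+/-32767)
    buffer_write(buff, buffer_s16, floor(12000 * sin(2 * pi * 440 * i / rate)));
}
var snd = audio_create_buffer_sound(buff, buffer_s16, rate, 0, rate * 2, audio_mono);
audio_play_sound(snd, 1, false);
```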
 