Hello everyone. I have been trying to generate sound waves and play them back in near-real time using audio buffers. I came up with two techniques.

The first was to have a number of small buffers (10 ms each) queued up onto an audio queue. Whenever an audio async event was triggered, I would get the buffer ID, generate more data into that buffer, and push it back onto the queue. The problem with this technique was that I needed either many buffers or larger buffers in order for the queue to keep playing. With smaller buffers the queue would simply stop after each buffer had played once, and with larger buffers there was too much lag. I'm looking for sub-20 ms latency, optimally.

Technique 2 was to have two longer buffers, one "active" and one "passive". The two buffers would be queued up, and on every async event I would queue them back onto the queue. In the step event, I would generate a bit more audio data into the "active" buffer, in order to stay just ahead of the current track position (adjusted to account for how many buffers had already played). When generating samples, if I ever exceeded the length of the active buffer, I would swap the two buffers and start generating into the other one. This technique seemed to work, but was quite glitchy. It seemed like rather than streaming from the buffers, the queue was taking chunks from the buffer, and those chunks seemed larger than the amount I would buffer ahead by. This resulted in some of the buffers either not playing at all or playing leftover garbage data.

From these two tests, it seems to me like the audio queue copies a chunk of data from the buffer and starts playing it, and the size of the chunk taken is too large for my usage. Do any of you know of a way to get near-real-time audio generation to work? If not, I guess I'll be building a DLL or something (but I'd rather not).

Not sure if it's important, but I was using s16-formatted buffers. Would that make a difference when it comes to minimum buffer sizes?