ASPiK SDK
We have finally arrived at the function that will do all of the cool audio processing stuff our plugin was designed for. At this point, you need to make a fairly critical decision about how that processing will occur – by buffer or by frame. All APIs deliver audio data to the buffer processing function in separate buffers, one for each channel. If we assemble one audio sample from each channel for each sample period, we create a frame. So a set of channel buffers containing M samples each would be broken into M frames. A stereo input frame would consist of an array that contains two audio samples, one for the left channel and one for the right. We might indicate that as {left, right}. A frame of surround sound 5.1 audio data is an array of six audio samples, one from each channel, organized in the following manner: {left, right, center, LFE, left surround, right surround} where LFE stands for “Low Frequency Effects” or the sub-woofer channel.
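To make the buffer-to-frame relationship concrete, the short sketch below gathers one sample from each channel buffer into a frame on every sample period, then writes the frame back. This is generic C++ for illustration only; the function and variable names (processByFrames, channelBuffers, numChannels, numSamples) are hypothetical and are not part of the ASPiK SDK.

#include <cstdint>
#include <vector>

// Illustration only: assemble frames from the separate per-channel buffers.
// None of these names come from the ASPiK SDK.
void processByFrames(float** channelBuffers, uint32_t numChannels, uint32_t numSamples)
{
    std::vector<float> frame(numChannels, 0.0f);

    for (uint32_t n = 0; n < numSamples; ++n)          // one pass per sample period
    {
        for (uint32_t ch = 0; ch < numChannels; ++ch)
            frame[ch] = channelBuffers[ch][n];         // e.g. {left, right} for stereo

        // --- process the frame here: every channel's sample for this
        //     sample period is available at once

        for (uint32_t ch = 0; ch < numChannels; ++ch)
            channelBuffers[ch][n] = frame[ch];         // write the processed frame back
    }
}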
In the PluginCore, you may choose whether you want to process complete buffers or frames of data. There may be good reasons for choosing one or the other. For example, if your plugin implemented a simple filter operation, you might process the channel buffers, since the audio data in them is unrelated as far as your plugin is concerned. But if your plugin required information from each channel to affect processing on the other, you would need to process frames. An example of this might be a stereo ping-pong delay plugin, where left and right channel data are both needed on a sample-by-sample basis. Another example is a stereo-linked compressor, whose side-chain information is derived from both left and right inputs such that the left channel cannot be processed without the right channel, and vice-versa. For all of our example plugin projects, we will use frame-based processing (a sketch of a frame processing function follows the list below). The reasons are:
• Frame processing is simpler to understand – you do not need to keep track of buffer sample counters or pointers
• Frame processing is universal – all plugin algorithms may be viewed as frame processing systems, even if the channels do not interact; the opposite is not true for buffer processing
• Frame processing follows more closely how our algorithms are encoded, both in DSP difference equations as well as flowcharts and signal flow diagrams
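As a point of reference, a frame processing function in the PluginCore typically looks like the sketch below. The ProcessFrameInfo member names used here (audioInputFrame, audioOutputFrame, and the channel counts) are assumptions modeled on the generated project files, so check your own plugincore.h and plugincore.cpp for the exact structure before borrowing any of it.

// in plugincore.cpp -- a minimal frame processing sketch; the ProcessFrameInfo
// member names below are assumptions, so verify them against your project
#include "plugincore.h"

bool PluginCore::processAudioFrame(ProcessFrameInfo& processFrameInfo)
{
    // one sample per channel is in hand for this sample period
    float xnL = processFrameInfo.audioInputFrame[0];
    float xnR = (processFrameInfo.numAudioInChannels > 1) ?
                processFrameInfo.audioInputFrame[1] : xnL;

    // channel-interactive DSP (ping-pong delay, stereo-linked detection, etc.)
    // would go here; this sketch simply swaps the channels to show that both
    // samples are available at once
    processFrameInfo.audioOutputFrame[0] = xnR;
    if (processFrameInfo.numAudioOutChannels > 1)
        processFrameInfo.audioOutputFrame[1] = xnL;

    return true; // frame was processed
}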
If you want to process buffers instead of frames, the method is simple: override the processAudioBuffers function and implement it. If you look in your plugincore.h file, you will see that the buffer processing function is commented out. To use it, un-comment it and implement the function in the plugincore.cpp file.
// --- uncomment and override this for buffer processing
virtual bool processAudioBuffers(ProcessBufferInfo& processBufferInfo);
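If you do override it, the skeleton usually reduces to a channel loop around a sample loop, as sketched below. The ProcessBufferInfo member names shown here (inputs, outputs, numAudioOutChannels, numFramesToProcess) are assumptions; compare them against the processAudioBuffers implementation in pluginbase.cpp before relying on them.

// in plugincore.cpp -- a buffer processing sketch; the ProcessBufferInfo member
// names are assumptions, so verify them against pluginbase.h/.cpp
#include "plugincore.h"

bool PluginCore::processAudioBuffers(ProcessBufferInfo& processBufferInfo)
{
    // (a real plugin would also handle mismatched input/output channel counts)
    for (uint32_t ch = 0; ch < processBufferInfo.numAudioOutChannels; ch++)
    {
        float* input  = processBufferInfo.inputs[ch];   // one separate buffer per channel
        float* output = processBufferInfo.outputs[ch];

        for (uint32_t n = 0; n < processBufferInfo.numFramesToProcess; n++)
            output[n] = input[n];                        // pass-through; per-channel DSP goes here
    }
    return true; // buffers were processed
}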
You can use the processAudioBuffers implementation in the pluginbase.cpp file to help you understand the arguments in the ProcessBufferInfo structure (it is actually very straightforward). One cautionary comment is needed here: many users will try to process buffers because they are using an algorithm like the FFT that is designed to process blocks of data rather than frames of data, and that is perfectly OK. The problem lies in the buffer sizes. In some DAWs, the user sets the buffer size in an audio setup panel; in others, the DAW itself chooses it; and in still others, the buffer size is never guaranteed to be the same from one buffer-process call to the next. For example, the AU API specifies that automation will be handled by breaking buffers into smaller pieces, each of which is processed with one set of parameter values. You should never assume that the size of the incoming audio buffers will be fixed and unchanging, and you should not try to tie your block-based processing to the audio buffer sizes. There are just too many unknowns across the different APIs and the different DAW manufacturers that interpret those APIs. If your algorithm processes blocks of data, then you will be responsible for filling your own local buffers with data. You should expect that you might receive larger or smaller buffers than your processing calls for. You should also expect to receive partially filled or incomplete buffers from time to time, based on system overhead or the end-of-audio-file situation.
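If your algorithm needs fixed-size blocks (for an FFT, for example), one common approach is to decouple your block size from the host buffer size with a small accumulation buffer, as in the generic sketch below. Nothing here is ASPiK-specific, and doFFTBlock is just a placeholder for your own block processing.

#include <cstdint>
#include <vector>

// generic block accumulator, independent of the host buffer size
class BlockAccumulator
{
public:
    explicit BlockAccumulator(uint32_t blockSize) : block(blockSize, 0.0f) {}

    // feed whatever the host gives us -- large, small, or partial buffers
    void accumulate(const float* input, uint32_t numSamples)
    {
        for (uint32_t n = 0; n < numSamples; ++n)
        {
            block[writeIndex++] = input[n];
            if (writeIndex == block.size())   // a full block is ready
            {
                doFFTBlock(block);            // placeholder for your block-based DSP
                writeIndex = 0;
            }
        }
    }

private:
    void doFFTBlock(std::vector<float>& /*blockData*/) { /* block-based DSP here */ }

    std::vector<float> block;
    uint32_t writeIndex = 0;
};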
Another issue with buffer-based processing involves parameter smoothing. To make GUI control changes less glitchy, we can smooth and update the underlying plugin parameters on each sample interval. If you are processing buffers, then this operation is going to break your buffer processing into sample-interval chunks. So, if you want to use parameter smoothing, you might consider always processing frames rather than buffers.
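For reference, per-sample smoothing usually amounts to a simple one-pole (exponential) glide toward the target value, called once per sample period, as in the generic sketch below. This is not the ASPiK parameter-smoothing implementation, just an illustration of why the work naturally happens at the frame rate; the smoothing-time value is purely illustrative.

#include <cmath>

// generic one-pole parameter smoother -- not the ASPiK implementation
class OnePoleSmoother
{
public:
    void setup(double smoothingTimeMs, double sampleRate, double initialValue)
    {
        // coefficient for an exponential glide toward the target value
        a = std::exp(-1.0 / (0.001 * smoothingTimeMs * sampleRate));
        b = 1.0 - a;
        current = target = initialValue;
    }

    void setTarget(double newTarget) { target = newTarget; }

    // call once per sample period (i.e., once per frame)
    double smooth()
    {
        current = b * target + a * current;
        return current;
    }

private:
    double a = 0.0, b = 1.0;
    double current = 0.0, target = 0.0;
};

Calling smooth() once per sample period keeps the control trajectory continuous; doing the same thing inside a buffer loop breaks that loop into per-sample work anyway, which is the point made above.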