FineAudioBuffer wraps an AudioDeviceBuffer (ADB), which deals with audio data in 10 ms chunks, and allows that data to be pulled or pushed at a finer or coarser granularity. In other words, by interacting with this class instead of directly with the AudioDeviceBuffer, one can ask for any number of audio samples. The class also ensures that audio data can be delivered to the ADB in 10 ms chunks even when the size of the provided audio buffers differs from 10 ms. As an example, calling DeliverRecordedData() with 5 ms buffers will deliver an accumulated 10 ms worth of data to the ADB on every second call.
The constructor looks like this:
FineAudioBuffer(AudioDeviceBuffer* device_buffer,
size_t desired_frame_size_bytes,
int sample_rate);
device_buffer => the buffer that provides 10 ms of audio data.
desired_frame_size_bytes => the number of bytes of audio data GetPlayoutData() should return on success. It is also the required size of each recorded buffer used in DeliverRecordedData() calls.
sample_rate => the sample rate of the audio data. This is needed because |device_buffer| delivers data in 10 ms chunks; given the sample rate, the number of samples per chunk can be calculated.
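As a minimal sketch of how the three constructor arguments relate, assume a device that wants its audio in 5 ms chunks at 48 kHz, with mono 16-bit samples (which is how this class counts bytes). The include paths, helper function name and parameter values below are illustrative assumptions, not taken from the source.

#include <cstddef>
#include <cstdint>
#include <memory>

#include "webrtc/modules/audio_device/audio_device_buffer.h"
#include "webrtc/modules/audio_device/fine_audio_buffer.h"

// Illustrative values only: a device that wants audio in 5 ms chunks at 48 kHz.
std::unique_ptr<webrtc::FineAudioBuffer> CreateFineAudioBuffer(
    webrtc::AudioDeviceBuffer* adb) {
  const int sample_rate = 48000;
  const size_t samples_per_frame = sample_rate * 5 / 1000;  // 240 samples in 5 ms.
  const size_t desired_frame_size_bytes =
      samples_per_frame * sizeof(int16_t);                  // 480 bytes per frame (mono, 16-bit).
  return std::unique_ptr<webrtc::FineAudioBuffer>(
      new webrtc::FineAudioBuffer(adb, desired_frame_size_bytes, sample_rate));
}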
The two main functions in this class are the ones below.
1. void FineAudioBuffer::GetPlayoutData(int8_t* buffer)
This method fills |buffer| with desired_frame_size_bytes of playout audio. Internally it asks WebRTC (via the ADB) for data in 10 ms chunks and caches any surplus for the next call.
2. void FineAudioBuffer::DeliverRecordedData(const int8_t* buffer,
size_t size_in_bytes,
int playout_delay_ms,
int record_delay_ms)
Delivers the recorded data to the observer (the WebRTC audio layer) in 10 ms chunks. Samples are consumed from |buffer| in chunks of 10 ms until there is not enough data left to form a full chunk; any bytes that are left over are kept in the internal cache for the next call. The delay estimates |playout_delay_ms| and |record_delay_ms| are passed along to the ADB together with the data.
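The rough sketch below shows how the two calls are typically used from a device I/O callback pair. The callback names and delay values are made-up assumptions for illustration; only GetPlayoutData() and DeliverRecordedData() come from the class itself.

#include <cstddef>
#include <cstdint>

#include "webrtc/modules/audio_device/fine_audio_buffer.h"

// Hypothetical playout callback: the device asks for one frame of audio.
// |device_buffer| must be desired_frame_size_bytes long (the size passed to
// the FineAudioBuffer constructor).
void OnPlayoutNeeded(webrtc::FineAudioBuffer* fine_buffer,
                     int8_t* device_buffer) {
  // Pulls 10 ms chunks from the AudioDeviceBuffer as needed, caches the
  // surplus, and copies exactly one frame into |device_buffer|.
  fine_buffer->GetPlayoutData(device_buffer);
}

// Hypothetical record callback: the device hands over one captured frame.
void OnRecordedData(webrtc::FineAudioBuffer* fine_buffer,
                    const int8_t* device_buffer,
                    size_t size_in_bytes) {
  // Example delay estimates in milliseconds; a real driver would measure or
  // query these from the platform audio API.
  const int playout_delay_ms = 30;
  const int record_delay_ms = 10;
  // The frame is appended to an internal cache; every time a full 10 ms has
  // accumulated it is forwarded to the ADB (with 5 ms frames: every 2nd call).
  fine_buffer->DeliverRecordedData(device_buffer, size_in_bytes,
                                   playout_delay_ms, record_delay_ms);
}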
references:
https://chromium.googlesource.com/external/webrtc/+/master/webrtc/modules/audio_device/fine_audio_buffer.cc