The data bits for each sample should be left-justified and padded with zeros. For example, consider a 10-bit sample: since the container for a sample must be a whole number of bytes, it has to be stored in 16 bits. The 10 data bits are left-justified so that they occupy bits 6 through 15 inclusive, and bits 0 through 5 are set to zero.
As an example, a 10-bit sample with a value of 0100001111 left-justified as a 16-bit word becomes 0100 0011 1100 0000, or 0x43C0.
Because the WAVE format uses Intel's little-endian byte order, the least significant byte is stored first, so this word appears in the file as the byte 0xC0 followed by the byte 0x43.
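As a minimal sketch of both steps, assuming the 10-bit sample is held in a uint16_t, the left-justification and little-endian byte split could look like this in C:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t sample10 = 0x10F;      /* the 10-bit sample 0100001111 */
    uint16_t word = sample10 << 6;  /* left-justify into bits 6..15 */

    /* WAVE is little endian: the least significant byte comes first */
    unsigned char bytes[2];
    bytes[0] = word & 0xFF;         /* 0xC0 */
    bytes[1] = (word >> 8) & 0xFF;  /* 0x43 */

    printf("word = 0x%04X, stored as %02X %02X\n",
           (unsigned)word, (unsigned)bytes[0], (unsigned)bytes[1]);
    return 0;
}

Running this prints "word = 0x43C0, stored as C0 43", matching the byte order described above.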
The description so far applies to mono audio, meaning you have just one "channel." When you deal with stereo audio, 3D audio, and so forth, you are in effect dealing with multiple channels, meaning you have multiple samples describing the audio at any given moment in time. For example, with stereo audio you need to know, at each point in time, what the signal was on the left channel as well as on the right channel. So you will have to read and write two samples at a time.
Say you sample at 44 kHz for stereo audio; then effectively you have 44,000 * 2 samples per second. If you are using 16 bits per sample, then given the duration of the audio you can calculate the total size of the wave data as:
Size in bytes = sampling rate * number of channels * (bits per sample / 8) * duration in seconds
Number of samples per second = sampling rate * number of channels
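To illustrate these formulas, here is a small C sketch; the 44 kHz rate, 16-bit depth, and one-minute duration are just example values, not anything mandated by the format:

#include <stdio.h>

int main(void)
{
    long sampling_rate   = 44000;  /* samples per second, per channel */
    int  channels        = 2;      /* stereo */
    int  bits_per_sample = 16;
    int  duration_sec    = 60;     /* hypothetical one-minute recording */

    long samples_per_sec = sampling_rate * channels;
    long bytes = sampling_rate * channels * (bits_per_sample / 8)
                 * (long)duration_sec;

    printf("samples per second: %ld\n", samples_per_sec); /* 88,000 */
    printf("PCM data size: %ld bytes\n", bytes);          /* 10,560,000 */
    return 0;
}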
When you are dealing with such multi-channel sound, the sample points from each channel are interleaved. Instead of storing all of the sample points for the left channel followed by all of the sample points for the right channel, you "interleave" the two channels' samples: you store the first sample of the left channel, then the first sample of the right channel, then the second sample of the left channel, and so on.
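A minimal C sketch of this interleaving, assuming two hypothetical per-channel buffers named left and right:

#include <stdio.h>
#include <stdint.h>

#define FRAMES 4

int main(void)
{
    /* hypothetical per-channel sample buffers */
    int16_t left[FRAMES]  = {  100,  200,  300,  400 };
    int16_t right[FRAMES] = { -100, -200, -300, -400 };

    /* interleave as L0 R0 L1 R1 ..., the layout used in the data chunk */
    int16_t interleaved[FRAMES * 2];
    for (int i = 0; i < FRAMES; i++) {
        interleaved[2 * i]     = left[i];
        interleaved[2 * i + 1] = right[i];
    }

    for (int i = 0; i < FRAMES * 2; i++)
        printf("%d ", interleaved[i]);
    printf("\n");
    return 0;
}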
References:
http://www.codeguru.com/cpp/g-m/multimedia/audio/article.php/c8935/PCM-Audio-and-Wave-Files.htm