1. How the write operation is initiated in the native-layer AudioTrack
While debugging the code we saw that the playback thread keeps reading data from the shared buffer in AudioFlinger, while AudioTrack's write keeps filling it. So who actually initiates this write? We have to go back to the creation of AudioTrack at the application layer. AudioTrack can be created in static mode or stream mode: in the former, the Java layer allocates a shared buffer and writes all of the data into it in one shot; in stream mode, the buffer is allocated by AudioFlinger and the Java layer copies data down into the native layer.
public int write(byte[] audioData, int offsetInBytes, int sizeInBytes) {
    ...
    int ret = native_write_byte(audioData, offsetInBytes, sizeInBytes, mAudioFormat,
            true /*isBlocking*/);
    ...
}
audioData is the buffer allocated in the Java layer, holding data read from a file or a decoder. Whatever the data type, native_write_byte is called into the JNI layer, and the JNI layer ultimately calls writeToTrack:
jint writeToTrack(const sp<AudioTrack>& track, jint audioFormat, const jbyte* data,
                  jint offsetInBytes, jint sizeInBytes, bool blocking = true) {
    ssize_t written = 0;
    /* stream mode takes this branch */
    if (track->sharedBuffer() == 0) {
        written = track->write(data + offsetInBytes, sizeInBytes, blocking);
        // for compatibility with earlier behavior of write(), return 0 in this case
        if (written == (ssize_t) WOULD_BLOCK) {
            written = 0;
        }
    } else {
        /* static mode */
        const audio_format_t format = audioFormatToNative(audioFormat);
        switch (format) {
        default:
        case AUDIO_FORMAT_PCM_FLOAT:
        case AUDIO_FORMAT_PCM_16_BIT: {
            // writing to shared memory, check for capacity
            if ((size_t)sizeInBytes > track->sharedBuffer()->size()) {
                sizeInBytes = track->sharedBuffer()->size();
            }
            memcpy(track->sharedBuffer()->pointer(), data + offsetInBytes, sizeInBytes);
            written = sizeInBytes;
            } break;
        case AUDIO_FORMAT_PCM_8_BIT: {
            // data contains 8bit data we need to expand to 16bit before copying
            // to the shared memory
            // writing to shared memory, check for capacity,
            // note that input data will occupy 2X the input space due to 8 to 16bit conversion
            if (((size_t)sizeInBytes)*2 > track->sharedBuffer()->size()) {
                sizeInBytes = track->sharedBuffer()->size() / 2;
            }
            int count = sizeInBytes;
            int16_t *dst = (int16_t *)track->sharedBuffer()->pointer();
            const uint8_t *src = (const uint8_t *)(data + offsetInBytes);
            memcpy_to_i16_from_u8(dst, src, count);
            // even though we wrote 2*sizeInBytes, we only report sizeInBytes as written to hide
            // the 8bit mixer restriction from the user of this function
            written = sizeInBytes;
            } break;
        }
    }
    return written;
}
This function is quite simple: it performs a memcpy according to the track type. In stream mode, the data is copied into the buffer that AudioFlinger allocated in the native layer; in static mode, it is copied straight into the shared buffer allocated on the Java side.
From the analysis above we can see that in stream mode the native-layer write is initiated by the upper layer, and the upper layer can do that in two ways: an active push, or a callback from the native layer.
a. Active push, for example:
/* call write repeatedly inside a while loop */
if (readCount != 0 && readCount != -1) {
    if (mTrack.getPlayState() == AudioTrack.PLAYSTATE_PLAYING) {
        mTrack.write(mTempBuffer, 0, readCount);
    }
}
b. Registering a callback: when creating the track, the Java layer can supply a callback function:
set@AudioTrack.cpp
{
    if (cbf != NULL) {
        mAudioTrackThread = new AudioTrackThread(*this, threadCanCallJava);
        mAudioTrackThread->run("AudioTrack", ANDROID_PRIORITY_AUDIO, 0 /*stack*/);
    }
}
If cbf is not NULL, an AudioTrackThread is created that keeps writing data. The actual work lives in processAudioBuffer, invoked from AudioTrack::AudioTrackThread::threadLoop(); we will not analyze it in detail here.
2. The producer-consumer model in AudioTrack
The consumption of audio data by AudioTrack follows a dynamic producer-consumer model: AudioTrack can be seen as the producer, and the thread driven by AudioFlinger as the final consumer. Their relationship is shown below:
AudioTrack cooperates with AudioFlinger through the IAudioTrack interface. The Track object inside AudioFlinger does not itself support binder communication, so AudioFlinger exposes operations on the track through the proxy pattern. Now look at the following diagram:
This diagram covers the inheritance relationships among AudioTrack (AT), AudioFlinger (AF) and the threads. AudioFlinger manages two kinds of threads: PlaybackThread for playback and RecordThread for recording; depending on whether mixing is required, PlaybackThread is specialized into MixerThread and DirectOutputThread. Both ultimately control the track through the TrackHandle proxy, and note that TrackHandle does support binder communication. So how do the producer and consumer operate on the shared buffer? The diagram below spells it out; the next two sections introduce the producer and the consumer respectively:
3. How the producer writes data into the shared buffer
Before analyzing the producer and consumer, let us first look at the crucial structure audio_track_cblk_t:
struct audio_track_cblk_t
{
    // Since the control block is always located in shared memory, this constructor
    // is only used for placement new(). It is never used for regular new() or stack.
    audio_track_cblk_t();
    /*virtual*/ ~audio_track_cblk_t() { }

    friend class Proxy;
    friend class ClientProxy;
    friend class AudioTrackClientProxy;
    friend class AudioRecordClientProxy;
    friend class ServerProxy;
    friend class AudioTrackServerProxy;
    friend class AudioRecordServerProxy;

    // The data members are grouped so that members accessed frequently and in the same context
    // are in the same line of data cache.

    uint32_t mServer;           // Number of filled frames consumed by server (mIsOut),
                                // or filled frames provided by server (!mIsOut).
                                // It is updated asynchronously by server without a barrier.
                                // The value should be used "for entertainment purposes only",
                                // which means don't make important decisions based on it.
    uint32_t mPad1;             // unused

    volatile int32_t mFutex;    // event flag: down (P) by client,
                                // up (V) by server or binderDied() or interrupt()
#define CBLK_FUTEX_WAKE 1       // if event flag bit is set, then a deferred wake is pending

private:
    // This field should be a size_t, but since it is located in shared memory we
    // force to 32-bit. The client and server may have different typedefs for size_t.
    uint32_t mMinimum;          // server wakes up client if available >= mMinimum

    // Stereo gains for AudioTrack only, not used by AudioRecord.
    gain_minifloat_packed_t mVolumeLR;

    uint32_t mSampleRate;       // AudioTrack only: client's requested sample rate in Hz
                                // or 0 == default. Write-only client, read-only server.

    uint16_t mSendLevel;        // Fixed point U4.12 so 0x1000 means 1.0

    uint16_t mPad2;             // unused

public:
    volatile int32_t mFlags;    // combinations of CBLK_*

    // Cache line boundary (32 bytes)

public:
    union {
        AudioTrackSharedStreaming mStreaming;
        AudioTrackSharedStatic    mStatic;
        int                       mAlign[8];
    } u;

    // Cache line boundary (32 bytes)
};
Pay particular attention to the friend classes declared at the top (when following the actual reads and writes, be clear about which proxies belong to the client side and which to the server side), and to the member mStreaming in the union at the end. Here is its declaration:
struct AudioTrackSharedStreaming {
    // similar to NBAIO MonoPipe
    // in continuously incrementing frame units, take modulo buffer size, which must be a power of 2
    volatile int32_t mFront;    // read by server
    volatile int32_t mRear;     // write by client
    volatile int32_t mFlush;    // incremented by client to indicate a request to flush;
                                // server notices and discards all data between mFront and mRear
    volatile uint32_t mUnderrunFrames;  // server increments for each unavailable but desired frame
};
AudioTrack's buffer still follows the classic ring-buffer model, but mFront and mRear are not simply a read pointer and a write pointer: they record, over the track's lifetime, the total number of frames the producer has written into the shared buffer and the total number of frames the consumer has read out of it. mUnderrunFrames counts the frames lost to underruns.
From the earlier analysis we already know that AudioTrack, as the producer, keeps writing data into the shared buffer. Here is an excerpt from write@AudioTrack.cpp:
ssize_t AudioTrack::write(const void* buffer, size_t userSize, bool blocking)
{
    ...
    size_t written = 0;
    Buffer audioBuffer;
    while (userSize >= mFrameSize) {
        /* 1. convert the pending data size into a frame count */
        audioBuffer.frameCount = userSize / mFrameSize;
        /* 2. find a writable region in the shared buffer */
        status_t err = obtainBuffer(&audioBuffer,
                blocking ? &ClientProxy::kForever : &ClientProxy::kNonBlocking);
        ...
        toWrite = audioBuffer.size;
        /* 3. copy the data into shared memory (assume 16-bit samples) */
        memcpy(audioBuffer.i8, buffer, toWrite);
        ...
        /* 4. bookkeeping: we write in a loop, so keep going until everything is written */
        buffer = ((const char *) buffer) + toWrite;
        userSize -= toWrite;
        written += toWrite;
        /* 5. release this region of the buffer */
        releaseBuffer(&audioBuffer);
    }
    return written;
}
The logic is straightforward: obtainBuffer locates a region of memory that can be written, the data is copied into it, and releaseBuffer releases it (where "release" means "filled and ready to be read", not delete). One caveat about obtainBuffer: the AudioTrack class overloads obtainBuffer and the earlier version is deprecated, so check the parameters carefully to make sure you are reading the right one. We will not walk through its internals; in essence it goes through obtainBuffer (the client side in AudioTrackShared.cpp) to fetch mCblk and compute the writable region from it, and releaseBuffer can be analyzed the same way. A practical tip: when debugging real issues, if you want to dump the audio data before mixing, the write function is the place to do it.
4. How the consumer reads data from the shared buffer
The consumer side is more complex, so let us start with how the consumer thread gets running. As mentioned in the earlier analysis of createTrack in AudioFlinger, checkPlaybackThread_l uses the given output to find an already existing PlaybackThread; in other words, by the time the application creates an AudioTrack, the thread has already been created and is running. So where is it created? When the system boots and AudioPolicyService is loaded, the threads are created while parsing audio_policy.conf; AudioPolicy eventually creates them through AudioFlinger's openOutput_l:
if (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) {
    thread = new OffloadThread(this, outputStream, *output, devices);
    ALOGV("openOutput_l() created offload output: ID %d thread %p", *output, thread);
} else if ((flags & AUDIO_OUTPUT_FLAG_DIRECT)
        || !isValidPcmSinkFormat(config->format)
        || !isValidPcmSinkChannelMask(config->channel_mask)) {
    thread = new DirectOutputThread(this, outputStream, *output, devices);
    ALOGV("openOutput_l() created direct output: ID %d thread %p", *output, thread);
} else {
    thread = new MixerThread(this, outputStream, *output, devices);
    ALOGV("openOutput_l() created mixer output: ID %d thread %p", *output, thread);
}
As the snippet shows, a thread is created according to the flags. From the inheritance diagram earlier, all of these threads derive from PlaybackThread; recording, by contrast, uses the single RecordThread with no subclasses. Now that we have found where the threads are created, how do they start running? The trick relies on the fact that onFirstRef is invoked when a strong pointer references the object for the first time. First, look at where that first strong reference happens:
openOutput@AudioFlinger.cpp
sp<PlaybackThread> thread = openOutput_l(module, output, config, *devices, address, flags);
Then look at PlaybackThread's onFirstRef:
void AudioFlinger::PlaybackThread::onFirstRef()
{
    run(mName, ANDROID_PRIORITY_URGENT_AUDIO);
}
Once the consumer thread is running, how does it know which track has data waiting to be fetched? That is driven by AudioTrack's start command. Here is a snippet of AudioTrack::start:
status_t AudioTrack::start()
{
    ...
    status_t status = NO_ERROR;
    if (!(flags & CBLK_INVALID)) {
        /* call the server-side track's start */
        status = mAudioTrack->start();
        if (status == DEAD_OBJECT) {
            flags |= CBLK_INVALID;
        }
    }
    ...
}
Here mAudioTrack is an sp<IAudioTrack>; the moment you see IAudioTrack you should know we are on the AudioFlinger side, dealing with the track proxied by TrackHandle. Let us look at Tracks.cpp:
status_t AudioFlinger::PlaybackThread::Track::start(AudioSystem::sync_event_t event __unused,
                                                    int triggerSession __unused)
{
    ...
    PlaybackThread *playbackThread = (PlaybackThread *)thread.get();
    status = playbackThread->addTrack_l(this);
    ...
}
addTrack_l adds the current track; where does it add it?
addTrack_l@Threads.cpp
mActiveTracks.add(track);
mActiveTracks is a member variable of the playback thread PlaybackThread that records all of its active tracks; you can also see this data when running dumpsys on the audioflinger service. To summarize: when an application calls AudioTrack's start interface, on the native AudioFlinger side this simply adds the track to the playback/recording thread's active-track list so that its data will be processed.
With everything now in place, suppose the application keeps calling write to push data into the shared buffer; how does the consumer side fetch it? The answer lies in PlaybackThread's threadLoop function. Simplified, threadLoop boils down to three steps:
bool AudioFlinger::PlaybackThread::threadLoop()
{
    ...
    /* 1. find the active tracks */
    mMixerStatus = prepareTracks_l(&tracksToRemove);
    ...
    /* 2. run the mixing algorithm over the buffers */
    threadLoop_mix();
    ...
    /* 3. write the mixed data into the HAL-layer output */
    ssize_t ret = threadLoop_write();
    ...
}
Understanding threadLoop means understanding what these three functions do. Step one, prepareTracks_l, is fairly involved overall, but essentially it obtains the active tracks through mActiveTracks. Pay attention, though, to the operations on the member variable mAudioMixer:
mAudioMixer->setBufferProvider(name, track);
mAudioMixer->enable(name);
This registers the track as the mixer's buffer provider via setBufferProvider and enables it in the mixer. The key call is enable; let us follow it into AudioMixer.cpp:
void AudioMixer::enable(int name)
{
    name -= TRACK0;
    ALOG_ASSERT(uint32_t(name) < MAX_NUM_TRACKS, "bad track name %d", name);
    track_t& track = mState.tracks[name];

    if (!track.enabled) {
        track.enabled = true;
        ALOGV("enable(%d)", name);
        invalidateState(1 << name);
    }
}
The name here is effectively the track's index. Continue into invalidateState:
void AudioMixer::invalidateState(uint32_t mask)
{
    if (mask != 0) {
        mState.needsChanged |= mask;
        mState.hook = process__validate;
    }
}
Notice the function pointer hook here; as you might guess, process__validate points it at a suitable processing function. We will not step into process__validate; all you need to know is that it selects the appropriate mixing function for the track's scenario. So where does this function pointer get called?
Step two, threadLoop_mix, answers that right away:
threadLoop_mix@MixerThread:
mAudioMixer->process(pts);
mAudioMixer is the mixer, and mixing happens through its process method:
process@AudioMixer.cpp
void AudioMixer::process(int64_t pts)
{
    mState.hook(&mState, pts);
}
Now the purpose of enabling mAudioMixer in prepareTracks_l during step one becomes clear. In my current scenario, with a single track playing, the function invoked is process__OneTrack16BitsStereoNoResampling. Up to this point we still have not seen the shared memory being touched; bear with me and look at this snippet:
void AudioMixer::process__OneTrack16BitsStereoNoResampling(state_t* state,
                                                           int64_t pts)
{
    ...
    /* 1. obtain a readable buffer */
    t.bufferProvider->getNextBuffer(&b, outputPTS);
    ...
    /* 2. process the samples and write them into out */
    *out++ = (r<<16) | (l & 0xFFFF);
    ...
    /* 3. release this region of the buffer when done */
    t.bufferProvider->releaseBuffer(&b);
    ...
}
This mirrors the shared-buffer handling in AudioTrack's write function. t.bufferProvider is the track object itself; let us look at getNextBuffer:
status_t AudioFlinger::PlaybackThread::Track::getNextBuffer(
        AudioBufferProvider::Buffer* buffer, int64_t pts __unused)
{
    ServerProxy::Buffer buf;
    size_t desiredFrames = buffer->frameCount;
    buf.mFrameCount = desiredFrames;
    status_t status = mServerProxy->obtainBuffer(&buf);
    buffer->frameCount = buf.mFrameCount;
    buffer->raw = buf.mRaw;
    if (buf.mFrameCount == 0) {
        mAudioTrackServerProxy->tallyUnderrunFrames(desiredFrames);
    }
    return status;
}
When analyzing the producer we ran into ClientProxy; now that mServerProxy appears, we are getting close. Look at obtainBuffer@AudioTrackShared.cpp:
status_t ServerProxy::obtainBuffer(Buffer* buffer, bool ackFlush)
{
    ...
    audio_track_cblk_t* cblk = mCblk;
    ...
}
Right at the entry you finally see the familiar mCblk: both the producer and the consumer operate on the shared buffer through obtainBuffer and releaseBuffer; the consumer side is just wrapped in many more layers. To summarize step two: it locates the readable shared buffer through the audio_track_cblk_t structure and then performs the mixing. The last step should now be obvious: write the processed data to the final output:
threadLoop_write@Threads.cpp
ssize_t AudioFlinger::PlaybackThread::threadLoop_write()
{
    ...
    bytesWritten = mOutput->stream->write(mOutput->stream,
            (char *)mSinkBuffer + offset, mBytesRemaining);
    ...
}
mOutput is the output selected by AudioPolicyService's routing policy; here the data is written straight into the corresponding HAL layer.