In-Depth Understanding of the Android Audio/Video Sync Mechanism (1): ExoPlayer's avsync Logic


    For readers who have not looked at ExoPlayer before, the sequence diagram below gives a quick overview of the basic flow ExoPlayer follows for audio/video synchronization:

    In the diagram:

    ExoPlayerImplInternal hosts ExoPlayer's main loop. This big loop runs continuously, feeding the downloaded and demuxed data to AudioTrack and MediaCodec for playback.

    MediaCodecAudioRenderer and MediaCodecVideoRenderer are the classes that handle audio and video data respectively. MediaCodecAudioRenderer calls AudioTrack's write method to write audio data, and also calls AudioTrack's getTimestamp, getPlaybackHeadPosition and getLatency methods to obtain the "current audio playback position". MediaCodecVideoRenderer calls several key MediaCodec APIs, for example releaseOutputBuffer to send video frames to the display. MediaCodecVideoRenderer also adjusts the pts of each video frame according to the avsync logic and controls the frame-dropping logic.

    VideoFrameReleaseTimeHelper obtains the system's vsync timestamps and vsync interval, and uses the vsync signal to adjust the release (display) time of each video frame.
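    ExoPlayer samples these vsync timestamps via Android's Choreographer. The following is only a minimal sketch of the idea, not ExoPlayer's actual implementation (which runs its sampler on a dedicated thread); the class and field names here are made up for illustration:

    import android.view.Choreographer;

    // Sketch: keep recording the timestamp of the most recent vsync so that the release
    // logic can later snap a frame's release time onto the vsync grid.
    final class VsyncSamplerSketch implements Choreographer.FrameCallback {

        volatile long sampledVsyncTimeNs;

        // Must be called on a thread that has a Looper (e.g. the main thread or a HandlerThread).
        void start() {
            Choreographer.getInstance().postFrameCallback(this);
        }

        @Override
        public void doFrame(long frameTimeNanos) {
            sampledVsyncTimeNs = frameTimeNanos;                  // timestamp of the latest vsync
            Choreographer.getInstance().postFrameCallback(this);  // keep sampling the next vsync
        }
    }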

    Below I will first briefly go over the key points of ExoPlayer's avsync logic, and then walk through the code in detail.


    Video Part

    1. Computing the expected release time from the pts and the system time (the point at which the video frame should be displayed)

    MediaCodecVideoRenderer#processOutputBuffer
    // Compute the gap between "the pts of the current frame (bufferPresentationTimeUs)" and
    // "the current audio playback position (positionUs)".
    // elapsedSinceStartOfLoopUs, which is subtracted at the end, is the time the code has spent
    // getting to this point; subtracting it can be seen as a way to make the result more accurate.
    long elapsedSinceStartOfLoopUs = (SystemClock.elapsedRealtime() * 1000) - elapsedRealtimeUs;
    earlyUs = bufferPresentationTimeUs - positionUs - elapsedSinceStartOfLoopUs;
    // Compute the buffer's desired release time in nanoseconds.
    // Adding the gap computed above to the current system time gives the "expected release time".
    long systemTimeNs = System.nanoTime();
    long unadjustedFrameReleaseTimeNs = systemTimeNs + (earlyUs * 1000);
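    To make the arithmetic concrete, here is a tiny self-contained example with made-up numbers (they are not taken from any real stream):

    // Hypothetical values, purely to illustrate the formulas above.
    public final class EarlyUsExample {
        public static void main(String[] args) {
            long bufferPresentationTimeUs = 1_020_000; // pts of the current video frame
            long positionUs = 1_000_000;               // current audio playback position
            long elapsedSinceStartOfLoopUs = 2_000;    // time already spent in this loop iteration

            // The frame is 18 ms "early" relative to the audio clock.
            long earlyUs = bufferPresentationTimeUs - positionUs - elapsedSinceStartOfLoopUs;

            // So it should be released roughly 18 ms from now.
            long unadjustedFrameReleaseTimeNs = System.nanoTime() + (earlyUs * 1000);

            System.out.println("earlyUs = " + earlyUs);
            System.out.println("unadjustedFrameReleaseTimeNs = " + unadjustedFrameReleaseTimeNs);
        }
    }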

    2. Adjusting the expected release time using vsync

    MediaCodecVideoRenderer#processOutputBuffer
    long adjustedReleaseTimeNs = frameReleaseTimeHelper.adjustReleaseTime(
        bufferPresentationTimeUs, unadjustedFrameReleaseTimeNs);

    The adjustReleaseTime method does several things:

    a. Compute the average frame interval at nanosecond precision, because vsync timestamps have nanosecond precision.

    b. Find the vsync closest to the current release time (unadjustedFrameReleaseTimeNs); it may fall before or after the release time. The goal is to have the video frame displayed at that vsync.

    c. The value computed above is the target vsync at which the frame should be displayed, but the frame has to be released earlier to give the rest of the display pipeline time to do its work, so a vsyncOffsetNs is subtracted. This offset is hard-coded as 0.8 * vsyncDuration, and the resulting value is the timestamp that is actually passed to MediaCodec.releaseOutputBuffer (a minimal sketch of steps b and c follows this list).
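    A minimal sketch of steps b and c, assuming the release time is not earlier than the sampled vsync timestamp (names are simplified; ExoPlayer's real code is shown in sections 1.1 and 1.1.1 below):

    // Snap the desired release time to the nearest vsync, then release 80% of one vsync
    // period earlier so the rest of the display pipeline has time to present the frame.
    final class VsyncSnapSketch {
        static long vsyncAdjustedReleaseTimeNs(
                long releaseTimeNs, long sampledVsyncTimeNs, long vsyncDurationNs) {
            long vsyncCount = (releaseTimeNs - sampledVsyncTimeNs) / vsyncDurationNs;
            long before = sampledVsyncTimeNs + vsyncCount * vsyncDurationNs; // vsync at or before releaseTimeNs
            long after = before + vsyncDurationNs;                           // the following vsync
            long targetVsyncNs = (after - releaseTimeNs < releaseTimeNs - before) ? after : before;
            long vsyncOffsetNs = (long) (0.80 * vsyncDurationNs);            // the hard-coded 80% offset
            return targetVsyncNs - vsyncOffsetNs;
        }
    }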

    There are actually two open questions here: first, whether the 0.8 factor is a reasonable choice; second, whether there is any way to verify that the frame was really displayed at that vsync. According to the comment on MediaCodec.releaseOutputBuffer, the release method should be called about two vsyncs before the desired render time, but the current implementation does not follow that recommendation.

    After some investigation we found that dumpsys SurfaceFlinger --latency SurfaceView reports the desiredPresentationTime and actualPresentationTime of every frame. In our tests, on some platforms the gap between these two values is more than one vsync, typically around 22 ms, so the 0.8 factor used in ExoPlayer may not be entirely reasonable. We also looked at NuPlayer's avsync logic and found that NuPlayer strictly follows the releaseOutputBuffer comment and calls the release method two vsyncs ahead of time.

    The comment mentioned above reads as follows:

    /**
     * If you are done with a buffer, use this call to update its surface timestamp
     * and return it to the codec to render it on the output surface. If you
     * have not specified an output surface when configuring this video codec,
     * this call will simply return the buffer to the codec.
     *
     * The timestamp may have special meaning depending on the destination surface.
     *
     * SurfaceView specifics
     * If you render your buffer on a {@link android.view.SurfaceView},
     * you can use the timestamp to render the buffer at a specific time (at the
     * VSYNC at or after the buffer timestamp). For this to work, the timestamp
     * needs to be reasonably close to the current {@link System#nanoTime}.
     * Currently, this is set as within one (1) second. A few notes:
     *
     * - the buffer will not be returned to the codec until the timestamp
     *   has passed and the buffer is no longer used by the {@link android.view.Surface}.
     * - buffers are processed sequentially, so you may block subsequent buffers to
     *   be displayed on the {@link android.view.Surface}. This is important if you
     *   want to react to user action, e.g. stop the video or seek.
     * - if multiple buffers are sent to the {@link android.view.Surface} to be
     *   rendered at the same VSYNC, the last one will be shown, and the other ones
     *   will be dropped.
     * - if the timestamp is not "reasonably close" to the current system
     *   time, the {@link android.view.Surface} will ignore the timestamp, and
     *   display the buffer at the earliest feasible time. In this mode it will not
     *   drop frames.
     *   Note this part!!!
     * - for best performance and quality, call this method when you are about
     *   two VSYNCs' time before the desired render time. For 60Hz displays, this is
     *   about 33 msec.
     *
     * Once an output buffer is released to the codec, it MUST NOT
     * be used until it is later retrieved by {@link #getOutputBuffer} in response
     * to a {@link #dequeueOutputBuffer} return value or a
     * {@link Callback#onOutputBufferAvailable} callback.
     *
     * @param index The index of a client-owned output buffer previously returned
     *     from a call to {@link #dequeueOutputBuffer}.
     * @param renderTimestampNs The timestamp to associate with this buffer when
     *     it is sent to the Surface.
     * @throws IllegalStateException if not in the Executing state.
     * @throws MediaCodec.CodecException upon codec error.
     */
    public final void releaseOutputBuffer(int index, long renderTimestampNs)

      3. Dropping frames and releasing for display

      MediaCodecVideoRenderer#processOutputBuffer
      // Compute the difference between the actual release time and the current system time.
      earlyUs = (adjustedReleaseTimeNs - systemTimeNs) / 1000;
      // Compare the difference computed above with the preset thresholds.
      if (shouldDropOutputBuffer(earlyUs, elapsedRealtimeUs)) {
          dropOutputBuffer(codec, bufferIndex);
          return true;
      }
      if (earlyUs < 50000) {
          // A frame that arrives too late is dropped; a frame that arrives too early is not
          // rendered yet and is re-evaluated on the next loop iteration.
          renderOutputBufferV21(codec, bufferIndex, adjustedReleaseTimeNs);

      If earlyUs is positive, the frame should be displayed after the current system time, in other words it arrived early; if it is negative, the frame should have been displayed before the current system time, in other words it arrived late. If it is late by more than a certain threshold, i.e. the frame arrived far too late, it is dropped and never displayed. Per the preset threshold, if the frame arrives more than 50 ms earlier than its scheduled time, the renderer goes into the next loop iteration (10 ms later) and evaluates it again; otherwise the frame is released for display.
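      Putting the two thresholds together, the per-frame decision can be sketched as follows. The -30000 us drop threshold is stock ExoPlayer's default; the code quoted in this article uses a configurable frameDropThres instead, so treat the exact number as an assumption:

      // Sketch of the decision made in processOutputBuffer for each output buffer.
      final class FrameDecisionSketch {
          enum Action { DROP, RENDER, TRY_AGAIN_LATER }

          static Action decide(long earlyUs) {
              if (earlyUs < -30_000) {
                  return Action.DROP;            // far too late: dropOutputBuffer
              } else if (earlyUs < 50_000) {
                  return Action.RENDER;          // close enough: renderOutputBufferV21
              } else {
                  return Action.TRY_AGAIN_LATER; // >50 ms early: re-evaluate on the next 10 ms loop
              }
          }
      }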

      Summary

      1. We usually think of avsync as comparing the audio pts with the video pts, i.e. comparing "playback" times at the stream level: wait if a frame is early, drop it if it is late. To compute this difference more precisely, ExoPlayer on the one hand accounts for the time spent in some of the function calls, and on the other hand actually compares the system time against the frame's release time when deciding whether to drop, which moves the decision beyond the pure stream level.

      2. Since the actual release time is involved, the playback time has to be mapped onto vsync times; this is where the closestVsync computation comes from, as well as the practice of releasing 80% of a vsync interval early. And because vsync timestamps have nanosecond precision, the millisecond-precision pts from the stream is not used directly; instead a nanosecond-level frame interval is computed to match that precision.

      Audio Part

      1.1 Getting the current playback position: AudioTrack.getTimestamp

      AudioTrack#getCurrentPositionUs(boolean sourceEnded)
      positionUs = framesToDurationUs(AudioTimestamp.framePosition)
          + systemClockUs - AudioTimestamp.nanoTime / 1000

      getTimestamp is called at 500 ms intervals, so AudioTimestamp.nanoTime is the result obtained on the previous call. systemClockUs - AudioTimestamp.nanoTime is therefore the system time elapsed since that call, and framesToDurationUs(AudioTimestamp.framePosition) is the "current audio playback position" obtained at that call; adding the two gives the "current audio playback position" at the current system time.
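      A simplified, self-contained sketch of this formula (the helper class is hypothetical; ExoPlayer's real code is shown in section 2.2 below):

      import android.media.AudioTimestamp;

      final class TimestampPositionSketch {
          // Sketch: extrapolate the audio position from the last AudioTimestamp sample.
          static long positionUsFromTimestamp(AudioTimestamp ts, int sampleRate) {
              long systemClockUs = System.nanoTime() / 1000;
              long framePositionUs = ts.framePosition * 1_000_000L / sampleRate; // framesToDurationUs
              long elapsedSinceTimestampUs = systemClockUs - ts.nanoTime / 1000; // time since the sample
              return framePositionUs + elapsedSinceTimestampUs;
          }
      }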

      Why call getTimestamp only every 500 ms? See the API comment below.

      /**
       * Poll for a timestamp on demand.
       *
       * If you need to track timestamps during initial warmup or after a routing or mode change,
       * you should request a new timestamp periodically until the reported timestamps
       * show that the frame position is advancing, or until it becomes clear that
       * timestamps are unavailable for this route.
       *
       * After the clock is advancing at a stable rate,
       * query for a new timestamp approximately once every 10 seconds to once per minute.
       * Note this part!!!
       * Calling this method more often is inefficient.
       * It is also counter-productive to call this method more often than recommended,
       * because the short-term differences between successive timestamp reports are not meaningful.
       * If you need a high-resolution mapping between frame position and presentation time,
       * consider implementing that at application level, based on low-resolution timestamps.
       *
       * The audio data at the returned position may either already have been
       * presented, or may have not yet been presented but is committed to be presented.
       * It is not possible to request the time corresponding to a particular position,
       * or to request the (fractional) position corresponding to a particular time.
       * If you need such features, consider implementing them at application level.
       *
       * @param timestamp a reference to a non-null AudioTimestamp instance allocated
       *     and owned by caller.
       * @return true if a timestamp is available, or false if no timestamp is available.
       *     If a timestamp is available,
       *     the AudioTimestamp instance is filled in with a position in frame units, together
       *     with the estimated time when that frame was presented or is committed to
       *     be presented.
       *     In the case that no timestamp is available, any supplied instance is left unaltered.
       *     A timestamp may be temporarily unavailable while the audio clock is stabilizing,
       *     or during and immediately after a route change.
       *     A timestamp is permanently unavailable for a given route if the route does not support
       *     timestamps. In this case, the approximate frame position can be obtained
       *     using {@link #getPlaybackHeadPosition}.
       *     However, it may be useful to continue to query for
       *     timestamps occasionally, to recover after a route change.
       */
      // Add this text when the "on new timestamp" API is added:
      // Use if you need to get the most recent timestamp outside of the event callback handler.
      public boolean getTimestamp(AudioTimestamp timestamp)

      1.2 Getting the current playback position: AudioTrack.getPlaybackHeadPosition

      AudioTrack#getCurrentPositionUs(boolean sourceEnded)
      // getPlayheadPositionUs() only has a granularity of about 20 ms, which is not precise
      // enough to use directly, so the playback position is derived by sampling and smoothing.
      positionUs = systemClockUs + smoothedPlayheadOffsetUs
                 = systemClockUs + avg[playbackPositionUs(i) - systemClock(i)]
      positionUs -= latencyUs;

      In the expression above, i goes up to 10. Because the precision of getPlayheadPositionUs is not good enough for audio/video sync on its own, ExoPlayer computes the offset between each getPlayheadPositionUs sample and the system clock and averages those offsets to work around the limited precision; the smoothed value is smoothedPlayheadOffsetUs, and adding the system clock to it gives the "current audio playback position". Finally, the underlying delay obtained via AudioTrack.getLatency has to be subtracted to get the final result.
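      A minimal sketch of this smoothing (the field names follow the ExoPlayer code quoted in 2.2.1 below, but this class is only an illustration):

      // Sketch: keep the last 10 offsets between the reported playhead position and the system
      // clock, average them, and use the average plus the system clock as the position.
      final class PlayheadSmootherSketch {
          private static final int MAX_PLAYHEAD_OFFSET_COUNT = 10;
          private final long[] playheadOffsets = new long[MAX_PLAYHEAD_OFFSET_COUNT];
          private int nextPlayheadOffsetIndex;
          private int playheadOffsetCount;
          private long smoothedPlayheadOffsetUs;

          void sample(long playbackPositionUs, long systemClockUs) {
              playheadOffsets[nextPlayheadOffsetIndex] = playbackPositionUs - systemClockUs;
              nextPlayheadOffsetIndex = (nextPlayheadOffsetIndex + 1) % MAX_PLAYHEAD_OFFSET_COUNT;
              if (playheadOffsetCount < MAX_PLAYHEAD_OFFSET_COUNT) {
                  playheadOffsetCount++;
              }
              smoothedPlayheadOffsetUs = 0;
              for (int i = 0; i < playheadOffsetCount; i++) {
                  smoothedPlayheadOffsetUs += playheadOffsets[i] / playheadOffsetCount;
              }
          }

          long positionUs(long systemClockUs, long latencyUs) {
              return systemClockUs + smoothedPlayheadOffsetUs - latencyUs;
          }
      }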


      Summary

      Overall, there are two possible sync references in an audio/video sync mechanism: the system time or the audio playback position. For video-only streams the system time is used; that case is simple and will not be discussed further.

      a. If the audio position is used, it is computed with the following expression:

      startMediaTimeUs + positionUs

      where startMediaTimeUs is the initial audio pts taken from the stream, and positionUs is a value that starts at 0 and represents how much audio has been played.

      b. There are two ways to compute positionUs, chosen according to what the device supports:

      b.1. Using the AudioTimestamp value. Note that because getTimestamp should not be called frequently, ExoPlayer calls it at 500 ms intervals, so the corresponding logic can be simplified to:

      positionUs = framePosition/sampleRate + systemClock - nanoTime/1000

      b.2. Using audioTrack.getPlaybackHeadPosition. Because this value only has a granularity of about 20 ms and may jitter, it is smoothed; the corresponding logic can be simplified to:

      positionUs = systemClockUs + smoothedPlayheadOffsetUs - latencyUs
                 = systemClockUs + avg[playbackPositionUs(i) - systemClock(i)] - latencyUs
                 = systemClockUs + avg[(audioTrack.getPlaybackHeadPosition/sampleRate)(i) - systemClock(i)] - latencyUs

      A Close Reading of ExoPlayer's avsync Code

      As before, let's look at the video part first. The entry point of the avsync logic is the method below.

      com.google.android.exoplayer2.video.MediaCodecVideoRenderer#processOutputBuffer
      protected boolean processOutputBuffer(long positionUs /* current playback position, from the system clock or the audio clock */,
          long elapsedRealtimeUs, MediaCodec codec,
          ByteBuffer buffer, int bufferIndex, int bufferFlags, long bufferPresentationTimeUs /* pts of the current frame */,
          boolean shouldSkip) {
        ....
        // Compute how many microseconds it is until the buffer's presentation time.
        // Compute the gap between the pts of the current frame and the current playback position.
        // Note that elapsedSinceStartOfLoopUs is subtracted at the end; it is the time spent between
        // the playback position being updated and the code reaching this point, and subtracting it
        // can be seen as a way to make the result more accurate.
        long elapsedSinceStartOfLoopUs = (SystemClock.elapsedRealtime() * 1000) - elapsedRealtimeUs;
        earlyUs = bufferPresentationTimeUs - positionUs - elapsedSinceStartOfLoopUs;
        // Compute the buffer's desired release time in nanoseconds.
        // Adding the gap computed above to the current system time gives the preliminary expected release time.
        long systemTimeNs = System.nanoTime();
        long unadjustedFrameReleaseTimeNs = systemTimeNs + (earlyUs * 1000);
        // Apply a timestamp adjustment, if there is one.
        // Adjust the expected release time to obtain the actual release time; the adjustment logic is in 1.1 below.
        long adjustedReleaseTimeNs = frameReleaseTimeHelper.adjustReleaseTime(
            bufferPresentationTimeUs, unadjustedFrameReleaseTimeNs);
        // Compute the difference between the actual release time and the current system time. A positive
        // value means the frame should be displayed after the current system time, i.e. it arrived early;
        // a negative value means it should have been displayed before the current system time, i.e. it arrived late.
        earlyUs = (adjustedReleaseTimeNs - systemTimeNs) / 1000;
        // Compare the difference with the preset threshold. If it exceeds the threshold, the frame is far
        // too late and is dropped without being displayed; the detailed comparison and dropping logic is in 1.2 and 1.3.
        if (shouldDropOutputBuffer(earlyUs, elapsedRealtimeUs)) {
          dropOutputBuffer(codec, bufferIndex);
          return true;
        }
        if (Util.SDK_INT >= 21) {
          // Let the underlying framework time the release.
          if (earlyUs < 50000) {
            // A frame that arrives too late is dropped; one that arrives too early is also a problem.
            // Per the preset threshold, if the frame is more than 50 ms earlier than its scheduled time,
            // the renderer moves on to the next 10 ms loop iteration and evaluates it again; otherwise
            // the frame is released for display. The release logic is shown in 1.4 below.
            renderOutputBufferV21(codec, bufferIndex, adjustedReleaseTimeNs);
            return true;
          }
        } else {
          ....
        }
        ....
      }

      1.1

      The logic for adjusting the release time is as follows:

      com.google.android.exoplayer2.video.VideoFrameReleaseTimeHelper#adjustReleaseTime
      /**
       * Adjusts a frame release timestamp.
       *
       * @param framePresentationTimeUs The frame's presentation time, in microseconds.
       * @param unadjustedReleaseTimeNs The frame's unadjusted release time, in nanoseconds and in
       *     the same time base as {@link System#nanoTime()}.
       * @return The adjusted frame release timestamp, in nanoseconds and in the same time base as
       *     {@link System#nanoTime()}.
       */
      public long adjustReleaseTime(long framePresentationTimeUs, long unadjustedReleaseTimeNs) {
        long framePresentationTimeNs = framePresentationTimeUs * 1000;
        // Until we know better, the adjustment will be a no-op.
        // At the beginning there is nothing to adjust, so keep the values as they are.
        long adjustedFrameTimeNs = framePresentationTimeNs;    // adjusted pts of the frame
        long adjustedReleaseTimeNs = unadjustedReleaseTimeNs;  // adjusted release time of the frame
        if (haveSync) { // this branch is not taken the first time
          // See if we've advanced to the next frame.
          if (framePresentationTimeUs != lastFramePresentationTimeUs) {
            frameCount++; // moved on to the next frame
            adjustedLastFrameTimeNs = pendingAdjustedFrameTimeNs; // adjusted pts of the previous frame
          }
          if (frameCount >= MIN_FRAMES_FOR_ADJUSTMENT) {
            // We're synced and have waited the required number of frames to apply an adjustment.
            // Calculate the average frame time across all the frames we've seen since the last sync.
            // This will typically give us a frame rate at a finer granularity than the frame times
            // themselves (which often only have millisecond granularity).
            // Only adjust after being in the sync state for more than 6 frames.
            // What "sync" means here: if a frame's pts and its release time drift apart by more than
            // 20 ms, the drift is considered too large and sync is considered lost. Ideally a frame's
            // pts and its release time map one-to-one; the pts itself is a constant, but the release
            // time computation contains one uncertain variable, elapsedSinceStartOfLoopUs, whose ideal
            // value is always 0 but in practice is not, which introduces some deviation between the
            // pts and the corresponding release time. If that deviation exceeds 20 ms, sync is
            // considered lost, otherwise not. In some simple experiments, losing sync rarely happens.
            // The downside of using 6 frames as the window for the average frame interval is slow
            // convergence, so there may still be error after a long time; the upside is that the
            // adjustment can start early, which suits streams whose frame intervals are uneven.
            // First compute the average frame interval.
            long averageFrameDurationNs = (framePresentationTimeNs - syncFramePresentationTimeNs)
                / frameCount;
            // Project the adjusted frame time forward using the average.
            // Then use the average frame interval plus the previous frame's pts to compute a
            // nanosecond-precision frame pts; the pts read from the stream usually only has
            // millisecond precision.
            long candidateAdjustedFrameTimeNs = adjustedLastFrameTimeNs + averageFrameDurationNs;
            if (isDriftTooLarge(candidateAdjustedFrameTimeNs, unadjustedReleaseTimeNs)) {
              haveSync = false;
            } else {
              // If we are still in sync, use the nanosecond-level pts as the adjusted frame pts.
              // Since we are still considered in sync, the release time should change by the same
              // amount as the frame pts, hence the expression below.
              adjustedFrameTimeNs = candidateAdjustedFrameTimeNs;
              adjustedReleaseTimeNs = syncUnadjustedReleaseTimeNs + adjustedFrameTimeNs
                  - syncFramePresentationTimeNs;
            }
          } else {
            // We're synced but haven't waited the required number of frames to apply an adjustment.
            // Check drift anyway.
            // Fewer than 6 frames have passed since the last sync; just check whether the drift is
            // too large, i.e. whether the frame's pts and its release time differ by more than 20 ms.
            if (isDriftTooLarge(framePresentationTimeNs, unadjustedReleaseTimeNs)) {
              haveSync = false;
            }
          }
        }
        // If we need to sync, do so now. This is where we end up at the very beginning.
        if (!haveSync) {
          syncFramePresentationTimeNs = framePresentationTimeNs; // frame pts at the sync point
          syncUnadjustedReleaseTimeNs = unadjustedReleaseTimeNs; // release time at the sync point
          frameCount = 0;
          haveSync = true; // effectively the first frame is assumed to be in sync
          onSynced(); // does nothing
        }
        lastFramePresentationTimeUs = framePresentationTimeUs; // remember the previous frame's pts
        pendingAdjustedFrameTimeNs = adjustedFrameTimeNs;       // pts of the frame about to be released
        if (vsyncSampler == null || vsyncSampler.sampledVsyncTimeNs == 0) {
          // vsyncSampler reports the timestamp of each vsync; normally this branch is not taken.
          return adjustedReleaseTimeNs;
        }
        // Find the timestamp of the closest vsync. This is the vsync that we're targeting.
        // Find the vsync closest to the current release time (it may be before or after it); the
        // goal is to have the frame displayed at that vsync. The computation is described in 1.1.1.
        long snappedTimeNs = closestVsync(adjustedReleaseTimeNs,
            vsyncSampler.sampledVsyncTimeNs, vsyncDurationNs);
        // Apply an offset so that we release before the target vsync, but after the previous one.
        // The value above is the target vsync at which the frame should be displayed, but the frame
        // must be released earlier to give the rest of the pipeline time, so vsyncOffsetNs is
        // subtracted. It is hard-coded as 0.8 * vsyncDuration, and the result is the timestamp that
        // is actually passed to MediaCodec.releaseOutputBuffer.
        // Two open questions here: whether the 0.8 factor is reasonable, and whether there is any
        // way to verify that the frame was really displayed at that vsync. According to the comment
        // on MediaCodec.releaseOutputBuffer, the release method should be called two vsyncs ahead,
        // but the current implementation does not follow that recommendation.
        // Using dumpsys SurfaceFlinger --latency SurfaceView we can read each frame's
        // desiredPresentationTime and actualPresentationTime; in our tests, on some platforms the
        // gap between the two is more than one vsync, typically around 22 ms, so the 0.8 factor used
        // in ExoPlayer may not be entirely reasonable. We also looked at NuPlayer's avsync logic and
        // found that NuPlayer strictly follows the API comment and calls the release method two
        // vsyncs ahead of time.
        return snappedTimeNs - vsyncOffsetNs;
      }

      1.1.1

      The method for finding the vsync closest to the current release time is as follows:

      com.google.android.exoplayer2.video.VideoFrameReleaseTimeHelper#closestVsync
      private static long closestVsync(long releaseTime, long sampledVsyncTime, long vsyncDuration) {
        long vsyncCount = (releaseTime - sampledVsyncTime) / vsyncDuration;
        long snappedTimeNs = sampledVsyncTime + (vsyncDuration * vsyncCount);
        long snappedBeforeNs;
        long snappedAfterNs;
        if (releaseTime <= snappedTimeNs) {
          // snappedTimeNs - vsyncDuration ---- releaseTime ----- snappedTimeNs
          snappedBeforeNs = snappedTimeNs - vsyncDuration;
          snappedAfterNs = snappedTimeNs;
        } else {
          // snappedTimeNs ---- releaseTime ----- snappedTimeNs + vsyncDuration
          snappedBeforeNs = snappedTimeNs;
          snappedAfterNs = snappedTimeNs + vsyncDuration;
        }
        long snappedAfterDiff = snappedAfterNs - releaseTime;
        long snappedBeforeDiff = releaseTime - snappedBeforeNs;
        // If the following vsync is closer, pick it; otherwise pick the preceding vsync.
        return snappedAfterDiff < snappedBeforeDiff ? snappedAfterNs : snappedBeforeNs;
      }

      1.1.2

      Checking whether a frame's pts has drifted too far from its release time:

      com.google.android.exoplayer2.video.VideoFrameReleaseTimeHelper#isDriftTooLarge
      private boolean isDriftTooLarge(long frameTimeNs, long releaseTimeNs) {
        // If a frame's pts and its release time drift apart by more than 20 ms, the drift is
        // considered too large and sync is considered lost.
        // Ideally a frame's pts and its release time map one-to-one; the pts does not change, but
        // the release time computation contains one uncertain variable, elapsedSinceStartOfLoopUs,
        // whose ideal value is always 0 but in practice is not, which introduces some deviation
        // between the pts and the corresponding release time. If that deviation exceeds 20 ms, sync
        // is considered lost, otherwise not. In some simple experiments, losing sync rarely happens.
        long elapsedFrameTimeNs = frameTimeNs - syncFramePresentationTimeNs;
        long elapsedReleaseTimeNs = releaseTimeNs - syncUnadjustedReleaseTimeNs;
        return Math.abs(elapsedReleaseTimeNs - elapsedFrameTimeNs) > MAX_ALLOWED_DRIFT_NS;
      }

      1.2

      The logic that decides whether to drop a frame:

      com.google.android.exoplayer2.video.MediaCodecVideoRenderer#shouldDropOutputBuffer
      /**
       * Returns whether the buffer being processed should be dropped.
       *
       * @param earlyUs The time until the buffer should be presented in microseconds. A negative value
       *     indicates that the buffer is late.
       * @param elapsedRealtimeUs {@link android.os.SystemClock#elapsedRealtime()} in microseconds,
       *     measured at the start of the current iteration of the rendering loop.
       */
      protected boolean shouldDropOutputBuffer(long earlyUs, long elapsedRealtimeUs) {
        /* For fps > 30fps, drop the frame if we're more than 30 ms late rendering the frame.
         * For fps <= 30fps, drop the frame if we're more than (1/fps*1000) ms late rendering the frame.
         */
        return earlyUs < -frameDropThres;
      }

      1.3

      The logic that actually drops a frame:

      com.google.android.exoplayer2.video.MediaCodecVideoRenderer#dropOutputBuffer
      private void dropOutputBuffer(MediaCodec codec, int bufferIndex) {
        TraceUtil.beginSection("dropVideoBuffer");
        // Note that false here means "do not render", i.e. this frame is dropped.
        codec.releaseOutputBuffer(bufferIndex, false);
        TraceUtil.endSection();
        decoderCounters.droppedOutputBufferCount++;
        droppedFrames++;
        consecutiveDroppedFrameCount++;
        decoderCounters.maxConsecutiveDroppedOutputBufferCount = Math.max(consecutiveDroppedFrameCount,
            decoderCounters.maxConsecutiveDroppedOutputBufferCount);
        if (droppedFrames == maxDroppedFramesToNotify) {
          maybeNotifyDroppedFrames();
        }
      }

      1.4

      The frame is released for display here:

      com.google.android.exoplayer2.video.MediaCodecVideoRenderer#renderOutputBufferV21
      private void renderOutputBufferV21(MediaCodec codec, int bufferIndex, long releaseTimeNs) {
        maybeNotifyVideoSizeChanged();
        TraceUtil.beginSection("releaseOutputBuffer");
        codec.releaseOutputBuffer(bufferIndex, releaseTimeNs);
        TraceUtil.endSection();
        decoderCounters.renderedOutputBufferCount++;
        consecutiveDroppedFrameCount = 0;
        maybeNotifyRenderedFirstFrame();
      }

      At its core it calls the following API:

      android.media.MediaCodec#releaseOutputBuffer(int, long)
      /**
       * The timestamp may have special meaning depending on the destination surface.
       *
       * SurfaceView specifics
       * If you render your buffer on a {@link android.view.SurfaceView},
       * you can use the timestamp to render the buffer at a specific time (at the
       * VSYNC at or after the buffer timestamp). For this to work, the timestamp
       * needs to be reasonably close to the current {@link System#nanoTime}.
       * Currently, this is set as within one (1) second. A few notes:
       *
       * - if multiple buffers are sent to the {@link android.view.Surface} to be
       *   rendered at the same VSYNC, the last one will be shown, and the other ones
       *   will be dropped.
       * - if the timestamp is not "reasonably close" to the current system
       *   time, the {@link android.view.Surface} will ignore the timestamp, and
       *   display the buffer at the earliest feasible time. In this mode it will not
       *   drop frames.
       * - for best performance and quality, call this method when you are about
       *   two VSYNCs' time before the desired render time. For 60Hz displays, this is
       *   about 33 msec.
       *
       * @param index The index of a client-owned output buffer previously returned
       *     from a call to {@link #dequeueOutputBuffer}.
       * @param renderTimestampNs The timestamp to associate with this buffer when
       *     it is sent to the Surface.
       */
      public final void releaseOutputBuffer(int index, long renderTimestampNs)

      Now for the audio part. When we looked at the video sync logic above, we saw the processOutputBuffer function, one of whose parameters is positionUs. This value is the current audio playback position, computed from either the system clock or the audio clock. Let's look at how it is computed; the key code is below.

      com.google.android.exoplayer2.ExoPlayerImplInternal#updatePlaybackPositions
      private void updatePlaybackPositions() throws ExoPlaybackException {
        ...
        // Update the playback position.
        ...
        } else {
          if (rendererMediaClockSource != null && !rendererMediaClockSource.isEnded()) {
            // Use the audio playback position as the render position; see 2.2.
            rendererPositionUs = rendererMediaClock.getPositionUs();
            standaloneMediaClock.setPositionUs(rendererPositionUs);
          } else {
            // Use the system time as the render position; see 2.1.
            rendererPositionUs = standaloneMediaClock.getPositionUs();
          }
          periodPositionUs = playingPeriodHolder.toPeriodTime(rendererPositionUs);
        }
        ...
      }

      Let's start with the simpler method, which computes the render position from the system time.

      2.1

      com.google.android.exoplayer2.util.StandaloneMediaClock#getPositionUs
      public long getPositionUs() {
        long positionUs = baseUs;
        if (started) {
          // As can be seen, positionUs = baseUs + elapsedSinceBaseMs; how these two values are
          // computed is shown in 2.1.1 below.
          long elapsedSinceBaseMs = SystemClock.elapsedRealtime() - baseElapsedMs;
          if (playbackParameters.speed == 1f) {
            positionUs += C.msToUs(elapsedSinceBaseMs);
          } else {
            positionUs += playbackParameters.getSpeedAdjustedDurationUs(elapsedSinceBaseMs);
          }
        }
        return positionUs;
      }

      2.1.1

      setPositionUs can be seen as the method dedicated to updating baseUs and baseElapsedMs. It is called in two situations. In the first it is called only once, at the very start of playback, provided that the renderer in use does not implement getPositionUs (a situation that does not actually occur in ExoPlayer); for that case the computation in 2.1 is easy to understand. In the second situation, when the audio playback position is used as the render time, setPositionUs is called from updatePlaybackPositions on every iteration with the audio playback position as its argument, i.e. the standalone clock is kept aligned with the audio playback position.

      com.google.android.exoplayer2.util.StandaloneMediaClock#setPositionUs
      public void setPositionUs(long positionUs) {
        baseUs = positionUs;
        if (started) {
          baseElapsedMs = SystemClock.elapsedRealtime();
        }
      }


      2.2

      Having seen the simple method, let's now look at how the render time is computed from the audio playback time.

      com.google.android.exoplayer2.audio.MediaCodecAudioRenderer#getPositionUs
      public long getPositionUs() {
        long newCurrentPositionUs = audioTrack.getCurrentPositionUs(isEnded());
        if (newCurrentPositionUs != AudioTrack.CURRENT_POSITION_NOT_SET) {
          currentPositionUs = allowPositionDiscontinuity ? newCurrentPositionUs
              : Math.max(currentPositionUs, newCurrentPositionUs);
          allowPositionDiscontinuity = false;
        }
        return currentPositionUs;
      }

      What it actually calls is the getCurrentPositionUs method of the AudioTrack class that ExoPlayer wraps around the platform AudioTrack.

      /**
       * Returns the playback position in the stream starting at zero, in microseconds, or
       * {@link #CURRENT_POSITION_NOT_SET} if it is not yet available.
       *
       * If the device supports it, the method uses the playback timestamp from
       * {@link android.media.AudioTrack#getTimestamp}. Otherwise, it derives a smoothed position by
       * sampling the {@link android.media.AudioTrack}'s frame position.
       * Per the comment, the core logic is: if the device supports it, get the playback timestamp
       * from android.media.AudioTrack#getTimestamp, which requires Android API > 19 on one hand and
       * a getTimestamp implementation in the lower layers on the other; otherwise derive a playback
       * position by sampling and smoothing the AudioTrack's frame position.
       *
       * @param sourceEnded Specify {@code true} if no more input buffers will be provided.
       * @return The playback position relative to the start of playback, in microseconds.
       */
      public long getCurrentPositionUs(boolean sourceEnded) {
        ...
        if (audioTrack.getPlayState() == PLAYSTATE_PLAYING) {
          // maybeSampleSyncParams does three things: it samples and smooths the audio track frame
          // position, it sanity-checks the result of audioTrack.getTimestamp, and it obtains the
          // AudioTrack latency. See the analysis in 2.2.1 below.
          maybeSampleSyncParams();
        }
        long systemClockUs = System.nanoTime() / 1000;
        long positionUs;
        if (audioTimestampSet) {
          // Calculate the speed-adjusted position using the timestamp (which may be in the future).
          // If audioTrack.getTimestamp succeeded above, use it for the computation. This requires
          // device support; for example, on many TV devices this path is not available when playing
          // through a Bluetooth speaker.
          // To understand this block we first need to explain what the audioTrackUtil methods do;
          // see the analysis in 2.2.2 below.
          // Also note that in ExoPlayer the audio timestamp is refreshed every 500 ms by default.
          // The logic below can be simplified to:
          // positionUs = framesToDurationUs(AudioTimestamp.framePosition)
          //     + systemClockUs - AudioTimestamp.nanoTime / 1000
          long elapsedSinceTimestampUs = systemClockUs - (audioTrackUtil.getTimestampNanoTime() / 1000);
          long elapsedSinceTimestampFrames = durationUsToFrames(elapsedSinceTimestampUs);
          long elapsedFrames = audioTrackUtil.getTimestampFramePosition() + elapsedSinceTimestampFrames;
          positionUs = framesToDurationUs(elapsedFrames);
        } else {
          // If AudioTrack.getTimestamp was not available above, compute the position from
          // AudioTrack.getPlaybackHeadPosition instead.
          // A detailed analysis of getPlaybackHeadPosition is in 2.2.3 below.
          if (playheadOffsetCount == 0) {
            // The AudioTrack has started, but we don't have any samples to compute a smoothed position.
            positionUs = audioTrackUtil.getPositionUs();
          } else {
            // getPlayheadPositionUs() only has a granularity of ~20 ms, so we base the position off the
            // system clock (and a smoothed offset between it and the playhead position) so as to
            // prevent jitter in the reported positions.
            // This explains why the playback position is derived by sampling and smoothing: the
            // granularity of getPlayheadPositionUs() is only about 20 ms, which is not precise enough
            // to use directly.
            positionUs = systemClockUs + smoothedPlayheadOffsetUs;
          }
          if (!sourceEnded) {
            // When this path is used, latencyUs, computed in maybeSampleSyncParams, also has to be
            // subtracted. This is really a simplified form of:
            // avg[(audioTrack.getPlaybackHeadPosition/sampleRate)(i)] + sysClock - (sysClock + latency)
            //   = avg[(audioTrack.getPlaybackHeadPosition/sampleRate)(i)] - latency
            positionUs -= latencyUs;
          }
        }
        // The positionUs computed above is the audio playback duration starting from 0; a time base,
        // startMediaTimeUs, has to be added to get the actual audio playback position. The
        // computation of startMediaTimeUs is described in 2.2.5.
        return startMediaTimeUs + applySpeedup(positionUs);
      }

      2.2.1

      maybeSampleSyncParams is a fairly important method: it smooths the playback position, fetches the timestamp, and fetches the latency.

      com.google.android.exoplayer2.audio.AudioTrack#maybeSampleSyncParams
      /**
       * Updates the audio track latency and playback position parameters.
       */
      private void maybeSampleSyncParams() {
        // Get the playback position from the AudioTrack; see the analysis in 2.2.3 below.
        long playbackPositionUs = audioTrackUtil.getPositionUs();
        if (playbackPositionUs == 0) {
          // The AudioTrack hasn't output anything yet.
          return;
        }
        long systemClockUs = System.nanoTime() / 1000;
        if (systemClockUs - lastPlayheadSampleTimeUs >= MIN_PLAYHEAD_OFFSET_SAMPLE_INTERVAL_US) {
          // Take a new sample and update the smoothed offset between the system clock and the playhead.
          // The sampling interval is 30 ms.
          // The sampling logic below can be simplified to
          // smoothedPlayheadOffsetUs = avg[playbackPositionUs(i) - systemClock(i)], with i up to 10.
          // Its purpose is to average out any jitter in playbackPositionUs.
          playheadOffsets[nextPlayheadOffsetIndex] = playbackPositionUs - systemClockUs;
          nextPlayheadOffsetIndex = (nextPlayheadOffsetIndex + 1) % MAX_PLAYHEAD_OFFSET_COUNT;
          if (playheadOffsetCount < MAX_PLAYHEAD_OFFSET_COUNT) {
            playheadOffsetCount++;
          }
          lastPlayheadSampleTimeUs = systemClockUs;
          smoothedPlayheadOffsetUs = 0;
          for (int i = 0; i < playheadOffsetCount; i++) {
            smoothedPlayheadOffsetUs += playheadOffsets[i] / playheadOffsetCount;
          }
        }
        ...
        if (systemClockUs - lastTimestampSampleTimeUs >= MIN_TIMESTAMP_SAMPLE_INTERVAL_US) {
          // Fetch the AudioTrack timestamp at 500 ms intervals; see the analysis of
          // AudioTrack.getTimestamp in 2.2.2 below.
          audioTimestampSet = audioTrackUtil.updateTimestamp();
          if (audioTimestampSet) {
            // Perform sanity checks on the timestamp.
            // If a new AudioTimestamp was obtained, run the three checks below.
            long audioTimestampUs = audioTrackUtil.getTimestampNanoTime() / 1000;
            long audioTimestampFramePosition = audioTrackUtil.getTimestampFramePosition();
            if (audioTimestampUs < resumeSystemTimeUs) {
              // The timestamp corresponds to a time before the track was most recently resumed.
              // First make sure the reported time is not already in the past.
              audioTimestampSet = false;
            } else if (Math.abs(audioTimestampUs - systemClockUs) > MAX_AUDIO_TIMESTAMP_OFFSET_US) {
              // The timestamp time base is probably wrong.
              // Then make sure the reported time does not differ too much from the current system
              // time; the threshold is 5 s.
              String message = "Spurious audio timestamp (system clock mismatch): "
                  + audioTimestampFramePosition + ", " + audioTimestampUs + ", " + systemClockUs + ", "
                  + playbackPositionUs + ", " + getSubmittedFrames() + ", " + getWrittenFrames();
              if (failOnSpuriousAudioTimestamp) {
                throw new InvalidAudioTrackTimestampException(message);
              }
              Log.w(TAG, message);
              audioTimestampSet = false;
            } else if (Math.abs(framesToDurationUs(audioTimestampFramePosition) - playbackPositionUs)
                > MAX_AUDIO_TIMESTAMP_OFFSET_US) {
              // The timestamp frame position is probably wrong.
              // Finally make sure the positions obtained via getTimestamp and getPlaybackHeadPosition
              // do not differ too much; the threshold is also 5 s.
              String message = "Spurious audio timestamp (frame position mismatch): "
                  + audioTimestampFramePosition + ", " + audioTimestampUs + ", " + systemClockUs + ", "
                  + playbackPositionUs + ", " + getSubmittedFrames() + ", " + getWrittenFrames();
              if (failOnSpuriousAudioTimestamp) {
                throw new InvalidAudioTrackTimestampException(message);
              }
              Log.w(TAG, message);
              audioTimestampSet = false;
            }
          }
          if (getLatencyMethod != null && !passthrough) {
            try {
              // Compute the audio track latency, excluding the latency due to the buffer (leaving
              // latency due to the mixer and audio hardware driver).
              // The latency is obtained from the AudioTrack; see the analysis in 2.2.4 below.
              // Note that bufferSizeUs is subtracted, leaving only the latency introduced by the
              // mixer and the audio hardware driver.
              // The computation of bufferSizeUs can be seen in audioTrack.configure below.
              latencyUs = (Integer) getLatencyMethod.invoke(audioTrack, (Object[]) null) * 1000L
                  - bufferSizeUs;
              // Sanity check that the latency is non-negative.
              latencyUs = Math.max(latencyUs, 0);
              // Sanity check that the latency isn't too large.
              if (latencyUs > MAX_LATENCY_US) {
                Log.w(TAG, "Ignoring impossibly large audio latency: " + latencyUs);
                latencyUs = 0;
              }
            } catch (Exception e) {
              // The method existed, but doesn't work. Don't try again.
              getLatencyMethod = null;
            }
          }
          lastTimestampSampleTimeUs = systemClockUs;
        }
      }

      Computing bufferSizeUs:

      public void configure(String mimeType, int channelCount, int sampleRate,
          @C.PcmEncoding int pcmEncoding, int specifiedBufferSize, int[] outputChannels) {
        if (specifiedBufferSize != 0) {
          ....
        } else if (passthrough) {
          ....
        } else {
          // Get minBufferSize from the AudioTrack; see 2.2.6 for getMinBufferSize.
          int minBufferSize =
              android.media.AudioTrack.getMinBufferSize(sampleRate, channelConfig, outputEncoding);
          Assertions.checkState(minBufferSize != ERROR_BAD_VALUE);
          // Multiply by a factor, which is 4.
          int multipliedBufferSize = minBufferSize * BUFFER_MULTIPLICATION_FACTOR;
          int minAppBufferSize = (int) durationUsToFrames(MIN_BUFFER_DURATION_US) * outputPcmFrameSize;
          int maxAppBufferSize = (int) Math.max(minBufferSize,
              durationUsToFrames(MAX_BUFFER_DURATION_US) * outputPcmFrameSize);
          // bufferSizeUs ends up in the range [250 ms, 750 ms].
          bufferSize = multipliedBufferSize < minAppBufferSize ? minAppBufferSize
              : multipliedBufferSize > maxAppBufferSize ? maxAppBufferSize
              : multipliedBufferSize;
        }
        bufferSizeUs = passthrough ? C.TIME_UNSET : framesToDurationUs(bufferSize / outputPcmFrameSize);
        ...
      }

      2.2.2

      On the getTimestamp path, the two key methods, getTimestampNanoTime and getTimestampFramePosition, return the two fields of an AudioTimestamp, and the AudioTimestamp itself is obtained via audioTrack.getTimestamp.

      com.google.android.exoplayer2.audio.AudioTrack.AudioTrackUtilV19
      private static class AudioTrackUtilV19 extends AudioTrackUtil {
        private final AudioTimestamp audioTimestamp;
        private long rawTimestampFramePositionWrapCount;
        private long lastRawTimestampFramePosition;
        private long lastTimestampFramePosition;
        public AudioTrackUtilV19() {
          audioTimestamp = new AudioTimestamp();
        }
        ....
        @Override
        public boolean updateTimestamp() {
          boolean updated = audioTrack.getTimestamp(audioTimestamp);
          if (updated) {
            long rawFramePosition = audioTimestamp.framePosition;
            if (lastRawTimestampFramePosition > rawFramePosition) {
              // The value must have wrapped around.
              rawTimestampFramePositionWrapCount++;
            }
            lastRawTimestampFramePosition = rawFramePosition;
            lastTimestampFramePosition = rawFramePosition + (rawTimestampFramePositionWrapCount << 32);
          }
          return updated;
        }
        @Override
        public long getTimestampNanoTime() {
          return audioTimestamp.nanoTime;
        }
        @Override
        public long getTimestampFramePosition() {
          return lastTimestampFramePosition;
        }
      }

      AudioTimestamp is defined as follows. It has two key fields, framePosition and nanoTime, both of which come from the HAL layer.

      android.media.AudioTimestamp
      /**
       * Structure that groups a position in frame units relative to an assumed audio stream,
       * together with the estimated time when that frame enters or leaves the audio
       * processing pipeline on that device. This can be used to coordinate events
       * and interactions with the external environment.
       *
       * The time is based on the implementation's best effort, using whatever knowledge
       * is available to the system, but cannot account for any delay unknown to the implementation.
       *
       * @see AudioTrack#getTimestamp AudioTrack.getTimestamp(AudioTimestamp)
       * @see AudioRecord#getTimestamp AudioRecord.getTimestamp(AudioTimestamp, int)
       */
      public final class AudioTimestamp
      {
        ...
        /**
         * Position in frames relative to start of an assumed audio stream.
         * When obtained through
         * {@link AudioTrack#getTimestamp AudioTrack.getTimestamp(AudioTimestamp)},
         * the low-order 32 bits of position is in wrapping frame units similar to
         * {@link AudioTrack#getPlaybackHeadPosition AudioTrack.getPlaybackHeadPosition()}.
         * A value taken from the HAL, representing the position of the frame that has just been
         * played, or that is already in the pipeline and about to be played.
         */
        public long framePosition;
        /**
         * Time associated with the frame in the audio pipeline.
         * When obtained through
         * {@link AudioTrack#getTimestamp AudioTrack.getTimestamp(AudioTimestamp)},
         * this is the estimated time when the frame was presented or is committed to be presented,
         * with a timebase of {@link #TIMEBASE_MONOTONIC}.
         * The time at which the frame at framePosition above was or will be played, expressed in
         * system time.
         */
        public long nanoTime;
      }

      AudioTrack.getTimestamp itself is defined as follows. Note what the comment says: the value returned by this method does not necessarily change on every call, and the comment also warns against calling it too frequently, otherwise there will be performance problems.

      android.media.AudioTrack#getTimestamp
      /**
       * Poll for a timestamp on demand.
       *
       * If you need to track timestamps during initial warmup or after a routing or mode change,
       * you should request a new timestamp periodically until the reported timestamps
       * show that the frame position is advancing, or until it becomes clear that
       * timestamps are unavailable for this route.
       *
       * After the clock is advancing at a stable rate,
       * query for a new timestamp approximately once every 10 seconds to once per minute.
       * Calling this method more often is inefficient.
       * It is also counter-productive to call this method more often than recommended,
       * because the short-term differences between successive timestamp reports are not meaningful.
       * If you need a high-resolution mapping between frame position and presentation time,
       * consider implementing that at application level, based on low-resolution timestamps.
       *
       * The audio data at the returned position may either already have been
       * presented, or may have not yet been presented but is committed to be presented.
       * It is not possible to request the time corresponding to a particular position,
       * or to request the (fractional) position corresponding to a particular time.
       * If you need such features, consider implementing them at application level.
       *
       * @param timestamp a reference to a non-null AudioTimestamp instance allocated
       *     and owned by caller.
       * @return true if a timestamp is available, or false if no timestamp is available.
       *     If a timestamp is available,
       *     the AudioTimestamp instance is filled in with a position in frame units, together
       *     with the estimated time when that frame was presented or is committed to
       *     be presented.
       *     In the case that no timestamp is available, any supplied instance is left unaltered.
       *     A timestamp may be temporarily unavailable while the audio clock is stabilizing,
       *     or during and immediately after a route change.
       *     A timestamp is permanently unavailable for a given route if the route does not support
       *     timestamps. In this case, the approximate frame position can be obtained
       *     using {@link #getPlaybackHeadPosition}.
       *     However, it may be useful to continue to query for
       *     timestamps occasionally, to recover after a route change.
       */
      // Add this text when the "on new timestamp" API is added:
      // Use if you need to get the most recent timestamp outside of the event callback handler.
      public boolean getTimestamp(AudioTimestamp timestamp)

      We can also dig into the framework to see how this method obtains framePosition and nanoTime; the code referenced below is from Android M.

      frameworks/av/media/libmedia/AudioTrack.cpp

       frameworks/av/media/libmedia/IAudioTrack.cpp

      frameworks/av/services/audioflinger/Tracks.cpp

      The call arrives over binder.


      Then getTimestamp is called on the Track inside PlaybackThread.

      Here, mPosition and mTime in the timestamp are both obtained from mLatchQ. Note that from Android 7.0 onwards mLatchD and mLatchQ are no longer used.

      After that the call goes down into the HAL; mLatchD's mTimestamp is what the HAL returns.
      hardware/mstar/audio/audio_hw_6_0/audio_hw.cpp

      2.2.3

      If the getPlaybackHeadPosition path is taken, the following method is called:

      com.google.android.exoplayer2.audio.AudioTrack.AudioTrackUtil#getPositionUs
      /**
       * Returns the duration of played media since reconfiguration, in microseconds.
       */
      public long getPositionUs() {
        return (getPlaybackHeadPosition() * C.MICROS_PER_SECOND) / sampleRate;
      }

      It computes the result from the return value of the method below.

      com.google.android.exoplayer2.audio.AudioTrack.AudioTrackUtil#getPlaybackHeadPosition
      /**
       * {@link android.media.AudioTrack#getPlaybackHeadPosition()} returns a value intended to be
       * interpreted as an unsigned 32 bit integer, which also wraps around periodically. This method
       * returns the playback head position as a long that will only wrap around if the value exceeds
       * {@link Long#MAX_VALUE} (which in practice will never happen).
       *
       * @return The playback head position, in frames.
       */
      public long getPlaybackHeadPosition() {
        ...
        long rawPlaybackHeadPosition = 0xFFFFFFFFL & audioTrack.getPlaybackHeadPosition();
        ...
        if (lastRawPlaybackHeadPosition > rawPlaybackHeadPosition) {
          // The value must have wrapped around.
          rawPlaybackHeadWrapCount++;
        }
        lastRawPlaybackHeadPosition = rawPlaybackHeadPosition;
        return rawPlaybackHeadPosition + (rawPlaybackHeadWrapCount << 32);
      }

      What it actually calls is:

      android.media.AudioTrack#getPlaybackHeadPosition
      /**
       * Returns the playback head position expressed in frames.
       * Though the "int" type is signed 32-bits, the value should be reinterpreted as if it is
       * unsigned 32-bits. That is, the next position after 0x7FFFFFFF is (int) 0x80000000.
       * This is a continuously advancing counter. It will wrap (overflow) periodically,
       * for example approximately once every 27:03:11 hours:minutes:seconds at 44.1 kHz.
       * It is reset to zero by {@link #flush()}, {@link #reloadStaticData()}, and {@link #stop()}.
       * If the track's creation mode is {@link #MODE_STATIC}, the return value indicates
       * the total number of frames played since reset,
       * not the current offset within the buffer.
       */
      public int getPlaybackHeadPosition()

      It returns the position recorded in the shared memory inside AudioFlinger. Following the implementation in the framework:
      /frameworks/av/media/libmedia/AudioTrack.cpp

      status_t AudioTrack::getPosition(uint32_t *position)
      {
          if (position == NULL) {
              return BAD_VALUE;
          }
          AutoMutex lock(mLock);
          if (isOffloadedOrDirect_l()) {
              ...
          } else {
              if (mCblk->mFlags & CBLK_INVALID) {
                  (void) restoreTrack_l("getPosition");
                  // FIXME: for compatibility with the Java API we ignore the restoreTrack_l()
                  // error here (e.g. DEAD_OBJECT) and return OK with the last recorded server position.
              }
              // IAudioTrack::stop() isn't synchronous; we don't know when presentation completes
              *position = (mState == STATE_STOPPED || mState == STATE_FLUSHED) ?
                      0 : updateAndGetPosition_l();
          }
          ....

      uint32_t AudioTrack::updateAndGetPosition_l()
      {
          // This is the sole place to read server consumed frames
          uint32_t newServer = mProxy->getPosition();
          int32_t delta = newServer - mServer;
          mServer = newServer;
          // TODO There is controversy about whether there can be "negative jitter" in server position.
          //      This should be investigated further, and if possible, it should be addressed.
          //      A more definite failure mode is infrequent polling by client.
          //      One could call (void)getPosition_l() in releaseBuffer(),
          //      so mReleased and mPosition are always lock-step as best possible.
          //      That should ensure delta never goes negative for infrequent polling
          //      unless the server has more than 2^31 frames in its buffer,
          //      in which case the use of uint32_t for these counters has bigger issues.
          if (delta < 0) {
              ALOGE("detected illegal retrograde motion by the server: mServer advanced by %d", delta);
              delta = 0;
          }
          return mPosition += (uint32_t) delta;
      }

      /frameworks/av/include/private/media/AudioTrackShared.h
      // Proxy used by AudioTrack client, which also includes AudioFlinger::PlaybackThread::OutputTrack
      class AudioTrackClientProxy : public ClientProxy
      // Proxy seen by AudioTrack client and AudioRecord client
      class ClientProxy : public Proxy {
          ...
          size_t getPosition() {
              return mEpoch + mCblk->mServer;
          }
          ...

      // Important: do not add any virtual methods, including ~
      struct audio_track_cblk_t
      {
          ...
          uint32_t mServer;   // Number of filled frames consumed by server (mIsOut),
                              // or filled frames provided by server (!mIsOut).
                              // It is updated asynchronously by server without a barrier.
                              // The value should be used
                              // "for entertainment purposes only",
                              // which means don't make important decisions based on it.
          ...
      2.2.4

      If the getPlaybackHeadPosition path is used, the latency also has to be subtracted from the position.

      android.media.AudioTrack#getLatency
      /**
       * Returns this track's estimated latency in milliseconds. This includes the latency due
       * to AudioTrack buffer size, AudioMixer (if any) and audio hardware driver.
       * The value returned by getLatency consists of three parts: the AudioTrack buffer size, the
       * delay introduced by the AudioMixer, and the delay introduced by the audio hardware driver.
       * DO NOT UNHIDE. The existing approach for doing A/V sync has too many problems. We need
       * a better solution.
       * The comment also notes that this return value can be problematic, which is probably why the
       * AudioTimestamp class was added after API 19.
       * @hide
       */
      public int getLatency() {
        return native_get_latency();
      }

      It calls directly into JNI:

      frameworks/base/core/jni/android_media_AudioTrack.cpp

       

      The two code snippets referenced above compute the frame count. If the src and dst sample rates are both 48 kHz and the playback speed is the default 1, dstFramesRequired equals afFrameCount, which is 1024, so:

      frameCount = (1024 * 1 + 1 + 1) * 2 = 2052

      2.2.5

      startMediaTimeUs is computed in handleBuffer; positionUs is then added on top of it.

      com.google.android.exoplayer2.audio.AudioTrack#handleBuffer
      public boolean handleBuffer(ByteBuffer buffer, long presentationTimeUs)
          throws InitializationException, WriteException {
        if (startMediaTimeState == START_NOT_SET) {
          // The initial state of startMediaTimeState is START_NOT_SET: assign the first incoming
          // audio pts to startMediaTimeUs and change startMediaTimeState to IN_SYNC.
          startMediaTimeUs = Math.max(0, presentationTimeUs);
          startMediaTimeState = START_IN_SYNC;
        } else {
          // Sanity check that presentationTimeUs is consistent with the expected value.
          // Here the frame count corresponding to the buffers previously handed to the AudioTrack is
          // computed (see getSubmittedFrames below); adding startMediaTimeUs to it gives an estimate
          // of the pts of the incoming audio.
          long expectedPresentationTimeUs = startMediaTimeUs
              + framesToDurationUs(getSubmittedFrames());
          if (startMediaTimeState == START_IN_SYNC
              && Math.abs(expectedPresentationTimeUs - presentationTimeUs) > 200000
              && !needsWrongSampleRateWorkarounds()) {
            // If the estimated pts and the actual incoming pts differ by more than 200 ms, a
            // discontinuity is assumed and the state changes to NEED_SYNC.
            Log.e(TAG, "Discontinuity detected [expected " + expectedPresentationTimeUs + ", got "
                + presentationTimeUs + "]");
            startMediaTimeState = START_NEED_SYNC;
          }
          if (startMediaTimeState == START_NEED_SYNC) {
            // Adjust startMediaTimeUs to be consistent with the current buffer's start time and the
            // number of bytes submitted.
            // On a discontinuity, realign startMediaTimeUs to the actual start time of the current
            // buffer and change the state back to IN_SYNC.
            startMediaTimeUs += (presentationTimeUs - expectedPresentationTimeUs);
            startMediaTimeState = START_IN_SYNC;
            listener.onPositionDiscontinuity();
          }
          if (passthrough) {
            submittedEncodedFrames += framesPerEncodedSample;
          } else {
            // submittedPcmBytes is updated here.
            submittedPcmBytes += buffer.remaining();
          }
          inputBuffer = buffer;
        }
        if (passthrough) {
          // Passthrough buffers are not processed.
          writeBuffer(inputBuffer, presentationTimeUs);
        } else {
          processBuffers(presentationTimeUs);
        }
        if (!inputBuffer.hasRemaining()) {
          inputBuffer = null;
          return true;
        }
        return false;
      }

      private long getSubmittedFrames() {
        return passthrough ? submittedEncodedFrames : (submittedPcmBytes / pcmFrameSize);
      }

      2.2.6

      android.media.AudioTrack#getMinBufferSize
      /**
       * Returns the estimated minimum buffer size required for an AudioTrack
       * object to be created in the {@link #MODE_STREAM} mode.
       * The size is an estimate because it does not consider either the route or the sink,
       * since neither is known yet. Note that this size doesn't
       * guarantee a smooth playback under load, and higher values should be chosen according to
       * the expected frequency at which the buffer will be refilled with additional data to play.
       * For example, if you intend to dynamically set the source sample rate of an AudioTrack
       * to a higher value than the initial source sample rate, be sure to configure the buffer size
       * based on the highest planned sample rate.
       * @param sampleRateInHz the source sample rate expressed in Hz.
       *     {@link AudioFormat#SAMPLE_RATE_UNSPECIFIED} is not permitted.
       * @param channelConfig describes the configuration of the audio channels.
       *     See {@link AudioFormat#CHANNEL_OUT_MONO} and
       *     {@link AudioFormat#CHANNEL_OUT_STEREO}
       * @param audioFormat the format in which the audio data is represented.
       *     See {@link AudioFormat#ENCODING_PCM_16BIT} and
       *     {@link AudioFormat#ENCODING_PCM_8BIT},
       *     and {@link AudioFormat#ENCODING_PCM_FLOAT}.
       * @return {@link #ERROR_BAD_VALUE} if an invalid parameter was passed,
       *     or {@link #ERROR} if unable to query for output properties,
       *     or the minimum buffer size expressed in bytes.
       */
      static public int getMinBufferSize(int sampleRateInHz, int channelConfig, int audioFormat)

      Original article: https://blog.csdn.net/m0_60259116/article/details/126956392