• FFmpeg Source Code Brief Analysis - Miscellaneous - libswscale's sws_scale()


    Reference links

    libswscale's sws_scale()

    • This post analyzes sws_scale(), a function of libswscale, FFmpeg's image-processing library (scaling, and YUV/RGB pixel-format conversion).
    • libswscale mainly operates on picture pixel data: it converts between pixel formats and resizes (stretches/shrinks) images.
    • In typical use, only three of its functions are needed:
      • sws_getContext(): initialize an SwsContext.
      • sws_scale(): process the image data.
      • sws_freeContext(): free an SwsContext.

    The libswscale processing flow

    • libswscale's pixel-processing flow can be summarized by a flow diagram (not reproduced here).

    • libswscale has two main data paths: unscaled and scaled.
    • The unscaled path handles pixel data that needs no resizing (a comparatively special case); the scaled path handles pixel data that does.
    • The unscaled path only converts the pixel format, while the scaled path both converts the pixel format and resizes the image.
    • The scaled path can be broken into the following stages:
      • XXX to YUV converter: first convert the input pixel data to 8-bit YUV;
      • Horizontal scaler: scale the image horizontally and widen to 15-bit YUV;
      • Vertical scaler: scale the image vertically;
      • Output converter: convert to the output pixel format.

    sws_scale()

    • sws_scale() is the function that converts pixel data. Its declaration lives in libswscale/swscale.h, as shown below.
    /**
     * Scale the image slice in srcSlice and put the resulting scaled
     * slice in the image in dst. A slice is a sequence of consecutive
     * rows in an image.
     *
     * Slices have to be provided in sequential order, either in
     * top-bottom or bottom-top order. If slices are provided in
     * non-sequential order the behavior of the function is undefined.
     *
     * @param c         the scaling context previously created with
     *                  sws_getContext()
     * @param srcSlice  the array containing the pointers to the planes of
     *                  the source slice
     * @param srcStride the array containing the strides for each plane of
     *                  the source image
     * @param srcSliceY the position in the source image of the slice to
     *                  process, that is the number (counted starting from
     *                  zero) in the image of the first row of the slice
     * @param srcSliceH the height of the source slice, that is the number
     *                  of rows in the slice
     * @param dst       the array containing the pointers to the planes of
     *                  the destination image
     * @param dstStride the array containing the strides for each plane of
     *                  the destination image
     * @return          the height of the output slice
     */
    int sws_scale(struct SwsContext *c, const uint8_t *const srcSlice[],
                  const int srcStride[], int srcSliceY, int srcSliceH,
                  uint8_t *const dst[], const int dstStride[]);
    • sws_scale()'s definition lives in libswscale/swscale.c, as shown below.
    /**
     * swscale wrapper, so we don't need to export the SwsContext.
     * Assumes planar YUV to be in YUV order instead of YVU.
     */
    int attribute_align_arg sws_scale(struct SwsContext *c,
                                      const uint8_t * const srcSlice[],
                                      const int srcStride[], int srcSliceY,
                                      int srcSliceH, uint8_t *const dst[],
                                      const int dstStride[])
    {
        if (c->nb_slice_ctx)
            c = c->slice_ctx[0];

        return scale_internal(c, srcSlice, srcStride, srcSliceY, srcSliceH,
                              dst, dstStride, 0, c->dstH);
    }
    • sws_scale() internally calls scale_internal(), which contains most of the actual logic of the former sws_scale() body.
    static int scale_internal(SwsContext *c,
                              const uint8_t * const srcSlice[], const int srcStride[],
                              int srcSliceY, int srcSliceH,
                              uint8_t *const dstSlice[], const int dstStride[],
                              int dstSliceY, int dstSliceH)
    {
        const int scale_dst = dstSliceY > 0 || dstSliceH < c->dstH;
        const int frame_start = scale_dst || !c->sliceDir;
        int i, ret;
        const uint8_t *src2[4];
        uint8_t *dst2[4];
        int macro_height_src = isBayer(c->srcFormat) ? 2 : (1 << c->chrSrcVSubSample);
        int macro_height_dst = isBayer(c->dstFormat) ? 2 : (1 << c->chrDstVSubSample);
        // copy strides, so they can safely be modified
        int srcStride2[4];
        int dstStride2[4];
        int srcSliceY_internal = srcSliceY;

        if (!srcStride || !dstStride || !dstSlice || !srcSlice) {
            av_log(c, AV_LOG_ERROR, "One of the input parameters to sws_scale() is NULL, please check the calling code\n");
            return AVERROR(EINVAL);
        }

        if ((srcSliceY  & (macro_height_src - 1)) ||
            ((srcSliceH & (macro_height_src - 1)) && srcSliceY + srcSliceH != c->srcH) ||
            srcSliceY + srcSliceH > c->srcH) {
            av_log(c, AV_LOG_ERROR, "Slice parameters %d, %d are invalid\n", srcSliceY, srcSliceH);
            return AVERROR(EINVAL);
        }

        if ((dstSliceY  & (macro_height_dst - 1)) ||
            ((dstSliceH & (macro_height_dst - 1)) && dstSliceY + dstSliceH != c->dstH) ||
            dstSliceY + dstSliceH > c->dstH) {
            av_log(c, AV_LOG_ERROR, "Slice parameters %d, %d are invalid\n", dstSliceY, dstSliceH);
            return AVERROR(EINVAL);
        }

        if (!check_image_pointers(srcSlice, c->srcFormat, srcStride)) {
            av_log(c, AV_LOG_ERROR, "bad src image pointers\n");
            return AVERROR(EINVAL);
        }
        if (!check_image_pointers((const uint8_t* const*)dstSlice, c->dstFormat, dstStride)) {
            av_log(c, AV_LOG_ERROR, "bad dst image pointers\n");
            return AVERROR(EINVAL);
        }

        // do not mess up sliceDir if we have a "trailing" 0-size slice
        if (srcSliceH == 0)
            return 0;

        if (c->gamma_flag && c->cascaded_context[0])
            return scale_gamma(c, srcSlice, srcStride, srcSliceY, srcSliceH,
                               dstSlice, dstStride, dstSliceY, dstSliceH);

        if (c->cascaded_context[0] && srcSliceY == 0 && srcSliceH == c->cascaded_context[0]->srcH)
            return scale_cascaded(c, srcSlice, srcStride, srcSliceY, srcSliceH,
                                  dstSlice, dstStride, dstSliceY, dstSliceH);

        if (!srcSliceY && (c->flags & SWS_BITEXACT) && c->dither == SWS_DITHER_ED && c->dither_error[0])
            for (i = 0; i < 4; i++)
                memset(c->dither_error[i], 0, sizeof(c->dither_error[0][0]) * (c->dstW+2));

        if (usePal(c->srcFormat))
            update_palette(c, (const uint32_t *)srcSlice[1]);

        memcpy(src2,       srcSlice,  sizeof(src2));
        memcpy(dst2,       dstSlice,  sizeof(dst2));
        memcpy(srcStride2, srcStride, sizeof(srcStride2));
        memcpy(dstStride2, dstStride, sizeof(dstStride2));

        if (frame_start && !scale_dst) {
            if (srcSliceY != 0 && srcSliceY + srcSliceH != c->srcH) {
                av_log(c, AV_LOG_ERROR, "Slices start in the middle!\n");
                return AVERROR(EINVAL);
            }
            c->sliceDir = (srcSliceY == 0) ? 1 : -1;
        } else if (scale_dst)
            c->sliceDir = 1;

        if (c->src0Alpha && !c->dst0Alpha && isALPHA(c->dstFormat)) {
            uint8_t *base;
            int x,y;

            av_fast_malloc(&c->rgb0_scratch, &c->rgb0_scratch_allocated,
                           FFABS(srcStride[0]) * srcSliceH + 32);
            if (!c->rgb0_scratch)
                return AVERROR(ENOMEM);

            base = srcStride[0] < 0 ? c->rgb0_scratch - srcStride[0] * (srcSliceH-1) :
                                      c->rgb0_scratch;
            for (y=0; y<srcSliceH; y++){
                memcpy(base + srcStride[0]*y, src2[0] + srcStride[0]*y, 4*c->srcW);
                for (x=c->src0Alpha-1; x<4*c->srcW; x+=4) {
                    base[ srcStride[0]*y + x] = 0xFF;
                }
            }
            src2[0] = base;
        }

        if (c->srcXYZ && !(c->dstXYZ && c->srcW==c->dstW && c->srcH==c->dstH)) {
            uint8_t *base;

            av_fast_malloc(&c->xyz_scratch, &c->xyz_scratch_allocated,
                           FFABS(srcStride[0]) * srcSliceH + 32);
            if (!c->xyz_scratch)
                return AVERROR(ENOMEM);

            base = srcStride[0] < 0 ? c->xyz_scratch - srcStride[0] * (srcSliceH-1) :
                                      c->xyz_scratch;

            xyz12Torgb48(c, (uint16_t*)base, (const uint16_t*)src2[0], srcStride[0]/2, srcSliceH);
            src2[0] = base;
        }

        if (c->sliceDir != 1) {
            // slices go from bottom to top => we flip the image internally
            for (i=0; i<4; i++) {
                srcStride2[i] *= -1;
                dstStride2[i] *= -1;
            }

            src2[0] += (srcSliceH - 1) * srcStride[0];
            if (!usePal(c->srcFormat))
                src2[1] += ((srcSliceH >> c->chrSrcVSubSample) - 1) * srcStride[1];
            src2[2] += ((srcSliceH >> c->chrSrcVSubSample) - 1) * srcStride[2];
            src2[3] += (srcSliceH - 1) * srcStride[3];
            dst2[0] += ( c->dstH                         - 1) * dstStride[0];
            dst2[1] += ((c->dstH >> c->chrDstVSubSample) - 1) * dstStride[1];
            dst2[2] += ((c->dstH >> c->chrDstVSubSample) - 1) * dstStride[2];
            dst2[3] += ( c->dstH                         - 1) * dstStride[3];

            srcSliceY_internal = c->srcH-srcSliceY-srcSliceH;
        }
        reset_ptr(src2, c->srcFormat);
        reset_ptr((void*)dst2, c->dstFormat);

        if (c->convert_unscaled) {
            int offset  = srcSliceY_internal;
            int slice_h = srcSliceH;

            // for dst slice scaling, offset the pointers to match the unscaled API
            if (scale_dst) {
                av_assert0(offset == 0);
                for (i = 0; i < 4 && src2[i]; i++) {
                    if (!src2[i] || (i > 0 && usePal(c->srcFormat)))
                        break;
                    src2[i] += (dstSliceY >> ((i == 1 || i == 2) ? c->chrSrcVSubSample : 0)) * srcStride2[i];
                }

                for (i = 0; i < 4 && dst2[i]; i++) {
                    if (!dst2[i] || (i > 0 && usePal(c->dstFormat)))
                        break;
                    dst2[i] -= (dstSliceY >> ((i == 1 || i == 2) ? c->chrDstVSubSample : 0)) * dstStride2[i];
                }
                offset  = dstSliceY;
                slice_h = dstSliceH;
            }

            ret = c->convert_unscaled(c, src2, srcStride2, offset, slice_h,
                                      dst2, dstStride2);
            if (scale_dst)
                dst2[0] += dstSliceY * dstStride2[0];
        } else {
            ret = swscale(c, src2, srcStride2, srcSliceY_internal, srcSliceH,
                          dst2, dstStride2, dstSliceY, dstSliceH);
        }

        if (c->dstXYZ && !(c->srcXYZ && c->srcW==c->dstW && c->srcH==c->dstH)) {
            uint16_t *dst16;

            if (scale_dst) {
                dst16 = (uint16_t *)dst2[0];
            } else {
                int dstY = c->dstY ? c->dstY : srcSliceY + srcSliceH;

                av_assert0(dstY >= ret);
                av_assert0(ret >= 0);
                av_assert0(c->dstH >= dstY);
                dst16 = (uint16_t*)(dst2[0] + (dstY - ret) * dstStride2[0]);
            }

            /* replace on the same data */
            rgb48Toxyz12(c, dst16, dst16, dstStride2[0]/2, ret);
        }

        /* reset slice direction at end of frame */
        if ((srcSliceY_internal + srcSliceH == c->srcH) || scale_dst)
            c->sliceDir = 0;

        return ret;
    }

    • From the definitions above, sws_scale() is a thin wrapper: almost all of the work happens in scale_internal(), whose central call is either c->convert_unscaled (the unscaled path) or the static swscale() function (note: there is no "_" in that name, so it is distinct from the public sws_scale()).
    • Besides that call, scale_internal() performs a number of compatibility-oriented fix-ups.
    • Its main steps are listed below.

    1. Validate the input image parameters.

    • This step first checks that none of the input/output pointers are NULL, then calls check_image_pointers() to verify that the input and output image buffers are properly allocated.
    • check_image_pointers() is defined as follows.
    static int check_image_pointers(const uint8_t * const data[4], enum AVPixelFormat pix_fmt,
                                    const int linesizes[4])
    {
        const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(pix_fmt);
        int i;

        av_assert2(desc);

        for (i = 0; i < 4; i++) {
            int plane = desc->comp[i].plane;
            if (!data[plane] || !linesizes[plane])
                return 0;
        }

        return 1;
    }
    • As the definition shows, for the given pixel format check_image_pointers() returns 0 if any plane that the format requires has a NULL data pointer or a zero linesize, and 1 otherwise.

    2. If the input pixel data uses a palette, do the corresponding handling.

    • Whether a format is palettized is decided by usePal().
    • usePal() is defined as follows.
    static av_always_inline int usePal(enum AVPixelFormat pix_fmt)
    {
        switch (pix_fmt) {
        case AV_PIX_FMT_PAL8:
        case AV_PIX_FMT_BGR4_BYTE:
        case AV_PIX_FMT_BGR8:
        case AV_PIX_FMT_GRAY8:
        case AV_PIX_FMT_RGB4_BYTE:
        case AV_PIX_FMT_RGB8:
            return 1;
        default:
            return 0;
        }
    }

    3. Handle a few other special formats, e.g. alpha and XYZ (not examined in detail here).
    4. If the input image is scanned bottom-to-top (top-to-bottom is the usual case), flip it internally.
    5. Call the swscale() referenced through SwsContext.
    The swscale() inside SwsContext

    • The swscale member is of type SwsFunc, i.e. a function pointer, and it is the core of the whole library. Calling the public sws_scale() ultimately invokes this pointer stored in SwsContext (the external API and the internal pointer share the name "swscale", but they are not the same thing).
    • The SwsFunc type is defined as:
    typedef int (*SwsFunc)(struct SwsContext *context, const uint8_t *src[],
                           int srcStride[], int srcSliceY, int srcSliceH,
                           uint8_t *dst[], int dstStride[]);
    • Note that SwsFunc's parameter list essentially matches that of libswscale's public sws_scale() interface.
    • Inside libswscale this pointer ends up in one of two states:
      • 1. When the image is not resized, it points to a dedicated pixel-conversion function.
      • 2. When the image is resized, it points to the static swscale() function.
    • When sws_getContext() initializes the SwsContext, its helper sws_init_context() assigns the conversion function: if no resizing is needed, ff_get_unscaled_swscale() installs a dedicated converter; if resizing is needed, the generic scaler is installed.
    • The two cases are examined below.

    No resizing: dedicated pixel-conversion functions

    • When the image is not resized, ff_get_unscaled_swscale() is called to install the converter (stored in the SwsContext's convert_unscaled field in current FFmpeg).
    • This function was covered in the previous post; here is a recap.

    ff_get_unscaled_swscale()

    • ff_get_unscaled_swscale() is defined as follows.
    void ff_get_unscaled_swscale(SwsContext *c)
    {
        const enum AVPixelFormat srcFormat = c->srcFormat;
        const enum AVPixelFormat dstFormat = c->dstFormat;
        const int flags = c->flags;
        const int dstH = c->dstH;
        const int dstW = c->dstW;
        int needsDither;

        needsDither = isAnyRGB(dstFormat) &&
                c->dstFormatBpp < 24 &&
               (c->dstFormatBpp < c->srcFormatBpp || (!isAnyRGB(srcFormat)));

        /* yv12_to_nv12 */
        if ((srcFormat == AV_PIX_FMT_YUV420P || srcFormat == AV_PIX_FMT_YUVA420P) &&
            (dstFormat == AV_PIX_FMT_NV12 || dstFormat == AV_PIX_FMT_NV21)) {
            c->convert_unscaled = planarToNv12Wrapper;
        }
        /* yv24_to_nv24 */
        if ((srcFormat == AV_PIX_FMT_YUV444P || srcFormat == AV_PIX_FMT_YUVA444P) &&
            (dstFormat == AV_PIX_FMT_NV24 || dstFormat == AV_PIX_FMT_NV42)) {
            c->convert_unscaled = planarToNv24Wrapper;
        }
        /* nv12_to_yv12 */
        if (dstFormat == AV_PIX_FMT_YUV420P &&
            (srcFormat == AV_PIX_FMT_NV12 || srcFormat == AV_PIX_FMT_NV21)) {
            c->convert_unscaled = nv12ToPlanarWrapper;
        }
        /* nv24_to_yv24 */
        if (dstFormat == AV_PIX_FMT_YUV444P &&
            (srcFormat == AV_PIX_FMT_NV24 || srcFormat == AV_PIX_FMT_NV42)) {
            c->convert_unscaled = nv24ToPlanarWrapper;
        }
        /* yuv2bgr */
        if ((srcFormat == AV_PIX_FMT_YUV420P || srcFormat == AV_PIX_FMT_YUV422P ||
             srcFormat == AV_PIX_FMT_YUVA420P) && isAnyRGB(dstFormat) &&
            !(flags & SWS_ACCURATE_RND) && (c->dither == SWS_DITHER_BAYER || c->dither == SWS_DITHER_AUTO) && !(dstH & 1)) {
            c->convert_unscaled = ff_yuv2rgb_get_func_ptr(c);
            c->dst_slice_align = 2;
        }
        /* yuv420p1x_to_p01x */
        if ((srcFormat == AV_PIX_FMT_YUV420P10 || srcFormat == AV_PIX_FMT_YUVA420P10 ||
             srcFormat == AV_PIX_FMT_YUV420P12 ||
             srcFormat == AV_PIX_FMT_YUV420P14 ||
             srcFormat == AV_PIX_FMT_YUV420P16 || srcFormat == AV_PIX_FMT_YUVA420P16) &&
            (dstFormat == AV_PIX_FMT_P010 || dstFormat == AV_PIX_FMT_P016)) {
            c->convert_unscaled = planarToP01xWrapper;
        }
        /* yuv420p_to_p01xle */
        if ((srcFormat == AV_PIX_FMT_YUV420P || srcFormat == AV_PIX_FMT_YUVA420P) &&
            (dstFormat == AV_PIX_FMT_P010LE || dstFormat == AV_PIX_FMT_P016LE)) {
            c->convert_unscaled = planar8ToP01xleWrapper;
        }

        if (srcFormat == AV_PIX_FMT_YUV410P && !(dstH & 3) &&
            (dstFormat == AV_PIX_FMT_YUV420P || dstFormat == AV_PIX_FMT_YUVA420P) &&
            !(flags & SWS_BITEXACT)) {
            c->convert_unscaled = yvu9ToYv12Wrapper;
            c->dst_slice_align = 4;
        }

        /* bgr24toYV12 */
        if (srcFormat == AV_PIX_FMT_BGR24 &&
            (dstFormat == AV_PIX_FMT_YUV420P || dstFormat == AV_PIX_FMT_YUVA420P) &&
            !(flags & SWS_ACCURATE_RND) && !(dstW&1))
            c->convert_unscaled = bgr24ToYv12Wrapper;

        /* RGB/BGR -> RGB/BGR (no dither needed forms) */
        if (isAnyRGB(srcFormat) && isAnyRGB(dstFormat) && findRgbConvFn(c)
            && (!needsDither || (c->flags&(SWS_FAST_BILINEAR|SWS_POINT))))
            c->convert_unscaled = rgbToRgbWrapper;

        /* RGB to planar RGB */
        if ((srcFormat == AV_PIX_FMT_GBRP && dstFormat == AV_PIX_FMT_GBRAP) ||
            (srcFormat == AV_PIX_FMT_GBRAP && dstFormat == AV_PIX_FMT_GBRP))
            c->convert_unscaled = planarRgbToplanarRgbWrapper;

    #define isByteRGB(f) ( \
            f == AV_PIX_FMT_RGB32   || \
            f == AV_PIX_FMT_RGB32_1 || \
            f == AV_PIX_FMT_RGB24   || \
            f == AV_PIX_FMT_BGR32   || \
            f == AV_PIX_FMT_BGR32_1 || \
            f == AV_PIX_FMT_BGR24)

        if (srcFormat == AV_PIX_FMT_GBRP && isPlanar(srcFormat) && isByteRGB(dstFormat))
            c->convert_unscaled = planarRgbToRgbWrapper;

        if (srcFormat == AV_PIX_FMT_GBRAP && isByteRGB(dstFormat))
            c->convert_unscaled = planarRgbaToRgbWrapper;

        if ((srcFormat == AV_PIX_FMT_RGB48LE  || srcFormat == AV_PIX_FMT_RGB48BE  ||
             srcFormat == AV_PIX_FMT_BGR48LE  || srcFormat == AV_PIX_FMT_BGR48BE  ||
             srcFormat == AV_PIX_FMT_RGBA64LE || srcFormat == AV_PIX_FMT_RGBA64BE ||
             srcFormat == AV_PIX_FMT_BGRA64LE || srcFormat == AV_PIX_FMT_BGRA64BE) &&
            (dstFormat == AV_PIX_FMT_GBRP9LE   || dstFormat == AV_PIX_FMT_GBRP9BE   ||
             dstFormat == AV_PIX_FMT_GBRP10LE  || dstFormat == AV_PIX_FMT_GBRP10BE  ||
             dstFormat == AV_PIX_FMT_GBRP12LE  || dstFormat == AV_PIX_FMT_GBRP12BE  ||
             dstFormat == AV_PIX_FMT_GBRP14LE  || dstFormat == AV_PIX_FMT_GBRP14BE  ||
             dstFormat == AV_PIX_FMT_GBRP16LE  || dstFormat == AV_PIX_FMT_GBRP16BE  ||
             dstFormat == AV_PIX_FMT_GBRAP10LE || dstFormat == AV_PIX_FMT_GBRAP10BE ||
             dstFormat == AV_PIX_FMT_GBRAP12LE || dstFormat == AV_PIX_FMT_GBRAP12BE ||
             dstFormat == AV_PIX_FMT_GBRAP16LE || dstFormat == AV_PIX_FMT_GBRAP16BE ))
            c->convert_unscaled = Rgb16ToPlanarRgb16Wrapper;

        if ((srcFormat == AV_PIX_FMT_GBRP9LE   || srcFormat == AV_PIX_FMT_GBRP9BE   ||
             srcFormat == AV_PIX_FMT_GBRP16LE  || srcFormat == AV_PIX_FMT_GBRP16BE  ||
             srcFormat == AV_PIX_FMT_GBRP10LE  || srcFormat == AV_PIX_FMT_GBRP10BE  ||
             srcFormat == AV_PIX_FMT_GBRP12LE  || srcFormat == AV_PIX_FMT_GBRP12BE  ||
             srcFormat == AV_PIX_FMT_GBRP14LE  || srcFormat == AV_PIX_FMT_GBRP14BE  ||
             srcFormat == AV_PIX_FMT_GBRAP10LE || srcFormat == AV_PIX_FMT_GBRAP10BE ||
             srcFormat == AV_PIX_FMT_GBRAP12LE || srcFormat == AV_PIX_FMT_GBRAP12BE ||
             srcFormat == AV_PIX_FMT_GBRAP16LE || srcFormat == AV_PIX_FMT_GBRAP16BE) &&
            (dstFormat == AV_PIX_FMT_RGB48LE  || dstFormat == AV_PIX_FMT_RGB48BE  ||
             dstFormat == AV_PIX_FMT_BGR48LE  || dstFormat == AV_PIX_FMT_BGR48BE  ||
             dstFormat == AV_PIX_FMT_RGBA64LE || dstFormat == AV_PIX_FMT_RGBA64BE ||
             dstFormat == AV_PIX_FMT_BGRA64LE || dstFormat == AV_PIX_FMT_BGRA64BE))
            c->convert_unscaled = planarRgb16ToRgb16Wrapper;

        if (av_pix_fmt_desc_get(srcFormat)->comp[0].depth == 8 &&
            isPackedRGB(srcFormat) && dstFormat == AV_PIX_FMT_GBRP)
            c->convert_unscaled = rgbToPlanarRgbWrapper;

        if (isBayer(srcFormat)) {
            if (dstFormat == AV_PIX_FMT_RGB24)
                c->convert_unscaled = bayer_to_rgb24_wrapper;
            else if (dstFormat == AV_PIX_FMT_RGB48)
                c->convert_unscaled = bayer_to_rgb48_wrapper;
            else if (dstFormat == AV_PIX_FMT_YUV420P)
                c->convert_unscaled = bayer_to_yv12_wrapper;
            else if (!isBayer(dstFormat)) {
                av_log(c, AV_LOG_ERROR, "unsupported bayer conversion\n");
                av_assert0(0);
            }
        }

        /* bswap 16 bits per pixel/component packed formats */
        if (IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BAYER_BGGR16) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BAYER_RGGB16) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BAYER_GBRG16) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BAYER_GRBG16) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BGR444) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BGR48)  ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BGR555) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BGR565) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BGRA64) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GRAY9)  ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GRAY10) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GRAY12) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GRAY14) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GRAY16) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YA16)   ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_AYUV64) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRP9)  ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRP10) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRP12) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRP14) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRP16) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRAP10) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRAP12) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRAP16) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_RGB444) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_RGB48)  ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_RGB555) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_RGB565) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_RGBA64) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_XYZ12)  ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV420P9)  ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV420P10) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV420P12) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV420P14) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV420P16) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV422P9)  ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV422P10) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV422P12) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV422P14) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV422P16) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV440P10) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV440P12) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV444P9)  ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV444P10) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV444P12) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV444P14) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV444P16))
            c->convert_unscaled = bswap_16bpc;

        /* bswap 32 bits per pixel/component formats */
        if (IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRPF32) ||
            IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRAPF32))
            c->convert_unscaled = bswap_32bpc;

        if (usePal(srcFormat) && isByteRGB(dstFormat))
            c->convert_unscaled = palToRgbWrapper;

        if (srcFormat == AV_PIX_FMT_YUV422P) {
            if (dstFormat == AV_PIX_FMT_YUYV422)
                c->convert_unscaled = yuv422pToYuy2Wrapper;
            else if (dstFormat == AV_PIX_FMT_UYVY422)
                c->convert_unscaled = yuv422pToUyvyWrapper;
        }

        /* uint Y to float Y */
        if (srcFormat == AV_PIX_FMT_GRAY8 && dstFormat == AV_PIX_FMT_GRAYF32){
            c->convert_unscaled = uint_y_to_float_y_wrapper;
        }

        /* float Y to uint Y */
        if (srcFormat == AV_PIX_FMT_GRAYF32 && dstFormat == AV_PIX_FMT_GRAY8){
            c->convert_unscaled = float_y_to_uint_y_wrapper;
        }

        /* LQ converters if -sws 0 or -sws 4*/
        if (c->flags&(SWS_FAST_BILINEAR|SWS_POINT)) {
            /* yv12_to_yuy2 */
            if (srcFormat == AV_PIX_FMT_YUV420P || srcFormat == AV_PIX_FMT_YUVA420P) {
                if (dstFormat == AV_PIX_FMT_YUYV422)
                    c->convert_unscaled = planarToYuy2Wrapper;
                else if (dstFormat == AV_PIX_FMT_UYVY422)
                    c->convert_unscaled = planarToUyvyWrapper;
            }
        }
        if (srcFormat == AV_PIX_FMT_YUYV422 &&
            (dstFormat == AV_PIX_FMT_YUV420P || dstFormat == AV_PIX_FMT_YUVA420P))
            c->convert_unscaled = yuyvToYuv420Wrapper;
        if (srcFormat == AV_PIX_FMT_UYVY422 &&
            (dstFormat == AV_PIX_FMT_YUV420P || dstFormat == AV_PIX_FMT_YUVA420P))
            c->convert_unscaled = uyvyToYuv420Wrapper;
        if (srcFormat == AV_PIX_FMT_YUYV422 && dstFormat == AV_PIX_FMT_YUV422P)
            c->convert_unscaled = yuyvToYuv422Wrapper;
        if (srcFormat == AV_PIX_FMT_UYVY422 && dstFormat == AV_PIX_FMT_YUV422P)
            c->convert_unscaled = uyvyToYuv422Wrapper;

    #define isPlanarGray(x) (isGray(x) && (x) != AV_PIX_FMT_YA8 && (x) != AV_PIX_FMT_YA16LE && (x) != AV_PIX_FMT_YA16BE)
        /* simple copy */
        if ( srcFormat == dstFormat ||
            (srcFormat == AV_PIX_FMT_YUVA420P && dstFormat == AV_PIX_FMT_YUV420P) ||
            (srcFormat == AV_PIX_FMT_YUV420P && dstFormat == AV_PIX_FMT_YUVA420P) ||
            (isFloat(srcFormat) == isFloat(dstFormat)) && ((isPlanarYUV(srcFormat) && isPlanarGray(dstFormat)) ||
                                (isPlanarYUV(dstFormat) && isPlanarGray(srcFormat)) ||
                                (isPlanarGray(dstFormat) && isPlanarGray(srcFormat)) ||
                                (isPlanarYUV(srcFormat) && isPlanarYUV(dstFormat) &&
                                 c->chrDstHSubSample == c->chrSrcHSubSample &&
                                 c->chrDstVSubSample == c->chrSrcVSubSample &&
                                 !isSemiPlanarYUV(srcFormat) && !isSemiPlanarYUV(dstFormat))))
        {
            if (isPacked(c->srcFormat))
                c->convert_unscaled = packedCopyWrapper;
            else /* Planar YUV or gray */
                c->convert_unscaled = planarCopyWrapper;
        }

        if (ARCH_PPC)
            ff_get_unscaled_swscale_ppc(c);
        if (ARCH_ARM)
            ff_get_unscaled_swscale_arm(c);
        if (ARCH_AARCH64)
            ff_get_unscaled_swscale_aarch64(c);
    }
    • As the code shows, a different conversion function is selected depending on the input/output pixel-format pair.
    • For example, for YUV420P to NV12, planarToNv12Wrapper() is assigned to the context's convert_unscaled pointer.

    Resizing: swscale()

    • When the image is resized, ff_sws_init_scale() installs the scaler's per-stage function pointers; sws_init_swscale() is its platform-independent part.
    void ff_sws_init_scale(SwsContext *c)
    {
        sws_init_swscale(c);

        if (ARCH_PPC)
            ff_sws_init_swscale_ppc(c);
        if (ARCH_X86)
            ff_sws_init_swscale_x86(c);
        if (ARCH_AARCH64)
            ff_sws_init_swscale_aarch64(c);
        if (ARCH_ARM)
            ff_sws_init_swscale_arm(c);
    }
    static av_cold void sws_init_swscale(SwsContext *c)
    {
        enum AVPixelFormat srcFormat = c->srcFormat;

        ff_sws_init_output_funcs(c, &c->yuv2plane1, &c->yuv2planeX,
                                 &c->yuv2nv12cX, &c->yuv2packed1,
                                 &c->yuv2packed2, &c->yuv2packedX, &c->yuv2anyX);

        ff_sws_init_input_funcs(c);

        if (c->srcBpc == 8) {
            if (c->dstBpc <= 14) {
                c->hyScale = c->hcScale = hScale8To15_c;
                if (c->flags & SWS_FAST_BILINEAR) {
                    c->hyscale_fast = ff_hyscale_fast_c;
                    c->hcscale_fast = ff_hcscale_fast_c;
                }
            } else {
                c->hyScale = c->hcScale = hScale8To19_c;
            }
        } else {
            c->hyScale = c->hcScale = c->dstBpc > 14 ? hScale16To19_c
                                                     : hScale16To15_c;
        }

        ff_sws_init_range_convert(c);

        if (!(isGray(srcFormat) || isGray(c->dstFormat) ||
              srcFormat == AV_PIX_FMT_MONOBLACK || srcFormat == AV_PIX_FMT_MONOWHITE))
            c->needs_hcscale = 1;
    }
    • (No direct basis for the following was found in current FFmpeg sources; it reflects older versions.)
    • In those versions, sws_init_context() assigned the SwsContext's swscale pointer with:
    • c->swscale = ff_getSwsFunc(c);
    • That is, the return value of ff_getSwsFunc() was stored in the context, and that return value is a static function itself named "swscale".
    • The definition of this static swscale() function follows.
    1. static int swscale(SwsContext *c, const uint8_t *src[],
    2. int srcStride[], int srcSliceY, int srcSliceH,
    3. uint8_t *dst[], int dstStride[],
    4. int dstSliceY, int dstSliceH)
    5. {
    6. const int scale_dst = dstSliceY > 0 || dstSliceH < c->dstH;
    7. /* load a few things into local vars to make the code more readable?
    8. * and faster */
    9. const int dstW = c->dstW;
    10. int dstH = c->dstH;
    11. const enum AVPixelFormat dstFormat = c->dstFormat;
    12. const int flags = c->flags;
    13. int32_t *vLumFilterPos = c->vLumFilterPos;
    14. int32_t *vChrFilterPos = c->vChrFilterPos;
    15. const int vLumFilterSize = c->vLumFilterSize;
    16. const int vChrFilterSize = c->vChrFilterSize;
    17. yuv2planar1_fn yuv2plane1 = c->yuv2plane1;
    18. yuv2planarX_fn yuv2planeX = c->yuv2planeX;
    19. yuv2interleavedX_fn yuv2nv12cX = c->yuv2nv12cX;
    20. yuv2packed1_fn yuv2packed1 = c->yuv2packed1;
    21. yuv2packed2_fn yuv2packed2 = c->yuv2packed2;
    22. yuv2packedX_fn yuv2packedX = c->yuv2packedX;
    23. yuv2anyX_fn yuv2anyX = c->yuv2anyX;
    24. const int chrSrcSliceY = srcSliceY >> c->chrSrcVSubSample;
    25. const int chrSrcSliceH = AV_CEIL_RSHIFT(srcSliceH, c->chrSrcVSubSample);
    26. int should_dither = isNBPS(c->srcFormat) ||
    27. is16BPS(c->srcFormat);
    28. int lastDstY;
    29. /* vars which will change and which we need to store back in the context */
    30. int dstY = c->dstY;
    31. int lastInLumBuf = c->lastInLumBuf;
    32. int lastInChrBuf = c->lastInChrBuf;
    33. int lumStart = 0;
    34. int lumEnd = c->descIndex[0];
    35. int chrStart = lumEnd;
    36. int chrEnd = c->descIndex[1];
    37. int vStart = chrEnd;
    38. int vEnd = c->numDesc;
    39. SwsSlice *src_slice = &c->slice[lumStart];
    40. SwsSlice *hout_slice = &c->slice[c->numSlice-2];
    41. SwsSlice *vout_slice = &c->slice[c->numSlice-1];
    42. SwsFilterDescriptor *desc = c->desc;
    43. int needAlpha = c->needAlpha;
    44. int hasLumHoles = 1;
    int hasChrHoles = 1;
    if (isPacked(c->srcFormat)) {
        src[1] =
        src[2] =
        src[3] = src[0];
        srcStride[1] =
        srcStride[2] =
        srcStride[3] = srcStride[0];
    }
    srcStride[1] *= 1 << c->vChrDrop;
    srcStride[2] *= 1 << c->vChrDrop;
    DEBUG_BUFFERS("swscale() %p[%d] %p[%d] %p[%d] %p[%d] -> %p[%d] %p[%d] %p[%d] %p[%d]\n",
                  src[0], srcStride[0], src[1], srcStride[1],
                  src[2], srcStride[2], src[3], srcStride[3],
                  dst[0], dstStride[0], dst[1], dstStride[1],
                  dst[2], dstStride[2], dst[3], dstStride[3]);
    DEBUG_BUFFERS("srcSliceY: %d srcSliceH: %d dstY: %d dstH: %d\n",
                  srcSliceY, srcSliceH, dstY, dstH);
    DEBUG_BUFFERS("vLumFilterSize: %d vChrFilterSize: %d\n",
                  vLumFilterSize, vChrFilterSize);
    if (dstStride[0]&15 || dstStride[1]&15 ||
        dstStride[2]&15 || dstStride[3]&15) {
        SwsContext *const ctx = c->parent ? c->parent : c;
        if (flags & SWS_PRINT_INFO &&
            !atomic_exchange_explicit(&ctx->stride_unaligned_warned, 1, memory_order_relaxed)) {
            av_log(c, AV_LOG_WARNING,
                   "Warning: dstStride is not aligned!\n"
                   "         ->cannot do aligned memory accesses anymore\n");
        }
    }
#if ARCH_X86
    if (   (uintptr_t)dst[0]&15 || (uintptr_t)dst[1]&15 || (uintptr_t)dst[2]&15
        || (uintptr_t)src[0]&15 || (uintptr_t)src[1]&15 || (uintptr_t)src[2]&15
        || dstStride[0]&15 || dstStride[1]&15 || dstStride[2]&15 || dstStride[3]&15
        || srcStride[0]&15 || srcStride[1]&15 || srcStride[2]&15 || srcStride[3]&15
       ) {
        SwsContext *const ctx = c->parent ? c->parent : c;
        int cpu_flags = av_get_cpu_flags();
        if (flags & SWS_PRINT_INFO && HAVE_MMXEXT && (cpu_flags & AV_CPU_FLAG_SSE2) &&
            !atomic_exchange_explicit(&ctx->stride_unaligned_warned, 1, memory_order_relaxed)) {
            av_log(c, AV_LOG_WARNING, "Warning: data is not aligned! This can lead to a speed loss\n");
        }
    }
#endif
    if (scale_dst) {
        dstY = dstSliceY;
        dstH = dstY + dstSliceH;
        lastInLumBuf = -1;
        lastInChrBuf = -1;
    } else if (srcSliceY == 0) {
        /* Note the user might start scaling the picture in the middle so this
         * will not get executed. This is not really intended but works
         * currently, so people might do it. */
        dstY         = 0;
        lastInLumBuf = -1;
        lastInChrBuf = -1;
    }
    if (!should_dither) {
        c->chrDither8 = c->lumDither8 = sws_pb_64;
    }
    lastDstY = dstY;
    ff_init_vscale_pfn(c, yuv2plane1, yuv2planeX, yuv2nv12cX,
                       yuv2packed1, yuv2packed2, yuv2packedX, yuv2anyX, c->use_mmx_vfilter);
    ff_init_slice_from_src(src_slice, (uint8_t**)src, srcStride, c->srcW,
                           srcSliceY, srcSliceH, chrSrcSliceY, chrSrcSliceH, 1);
    ff_init_slice_from_src(vout_slice, (uint8_t**)dst, dstStride, c->dstW,
                           dstY, dstSliceH, dstY >> c->chrDstVSubSample,
                           AV_CEIL_RSHIFT(dstSliceH, c->chrDstVSubSample), scale_dst);
    if (srcSliceY == 0) {
        hout_slice->plane[0].sliceY = lastInLumBuf + 1;
        hout_slice->plane[1].sliceY = lastInChrBuf + 1;
        hout_slice->plane[2].sliceY = lastInChrBuf + 1;
        hout_slice->plane[3].sliceY = lastInLumBuf + 1;
        hout_slice->plane[0].sliceH =
        hout_slice->plane[1].sliceH =
        hout_slice->plane[2].sliceH =
        hout_slice->plane[3].sliceH = 0;
        hout_slice->width = dstW;
    }
    for (; dstY < dstH; dstY++) {
        const int chrDstY = dstY >> c->chrDstVSubSample;
        int use_mmx_vfilter = c->use_mmx_vfilter;
        // First line needed as input
        const int firstLumSrcY  = FFMAX(1 - vLumFilterSize, vLumFilterPos[dstY]);
        const int firstLumSrcY2 = FFMAX(1 - vLumFilterSize, vLumFilterPos[FFMIN(dstY | ((1 << c->chrDstVSubSample) - 1), c->dstH - 1)]);
        // First line needed as input
        const int firstChrSrcY  = FFMAX(1 - vChrFilterSize, vChrFilterPos[chrDstY]);
        // Last line needed as input
        int lastLumSrcY  = FFMIN(c->srcH,    firstLumSrcY  + vLumFilterSize) - 1;
        int lastLumSrcY2 = FFMIN(c->srcH,    firstLumSrcY2 + vLumFilterSize) - 1;
        int lastChrSrcY  = FFMIN(c->chrSrcH, firstChrSrcY  + vChrFilterSize) - 1;
        int enough_lines;
        int i;
        int posY, cPosY, firstPosY, lastPosY, firstCPosY, lastCPosY;
        // handle holes (FAST_BILINEAR & weird filters)
        if (firstLumSrcY > lastInLumBuf) {
            hasLumHoles = lastInLumBuf != firstLumSrcY - 1;
            if (hasLumHoles) {
                hout_slice->plane[0].sliceY = firstLumSrcY;
                hout_slice->plane[3].sliceY = firstLumSrcY;
                hout_slice->plane[0].sliceH =
                hout_slice->plane[3].sliceH = 0;
            }
            lastInLumBuf = firstLumSrcY - 1;
        }
        if (firstChrSrcY > lastInChrBuf) {
            hasChrHoles = lastInChrBuf != firstChrSrcY - 1;
            if (hasChrHoles) {
                hout_slice->plane[1].sliceY = firstChrSrcY;
                hout_slice->plane[2].sliceY = firstChrSrcY;
                hout_slice->plane[1].sliceH =
                hout_slice->plane[2].sliceH = 0;
            }
            lastInChrBuf = firstChrSrcY - 1;
        }
        DEBUG_BUFFERS("dstY: %d\n", dstY);
        DEBUG_BUFFERS("\tfirstLumSrcY: %d lastLumSrcY: %d lastInLumBuf: %d\n",
                      firstLumSrcY, lastLumSrcY, lastInLumBuf);
        DEBUG_BUFFERS("\tfirstChrSrcY: %d lastChrSrcY: %d lastInChrBuf: %d\n",
                      firstChrSrcY, lastChrSrcY, lastInChrBuf);
        // Do we have enough lines in this slice to output the dstY line
        enough_lines = lastLumSrcY2 < srcSliceY + srcSliceH &&
                       lastChrSrcY < AV_CEIL_RSHIFT(srcSliceY + srcSliceH, c->chrSrcVSubSample);
        if (!enough_lines) {
            lastLumSrcY = srcSliceY + srcSliceH - 1;
            lastChrSrcY = chrSrcSliceY + chrSrcSliceH - 1;
            DEBUG_BUFFERS("buffering slice: lastLumSrcY %d lastChrSrcY %d\n",
                          lastLumSrcY, lastChrSrcY);
        }
        av_assert0((lastLumSrcY - firstLumSrcY + 1) <= hout_slice->plane[0].available_lines);
        av_assert0((lastChrSrcY - firstChrSrcY + 1) <= hout_slice->plane[1].available_lines);
        posY = hout_slice->plane[0].sliceY + hout_slice->plane[0].sliceH;
        if (posY <= lastLumSrcY && !hasLumHoles) {
            firstPosY = FFMAX(firstLumSrcY, posY);
            lastPosY  = FFMIN(firstLumSrcY + hout_slice->plane[0].available_lines - 1, srcSliceY + srcSliceH - 1);
        } else {
            firstPosY = posY;
            lastPosY  = lastLumSrcY;
        }
        cPosY = hout_slice->plane[1].sliceY + hout_slice->plane[1].sliceH;
        if (cPosY <= lastChrSrcY && !hasChrHoles) {
            firstCPosY = FFMAX(firstChrSrcY, cPosY);
            lastCPosY  = FFMIN(firstChrSrcY + hout_slice->plane[1].available_lines - 1, AV_CEIL_RSHIFT(srcSliceY + srcSliceH, c->chrSrcVSubSample) - 1);
        } else {
            firstCPosY = cPosY;
            lastCPosY  = lastChrSrcY;
        }
        ff_rotate_slice(hout_slice, lastPosY, lastCPosY);
        if (posY < lastLumSrcY + 1) {
            for (i = lumStart; i < lumEnd; ++i)
                desc[i].process(c, &desc[i], firstPosY, lastPosY - firstPosY + 1);
        }
        lastInLumBuf = lastLumSrcY;
        if (cPosY < lastChrSrcY + 1) {
            for (i = chrStart; i < chrEnd; ++i)
                desc[i].process(c, &desc[i], firstCPosY, lastCPosY - firstCPosY + 1);
        }
        lastInChrBuf = lastChrSrcY;
        if (!enough_lines)
            break;  // we can't output a dstY line so let's try with the next slice
#if HAVE_MMX_INLINE
        ff_updateMMXDitherTables(c, dstY);
#endif
        if (should_dither) {
            c->chrDither8 = ff_dither_8x8_128[chrDstY & 7];
            c->lumDither8 = ff_dither_8x8_128[dstY    & 7];
        }
        if (dstY >= c->dstH - 2) {
            /* hmm looks like we can't use MMX here without overwriting
             * this array's tail */
            ff_sws_init_output_funcs(c, &yuv2plane1, &yuv2planeX, &yuv2nv12cX,
                                     &yuv2packed1, &yuv2packed2, &yuv2packedX, &yuv2anyX);
            use_mmx_vfilter = 0;
            ff_init_vscale_pfn(c, yuv2plane1, yuv2planeX, yuv2nv12cX,
                               yuv2packed1, yuv2packed2, yuv2packedX, yuv2anyX, use_mmx_vfilter);
        }
        for (i = vStart; i < vEnd; ++i)
            desc[i].process(c, &desc[i], dstY, 1);
    }
    if (isPlanar(dstFormat) && isALPHA(dstFormat) && !needAlpha) {
        int offset = lastDstY - dstSliceY;
        int length = dstW;
        int height = dstY - lastDstY;
        if (is16BPS(dstFormat) || isNBPS(dstFormat)) {
            const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(dstFormat);
            fillPlane16(dst[3], dstStride[3], length, height, offset,
                        1, desc->comp[3].depth,
                        isBE(dstFormat));
        } else if (is32BPS(dstFormat)) {
            const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(dstFormat);
            fillPlane32(dst[3], dstStride[3], length, height, offset,
                        1, desc->comp[3].depth,
                        isBE(dstFormat), desc->flags & AV_PIX_FMT_FLAG_FLOAT);
        } else
            fillPlane(dst[3], dstStride[3], length, height, offset, 255);
    }
#if HAVE_MMXEXT_INLINE
    if (av_get_cpu_flags() & AV_CPU_FLAG_MMXEXT)
        __asm__ volatile ("sfence" ::: "memory");
#endif
    emms_c();
    /* store changed local vars back in the context */
    c->dstY         = dstY;
    c->lastInLumBuf = lastInLumBuf;
    c->lastInChrBuf = lastInChrBuf;
    return dstY - lastDstY;
}
    • As the listing shows, swscale() scales the image line by line: each output line is produced by first scaling horizontally, then scaling vertically.
    • The functions implementing each step are:
    • 1. Horizontal scaling
      • a) Luma horizontal scaling: hyscale()
      • b) Chroma horizontal scaling: hcscale()
    • 2. Vertical scaling
      • a) Planar
        • i. Luma vertical, 1:1 (no scaling): yuv2plane1()
        • ii. Luma vertical, scaling: yuv2planeX()
        • iii. Chroma vertical, 1:1 (no scaling): yuv2plane1()
        • iv. Chroma vertical, scaling: yuv2planeX()
      • b) Packed
        • i. Vertical, 1:1 (no scaling): yuv2packed1()
        • ii. Vertical, scaling: yuv2packedX()
    • The definitions of these functions are examined below.

    hyscale()

    • The horizontal luma scaling function hyscale() does not exist as a standalone function in current FFmpeg sources; horizontal scaling is invoked through the hyScale/hcScale function pointers of SwsContext, whose documentation comment and declarations (from libswscale\swscale_internal.h) are shown below.

    /**
     * Scale one horizontal line of input data using a filter over the input
     * lines, to produce one (differently sized) line of output data.
     *
     * @param dst        pointer to destination buffer for horizontally scaled
     *                   data. If the number of bits per component of one
     *                   destination pixel (SwsContext->dstBpc) is <= 10, data
     *                   will be 15 bpc in 16 bits (int16_t) width. Else (i.e.
     *                   SwsContext->dstBpc == 16), data will be 19bpc in
     *                   32 bits (int32_t) width.
     * @param dstW       width of destination image
     * @param src        pointer to source data to be scaled. If the number of
     *                   bits per component of a source pixel (SwsContext->srcBpc)
     *                   is 8, this is 8bpc in 8 bits (uint8_t) width. Else
     *                   (i.e. SwsContext->dstBpc > 8), this is native depth
     *                   in 16 bits (uint16_t) width. In other words, for 9-bit
     *                   YUV input, this is 9bpc, for 10-bit YUV input, this is
     *                   10bpc, and for 16-bit RGB or YUV, this is 16bpc.
     * @param filter     filter coefficients to be used per output pixel for
     *                   scaling. This contains 14bpp filtering coefficients.
     *                   Guaranteed to contain dstW * filterSize entries.
     * @param filterPos  position of the first input pixel to be used for
     *                   each output pixel during scaling. Guaranteed to
     *                   contain dstW entries.
     * @param filterSize the number of input coefficients to be used (and
     *                   thus the number of input pixels to be used) for
     *                   creating a single output pixel. Is aligned to 4
     *                   (and input coefficients thus padded with zeroes)
     *                   to simplify creating SIMD code.
     */
    /** @{ */
    void (*hyScale)(struct SwsContext *c, int16_t *dst, int dstW,
                    const uint8_t *src, const int16_t *filter,
                    const int32_t *filterPos, int filterSize);
    void (*hcScale)(struct SwsContext *c, int16_t *dst, int dstW,
                    const uint8_t *src, const int16_t *filter,
                    const int32_t *filterPos, int filterSize);

    (The remainder of this section is missing from the source.)

  • Original article: https://blog.csdn.net/CHYabc123456hh/article/details/125434533