• Streaming Media Analysis: RTMP Protocol Encapsulation


     1. RTMP message parsing

    1.1 Overview of the RTMP Message format
    For its upper layer, the RTMP protocol defines a Message structure that encapsulates audio/video frame data and protocol control commands. Its format is as follows:

     0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    | Message Type  |               Payload length                  |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                    Timestamp                                  |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                    Stream ID                                  |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                 Message   Payload                             |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+


    Message Type (1 byte): message type ID. Messages fall into three broad classes: protocol control messages, audio/video data frames, and command messages.
    Payload Length (3 bytes): length of the message payload in bytes, stored big-endian.
    Timestamp (4 bytes): timestamp of this message, stored big-endian.
    Stream ID (4 bytes): message stream ID. In practice, packet captures usually show only two values: 0 for signalling and 1 for audio/video data.
    Message Payload: the message body.
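
    As an illustration only (this is not an FFmpeg type), the fields above can be modelled as a plain C struct. Note that on the wire these fields are not sent as one contiguous header; they are carried by the chunk headers described in the next section:

    #include <stdint.h>

    /* Illustrative view of one RTMP Message, following the figure above. */
    typedef struct RtmpMessage {
        uint8_t   type;         /* Message Type: control / audio-video / command          */
        uint32_t  payload_len;  /* Payload Length: 3 bytes on the wire, big-endian         */
        uint32_t  timestamp;    /* Timestamp: 4 bytes on the wire, big-endian              */
        uint32_t  stream_id;    /* Stream ID: captures usually show 0 (signalling), 1 (A/V)*/
        uint8_t  *payload;      /* Message Payload                                         */
    } RtmpMessage;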


    1.2 Overview of the RTMP Chunk format
    A single RTMP Message can be very large, especially when it carries a video frame (with a 3-byte payload length field, up to 16 MB in theory).

    To keep TCP transmission efficient (an oversized packet is fragmented by the MTU along the path, and the loss of any single fragment forces the whole message to be retransmitted, lowering throughput), large messages have to be broken up.

    For this reason, the designers of RTMP split Messages inside the protocol itself; the unit of this splitting is called a Chunk (data block).

    The default chunk payload size is 128 bytes. Either endpoint may change the chunk size it sends and announce the new value to the peer through RTMP itself (a sketch of that control message follows the note below). What actually travels over the network is therefore always a sequence of chunks with the following layout:

         1~3 bytes
    |<-Chunk Header->|  0/3/7/11 bytes      0/4 bytes
    +----------------+----------------+-------------------+-----------+
    |  Basic header  |   Msg Header   | Extended Timestamp|Chunk Data |
    +----------------+----------------+-------------------+-----------+


    Note: the Msg Header carries a 3-byte timestamp field; only when that field is set to 0xFFFFFF does the Extended Timestamp field appear, holding the full 32-bit value.
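
    As mentioned in 1.2, an endpoint may change its outgoing chunk size and announce it to the peer. Below is a minimal sketch of that "Set Chunk Size" control message (message type 1, carried on chunk stream 2, message stream 0, with a 4-byte big-endian payload). This is hand-rolled illustration code, not FFmpeg's implementation:

    #include <stdint.h>

    /* Build one complete chunk carrying a Set Chunk Size message (type 1). */
    static int build_set_chunk_size(uint8_t out[16], uint32_t new_chunk_size)
    {
        int n = 0;
        out[n++] = (0 << 6) | 2;                                  /* Basic Header: format=0, CSID=2      */
        out[n++] = 0; out[n++] = 0; out[n++] = 0;                 /* timestamp = 0 (3 bytes)             */
        out[n++] = 0; out[n++] = 0; out[n++] = 4;                 /* message length = 4 (3 bytes)        */
        out[n++] = 1;                                             /* message type = 1 (Set Chunk Size)   */
        out[n++] = 0; out[n++] = 0; out[n++] = 0; out[n++] = 0;   /* msg stream id = 0 (4 bytes)         */
        out[n++] = (new_chunk_size >> 24) & 0xFF;                 /* payload: new chunk size, big-endian */
        out[n++] = (new_chunk_size >> 16) & 0xFF;
        out[n++] = (new_chunk_size >>  8) & 0xFF;
        out[n++] =  new_chunk_size        & 0xFF;
        return n;                                                 /* 12-byte header + 4-byte payload     */
    }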
    1.2.1 Basic Header
    The Basic Header is also called the Chunk header. It comes in three actual forms (1 to 3 bytes), so that CSIDs (Chunk Stream IDs) of different ranges can be represented: as long as the CSID still fits, the shortest form should be used, keeping the overhead added by the Basic Header as small as possible.

    +--------------------+------------------+
    |  format [2 bits]   |   CSID [6 bits]  |                                 Form 1: 1 byte
    +--------------------+------------------+
    +--------------------+------------------+---------------------+
    |  format [2 bits]   |     0 [6 bits]   | ext. CSID [8 bits]  |           Form 2: 2 bytes
    +--------------------+------------------+---------------------+
    +--------------------+------------------+--------------------------------+
    |  format [2 bits]   |     1 [6 bits]   |       ext. CSID [16 bits]      |  Form 3: 3 bytes
    +--------------------+------------------+--------------------------------+


    In practice, an RTMP implementation always reads one byte of the Basic Header first and uses its leading 2-bit format field and trailing 6-bit CSID field to decide the actual layout of the chunk.

    First look at the low 6 bits (CSID) of that first byte:
     When the low 6 bits are 0, the Basic Header is 2 bytes long and the extended CSID lies in [64+0, 64+255] = [64, 319].
     When the low 6 bits are 1, the Basic Header is 3 bytes long and the extended CSID lies in [64+0, 64+65535] = [64, 65599].
     When the low 6 bits are 2~63, the Basic Header is 1 byte long; this covers the vast majority of RTMP chunks.
     CSID = 2 is used for Message Types 1, 2, 3, 5 and 6 (chunk-layer protocol control) and 4 (user control messages).
     CSID = 3~8 are typically used for connect, createStream, releaseStream, publish, metadata and audio/video data
     (different implementations, e.g. FFmpeg and OBS, differ considerably in which CSIDs they pick here).
     Larger CSIDs add little beyond this, since the Message Type already disambiguates; of course, users may use higher CSIDs for private protocol extensions.
     In real deployments only a handful of CSIDs are needed, so the Basic Header is normally a single byte (a parsing sketch follows below).
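
    A minimal parsing sketch of the rule above (illustration code, not FFmpeg's reader):

    #include <stdint.h>
    #include <stddef.h>

    /* Returns the number of Basic Header bytes consumed (1, 2 or 3),
       or -1 if the buffer is too short. */
    static int parse_basic_header(const uint8_t *buf, size_t len,
                                  unsigned *format, unsigned *csid)
    {
        if (len < 1)
            return -1;
        *format = buf[0] >> 6;            /* high 2 bits: Msg Header format   */
        *csid   = buf[0] & 0x3F;          /* low 6 bits                       */
        if (*csid == 0) {                 /* 2-byte form: CSID in [64, 319]   */
            if (len < 2) return -1;
            *csid = 64 + buf[1];
            return 2;
        }
        if (*csid == 1) {                 /* 3-byte form: CSID in [64, 65599] */
            if (len < 3) return -1;
            *csid = 64 + buf[1] + (buf[2] << 8);
            return 3;
        }
        return 1;                         /* 1-byte form: CSID in [2, 63]     */
    }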

    1.2.2 Msg Header

    Next, the 2-bit format field determines the layout of the Msg Header that follows.
    format = 00: the Msg Header occupies 11 bytes. This is the most verbose form, normally used for the first chunk of a stream, and it is the only case in which the timestamp in the message header is an absolute time. In subsequent chunks the Msg Header either carries no timestamp at all, or carries only a delta relative to the previous chunk.

    |<--Basic Header->|<------------------------Msg Header----------------------->|
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    | format  | CSID  | timestamp | message length | message type | msg stream id |
    |    00   |       |           |                |              |               |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      2 bits   6 bits    3 bytes      3 bytes          1 byte          4 bytes


    format = 01: the Msg Header occupies 7 bytes. The 4-byte msg stream id is omitted, meaning this chunk belongs to the same stream as the previously sent chunk.

    |<--Basic Header->|<---------------Msg Header---------------->|
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    | format  | CSID  | timestamp | message length | message type |
    |    01   |       |           |                |              |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      2 bits   6 bits    3 bytes       3 bytes          1 byte


    format = 10: the Msg Header occupies 3 bytes. Compared with format = 01, it further omits the 3-byte message length and the 1-byte message type.

    |<--Basic Header->|<--Msg Header--->|
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    | format  | CSID  |    timestamp    |
    |    10   |       |                 |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      2 bits   6 bits     3 bytes


    format = 11: the Msg Header occupies 0 bytes; this chunk reuses the Msg Header of the previous chunk on the same chunk stream unchanged.

    |<--Basic Header->|
    +-+-+-+-+-+-+-+-+-+
    | format  | CSID  |
    |    11   |       |
    +-+-+-+-+-+-+-+-+-+
      2 bits   6 bits
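
    The Msg Header length therefore follows directly from the 2-bit format value; a tiny lookup helper (illustrative only, not FFmpeg code) makes this explicit:

    /* Msg Header size in bytes for format = 0..3 (see the four layouts above).
       Whenever the current (or inherited) 24-bit timestamp field holds 0xFFFFFF,
       a 4-byte Extended Timestamp follows the Msg Header. */
    static int msg_header_size(unsigned format)
    {
        static const int sizes[4] = { 11, 7, 3, 0 };
        return sizes[format & 3];
    }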

     

    1.3 Audio/video data frames

    After the RTMP client and server have established a connection and publishing or playback begins, the MetaData describing the audio/video encoding is always transmitted first, followed by the audio/video frames:
    1) Audio/video data always uses the FLV encapsulation: tagHeader (1 byte) + tagData (the encoded frame).
    2) For H.264 video, the first video frame must carry the SPS and PPS; only then do I-frames and P-frames follow.
    3) For AAC audio, the first audio frame must be the AAC sequence header; only then does the AAC-encoded audio data follow.

    The MetaData itself is just a set of string-described properties such as the codec IDs and the width/height. Taking H.264 and AAC as examples, here is a first look at how audio/video data is encapsulated in RTMP chunks:

    Message Type = 8: Audio message

           protocol layer         encapsulation layer
    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    |RTMP Chunk Header | FLV AudioTagHeader | FLV AudioTagBody |
    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    FLV AudioTagHeader
    ++++++++++++++++++++++++++++++++++++++++++++++++++         +++++++++++++++
    |SoundFormat | SoundRate | SoundSize | SoundType |   OR +  |AACPacketType|
    ++++++++++++++++++++++++++++++++++++++++++++++++++         +++++++++++++++
       4 bits       2 bits       1 bit       1 bit
      codec ID    sample rate  8/16-bit   mono/stereo


    If the audio format is AAC (SoundFormat = 0x0A), the AudioTagHeader gains one extra byte, AACPacketType, describing the type of the AAC data that follows:
    AACPacketType = 0x00: the following data is an AAC sequence header;
    AACPacketType = 0x01: the following data is AAC audio data.

    The AAC sequence header itself is defined by AudioSpecificConfig; a simplified 2-byte AudioSpecificConfig looks like this:

    +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    | AAC Profile 5 bits | sample rate 4 bits | channels 4 bits | other 3 bits |
    +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    For RTMP publishing and playback, this AAC sequence header packet must be sent before the first actual audio data packet.
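
    A sketch of what that first audio payload looks like for a publisher (illustration only, not FFmpeg code; the concrete values assume AAC-LC, 44.1 kHz, stereo):

    #include <stdint.h>

    /* FLV AudioTagHeader (AAC) + AACPacketType=0 + 2-byte AudioSpecificConfig. */
    static int build_aac_sequence_header(uint8_t out[4])
    {
        /* SoundFormat=10 (AAC), SoundRate=3 (44 kHz), SoundSize=1 (16-bit),
           SoundType=1 (stereo) -> 0xAF */
        out[0] = (10 << 4) | (3 << 2) | (1 << 1) | 1;
        out[1] = 0x00;                      /* AACPacketType = 0: sequence header */
        /* AudioSpecificConfig: audioObjectType=2 (AAC LC),
           samplingFrequencyIndex=4 (44100 Hz), channelConfiguration=2 (stereo),
           trailing 3 bits = 0  ->  0x12 0x10 */
        out[2] = (2 << 3) | (4 >> 1);
        out[3] = ((4 & 1) << 7) | (2 << 3);
        return 4;                           /* payload of the first Audio message */
    }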


    Message Type = 9: Video message

           protocol layer         encapsulation layer
    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    |RTMP Chunk Header | FLV VideoTagHeader | FLV VideoTagBody |
    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    FLV VideoTagHeader
    +++++++++++++++++++++++      ++++++++++++++++        ++++++++++++++++++
    |Frame Type | CodecID |  +   |AVCPacketType |    +   |CompositionTime |
    +++++++++++++++++++++++      ++++++++++++++++        ++++++++++++++++++
       4 bits      4 bits            1 byte                   3 bytes
     frame type   codec ID


    Frame Type = 1 indicates an H.264 keyframe (including IDR frames); Frame Type = 2 indicates a non-keyframe.
    CodecID = 7 indicates AVC (H.264).
    When AVC (H.264) is used, the tag gains a 1-byte AVCPacketType field describing the type of the AVC data that follows, plus a 3-byte CompositionTime:
    AVCPacketType = 0: the following data is an AVC sequence header; the 3-byte CompositionTime is 0.
    AVCPacketType = 1: the following data is one or more AVC NALUs.
    AVCPacketType = 2: the following data is an AVC end-of-sequence marker (usually not needed).

    FLV VideoTagBody
    +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    |   Size   | AVCDecoderConfigurationRecord or ( one or more NALUs ) |
    +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
      4 bytes
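
    Putting the video tag fields together, the sketch below (illustration only, not FFmpeg code) wraps a single H.264 keyframe NALU into the VideoTagHeader + VideoTagBody layout above, using the 4-byte length-prefixed (AVCC) form; nalu/nalu_len are assumed to be one NAL unit without an Annex-B start code:

    #include <stdint.h>
    #include <string.h>

    static int wrap_h264_keyframe(uint8_t *out, size_t out_cap,
                                  const uint8_t *nalu, uint32_t nalu_len,
                                  uint32_t composition_time)
    {
        if (out_cap < 9 + nalu_len)
            return -1;
        out[0] = (1 << 4) | 7;                    /* FrameType=1 (keyframe), CodecID=7 (AVC) */
        out[1] = 1;                               /* AVCPacketType=1: AVC NALU               */
        out[2] = (composition_time >> 16) & 0xFF; /* CompositionTime, 3 bytes, big-endian    */
        out[3] = (composition_time >>  8) & 0xFF;
        out[4] =  composition_time        & 0xFF;
        out[5] = (nalu_len >> 24) & 0xFF;         /* Size: 4-byte NALU length prefix         */
        out[6] = (nalu_len >> 16) & 0xFF;
        out[7] = (nalu_len >>  8) & 0xFF;
        out[8] =  nalu_len        & 0xFF;
        memcpy(out + 9, nalu, nalu_len);
        return 9 + nalu_len;                      /* payload of one Video message (type 9)   */
    }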

    2. The RTMP protocol implementation in FFmpeg

    FFmpeg implements RTMP and its variants (rtmpt, rtmps, rtmpe, ...) in libavformat as URL protocols; the entry points for each variant are declared through the following macro fragment:

    const URLProtocol ff_##flavor##_protocol = {         \
        .name            = #flavor,                      \
        .url_open2       = rtmp_open,                    \
        .url_read        = rtmp_read,                    \
        .url_read_seek   = rtmp_seek,                    \
        .url_read_pause  = rtmp_pause,                   \
        .url_write       = rtmp_write,                   \
        .url_close       = rtmp_close,                   \
        .priv_data_size  = sizeof(RTMPContext),          \
        .flags           = URL_PROTOCOL_FLAG_NETWORK,    \
        .priv_data_class = &flavor##_class,              \
    };

     2.1 Analysis of rtmp_open

      1. TCP connection

      2. RTMP handshake (a byte-level sketch of the simple handshake follows this list)

      3. RTMP connect command

      4. Post-connect processing:

      set chunk_size

      set WINDOW_ACK_SIZE

      invoke operations, etc.
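
    For step 2, the plain handshake works like this: C0 is a single version byte (3); C1 is 1536 bytes made of a 4-byte time, 4 zero bytes and 1528 random bytes; the peer replies with S0/S1/S2 and each side echoes the other's first block as C2/S2. The sketch below is illustration code only; FFmpeg's actual rtmp_handshake() is more elaborate (it also handles the digest/verification variants):

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    #define RTMP_HS_SIZE 1536

    /* Fill the buffer with C0 followed by C1. */
    static void build_c0c1(uint8_t c0c1[1 + RTMP_HS_SIZE], uint32_t time_ms)
    {
        int i;
        c0c1[0] = 3;                              /* C0: protocol version           */
        c0c1[1] = (time_ms >> 24) & 0xFF;         /* C1: 4-byte timestamp           */
        c0c1[2] = (time_ms >> 16) & 0xFF;
        c0c1[3] = (time_ms >>  8) & 0xFF;
        c0c1[4] =  time_ms        & 0xFF;
        memset(c0c1 + 5, 0, 4);                   /* C1: 4 zero bytes               */
        for (i = 9; i < 1 + RTMP_HS_SIZE; i++)    /* C1: 1528 (pseudo) random bytes */
            c0c1[i] = rand() & 0xFF;
    }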

       

    static int rtmp_open(URLContext *s, const char *uri, int flags, AVDictionary **opts)
    {
        RTMPContext *rt = s->priv_data;
        char proto[8], hostname[256], path[1024], auth[100], *fname;
        char *old_app, *qmark, *n, fname_buffer[1024];
        uint8_t buf[2048];
        int port;
        int ret;

        if (rt->listen_timeout > 0)
            rt->listen = 1;

        rt->is_input = !(flags & AVIO_FLAG_WRITE);

        av_url_split(proto, sizeof(proto), auth, sizeof(auth),
                     hostname, sizeof(hostname), &port,
                     path, sizeof(path), s->filename);

        n = strchr(path, ' ');
        if (n) {
            av_log(s, AV_LOG_WARNING,
                   "Detected librtmp style URL parameters, these aren't supported "
                   "by the libavformat internal RTMP handler currently enabled. "
                   "See the documentation for the correct way to pass parameters.\n");
            *n = '\0'; // Trim not supported part
        }

        if (auth[0]) {
            char *ptr = strchr(auth, ':');
            if (ptr) {
                *ptr = '\0';
                av_strlcpy(rt->username, auth, sizeof(rt->username));
                av_strlcpy(rt->password, ptr + 1, sizeof(rt->password));
            }
        }

        if (rt->listen && strcmp(proto, "rtmp")) {
            av_log(s, AV_LOG_ERROR, "rtmp_listen not available for %s\n",
                   proto);
            return AVERROR(EINVAL);
        }

        if (!strcmp(proto, "rtmpt") || !strcmp(proto, "rtmpts")) {
            if (!strcmp(proto, "rtmpts"))
                av_dict_set(opts, "ffrtmphttp_tls", "1", 1);
            /* open the http tunneling connection */
            ff_url_join(buf, sizeof(buf), "ffrtmphttp", NULL, hostname, port, NULL);
        } else if (!strcmp(proto, "rtmps")) {
            /* open the tls connection */
            if (port < 0)
                port = RTMPS_DEFAULT_PORT;
            ff_url_join(buf, sizeof(buf), "tls", NULL, hostname, port, NULL);
        } else if (!strcmp(proto, "rtmpe") || (!strcmp(proto, "rtmpte"))) {
            if (!strcmp(proto, "rtmpte"))
                av_dict_set(opts, "ffrtmpcrypt_tunneling", "1", 1);
            /* open the encrypted connection */
            ff_url_join(buf, sizeof(buf), "ffrtmpcrypt", NULL, hostname, port, NULL);
            rt->encrypted = 1;
        } else {
            /* open the tcp connection */
            if (port < 0)
                port = RTMP_DEFAULT_PORT;
            if (rt->listen)
                ff_url_join(buf, sizeof(buf), "tcp", NULL, hostname, port,
                            "?listen&listen_timeout=%d",
                            rt->listen_timeout * 1000);
            else
                ff_url_join(buf, sizeof(buf), "tcp", NULL, hostname, port, NULL);
        }

    reconnect:
        // step 1: TCP connection
        if ((ret = ffurl_open_whitelist(&rt->stream, buf, AVIO_FLAG_READ_WRITE,
                                        &s->interrupt_callback, opts,
                                        s->protocol_whitelist, s->protocol_blacklist, s)) < 0) {
            av_log(s, AV_LOG_ERROR, "Cannot open connection %s\n", buf);
            goto fail;
        }

        if (rt->swfverify) {
            if ((ret = rtmp_calc_swfhash(s)) < 0)
                goto fail;
        }

        rt->state = STATE_START;
        // step 2: RTMP handshake - send C0 C1 C2, receive S0 S1 S2
        if (!rt->listen && (ret = rtmp_handshake(s, rt)) < 0)
            goto fail;
        if (rt->listen && (ret = rtmp_server_handshake(s, rt)) < 0)
            goto fail;

        rt->out_chunk_size = 128;
        rt->in_chunk_size  = 128; // Probably overwritten later
        rt->state = STATE_HANDSHAKED;

        // Keep the application name when it has been defined by the user.
        old_app = rt->app;

        rt->app = av_malloc(APP_MAX_LENGTH);
        if (!rt->app) {
            ret = AVERROR(ENOMEM);
            goto fail;
        }

        //extract "app" part from path
        qmark = strchr(path, '?');
        if (qmark && strstr(qmark, "slist=")) {
            char* amp;
            // After slist we have the playpath, the full path is used as app
            av_strlcpy(rt->app, path + 1, APP_MAX_LENGTH);
            fname = strstr(path, "slist=") + 6;
            // Strip any further query parameters from fname
            amp = strchr(fname, '&');
            if (amp) {
                av_strlcpy(fname_buffer, fname, FFMIN(amp - fname + 1,
                                                      sizeof(fname_buffer)));
                fname = fname_buffer;
            }
        } else if (!strncmp(path, "/ondemand/", 10)) {
            fname = path + 10;
            memcpy(rt->app, "ondemand", 9);
        } else {
            char *next = *path ? path + 1 : path;
            char *p = strchr(next, '/');
            if (!p) {
                if (old_app) {
                    // If name of application has been defined by the user, assume that
                    // playpath is provided in the URL
                    fname = next;
                } else {
                    fname = NULL;
                    av_strlcpy(rt->app, next, APP_MAX_LENGTH);
                }
            } else {
                // make sure we do not mismatch a playpath for an application instance
                char *c = strchr(p + 1, ':');
                fname = strchr(p + 1, '/');
                if (!fname || (c && c < fname)) {
                    fname = p + 1;
                    av_strlcpy(rt->app, path + 1, FFMIN(p - path, APP_MAX_LENGTH));
                } else {
                    fname++;
                    av_strlcpy(rt->app, path + 1, FFMIN(fname - path - 1, APP_MAX_LENGTH));
                }
            }
        }

        if (old_app) {
            // The name of application has been defined by the user, override it.
            if (strlen(old_app) >= APP_MAX_LENGTH) {
                ret = AVERROR(EINVAL);
                goto fail;
            }
            av_free(rt->app);
            rt->app = old_app;
        }

        if (!rt->playpath) {
            int max_len = 1;
            if (fname)
                max_len = strlen(fname) + 5; // add prefix "mp4:"
            rt->playpath = av_malloc(max_len);
            if (!rt->playpath) {
                ret = AVERROR(ENOMEM);
                goto fail;
            }
            if (fname) {
                int len = strlen(fname);
                if (!strchr(fname, ':') && len >= 4 &&
                    (!strcmp(fname + len - 4, ".f4v") ||
                     !strcmp(fname + len - 4, ".mp4"))) {
                    memcpy(rt->playpath, "mp4:", 5);
                } else {
                    if (len >= 4 && !strcmp(fname + len - 4, ".flv"))
                        fname[len - 4] = '\0';
                    rt->playpath[0] = 0;
                }
                av_strlcat(rt->playpath, fname, max_len);
            } else {
                rt->playpath[0] = '\0';
            }
        }

        if (!rt->tcurl) {
            rt->tcurl = av_malloc(TCURL_MAX_LENGTH);
            if (!rt->tcurl) {
                ret = AVERROR(ENOMEM);
                goto fail;
            }
            ff_url_join(rt->tcurl, TCURL_MAX_LENGTH, proto, NULL, hostname,
                        port, "/%s", rt->app);
        }

        if (!rt->flashver) {
            rt->flashver = av_malloc(FLASHVER_MAX_LENGTH);
            if (!rt->flashver) {
                ret = AVERROR(ENOMEM);
                goto fail;
            }
            if (rt->is_input) {
                snprintf(rt->flashver, FLASHVER_MAX_LENGTH, "%s %d,%d,%d,%d",
                         RTMP_CLIENT_PLATFORM, RTMP_CLIENT_VER1, RTMP_CLIENT_VER2,
                         RTMP_CLIENT_VER3, RTMP_CLIENT_VER4);
            } else {
                snprintf(rt->flashver, FLASHVER_MAX_LENGTH,
                         "FMLE/3.0 (compatible; %s)", LIBAVFORMAT_IDENT);
            }
        }

        rt->receive_report_size = 1048576;
        rt->bytes_read = 0;
        rt->has_audio = 0;
        rt->has_video = 0;
        rt->received_metadata = 0;
        rt->last_bytes_read = 0;
        rt->max_sent_unacked = 2500000;
        rt->duration = 0;

        av_log(s, AV_LOG_DEBUG, "Proto = %s, path = %s, app = %s, fname = %s\n",
               proto, path, rt->app, rt->playpath);
        if (!rt->listen) {
            // step 3: RTMP connect - send the "connect" command to the server
            if ((ret = gen_connect(s, rt)) < 0)
                goto fail;
        } else {
            if ((ret = read_connect(s, s->priv_data)) < 0)
                goto fail;
        }

        do {
            // step 4: post-connect processing (chunk size, window ack size, invokes, ...)
            ret = get_packet(s, 1);
        } while (ret == AVERROR(EAGAIN));
        if (ret < 0)
            goto fail;

        if (rt->do_reconnect) {
            int i;
            ffurl_closep(&rt->stream);
            rt->do_reconnect = 0;
            rt->nb_invokes   = 0;
            for (i = 0; i < 2; i++)
                memset(rt->prev_pkt[i], 0,
                       sizeof(**rt->prev_pkt) * rt->nb_prev_pkt[i]);
            free_tracked_methods(rt);
            goto reconnect;
        }

        if (rt->is_input) {
            // generate FLV header for demuxer
            rt->flv_size = 13;
            if ((ret = av_reallocp(&rt->flv_data, rt->flv_size)) < 0)
                goto fail;
            rt->flv_off  = 0;
            memcpy(rt->flv_data, "FLV\1\0\0\0\0\011\0\0\0\0", rt->flv_size);

            // Read packets until we reach the first A/V packet or read metadata.
            // If there was a metadata package in front of the A/V packets, we can
            // build the FLV header from this. If we do not receive any metadata,
            // the FLV decoder will allocate the needed streams when their first
            // audio or video packet arrives.
            while (!rt->has_audio && !rt->has_video && !rt->received_metadata) {
                if ((ret = get_packet(s, 0)) < 0)
                    goto fail;
            }

            // Either after we have read the metadata or (if there is none) the
            // first packet of an A/V stream, we have a better knowledge about the
            // streams, so set the FLV header accordingly.
            if (rt->has_audio) {
                rt->flv_data[4] |= FLV_HEADER_FLAG_HASAUDIO;
            }
            if (rt->has_video) {
                rt->flv_data[4] |= FLV_HEADER_FLAG_HASVIDEO;
            }

            // If we received the first packet of an A/V stream and no metadata but
            // the server returned a valid duration, create a fake metadata packet
            // to inform the FLV decoder about the duration.
            if (!rt->received_metadata && rt->duration > 0) {
                if ((ret = inject_fake_duration_metadata(rt)) < 0)
                    goto fail;
            }
        } else {
            rt->flv_size = 0;
            rt->flv_data = NULL;
            rt->flv_off  = 0;
            rt->skip_bytes = 13;
        }

        s->max_packet_size = rt->stream->max_packet_size;
        s->is_streamed     = 1;
        return 0;

    fail:
        av_freep(&rt->playpath);
        av_freep(&rt->tcurl);
        av_freep(&rt->flashver);
        av_dict_free(opts);
        rtmp_close(s);
        return ret;
    }

    2.2 Analysis of rtmp_write

          rtmp_write receives the FLV-muxed byte stream from the muxer, strips each FLV tag header, rebuilds the tag body as an RTMPPacket, and sends it through rtmp_send_packet (the printf loops below are debug output added by the blog's author):

    static int rtmp_write(URLContext *s, const uint8_t *buf, int size)
    {
        RTMPContext *rt = s->priv_data;
        int size_temp = size;
        int pktsize, pkttype, copy;
        uint32_t ts;
        const uint8_t *buf_temp = buf;
        uint8_t c;
        int ret;

        int i = 0;
        for (i = 0; i < 20; i++)
        {
            printf("%2x ", buf[i]);
        }
        printf("\n");

        do {
            if (rt->skip_bytes) {
                int skip = FFMIN(rt->skip_bytes, size_temp);
                buf_temp       += skip;
                size_temp      -= skip;
                rt->skip_bytes -= skip;
                continue;
            }

            if (rt->flv_header_bytes < RTMP_HEADER) {
                const uint8_t *header = rt->flv_header;
                int channel = RTMP_AUDIO_CHANNEL;

                copy = FFMIN(RTMP_HEADER - rt->flv_header_bytes, size_temp);
                bytestream_get_buffer(&buf_temp, rt->flv_header + rt->flv_header_bytes, copy);
                rt->flv_header_bytes += copy;
                size_temp            -= copy;
                if (rt->flv_header_bytes < RTMP_HEADER)
                    break;

                pkttype = bytestream_get_byte(&header);
                pktsize = bytestream_get_be24(&header);
                ts  = bytestream_get_be24(&header);
                ts |= bytestream_get_byte(&header) << 24;
                bytestream_get_be24(&header);
                rt->flv_size = pktsize;

                if (pkttype == RTMP_PT_VIDEO)
                    channel = RTMP_VIDEO_CHANNEL;

                if (((pkttype == RTMP_PT_VIDEO || pkttype == RTMP_PT_AUDIO) && ts == 0) ||
                    pkttype == RTMP_PT_NOTIFY) {
                    if ((ret = ff_rtmp_check_alloc_array(&rt->prev_pkt[1],
                                                         &rt->nb_prev_pkt[1],
                                                         channel)) < 0)
                        return ret;
                    // Force sending a full 12 bytes header by clearing the
                    // channel id, to make it not match a potential earlier
                    // packet in the same channel.
                    rt->prev_pkt[1][channel].channel_id = 0;
                }

                //this can be a big packet, it's better to send it right here
                if ((ret = ff_rtmp_packet_create(&rt->out_pkt, channel,
                                                 pkttype, ts, pktsize)) < 0)
                    return ret;

                rt->out_pkt.extra = rt->stream_id;
                rt->flv_data = rt->out_pkt.data;
            }

            copy = FFMIN(rt->flv_size - rt->flv_off, size_temp);
            bytestream_get_buffer(&buf_temp, rt->flv_data + rt->flv_off, copy);
            rt->flv_off += copy;
            size_temp   -= copy;

            if (rt->flv_off == rt->flv_size) {
                rt->skip_bytes = 4;

                if (rt->out_pkt.type == RTMP_PT_NOTIFY) {
                    // For onMetaData and |RtmpSampleAccess packets, we want
                    // @setDataFrame prepended to the packet before it gets sent.
                    // However, not all RTMP_PT_NOTIFY packets (e.g., onTextData
                    // and onCuePoint).
                    uint8_t commandbuffer[64];
                    int stringlen = 0;
                    GetByteContext gbc;

                    bytestream2_init(&gbc, rt->flv_data, rt->flv_size);
                    if (!ff_amf_read_string(&gbc, commandbuffer, sizeof(commandbuffer),
                                            &stringlen)) {
                        if (!strcmp(commandbuffer, "onMetaData") ||
                            !strcmp(commandbuffer, "|RtmpSampleAccess")) {
                            uint8_t *ptr;
                            if ((ret = av_reallocp(&rt->out_pkt.data, rt->out_pkt.size + 16)) < 0) {
                                rt->flv_size = rt->flv_off = rt->flv_header_bytes = 0;
                                return ret;
                            }
                            memmove(rt->out_pkt.data + 16, rt->out_pkt.data, rt->out_pkt.size);
                            rt->out_pkt.size += 16;
                            ptr = rt->out_pkt.data;
                            ff_amf_write_string(&ptr, "@setDataFrame");
                        }
                    }
                }

                printf("\n");
                uint8_t *ptr;
                ptr = rt->out_pkt.data;
                for (i = 0; i < 20; i++)
                {
                    printf("%2x ", ptr[i]);
                }
                printf("\n");

                if ((ret = rtmp_send_packet(rt, &rt->out_pkt, 0)) < 0)
                    return ret;

                rt->flv_size = 0;
                rt->flv_off = 0;
                rt->flv_header_bytes = 0;
                rt->flv_nb_packets++;
            }
        } while (buf_temp - buf < size);

        if (rt->flv_nb_packets < rt->flush_interval)
            return size;
        rt->flv_nb_packets = 0;

        /* set stream into nonblocking mode */
        rt->stream->flags |= AVIO_FLAG_NONBLOCK;

        /* try to read one byte from the stream */
        ret = ffurl_read(rt->stream, &c, 1);

        /* switch the stream back into blocking mode */
        rt->stream->flags &= ~AVIO_FLAG_NONBLOCK;

        if (ret == AVERROR(EAGAIN)) {
            /* no incoming data to handle */
            return size;
        } else if (ret < 0) {
            return ret;
        } else if (ret == 1) {
            RTMPPacket rpkt = { 0 };

            if ((ret = ff_rtmp_packet_read_internal(rt->stream, &rpkt,
                                                    rt->in_chunk_size,
                                                    &rt->prev_pkt[0],
                                                    &rt->nb_prev_pkt[0], c)) <= 0)
                return ret;

            if ((ret = rtmp_parse_result(s, rt, &rpkt)) < 0)
                return ret;

            ff_rtmp_packet_destroy(&rpkt);
        }

        return size;
    }

    Call chain: rtmp_send_packet -> ff_rtmp_packet_write

    ff_rtmp_packet_write splits the RTMP message into chunk_size-sized chunks and writes them to the network:

    int ff_rtmp_packet_write(URLContext *h, RTMPPacket *pkt,
                             int chunk_size, RTMPPacket **prev_pkt_ptr,
                             int *nb_prev_pkt)
    {
        uint8_t pkt_hdr[16], *p = pkt_hdr;
        int mode = RTMP_PS_TWELVEBYTES;
        int off = 0;
        int written = 0;
        int ret;
        RTMPPacket *prev_pkt;
        int use_delta; // flag if using timestamp delta, not RTMP_PS_TWELVEBYTES
        uint32_t timestamp; // full 32-bit timestamp or delta value

        if ((ret = ff_rtmp_check_alloc_array(prev_pkt_ptr, nb_prev_pkt,
                                             pkt->channel_id)) < 0)
            return ret;
        prev_pkt = *prev_pkt_ptr;

        //if channel_id = 0, this is first presentation of prev_pkt, send full hdr.
        use_delta = prev_pkt[pkt->channel_id].channel_id &&
            pkt->extra == prev_pkt[pkt->channel_id].extra &&
            pkt->timestamp >= prev_pkt[pkt->channel_id].timestamp;

        timestamp = pkt->timestamp;
        if (use_delta) {
            timestamp -= prev_pkt[pkt->channel_id].timestamp;
        }
        if (timestamp >= 0xFFFFFF) {
            pkt->ts_field = 0xFFFFFF;
        } else {
            pkt->ts_field = timestamp;
        }

        if (use_delta) {
            if (pkt->type == prev_pkt[pkt->channel_id].type &&
                pkt->size == prev_pkt[pkt->channel_id].size) {
                mode = RTMP_PS_FOURBYTES;
                if (pkt->ts_field == prev_pkt[pkt->channel_id].ts_field)
                    mode = RTMP_PS_ONEBYTE;
            } else {
                mode = RTMP_PS_EIGHTBYTES;
            }
        }

        if (pkt->channel_id < 64) {
            bytestream_put_byte(&p, pkt->channel_id | (mode << 6));
        } else if (pkt->channel_id < 64 + 256) {
            bytestream_put_byte(&p, 0 | (mode << 6));
            bytestream_put_byte(&p, pkt->channel_id - 64);
        } else {
            bytestream_put_byte(&p, 1 | (mode << 6));
            bytestream_put_le16(&p, pkt->channel_id - 64);
        }
        if (mode != RTMP_PS_ONEBYTE) {
            bytestream_put_be24(&p, pkt->ts_field);
            if (mode != RTMP_PS_FOURBYTES) {
                bytestream_put_be24(&p, pkt->size);
                bytestream_put_byte(&p, pkt->type);
                if (mode == RTMP_PS_TWELVEBYTES)
                    bytestream_put_le32(&p, pkt->extra);
            }
        }
        if (pkt->ts_field == 0xFFFFFF)
            bytestream_put_be32(&p, timestamp);

        // save history
        prev_pkt[pkt->channel_id].channel_id = pkt->channel_id;
        prev_pkt[pkt->channel_id].type       = pkt->type;
        prev_pkt[pkt->channel_id].size       = pkt->size;
        prev_pkt[pkt->channel_id].timestamp  = pkt->timestamp;
        prev_pkt[pkt->channel_id].ts_field   = pkt->ts_field;
        prev_pkt[pkt->channel_id].extra      = pkt->extra;

        if ((ret = ffurl_write(h, pkt_hdr, p - pkt_hdr)) < 0)
            return ret;
        written = p - pkt_hdr + pkt->size;

        while (off < pkt->size) {
            int towrite = FFMIN(chunk_size, pkt->size - off);
            if ((ret = ffurl_write(h, pkt->data + off, towrite)) < 0)
                return ret;

            off += towrite;
            if (off < pkt->size) {
                uint8_t marker = 0xC0 | pkt->channel_id;
                if ((ret = ffurl_write(h, &marker, 1)) < 0)
                    return ret;

                written++;
                if (pkt->ts_field == 0xFFFFFF) {
                    uint8_t ts_header[4];
                    AV_WB32(ts_header, timestamp);
                    if ((ret = ffurl_write(h, ts_header, 4)) < 0)
                        return ret;
                    written += 4;
                }
            }
        }
        return written;
    }

  • Original article: https://blog.csdn.net/u012794472/article/details/126763159