• In-Depth Analysis of RocketMQ Source Code - Persistence Components (4): CommitLog


    1. Introduction

    CommitLog is the core of RocketMQ's message storage. In the previous articles we covered MappedFile, MappedFileQueue, and the flush strategies, so the core building blocks of CommitLog have essentially already been introduced.

    2. The Composition of CommitLog

    The core of CommitLog is its MappedFileQueue, which is essentially a queue of MappedFile instances, so CommitLog and MappedFile have a one-to-many relationship.

    2.1 The basic components of CommitLog

    The fields that make up CommitLog are shown below:

    public class CommitLog implements Swappable {
        // Message's MAGIC CODE daa320a7
        // magic number marking a normal message entry in the commitLog
        public final static int MESSAGE_MAGIC_CODE = -626843481;
        protected static final Logger log = LoggerFactory.getLogger(LoggerName.STORE_LOGGER_NAME);
        // End of file empty MAGIC CODE cbd43194
        public final static int BLANK_MAGIC_CODE = -875286124;
        /**
         * CRC32 Format: [PROPERTY_CRC32 + NAME_VALUE_SEPARATOR + 10-digit fixed-length string + PROPERTY_SEPARATOR]
         */
        public static final int CRC32_RESERVED_LEN = MessageConst.PROPERTY_CRC32.length() + 1 + 10 + 1;
        // core: the MappedFileQueue is where messages are really stored; it is composed of multiple MappedFiles
        protected final MappedFileQueue mappedFileQueue;
        // the MessageStore that owns this commitLog
        protected final DefaultMessageStore defaultMessageStore;
        // component responsible for flushing
        private final FlushManager flushManager;
        // component that checks cold (infrequently accessed) data
        private final ColdDataCheckService coldDataCheckService;
        // component that actually appends data
        private final AppendMessageCallback appendMessageCallback;
        private final ThreadLocal<PutMessageThreadLocal> putMessageThreadLocal;
        protected volatile long confirmOffset = -1L;
        private volatile long beginTimeInLock = 0;
        protected final PutMessageLock putMessageLock;
        protected final TopicQueueLock topicQueueLock;
        private volatile Set<String> fullStorePaths = Collections.emptySet();
        // watcher for flush-disk requests
        private final FlushDiskWatcher flushDiskWatcher;
        // size of a commitLog file, 1 GB by default
        protected int commitLogSize;
        private final boolean enabledAppendPropCRC;
        // multi-dispatcher, the component RocketMQ uses to support the MQTT scenario
        protected final MultiDispatch multiDispatch;
    }

    Constructor:

    public CommitLog(final DefaultMessageStore defaultMessageStore) {
        String storePath = defaultMessageStore.getMessageStoreConfig().getStorePathCommitLog();
        // initialize the mappedFileQueue; at this point files are mapped into memory and file pre-warming is performed
        if (storePath.contains(MessageStoreConfig.MULTI_PATH_SPLITTER)) {
            this.mappedFileQueue = new MultiPathMappedFileQueue(defaultMessageStore.getMessageStoreConfig(),
                defaultMessageStore.getMessageStoreConfig().getMappedFileSizeCommitLog(),
                defaultMessageStore.getAllocateMappedFileService(), this::getFullStorePaths);
        } else {
            this.mappedFileQueue = new MappedFileQueue(storePath,
                defaultMessageStore.getMessageStoreConfig().getMappedFileSizeCommitLog(),
                defaultMessageStore.getAllocateMappedFileService());
        }
        this.defaultMessageStore = defaultMessageStore;
        // for synchronous flush, GroupCommitService performs the flush
        if (FlushDiskType.SYNC_FLUSH == defaultMessageStore.getMessageStoreConfig().getFlushDiskType()) {
            this.flushCommitLogService = new GroupCommitService();
        } else {
            // for asynchronous flush, FlushRealTimeService performs the flush
            this.flushCommitLogService = new FlushRealTimeService();
        }
        // when the transient store pool is enabled, CommitRealTimeService performs the commit
        this.commitLogService = new CommitRealTimeService();
        this.appendMessageCallback = new DefaultAppendMessageCallback();
        // each thread gets its own message encoder; MessageExtEncoder encodes the message into a ByteBuffer
        putMessageThreadLocal = new ThreadLocal<PutMessageThreadLocal>() {
            @Override
            protected PutMessageThreadLocal initialValue() {
                return new PutMessageThreadLocal(defaultMessageStore.getMessageStoreConfig().getMaxMessageSize());
            }
        };
        this.putMessageLock = defaultMessageStore.getMessageStoreConfig().isUseReentrantLockWhenPutMessage() ? new PutMessageReentrantLock() : new PutMessageSpinLock();
        // component with which RocketMQ supports the MQTT scenario: a message is written once into the commitLog
        // but dispatched to multiple queues
        this.multiDispatch = new MultiDispatch(defaultMessageStore, this);
        // with synchronous flush the calling thread blocks waiting for the flush result; FlushDiskWatcher watches
        // every flush request and wakes the caller up once the deadline passes, so the caller never blocks too long
        flushDiskWatcher = new FlushDiskWatcher();
    }

    2.2 Where CommitLog actually stores messages - MappedFileQueue

    The MappedFileQueue is the core of CommitLog. It was analyzed in detail in the earlier MappedFileQueue article, so it is not repeated here. For details see: In-Depth Analysis of RocketMQ Source Code - Persistence Components (2): MappedFileQueue (CSDN blog)
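
    As a quick refresher on the one-to-many relationship, here is a minimal sketch, not RocketMQ code, showing how a global physical offset maps to one of the fixed-size files managed by the queue; this roughly mirrors the index arithmetic MappedFileQueue.findMappedFileByOffset relies on, and the class and method names are illustrative:

    // Illustrative sketch: given a global physical offset, locate the index of the
    // fixed-size MappedFile that contains it, assuming files cover contiguous offsets.
    public class OffsetMappingSketch {
        static int fileIndexForOffset(long offset, long firstFileFromOffset, int mappedFileSize) {
            // each file covers [fromOffset, fromOffset + mappedFileSize)
            return (int) ((offset - firstFileFromOffset) / mappedFileSize);
        }

        public static void main(String[] args) {
            int oneGb = 1024 * 1024 * 1024;
            // an offset of 1.5 GB falls into the second file when the first file starts at 0
            System.out.println(fileIndexForOffset(oneGb + oneGb / 2L, 0L, oneGb)); // prints 1
        }
    }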

    2.3 How CommitLog is flushed to disk - FlushCommitLogService

    The flush manager contains the components that handle writing the commitLog to disk; it covers both the commit and the flush logic. This was analyzed in detail in the earlier article on flush strategies, so it is not repeated here. For details see:

    In-Depth Analysis of RocketMQ Source Code - Persistence Components (3): Flush Strategies (CSDN blog)

    2.4 What happens when a synchronous flush does not finish in time - FlushDiskWatcher

    With synchronous flush, the calling thread blocks waiting for the flush result. FlushDiskWatcher watches every flush request and, once a request passes its deadline, wakes the calling thread up so that it never blocks for too long. This also means that even under the synchronous flush policy, data can still be lost if a flush does not complete within the configured syncFlushTimeout.

    @Override
    public void run() {
        while (!isStopped()) {
            GroupCommitRequest request = null;
            try {
                // take a submitted flush request
                request = commitRequests.take();
            } catch (InterruptedException e) {
                log.warn("take flush disk commit request, but interrupted, this may caused by shutdown");
                continue;
            }
            // wait until the flush request completes or its deadline passes
            while (!request.future().isDone()) {
                long now = System.nanoTime();
                // the request has passed its deadline
                if (now - request.getDeadLine() >= 0) {
                    // report a flush-disk timeout to the waiting caller
                    request.wakeupCustomer(PutMessageStatus.FLUSH_DISK_TIMEOUT);
                    break;
                }
                // To avoid frequent thread switching, replace future.get with sleep here,
                long sleepTime = (request.getDeadLine() - now) / 1_000_000;
                sleepTime = Math.min(10, sleepTime);
                if (sleepTime == 0) {
                    request.wakeupCustomer(PutMessageStatus.FLUSH_DISK_TIMEOUT);
                    break;
                }
                try {
                    Thread.sleep(sleepTime);
                } catch (InterruptedException e) {
                    log.warn(
                        "An exception occurred while waiting for flushing disk to complete. this may caused by shutdown");
                    break;
                }
            }
        }
    }
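
    For context, the deadline compared against System.nanoTime() above is simply the submission time plus the configured timeout, converted to nanoseconds. A minimal sketch of that bookkeeping, under the assumption that the request records its deadline this way (illustrative class, not the real GroupCommitRequest):

    // Illustrative only: a flush request's nanosecond deadline derived from a millisecond timeout,
    // which is what the watcher compares against System.nanoTime().
    public class FlushRequestSketch {
        private final long deadLine;

        FlushRequestSketch(long timeoutMillis) {
            this.deadLine = System.nanoTime() + timeoutMillis * 1_000_000L;
        }

        boolean timedOut() {
            return System.nanoTime() - deadLine >= 0;
        }

        public static void main(String[] args) throws InterruptedException {
            FlushRequestSketch req = new FlushRequestSketch(50); // 50 ms budget
            Thread.sleep(60);
            System.out.println(req.timedOut()); // true: the watcher would wake the caller with FLUSH_DISK_TIMEOUT
        }
    }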

    3. Common Methods of CommitLog

    3.1 How CommitLog writes messages - asyncPutMessage

    The core method of CommitLog is writing messages. Essentially it first sets some attributes on the message, such as the store timestamp; for delayed messages it also swaps the topic, backing up the real topic and replacing it with the schedule topic. It then obtains the last mapped file of the commitLog and calls its appendMessage method to write the message, which is why the commitLog is written strictly sequentially.

    public CompletableFuture<PutMessageResult> asyncPutMessage(final MessageExtBrokerInner msg) {
        // Set the storage time
        msg.setStoreTimestamp(System.currentTimeMillis());
        // Set the message body BODY CRC (consider the most appropriate setting
        // on the client)
        // compute the CRC of the message body and store it as bodyCrc
        msg.setBodyCRC(UtilAll.crc32(msg.getBody()));
        // Back to Results
        AppendMessageResult result = null;
        StoreStatsService storeStatsService = this.defaultMessageStore.getStoreStatsService();
        // topic of the message
        String topic = msg.getTopic();
        // int queueId msg.getQueueId();
        // for transactional messages, get the transaction type: prepared, commit or rollback
        final int tranType = MessageSysFlag.getTransactionValue(msg.getSysFlag());
        // non-transactional messages and commit messages
        if (tranType == MessageSysFlag.TRANSACTION_NOT_TYPE
            || tranType == MessageSysFlag.TRANSACTION_COMMIT_TYPE) {
            // Delay Delivery
            if (msg.getDelayTimeLevel() > 0) {
                // for delayed messages, cap the delay level at the maximum level supported by the broker
                if (msg.getDelayTimeLevel() > this.defaultMessageStore.getScheduleMessageService().getMaxDelayLevel()) {
                    msg.setDelayTimeLevel(this.defaultMessageStore.getScheduleMessageService().getMaxDelayLevel());
                }
                topic = TopicValidator.RMQ_SYS_SCHEDULE_TOPIC;
                // map the delay level to the queue id of the schedule topic the message will be put into
                int queueId = ScheduleMessageService.delayLevel2QueueId(msg.getDelayTimeLevel());
                // Backup real topic, queueId
                // the message is actually written into SCHEDULE_TOPIC_XXXX, so back up the real topic and queue id
                MessageAccessor.putProperty(msg, MessageConst.PROPERTY_REAL_TOPIC, msg.getTopic());
                MessageAccessor.putProperty(msg, MessageConst.PROPERTY_REAL_QUEUE_ID, String.valueOf(msg.getQueueId()));
                msg.setPropertiesString(MessageDecoder.messageProperties2String(msg.getProperties()));
                msg.setTopic(topic);
                msg.setQueueId(queueId);
            }
        }
        // if the born host / store host is an IPv6 address, set the corresponding flags
        InetSocketAddress bornSocketAddress = (InetSocketAddress) msg.getBornHost();
        if (bornSocketAddress.getAddress() instanceof Inet6Address) {
            msg.setBornHostV6Flag();
        }
        InetSocketAddress storeSocketAddress = (InetSocketAddress) msg.getStoreHost();
        if (storeSocketAddress.getAddress() instanceof Inet6Address) {
            msg.setStoreHostAddressV6Flag();
        }
        PutMessageThreadLocal putMessageThreadLocal = this.putMessageThreadLocal.get();
        updateMaxMessageSize(putMessageThreadLocal);
        if (!multiDispatch.isMultiDispatchMsg(msg)) {
            // get the message encoder and encode the message; the encoder is kept in a ThreadLocal to avoid lock contention
            PutMessageResult encodeResult = putMessageThreadLocal.getEncoder().encode(msg);
            if (encodeResult != null) {
                return CompletableFuture.completedFuture(encodeResult);
            }
            msg.setEncodedBuff(putMessageThreadLocal.getEncoder().getEncoderBuffer());
        }
        // build the put-message context
        PutMessageContext putMessageContext = new PutMessageContext(generateKey(putMessageThreadLocal.getKeyBuilder(), msg));
        long elapsedTimeInLock = 0;
        MappedFile unlockMappedFile = null;
        // start writing the message
        putMessageLock.lock(); //spin or ReentrantLock ,depending on store config
        try {
            MappedFile mappedFile = this.mappedFileQueue.getLastMappedFile();
            long beginLockTimestamp = this.defaultMessageStore.getSystemClock().now();
            this.beginTimeInLock = beginLockTimestamp;
            // Here settings are stored timestamp, in order to ensure an orderly
            // global
            msg.setStoreTimestamp(beginLockTimestamp);
            if (null == mappedFile || mappedFile.isFull()) {
                // get (or create) the last mapped file
                mappedFile = this.mappedFileQueue.getLastMappedFile(0); // Mark: NewFile may be cause noise
            }
            if (null == mappedFile) {
                log.error("create mapped file1 error, topic: " + msg.getTopic() + " clientAddr: " + msg.getBornHostString());
                return CompletableFuture.completedFuture(new PutMessageResult(PutMessageStatus.CREATE_MAPEDFILE_FAILED, null));
            }
            // call the mapped file's appendMessage method to write the message
            result = mappedFile.appendMessage(msg, this.appendMessageCallback, putMessageContext);
            switch (result.getStatus()) {
                case PUT_OK:
                    break;
                case END_OF_FILE:
                    unlockMappedFile = mappedFile;
                    // Create a new file, re-write the message
                    mappedFile = this.mappedFileQueue.getLastMappedFile(0);
                    if (null == mappedFile) {
                        // XXX: warn and notify me
                        log.error("create mapped file2 error, topic: " + msg.getTopic() + " clientAddr: " + msg.getBornHostString());
                        return CompletableFuture.completedFuture(new PutMessageResult(PutMessageStatus.CREATE_MAPEDFILE_FAILED, result));
                    }
                    result = mappedFile.appendMessage(msg, this.appendMessageCallback, putMessageContext);
                    break;
                case MESSAGE_SIZE_EXCEEDED:
                case PROPERTIES_SIZE_EXCEEDED:
                    return CompletableFuture.completedFuture(new PutMessageResult(PutMessageStatus.MESSAGE_ILLEGAL, result));
                case UNKNOWN_ERROR:
                    return CompletableFuture.completedFuture(new PutMessageResult(PutMessageStatus.UNKNOWN_ERROR, result));
                default:
                    return CompletableFuture.completedFuture(new PutMessageResult(PutMessageStatus.UNKNOWN_ERROR, result));
            }
            elapsedTimeInLock = this.defaultMessageStore.getSystemClock().now() - beginLockTimestamp;
        } finally {
            beginTimeInLock = 0;
            putMessageLock.unlock();
        }
        if (elapsedTimeInLock > 500) {
            log.warn("[NOTIFYME]putMessage in lock cost time(ms)={}, bodyLength={} AppendMessageResult={}", elapsedTimeInLock, msg.getBody().length, result);
        }
        if (null != unlockMappedFile && this.defaultMessageStore.getMessageStoreConfig().isWarmMapedFileEnable()) {
            this.defaultMessageStore.unlockMappedFile(unlockMappedFile);
        }
        PutMessageResult putMessageResult = new PutMessageResult(PutMessageStatus.PUT_OK, result);
        // Statistics
        storeStatsService.getSinglePutMessageTopicTimesTotal(msg.getTopic()).add(1);
        storeStatsService.getSinglePutMessageTopicSizeTotal(topic).add(result.getWroteBytes());
        // submit the flush request
        CompletableFuture<PutMessageStatus> flushResultFuture = submitFlushRequest(result, msg);
        // submit the replication request and combine both results into the final put result
        CompletableFuture<PutMessageStatus> replicaResultFuture = submitReplicaRequest(result, msg);
        return flushResultFuture.thenCombine(replicaResultFuture, (flushStatus, replicaStatus) -> {
            if (flushStatus != PutMessageStatus.PUT_OK) {
                putMessageResult.setPutMessageStatus(flushStatus);
            }
            if (replicaStatus != PutMessageStatus.PUT_OK) {
                putMessageResult.setPutMessageStatus(replicaStatus);
            }
            return putMessageResult;
        });
    }
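
    The tail of this method is worth a second look: the final result starts out as PUT_OK and is downgraded by either the flush future or the replication future. Below is a small self-contained sketch of that thenCombine pattern, using a plain enum and plain futures rather than the RocketMQ types:

    import java.util.concurrent.CompletableFuture;

    // Self-contained sketch of the combination at the end of asyncPutMessage:
    // the overall status starts as OK and is downgraded if either flush or replication fails.
    public class CombineStatusSketch {
        enum Status { PUT_OK, FLUSH_DISK_TIMEOUT, FLUSH_SLAVE_TIMEOUT }

        public static void main(String[] args) {
            CompletableFuture<Status> flush = CompletableFuture.completedFuture(Status.PUT_OK);
            CompletableFuture<Status> replica = CompletableFuture.completedFuture(Status.FLUSH_SLAVE_TIMEOUT);

            Status overall = flush.thenCombine(replica, (f, r) -> {
                Status s = Status.PUT_OK;
                if (f != Status.PUT_OK) s = f;   // flush problem is applied first
                if (r != Status.PUT_OK) s = r;   // replication problem then overrides it, as in the source
                return s;
            }).join();

            System.out.println(overall); // FLUSH_SLAVE_TIMEOUT
        }
    }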

    3.2 How CommitLog flushes messages - submitFlushRequest

    When CommitLog flushes to disk, it actually just submits a flush request. Depending on whether synchronous or asynchronous flush is configured, the request is consumed by the corresponding FlushCommitLogService, which finally calls the mapped file's flush method to move the data from the buffer to disk.

    /**
     * @param result result of appending the message to the mapped file
     * @param messageExt the message content, including headers etc.
     * @return
     */
    public CompletableFuture<PutMessageStatus> submitFlushRequest(AppendMessageResult result, MessageExt messageExt) {
        // Synchronization flush
        if (FlushDiskType.SYNC_FLUSH == this.defaultMessageStore.getMessageStoreConfig().getFlushDiskType()) {
            // synchronous flush is handled by GroupCommitService
            final GroupCommitService service = (GroupCommitService) this.flushCommitLogService;
            // by default the producer waits for the flush to succeed
            if (messageExt.isWaitStoreMsgOK()) {
                // build the flush request, carrying the write position of this message in the buffer
                GroupCommitRequest request = new GroupCommitRequest(result.getWroteOffset() + result.getWroteBytes(),
                    this.defaultMessageStore.getMessageStoreConfig().getSyncFlushTimeout());
                flushDiskWatcher.add(request);
                // push the flush request via GroupCommitService.putRequest
                service.putRequest(request);
                // the calling thread blocks on this future waiting for the flush result
                return request.future();
            } else {
                // if the caller does not wait for the flush, just wake the flush thread up and return success
                // (data may still be lost in this case)
                service.wakeup();
                return CompletableFuture.completedFuture(PutMessageStatus.PUT_OK);
            }
        }
        // Asynchronous flush
        else {
            // asynchronous flush policy
            if (!this.defaultMessageStore.getMessageStoreConfig().isTransientStorePoolEnable()) {
                // transient store pool disabled: wake the flush service thread
                flushCommitLogService.wakeup();
            } else {
                // transient store pool enabled: wake the commit service thread
                commitLogService.wakeup();
            }
            // return success immediately
            return CompletableFuture.completedFuture(PutMessageStatus.PUT_OK);
        }
    }
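
    The offset carried by the GroupCommitRequest (wroteOffset + wroteBytes) is what the flush service eventually compares against the global flushed position: once the flushed position reaches it, the request is satisfied. A hedged, self-contained sketch of that check, with illustrative names rather than the real GroupCommitService:

    // Illustrative sketch of the group-commit condition: a request is done once the
    // commitLog's flushed position has reached the end offset of the appended message.
    public class GroupCommitCheckSketch {
        static boolean flushSatisfied(long flushedWhere, long requestNextOffset) {
            return flushedWhere >= requestNextOffset;
        }

        public static void main(String[] args) {
            long wroteOffset = 4096, wroteBytes = 512;
            long nextOffset = wroteOffset + wroteBytes;             // offset the request waits for
            System.out.println(flushSatisfied(4096, nextOffset));   // false, flush not done yet
            System.out.println(flushSatisfied(4608, nextOffset));   // true
        }
    }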

    3.3 How RocketMQ implements master-slave replication

    RocketMQ used to achieve high availability with a master-slave architecture: the master handles writes and the commitLog is replicated to the slaves. The problem is that when the master goes down, a slave has to be promoted to master manually. This architecture is largely no longer used; instead RocketMQ now relies on the DLedger high-availability architecture based on the Raft protocol, which will be covered later. Below is a rough look at how master-slave replication works.

    3.3.1 Submitting a replication request - submitReplicaRequest

    public CompletableFuture<PutMessageStatus> submitReplicaRequest(AppendMessageResult result, MessageExt messageExt) {
        // only when this broker is a master configured for synchronous replication
        if (BrokerRole.SYNC_MASTER == this.defaultMessageStore.getMessageStoreConfig().getBrokerRole()) {
            HAService service = this.defaultMessageStore.getHaService();
            // only when the producer waits for the message to be stored successfully
            if (messageExt.isWaitStoreMsgOK()) {
                if (service.isSlaveOK(result.getWroteBytes() + result.getWroteOffset())) {
                    GroupCommitRequest request = new GroupCommitRequest(result.getWroteOffset() + result.getWroteBytes(),
                        this.defaultMessageStore.getMessageStoreConfig().getSlaveTimeout());
                    // submit a replication request to the HAService via putRequest
                    service.putRequest(request);
                    service.getWaitNotifyObject().wakeupAll();
                    return request.future();
                }
                else {
                    return CompletableFuture.completedFuture(PutMessageStatus.SLAVE_NOT_AVAILABLE);
                }
            }
        }
        return CompletableFuture.completedFuture(PutMessageStatus.PUT_OK);
    }
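
    The isSlaveOK check above essentially requires that at least one slave is connected and that the slave is not lagging too far behind the offset being written. A hedged, standalone sketch of that condition; the parameter names only mirror the ideas in HAService and are not the real fields:

    // Illustrative sketch of the slave-availability check a SYNC_MASTER performs before
    // it is willing to wait for replication of a freshly written message.
    public class SlaveOkSketch {
        static boolean isSlaveOK(int connectionCount, long masterPutWhere,
                                 long push2SlaveMaxOffset, long haSlaveFallbehindMax) {
            return connectionCount > 0
                && (masterPutWhere - push2SlaveMaxOffset) < haSlaveFallbehindMax;
        }

        public static void main(String[] args) {
            // one slave connected, 1 KB behind, allowed lag 256 MB: the write can wait for replication
            System.out.println(isSlaveOK(1, 1024 * 1024, 1024 * 1024 - 1024, 256L * 1024 * 1024)); // true
        }
    }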

    3.3.2 RocketMQ's master-slave replication component - HAService

    1. The master listens for connection requests from slaves and establishes the socket connections

    The code below is the standard NIO pattern for listening and accepting connections.

    public void run() {
        log.info(this.getServiceName() + " service started");
        while (!this.isStopped()) {
            try {
                this.selector.select(1000);
                Set<SelectionKey> selected = this.selector.selectedKeys();
                if (selected != null) {
                    for (SelectionKey k : selected) {
                        if ((k.readyOps() & SelectionKey.OP_ACCEPT) != 0) {
                            // accept the incoming connection reported by the selector
                            SocketChannel sc = ((ServerSocketChannel) k.channel()).accept();
                            if (sc != null) {
                                HAService.log.info("HAService receive new connection, "
                                    + sc.socket().getRemoteSocketAddress());
                                try {
                                    // with the channel to the slave in hand, construct an HAConnection
                                    HAConnection conn = new HAConnection(HAService.this, sc);
                                    conn.start();
                                    // add it to the connection list
                                    HAService.this.addConnection(conn);
                                } catch (Exception e) {
                                    log.error("new HAConnection exception", e);
                                    sc.close();
                                }
                            }
                        } else {
                            log.warn("Unexpected ops in select " + k.readyOps());
                        }
                    }
                    selected.clear();
                }
            } catch (Exception e) {
                log.error(this.getServiceName() + " service has exception.", e);
            }
        }
        log.info(this.getServiceName() + " service end");
    }

    2. Waiting for the commitLog data to reach the slave over the network - GroupTransferService

    public void run() {
        log.info(this.getServiceName() + " service started");
        while (!this.isStopped()) {
            try {
                this.waitForRunning(10);
                this.doWaitTransfer();
            } catch (Exception e) {
                log.warn(this.getServiceName() + " service has exception. ", e);
            }
        }
        log.info(this.getServiceName() + " service end");
    }
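
    doWaitTransfer itself is not shown here; conceptually it takes each pending replication request and waits until the offset acknowledged by the slave reaches the request's offset, or gives up after the timeout. A simplified, hedged sketch of that idea, with illustrative names rather than the real implementation:

    import java.util.function.LongSupplier;

    // Simplified sketch of the wait-for-slave idea: return PUT_OK once the slave-acknowledged
    // offset reaches the offset the replication request is waiting for, otherwise time out.
    public class WaitTransferSketch {
        static String waitTransfer(long requestNextOffset, LongSupplier slaveAckedOffset, long deadlineNanos)
                throws InterruptedException {
            while (slaveAckedOffset.getAsLong() < requestNextOffset) {
                if (System.nanoTime() >= deadlineNanos) {
                    return "FLUSH_SLAVE_TIMEOUT"; // the real service wakes the producer with this status
                }
                Thread.sleep(1); // the real service blocks on a wait/notify object instead of sleeping
            }
            return "PUT_OK";
        }

        public static void main(String[] args) throws InterruptedException {
            long deadline = System.nanoTime() + 5_000_000L; // 5 ms budget for this toy example
            // the slave already reports offset 200, so waiting for offset 100 succeeds immediately
            System.out.println(waitTransfer(100, () -> 200, deadline)); // PUT_OK
        }
    }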

    3. The slave reads the messages and appends them to its own commitLog - HAClient

    private boolean processReadEvent() {
        int readSizeZeroTimes = 0;
        while (this.byteBufferRead.hasRemaining()) {
            try {
                // read data from the network into byteBufferRead
                int readSize = this.socketChannel.read(this.byteBufferRead);
                if (readSize > 0) {
                    readSizeZeroTimes = 0;
                    // write the data from the byte buffer into the local commitLog
                    boolean result = this.dispatchReadRequest();
                    if (!result) {
                        log.error("HAClient, dispatchReadRequest error");
                        return false;
                    }
                } else if (readSize == 0) {
                    if (++readSizeZeroTimes >= 3) {
                        break;
                    }
                } else {
                    log.info("HAClient, processReadEvent read socket < 0");
                    return false;
                }
            } catch (IOException e) {
                log.info("HAClient, processReadEvent read socket exception", e);
                return false;
            }
        }
        return true;
    }

    dispatchReadRequest then parses the frames received from the master and appends their bodies to the slave's own commitLog:

    private boolean dispatchReadRequest() {
        // size of the transfer header
        final int msgHeaderSize = 8 + 4; // phyoffset + size
        while (true) {
            int diff = this.byteBufferRead.position() - this.dispatchPosition;
            // only proceed once a complete header has arrived
            if (diff >= msgHeaderSize) {
                // physical offset of this batch on the master
                long masterPhyOffset = this.byteBufferRead.getLong(this.dispatchPosition);
                // size of the message body
                int bodySize = this.byteBufferRead.getInt(this.dispatchPosition + 8);
                // current max physical offset already stored on the slave
                long slavePhyOffset = HAService.this.defaultMessageStore.getMaxPhyOffset();
                if (slavePhyOffset != 0) {
                    // if the slave offset does not match the master offset, return directly
                    if (slavePhyOffset != masterPhyOffset) {
                        log.error("master pushed offset not equal the max phy offset in slave, SLAVE: "
                            + slavePhyOffset + " MASTER: " + masterPhyOffset);
                        return false;
                    }
                }
                if (diff >= (msgHeaderSize + bodySize)) {
                    byte[] bodyData = byteBufferRead.array();
                    // start position of the message body
                    int dataStart = this.dispatchPosition + msgHeaderSize;
                    // append the body directly to the commitLog via appendToCommitLog
                    HAService.this.defaultMessageStore.appendToCommitLog(
                        masterPhyOffset, bodyData, dataStart, bodySize);
                    // advance the dispatch position
                    this.dispatchPosition += msgHeaderSize + bodySize;
                    if (!reportSlaveMaxOffsetPlus()) {
                        return false;
                    }
                    continue;
                }
            }
            if (!this.byteBufferRead.hasRemaining()) {
                this.reallocateByteBuffer();
            }
            break;
        }
        return true;
    }
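
    The 8 + 4 byte header parsed above implies a very simple transfer frame: the master's physical offset, the body size, then the raw commitLog bytes. A small self-contained sketch of building such a frame, purely to make the layout concrete (illustrative class, not the master-side code):

    import java.nio.ByteBuffer;

    // Sketch of the transfer frame the slave parses: 8-byte master physical offset,
    // 4-byte body size, followed by bodySize bytes of commitLog data.
    public class HaFrameSketch {
        static ByteBuffer buildFrame(long masterPhyOffset, byte[] body) {
            ByteBuffer frame = ByteBuffer.allocate(8 + 4 + body.length);
            frame.putLong(masterPhyOffset); // phyoffset
            frame.putInt(body.length);      // size
            frame.put(body);                // commitLog bytes starting at masterPhyOffset
            frame.flip();
            return frame;
        }

        public static void main(String[] args) {
            ByteBuffer frame = buildFrame(1_073_741_824L, new byte[]{1, 2, 3});
            System.out.println(frame.getLong(0) + " / bodySize=" + frame.getInt(8)); // 1073741824 / bodySize=3
        }
    }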

    4. Summary

    At this point, RocketMQ's core persistence component, the CommitLog, has been fully analyzed. In short, a CommitLog holds a MappedFileQueue, the MappedFileQueue manages the individual MappedFile instances, and each MappedFile is mapped to a file on disk through mmap. Each physical file on disk is named after its starting physical offset: the first file is named 00000000000000000000, and with a MappedFile size of 1 GB, once the first file is completely full the second file is named 00000000001073741824 (1024*1024*1024). Whenever a message arrives it is written into the commitLog for persistence, which in practice means fetching the last MappedFile and calling its appendMessage method to append the message.
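
    As a small illustration of the naming rule above, the file name is just the starting offset zero-padded to 20 digits; this mirrors what UtilAll.offset2FileName produces in RocketMQ, and the class here is only a sketch:

    // Illustrative sketch: commitLog files are named after their starting physical offset,
    // zero-padded to 20 digits, so file boundaries can be located from the name alone.
    public class CommitLogFileNameSketch {
        static String fileNameForOffset(long startOffset) {
            return String.format("%020d", startOffset);
        }

        public static void main(String[] args) {
            long mappedFileSize = 1024L * 1024 * 1024; // 1 GB per file
            System.out.println(fileNameForOffset(0));              // 00000000000000000000
            System.out.println(fileNameForOffset(mappedFileSize)); // 00000000001073741824
        }
    }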

  • Original article: https://blog.csdn.net/zhifou123456/article/details/139757610