【YOLOv5/v7 Improvement Series】Introducing RepC3 from RT-DETR


1. Introduction

RT-DETR (Real-Time Detection Transformer) is a novel approach to real-time object detection that aims to overcome the limitations of the YOLO family and of other Transformer-based detectors. Its main advantages are:

1. NMS-free detection: the performance of traditional YOLO detectors suffers from the non-maximum suppression (NMS) post-processing step. RT-DETR's end-to-end Transformer architecture removes the dependence on NMS, improving both efficiency and accuracy.

2. Efficient hybrid encoder: an efficient hybrid encoder decouples intra-scale interaction from cross-scale fusion when processing multi-scale features, greatly increasing processing speed. Information from different scales is handled quickly and efficiently without sacrificing accuracy.

3. Uncertainty-minimal query selection: a novel query selection mechanism supplies high-quality initial queries to the decoder, improving detection accuracy. By explicitly optimizing uncertainty, it avoids selecting features with low localization confidence as object queries, reducing uncertainty in the detection results.

4. Flexible speed tuning: RT-DETR adapts to different speed requirements simply by adjusting the number of decoder layers, with no retraining needed, which gives great flexibility in practical deployment.

5. Strong performance: on COCO, RT-DETR-R50 and RT-DETR-R101 reach 53.1% and 54.3% average precision (AP) at 108 and 74 frames per second (FPS) on a T4 GPU, respectively. This surpasses earlier state-of-the-art YOLO models and beats DINO-R50 in both speed and accuracy, running roughly 21x faster in FPS.

6. Gains from pre-training: after pre-training on Objects365, RT-DETR-R50 and RT-DETR-R101 improve further to 55.3% and 56.2% AP, showing substantial headroom.

7. Extensibility: RT-DETR and its model-scaling strategy broaden the technical landscape of real-time object detection, offering new possibilities beyond YOLO for a wide range of real-time applications.

In short, through these design innovations RT-DETR optimizes both speed and accuracy while remaining real-time, providing a new high-performing solution for real-time object detection.

2. What RepC3 brings
The RepC3 class

RepC3 is a module built on RepConv. It is a variant of the CSP (Cross Stage Partial) structure, commonly used in a network's bottleneck layers. Its main properties:

• Residual connection: similar to the residual structure in ResNet, RepC3 adds the output of the cv1 path (which runs through a stack of RepConv modules) to the cv2 shortcut path, which helps gradient propagation, speeds up training, and improves convergence.
• Efficient computation: several consecutive RepConv modules deliver strong representational power at modest computational cost. Each RepConv's branches add diversity during training, while the fused structure stays efficient at inference.
• Channel scaling: an expansion factor e allows the hidden channel count to be adjusted dynamically, balancing the model's depth and width against its computational cost and representational capacity.
Summary of advantages
• Efficient inference: thanks to branch fusion and the residual structure, RepC3 keeps accuracy high while optimizing inference speed, which suits real-time object detection.
• Flexibility and scalability: the modular design lets network depth and width be tuned as needed, offering a high degree of customization for different tasks and resource budgets.
• Balance of accuracy and cost: careful structural design lets RepC3 reach high detection accuracy at low computational cost, which is crucial for real-time applications.
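The re-parameterization idea behind RepConv can be checked numerically: convolution is linear, so the 3x3 branch, the zero-padded 1x1 branch, and the identity branch can all be summed into a single 3x3 kernel before convolving. A minimal NumPy sketch (single channel, no BatchNorm, purely for illustration):

```python
import numpy as np

np.random.seed(0)

def conv2d_same(x, k):
    """Naive stride-1 'same' convolution with a 3x3 kernel (zero padding of 1)."""
    xp = np.pad(x, 1)
    h, w = x.shape
    return np.array([[(xp[i:i + 3, j:j + 3] * k).sum() for j in range(w)]
                     for i in range(h)])

k3 = np.random.randn(3, 3)               # 3x3 branch
k1 = np.pad(np.random.randn(1, 1), 1)    # 1x1 branch, zero-padded to 3x3
kid = np.zeros((3, 3)); kid[1, 1] = 1.0  # identity branch written as a 3x3 kernel

x = np.random.randn(8, 8)
three_branches = conv2d_same(x, k3) + conv2d_same(x, k1) + conv2d_same(x, kid)
one_branch = conv2d_same(x, k3 + k1 + kid)
assert np.allclose(three_branches, one_branch)
```

Extended to multi-channel kernels and BatchNorm statistics, this kernel-merging step is what RepConv.get_equivalent_kernel_bias computes in the implementation below.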
3. Preparation

First, create a new file repc3.py under the models folder of YOLOv5/v7 and paste in the following code:

import numpy as np
import torch
import torch.nn as nn

from models.common import *


class RepConv(nn.Module):
    """
    RepConv is a basic rep-style block, including training and deploy status.
    This module is used in RT-DETR.
    Based on https://github.com/DingXiaoH/RepVGG/blob/main/repvgg.py
    """

    default_act = nn.SiLU()  # default activation

    def __init__(self, c1, c2, k=3, s=1, p=1, g=1, d=1, act=True, bn=False, deploy=False):
        """Initializes Light Convolution layer with inputs, outputs & optional activation function."""
        super().__init__()
        assert k == 3 and p == 1
        self.g = g
        self.c1 = c1
        self.c2 = c2
        self.act = self.default_act if act is True else act if isinstance(act, nn.Module) else nn.Identity()
        self.bn = nn.BatchNorm2d(num_features=c1) if bn and c2 == c1 and s == 1 else None
        self.conv1 = Conv(c1, c2, k, s, p=p, g=g, act=False)
        self.conv2 = Conv(c1, c2, 1, s, p=(p - k // 2), g=g, act=False)

    def forward_fuse(self, x):
        """Forward process."""
        return self.act(self.conv(x))

    def forward(self, x):
        """Forward process."""
        id_out = 0 if self.bn is None else self.bn(x)
        return self.act(self.conv1(x) + self.conv2(x) + id_out)

    def get_equivalent_kernel_bias(self):
        """Returns equivalent kernel and bias by adding 3x3 kernel, 1x1 kernel and identity kernel with their biases."""
        kernel3x3, bias3x3 = self._fuse_bn_tensor(self.conv1)
        kernel1x1, bias1x1 = self._fuse_bn_tensor(self.conv2)
        kernelid, biasid = self._fuse_bn_tensor(self.bn)
        return kernel3x3 + self._pad_1x1_to_3x3_tensor(kernel1x1) + kernelid, bias3x3 + bias1x1 + biasid

    def _pad_1x1_to_3x3_tensor(self, kernel1x1):
        """Pads a 1x1 tensor to a 3x3 tensor."""
        if kernel1x1 is None:
            return 0
        else:
            return torch.nn.functional.pad(kernel1x1, [1, 1, 1, 1])

    def _fuse_bn_tensor(self, branch):
        """Generates appropriate kernels and biases for convolution by fusing branches of the neural network."""
        if branch is None:
            return 0, 0
        if isinstance(branch, Conv):
            kernel = branch.conv.weight
            running_mean = branch.bn.running_mean
            running_var = branch.bn.running_var
            gamma = branch.bn.weight
            beta = branch.bn.bias
            eps = branch.bn.eps
        elif isinstance(branch, nn.BatchNorm2d):
            if not hasattr(self, 'id_tensor'):
                input_dim = self.c1 // self.g
                kernel_value = np.zeros((self.c1, input_dim, 3, 3), dtype=np.float32)
                for i in range(self.c1):
                    kernel_value[i, i % input_dim, 1, 1] = 1
                self.id_tensor = torch.from_numpy(kernel_value).to(branch.weight.device)
            kernel = self.id_tensor
            running_mean = branch.running_mean
            running_var = branch.running_var
            gamma = branch.weight
            beta = branch.bias
            eps = branch.eps
        std = (running_var + eps).sqrt()
        t = (gamma / std).reshape(-1, 1, 1, 1)
        return kernel * t, beta - running_mean * gamma / std

    def fuse_convs(self):
        """Combines two convolution layers into a single layer and removes unused attributes from the class."""
        if hasattr(self, 'conv'):
            return
        kernel, bias = self.get_equivalent_kernel_bias()
        self.conv = nn.Conv2d(in_channels=self.conv1.conv.in_channels,
                              out_channels=self.conv1.conv.out_channels,
                              kernel_size=self.conv1.conv.kernel_size,
                              stride=self.conv1.conv.stride,
                              padding=self.conv1.conv.padding,
                              dilation=self.conv1.conv.dilation,
                              groups=self.conv1.conv.groups,
                              bias=True).requires_grad_(False)
        self.conv.weight.data = kernel
        self.conv.bias.data = bias
        for para in self.parameters():
            para.detach_()
        self.__delattr__('conv1')
        self.__delattr__('conv2')
        if hasattr(self, 'nm'):
            self.__delattr__('nm')
        if hasattr(self, 'bn'):
            self.__delattr__('bn')
        if hasattr(self, 'id_tensor'):
            self.__delattr__('id_tensor')


class RepC3(nn.Module):
    """Rep C3."""

    def __init__(self, c1, c2, n=3, e=1.0):
        """Initialize CSP Bottleneck with a single convolution using input channels, output channels, and number."""
        super().__init__()
        c_ = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c1, c_, 1, 1)
        self.m = nn.Sequential(*[RepConv(c_, c_) for _ in range(n)])
        self.cv3 = Conv(c_, c2, 1, 1) if c_ != c2 else nn.Identity()

    def forward(self, x):
        """Forward pass of RT-DETR neck layer."""
        return self.cv3(self.m(self.cv1(x)) + self.cv2(x))
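The _fuse_bn_tensor step above folds each branch's BatchNorm into the convolution weights with w' = w * gamma/std and b' = beta - mean * gamma/std. A quick standalone check of that algebra (assumes PyTorch is installed; the layer sizes here are arbitrary, not taken from the article):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A conv followed by BatchNorm, as inside each RepConv branch.
conv = nn.Conv2d(4, 8, 3, padding=1, bias=False)
bn = nn.BatchNorm2d(8).eval()        # eval mode: BN uses its running statistics
bn.running_mean.uniform_(-1.0, 1.0)  # give BN non-trivial statistics
bn.running_var.uniform_(0.5, 1.5)
bn.weight.data.uniform_(0.5, 1.5)
bn.bias.data.uniform_(-1.0, 1.0)

# Fold BN into the conv: w' = w * gamma/std, b' = beta - mean*gamma/std
# (the same algebra _fuse_bn_tensor applies per branch).
fused = nn.Conv2d(4, 8, 3, padding=1, bias=True)
with torch.no_grad():
    std = (bn.running_var + bn.eps).sqrt()
    t = (bn.weight / std).reshape(-1, 1, 1, 1)
    fused.weight.copy_(conv.weight * t)
    fused.bias.copy_(bn.bias - bn.running_mean * bn.weight / std)

x = torch.randn(2, 4, 16, 16)
with torch.no_grad():
    assert torch.allclose(bn(conv(x)), fused(x), atol=1e-5)
```

fuse_convs then sums the three already-fused branches into the single 3x3 convolution used at deploy time.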

Next, in models/yolo.py of the YOLOv5/v7 project, add the following import at the top of the file:

from models.repc3 import RepC3

Then search for def parse_model(d, ch)

and, at the module-list line inside it, add:

RepC3,
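For orientation: in parse_model, the modules whose first YAML argument is interpreted as an output channel count are gathered into one membership test, and RepC3 is appended to that list. The exact set of modules differs between YOLOv5/v7 versions, so the snippet below is illustrative rather than verbatim:

```python
# inside parse_model() in models/yolo.py -- module list varies by version
if m in [Conv, Bottleneck, SPP, SPPF, C3, RepC3]:  # append RepC3 to this list
    c1, c2 = ch[f], args[0]
```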

4. Improving YOLOv7-tiny

After completing the preparation above, create a new file yolov7-tiny-repc3.yaml under the models folder of the YOLOv7 project and paste in the following code.

# parameters
nc: 80  # number of classes
depth_multiple: 1.0  # model depth multiple
width_multiple: 1.0  # layer channel multiple

# anchors
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# yolov7-tiny backbone
backbone:
  # [from, number, module, args] c2, k=1, s=1, p=None, g=1, act=True
  [[-1, 1, Conv, [32, 3, 2, None, 1, nn.LeakyReLU(0.1)]],  # 0-P1/2
   [-1, 1, Conv, [64, 3, 2, None, 1, nn.LeakyReLU(0.1)]],  # 1-P2/4
   [-1, 1, Conv, [32, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-2, 1, Conv, [32, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [32, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [32, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, -2, -3, -4], 1, Concat, [1]],
   [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # 7
   [-1, 1, MP, []],  # 8-P3/8
   [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-2, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, -2, -3, -4], 1, Concat, [1]],
   [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # 14
   [-1, 1, MP, []],  # 15-P4/16
   [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-2, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [128, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [128, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, -2, -3, -4], 1, Concat, [1]],
   [-1, 1, Conv, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # 21
   [-1, 1, MP, []],  # 22-P5/32
   [-1, 1, Conv, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-2, 1, Conv, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [256, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [256, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, -2, -3, -4], 1, Concat, [1]],
   [-1, 1, Conv, [512, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # 28
  ]

# yolov7-tiny head
head:
  [[-1, 1, v7tiny_SPP, [256]],  # 29
   [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [21, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # route backbone P4
   [[-1, -2], 1, Concat, [1]],
   [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-2, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, -2, -3, -4], 1, Concat, [1]],
   [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # 39
   [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [14, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # route backbone P3
   [[-1, -2], 1, Concat, [1]],
   [-1, 1, Conv, [32, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-2, 1, Conv, [32, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [32, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [32, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, -2, -3, -4], 1, Concat, [1]],
   [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # 49
   [-1, 1, Conv, [128, 3, 2, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, 39], 1, Concat, [1]],
   [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-2, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, -2, -3, -4], 1, Concat, [1]],
   [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # 57
   [-1, 1, Conv, [256, 3, 2, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, 29], 1, Concat, [1]],
   [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-2, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [128, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [128, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, -2, -3, -4], 1, Concat, [1]],
   [-1, 3, RepC3, [256, 0.5]],  # 65
   [49, 1, Conv, [128, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [57, 1, Conv, [256, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [65, 1, Conv, [512, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [[66, 67, 68], 1, IDetect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]
from n params module arguments
0 -1 1 928 models.common.Conv [3, 32, 3, 2, None, 1, LeakyReLU(negative_slope=0.1)]
1 -1 1 18560 models.common.Conv [32, 64, 3, 2, None, 1, LeakyReLU(negative_slope=0.1)]
2 -1 1 2112 models.common.Conv [64, 32, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
3 -2 1 2112 models.common.Conv [64, 32, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
4 -1 1 9280 models.common.Conv [32, 32, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
5 -1 1 9280 models.common.Conv [32, 32, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
6 [-1, -2, -3, -4] 1 0 models.common.Concat [1]
7 -1 1 8320 models.common.Conv [128, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
8 -1 1 0 models.common.MP []
9 -1 1 4224 models.common.Conv [64, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
10 -2 1 4224 models.common.Conv [64, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
11 -1 1 36992 models.common.Conv [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
12 -1 1 36992 models.common.Conv [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
13 [-1, -2, -3, -4] 1 0 models.common.Concat [1]
14 -1 1 33024 models.common.Conv [256, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
15 -1 1 0 models.common.MP []
16 -1 1 16640 models.common.Conv [128, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
17 -2 1 16640 models.common.Conv [128, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
18 -1 1 147712 models.common.Conv [128, 128, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
19 -1 1 147712 models.common.Conv [128, 128, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
20 [-1, -2, -3, -4] 1 0 models.common.Concat [1]
21 -1 1 131584 models.common.Conv [512, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
22 -1 1 0 models.common.MP []
23 -1 1 66048 models.common.Conv [256, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
24 -2 1 66048 models.common.Conv [256, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
25 -1 1 590336 models.common.Conv [256, 256, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
26 -1 1 590336 models.common.Conv [256, 256, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
27 [-1, -2, -3, -4] 1 0 models.common.Concat [1]
28 -1 1 525312 models.common.Conv [1024, 512, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
29 -1 1 657408 models.common.v7tiny_SPP [512, 256]
30 -1 1 33024 models.common.Conv [256, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
31 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
32 21 1 33024 models.common.Conv [256, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
33 [-1, -2] 1 0 models.common.Concat [1]
34 -1 1 16512 models.common.Conv [256, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
35 -2 1 16512 models.common.Conv [256, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
36 -1 1 36992 models.common.Conv [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
37 -1 1 36992 models.common.Conv [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
38 [-1, -2, -3, -4] 1 0 models.common.Concat [1]
39 -1 1 33024 models.common.Conv [256, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
40 -1 1 8320 models.common.Conv [128, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
41 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
42 14 1 8320 models.common.Conv [128, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
43 [-1, -2] 1 0 models.common.Concat [1]
44 -1 1 4160 models.common.Conv [128, 32, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
45 -2 1 4160 models.common.Conv [128, 32, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
46 -1 1 9280 models.common.Conv [32, 32, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
47 -1 1 9280 models.common.Conv [32, 32, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
48 [-1, -2, -3, -4] 1 0 models.common.Concat [1]
49 -1 1 8320 models.common.Conv [128, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
50 -1 1 73984 models.common.Conv [64, 128, 3, 2, None, 1, LeakyReLU(negative_slope=0.1)]
51 [-1, 39] 1 0 models.common.Concat [1]
52 -1 1 16512 models.common.Conv [256, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
53 -2 1 16512 models.common.Conv [256, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
54 -1 1 36992 models.common.Conv [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
55 -1 1 36992 models.common.Conv [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
56 [-1, -2, -3, -4] 1 0 models.common.Concat [1]
57 -1 1 33024 models.common.Conv [256, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
58 -1 1 295424 models.common.Conv [128, 256, 3, 2, None, 1, LeakyReLU(negative_slope=0.1)]
59 [-1, 29] 1 0 models.common.Concat [1]
60 -1 1 65792 models.common.Conv [512, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
61 -2 1 65792 models.common.Conv [512, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
62 -1 1 147712 models.common.Conv [128, 128, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
63 -1 1 147712 models.common.Conv [128, 128, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
64 [-1, -2, -3, -4] 1 0 models.common.Concat [1]
65 -1 1 657920 models.repc3.RepC3 [512, 256, 3, 0.5]
66 49 1 73984 models.common.Conv [64, 128, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
67 57 1 295424 models.common.Conv [128, 256, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
68 65 1 1180672 models.common.Conv [256, 512, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
69 [66, 67, 68] 1 17132 models.yolo.IDetect [1, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]
Model Summary: 298 layers, 6541324 parameters, 6541324 gradients, 13.6 GFLOPS

If running the model prints the text above, the modification was successful.

5. Improving YOLOv5s

After completing the preparation above, create a new file yolov5s-repc3.yaml under the models folder of the YOLOv5 project and paste in the following code.

# Parameters
nc: 80  # number of classes
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.50  # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 v6.0 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, C3, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 6, C3, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
   [-1, 3, C3, [1024]],
   [-1, 1, SPPF, [1024, 5]],  # 9
  ]

# YOLOv5 v6.0 head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, C3, [512, False]],  # 13
   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, RepC3, [256, 0.5]],  # 17 (P3/8-small)
   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 14], 1, Concat, [1]],  # cat head P4
   [-1, 3, C3, [512, False]],  # 20 (P4/16-medium)
   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 10], 1, Concat, [1]],  # cat head P5
   [-1, 3, C3, [1024, False]],  # 23 (P5/32-large)
   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]
from n params module arguments
0 -1 1 3520 models.common.Conv [3, 32, 6, 2, 2]
1 -1 1 18560 models.common.Conv [32, 64, 3, 2]
2 -1 1 18816 models.common.C3 [64, 64, 1]
3 -1 1 73984 models.common.Conv [64, 128, 3, 2]
4 -1 2 115712 models.common.C3 [128, 128, 2]
5 -1 1 295424 models.common.Conv [128, 256, 3, 2]
6 -1 3 625152 models.common.C3 [256, 256, 3]
7 -1 1 1180672 models.common.Conv [256, 512, 3, 2]
8 -1 1 1182720 models.common.C3 [512, 512, 1]
9 -1 1 656896 models.common.SPPF [512, 512, 5]
10 -1 1 131584 models.common.Conv [512, 256, 1, 1]
11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
12 [-1, 6] 1 0 models.common.Concat [1]
13 -1 1 361984 models.common.C3 [512, 256, 1, False]
14 -1 1 33024 models.common.Conv [256, 128, 1, 1]
15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
16 [-1, 4] 1 0 models.common.Concat [1]
17 -1 1 82688 models.repc3.RepC3 [256, 128, 1, 0.5]
18 -1 1 147712 models.common.Conv [128, 128, 3, 2]
19 [-1, 14] 1 0 models.common.Concat [1]
20 -1 1 296448 models.common.C3 [256, 256, 1, False]
21 -1 1 590336 models.common.Conv [256, 256, 3, 2]
22 [-1, 10] 1 0 models.common.Concat [1]
23 -1 1 1182720 models.common.C3 [512, 512, 1, False]
24 [17, 20, 23] 1 16182 models.yolo.Detect [1, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]
Model Summary: 271 layers, 7014134 parameters, 7014134 gradients, 15.8 GFLOPs

If running the model prints the text above, the modification was successful.
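As a sanity check on the RepC3 entry in the summary above (RepC3 [256, 128, 1, 0.5]), the scaled channel and depth values can be worked out by hand. The make_divisible helper below mirrors YOLOv5's rounding of scaled channel counts; the input channels (256) come from concatenating the two 128-channel P3 routes:

```python
import math

def make_divisible(x, divisor=8):
    """YOLOv5 rounds scaled channel counts up to a multiple of the divisor."""
    return math.ceil(x / divisor) * divisor

depth_multiple, width_multiple = 0.33, 0.50  # yolov5s scaling factors
c2 = make_divisible(256 * width_multiple)    # output channels of RepC3
n = max(round(3 * depth_multiple), 1)        # number of RepConv repeats
hidden = int(c2 * 0.5)                       # hidden channels inside RepC3 (e = 0.5)
assert (c2, n, hidden) == (128, 1, 64)
```

So each RepConv inside this RepC3 operates on 64 channels, and cv3 projects back up to 128.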

6. Improving YOLOv5n

After completing the preparation above, create a new file yolov5n-repc3.yaml under the models folder of the YOLOv5 project and paste in the following code.

# Parameters
nc: 80  # number of classes
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.25  # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 v6.0 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, C3, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 6, C3, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
   [-1, 3, C3, [1024]],
   [-1, 1, SPPF, [1024, 5]],  # 9
  ]

# YOLOv5 v6.0 head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, C3, [512, False]],  # 13
   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, RepC3, [256, 0.5]],  # 17 (P3/8-small)
   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 14], 1, Concat, [1]],  # cat head P4
   [-1, 3, C3, [512, False]],  # 20 (P4/16-medium)
   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 10], 1, Concat, [1]],  # cat head P5
   [-1, 3, C3, [1024, False]],  # 23 (P5/32-large)
   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]
from n params module arguments
0 -1 1 1760 models.common.Conv [3, 16, 6, 2, 2]
1 -1 1 4672 models.common.Conv [16, 32, 3, 2]
2 -1 1 4800 models.common.C3 [32, 32, 1]
3 -1 1 18560 models.common.Conv [32, 64, 3, 2]
4 -1 2 29184 models.common.C3 [64, 64, 2]
5 -1 1 73984 models.common.Conv [64, 128, 3, 2]
6 -1 3 156928 models.common.C3 [128, 128, 3]
7 -1 1 295424 models.common.Conv [128, 256, 3, 2]
8 -1 1 296448 models.common.C3 [256, 256, 1]
9 -1 1 164608 models.common.SPPF [256, 256, 5]
10 -1 1 33024 models.common.Conv [256, 128, 1, 1]
11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
12 [-1, 6] 1 0 models.common.Concat [1]
13 -1 1 90880 models.common.C3 [256, 128, 1, False]
14 -1 1 8320 models.common.Conv [128, 64, 1, 1]
15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
16 [-1, 4] 1 0 models.common.Concat [1]
17 -1 1 20864 models.repc3.RepC3 [128, 64, 1, 0.5]
18 -1 1 36992 models.common.Conv [64, 64, 3, 2]
19 [-1, 14] 1 0 models.common.Concat [1]
20 -1 1 74496 models.common.C3 [128, 128, 1, False]
21 -1 1 147712 models.common.Conv [128, 128, 3, 2]
22 [-1, 10] 1 0 models.common.Concat [1]
23 -1 1 296448 models.common.C3 [256, 256, 1, False]
24 [17, 20, 23] 1 8118 models.yolo.Detect [1, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [64, 128, 256]]
Model Summary: 271 layers, 1763222 parameters, 1763222 gradients, 4.2 GFLOPs

If running the model prints the output above, the modification was successful.

More articles are on the way, with a focus on brevity and accuracy. Follow me and let's explore together!

Original article: https://blog.csdn.net/2401_84870184/article/details/140028178