
RT-DETR (Real-Time Detection Transformer) is an approach to real-time object detection designed to overcome the limitations of the YOLO family and of other Transformer-based detectors. Its main advantages include:
NMS-free detection: the performance of traditional YOLO-style detectors is hurt by the non-maximum suppression (NMS) post-processing step. RT-DETR's end-to-end Transformer architecture removes the dependency on NMS entirely, improving both efficiency and accuracy.
Efficient hybrid encoder: RT-DETR introduces an efficient hybrid encoder that decouples intra-scale interaction from cross-scale fusion when processing multi-scale features, greatly increasing processing speed. It handles information from different scales quickly and efficiently while preserving accuracy.
Uncertainty-minimal query selection: a novel query-selection mechanism supplies the decoder with high-quality initial queries, improving detection accuracy. By explicitly optimizing for uncertainty, it avoids selecting features with low localization confidence as object queries, reducing uncertainty in the final detections.
Flexible speed tuning: RT-DETR adapts to the speed requirements of different deployment scenarios simply by adjusting the number of decoder layers, with no retraining, which gives great flexibility in practice.
Excellent performance: on the COCO dataset, RT-DETR-R50 and RT-DETR-R101 reach 53.1% and 54.3% average precision (AP) while running at 108 and 74 frames per second (FPS) on a T4 GPU, respectively. This not only surpasses prior state-of-the-art YOLO models but also beats DINO-R50 in both speed and accuracy, with roughly 21x higher FPS.
Gains from pre-training: after pre-training on the Objects365 dataset, RT-DETR-R50 and RT-DETR-R101 improve further to 55.3% and 56.2% AP, showing substantial headroom for performance gains.
Scalability: RT-DETR and its model-scaling strategy widen the technical landscape of real-time object detection, offering an alternative to YOLO for diverse real-time application scenarios.
In summary, through this innovative design RT-DETR optimizes the speed-accuracy trade-off while remaining real-time, providing a new, high-performing solution for real-time object detection.
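The decoder-layer speed knob mentioned above can be illustrated with a toy sketch (purely conceptual, not RT-DETR's actual implementation): because each decoder layer iteratively refines the same set of object queries, inference can simply stop after the first k layers to trade a little accuracy for speed, with no retraining.

```python
def run_decoder(layers, queries, k=None):
    """Run only the first k decoder layers (all layers if k is None)."""
    for layer in layers[:k]:
        queries = layer(queries)
    return queries

# Toy stand-ins for decoder layers: each appends its refinement step.
layers = [lambda q, i=i: q + [i] for i in range(6)]

print(run_decoder(layers, []))       # full 6-layer decoder -> [0, 1, 2, 3, 4, 5]
print(run_decoder(layers, [], k=3))  # faster 3-layer variant -> [0, 1, 2]
```

Because every layer consumes and produces queries of the same shape, truncating the stack at inference time is always shape-compatible; only the amount of refinement changes.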

RepC3 is a module built on RepConv. It is a variant of the CSP (Cross Stage Partial) structure and is typically used in a network's bottleneck layers.
To integrate it, first create a new file repc3.py under the models folder of YOLOv5/v7 and add the following code:
import numpy as np
import torch
import torch.nn as nn

from models.common import *  # brings in Conv (conv + BN + activation)


class RepConv(nn.Module):
    """
    RepConv is a basic rep-style block, including training and deploy status.
    This module is used in RT-DETR.
    Based on https://github.com/DingXiaoH/RepVGG/blob/main/repvgg.py
    """
    default_act = nn.SiLU()  # default activation

    def __init__(self, c1, c2, k=3, s=1, p=1, g=1, d=1, act=True, bn=False, deploy=False):
        """Initializes RepConv with input/output channels and an optional activation function."""
        super().__init__()
        assert k == 3 and p == 1
        self.g = g
        self.c1 = c1
        self.c2 = c2
        self.act = self.default_act if act is True else act if isinstance(act, nn.Module) else nn.Identity()

        # Identity (BN-only) branch, only valid when the block preserves shape
        self.bn = nn.BatchNorm2d(num_features=c1) if bn and c2 == c1 and s == 1 else None
        self.conv1 = Conv(c1, c2, k, s, p=p, g=g, act=False)  # 3x3 branch
        self.conv2 = Conv(c1, c2, 1, s, p=(p - k // 2), g=g, act=False)  # 1x1 branch

    def forward_fuse(self, x):
        """Forward pass in deploy mode, using the single fused convolution."""
        return self.act(self.conv(x))

    def forward(self, x):
        """Forward pass in training mode: sum of the 3x3, 1x1 and identity branches."""
        id_out = 0 if self.bn is None else self.bn(x)
        return self.act(self.conv1(x) + self.conv2(x) + id_out)

    def get_equivalent_kernel_bias(self):
        """Returns equivalent kernel and bias by adding the 3x3 kernel, 1x1 kernel and identity kernel with their biases."""
        kernel3x3, bias3x3 = self._fuse_bn_tensor(self.conv1)
        kernel1x1, bias1x1 = self._fuse_bn_tensor(self.conv2)
        kernelid, biasid = self._fuse_bn_tensor(self.bn)
        return kernel3x3 + self._pad_1x1_to_3x3_tensor(kernel1x1) + kernelid, bias3x3 + bias1x1 + biasid

    def _pad_1x1_to_3x3_tensor(self, kernel1x1):
        """Pads a 1x1 kernel tensor to 3x3 with zeros."""
        if kernel1x1 is None:
            return 0
        else:
            return torch.nn.functional.pad(kernel1x1, [1, 1, 1, 1])

    def _fuse_bn_tensor(self, branch):
        """Fuses a branch's BatchNorm into an equivalent convolution kernel and bias."""
        if branch is None:
            return 0, 0
        if isinstance(branch, Conv):
            kernel = branch.conv.weight
            running_mean = branch.bn.running_mean
            running_var = branch.bn.running_var
            gamma = branch.bn.weight
            beta = branch.bn.bias
            eps = branch.bn.eps
        elif isinstance(branch, nn.BatchNorm2d):
            # Build the identity branch as an equivalent 3x3 kernel (1 at the center)
            if not hasattr(self, 'id_tensor'):
                input_dim = self.c1 // self.g
                kernel_value = np.zeros((self.c1, input_dim, 3, 3), dtype=np.float32)
                for i in range(self.c1):
                    kernel_value[i, i % input_dim, 1, 1] = 1
                self.id_tensor = torch.from_numpy(kernel_value).to(branch.weight.device)
            kernel = self.id_tensor
            running_mean = branch.running_mean
            running_var = branch.running_var
            gamma = branch.weight
            beta = branch.bias
            eps = branch.eps
        std = (running_var + eps).sqrt()
        t = (gamma / std).reshape(-1, 1, 1, 1)
        return kernel * t, beta - running_mean * gamma / std

    def fuse_convs(self):
        """Combines the parallel branches into a single convolution and removes the now-unused attributes."""
        if hasattr(self, 'conv'):
            return
        kernel, bias = self.get_equivalent_kernel_bias()
        self.conv = nn.Conv2d(in_channels=self.conv1.conv.in_channels,
                              out_channels=self.conv1.conv.out_channels,
                              kernel_size=self.conv1.conv.kernel_size,
                              stride=self.conv1.conv.stride,
                              padding=self.conv1.conv.padding,
                              dilation=self.conv1.conv.dilation,
                              groups=self.conv1.conv.groups,
                              bias=True).requires_grad_(False)
        self.conv.weight.data = kernel
        self.conv.bias.data = bias
        for para in self.parameters():
            para.detach_()
        self.__delattr__('conv1')
        self.__delattr__('conv2')
        if hasattr(self, 'nm'):
            self.__delattr__('nm')
        if hasattr(self, 'bn'):
            self.__delattr__('bn')
        if hasattr(self, 'id_tensor'):
            self.__delattr__('id_tensor')


class RepC3(nn.Module):
    """Rep C3: CSP-style block whose bottleneck is a stack of RepConv layers."""

    def __init__(self, c1, c2, n=3, e=1.0):
        """Initialize with input channels c1, output channels c2, repeat count n and expansion ratio e."""
        super().__init__()
        c_ = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c1, c_, 1, 1)
        self.m = nn.Sequential(*[RepConv(c_, c_) for _ in range(n)])
        self.cv3 = Conv(c_, c2, 1, 1) if c_ != c2 else nn.Identity()

    def forward(self, x):
        """Forward pass of the RT-DETR neck layer."""
        return self.cv3(self.m(self.cv1(x)) + self.cv2(x))
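To make the re-parameterization math in get_equivalent_kernel_bias concrete, the following standalone NumPy sketch (my own minimal re-derivation, independent of the classes above) checks that folding inference-mode BatchNorm into the conv weights (w' = w·γ/σ, b' = β − μ·γ/σ) and zero-padding the 1x1 kernel to 3x3 reproduces the multi-branch output exactly:

```python
import numpy as np

def conv2d(x, w, b, pad):
    """Naive stride-1 2D convolution. x: (C_in, H, W), w: (C_out, C_in, k, k)."""
    c_out, _, k, _ = w.shape
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    H, W = x.shape[1], x.shape[2]
    out = np.zeros((c_out, H, W))
    for o in range(c_out):
        for i in range(H):
            for j in range(W):
                out[o, i, j] = np.sum(xp[:, i:i + k, j:j + k] * w[o]) + b[o]
    return out

def bn(z, mean, var, gamma, beta, eps=1e-5):
    """Inference-mode BatchNorm applied per channel (axis 0)."""
    std = np.sqrt(var + eps)
    return gamma[:, None, None] * (z - mean[:, None, None]) / std[:, None, None] + beta[:, None, None]

def fuse(w, mean, var, gamma, beta, eps=1e-5):
    """Fold BN into the conv: w' = w * gamma/std, b' = beta - mean * gamma/std."""
    std = np.sqrt(var + eps)
    return w * (gamma / std).reshape(-1, 1, 1, 1), beta - mean * gamma / std

rng = np.random.default_rng(0)
c = 4
x = rng.standard_normal((c, 5, 5))

def make_branch(k):
    """Random conv weight plus BN statistics/affine parameters."""
    return (rng.standard_normal((c, c, k, k)), rng.standard_normal(c),
            rng.random(c) + 0.5, rng.standard_normal(c), rng.standard_normal(c))

w3, *bn3 = make_branch(3)  # 3x3 branch (pad=1)
w1, *bn1 = make_branch(1)  # 1x1 branch (pad=0)

# Multi-branch (training-style) output: BN(conv3x3(x)) + BN(conv1x1(x)).
zero = np.zeros(c)
y_multi = bn(conv2d(x, w3, zero, 1), *bn3) + bn(conv2d(x, w1, zero, 0), *bn1)

# Fused single 3x3 conv: fold each BN, pad the 1x1 kernel to 3x3, sum kernels and biases.
k3, b3 = fuse(w3, *bn3)
k1, b1 = fuse(w1, *bn1)
k1 = np.pad(k1, ((0, 0), (0, 0), (1, 1), (1, 1)))  # 1x1 -> 3x3
y_fused = conv2d(x, k3 + k1, b3 + b1, 1)

assert np.allclose(y_multi, y_fused)
```

This is exactly why deploy-mode RepConv (forward_fuse) can use a single 3x3 convolution: convolution is linear, so per-channel BN scaling can be absorbed into the kernel, and a zero-padded 1x1 kernel behaves identically to the original 1x1 branch.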
Next, in models/yolo.py of the YOLOv5/v7 project, add the following import at the top of the file:
from models.repc3 import RepC3
Then search for def parse_model(d, ch) and register the new module by adding the entry below to the module lists found there (add it both to the membership check that computes the channel arguments and, alongside C3, to the check that inserts the repeat count n into args, so that RepC3 is constructed as (c1, c2, n, e)):
RepC3,
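To see what this registration does, here is a simplified, self-contained sketch (an illustrative mini version with assumptions, not YOLOv5's actual parse_model, which also applies make_divisible and the depth multiple) of how a YAML entry for RepC3 is expanded into constructor arguments:

```python
def expand_args(entry, ch, gw=1.0, repeat_modules=("C3", "RepC3")):
    """Sketch of parse_model's argument handling: c1 comes from the source
    layer's channels, c2 from args[0] scaled by the width multiple, and for
    C3-style modules the repeat count n is inserted at index 2."""
    f, n, m, args = entry
    c1 = ch[f]
    c2 = int(args[0] * gw)          # width multiple (make_divisible omitted)
    args = [c1, c2, *args[1:]]
    if m in repeat_modules:
        args.insert(2, n)           # RepC3 signature: (c1, c2, n, e)
    return args

# The yaml line `[-1, 3, RepC3, [256, 0.5]]` after a 512-channel Concat:
print(expand_args([-1, 3, "RepC3", [256, 0.5]], ch={-1: 512}))
# -> [512, 256, 3, 0.5]
```

This matches the RepC3 line printed in the model summaries below, which is a quick way to confirm the registration worked.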

After completing step 2, create a new file named yolov7-tiny-repc3.yaml under the models folder of the YOLOv7 project with the following content.
# parameters
nc: 80  # number of classes
depth_multiple: 1.0  # model depth multiple
width_multiple: 1.0  # layer channel multiple

# anchors
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# yolov7-tiny backbone
backbone:
  # [from, number, module, args] c2, k=1, s=1, p=None, g=1, act=True
  [[-1, 1, Conv, [32, 3, 2, None, 1, nn.LeakyReLU(0.1)]],  # 0-P1/2

   [-1, 1, Conv, [64, 3, 2, None, 1, nn.LeakyReLU(0.1)]],  # 1-P2/4

   [-1, 1, Conv, [32, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-2, 1, Conv, [32, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [32, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [32, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, -2, -3, -4], 1, Concat, [1]],
   [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # 7

   [-1, 1, MP, []],  # 8-P3/8
   [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-2, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, -2, -3, -4], 1, Concat, [1]],
   [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # 14

   [-1, 1, MP, []],  # 15-P4/16
   [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-2, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [128, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [128, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, -2, -3, -4], 1, Concat, [1]],
   [-1, 1, Conv, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # 21

   [-1, 1, MP, []],  # 22-P5/32
   [-1, 1, Conv, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-2, 1, Conv, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [256, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [256, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, -2, -3, -4], 1, Concat, [1]],
   [-1, 1, Conv, [512, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # 28
  ]

# yolov7-tiny head
head:
  [[-1, 1, v7tiny_SPP, [256]],  # 29

   [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [21, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # route backbone P4
   [[-1, -2], 1, Concat, [1]],

   [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-2, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, -2, -3, -4], 1, Concat, [1]],
   [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # 39

   [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [14, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # route backbone P3
   [[-1, -2], 1, Concat, [1]],

   [-1, 1, Conv, [32, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-2, 1, Conv, [32, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [32, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [32, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, -2, -3, -4], 1, Concat, [1]],
   [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # 49

   [-1, 1, Conv, [128, 3, 2, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, 39], 1, Concat, [1]],

   [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-2, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, -2, -3, -4], 1, Concat, [1]],
   [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # 57

   [-1, 1, Conv, [256, 3, 2, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, 29], 1, Concat, [1]],

   [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-2, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [128, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [128, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, -2, -3, -4], 1, Concat, [1]],
   [-1, 3, RepC3, [256, 0.5]],  # 65

   [49, 1, Conv, [128, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [57, 1, Conv, [256, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [65, 1, Conv, [512, 3, 1, None, 1, nn.LeakyReLU(0.1)]],

   [[66, 67, 68], 1, IDetect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]

from n params module arguments
0 -1 1 928 models.common.Conv [3, 32, 3, 2, None, 1, LeakyReLU(negative_slope=0.1)]
1 -1 1 18560 models.common.Conv [32, 64, 3, 2, None, 1, LeakyReLU(negative_slope=0.1)]
2 -1 1 2112 models.common.Conv [64, 32, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
3 -2 1 2112 models.common.Conv [64, 32, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
4 -1 1 9280 models.common.Conv [32, 32, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
5 -1 1 9280 models.common.Conv [32, 32, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
6 [-1, -2, -3, -4] 1 0 models.common.Concat [1]
7 -1 1 8320 models.common.Conv [128, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
8 -1 1 0 models.common.MP []
9 -1 1 4224 models.common.Conv [64, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
10 -2 1 4224 models.common.Conv [64, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
11 -1 1 36992 models.common.Conv [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
12 -1 1 36992 models.common.Conv [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
13 [-1, -2, -3, -4] 1 0 models.common.Concat [1]
14 -1 1 33024 models.common.Conv [256, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
15 -1 1 0 models.common.MP []
16 -1 1 16640 models.common.Conv [128, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
17 -2 1 16640 models.common.Conv [128, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
18 -1 1 147712 models.common.Conv [128, 128, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
19 -1 1 147712 models.common.Conv [128, 128, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
20 [-1, -2, -3, -4] 1 0 models.common.Concat [1]
21 -1 1 131584 models.common.Conv [512, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
22 -1 1 0 models.common.MP []
23 -1 1 66048 models.common.Conv [256, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
24 -2 1 66048 models.common.Conv [256, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
25 -1 1 590336 models.common.Conv [256, 256, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
26 -1 1 590336 models.common.Conv [256, 256, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
27 [-1, -2, -3, -4] 1 0 models.common.Concat [1]
28 -1 1 525312 models.common.Conv [1024, 512, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
29 -1 1 657408 models.common.v7tiny_SPP [512, 256]
30 -1 1 33024 models.common.Conv [256, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
31 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
32 21 1 33024 models.common.Conv [256, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
33 [-1, -2] 1 0 models.common.Concat [1]
34 -1 1 16512 models.common.Conv [256, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
35 -2 1 16512 models.common.Conv [256, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
36 -1 1 36992 models.common.Conv [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
37 -1 1 36992 models.common.Conv [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
38 [-1, -2, -3, -4] 1 0 models.common.Concat [1]
39 -1 1 33024 models.common.Conv [256, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
40 -1 1 8320 models.common.Conv [128, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
41 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
42 14 1 8320 models.common.Conv [128, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
43 [-1, -2] 1 0 models.common.Concat [1]
44 -1 1 4160 models.common.Conv [128, 32, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
45 -2 1 4160 models.common.Conv [128, 32, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
46 -1 1 9280 models.common.Conv [32, 32, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
47 -1 1 9280 models.common.Conv [32, 32, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
48 [-1, -2, -3, -4] 1 0 models.common.Concat [1]
49 -1 1 8320 models.common.Conv [128, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
50 -1 1 73984 models.common.Conv [64, 128, 3, 2, None, 1, LeakyReLU(negative_slope=0.1)]
51 [-1, 39] 1 0 models.common.Concat [1]
52 -1 1 16512 models.common.Conv [256, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
53 -2 1 16512 models.common.Conv [256, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
54 -1 1 36992 models.common.Conv [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
55 -1 1 36992 models.common.Conv [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
56 [-1, -2, -3, -4] 1 0 models.common.Concat [1]
57 -1 1 33024 models.common.Conv [256, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
58 -1 1 295424 models.common.Conv [128, 256, 3, 2, None, 1, LeakyReLU(negative_slope=0.1)]
59 [-1, 29] 1 0 models.common.Concat [1]
60 -1 1 65792 models.common.Conv [512, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
61 -2 1 65792 models.common.Conv [512, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
62 -1 1 147712 models.common.Conv [128, 128, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
63 -1 1 147712 models.common.Conv [128, 128, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
64 [-1, -2, -3, -4] 1 0 models.common.Concat [1]
65 -1 1 657920 models.repc3.RepC3 [512, 256, 3, 0.5]
66 49 1 73984 models.common.Conv [64, 128, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
67 57 1 295424 models.common.Conv [128, 256, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
68 65 1 1180672 models.common.Conv [256, 512, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
69 [66, 67, 68] 1 17132 models.yolo.IDetect [1, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]

Model Summary: 298 layers, 6541324 parameters, 6541324 gradients, 13.6 GFLOPS
If running the model configuration prints the text above, the modification was successful.
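As an independent cross-check of the summary: the row for layer 65 reports 657,920 parameters for RepC3 [512, 256, 3, 0.5]. Assuming the usual Conv layout (Conv2d with bias=False followed by BatchNorm2d, whose learnable parameters are its weight and bias), this count can be reproduced by hand:

```python
def conv_params(c1, c2, k):
    """Conv2d (no bias) plus BatchNorm2d (weight + bias) parameter count."""
    return c1 * c2 * k * k + 2 * c2

def repc3_params(c1, c2, n=3, e=0.5):
    """Parameter count of RepC3 as defined in repc3.py."""
    c_ = int(c2 * e)
    cv1 = conv_params(c1, c_, 1)
    cv2 = conv_params(c1, c_, 1)
    rep = n * (conv_params(c_, c_, 3) + conv_params(c_, c_, 1))  # RepConv: 3x3 + 1x1 branches
    cv3 = conv_params(c_, c2, 1) if c_ != c2 else 0              # Identity when c_ == c2
    return cv1 + cv2 + rep + cv3

print(repc3_params(512, 256, 3, 0.5))  # -> 657920
```

Matching this hand count against the printed summary is a quick sanity check that the module was parsed with the intended arguments.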
After completing step 2, create a new file named yolov5s-repc3.yaml under the models folder of the YOLOv5 project with the following content.
# Parameters
nc: 80  # number of classes
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.50  # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 v6.0 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, C3, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 6, C3, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
   [-1, 3, C3, [1024]],
   [-1, 1, SPPF, [1024, 5]],  # 9
  ]

# YOLOv5 v6.0 head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, C3, [512, False]],  # 13

   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, RepC3, [256, 0.5]],  # 17 (P3/8-small)

   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 14], 1, Concat, [1]],  # cat head P4
   [-1, 3, C3, [512, False]],  # 20 (P4/16-medium)

   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 10], 1, Concat, [1]],  # cat head P5
   [-1, 3, C3, [1024, False]],  # 23 (P5/32-large)

   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]

from n params module arguments
0 -1 1 3520 models.common.Conv [3, 32, 6, 2, 2]
1 -1 1 18560 models.common.Conv [32, 64, 3, 2]
2 -1 1 18816 models.common.C3 [64, 64, 1]
3 -1 1 73984 models.common.Conv [64, 128, 3, 2]
4 -1 2 115712 models.common.C3 [128, 128, 2]
5 -1 1 295424 models.common.Conv [128, 256, 3, 2]
6 -1 3 625152 models.common.C3 [256, 256, 3]
7 -1 1 1180672 models.common.Conv [256, 512, 3, 2]
8 -1 1 1182720 models.common.C3 [512, 512, 1]
9 -1 1 656896 models.common.SPPF [512, 512, 5]
10 -1 1 131584 models.common.Conv [512, 256, 1, 1]
11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
12 [-1, 6] 1 0 models.common.Concat [1]
13 -1 1 361984 models.common.C3 [512, 256, 1, False]
14 -1 1 33024 models.common.Conv [256, 128, 1, 1]
15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
16 [-1, 4] 1 0 models.common.Concat [1]
17 -1 1 82688 models.repc3.RepC3 [256, 128, 1, 0.5]
18 -1 1 147712 models.common.Conv [128, 128, 3, 2]
19 [-1, 14] 1 0 models.common.Concat [1]
20 -1 1 296448 models.common.C3 [256, 256, 1, False]
21 -1 1 590336 models.common.Conv [256, 256, 3, 2]
22 [-1, 10] 1 0 models.common.Concat [1]
23 -1 1 1182720 models.common.C3 [512, 512, 1, False]
24 [17, 20, 23] 1 16182 models.yolo.Detect [1, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]

Model Summary: 271 layers, 7014134 parameters, 7014134 gradients, 15.8 GFLOPs
If running the model configuration prints the text above, the modification was successful.
Likewise, after completing step 2, create a new file named yolov5n-repc3.yaml under the models folder of the YOLOv5 project with the following content.
# Parameters
nc: 80  # number of classes
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.25  # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 v6.0 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, C3, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 6, C3, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
   [-1, 3, C3, [1024]],
   [-1, 1, SPPF, [1024, 5]],  # 9
  ]

# YOLOv5 v6.0 head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, C3, [512, False]],  # 13

   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, RepC3, [256, 0.5]],  # 17 (P3/8-small)

   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 14], 1, Concat, [1]],  # cat head P4
   [-1, 3, C3, [512, False]],  # 20 (P4/16-medium)

   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 10], 1, Concat, [1]],  # cat head P5
   [-1, 3, C3, [1024, False]],  # 23 (P5/32-large)

   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]

from n params module arguments
0 -1 1 1760 models.common.Conv [3, 16, 6, 2, 2]
1 -1 1 4672 models.common.Conv [16, 32, 3, 2]
2 -1 1 4800 models.common.C3 [32, 32, 1]
3 -1 1 18560 models.common.Conv [32, 64, 3, 2]
4 -1 2 29184 models.common.C3 [64, 64, 2]
5 -1 1 73984 models.common.Conv [64, 128, 3, 2]
6 -1 3 156928 models.common.C3 [128, 128, 3]
7 -1 1 295424 models.common.Conv [128, 256, 3, 2]
8 -1 1 296448 models.common.C3 [256, 256, 1]
9 -1 1 164608 models.common.SPPF [256, 256, 5]
10 -1 1 33024 models.common.Conv [256, 128, 1, 1]
11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
12 [-1, 6] 1 0 models.common.Concat [1]
13 -1 1 90880 models.common.C3 [256, 128, 1, False]
14 -1 1 8320 models.common.Conv [128, 64, 1, 1]
15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
16 [-1, 4] 1 0 models.common.Concat [1]
17 -1 1 20864 models.repc3.RepC3 [128, 64, 1, 0.5]
18 -1 1 36992 models.common.Conv [64, 64, 3, 2]
19 [-1, 14] 1 0 models.common.Concat [1]
20 -1 1 74496 models.common.C3 [128, 128, 1, False]
21 -1 1 147712 models.common.Conv [128, 128, 3, 2]
22 [-1, 10] 1 0 models.common.Concat [1]
23 -1 1 296448 models.common.C3 [256, 256, 1, False]
24 [17, 20, 23] 1 8118 models.yolo.Detect [1, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [64, 128, 256]]

Model Summary: 271 layers, 1763222 parameters, 1763222 gradients, 4.2 GFLOPs
If running the model configuration prints the output above, the modification was successful.
More articles are on the way, aiming for concision and accuracy. Follow me and let's explore together!