• Building a ResNet Network


    Highlights

    • Introduces the residual structure

    • Uses Batch Normalization to speed up training (dropout is no longer needed)

    Together, these two techniques address problems such as vanishing and exploding gradients, making it practical to build very deep networks.

    The Residual Structure

    Computational Cost

    [Figure: the two residual block designs side by side]

    On the left is the residual block used by ResNet-18/34; on the right is the one used by ResNet-50/101/152.

    1. Left block: 3×3×256×256 + 3×3×256×256 = 1,179,648
    2. Right block: 1×1×256×64 + 3×3×64×64 + 1×1×64×256 = 69,632

    Clearly, one residual block on the right costs much less. The reason is that the right-hand block uses 1×1 convolutions to first reduce and then restore the channel dimension, so the expensive 3×3 convolution only operates on 64 channels.
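    The numbers above count convolution weights (kernel height × width × input channels × output channels, summed over the convolutions in each block, with 256 channels assumed at the block boundary). A minimal check in Python:

    left = 3*3*256*256 + 3*3*256*256             # two 3x3 convs, 256 -> 256 channels
    right = 1*1*256*64 + 3*3*64*64 + 1*1*64*256  # squeeze to 64, 3x3 conv, expand to 256
    print(left, right)  # 1179648 69632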

    The Dashed Residual Structure

    [Figure: solid-shortcut (left) and dashed-shortcut (right) residual blocks]

    The figure above shows two residual blocks. They differ in two ways (a small sketch of the dashed shortcut follows this list):

    • In the left block, the input (Input) is added directly to the output (Output); in the right block, the input (Input2) must first pass through a 1×1 convolution before it can be added to the output (Output2).

    • The first convolution of the dashed block uses stride=2, while the solid block uses stride=1.
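    A minimal sketch of that 1×1 shortcut branch, assuming 64 input channels, 128 output channels and stride 2 (the numbers are illustrative, not taken from the article):

    import torch
    import torch.nn as nn

    # the 1x1 conv halves the spatial size (stride=2) and changes the depth,
    # so the shortcut's output shape matches the main branch's output
    shortcut = nn.Sequential(
        nn.Conv2d(64, 128, kernel_size=1, stride=2, bias=False),
        nn.BatchNorm2d(128))
    x = torch.randn(1, 64, 56, 56)
    print(shortcut(x).shape)  # torch.Size([1, 128, 28, 28])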

    In the figure below, the 64 in box 1 is the number of convolution kernels, which equals the output depth; box 2 marks the dashed residual structure.

    [Figure: architecture diagram with boxes 1 and 2 annotated]

    Code Walkthrough

    The resnet18/34 residual block

    BasicBlock is shown in the figure below; it implements both the solid and the dashed residual structure.

    [Figure: the two residual block variants implemented by BasicBlock]
    # residual block for resnet18 and resnet34
    class BasicBlock(nn.Module):
        # factor by which the number of kernels changes inside the block;
        # here it stays the same, so expansion = 1
        expansion = 1

        # ----------------------------residual block------------------------------------
        # in_channel        depth of the input feature map
        # out_channel       depth of the output feature map, i.e. the number of kernels
        # stride            stride; with stride=1 width/height stay the same,
        #                   with stride=2 they are halved
        # downsample        downsampling branch, None by default; it is only set for
        #                   the dashed residual structure
        # bias=False        with BN, the convolution bias is unnecessary
        # -------------------------------------------------------------------------------
        def __init__(self, in_channel, out_channel, stride=1, downsample=None, **kwargs):
            super(BasicBlock, self).__init__()
            self.conv1 = nn.Conv2d(in_channels=in_channel, out_channels=out_channel,
                                   kernel_size=3, stride=stride, padding=1, bias=False)
            self.bn1 = nn.BatchNorm2d(out_channel)
            self.relu = nn.ReLU()
            self.conv2 = nn.Conv2d(in_channels=out_channel, out_channels=out_channel,
                                   kernel_size=3, stride=1, padding=1, bias=False)
            self.bn2 = nn.BatchNorm2d(out_channel)
            self.downsample = downsample

        def forward(self, x):
            identity = x
            # if self.downsample is not None, this is a dashed residual block,
            # so the identity takes the 1x1 downsampling branch
            if self.downsample is not None:
                identity = self.downsample(x)

            # forward pass of the main branch
            out = self.conv1(x)
            out = self.bn1(out)
            out = self.relu(out)

            out = self.conv2(out)
            out = self.bn2(out)

            # add the shortcut to the main branch
            out += identity
            out = self.relu(out)

            return out
    
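    As a quick sanity check, the solid path (stride=1, no downsample) can be exercised with a dummy tensor; the shapes below are illustrative assumptions, not from the article:

    import torch

    block = BasicBlock(in_channel=64, out_channel=64)  # stride=1, no downsample
    x = torch.randn(1, 64, 56, 56)
    print(block(x).shape)  # torch.Size([1, 64, 56, 56]) -- shape is unchanged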

    The following snippet corresponds to a single convolution step:

    self.conv1 = nn.Conv2d(in_channels=in_channel, out_channels=out_channel, kernel_size=3, stride=stride, padding=1, bias=False)
    self.bn1 = nn.BatchNorm2d(out_channel)
    self.relu = nn.ReLU()
    

    Repeating this twice gives:

        # ----------------------------residual block------------------------------------
        # in_channel        depth of the input feature map
        # out_channel       depth of the output feature map, i.e. the number of kernels
        # stride            stride; with stride=1 width/height stay the same,
        #                   with stride=2 they are halved
        # downsample        downsampling branch, None by default; it is only set for
        #                   the dashed residual structure
        # bias=False        with BN, the convolution bias is unnecessary
        # -------------------------------------------------------------------------------
        def __init__(self, in_channel, out_channel, stride=1, downsample=None, **kwargs):
            super(BasicBlock, self).__init__()
            self.conv1 = nn.Conv2d(in_channels=in_channel, out_channels=out_channel,
                                   kernel_size=3, stride=stride, padding=1, bias=False)
            self.bn1 = nn.BatchNorm2d(out_channel)
            self.relu = nn.ReLU()
            self.conv2 = nn.Conv2d(in_channels=out_channel, out_channels=out_channel,
                                   kernel_size=3, stride=1, padding=1, bias=False)
            self.bn2 = nn.BatchNorm2d(out_channel)
            self.downsample = downsample
    

    self.downsample = downsample stores the downsampling branch, i.e. the 1×1 convolution that implements the dashed residual structure. If self.downsample is None, the block behaves as a solid residual block; otherwise the dashed variant is used.

    out += identity adds the shortcut branch to the main branch.
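    Putting it together, a dashed BasicBlock can be built by passing a downsample branch in by hand (the channel numbers here are illustrative; the _make_layer function below constructs this branch automatically):

    down = nn.Sequential(
        nn.Conv2d(64, 128, kernel_size=1, stride=2, bias=False),
        nn.BatchNorm2d(128))
    block = BasicBlock(in_channel=64, out_channel=128, stride=2, downsample=down)
    print(block(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 128, 28, 28])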

    The Bottleneck residual block for resnet50/101/152

    [Figure: the Bottleneck residual block]
    class Bottleneck(nn.Module):
        # factor by which the number of kernels changes; on the main branch the
        # output depth is 4x the block's base depth, so expansion = 4
        expansion = 4

        def __init__(self, in_channel, out_channel, stride=1, downsample=None,
                     groups=1, width_per_group=64):
            super(Bottleneck, self).__init__()

            width = int(out_channel * (width_per_group / 64.)) * groups

            self.conv1 = nn.Conv2d(in_channels=in_channel, out_channels=width,
                                   kernel_size=1, stride=1, bias=False)  # squeeze channels
            self.bn1 = nn.BatchNorm2d(width)
            # -----------------------------------------
            self.conv2 = nn.Conv2d(in_channels=width, out_channels=width, groups=groups,
                                   kernel_size=3, stride=stride, bias=False, padding=1)
            self.bn2 = nn.BatchNorm2d(width)
            # -----------------------------------------
            self.conv3 = nn.Conv2d(in_channels=width, out_channels=out_channel*self.expansion,
                                   kernel_size=1, stride=1, bias=False)  # unsqueeze channels
            self.bn3 = nn.BatchNorm2d(out_channel*self.expansion)
            self.relu = nn.ReLU(inplace=True)
            self.downsample = downsample
    

    Bottleneck is similar to BasicBlock, differing only in a few places (a shape check follows this list):

    1. expansion = 4: the output depths of conv1 and conv3 are 64 and 256 respectively, a factor of 4 apart; that factor is exactly expansion, and it is used in self.conv3 = nn.Conv2d(in_channels=width, out_channels=out_channel*self.expansion, kernel_size=1, stride=1, bias=False).
    2. self.conv2 = nn.Conv2d(in_channels=width, out_channels=width, groups=groups, kernel_size=3, stride=stride, bias=False, padding=1): note stride=stride; in the dashed residual structure, conv2 runs with stride=2.
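    Only __init__ is shown above; the forward method (analogous to BasicBlock's) lives in the repository. Assuming that full class, the 4x expansion can be verified with a dummy input (channel numbers are illustrative):

    down = nn.Sequential(
        nn.Conv2d(64, 64 * Bottleneck.expansion, kernel_size=1, stride=1, bias=False),
        nn.BatchNorm2d(64 * Bottleneck.expansion))
    btn = Bottleneck(in_channel=64, out_channel=64, downsample=down)
    print(btn(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 256, 56, 56])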

    The structure of one layer (the _make_layer() function)

    # ----------------------------structure of one layer------------------------------
    # block             the residual block type: BasicBlock or Bottleneck
    # channel           number of kernels of the first conv in the block
    #                   (64/128/256/512 for layer1..layer4)
    # block_num         how many residual blocks this layer contains, i.e. how many
    #                   times the block is repeated (e.g. blocks_num=[2,2,2,2] for resnet18)
    # -------------------------------------------------------------------------------
    def _make_layer(self, block, channel, block_num, stride=1):
        downsample = None

        # decide whether a dashed block is needed; the outcome differs between
        # resnet18/resnet34 and resnet50/resnet101/resnet152
        if stride != 1 or self.in_channel != channel * block.expansion:
            downsample = nn.Sequential(
                nn.Conv2d(self.in_channel, channel * block.expansion, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(channel * block.expansion))

        # the first block: dashed if downsample was built, solid otherwise
        layers = []
        layers.append(block(self.in_channel,
                            channel,
                            downsample=downsample,
                            stride=stride,
                            groups=self.groups,
                            width_per_group=self.width_per_group))
        self.in_channel = channel * block.expansion

        # the remaining blocks are all solid residual blocks
        for _ in range(1, block_num):
            layers.append(block(self.in_channel,
                                channel,
                                groups=self.groups,
                                width_per_group=self.width_per_group))

        return nn.Sequential(*layers)
    

    block: the residual block type, BasicBlock or Bottleneck.

    channel: the number of kernels of the first conv in the residual block (64, 128, 256 or 512 for layer1 to layer4).

    block_num: how many residual blocks this layer contains, i.e. how many times the block is repeated; for resnet18 the per-layer counts are blocks_num=[2,2,2,2].

    1. downsample = None: the downsampling branch defaults to None, i.e. by default no dashed block with a 1×1 convolution is built.
    2. if stride != 1 or self.in_channel != channel * block.expansion: for layer1, self.in_channel=64 and channel=64. For resnet18/34, expansion=1, so self.in_channel == channel * block.expansion (64 == 64); with stride=1 the condition is false and the if body is skipped. For resnet50/101/152, expansion=4, so 64 != 64*4, the condition is true, and the if body runs (see the small check after this list).
    3. The body of the if statement builds exactly the 1×1 convolution (plus BN) of the dashed structure.
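    A minimal restatement of that condition as a standalone helper (hypothetical, for illustration only):

    def needs_dashed_block(in_channel, channel, expansion, stride):
        # mirrors the if-condition in _make_layer
        return stride != 1 or in_channel != channel * expansion

    print(needs_dashed_block(64, 64, 1, 1))   # layer1, resnet18/34 -> False (all solid)
    print(needs_dashed_block(64, 64, 4, 1))   # layer1, resnet50    -> True
    print(needs_dashed_block(64, 128, 1, 2))  # layer2, resnet18/34 -> True (stride=2)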

    In the figure below, the [💔 red] box 1 corresponds to part 1 of the code, and the [💚 green] box 2 to part 2.

    [Figure: the two code parts highlighted in red and green]

    The main ResNet network


    class ResNet(nn.Module):
    
        def __init__(self,
                     block,
                     blocks_num,
                     num_classes=1000,
                     include_top=True,
                     groups=1,
                     width_per_group=64):
            super(ResNet, self).__init__()
            self.include_top = include_top
            self.in_channel = 64
    
            self.groups = groups
            self.width_per_group = width_per_group
    
            self.conv1 = nn.Conv2d(3, self.in_channel, kernel_size=7, stride=2,
                                   padding=3, bias=False)
            self.bn1 = nn.BatchNorm2d(self.in_channel)
            self.relu = nn.ReLU(inplace=True)
            self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
            self.layer1 = self._make_layer(block, 64, blocks_num[0])
            self.layer2 = self._make_layer(block, 128, blocks_num[1], stride=2)
            self.layer3 = self._make_layer(block, 256, blocks_num[2], stride=2)
            self.layer4 = self._make_layer(block, 512, blocks_num[3], stride=2)
            if self.include_top:
                self.avgpool = nn.AdaptiveAvgPool2d((1, 1))  # output size = (1, 1)
                self.fc = nn.Linear(512 * block.expansion, num_classes)
    
            for m in self.modules():
                if isinstance(m, nn.Conv2d):
                    nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
    
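    Assuming the full ResNet class from the repository (including its forward method, which is omitted above), the standard variants are built by pairing a block type with per-layer block counts; the counts below follow the original ResNet paper:

    def resnet34(num_classes=1000):
        return ResNet(BasicBlock, [3, 4, 6, 3], num_classes=num_classes)

    def resnet50(num_classes=1000):
        return ResNet(Bottleneck, [3, 4, 6, 3], num_classes=num_classes)

    def resnet101(num_classes=1000):
        return ResNet(Bottleneck, [3, 4, 23, 3], num_classes=num_classes)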

    Code Repository

    Image_Classification_Net/ResNet at main · yzfzzz/Image_Classification_Net (github.com)

  • Original article: https://blog.csdn.net/henghuizan2771/article/details/126670698