• BiSeNet v2


    Paper: BiSeNet V2: Bilateral Network with Guided Aggregation for Real-time Semantic Segmentation

    The Detail Path and Semantic Path in v2 correspond to the Spatial Path and Context Path in v1, respectively.

    Compared with v1, there are two main improvements:

    1. The time-consuming cross-layer connections are removed, simplifying the model structure.
    2. The overall architecture is redesigned. Specifically: (1) the Detail Path is deepened to encode more detail information; (2) lightweight components based on depthwise separable convolutions are designed for the Semantic Path; (3) an effective aggregation layer is proposed to strengthen the connection between the two paths.

    Bilateral Segmentation Network

    The overall structure is shown in the figure below.

    [Figure: BiSeNet V2 overall architecture]

    The concrete structures of the Detail Branch and Semantic Branch are shown in Table (1) below.

    [Table 1: instantiation of the Detail Branch and Semantic Branch]

    Detail Branch 

    The Detail Branch is responsible for extracting spatial detail, i.e. low-level information, so it needs a large channel capacity, that is, a high channel count, to encode rich spatial detail features. At the same time, since this branch focuses on low-level information, it should be a shallow structure with a small overall stride. Taken together, the Detail Branch should have many channels and few layers. In addition, residual connections are best avoided here, since the extra memory access cost they introduce reduces speed.

    As shown in Table (1), the Detail Branch contains 3 stages (the first stage has 2 convolutional layers; the latter two have 3 each, as the printout below shows). Each convolutional layer is followed by a BN and a ReLU, and the first convolutional layer of each stage has stride=2, so the output feature map of this branch is 1/8 the size of the model input.

    The concrete structure of the Detail Branch is as follows:

    DetailBranch(
      (detail_branch): ModuleList(
        (0): Sequential(
          (0): ConvModule(
            (conv): Conv2d(3, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (activate): ReLU(inplace=True)
          )
          (1): ConvModule(
            (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (activate): ReLU(inplace=True)
          )
        )
        (1): Sequential(
          (0): ConvModule(
            (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (activate): ReLU(inplace=True)
          )
          (1): ConvModule(
            (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (activate): ReLU(inplace=True)
          )
          (2): ConvModule(
            (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (activate): ReLU(inplace=True)
          )
        )
        (2): Sequential(
          (0): ConvModule(
            (conv): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (activate): ReLU(inplace=True)
          )
          (1): ConvModule(
            (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (activate): ReLU(inplace=True)
          )
          (2): ConvModule(
            (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (activate): ReLU(inplace=True)
          )
        )
      )
    )
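
    For reference, here is a minimal standalone sketch of an equivalent detail branch in plain PyTorch (a sketch based on the printout above; the helper name conv_bn_relu is illustrative, not from MMSegmentation):

    import torch
    import torch.nn as nn

    def conv_bn_relu(in_ch, out_ch, stride):
        # 3x3 Conv -> BN -> ReLU, matching the ConvModule blocks printed above
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True))

    detail_branch = nn.Sequential(
        # stage 1: 2 convs, the first with stride 2
        conv_bn_relu(3, 64, 2), conv_bn_relu(64, 64, 1),
        # stage 2: 3 convs, the first with stride 2
        conv_bn_relu(64, 64, 2), conv_bn_relu(64, 64, 1), conv_bn_relu(64, 64, 1),
        # stage 3: 3 convs, the first with stride 2
        conv_bn_relu(64, 128, 2), conv_bn_relu(128, 128, 1), conv_bn_relu(128, 128, 1))

    x = torch.randn(4, 3, 480, 480)
    print(detail_branch(x).shape)  # torch.Size([4, 128, 60, 60]), i.e. 1/8 of the input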

    Semantic Branch

    To get a large receptive field at low computational cost, the authors designed the Semantic Branch by drawing on lightweight networks such as Xception, MobileNet, and ShuffleNet. Opposite to the wide-and-shallow Detail Branch, the Semantic Branch needs a narrow-and-deep structure: few channels, many layers. The details are as follows.

    Stem Block

    The authors use a Stem Block as the first stage of the Semantic Branch, shown in figure (a) below. It applies two different downsampling paths to shrink the feature representation and then concatenates the outputs of the two paths. This structure achieves both low computational cost and strong feature expressiveness.

    [Figure: (a) Stem Block; (b) Context Embedding Block]

    The concrete structure of the Stem Block is as follows:

    (stage1): StemBlock(
      (conv_first): ConvModule(
        (conv): Conv2d(3, 16, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        (bn): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (activate): ReLU(inplace=True)
      )
      (convs): Sequential(
        (0): ConvModule(
          (conv): Conv2d(16, 8, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(8, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (activate): ReLU(inplace=True)
        )
        (1): ConvModule(
          (conv): Conv2d(8, 16, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (activate): ReLU(inplace=True)
        )
      )
      (pool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
      (fuse_last): ConvModule(
        (conv): Conv2d(32, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (activate): ReLU(inplace=True)
      )
    )
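
    Putting these modules together, the Stem Block's forward pass looks roughly like this (a sketch following the MMSegmentation implementation; the function wrapper is illustrative):

    import torch

    def stem_forward(block, x):   # x: (4, 3, 480, 480)
        x = block.conv_first(x)   # 3x3 conv, stride 2 -> (4, 16, 240, 240)
        x_left = block.convs(x)   # 1x1 conv (16->8), then 3x3 conv stride 2 (8->16) -> (4, 16, 120, 120)
        x_right = block.pool(x)   # 3x3 max pooling, stride 2 -> (4, 16, 120, 120)
        # concatenate the two downsampling paths (32 channels) and fuse back to 16
        return block.fuse_last(torch.cat([x_left, x_right], dim=1))  # (4, 16, 120, 120)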

    Gather-and-Expansion Layer

    Apart from the Stem Block at the start and the Context Embedding Block at the end, every intermediate stage of the Semantic Branch is built from GE layers, shown in the figure below.

    [Figure: Gather-and-Expansion Layer, stride=1 and stride=2 variants]

    A GE layer consists of (1) a 3x3 convolution that efficiently aggregates feature responses and expands them to a higher-dimensional space; (2) a 3x3 depthwise convolution that extracts features independently per channel; (3) a 1x1 convolution that projects the depthwise output back to a low-channel space.

    When stride=2, two 3x3 depthwise convolutions are adopted instead to further enlarge the receptive field, and a depthwise separable convolution serves as the shortcut.

    The structure of stage S3 of the Semantic Branch (named stage2 in the MMSegmentation printout below) is as follows; it contains 2 GE layers, the first with stride=2 and the second with stride=1:

    (stage2): Sequential(
      (0): GELayer(
        (conv1): ConvModule(
          (conv): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (activate): ReLU(inplace=True)
        )
        (dwconv): Sequential(
          (0): ConvModule(
            (conv): Conv2d(16, 96, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=16, bias=False)
            (bn): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          )
          (1): ConvModule(
            (conv): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=96, bias=False)
            (bn): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (activate): ReLU(inplace=True)
          )
        )
        (shortcut): Sequential(
          (0): DepthwiseSeparableConvModule(
            (depthwise_conv): ConvModule(
              (conv): Conv2d(16, 16, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=16, bias=False)
              (bn): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            )
            (pointwise_conv): ConvModule(
              (conv): Conv2d(16, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
              (bn): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            )
          )
        )
        (conv2): Sequential(
          (0): ConvModule(
            (conv): Conv2d(96, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
        (act): ReLU()
      )
      (1): GELayer(
        (conv1): ConvModule(
          (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (activate): ReLU(inplace=True)
        )
        (dwconv): Sequential(
          (0): ConvModule(
            (conv): Conv2d(32, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
            (bn): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (activate): ReLU(inplace=True)
          )
        )
        (conv2): Sequential(
          (0): ConvModule(
            (conv): Conv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
        (act): ReLU()
      )
    )
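
    Based on the printed structure, a GE layer's forward pass is roughly the following (a sketch; as in the MMSegmentation implementation, the residual is added before the final ReLU):

    def gelayer_forward(layer, x):
        identity = x
        x = layer.conv1(x)    # 3x3 conv, channel count unchanged
        x = layer.dwconv(x)   # 3x3 depthwise conv(s) with 6x channel expansion
                              # (one conv when stride=1, two when stride=2)
        x = layer.conv2(x)    # 1x1 conv projecting back to a low-channel space
        if layer.shortcut is not None:           # stride=2 variant only
            identity = layer.shortcut(identity)  # depthwise separable conv shortcut
        return layer.act(x + identity)           # residual add, then ReLU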

    Context Embedding Block

    In the last stage of the Semantic Branch, the authors replace the final GE layer with a CE layer. Its structure, shown in figure (4)(b), uses global average pooling and a residual connection to efficiently encode global contextual information.

    (stage4_CEBlock): CEBlock(
      (gap): Sequential(
        (0): AdaptiveAvgPool2d(output_size=(1, 1))
        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (conv_gap): ConvModule(
        (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (activate): ReLU(inplace=True)
      )
      (conv_last): ConvModule(
        (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (activate): ReLU(inplace=True)
      )
    )
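
    Its forward pass (a sketch following the MMSegmentation implementation):

    def ceblock_forward(block, x):   # x: (4, 128, 15, 15)
        identity = x
        x = block.gap(x)             # global average pooling + BN -> (4, 128, 1, 1)
        x = block.conv_gap(x)        # 1x1 conv on the pooled feature
        x = identity + x             # broadcast residual add onto the full-resolution map
        return block.conv_last(x)    # final 3x3 conv -> (4, 128, 15, 15)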

    Bilateral Guided Aggregation

    Because the two branches focus on different features (the Detail Branch extracts low-level detail features, while the Semantic Branch extracts high-level semantic features), their outputs cannot simply be fused by summation or concatenation. The authors therefore propose the Bilateral Guided Aggregation layer, which fuses the complementary information from the two branches by using the contextual information of the Semantic Branch to guide the feature responses of the Detail Branch. Through guidance at different scales, feature representations at different scales are obtained, effectively encoding multi-scale information. The structure is shown in the figure below.

    [Figure: Bilateral Guided Aggregation layer]

    The BGA code (from MMSegmentation, with its imports restored):

    import torch
    import torch.nn as nn
    from mmcv.cnn import ConvModule, DepthwiseSeparableConvModule
    from mmcv.runner import BaseModule

    from mmseg.ops import resize


    class BGALayer(BaseModule):
        """Bilateral Guided Aggregation Layer to fuse the complementary information
        from both Detail Branch and Semantic Branch.

        Args:
            out_channels (int): Number of output channels.
                Default: 128.
            align_corners (bool): align_corners argument of F.interpolate.
                Default: False.
            conv_cfg (dict | None): Config of conv layers.
                Default: None.
            norm_cfg (dict | None): Config of norm layers.
                Default: dict(type='BN').
            act_cfg (dict): Config of activation layers.
                Default: dict(type='ReLU').
            init_cfg (dict or list[dict], optional): Initialization config dict.
                Default: None.

        Returns:
            output (torch.Tensor): Output feature map for Segment heads.
        """

        def __init__(self,
                     out_channels=128,
                     align_corners=False,
                     conv_cfg=None,
                     norm_cfg=dict(type='BN'),
                     act_cfg=dict(type='ReLU'),
                     init_cfg=None):
            super(BGALayer, self).__init__(init_cfg=init_cfg)
            self.out_channels = out_channels
            self.align_corners = align_corners
            self.detail_dwconv = nn.Sequential(
                DepthwiseSeparableConvModule(
                    in_channels=self.out_channels,
                    out_channels=self.out_channels,
                    kernel_size=3,
                    stride=1,
                    padding=1,
                    dw_norm_cfg=norm_cfg,
                    dw_act_cfg=None,
                    pw_norm_cfg=None,
                    pw_act_cfg=None,
                ))
            self.detail_down = nn.Sequential(
                ConvModule(
                    in_channels=self.out_channels,
                    out_channels=self.out_channels,
                    kernel_size=3,
                    stride=2,
                    padding=1,
                    bias=False,
                    conv_cfg=conv_cfg,
                    norm_cfg=norm_cfg,
                    act_cfg=None),
                nn.AvgPool2d(kernel_size=3, stride=2, padding=1, ceil_mode=False))
            self.semantic_conv = nn.Sequential(
                ConvModule(
                    in_channels=self.out_channels,
                    out_channels=self.out_channels,
                    kernel_size=3,
                    stride=1,
                    padding=1,
                    bias=False,
                    conv_cfg=conv_cfg,
                    norm_cfg=norm_cfg,
                    act_cfg=None))
            self.semantic_dwconv = nn.Sequential(
                DepthwiseSeparableConvModule(
                    in_channels=self.out_channels,
                    out_channels=self.out_channels,
                    kernel_size=3,
                    stride=1,
                    padding=1,
                    dw_norm_cfg=norm_cfg,
                    dw_act_cfg=None,
                    pw_norm_cfg=None,
                    pw_act_cfg=None,
                ))
            self.conv = ConvModule(
                in_channels=self.out_channels,
                out_channels=self.out_channels,
                kernel_size=3,
                stride=1,
                padding=1,
                inplace=True,
                conv_cfg=conv_cfg,
                norm_cfg=norm_cfg,
                act_cfg=act_cfg,
            )

        def forward(self, x_d, x_s):  # x_d: (4,128,60,60), x_s: (4,128,15,15)
            # detail features refined at 1/8 scale by a depthwise separable conv
            detail_dwconv = self.detail_dwconv(x_d)  # (4,128,60,60)
            # detail features downsampled 4x to the semantic resolution
            detail_down = self.detail_down(x_d)  # (4,128,15,15)
            semantic_conv = self.semantic_conv(x_s)  # (4,128,15,15)
            semantic_dwconv = self.semantic_dwconv(x_s)  # (4,128,15,15)
            semantic_conv = resize(
                input=semantic_conv,
                size=detail_dwconv.shape[2:],
                mode='bilinear',
                align_corners=self.align_corners)  # (4,128,60,60)
            # sigmoid-gated semantic guidance applied to detail features at both scales
            fuse_1 = detail_dwconv * torch.sigmoid(semantic_conv)  # (4,128,60,60)
            fuse_2 = detail_down * torch.sigmoid(semantic_dwconv)  # (4,128,15,15)
            fuse_2 = resize(
                input=fuse_2,
                size=fuse_1.shape[2:],
                mode='bilinear',
                align_corners=self.align_corners)  # (4,128,60,60)
            output = self.conv(fuse_1 + fuse_2)  # (4,128,60,60)
            return output

    Booster Training Strategy

    To further improve segmentation accuracy, the authors propose a booster training strategy: it enhances the feature representation during training and can be discarded entirely at inference, so it does not slow down inference. As shown in figure (3), auxiliary segmentation heads are attached at different positions of the Semantic Branch to apply extra supervision on the model's intermediate outputs, which improves accuracy.
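
    A minimal sketch of the idea (illustrative, not the MMSegmentation code; AuxHead and booster_loss are hypothetical names): each auxiliary head maps an intermediate Semantic Branch feature to class logits, the logits are upsampled to label resolution, and a weighted cross-entropy term is added to the main loss during training only.

    import torch.nn as nn
    import torch.nn.functional as F

    class AuxHead(nn.Module):
        # hypothetical auxiliary segmentation head: 3x3 conv block -> 1x1 classifier
        def __init__(self, in_channels, num_classes, mid_channels=64):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(in_channels, mid_channels, 3, padding=1, bias=False),
                nn.BatchNorm2d(mid_channels),
                nn.ReLU(inplace=True))
            self.cls = nn.Conv2d(mid_channels, num_classes, 1)

        def forward(self, x):
            return self.cls(self.conv(x))

    def booster_loss(main_logits, aux_feats, aux_heads, label, aux_weight=1.0):
        # main loss on the decode-head output plus weighted auxiliary losses on
        # the intermediate Semantic Branch outputs; aux heads are dropped at inference
        def ce(logits):
            logits = F.interpolate(logits, size=label.shape[-2:], mode='bilinear',
                                   align_corners=False)
            return F.cross_entropy(logits, label, ignore_index=255)
        loss = ce(main_logits)
        for head, feat in zip(aux_heads, aux_feats):
            loss = loss + aux_weight * ce(head(feat))
        return loss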

    Implementation

    Taking the BiSeNet V2 implementation in MMSegmentation as an example, let's walk through the concrete process.

    Assume batch_size=4 and an input shape of (4, 3, 480, 480).

    • The Detail Branch output is (4, 128, 60, 60).
    • For the Semantic Branch, as in Table (1): the Stem Block output is (4, 16, 120, 120), the S3 output is (4, 32, 60, 60), the S4 output is (4, 64, 30, 30), and S5 yields both the output of its final GE layer, (4, 128, 15, 15), and the output of the last CE layer, (4, 128, 15, 15). The Semantic Branch output is therefore a list of 5 tensors: the final CE output enters the BGA layer together with the Detail Branch output, while the first 4 outputs serve as inputs to the auxiliary segmentation heads during training.
    • The Bilateral Guided Aggregation output is (4, 128, 60, 60). A quick shape check follows below.
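
    These shapes can be checked directly against the backbone (a sketch assuming MMSegmentation 0.x is installed; there the backbone returns the BGA output followed by the four intermediate Semantic Branch outputs):

    import torch
    from mmseg.models.backbones import BiSeNetV2

    model = BiSeNetV2()
    model.eval()
    with torch.no_grad():
        outs = model(torch.randn(4, 3, 480, 480))
    for out in outs:
        print(tuple(out.shape))
    # Expected, per the walkthrough above:
    # (4, 128, 60, 60)   BGA output, fed to the main decode head
    # (4, 16, 120, 120)  Stem Block output   (auxiliary head input)
    # (4, 32, 60, 60)    S3 output           (auxiliary head input)
    # (4, 64, 30, 30)    S4 output           (auxiliary head input)
    # (4, 128, 15, 15)   S5 GE output        (auxiliary head input)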

    Experimental Results

    Cityscapes

    CamVid


  • Original article: https://blog.csdn.net/ooooocj/article/details/126046474