【OpenPCDet】Sparse Convolution SPConv-v1.2 Code Walkthrough (2)


    【SPConv Module: Python-Side Code】

    The previous article walked through the Python and C++/CUDA directories of the spconv source tree. Here is what the directory looks like after spconv has been compiled and installed:

    (openpcd) ➜ spconv tree -L 1
    .
    ├── conv.py
    ├── functional.py
    ├── identity.py
    ├── __init__.py
    ├── libcuhash.so
    ├── libspconv.so
    ├── modules.py
    ├── ops.py
    ├── pool.py
    ├── __pycache__
    ├── spconv_utils.cpython-36m-x86_64-linux-gnu.so
    ├── spconv_utils.cpython-36m-x86_64-linux-gnu.so.1
    ├── spconv_utils.cpython-36m-x86_64-linux-gnu.so.1.1
    ├── tables.py
    ├── test_utils.py
    └── utils

    When we `import spconv` in the Second code, we are importing this installed spconv package. Since it is a package, it necessarily contains an __init__.py, and the executable code in __init__.py runs at import time. In pcdet/models/backbones_3d/spconv_backbone.py, after importing spconv we can directly use spconv.SubMConv3d, spconv.SparseConv3d, spconv.SparseConvTensor, spconv.SparseSequential, and so on, precisely because __init__.py has already imported them one by one:

    import platform
    from pathlib import Path

    import torch

    from spconv import ops, utils
    from spconv.conv import (SparseConv2d, SparseConv3d, SparseConvTranspose2d,
                             SparseConvTranspose3d, SparseInverseConv2d,
                             SparseInverseConv3d, SubMConv2d, SubMConv3d)
    from spconv.identity import Identity
    from spconv.modules import SparseModule, SparseSequential
    from spconv.ops import ConvAlgo
    from spconv.pool import SparseMaxPool2d, SparseMaxPool3d
    from spconv.tables import AddTable, ConcatTable, JoinTable

    _LIB_FILE_NAME = "libspconv.so"
    if platform.system() == "Windows":
        _LIB_FILE_NAME = "spconv.dll"
    _LIB_PATH = str(Path(__file__).parent / _LIB_FILE_NAME)
    torch.ops.load_library(_LIB_PATH)
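
    Once libspconv.so has been loaded, every operator registered inside it becomes callable under the torch.ops.spconv namespace. Below is a minimal sketch of that mechanism; the library path is illustrative, and the op name spconv::get_indice_pairs is taken from the export error message shown later in this article:

    import torch

    # load the compiled extension; op registration happens as a side effect
    torch.ops.load_library("/path/to/spconv/libspconv.so")

    # registered C++ ops are now plain Python callables:
    get_indice_pairs = torch.ops.spconv.get_indice_pairs
    print(get_indice_pairs)  # a handle that dispatches to the C++ implementation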

    The operations that must be implemented in C++/CUDA are compiled into shared libraries when spconv is built, and are loaded here via torch.ops.load_library. This is what lets the Python code call into the C++/CUDA implementations with no extra glue. __init__.py also defines the core data structure of sparse convolution, SparseConvTensor:

    class SparseConvTensor(object):
        def __init__(self, features, indices, spatial_shape, batch_size,
                     grid=None):
            """
            Args:
                features: [num_points, num_features] feature tensor
                indices: [num_points, ndim + 1] indice tensor. batch index saved in indices[:, 0]
                spatial_shape: spatial shape of your sparse data
                batch_size: batch size of your sparse data
                grid: pre-allocated grid tensor. should be used when the volume of spatial shape
                    is very large.
            """
            self.features = features            # e.g. torch.Size([16000, 4])
            self.indices = indices              # e.g. torch.Size([16000, 4])
            self.spatial_shape = spatial_shape  # e.g. array([41, 1600, 1408])
            self.batch_size = batch_size
            self.indice_dict = {}
            self.grid = grid

        @classmethod
        def from_dense(cls, x: torch.Tensor):
            """create sparse tensor from channel-last dense tensor by to_sparse
            x must be NHWC tensor, channel last
            """
            x = x.to_sparse(x.ndim - 1)
            spatial_shape = x.shape[1:-1]
            batch_size = x.shape[0]
            indices_th = x.indices().permute(1, 0).contiguous().int()
            features_th = x.values()
            return cls(features_th, indices_th, spatial_shape, batch_size)

        @property
        def spatial_size(self):
            return np.prod(self.spatial_shape)

        def find_indice_pair(self, key):
            if key is None:
                return None
            if key in self.indice_dict:
                return self.indice_dict[key]
            return None

        def dense(self, channels_first=True):
            output_shape = [self.batch_size] + list(
                self.spatial_shape) + [self.features.shape[1]]
            res = scatter_nd(
                self.indices.to(self.features.device).long(), self.features,
                output_shape)
            if not channels_first:
                return res
            ndim = len(self.spatial_shape)
            trans_params = list(range(0, ndim + 1))
            trans_params.insert(1, ndim + 1)
            return res.permute(*trans_params).contiguous()

        @property
        def sparity(self):  # sic: spconv-v1.2 spells "sparsity" this way
            return self.indices.shape[0] / np.prod(
                self.spatial_shape) / self.batch_size
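
    A quick way to get a feel for this class is a round trip through from_dense and dense. The snippet below is a minimal sketch and assumes spconv-v1.2 is importable; it runs on the CPU because both methods are plain tensor bookkeeping:

    import torch
    import spconv

    x = torch.zeros(1, 4, 4, 3)                  # NHWC dense tensor (channel last)
    x[0, 1, 2] = torch.tensor([1.0, 2.0, 3.0])   # one active site
    st = spconv.SparseConvTensor.from_dense(x)
    print(st.features.shape, st.indices.shape)   # [num_active, 3] and [num_active, 3]
    print(st.spatial_shape, st.batch_size)       # torch.Size([4, 4]) 1
    y = st.dense(channels_first=False)           # scatter back to a dense NHWC tensor
    assert torch.equal(x, y)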

    Despite its name, SparseConvTensor is not itself a torch tensor; it is only an abstraction over a sparse tensor. Its members features, indices, and spatial_shape hold the active data, the indices of that data, and the spatial extent, respectively. Take the input of the first layer of VoxelBackBone8x in Second as an example, assuming the following configuration:

    POINT_CLOUD_RANGE: [0, -40, -3, 70.4, 40, 1]
    VOXEL_SIZE: [0.05, 0.05, 0.1]
    MAX_POINTS_PER_VOXEL: 5
    MAX_NUMBER_OF_VOXELS: {
        'train': 40000,
        'test': 40000
    }
    BATCH_SIZE_PER_GPU: 2

    features and indices both have shape [N, 4], where N is the total number of active voxels across the two point-cloud frames in the batch. spatial_shape, computed from POINT_CLOUD_RANGE and VOXEL_SIZE, comes out to [41, 1600, 1408] (see the sketch below). Regular 3D sparse convolution and 3D submanifold sparse convolution are defined by the classes SparseConv3d and SubMConv3d, respectively. Both derive from SparseConvolution, whose subm argument distinguishes the regular 3D sparse convolution from the submanifold one.
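
    Where [41, 1600, 1408] comes from: OpenPCDet divides the point-cloud range by the voxel size to get the grid size, and VoxelBackBone8x then reverses the axes to [z, y, x] and pads z by one. A sketch of the arithmetic:

    import numpy as np

    point_cloud_range = np.array([0, -40, -3, 70.4, 40, 1])
    voxel_size = np.array([0.05, 0.05, 0.1])
    grid_size = ((point_cloud_range[3:] - point_cloud_range[:3]) /
                 voxel_size).round().astype(np.int64)
    print(grid_size)                            # [1408 1600   40] as (x, y, z)
    sparse_shape = grid_size[::-1] + [1, 0, 0]  # reverse to (z, y, x), pad z by 1
    print(sparse_shape)                         # [  41 1600 1408]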

    import math

    import numpy as np
    import torch
    from torch.nn import init
    from torch.nn.parameter import Parameter

    import spconv
    from spconv import functional as Fsp
    from spconv import ops
    from spconv.modules import SparseModule

    # _calculate_fan_in_and_fan_out_hwio is a helper defined earlier in spconv/conv.py


    class SparseConvolution(SparseModule):
        __constants__ = [
            'stride', 'padding', 'dilation', 'groups', 'bias', 'subm', 'inverse',
            'transposed', 'output_padding', 'fused_bn'
        ]

        def __init__(self,
                     ndim,
                     in_channels,
                     out_channels,
                     kernel_size=3,
                     stride=1,
                     padding=0,
                     dilation=1,
                     groups=1,
                     bias=True,
                     subm=False,
                     output_padding=0,
                     transposed=False,
                     inverse=False,
                     indice_key=None,
                     fused_bn=False,
                     use_hash=False,
                     algo=ops.ConvAlgo.Native):
            super(SparseConvolution, self).__init__()
            assert groups == 1
            if not isinstance(kernel_size, (list, tuple)):
                kernel_size = [kernel_size] * ndim
            if not isinstance(stride, (list, tuple)):
                stride = [stride] * ndim
            if not isinstance(padding, (list, tuple)):
                padding = [padding] * ndim
            if not isinstance(dilation, (list, tuple)):
                dilation = [dilation] * ndim
            if not isinstance(output_padding, (list, tuple)):
                output_padding = [output_padding] * ndim
            for d, s in zip(dilation, stride):
                assert any([s == 1, d == 1]), "don't support this."
            self.ndim = ndim  # 2d, 3d, 4d, ...
            self.in_channels = in_channels
            self.out_channels = out_channels
            self.kernel_size = kernel_size
            self.conv1x1 = np.prod(kernel_size) == 1
            self.stride = stride
            self.padding = padding
            self.dilation = dilation
            self.transposed = transposed
            self.inverse = inverse
            self.output_padding = output_padding
            self.groups = groups
            self.subm = subm
            self.indice_key = indice_key
            self.fused_bn = fused_bn
            self.use_hash = use_hash
            self.algo = algo.value  # integer value of the ops.ConvAlgo enum
            self.weight = Parameter(
                torch.Tensor(*kernel_size, in_channels, out_channels))
            if bias:
                self.bias = Parameter(torch.Tensor(out_channels))
            else:
                self.register_parameter('bias', None)
            self.reset_parameters()

        def reset_parameters(self):
            n = self.in_channels
            init.kaiming_uniform_(self.weight, a=math.sqrt(5))
            if self.bias is not None:
                fan_in, _ = _calculate_fan_in_and_fan_out_hwio(self.weight)
                bound = 1 / math.sqrt(fan_in)
                init.uniform_(self.bias, -bound, bound)

        def forward(self, input):
            assert isinstance(input, spconv.SparseConvTensor)
            features = input.features            # e.g. torch.Size([N, 4])
            device = features.device
            # coordinates of the active features: [batch_idx, z_idx, y_idx, x_idx]
            indices = input.indices              # e.g. torch.Size([N, 4])
            spatial_shape = input.spatial_shape  # e.g. array([41, 1600, 1408])
            batch_size = input.batch_size
            if not self.subm:
                if self.transposed:
                    raise NotImplementedError  # branch elided; not used by Second
                else:
                    # output spatial shape, e.g. (41, 1600, 1408) -> (21, 800, 704)
                    out_spatial_shape = ops.get_conv_output_size(
                        spatial_shape, self.kernel_size, self.stride, self.padding,
                        self.dilation)
            else:
                out_spatial_shape = spatial_shape
            # a 1x1 convolution degenerates into a plain matmul on the features
            if self.conv1x1:
                features = torch.mm(
                    input.features,
                    self.weight.view(self.in_channels, self.out_channels))
                if self.bias is not None:
                    features += self.bias
                out_tensor = spconv.SparseConvTensor(features, input.indices,
                                                     input.spatial_shape,
                                                     input.batch_size)
                out_tensor.indice_dict = input.indice_dict
                out_tensor.grid = input.grid
                return out_tensor
            datas = input.find_indice_pair(self.indice_key)
            if self.inverse:
                assert datas is not None and self.indice_key is not None
                _, outids, indice_pairs, indice_pair_num, out_spatial_shape = datas
                assert indice_pair_num.shape[0] == np.prod(
                    self.kernel_size
                ), "inverse conv must have same kernel size as its couple conv"
            else:
                if self.indice_key is not None and datas is not None:
                    outids, _, indice_pairs, indice_pair_num, _ = datas
                else:
                    outids, indice_pairs, indice_pair_num = ops.get_indice_pairs(
                        indices,
                        batch_size,
                        spatial_shape,
                        self.kernel_size,
                        self.stride,
                        self.padding,
                        self.dilation,
                        self.output_padding,
                        self.subm,
                        self.transposed,
                        grid=input.grid,
                        use_hash=self.use_hash)
                    input.indice_dict[self.indice_key] = (outids, indices,
                                                          indice_pairs,
                                                          indice_pair_num,
                                                          spatial_shape)
            if self.fused_bn:
                assert self.bias is not None
                out_features = ops.fused_indice_conv(features, self.weight,
                                                     self.bias,
                                                     indice_pairs.to(device),
                                                     indice_pair_num,
                                                     outids.shape[0], self.inverse,
                                                     self.subm)
            else:
                if self.subm:
                    out_features = Fsp.indice_subm_conv(features, self.weight,
                                                        indice_pairs.to(device),
                                                        indice_pair_num,
                                                        outids.shape[0], self.algo)
                else:
                    if self.inverse:
                        out_features = Fsp.indice_inverse_conv(
                            features, self.weight, indice_pairs.to(device),
                            indice_pair_num, outids.shape[0], self.algo)
                    else:
                        out_features = Fsp.indice_conv(features, self.weight,
                                                       indice_pairs.to(device),
                                                       indice_pair_num,
                                                       outids.shape[0], self.algo)
                if self.bias is not None:
                    out_features += self.bias
            out_tensor = spconv.SparseConvTensor(out_features, outids,
                                                 out_spatial_shape, batch_size)
            out_tensor.indice_dict = input.indice_dict
            out_tensor.grid = input.grid
            return out_tensor
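
    To see both layer types in action, here is a minimal forward-pass sketch, assuming a CUDA build of spconv-v1.2; the strided SparseConv3d halves the spatial shape exactly as the comment in forward describes:

    import torch
    import spconv

    # two active voxels in a [41, 1600, 1408] grid, batch size 1
    indices = torch.tensor([[0, 10, 100, 200],
                            [0, 10, 100, 201]], dtype=torch.int32).cuda()
    features = torch.randn(2, 4).cuda()
    x = spconv.SparseConvTensor(features, indices, [41, 1600, 1408], batch_size=1)

    net = spconv.SparseSequential(
        spconv.SubMConv3d(4, 16, 3, padding=1, bias=False, indice_key="subm1"),
        spconv.SparseConv3d(16, 32, 3, stride=2, padding=1, bias=False),
    ).cuda()

    y = net(x)
    print(y.spatial_shape)   # [21, 800, 704]: downsampled by the strided conv
    print(y.features.shape)  # [num_active_out, 32]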

    To keep the focus on the essentials, the listing above omits some secondary code and the branches that the Second network never takes. The forward function of SparseConvolution requires its input to be spconv's own SparseConvTensor type. Inside forward, the two key steps of sparse convolution are carried out:

    Step 1: build the Rulebook;

    Step 2: run the actual sparse convolution according to the Rulebook built in Step 1.

    Step 1, building the Rulebook, is done by the ops.get_indice_pairs interface; Step 2 is done by Fsp.indice_subm_conv or Fsp.indice_conv, depending on the convolution type. Why are two such tightly coupled interfaces implemented in two different modules, ops and Fsp? If you follow the code further you will find they converge: Fsp.indice_subm_conv and Fsp.indice_conv go through the SubMConvFunction and SparseConvFunction objects in functional.py and eventually call indice_conv and friends in the ops module anyway. In the end, both call into the C++ extension shared library as torch.ops.spconv.xx. The real difference is that the two steps showcase two of PyTorch's ways of adding C++ extensions. For Step 1, rulebook construction takes the input indices, the kernel size, and other parameters and builds the Rulebook; here Python calls the C++ interface directly. Step 2, by contrast, is wrapped in a torch.autograd.Function. A Function represents a differentiable function in PyTorch: once its forward and backward are implemented, it can be used like an ordinary PyTorch function, and PyTorch dispatches the forward and backward computation automatically. For model deployment, Function has one more useful property: if it defines a static method named symbolic, then during torch.onnx.export() the Function can be converted into an ONNX operator according to the rules in symbolic. This symbolic is the symbolic function mentioned earlier; the only constraint is that it must be named symbolic.

    import torch
    from torch import nn
    from torch.autograd import Function

    import spconv.ops as ops


    class SparseConvFunction(Function):
        @staticmethod
        def forward(ctx, features, filters, indice_pairs, indice_pair_num,
                    num_activate_out, algo):
            ctx.save_for_backward(indice_pairs, indice_pair_num, features, filters)
            ctx.algo = algo
            return ops.indice_conv(features,
                                   filters,
                                   indice_pairs,
                                   indice_pair_num,
                                   num_activate_out,
                                   False,
                                   algo=algo)

        @staticmethod
        def backward(ctx, grad_output):
            indice_pairs, indice_pair_num, features, filters = ctx.saved_tensors
            input_bp, filters_bp = ops.indice_conv_backward(features,
                                                            filters,
                                                            grad_output,
                                                            indice_pairs,
                                                            indice_pair_num,
                                                            False,
                                                            algo=ctx.algo)
            return input_bp, filters_bp, None, None, None, None


    class SubMConvFunction(Function):
        @staticmethod
        def forward(ctx, features, filters, indice_pairs, indice_pair_num,
                    num_activate_out, algo):
            ctx.save_for_backward(indice_pairs, indice_pair_num, features, filters)
            ctx.algo = algo
            return ops.indice_conv(features,
                                   filters,
                                   indice_pairs,
                                   indice_pair_num,
                                   num_activate_out,
                                   False,
                                   True,
                                   algo=algo)

        @staticmethod
        def backward(ctx, grad_output):
            indice_pairs, indice_pair_num, features, filters = ctx.saved_tensors
            input_bp, filters_bp = ops.indice_conv_backward(features,
                                                            filters,
                                                            grad_output,
                                                            indice_pairs,
                                                            indice_pair_num,
                                                            False,
                                                            True,
                                                            algo=ctx.algo)
            return input_bp, filters_bp, None, None, None, None

    For a brand-new extension operator like 3D sparse convolution, we must implement not only the forward function but also the backward function ourselves, because PyTorch currently cannot derive a backward automatically from a C++-side forward. A clear understanding of the new operator's back-propagation math is therefore essential.
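
    Since the backward pass is hand-written, it is worth verifying it against numerical differentiation with torch.autograd.gradcheck. The toy Function below shows the recipe; the same check would apply to SparseConvFunction and SubMConvFunction, given double-precision inputs and a small rulebook:

    import torch
    from torch.autograd import Function

    class Scale2(Function):
        @staticmethod
        def forward(ctx, x):
            return 2.0 * x

        @staticmethod
        def backward(ctx, grad_output):
            return 2.0 * grad_output  # d(2x)/dx = 2

    x = torch.randn(5, dtype=torch.double, requires_grad=True)
    assert torch.autograd.gradcheck(Scale2.apply, (x,))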

    【Appendix: Can Second be exported to ONNX easily?】

    How PyTorch-to-ONNX model conversion works

    When converting a PyTorch model to an ONNX model, we usually just call torch.onnx.export. What torch.onnx.export actually needs, though, is a torch.jit.ScriptModule, and there are two ways to turn an ordinary PyTorch model into such a TorchScript model and export its computation graph: tracing (trace) and scripting (script). If a plain PyTorch model (torch.nn.Module) is passed to torch.onnx.export, it is exported via tracing by default.

    Tracing runs the model once (which is why we must supply example inputs at export time), records every computation executed during that inference pass, and stitches the records into a static computation graph. Precisely because of this, tracing cannot capture control flow in the model (such as loops); scripting, by contrast, parses the model and records all control flow correctly.
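
    The difference is easy to demonstrate with a toy module whose loop count depends on the input shape; this sketch uses only stock PyTorch:

    import torch

    class Loop(torch.nn.Module):
        def forward(self, x):
            for _ in range(x.size(0)):  # control flow that depends on the input
                x = x + 1
            return x

    m = Loop()
    traced = torch.jit.trace(m, torch.ones(2))  # loop unrolled for this input only
    scripted = torch.jit.script(m)              # loop preserved as real control flow
    print(traced.code)    # a fixed number of additions baked in
    print(scripted.code)  # contains an actual loop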

    Problems when exporting Second to ONNX

    Problem 1: ops.get_indice_pairs is not recognized!

    RuntimeError: ONNX export failed on an operator with
    unrecognized namespace spconv::get_indice_pairs.
    If you are trying to export a custom operator,
    make sure you registered it with the right domain and version.

    Problem 2: no symbolic function is defined for SparseConvFunction!
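
    Problem 2 could in principle be addressed by giving the Function a symbolic static method, as described earlier. The sketch below is illustrative only: spconv_onnx::SubMConv is a made-up operator name, and a real deployment would still need a consumer (e.g. an ONNX Runtime or TensorRT plugin) for the emitted node:

    from torch.autograd import Function

    class SubMConvFunctionWithSymbolic(Function):
        # forward/backward as in functional.py (omitted here)

        @staticmethod
        def symbolic(g, features, filters, indice_pairs, indice_pair_num,
                     num_activate_out, algo):
            # map the Function onto a custom-domain ONNX node during export
            return g.op("spconv_onnx::SubMConv", features, filters,
                        indice_pairs, indice_pair_num,
                        num_activate_out_i=num_activate_out)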

    【References】

    TorchScript 解读(二):Torch jit tracer 实现解析 (Zhihu)

    模型部署入门教程(四):在 PyTorch 中支持更多 ONNX 算子 (Zhihu)

    这可能是关于 PyTorch 底层算子扩展最详细的总结了! (Zhihu)

    PyTorch 扩展自定义 Python/C++ (CUDA) 算子的若干方法总结 (Zhihu)

    PyTorch 中构建和调用 C++/CUDA 扩展 (CSDN, NaiveYoungPeo)

Original article: https://blog.csdn.net/ChuiGeDaQiQiu/article/details/127607480