【OpenPCDet】Sparse Convolution SPConv-v1.2 Code Walkthrough (3)


    【Building the Rulebook】

    Traditional dense convolution is implemented through img2col; sparse convolution is implemented through a rulebook. What is a rulebook? Essentially, it is a table. First, hash tables are built for the input and the output, mapping the input and output tensor coordinates to serial indices. Then the entries of the input hash table are linked to the entries of the output hash table, and with these links the sparse convolution can basically be carried out, so building the rulebook is the key step of the implementation. In the project code, this step calls the Python function get_indice_pairs, which in turn calls the C++ function getIndicePairs in the spconv shared library to do the actual work. Let us first look at a toy illustration of the idea, and then at get_indice_pairs itself.
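    To make the idea concrete before reading the real code, here is a tiny, purely illustrative sketch of a rulebook for a 2D submanifold convolution with a 3x3 kernel (plain Python dictionaries, not spconv's actual data structures; the coordinates are made up):

    from collections import defaultdict

    active = [(1, 1), (1, 2), (3, 3)]                    # active (y, x) coordinates
    coord_to_idx = {c: i for i, c in enumerate(active)}  # hash table: coordinate -> serial index

    offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]  # the 9 kernel offsets
    rulebook = defaultdict(list)                         # kernel offset -> list of (in_idx, out_idx)

    for out_idx, (y, x) in enumerate(active):            # submanifold: outputs sit on the same sites
        for k, (dy, dx) in enumerate(offsets):
            in_coord = (y + dy, x + dx)
            if in_coord in coord_to_idx:                 # only active inputs contribute
                rulebook[k].append((coord_to_idx[in_coord], out_idx))

    print(dict(rulebook))
    # -> {4: [(0, 0), (1, 1), (2, 2)], 5: [(1, 0)], 3: [(0, 1)]}
    # offset index 4 is the kernel centre, which pairs every active site with itself

    The real entry point, get_indice_pairs, is shown below.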

    def get_indice_pairs(indices,
                         batch_size,
                         spatial_shape,
                         ksize=3,
                         stride=1,
                         padding=0,
                         dilation=1,
                         out_padding=0,
                         subm=False,
                         transpose=False,
                         grid=None,
                         use_hash=False):
        ndim = indices.shape[1] - 1  # e.g. 4 -> 3
        if not isinstance(ksize, (list, tuple)):
            ksize = [ksize] * ndim  # e.g. 3 -> [3, 3, 3], a 3x3x3 kernel
        if not isinstance(stride, (list, tuple)):
            stride = [stride] * ndim  # e.g. 1 -> [1, 1, 1]
        if not isinstance(padding, (list, tuple)):
            padding = [padding] * ndim  # e.g. 0 -> [0, 0, 0]
        if not isinstance(dilation, (list, tuple)):
            dilation = [dilation] * ndim  # e.g. 1 -> [1, 1, 1]
        if not isinstance(out_padding, (list, tuple)):
            out_padding = [out_padding] * ndim
        # stride and dilation may not both differ from 1
        for d, s in zip(dilation, stride):
            # any() is true if at least one element is true
            assert any([s == 1, d == 1]), "don't support this."
        if not subm:
            if transpose:
                out_shape = get_deconv_output_size(spatial_shape, ksize, stride,
                                                   padding, dilation, out_padding)
            else:
                out_shape = get_conv_output_size(spatial_shape, ksize, stride,
                                                 padding, dilation)
        else:
            out_shape = spatial_shape  # submanifold: output shape equals input shape
        if grid is None:
            res = torch.ops.spconv.get_indice_pairs(indices, batch_size, out_shape,
                                                    spatial_shape, ksize, stride,
                                                    padding, dilation, out_padding,
                                                    int(subm), int(transpose),
                                                    int(use_hash))
            return res
        else:
            # ... omitted ...
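    The helpers get_conv_output_size / get_deconv_output_size used above compute the output spatial shape. For the ordinary (non-transposed) case this is presumably the standard convolution size formula; the function below is an illustrative re-implementation, not code copied from spconv:

    def conv_output_size(spatial_shape, ksize, stride, padding, dilation):
        # per spatial dimension: out = floor((in + 2*pad - dilation*(kernel-1) - 1) / stride) + 1
        return [(i + 2 * p - d * (k - 1) - 1) // s + 1
                for i, k, s, p, d in zip(spatial_shape, ksize, stride, padding, dilation)]

    # e.g. a 41x1600x1408 voxel grid with a 3x3x3 kernel, stride 2, padding 1:
    print(conv_output_size([41, 1600, 1408], [3, 3, 3], [2, 2, 2], [1, 1, 1], [1, 1, 1]))
    # -> [21, 800, 704]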

    The function mainly performs parameter validation and preprocessing. For an ordinary (non-submanifold) sparse convolution it computes the output spatial shape from the input shape, kernel size, stride and the other parameters; for a submanifold sparse convolution no computation is needed, because the output shape is the same as the input shape. Getting the output shape right matters, since building the rulebook is precisely about establishing the mapping between input and output. Once the parameters are ready, the core get_indice_pairs call is made. Because spconv registers its operators by loading a .so file via torch.ops.load_library, the function is invoked as torch.ops.spconv.get_indice_pairs. The registration itself is done in src/spconv/all.cc using PyTorch's OP Register mechanism for exposing low-level C++ APIs, so the call actually lands in the getIndicePairs function in src/spconv/spconv_ops.cc.

    From file: src/spconv/all.cc

    // #include directives omitted
    static auto registry =
        torch::RegisterOperators()
            .op("spconv::get_indice_pairs", &spconv::getIndicePairs)
            .op("spconv::indice_conv", &spconv::indiceConv)
            .op("spconv::indice_conv_batch", &spconv::indiceConvBatch)
            .op("spconv::indice_conv_backward", &spconv::indiceConvBackward)
            .op("spconv::fused_indice_conv_bn", &spconv::fusedIndiceConvBatchNorm)
            .op("spconv::indice_maxpool", &spconv::indiceMaxPool)
            .op("spconv::indice_maxpool_backward", &spconv::indiceMaxPoolBackward)
            .op("spconv::nms", &spconv::nonMaxSuppression<float>)
            .op("spconv::pillar_scatter_float", &spconv::pointPillarScatter<float>)
            .op("spconv::pillar_scatter_half", &spconv::pointPillarScatter);

    【Note on OP Register】Like the C++ extension approach, OP Register is a mechanism PyTorch provides for registering low-level custom operators. A registered operator can be called as torch.xxx or tensor.xxx, and the approach is likewise decoupled from the PyTorch source tree, so adding or modifying operators does not require recompiling PyTorch. Registering a new operator this way is very simple: write the C++ implementation of the operator, then register it through PyTorch's low-level registration interface (torch::RegisterOperators). On the Python side, the compiled library is loaded once and the operator becomes callable under its registered namespace, as sketched below.
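    A minimal sketch of the Python side (the library path and the example arguments are assumptions for illustration, not taken from the repository): after loading the compiled extension, the registered operators appear under torch.ops.spconv.

    import torch

    # load the compiled spconv extension; the exact .so name/path depends on your build
    torch.ops.load_library("spconv/libspconv_ops.so")

    # a single active voxel, stored as [batch_idx, z, y, x]
    indices = torch.tensor([[0, 1, 2, 3]], dtype=torch.int32)

    # same argument order as the Python wrapper above (submanifold case, so
    # out_shape == spatial_shape); subm/transpose/use_hash are passed as ints
    res = torch.ops.spconv.get_indice_pairs(
        indices, 1, [8, 8, 8], [8, 8, 8],      # batch_size, out_shape, spatial_shape
        [3, 3, 3], [1, 1, 1], [1, 1, 1],       # ksize, stride, padding
        [1, 1, 1], [0, 0, 0], 1, 0, 0)         # dilation, out_padding, subm, transpose, use_hash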

    From file: src/spconv/spconv_ops.cc

    // #include directives omitted
    namespace spconv {
    std::vector<torch::Tensor>
    getIndicePairs(torch::Tensor indices, int64_t batchSize,
                   std::vector<int64_t> outSpatialShape,
                   std::vector<int64_t> spatialShape,
                   std::vector<int64_t> kernelSize, std::vector<int64_t> stride,
                   std::vector<int64_t> padding, std::vector<int64_t> dilation,
                   std::vector<int64_t> outPadding, int64_t _subM,
                   int64_t _transpose, int64_t _useHash) {
      // auto timer = spconv::CudaContextTimer<>();
      bool subM = _subM != 0;
      bool transpose = _transpose != 0;
      auto NDim = kernelSize.size();
      // CPU always use hash (tsl::robin_map).
      bool useHash = _useHash != 0 || indices.device().type() == torch::kCPU;
      auto numAct = indices.size(0); // e.g. torch.Size([N,4]) -> N
      auto coorDim = indices.size(1) - 1;
      TV_ASSERT_RT_ERR(NDim == coorDim, "error");
      TV_ASSERT_RT_ERR(kernelSize.size() == coorDim, "error");
      TV_ASSERT_RT_ERR(outSpatialShape.size() == coorDim, "error");
      TV_ASSERT_RT_ERR(stride.size() == coorDim, "error");
      TV_ASSERT_RT_ERR(padding.size() == coorDim, "error");
      TV_ASSERT_RT_ERR(outPadding.size() == coorDim, "error");
      TV_ASSERT_RT_ERR(dilation.size() == coorDim, "error");
      // e.g. [3,3,3] -> 3*3*3 -> 27
      auto kernelVolume = kernelSize[0];
      for (int i = 1; i < kernelSize.size(); ++i) {
        kernelVolume *= kernelSize[i];
      }
      TV_ASSERT_RT_ERR(kernelVolume <= 4096, "error");
      auto outputVolume = outSpatialShape[0];
      for (int i = 1; i < outSpatialShape.size(); ++i) {
        outputVolume *= outSpatialShape[i];
      }
      std::string msg = "due to limits of cuda hash, the volume of dense space "
                        "include batch size ";
      msg += "must less than std::numeric_limits<int>::max() = 2e9";
      TV_ASSERT_RT_ERR(batchSize * outputVolume < std::numeric_limits<int>::max(),
                       msg);
      // e.g. torch.Size([2,27,16000])
      torch::Tensor indicePairs = torch::full({2, kernelVolume, numAct}, -1,
          torch::dtype(torch::kInt32).device(indices.device()));
      // e.g. torch.Size([27])
      torch::Tensor indiceNum = torch::zeros({kernelVolume},
          torch::dtype(torch::kInt32).device(indices.device()));
      auto gridSize = batchSize * outputVolume;
      if (useHash) {
        gridSize = batchSize; // useHash was passed as true, or we are on CPU
      }
      torch::Tensor gridOut = torch::full({gridSize}, -1,
          torch::dtype(torch::kInt32).device(indices.device()));
      gridOut = gridOut.view({batchSize, -1});
      int64_t numActOut = -1;
      for (int i = 0; i < NDim; ++i) {
        if (subM) {
          padding[i] = kernelSize[i] / 2; // derive padding from the kernel size
          stride[i] = 1;
        }
      }
      // tv::ssprint("prepare", timer.report() / 1000.0);
      if (subM) {
        if (indices.device().type() == torch::kCPU) {
          numActOut = create_submconv_indice_pair_cpu(
              indices, gridOut, indicePairs, indiceNum, kernelSize, stride, padding,
              dilation, outSpatialShape, transpose, false, useHash);
        }
    #ifdef TV_CUDA
        else if (indices.device().type() == torch::kCUDA) {
          numActOut = create_submconv_indice_pair_cuda(
              indices, gridOut, indicePairs, indiceNum, kernelSize, stride, padding,
              dilation, outSpatialShape, transpose, false, useHash);
          // Huh?? The GPU returns -1 and we redo it on the CPU? Why? And why
          // wouldn't the CPU result be -1 as well??
          if (numActOut == -1) {
            auto device = indices.device();
            indicePairs = indicePairs.to({torch::kCPU});
            indiceNum = indiceNum.to({torch::kCPU});
            indices = indices.to({torch::kCPU});
            numActOut = create_submconv_indice_pair_cpu(
                indices, gridOut, indicePairs, indiceNum, kernelSize, stride,
                padding, dilation, outSpatialShape, transpose, false, useHash);
            return {indices.to(device), indicePairs.to(device),
                    indiceNum.to(device)};
          }
        }
    #endif
        else {
          TV_THROW_INVALID_ARG("unknown device type");
        }
        // tv::ssprint("subm", timer.report() / 1000.0);
        return {indices, indicePairs, indiceNum};
      } else {
        // for a regular sparse convolution (spconv), initialize indicePairUnique and outInds
        auto indicePairUnique = torch::full(
            {indicePairs.numel() / 2 + 1}, std::numeric_limits<int>::max(),
            torch::dtype(torch::kInt32).device(indices.device()));
        // e.g. torch.Size([N*27,3+1])
        torch::Tensor outInds =
            torch::zeros({numAct * kernelVolume, coorDim + 1},
                         torch::dtype(torch::kInt32).device(indices.device()));
        if (indices.device().type() == torch::kCPU) {
          numActOut = create_conv_indice_pair_cpu(
              indices, outInds, gridOut, indicePairs, indiceNum, kernelSize, stride,
              padding, dilation, outSpatialShape, transpose, false, useHash);
        }
    #ifdef TV_CUDA
        else if (indices.device().type() == torch::kCUDA) {
          numActOut = create_conv_indice_pair_p1_cuda(
              indices, indicePairs, indiceNum, indicePairUnique, kernelSize, stride,
              padding, dilation, outSpatialShape, transpose);
          if (numActOut > 0) {
            auto res = torch::_unique(indicePairUnique);
            indicePairUnique = std::get<0>(res);
            numActOut = create_conv_indice_pair_p2_cuda(
                indices, outInds, gridOut, indicePairs, indiceNum, indicePairUnique,
                outSpatialShape, transpose, false, useHash);
            if (numActOut == -1) {
              auto device = indices.device();
              outInds = outInds.to({torch::kCPU});
              indicePairs = indicePairs.to({torch::kCPU});
              indiceNum = indiceNum.to({torch::kCPU});
              indices = indices.to({torch::kCPU});
              numActOut = create_conv_indice_pair_cpu(
                  indices, outInds, gridOut, indicePairs, indiceNum, kernelSize,
                  stride, padding, dilation, outSpatialShape, transpose, false,
                  useHash);
              return {outInds.to(device).slice(0, 0, numActOut),
                      indicePairs.to(device), indiceNum.to(device)};
            }
          }
        }
    #endif
        else {
          TV_THROW_INVALID_ARG("unknown device type");
        }
        return {outInds.slice(0, 0, numActOut), indicePairs, indiceNum};
      }
    }

    For simplicity, when analyzing how getIndicePairs builds the rulebook we only discuss the GPU logic, and we treat submanifold 3D sparse convolution and regular 3D sparse convolution separately, starting with the submanifold case. The three most important variables in the code are indicePairs, indiceNum and gridOut, created as follows.

    auto outputVolume = outSpatialShape[0];
    for (int i = 1; i < outSpatialShape.size(); ++i) {
      outputVolume *= outSpatialShape[i];
    }
    // e.g. torch.Size([2,27,16000])
    torch::Tensor indicePairs = torch::full({2, kernelVolume, numAct}, -1,
        torch::dtype(torch::kInt32).device(indices.device()));
    // e.g. torch.Size([27])
    torch::Tensor indiceNum = torch::zeros({kernelVolume},
        torch::dtype(torch::kInt32).device(indices.device()));
    auto gridSize = batchSize * outputVolume;
    torch::Tensor gridOut = torch::full({gridSize}, -1,
        torch::dtype(torch::kInt32).device(indices.device()));
    gridOut = gridOut.view({batchSize, -1});

    If you are familiar with the rulebook idea, it is not hard to see that indicePairs ultimately encodes the input-to-output mapping rules of the sparse convolution. Its shape is {2, kernelVolume, numAct}: 2 stands for the two directions (input and output), and kernelVolume is the volume of the convolution kernel, e.g. 27 (3*3*3) for a 3x3x3 kernel. numAct is the number of active input features. indiceNum stores, for each kernel position, the total number of computations performed there; because the convolution is sparse, each kernel element may be multiplied with a different number of active inputs.
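    To make the layout concrete, here is a small, illustrative CPU emulation (ordinary PyTorch, not spconv's CUDA kernels) of how one pair is recorded: for kernel position k, the current value of indiceNum[k] gives the next free slot, indicePairs[0, k, slot] stores the input index, indicePairs[1, k, slot] stores the output index, and the counter is incremented (the CUDA kernels do this increment atomically):

    import torch

    kernel_volume, num_act = 27, 5
    indice_pairs = torch.full((2, kernel_volume, num_act), -1, dtype=torch.int32)
    indice_num = torch.zeros(kernel_volume, dtype=torch.int32)

    def record_pair(k, in_idx, out_idx):
        slot = int(indice_num[k])
        indice_pairs[0, k, slot] = in_idx   # input direction
        indice_pairs[1, k, slot] = out_idx  # output direction
        indice_num[k] += 1

    record_pair(k=13, in_idx=0, out_idx=0)  # the center of a 3x3x3 kernel pairs a site with itself
    record_pair(k=13, in_idx=1, out_idx=1)
    print(indice_num[13].item())            # -> 2 pairs gathered at kernel position 13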

    The rulebook built this way is illustrated in the figure of the original post (not reproduced here). In the code, the GPU builds the rulebook by calling create_submconv_indice_pair_cuda.

    From file: src/spconv/indice.cu

    int create_submconv_indice_pair_cuda(
        torch::Tensor indicesIn,    // e.g. torch.Size([N,4])
        torch::Tensor gridsOut,     // e.g. torch.Size([bs, gridOutVolume])
        torch::Tensor indicePairs,  // e.g. torch.Size([2, kernelVolume, numAct])
        torch::Tensor indiceNum,
        std::vector<int64_t> kernelSize,
        std::vector<int64_t> stride,
        std::vector<int64_t> padding,
        std::vector<int64_t> dilation,
        std::vector<int64_t> outSpatialShape,
        bool transpose, bool resetGrid, bool useHash) {
      auto stream = at::cuda::getCurrentCUDAStream();
      auto ndim = outSpatialShape.size(); // 3
      auto numActIn = indicesIn.size(0);
      int batchSize = gridsOut.size(0);
      auto kernelVolume = indiceNum.size(0); // e.g. 3x3x3 => 27
      if (numActIn == 0)
        return 0;
      bool failed = false;
      tv::dispatch_torch<int32_t>(indicesIn.scalar_type(), [&](auto IndexValue) {
        using Index = TV_DECLTYPE(IndexValue); // type deduction
        using IndexGrid = int32_t;
        tv::dispatch_int<2, 3, 4>(ndim, [&](auto I) {
          constexpr int NDim = TV_DECLTYPE(I)::value;
          tv::SimpleVector<Index, NDim> ks(kernelSize.begin(), kernelSize.end());
          tv::SimpleVector<Index, NDim> st(stride.begin(), stride.end());
          tv::SimpleVector<Index, NDim> pa(padding.begin(), padding.end());
          tv::SimpleVector<Index, NDim> di(dilation.begin(), dilation.end());
          tv::SimpleVector<Index, NDim> ou(outSpatialShape.begin(), outSpatialShape.end());
          Index spatialVolume = 1;
          for (int i = 0; i < NDim; ++i) {
            spatialVolume *= outSpatialShape[i];
          }
          if (useHash) {
            // ... omitted ...
          } else {
            // auto timer = spconv::CudaContextTimer<>();
            prepareSubMGridKernel<Index, IndexGrid, NDim>
                <<<tv::cuda::getBlocks(numActIn), tv::cuda::CUDA_NUM_THREADS, 0, stream>>>(
                    tv::torch2tv<const Index>(indicesIn),
                    tv::torch2tv<IndexGrid>(gridsOut),
                    ou, spatialVolume);
            // tv::ssprint("prepareSubMGridKernel", timer.report() / 1000.0);
            TV_CHECK_CUDA_ERR_V2("prepareSubMGridKernel failed");
            // when dilation all one, we use a simple kernel to calc result
            bool dilation_one = true;
            for (int i = 0; i < NDim; ++i) {
              dilation_one &= di[i] == 1;
            }
            auto found = false;
            if (dilation_one && (NDim == 2 || NDim == 3)) {
              auto indiceNumCpu = indiceNum.cpu(); // do what?? no use!
              if (NDim == 2) {
                // ... omitted ...
              } else if (NDim == 3) {
                tv::SimpleVector<Index, 3> ou_(outSpatialShape.begin(), outSpatialShape.end());
                tv::dispatch_int_noexcept<1, 3, 5>(kernelSize[0], [&](auto K0C) {
                  tv::dispatch_int_noexcept<1, 3, 5>(kernelSize[1], [&](auto K1C) {
                    tv::dispatch_int_noexcept<1, 3, 5>(kernelSize[2], [&](auto K2C) {
                      constexpr int K0 = TV_DECLTYPE(K0C)::value;
                      constexpr int K1 = TV_DECLTYPE(K1C)::value;
                      constexpr int K2 = TV_DECLTYPE(K2C)::value;
                      found = true;
                      getSubMIndicePairsKernel3<Index, IndexGrid, K0, K1, K2>
                          <<<tv::cuda::getBlocks(numActIn), tv::cuda::CUDA_NUM_THREADS, 0, stream>>>(
                              tv::torch2tv<const Index>(indicesIn),
                              tv::torch2tv<IndexGrid>(gridsOut),
                              tv::torch2tv<Index>(indicePairs),
                              tv::torch2tv<Index>(indiceNum), ou_,
                              spatialVolume);
                    });
                  });
                });
              }
            }
            if (!found) {
              // ... omitted ...
            }
            // tv::ssprint("getSubMIndicePairsKernel", timer.report() / 1000.0);
          }
          if (resetGrid && (!useHash)) {
            resetGridSubMKernel<Index, IndexGrid, NDim>
                <<<tv::cuda::getBlocks(numActIn), tv::cuda::CUDA_NUM_THREADS, 0,
                   stream>>>(indicesIn.data_ptr<Index>(),
                             tv::torch2tv<IndexGrid>(gridsOut), ou, numActIn);
            TV_CHECK_CUDA_ERR_V2("resetGridKernel failed");
          }
        });
      });
      if (failed) {
        return -1;
      }
      return numActIn;
    }

    When reading create_submconv_indice_pair_cuda we do not need to dig into how the dynamic dispatch machinery below works,

    tv::dispatch_torch<int32_t>(indicesIn.scalar_type(), [&](auto IndexValue) {
      ....
      tv::dispatch_int<2, 3, 4>(ndim, [&](auto I) {
        ....
      });
    });

    and can focus directly on the kernel launch:

    prepareSubMGridKernel<Index, IndexGrid, NDim>
        <<<tv::cuda::getBlocks(numActIn), tv::cuda::CUDA_NUM_THREADS, 0, stream>>>(
            tv::torch2tv<const Index>(indicesIn),
            tv::torch2tv<IndexGrid>(gridsOut),
            ou, spatialVolume);

    Recall that a CUDA kernel launch configuration has the form <<<grid_size, block_size>>>. Here grid_size (the number of thread blocks) and block_size (the number of threads per block) are usually variables of a struct type (dim3), but they can also be plain integers, and in the prepareSubMGridKernel launch both are in fact plain integers. block_size is tv::cuda::CUDA_NUM_THREADS, defined in include/tensorview/cuda_utils.h as 1024, while grid_size is computed by tv::cuda::getBlocks(numActIn), where numActIn is the number of active input entries.

    From file: include/tensorview/cuda_utils.h

    template <typename T1, typename T2> inline int DivUp(const T1 a, const T2 b) {
      return (a + b - 1) / b;
    }
    // Use 1024 threads per block, which requires cuda sm_2x or above
    constexpr int CUDA_NUM_THREADS = 1024;
    // CUDA: number of blocks for threads.
    inline int getNumThreads(const int N) {
      if (N > CUDA_NUM_THREADS) {
        return CUDA_NUM_THREADS;
      }
      return DivUp(N, 32) * 32;
    }
    inline int getBlocks(const int N) {
      TV_ASSERT_RT_ERR(N > 0,
                       "CUDA kernel launch blocks must be positive, but got N=", N);
      return DivUp(N, getNumThreads(N));
    }
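    A quick way to sanity-check this launch-size arithmetic is to reproduce it in Python (an illustrative re-implementation of the three helpers above):

    def div_up(a, b):
        return (a + b - 1) // b

    CUDA_NUM_THREADS = 1024

    def get_num_threads(n):
        return CUDA_NUM_THREADS if n > CUDA_NUM_THREADS else div_up(n, 32) * 32

    def get_blocks(n):
        assert n > 0
        return div_up(n, get_num_threads(n))

    # e.g. 16000 active inputs -> 1024 threads per block, 16 blocks
    print(get_num_threads(16000), get_blocks(16000))  # 1024 16
    # small launches round the thread count up to a multiple of the 32-thread warp
    print(get_num_threads(100), get_blocks(100))      # 128 1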

    prepareSubMGridKernel essentially builds a hash table from output tensor coordinates (encoded as a flattened index) to output serial numbers.

    From file: include/spconv/indice.cu.h

    template <typename Index, typename IndexGrid, unsigned NDim>
    __global__ void prepareSubMGridKernel(
        tv::TensorView<const Index> indicesIn, tv::TensorView<IndexGrid> gridsOut,
        const tv::SimpleVector<Index, NDim> outSpatialShape, Index spatialVolume) {
      auto numActIn = indicesIn.dim(0); // e.g. torch.Size([N,4]) => N
      Index index = 0;
      for (int ix : tv::KernelLoopX<int>(numActIn)) {
        index = tv::ArrayIndexRowMajor<NDim, NDim>::runPtrs(
                    indicesIn.data() + ix * (NDim + 1) + 1, outSpatialShape.data(), 0) +
                spatialVolume * indicesIn(ix, 0);
        gridsOut[index] = ix;
      }
    }

    The first time I saw the definition of tv::ArrayIndexRowMajor, the author's fancy style threw me off, but in the end it is just the familiar row-major index computation, written with templates and recursion.

    From file: include/tensorview/tensorview.h

    template <int N, int Ndim> struct ArrayIndexRowMajor {
      // ... omitted ...
      template <typename TShape, typename Tinit>
      TV_HOST_DEVICE_INLINE static unsigned
      runPtrs(const TShape *indexes, const TShape *shape, Tinit start) {
        return ArrayIndexRowMajor<N - 1, Ndim>::runPtrs(
            indexes, shape, (indexes[Ndim - N] + start) * shape[Ndim - N + 1]);
      }
    };

    template <int Ndim> struct ArrayIndexRowMajor<1, Ndim> {
      // ... omitted ...
      template <typename TShape, typename Tinit>
      TV_HOST_DEVICE_INLINE static unsigned
      runPtrs(const TShape *indexes, const TShape *shape, Tinit start) {
        return start + indexes[Ndim - 1];
      }
    };
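    Unrolling the recursion shows that it is just Horner-style row-major flattening; an equivalent Python sketch (illustrative, with made-up coordinates):

    def array_index_row_major(indexes, shape):
        # equivalent of ArrayIndexRowMajor<N, N>::runPtrs(indexes, shape, 0):
        # start = (start + indexes[i]) * shape[i + 1] for i = 0..N-2, then + indexes[-1]
        start = 0
        for i in range(len(shape) - 1):
            start = (start + indexes[i]) * shape[i + 1]
        return start + indexes[-1]

    # a voxel at (z, y, x) = (2, 5, 7) in a D x H x W = 41 x 1600 x 1408 grid:
    z, y, x = 2, 5, 7
    assert array_index_row_major([z, y, x], [41, 1600, 1408]) == (z * 1600 + y) * 1408 + x

    On top of this flattened spatial index, prepareSubMGridKernel adds batch_idx * spatialVolume (the batch index is column 0 of indicesIn) before writing gridsOut[index] = ix.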


    Original post: https://blog.csdn.net/ChuiGeDaQiQiu/article/details/127660561