• (9) Point Cloud Data Processing Notes: Global Registration


    1. Main references

    (1) Official documentation

    Global registration — Open3D 0.16.0 documentation

    2. Purpose and principle

    2.1 Personal understanding

    My take: (1) ICP's job is to fine-tune the alignment of two point clouds when a rough (known but imprecise) transformation is already available. (2) Global registration assumes no initial relationship between the two clouds; it computes a rough transformation directly, which is then handed to ICP as its starting point.

    2.2 Introduction

    Both ICP registration and colored point cloud registration are known as local registration methods because they rely on a rough alignment as initialization. This tutorial shows another class of registration methods, known as global registration. This family of algorithms does not require an alignment for initialization; they usually produce less tight alignment results and are used as initialization for the local methods.

    2.3 Visualization helper used in the examples

    The helper function below visualizes the transformed source point cloud together with the target point cloud:

    import copy

    import open3d as o3d

    def draw_registration_result(source, target, transformation):
        # Visualize the transformed source (yellow) together with the target (blue).
        source_temp = copy.deepcopy(source)
        target_temp = copy.deepcopy(target)
        source_temp.paint_uniform_color([1, 0.706, 0])
        target_temp.paint_uniform_color([0, 0.651, 0.929])
        source_temp.transform(transformation)
        o3d.visualization.draw_geometries([source_temp, target_temp],
                                          zoom=0.4559,
                                          front=[0.6452, -0.3036, -0.7011],
                                          lookat=[1.9892, 2.0208, 1.8945],
                                          up=[-0.2779, -0.9482, 0.1556])

    2.4 Extract geometric features

    We downsample the point cloud, estimate the normals, and then compute an FPFH feature for each point. The FPFH feature is a 33-dimensional vector that describes the local geometric property of a point. A nearest-neighbor query in this 33-dimensional space returns points with similar local geometric structure. See [Rusu2009] for details.

    import open3d as o3d

    def preprocess_point_cloud(pcd, voxel_size):
        # Downsample, estimate normals, and compute 33-dimensional FPFH features.
        print(":: Downsample with a voxel size %.3f." % voxel_size)
        pcd_down = pcd.voxel_down_sample(voxel_size)
        radius_normal = voxel_size * 2
        print(":: Estimate normal with search radius %.3f." % radius_normal)
        pcd_down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=radius_normal, max_nn=30))
        radius_feature = voxel_size * 5
        print(":: Compute FPFH feature with search radius %.3f." % radius_feature)
        pcd_fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            pcd_down,
            o3d.geometry.KDTreeSearchParamHybrid(radius=radius_feature, max_nn=100))
        return pcd_down, pcd_fpfh
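    As a quick sanity check (my own aside, not from the official tutorial), the returned feature object stores its descriptors as a (33, N) array, so the 33-dimensional claim can be verified directly. A minimal sketch, assuming pcd is any already-loaded o3d.geometry.PointCloud:

    # Hypothetical check of the FPFH descriptor dimensions.
    pcd_down, pcd_fpfh = preprocess_point_cloud(pcd, voxel_size=0.05)
    print(pcd_fpfh.data.shape)   # expected: (33, number_of_downsampled_points)
    print(pcd_fpfh.dimension())  # expected: 33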

    3. Test example

    3.1 Read the two input point clouds

    The code below reads a source point cloud and a target point cloud from two files. They are misaligned when an identity matrix is used as the transformation.

    (1) Note: np.identity(4) in the code below is equivalent to applying no translation and no rotation (see the small sketch below).
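    For intuition, a minimal sketch (my own illustration, with made-up translation values) of what np.identity(4) means compared with a general 4x4 rigid transform, whose top-left 3x3 block is the rotation and whose last column holds the translation:

    import numpy as np

    T_identity = np.identity(4)      # no rotation, no translation: transform() is a no-op
    T = np.identity(4)
    T[:3, 3] = [0.5, 0.0, -0.2]      # example translation only; the 3x3 block stays identity
    # Applying T with PointCloud.transform() shifts every point by this translation vector.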

    import numpy as np
    import open3d as o3d
    from copy import deepcopy

    def draw_registration_result(source, target, transformation):
        # Visualize the transformed source (yellow) together with the target (blue).
        source_temp = deepcopy(source)
        target_temp = deepcopy(target)
        source_temp.paint_uniform_color([1, 0.706, 0])
        target_temp.paint_uniform_color([0, 0.651, 0.929])
        source_temp.transform(transformation)
        o3d.visualization.draw_geometries([source_temp, target_temp],
                                          zoom=0.4559,
                                          front=[0.6452, -0.3036, -0.7011],
                                          lookat=[1.9892, 2.0208, 1.8945],
                                          up=[-0.2779, -0.9482, 0.1556])

    def preprocess_point_cloud(pcd, voxel_size):
        # Downsample, estimate normals, and compute FPFH features.
        print(":: Downsample with a voxel size %.3f." % voxel_size)
        pcd_down = pcd.voxel_down_sample(voxel_size)
        radius_normal = voxel_size * 2
        print(":: Estimate normal with search radius %.3f." % radius_normal)
        pcd_down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=radius_normal, max_nn=30))
        radius_feature = voxel_size * 5
        print(":: Compute FPFH feature with search radius %.3f." % radius_feature)
        pcd_fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            pcd_down,
            o3d.geometry.KDTreeSearchParamHybrid(radius=radius_feature, max_nn=100))
        return pcd_down, pcd_fpfh

    def prepare_dataset(voxel_size):
        # Load the demo clouds, disturb the source pose, and preprocess both clouds.
        print(":: Load two point clouds and disturb initial pose.")
        demo_icp_pcds = o3d.data.DemoICPPointClouds()
        source = o3d.io.read_point_cloud(demo_icp_pcds.paths[0])
        target = o3d.io.read_point_cloud(demo_icp_pcds.paths[1])
        trans_init = np.asarray([[0.0, 0.0, 1.0, 0.0], [1.0, 0.0, 0.0, 0.0],
                                 [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0]])
        source.transform(trans_init)
        draw_registration_result(source, target, np.identity(4))
        source_down, source_fpfh = preprocess_point_cloud(source, voxel_size)
        target_down, target_fpfh = preprocess_point_cloud(target, voxel_size)
        return source, target, source_down, target_down, source_fpfh, target_fpfh

    if __name__ == "__main__":
        voxel_size = 0.05  # means 5cm for this dataset
        source, target, source_down, target_down, source_fpfh, target_fpfh = prepare_dataset(
            voxel_size)

    (2) After downloading, these are the demo point cloud files that get used (screenshot of the file paths omitted).

    (3) The initial relationship between the two displayed point clouds (screenshot omitted).

    (4) Operation flow (screenshot omitted).

    3.2 RANSAC registration and test

    3.2.1 Principle

    We use RANSAC for global registration. In each RANSAC iteration, ransac_n random points are picked from the source point cloud. Their corresponding points in the target point cloud are detected by querying the nearest neighbor in the 33-dimensional FPFH feature space. A pruning step then applies fast pruning algorithms to reject false matches early.

    Open3D provides the following pruning algorithms (a small construction sketch is given at the end of this subsection):

    • CorrespondenceCheckerBasedOnDistance checks whether the aligned point clouds are close (distance less than the specified threshold).
    • CorrespondenceCheckerBasedOnEdgeLength checks whether the lengths of any two arbitrary edges (lines formed by two vertices) drawn from the source and target correspondences are similar. This tutorial checks that ||edge_source|| > 0.9 · ||edge_target|| and ||edge_target|| > 0.9 · ||edge_source|| both hold.
    • CorrespondenceCheckerBasedOnNormal considers the vertex normal affinity of each correspondence. It computes the dot product of the two normal vectors and takes a radian value as the threshold.

    Only matches that pass the pruning step are used to compute a transformation, which is then validated on the entire point cloud. The core function is registration_ransac_based_on_feature_matching. Its most important hyperparameter is RANSACConvergenceCriteria, which defines the maximum number of RANSAC iterations and the confidence probability. The larger these two numbers are, the more accurate the result, but also the longer the algorithm takes.

    We set the RANSAC parameters based on the empirical values provided by [Choi2015].
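    For reference, a minimal sketch of how the pruning checkers and convergence criteria described above are constructed. The normal-based checker is not used in the test code below, and the 30-degree angle and the 1.5x voxel-size threshold here are only illustrative values:

    import numpy as np
    import open3d as o3d

    distance_threshold = 0.05 * 1.5  # e.g. 1.5 x the downsampling voxel size
    checkers = [
        o3d.pipelines.registration.CorrespondenceCheckerBasedOnEdgeLength(0.9),
        o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(
            distance_threshold),
        # Optional: also require corresponding normals to agree within ~30 degrees.
        o3d.pipelines.registration.CorrespondenceCheckerBasedOnNormal(
            np.deg2rad(30.0)),
    ]
    # Maximum RANSAC iterations and confidence; larger values are more accurate but slower.
    criteria = o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999)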

    3.2.2 Test code

    (1) Code

    import numpy as np
    import open3d as o3d
    from copy import deepcopy

    def draw_registration_result(source, target, transformation):
        # Visualize the transformed source (yellow) together with the target (blue).
        source_temp = deepcopy(source)
        target_temp = deepcopy(target)
        source_temp.paint_uniform_color([1, 0.706, 0])
        target_temp.paint_uniform_color([0, 0.651, 0.929])
        source_temp.transform(transformation)
        o3d.visualization.draw_geometries([source_temp, target_temp],
                                          zoom=0.4559,
                                          front=[0.6452, -0.3036, -0.7011],
                                          lookat=[1.9892, 2.0208, 1.8945],
                                          up=[-0.2779, -0.9482, 0.1556])

    def preprocess_point_cloud(pcd, voxel_size):
        # Downsample, estimate normals, and compute FPFH features.
        print(":: Downsample with a voxel size %.3f." % voxel_size)
        pcd_down = pcd.voxel_down_sample(voxel_size)
        radius_normal = voxel_size * 2
        print(":: Estimate normal with search radius %.3f." % radius_normal)
        pcd_down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=radius_normal, max_nn=30))
        radius_feature = voxel_size * 5
        print(":: Compute FPFH feature with search radius %.3f." % radius_feature)
        pcd_fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            pcd_down,
            o3d.geometry.KDTreeSearchParamHybrid(radius=radius_feature, max_nn=100))
        return pcd_down, pcd_fpfh

    def prepare_dataset(voxel_size):
        # Load the demo clouds, disturb the source pose, and preprocess both clouds.
        print(":: Load two point clouds and disturb initial pose.")
        demo_icp_pcds = o3d.data.DemoICPPointClouds()
        source = o3d.io.read_point_cloud(demo_icp_pcds.paths[0])
        target = o3d.io.read_point_cloud(demo_icp_pcds.paths[1])
        trans_init = np.asarray([[0.0, 0.0, 1.0, 0.0], [1.0, 0.0, 0.0, 0.0],
                                 [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0]])
        source.transform(trans_init)
        draw_registration_result(source, target, np.identity(4))
        source_down, source_fpfh = preprocess_point_cloud(source, voxel_size)
        target_down, target_fpfh = preprocess_point_cloud(target, voxel_size)
        return source, target, source_down, target_down, source_fpfh, target_fpfh

    def execute_global_registration(source_down, target_down, source_fpfh,
                                    target_fpfh, voxel_size):
        # RANSAC registration on the downsampled clouds using FPFH feature matching.
        distance_threshold = voxel_size * 1.5
        print(":: RANSAC registration on downsampled point clouds.")
        print("   Since the downsampling voxel size is %.3f," % voxel_size)
        print("   we use a liberal distance threshold %.3f." % distance_threshold)
        result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
            source_down, target_down, source_fpfh, target_fpfh, True,
            distance_threshold,
            o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
            3, [
                o3d.pipelines.registration.CorrespondenceCheckerBasedOnEdgeLength(
                    0.9),
                o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(
                    distance_threshold)
            ], o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
        return result

    if __name__ == "__main__":
        # (1) Load the original clouds, downsample, and visualize the initial pose.
        voxel_size = 0.05  # means 5cm for this dataset
        source, target, source_down, target_down, source_fpfh, target_fpfh = prepare_dataset(
            voxel_size)
        # (2) RANSAC-based global registration.
        result_ransac = execute_global_registration(source_down, target_down,
                                                    source_fpfh, target_fpfh,
                                                    voxel_size)
        print(result_ransac)
        draw_registration_result(source_down, target_down,
                                 result_ransac.transformation)

    (2) Result (screenshot omitted)

    (3) The result turns out to be quite good. Looking at the two metrics introduced in the previous ICP tutorial: fitness=6.745798e-01, inlier_rmse=3.051079e-02.
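    Both numbers come from the RegistrationResult object returned by the RANSAC call, so they can also be read programmatically (a small aside of mine, not in the original post):

    # result_ransac is the RegistrationResult returned above.
    print("fitness:     %.6e" % result_ransac.fitness)      # ratio of inlier correspondences
    print("inlier_rmse: %.6e" % result_ransac.inlier_rmse)  # RMSE over the inlier pairs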

    Note: Open3D provides a faster implementation for global registration. Please refer to Fast global registration (Section 5 below).

    4. Local refinement

    (1) Principle

    For performance reasons, the global registration is only performed on a heavily downsampled point cloud. The result is also not tight. We use point-to-plane ICP to further refine the alignment.

    (2) Note: the code below uses the point-to-plane registration method from the previous tutorial.

     

    import numpy as np
    import open3d as o3d
    from copy import deepcopy

    def draw_registration_result(source, target, transformation):
        # Visualize the transformed source (yellow) together with the target (blue).
        source_temp = deepcopy(source)
        target_temp = deepcopy(target)
        source_temp.paint_uniform_color([1, 0.706, 0])
        target_temp.paint_uniform_color([0, 0.651, 0.929])
        source_temp.transform(transformation)
        o3d.visualization.draw_geometries([source_temp, target_temp],
                                          zoom=0.4559,
                                          front=[0.6452, -0.3036, -0.7011],
                                          lookat=[1.9892, 2.0208, 1.8945],
                                          up=[-0.2779, -0.9482, 0.1556])

    def preprocess_point_cloud(pcd, voxel_size):
        # Downsample, estimate normals, and compute FPFH features.
        print(":: Downsample with a voxel size %.3f." % voxel_size)
        pcd_down = pcd.voxel_down_sample(voxel_size)
        radius_normal = voxel_size * 2
        print(":: Estimate normal with search radius %.3f." % radius_normal)
        pcd_down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=radius_normal, max_nn=30))
        radius_feature = voxel_size * 5
        print(":: Compute FPFH feature with search radius %.3f." % radius_feature)
        pcd_fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            pcd_down,
            o3d.geometry.KDTreeSearchParamHybrid(radius=radius_feature, max_nn=100))
        return pcd_down, pcd_fpfh

    def prepare_dataset(voxel_size):
        # Load the demo clouds, disturb the source pose, and preprocess both clouds.
        print(":: Load two point clouds and disturb initial pose.")
        demo_icp_pcds = o3d.data.DemoICPPointClouds()
        source = o3d.io.read_point_cloud(demo_icp_pcds.paths[0])
        target = o3d.io.read_point_cloud(demo_icp_pcds.paths[1])
        trans_init = np.asarray([[0.0, 0.0, 1.0, 0.0], [1.0, 0.0, 0.0, 0.0],
                                 [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0]])
        source.transform(trans_init)
        draw_registration_result(source, target, np.identity(4))
        source_down, source_fpfh = preprocess_point_cloud(source, voxel_size)
        target_down, target_fpfh = preprocess_point_cloud(target, voxel_size)
        return source, target, source_down, target_down, source_fpfh, target_fpfh

    def execute_global_registration(source_down, target_down, source_fpfh,
                                    target_fpfh, voxel_size):
        # RANSAC registration on the downsampled clouds using FPFH feature matching.
        distance_threshold = voxel_size * 1.5
        print(":: RANSAC registration on downsampled point clouds.")
        print("   Since the downsampling voxel size is %.3f," % voxel_size)
        print("   we use a liberal distance threshold %.3f." % distance_threshold)
        result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
            source_down, target_down, source_fpfh, target_fpfh, True,
            distance_threshold,
            o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
            3, [
                o3d.pipelines.registration.CorrespondenceCheckerBasedOnEdgeLength(
                    0.9),
                o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(
                    distance_threshold)
            ], o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
        return result

    def refine_registration(source, target, source_fpfh, target_fpfh, voxel_size):
        # Point-to-plane ICP on the original clouds, seeded with the RANSAC result.
        # Note: result_ransac is read from module scope; it is set in __main__ below.
        distance_threshold = voxel_size * 0.4
        print(":: Point-to-plane ICP registration is applied on original point")
        print("   clouds to refine the alignment. This time we use a strict")
        print("   distance threshold %.3f." % distance_threshold)
        result = o3d.pipelines.registration.registration_icp(
            source, target, distance_threshold, result_ransac.transformation,
            o3d.pipelines.registration.TransformationEstimationPointToPlane())
        return result

    if __name__ == "__main__":
        # (1) Load the original clouds, downsample, and visualize the initial pose.
        voxel_size = 0.05  # means 5cm for this dataset
        source, target, source_down, target_down, source_fpfh, target_fpfh = prepare_dataset(
            voxel_size)
        # (2) RANSAC-based global registration.
        result_ransac = execute_global_registration(source_down, target_down,
                                                    source_fpfh, target_fpfh,
                                                    voxel_size)
        print(result_ransac)
        draw_registration_result(source_down, target_down,
                                 result_ransac.transformation)
        # (3) Point-to-plane ICP refinement on the full-resolution clouds.
        result_icp = refine_registration(source, target, source_fpfh, target_fpfh,
                                         voxel_size)
        print(result_icp)
        draw_registration_result(source, target, result_icp.transformation)

    (3) Test result (screenshot omitted)

    (4) The registration metrics look very good: fitness=6.210325e-01, inlier_rmse=6.564120e-03.
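    If you want to recompute such metrics yourself on the full-resolution clouds, Open3D's evaluate_registration can be used. A minimal sketch, where the 0.02 threshold is only an illustrative value:

    # Evaluate the refined transformation on the original (full-resolution) clouds.
    evaluation = o3d.pipelines.registration.evaluate_registration(
        source, target, 0.02, result_icp.transformation)
    print(evaluation)  # prints fitness and inlier_rmse for this distance threshold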

    5. Fast global registration

    5.1 Principle

    (1) Description

    The RANSAC-based global registration solution may take a long time because of the countless model proposals and evaluations. [Zhou2016] introduced a faster approach that quickly optimizes line process weights of few correspondences. Since no model proposal and evaluation is involved in each iteration, the approach proposed in [Zhou2016] can save a large amount of computation time.

    (2) Function used: the test below calls the FPFH-feature-matching variant (the correspondence-based variant is covered in Section 5.3), with distance_threshold derived from the voxel size in the test code:

    result = o3d.pipelines.registration.registration_fgr_based_on_feature_matching(
            source_down, target_down, source_fpfh, target_fpfh,
            o3d.pipelines.registration.FastGlobalRegistrationOption(
                maximum_correspondence_distance=distance_threshold))

    This tutorial compares the running time of the RANSAC-based global registration with the implementation of [Zhou2016].

    5.2 Test code

    import time

    import numpy as np
    import open3d as o3d
    from copy import deepcopy

    def draw_registration_result(source, target, transformation):
        # Visualize the transformed source (yellow) together with the target (blue).
        source_temp = deepcopy(source)
        target_temp = deepcopy(target)
        source_temp.paint_uniform_color([1, 0.706, 0])
        target_temp.paint_uniform_color([0, 0.651, 0.929])
        source_temp.transform(transformation)
        o3d.visualization.draw_geometries([source_temp, target_temp],
                                          zoom=0.4559,
                                          front=[0.6452, -0.3036, -0.7011],
                                          lookat=[1.9892, 2.0208, 1.8945],
                                          up=[-0.2779, -0.9482, 0.1556])

    def preprocess_point_cloud(pcd, voxel_size):
        # Downsample, estimate normals, and compute FPFH features.
        print(":: Downsample with a voxel size %.3f." % voxel_size)
        pcd_down = pcd.voxel_down_sample(voxel_size)
        radius_normal = voxel_size * 2
        print(":: Estimate normal with search radius %.3f." % radius_normal)
        pcd_down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=radius_normal, max_nn=30))
        radius_feature = voxel_size * 5
        print(":: Compute FPFH feature with search radius %.3f." % radius_feature)
        pcd_fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            pcd_down,
            o3d.geometry.KDTreeSearchParamHybrid(radius=radius_feature, max_nn=100))
        return pcd_down, pcd_fpfh

    def prepare_dataset(voxel_size):
        # Load the demo clouds, disturb the source pose, and preprocess both clouds.
        print(":: Load two point clouds and disturb initial pose.")
        demo_icp_pcds = o3d.data.DemoICPPointClouds()
        source = o3d.io.read_point_cloud(demo_icp_pcds.paths[0])
        target = o3d.io.read_point_cloud(demo_icp_pcds.paths[1])
        trans_init = np.asarray([[0.0, 0.0, 1.0, 0.0], [1.0, 0.0, 0.0, 0.0],
                                 [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0]])
        source.transform(trans_init)
        draw_registration_result(source, target, np.identity(4))
        source_down, source_fpfh = preprocess_point_cloud(source, voxel_size)
        target_down, target_fpfh = preprocess_point_cloud(target, voxel_size)
        return source, target, source_down, target_down, source_fpfh, target_fpfh

    def execute_global_registration(source_down, target_down, source_fpfh,
                                    target_fpfh, voxel_size):
        # RANSAC registration on the downsampled clouds using FPFH feature matching.
        distance_threshold = voxel_size * 1.5
        print(":: RANSAC registration on downsampled point clouds.")
        print("   Since the downsampling voxel size is %.3f," % voxel_size)
        print("   we use a liberal distance threshold %.3f." % distance_threshold)
        result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
            source_down, target_down, source_fpfh, target_fpfh, True,
            distance_threshold,
            o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
            3, [
                o3d.pipelines.registration.CorrespondenceCheckerBasedOnEdgeLength(
                    0.9),
                o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(
                    distance_threshold)
            ], o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
        return result

    def refine_registration(source, target, source_fpfh, target_fpfh, voxel_size):
        # Point-to-plane ICP on the original clouds, seeded with the RANSAC result.
        # Note: result_ransac is read from module scope; it is set in __main__ below.
        distance_threshold = voxel_size * 0.4
        print(":: Point-to-plane ICP registration is applied on original point")
        print("   clouds to refine the alignment. This time we use a strict")
        print("   distance threshold %.3f." % distance_threshold)
        result = o3d.pipelines.registration.registration_icp(
            source, target, distance_threshold, result_ransac.transformation,
            o3d.pipelines.registration.TransformationEstimationPointToPlane())
        return result

    def execute_fast_global_registration(source_down, target_down, source_fpfh,
                                         target_fpfh, voxel_size):
        # Fast global registration (FGR) based on FPFH feature matching.
        distance_threshold = voxel_size * 0.5
        print(":: Apply fast global registration with distance threshold %.3f"
              % distance_threshold)
        result = o3d.pipelines.registration.registration_fgr_based_on_feature_matching(
            source_down, target_down, source_fpfh, target_fpfh,
            o3d.pipelines.registration.FastGlobalRegistrationOption(
                maximum_correspondence_distance=distance_threshold))
        return result

    if __name__ == "__main__":
        # (1) Load the original clouds, downsample, and visualize the initial pose.
        voxel_size = 0.05  # means 5cm for this dataset
        source, target, source_down, target_down, source_fpfh, target_fpfh = prepare_dataset(
            voxel_size)
        # (2) RANSAC-based global registration, with timing.
        start = time.time()
        result_ransac = execute_global_registration(source_down, target_down,
                                                    source_fpfh, target_fpfh,
                                                    voxel_size)
        print("Global registration took %.3f sec.\n" % (time.time() - start))
        print(result_ransac)
        draw_registration_result(source_down, target_down,
                                 result_ransac.transformation)
        # (3) Point-to-plane ICP refinement (disabled for this timing comparison).
        # result_icp = refine_registration(source, target, source_fpfh, target_fpfh,
        #                                  voxel_size)
        # print(result_icp)
        # draw_registration_result(source, target, result_icp.transformation)
        # (4) Fast global registration, with timing.
        start = time.time()
        result_fast = execute_fast_global_registration(source_down, target_down,
                                                       source_fpfh, target_fpfh,
                                                       voxel_size)
        print("Fast global registration took %.3f sec.\n" % (time.time() - start))
        print(result_fast)
        draw_registration_result(source_down, target_down,
                                 result_fast.transformation)

    5.3 Conclusion

    PS: I did not study this carefully; the numbers look the same as the ones on the official site, but the speed did not actually improve on my machine. Worth digging into if you have time.

    With proper configuration, the accuracy of fast global registration can even be comparable to ICP. See [Zhou2016] for more experimental results.
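    "Proper configuration" mainly means the fields of FastGlobalRegistrationOption. A hedged sketch of the parameters that are typically tuned; the values shown are, to the best of my knowledge, close to the library defaults and are not recommendations from this post:

    option = o3d.pipelines.registration.FastGlobalRegistrationOption(
        maximum_correspondence_distance=0.025,  # tightest distance used when classifying inliers
        iteration_number=64,                    # more iterations: slower but more robust
        tuple_scale=0.95,                       # ratio used in the tuple test that prunes correspondences
        maximum_tuple_count=1000)               # cap on the number of tuples sampled
    result_fast = o3d.pipelines.registration.registration_fgr_based_on_feature_matching(
        source_down, target_down, source_fpfh, target_fpfh, option)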

    In addition to the FPFH-feature-based FGR above, global registration can also be performed as correspondence-based FGR via registration_fgr_based_on_correspondence. This is useful if your correspondence front-end is different from FPFH but you still want to use FGR given a set of putative correspondences. It is called as follows:

    Note that this method differs from the one used in the test above.

     o3d.pipelines.registration.registration_fgr_based_on_correspondence(
            source_down, target_down, correspondence_set,
            o3d.pipelines.registration.FastGlobalRegistrationOption())
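    correspondence_set is not defined in this post. As a hedged sketch of what it would look like: it is a set of (source index, target index) pairs wrapped in o3d.utility.Vector2iVector, produced by whatever matching front-end you use (the index pairs below are made-up example values):

    import numpy as np
    import open3d as o3d

    # Hypothetical putative correspondences: each row is (index_in_source_down, index_in_target_down).
    corres = np.array([[0, 12], [5, 40], [9, 87]], dtype=np.int32)
    correspondence_set = o3d.utility.Vector2iVector(corres)
    result = o3d.pipelines.registration.registration_fgr_based_on_correspondence(
        source_down, target_down, correspondence_set,
        o3d.pipelines.registration.FastGlobalRegistrationOption())
    print(result.transformation)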

  • Original article: https://blog.csdn.net/chencaw/article/details/128169483