Abstract:
Point cloud registration is widely used in robotic inspection and scene reconstruction to merge data captured from different measurement poses. However, for large-scale scenes with prominent planar features, such as high-speed train bodies and buildings, traditional registration methods based on point-to-point and point-to-plane distances are prone to converging to local minima when the initial pose estimate is inaccurate; they also tend to slide along the tangent spaces of planar features, yielding suboptimal alignment. To address these challenges, this paper proposes a robust multi-dimensional point cloud registration method. First, the point cloud is projected onto planar features to define a point-to-point in two-dimensional (PTP-2D) distance, which effectively constrains the distances between different planar features and reduces the risk of falling into local minima. Because this constraint alone leaves the translation vector underdetermined, a point-to-point in three-dimensional (PTP-3D) constraint is incorporated to narrow the solution set of translation vectors, yielding the point-to-point in multi-dimensional (PTP-MD) distance. The effectiveness and efficiency of the proposed PTP-MD method are validated against four classical registration methods on a high-speed train body inspection task. Simulation results indicate that the proposed method significantly alleviates the local-minima problem and effectively prevents sliding along tangent planes.
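To illustrate the idea behind combining an in-plane distance with a full 3D distance, the following is a minimal NumPy sketch, not the paper's implementation: the plane normal `n`, reference point `p0`, the one-to-one point correspondences, and the weighting factor `alpha` are all assumptions introduced here for illustration.

```python
import numpy as np

def project_to_plane(points, n, p0):
    """Project 3D points onto the plane through p0 with normal n."""
    n = n / np.linalg.norm(n)
    d = (points - p0) @ n           # signed distance of each point to the plane
    return points - np.outer(d, n)  # foot points lying in the plane

def ptp_md_residual(src, tgt, n, p0, alpha=1.0):
    """Per-pair residual combining an in-plane (PTP-2D-style) term with a
    weighted full 3D point-to-point (PTP-3D-style) term. alpha is a
    hypothetical weight, not a parameter taken from the paper."""
    src_2d = project_to_plane(src, n, p0)
    tgt_2d = project_to_plane(tgt, n, p0)
    d2d = np.linalg.norm(src_2d - tgt_2d, axis=1)  # constrains motion within the tangent plane
    d3d = np.linalg.norm(src - tgt, axis=1)        # resolves the translation component along n
    return d2d + alpha * d3d
```

For correspondences that differ only by an offset along the plane normal, the in-plane term vanishes and the 3D term alone carries the residual, which is exactly the ambiguity the PTP-3D constraint is introduced to remove.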