Research on the recognition and localization method of small mechanical parts based on an improved U-Net

Abstract: To address the slow recognition and imprecise localization of small mechanical parts in machine-vision systems, this paper proposes a recognition and localization method for small mechanical parts, IU-Net-MBR, which combines an improved U-Net (IU-Net) with the minimum bounding rectangle (MBR). First, a visual sorting test platform is built and a dataset of small mechanical parts is produced. Second, to improve feature-extraction efficiency, the feature-extraction network of U-Net is replaced with the lightweight MobileNetV2 network, reducing the model's parameter count and computational cost. Then, to improve the segmentation accuracy and robustness of U-Net, the SE (squeeze-and-excitation) attention module is introduced into the network structure. Finally, the minimum bounding rectangle is used to obtain the basic length and width parameters of each part, realizing part recognition and localization. Experiments show that, compared with U-Net, IU-Net improves the mean intersection over union (MIoU) and pixel accuracy (PA) by 4.39% and 3.82%, respectively, and processes images 76.92% faster. Compared with mainstream segmentation models, IU-Net achieves better segmentation results and effectively improves the segmentation accuracy of small mechanical parts. In the grasping test, IU-Net-MBR achieves a recognition rate of 100% and a grasping rate of 96.67%.
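The SE attention module mentioned in the abstract follows the standard squeeze-and-excitation design. The sketch below is a minimal PyTorch illustration of such a block, assuming a global-average-pooling squeeze and a reduction ratio of 16; the exact placement and hyperparameters used in IU-Net are not given here and are assumptions.

```python
# Illustrative sketch only: a generic squeeze-and-excitation (SE) block.
# The reduction ratio and where the block is inserted in IU-Net are assumptions.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global spatial average
        self.fc = nn.Sequential(                     # excitation: channel-wise gating
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.pool(x).view(b, c)                  # (B, C)
        w = self.fc(w).view(b, c, 1, 1)              # per-channel weights in (0, 1)
        return x * w                                 # reweight the feature maps


if __name__ == "__main__":
    feat = torch.randn(2, 64, 32, 32)
    print(SEBlock(64)(feat).shape)                   # torch.Size([2, 64, 32, 32])
```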
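The localization step extracts each part's length and width from its segmentation mask via the minimum bounding rectangle. A minimal sketch of that step with OpenCV is given below; the binary-mask input, the `mm_per_pixel` scale factor, and the use of `cv2.minAreaRect` are illustrative assumptions rather than the paper's exact implementation.

```python
# Illustrative sketch only: minimum bounding rectangle (MBR) of the largest part
# in a binary segmentation mask. Scale factor and angle convention are assumptions.
import cv2
import numpy as np

def part_mbr(mask: np.ndarray, mm_per_pixel: float = 1.0):
    """Return (centre, (length, width), angle) of the largest part in a binary mask."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (cx, cy), (w, h), angle = cv2.minAreaRect(largest)    # minimum-area rotated rectangle
    length, width = max(w, h) * mm_per_pixel, min(w, h) * mm_per_pixel
    return (cx, cy), (length, width), angle


if __name__ == "__main__":
    demo = np.zeros((200, 200), dtype=np.uint8)
    cv2.rectangle(demo, (50, 80), (150, 120), 255, -1)    # synthetic "part" mask
    print(part_mbr(demo))
```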
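The reported MIoU and PA metrics can both be derived from a confusion matrix over predicted and ground-truth label maps. The following NumPy sketch shows one common way to compute them; the class count and label encoding are assumptions.

```python
# Illustrative sketch only: MIoU and pixel accuracy from integer label maps.
import numpy as np

def miou_and_pa(pred: np.ndarray, gt: np.ndarray, num_classes: int):
    """pred, gt: integer label maps of identical shape."""
    conf = np.bincount(num_classes * gt.flatten() + pred.flatten(),
                       minlength=num_classes ** 2).reshape(num_classes, num_classes)
    pa = np.diag(conf).sum() / conf.sum()            # fraction of correctly labelled pixels
    inter = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - inter
    iou = inter / np.maximum(union, 1)               # avoid division by zero
    return iou.mean(), pa


if __name__ == "__main__":
    gt = np.random.randint(0, 2, (64, 64))
    pred = gt.copy(); pred[:8] = 0                   # perturb a few rows
    print(miou_and_pa(pred, gt, num_classes=2))
```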

     
