3D reconstruction and inflection point identification of weld seams in complex environments
Xiaobing FENG, Yixiang DAI, Tengyue HAN, Fei YUAN, Guijin WANG
Journal of Tsinghua University (Science and Technology), 2025, Vol. 65, Issue 10: 1980-1991.
Objective: Welding remains a key process in modern manufacturing, but its automation is still hampered by complex, noisy welding environments. Accurate detection of weld seam geometry is critical for robotic seam tracking and welding quality assurance. In particular, detecting inflection points on the weld seam, which typically correspond to groove edges, is crucial for trajectory planning and control of welding robots. However, intense arc light, dynamic spatter, and reflective interference often degrade image quality, complicating robust extraction of seam features. To address these challenges, this paper proposes a comprehensive framework for three-dimensional (3D) reconstruction and inflection point detection of weld seams in complex environments, aiming to improve noise robustness, spatial accuracy, and real-time performance in intelligent welding systems.

Methods: The proposed method comprises three sequential modules: temporal denoising, laser stripe segmentation, and 3D inflection point detection. First, a temporal filtering algorithm based on Gaussian background modeling captures frame-wise variations and suppresses transient noise such as welding spatter; it adaptively classifies pixels as foreground or background using statistical thresholds, thereby improving the signal-to-noise ratio. Second, a lightweight deep segmentation model incorporating uncertainty modeling and nonlocal self-attention is introduced. The model uses a dual-stage architecture: an attention-augmented U-Net performs coarse segmentation, and a CriticNet-enhanced refinement stage guided by epistemic uncertainty maps refines the result. This strategy preserves stripe continuity and keeps detection robust to weak exposure and partial occlusions. Finally, the segmented laser stripe centerlines are projected into 3D space using calibrated structured-light principles. An optimal-view normalization step aligns the viewpoint vertically with the laser plane, and a hierarchical geometric model of the weld cross-section assists in localizing the inflection points. To improve temporal consistency, a sliding-window filter smooths the extracted inflection trajectories, and a feedback loop updates the 3D model in real time, enhancing prediction stability.

Results: The framework is evaluated on a custom dataset collected from real-world welding scenarios using a mobile welding robot equipped with a laser vision sensor. The dataset comprises over 21 000 frames captured under three representative conditions: nonuniform surfaces, intense reflection, and severe spatter. Quantitative evaluations show that the proposed method achieves a weld inflection point detection success rate of 78%, a 13.3% improvement over baseline methods. The 3D reconstruction accuracy remains within a submillimeter error margin (≤0.1 mm), with a maximum relative deviation of only 0.2%. Ablation studies indicate that each module (temporal filtering, deep segmentation, and 3D modeling) contributes substantially to the overall performance. In addition, the segmentation model achieves a mean intersection over union (mIoU) of 83.5% and a recall of 88.1% with only 0.67 million parameters, outperforming conventional U-Net and SegFormer baselines in both accuracy and efficiency.
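To make the temporal-denoising step concrete, the sketch below shows one way a per-pixel Gaussian background model can classify and suppress transient spatter, along the lines described in the Methods. It is a minimal NumPy illustration rather than the paper's implementation; the learning rate alpha, the threshold factor k, and the initial variance are hypothetical parameters.

```python
import numpy as np

def init_background(first_frame, init_var=15.0 ** 2):
    """Initialise a per-pixel Gaussian background model (mean, variance)."""
    mean = first_frame.astype(np.float32)
    var = np.full_like(mean, init_var)
    return mean, var

def temporal_filter(frame, mean, var, alpha=0.05, k=2.5):
    """Suppress transient bright noise (e.g. spatter) against a Gaussian
    background model.  Pixels deviating more than k standard deviations
    from the running mean are treated as foreground and replaced by the
    background estimate; background pixels update the model.
    alpha and k are illustrative values, not taken from the paper."""
    f = frame.astype(np.float32)
    std = np.sqrt(var)
    foreground = np.abs(f - mean) > k * std

    # Update the model only where the pixel is judged to be background.
    upd = ~foreground
    mean[upd] = (1 - alpha) * mean[upd] + alpha * f[upd]
    var[upd] = (1 - alpha) * var[upd] + alpha * (f[upd] - mean[upd]) ** 2

    # Denoised frame: keep the background estimate where spatter was detected.
    denoised = np.where(foreground, mean, f).astype(frame.dtype)
    return denoised, mean, var
```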
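The abstract does not specify how the epistemic uncertainty maps that guide the CriticNet refinement stage are obtained; one common choice is Monte Carlo dropout, sketched below in PyTorch under that assumption. The model is assumed to output per-pixel stripe logits and to contain dropout layers; both are assumptions for illustration.

```python
import torch

def epistemic_uncertainty(model, image, n_samples=8):
    """Approximate an epistemic uncertainty map with Monte Carlo dropout:
    run the coarse segmentation network several times with dropout active
    and take the per-pixel variance of the predicted stripe probability."""
    model.train()  # keep dropout layers stochastic (MC dropout)
    with torch.no_grad():
        probs = torch.stack(
            [torch.sigmoid(model(image)) for _ in range(n_samples)]
        )
    mean_prob = probs.mean(dim=0)    # coarse stripe probability map
    uncertainty = probs.var(dim=0)   # high where the network disagrees with itself
    return mean_prob, uncertainty
```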
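Projecting the segmented stripe centerlines into 3D follows standard structured-light triangulation: each centerline pixel defines a camera ray, which is intersected with the calibrated laser plane. A minimal sketch, assuming a pinhole intrinsic matrix K and a laser plane written as n·X = d in the camera frame (the parameterization is an assumption, not the paper's notation):

```python
import numpy as np

def stripe_to_3d(pixels, K, plane):
    """Back-project laser-stripe centerline pixels to 3D points in the
    camera frame by intersecting each camera ray with the calibrated
    laser plane n . X = d, where plane = (nx, ny, nz, d).
    K is the 3x3 camera intrinsic matrix; pixels is an (N, 2) array of (u, v)."""
    n, d = np.asarray(plane[:3], float), float(plane[3])
    K_inv = np.linalg.inv(K)
    uv1 = np.hstack([pixels, np.ones((len(pixels), 1))])  # homogeneous pixels
    rays = (K_inv @ uv1.T).T                               # ray directions
    t = d / (rays @ n)                                     # scale so that n . X = d
    return rays * t[:, None]                               # (N, 3) points
```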
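The sliding-window filtering of the inflection trajectories can be illustrated with a per-coordinate median over the most recent detections; the abstract does not state the exact filter, so the median and the window size of 7 are assumptions made for this sketch.

```python
from collections import deque
import numpy as np

class InflectionSmoother:
    """Sliding-window smoothing of the detected 3D inflection point across
    frames; a median over the window rejects occasional mis-detections."""

    def __init__(self, window=7):
        self.buf = deque(maxlen=window)

    def update(self, point_3d):
        """Add the latest detection and return the smoothed 3D point."""
        self.buf.append(np.asarray(point_3d, float))
        return np.median(np.stack(self.buf), axis=0)
```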
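For reference, the mIoU and recall figures quoted in the Results correspond to standard binary segmentation metrics, which can be computed for a stripe/background mask as follows (a generic sketch, not the paper's evaluation code):

```python
import numpy as np

def miou_and_recall(pred, gt):
    """Binary stripe-segmentation metrics: mean IoU over the stripe and
    background classes, plus recall on the stripe class."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    ious = []
    for p, g in ((pred, gt), (~pred, ~gt)):    # stripe class, background class
        inter, union = (p & g).sum(), (p | g).sum()
        ious.append(inter / union if union else 1.0)
    recall = (pred & gt).sum() / gt.sum() if gt.sum() else 1.0
    return np.mean(ious), recall
```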
Conclusions: This study presents an effective, lightweight method for addressing the challenges of weld seam 3D reconstruction and inflection point detection in noisy environments. By combining temporal filtering, uncertainty-aware segmentation, and geometry-guided 3D analysis, the method demonstrates strong noise resilience and geometric accuracy. These results highlight its potential for real-time use in intelligent welding systems, supporting accurate seam tracking and robotic welding guidance in complex industrial environments.
Keywords: intelligent welding / 3D reconstruction / laser vision / deep learning / inflection point detection