Segment dislocation is common in shield tunnel linings and, in severe cases, compromises the structural safety and service performance of the lining. This study measures shield tunnel segment dislocation using binocular vision. First, a consumer-grade binocular camera is calibrated to obtain its intrinsic and extrinsic parameters, and a deep-learning stereo matching algorithm computes accurate disparity information for the binocular segment images. Second, the skeleton of the segment joint is extracted and fitted to a straight line by the least squares method, yielding the pixel coordinates of key points near the joint, whose 3D camera coordinates are then computed by the triangulation principle. Third, a segment coordinate system is established from the key points' camera coordinates, and the transformation between the segment and camera coordinate systems is determined. Finally, a camera attitude correction method is proposed to compute the segment dislocation accurately. Comparison of the binocular-vision measurements with weld seam gauge readings shows that, when the camera faces the joint squarely, the attitude-corrected results agree closely with the measured values, with absolute errors within 1.0 mm and high image-processing efficiency. For oblique shooting, when the attitude angles of the camera coordinate system relative to the segment coordinate system about its Xc and Yc axes lie within (180.00±15.00)° and ±20.00°, respectively, the method still corrects the results effectively, demonstrating strong robustness. These findings provide a reference for the accurate and rapid measurement of shield tunnel segment dislocation.
Objective: Segment dislocation is prevalent in shield tunnel linings and, in severe cases, compromises the structural safety and service performance of tunnels. Manual measurement of segment dislocation is subjective and time-consuming, while automated methods, such as 3D laser scanners, are not only expensive but also susceptible to environmental conditions. In recent years, advancements in camera technology and deep learning algorithms have significantly propelled the development of computer vision applications in civil engineering monitoring and inspection. Binocular vision technology has been utilized in crack detection and engineering quality evaluation owing to its cost-effectiveness and reliable accuracy. This study introduces binocular vision technology to enable accurate measurement of segment dislocation. Methods: The intrinsic and extrinsic parameters of a consumer-grade binocular camera are first obtained through a calibration experiment, after which binocular images of the segment joints are captured. The accurate disparity information between the left and right images is then calculated using a deep-learning stereo matching algorithm. Finally, a camera attitude correction method is proposed to calculate the segment dislocation accurately. The primary challenge in camera attitude correction is determining key points on the segment surface, which lacks evident texture features. Therefore, a series of image processing techniques, including grayscale conversion, binarization, dilation, erosion, and target localization, is first applied to the left image to automatically identify the location of the segment joint. The parallel thinning algorithm is then applied to extract the skeleton of the segment joint, reducing the joint width from multiple pixels to a single pixel. The pixel coordinates along the skeleton are extracted, and the least squares method is used to fit a straight line to the skeleton.
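The skeleton fitting and depth recovery steps described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the line model v = a·u + b and the rectified-stereo depth relation Z = f·B/d are simplifying assumptions (a calibrated, rectified pair with focal length f in pixels and baseline B), and the function names are hypothetical.

```python
import numpy as np

def fit_joint_line(skeleton_pixels):
    """Least-squares fit of the line v = a*u + b through the (u, v) pixel
    coordinates of a single-pixel-wide joint skeleton."""
    u = skeleton_pixels[:, 0].astype(float)
    v = skeleton_pixels[:, 1].astype(float)
    A = np.column_stack([u, np.ones_like(u)])   # design matrix [u, 1]
    (a, b), *_ = np.linalg.lstsq(A, v, rcond=None)
    return a, b

def depth_from_disparity(disparity, focal_px, baseline_mm):
    """Triangulation for a rectified stereo pair: Z = f * B / d,
    with f in pixels, B in mm, and d the disparity in pixels."""
    return focal_px * baseline_mm / disparity
```

For a near-vertical joint the roles of u and v should be swapped (u = a·v + b) to keep the fit well-conditioned; the choice depends on the joint orientation in the image.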
Furthermore, the fitted lines are translated and reflected to form latitude and longitude lines, allowing the pixel and camera coordinates of their intersection points (key points) to be calculated according to the triangulation principle. Finally, the segment coordinate system is established from these key points, and the actual segment dislocation is computed. Results: Field tests were carried out on a shield tunnel construction site to evaluate the effectiveness of the proposed approach. The results are as follows: (1) For vertical shooting with the binocular camera, the attitude-corrected calculation results agreed well with the segment dislocation measured by a weld seam gauge; specifically, the absolute error did not exceed 1.0 mm, and the overall processing time per segment image was approximately 10.00 s. (2) In multiple rounds of tests with left-leaning or right-leaning shooting, when the attitude angles of the camera coordinate system relative to the segment coordinate system about its Xc and Yc axes were within (180.00±15.00)° and ±20.00°, respectively, the proposed method effectively corrected the calculation results, demonstrating strong robustness. Conclusions: By combining deep learning algorithms with traditional image processing techniques, binocular vision technology achieves rapid and accurate measurement of segment dislocation. This approach provides a reference for tunnel engineering inspection.
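The step of establishing a segment coordinate system from key points and measuring dislocation in that frame can be sketched as below. This is a minimal sketch under stated assumptions, not the paper's method: it assumes three non-collinear key points on the segment surface define the segment plane, takes the dislocation as the out-of-plane (Z) offset between points on adjacent segments, and all function names are hypothetical.

```python
import numpy as np

def segment_frame_from_keypoints(p0, p1, p2):
    """Build an orthonormal segment coordinate system from three
    non-collinear key points given in camera-frame 3D coordinates.
    X runs along p0->p1, Z is the plane normal, Y = Z x X."""
    x = p1 - p0
    x = x / np.linalg.norm(x)
    n = np.cross(p1 - p0, p2 - p0)          # normal to the segment plane
    z = n / np.linalg.norm(n)
    y = np.cross(z, x)
    R = np.column_stack([x, y, z])          # columns = segment axes in camera frame
    return R, p0

def to_segment_frame(p_cam, R, origin):
    """Express a camera-frame point in the segment frame: p_seg = R^T (p - o)."""
    return (p_cam - origin) @ R

def dislocation(pA_cam, pB_cam, R, origin):
    """Dislocation as the out-of-plane offset between one point on each
    of two adjacent segments, measured along the segment-frame Z axis."""
    zA = to_segment_frame(pA_cam, R, origin)[2]
    zB = to_segment_frame(pB_cam, R, origin)[2]
    return abs(zA - zB)
```

Measuring the offset in the segment frame rather than the camera frame is what makes the result independent of the camera pose, which is the intent of the attitude correction: any rotation of the camera changes R but not the recovered Z offset.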