
Objective: High-altitude hydropower projects present significant challenges owing to harsh environmental conditions, project clustering, limited data availability, and high construction risks. Accurate carbon emission calculations are crucial in such environments to mitigate environmental impacts and promote sustainable development. This study targets the full lifecycle of carbon emissions during intelligent construction in high-altitude hydropower projects. Methods: This study establishes a comprehensive framework for calculating lifecycle carbon emissions tailored to the unique challenges of high-altitude hydropower construction. The methodology covers three primary stages: data collection, model formulation, and real-world implementation. Lifecycle boundaries and emission factors are established for material production, transportation, construction, and operational maintenance. Key emissions are identified based on quality, energy consumption, and cost criteria to build a detailed carbon inventory. To address altitude effects, an adjustment coefficient is derived by correlating field-monitored data with baseline values, accounting for altitude impacts on emission intensities. The carbon emission model incorporates a discrete event simulation (DES) to capture the dynamic characteristics of construction and equipment operations. This model couples static and dynamic elements, applying static calculations to stable phases such as material production and maintenance while using dynamic simulations for variable stages such as transportation and active construction. This DES approach simulates the sequential and interdependent nature of equipment operations, providing an accurate reflection of emission behavior over time. Furthermore, a network of onsite carbon monitoring devices was implemented across different construction sites in a case project, and real-time CO2 concentration data were collected. These data calibrate and validate emission factors within the model, ensuring accurate altitude-adjusted emission assessments. Results: The model was applied to the JX hydropower project in a high-altitude region with distinct climatic and geographical challenges. The findings indicated that material production and construction machinery were the largest carbon emitters, accounting for 65.7% and 27.4% of total emissions, respectively. Cement manufacturing was identified as the dominant emission source, emphasizing the need for greener materials and cement production. The DES model revealed that equipment states, such as idling and operation, significantly influence emission intensities, especially under reduced oxygen at high altitudes. By integrating the DES results with real-time monitoring, the model supports precise, responsive emission control strategies. The proposed mitigation measures included adopting cleaner fuels, optimizing equipment idle time, and enhancing operational efficiency through scheduled maintenance. The model reliability was demonstrated by the close alignment of the simulated results with actual onsite measurements. Conclusions: The developed model offers a structured approach to calculating lifecycle carbon emissions for intelligent hydropower construction in high-altitude regions. By addressing the unique characteristics of such projects, including altitude-induced effects on emission intensities and equipment behavior, the model serves as a reference for emission reduction in future high-altitude hydropower projects. 
This study advances the understanding and management of emissions in high-altitude construction, underscoring the potential of intelligent construction methods to drive sustainable hydropower development.
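A minimal illustration of the coupled static-dynamic idea described above, assuming hypothetical emission rates and a hypothetical altitude adjustment coefficient (none of the values are from the JX project): a hand-rolled discrete event loop switches one machine between working and idling states and accumulates altitude-adjusted emissions per state.

```python
# Minimal sketch, not the paper's model: a hand-rolled discrete event loop that switches
# one machine between "working" and "idling" states and accumulates altitude-adjusted
# emissions per state. All rates and the adjustment coefficient are hypothetical.
import random

BASELINE_RATE = {"working": 45.0, "idling": 8.0}   # kg CO2 per hour, illustrative only
SWITCH_RATE = {"working": 1.0, "idling": 2.0}      # state-switch events per hour, illustrative
ALTITUDE_COEF = 1.15                               # hypothetical altitude adjustment factor

def simulate(total_hours=160.0, seed=0):
    random.seed(seed)
    t, state = 0.0, "working"
    emissions = {"working": 0.0, "idling": 0.0}
    while t < total_hours:
        duration = random.expovariate(SWITCH_RATE[state])   # time until the next state switch
        effective = min(duration, total_hours - t)          # clip at the simulation horizon
        emissions[state] += BASELINE_RATE[state] * ALTITUDE_COEF * effective
        t += duration
        state = "idling" if state == "working" else "working"
    return emissions

if __name__ == "__main__":
    result = simulate()
    print({k: round(v, 1) for k, v in result.items()},
          "total kg CO2:", round(sum(result.values()), 1))
```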
Objective: Dimensional management is crucial in manufacturing and construction. However, compared to the manufacturing industry, the construction industry has seen considerably less research investment in this area. Traditional construction relies on passive and inefficient dimensional management methods that focus on tolerance specifications and compliance measurements, which fail to meet the demands of rapid assembly in prefabricated buildings. This adversely affects both assembly quality and efficiency. This study establishes a dimensional management process for prefabricated buildings based on the information delivery manual (IDM) standards developed in building information modeling (BIM). It specifies the information exchange processes across different disciplines, providing a standardized implementation pathway and technical guidance for proactive dimensional management starting from the project design stage onward. Methods: This study conducts a survey and analysis of the current state and deficiencies of dimensional management mechanisms in prefabricated buildings, following which it proposes improvements via integration of the design, production, and construction processes of prefabricated buildings and introduction of tolerance analysis methods derived from the manufacturing industry. To effectively combine the enhanced dimensional management process with BIM, a dimensional management process is developed as per the IDM standards specified by ISO 29481-1:2016. This study follows four key steps: (1) identification of reference processes, (2) creation of process diagrams, (3) definition of exchange requirements and business rules, and (4) development of functional parts. Results: The newly developed dimensional management process comprised four stages. (1) In the preliminary design stage, a dimensional management team was formed early on, comprising experts, designers, production personnel, and construction technicians. This team determined the tolerance levels for key components on the basis of the owner's requirements. (2) In the detailed design stage, the dimensional management team, working with the BIM model, identified potential deviation risks that could affect the project's appearance, functionality, quality, and constructability. In addition, critical dimensions requiring deviation accumulation prediction were specified by the team. (3) In the design optimization stage, deviation issues were predicted and simulated by means of the tolerance analysis functional part. Based on the analysis results, the design was optimized, and reasonable tolerance and measurement plans were formulated. (4) In the production and construction stage, prefabricated components were manufactured and assembled onsite following the tolerance and measurement plans, with deviation reports generated via the digital compliance measurement functional part. In addition, this study clarified the exchange requirements and business rules involved in the process, facilitating the integration of dimensional management with existing BIM systems. Conclusions: Through the survey and improvement of the current dimensional management mechanisms in prefabricated buildings, this study develops a standard process for collaborative dimensional management across various disciplines, with reference to the IDM standards. 
In addition to the traditional focus on tolerance specifications and compliance measurements, the process emphasizes the establishment of a dedicated dimensional management team during the design stage. This team is responsible for selecting tolerance levels, identifying deviation risks, predicting deviation accumulation, and further guiding design optimization for manufacturing and assembly. This proactive dimensional management approach aims for higher assembly precision and quality. In addition, this study specifies the exchange requirements and functional parts of the "prefabricated building dimensional management" IDM, introduces tolerance analysis techniques from the manufacturing industry, and provides an information framework for future integration of dimensional management with BIM.
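To illustrate the kind of deviation-accumulation prediction that a tolerance analysis functional part could perform, the sketch below stacks hypothetical component and joint tolerances along one assembly chain using worst-case and root-sum-square (RSS) rules; the tolerance values and the chain itself are assumptions, not data from the study.

```python
# Illustrative deviation-accumulation check: worst-case and root-sum-square (RSS)
# stack-up of hypothetical tolerances along one assembly chain.
import math

tolerances_mm = [3.0, 2.0, 2.0, 1.5]   # tolerances of components/joints in the chain (assumed)

worst_case = sum(tolerances_mm)
rss = math.sqrt(sum(t ** 2 for t in tolerances_mm))

print(f"worst-case stack-up: {worst_case:.1f} mm, RSS stack-up: {rss:.1f} mm")
```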
Objective: The city information model (CIM) is a new city information synthesis that combines large amounts of information to guide the construction of urban organisms through the digital representation of urban objects. However, because of the complex application requirements and the imperfect theoretical system of CIM, the problems of a lack of semantic specification and of a unified framework need to be addressed, and the development of CIM is difficult to promote effectively. To form a universal semantic standard, this study proposes the CIM classification semantic web to guide the construction of the CIM and govern the CIM data. Methods: This study designs a CIM classification semantic tree via the line classification method based on various criteria, uses the robustly optimized bidirectional encoder representations from transformers pretraining approach (RoBERTa) model and a clustering algorithm to merge the synonyms of the same cluster, and adds new semantic knowledge to optimize the semantic tree. To further expand the application of CIM semantics, the city information model ontology (CIMO), which has six main classes and nine main property attributes, is proposed based on the completed semantic tree and the Stanford seven-step method. CIMO enables computers to process semantic information effectively. Moreover, given the multisource data feature within the CIM, this study aims to fully leverage semantic information derived from building information modeling and geographic information systems. This study designs a mapping relationship between CIMO and multisource heterogeneous data, which are composed of industry foundation classes and city geography markup language. The CIMO serves as a foundation for semantic analysis and the construction of knowledge graphs. This study proposes coding attributes as unique identifiers for management analysis, which improves the efficiency and accuracy of the CIM classification semantic web. The models of one primary school building and surrounding municipal facilities are selected as a case study to further evaluate the semantic-tree-based CIMO and knowledge graph data governance. Results: Based on the mapping rules, the model formed a well-structured triplet-based knowledge graph, and the resulting web ontology language (OWL) representation could be understood and processed by computers. The semantic analysis could be completed based on the semantic web rule language (SWRL), and the logic test could be completed by an inference machine. The CIM classification semantic web that passed the test could identify the relationships between the instances and the instance categories contained in the query class by level and classification, could operate on and update the instance data to complete logical and point-to-point queries and governance, and had excellent data governance performance and clear semantic logic. The triplet file of the knowledge graph was imported into the graph database for graph visualization and graph data storage, which is convenient for intuitive understanding and processing of the graph data. Conclusions: The CIM classification semantic web proposed in this study can support the construction of multiscenario, multiprecision, and multilevel CIM systems; can standardize the semantic expression of CIM; has good hierarchical logic and data processing functions; serves as a semantic standard; and integrates city-level and component-level data. 
The semantic web can provide guidance and a framework for developing the CIM data governance platform and building city-level models, and it can promote the construction and development of a geometrically and semantically integrated, comprehensive CIM system.
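The synonym-merging step can be sketched as embedding candidate terms and clustering them; in the study the embeddings come from the RoBERTa model, whereas the placeholder below uses random vectors and scikit-learn agglomerative clustering purely to show the mechanics. The terms, distance threshold, and embedding stand-in are illustrative assumptions.

```python
# Placeholder sketch of the synonym-merging step: terms are embedded (by RoBERTa in the
# study; random vectors here) and agglomeratively clustered, and each cluster is
# collapsed to one representative term.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

terms = ["road", "roadway", "street", "bridge", "viaduct"]   # toy CIM vocabulary
X = np.random.default_rng(0).normal(size=(len(terms), 16))   # stand-in for RoBERTa embeddings

labels = AgglomerativeClustering(n_clusters=None, distance_threshold=6.0,
                                 linkage="average").fit_predict(X)

clusters = {}
for term, lab in zip(terms, labels):
    clusters.setdefault(lab, []).append(term)
for lab, members in clusters.items():
    print(f"cluster {lab}: keep '{members[0]}', merge {members[1:]}")
```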
Objective: Approximately 70% of the time spent during the construction of a modular building is devoted to the onsite assembly of the prefabricated components. Because of the limited accuracy of traditional crane motion control, insufficient rigidity of the boom and rope, and susceptibility of cranes to outdoor environments, accurate placement of components in the target area using a crane alone is difficult. Repeated positioning of a component with the assistance of multiple workers is required before installing the component. Repeated steps of lifting and lowering components rely on a considerable amount of labor and affect construction efficiency. To solve the problems of traditional installation methods that rely excessively on manual labor for the positional adjustment of lifted components, a robot-assisted component installation system is proposed in this study. Methods: Construction sites are far more complex than structured factory manufacturing environments; therefore, construction robots have distinct technical characteristics compared with industrial robots. A robot-assisted installation method was designed based on an analysis of the technical characteristics of construction robots. After the initial alignment of a component using a crane, the two robots cooperate to adjust the position and orientation of the component accurately. Thus, automatic installation of the components can be realized. The procedure for the robots' entire installation-assistance task was illustrated in pseudocode form as a series of actions, including positioning the robots, locating the reserved hole, threading the hole, and pushing the component. To reduce the cycle time and costs for the development of such a robot, a prototype model of a robot-assisted component installation system was created on a computer, and virtual prototyping was used to simulate and analyze the kinematic and dynamic properties of the model within a virtual system with real environmental properties. Moreover, the calculation results of the virtual prototype provided important data support for equipment selection and component design in subsequent test sessions. Results: The motion states of the two robots and the changes in contact force between the tools mounted on their end flanges and the prefabricated component during task execution were accurately modeled. Based on the simulation results, the feasibility of the robots replacing workers in completing the positional adjustment of components, as well as the effective reduction of the robots' end loads, was verified. With the robot-assisted component installation system built in the laboratory, which was identical to the virtual prototype model, the test of installing a precast panel in a predetermined area was conducted several times. The installation position deviations observed in all tests were less than the limit values of the quality acceptance standards in China. Conclusions: The actions of the two robots for assisting in the accurate installation of components can be smoothly realized. The simulation calculations and experimental results demonstrate the advantages of the proposed robot-assisted component installation system for improving component installation accuracy and saving labor. Because of severe labor shortages and sharp increases in labor costs, the proposed method provides economic benefits that become more evident as the number of component installation tasks increases. 
Research on installation-assistance robots offers substantial reference value and application prospects for current research on construction robots and for practical engineering problems. With the future development of intelligent control systems, the automation level of the proposed method can be considerably improved to realize truly unmanned construction.
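A hedged sketch of the assisted-installation sequence summarized above (coarse crane alignment, robot positioning, iterative threading and pushing until the deviation is within tolerance). The function names, the proportional correction rule, and the 5 mm acceptance limit are hypothetical and do not reproduce the paper's pseudocode.

```python
# Hypothetical sketch of the assisted-installation sequence: coarse crane alignment,
# robot positioning, then iterative threading/pushing until the residual deviation is
# within an assumed acceptance limit.
POSITION_TOL_MM = 5.0   # assumed acceptance limit

def assisted_install(robots, crane_offset_mm):
    offset = crane_offset_mm                 # step 1: crane leaves a coarse residual offset
    for r in robots:                         # step 2: both robots move to working poses
        r["pose"] = "working"
    while abs(offset) > POSITION_TOL_MM:     # step 3: thread the reserved hole and push
        correction = 0.5 * offset            # proportional push by the cooperating robots
        offset -= correction
    return offset

robots = [{"name": "robot_A", "pose": "home"}, {"name": "robot_B", "pose": "home"}]
final = assisted_install(robots, crane_offset_mm=60.0)
print(f"final deviation: {final:.2f} mm (assumed limit {POSITION_TOL_MM} mm)")
```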
Objective: The widespread adoption of building information modeling (BIM) in construction projects has notably advanced design, coordination, and project management. However, a persistent challenge in BIM implementation lies in the efficient management of geometric information, particularly when identical or similar components are repeatedly present in a model. This leads to substantial data redundancy, increasing both storage and network transmission costs, thereby impeding the scalability and efficiency of BIM models in large-scale projects. Methods: This study proposes an approach for geometric component comparison and reuse based on geometric tensor analysis and graph matching algorithms. The method focuses on BIM models in the industry foundation class (IFC) format, leveraging the boundary representation (B-Rep) of geometric components to extract key features such as metric tensors and inertia tensors. These tensors serve as shape and spatial property descriptors, enabling accurate assessments of geometric similarity. Graphs are constructed in this process, where nodes represent the surfaces of components and edges denote the topological relationships between them. By applying graph matching techniques, this method identifies geometric similarity even when components undergo transformations such as rotation, translation, or scaling. The proposed method was validated through several experiments focused on reducing geometric redundancy and optimizing model storage. Results: In the first experiment, a complex geometric component, a window that consisted of 392 surfaces and 758 edges, was analyzed for reuse. This process reduced the IFC file size from 188 kB to 66 kB, representing a 64.9% decrease and demonstrating the effectiveness of identifying and reusing repeated geometric components in minimizing storage requirements. The second experiment applied the method to a 22-story residential building, focusing on the standard floors comprising 22 718 components. The method decreased the overall IFC model size by 90.0%, illustrating its scalability and efficiency in handling large-scale BIM models. The third experiment evaluated the time required to retrieve models of different scales, showing short retrieval times. However, these findings indicated the need for further optimization when handling large-scale models. This approach offers several advantages over traditional methods for managing geometric information in BIM. First, geometric tensor analysis ensures robust component comparison that is invariant to transformations, enabling accurate identification and reuse of components regardless of their spatial orientation. Second, the integration of graph matching algorithms provides a flexible and scalable framework for handling complex topologies and large datasets, making this method particularly suitable for high-volume construction projects. In addition to reducing redundant geometric data in BIM models, the proposed method enhanced storage efficiency and data exchange, minimized BIM model sizes, and facilitated faster data transfer while reducing the strain on network resources. These features are critical for large collaborative projects involving multiple stakeholders across different platforms. Furthermore, the method improved the maintainability of BIM models by facilitating version control and consistency checks, ensuring efficient management of model updates without duplicating existing geometric data. 
Conclusions: In conclusion, combining geometric tensor analysis and graph matching offers a robust solution for optimizing BIM models through geometric component reuse. This method addresses the challenge of data redundancy, delivering significant improvements in storage efficiency and network transmission. Subsequent research may focus on refining the computational aspects of the method, particularly for processing highly complex models, and explore its integration into cloud-based BIM systems to further enhance real-time collaboration and model management across multidisciplinary teams.
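A simplified stand-in for the tensor-based similarity test: an inertia-like second-moment tensor is computed from sampled surface points of each component, and its sorted, scale-normalized eigenvalues are compared, giving a descriptor invariant to rotation and translation. This omits the B-Rep metric tensors and the graph-matching stage described in the abstract and uses synthetic point samples, so it only sketches the comparison principle.

```python
# Simplified stand-in for the tensor-based comparison: an inertia-like second-moment
# tensor of sampled surface points gives a rotation- and translation-invariant signature.
import numpy as np

def inertia_signature(points):
    p = points - points.mean(axis=0)                 # remove translation
    cov = p.T @ p / len(p)                           # second-moment (inertia-like) tensor
    eig = np.sort(np.linalg.eigvalsh(cov))[::-1]
    return eig / eig[0]                              # scale-normalized, rotation-invariant

def similar(a, b, tol=1e-3):
    return np.allclose(inertia_signature(a), inertia_signature(b), atol=tol)

rng = np.random.default_rng(1)
component = rng.uniform([0, 0, 0], [2.0, 1.0, 0.5], size=(500, 3))   # sampled "component"
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
duplicate = component @ R.T + np.array([10.0, -3.0, 4.0])            # rotated + translated copy

print("duplicate detected:", similar(component, duplicate))
```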
Objective: Active control is a critical aspect of adaptive structures. The cable dome structure is a predominant form of large-span spatial architecture, with its equilibrium state representing the interaction between force and form. Consequently, the dome structure is controllable and serves as an ideal model for adaptive structures. Shape memory alloy (SMA), a typical smart material, demonstrates excellent shape memory effects and is frequently utilized as a driving mechanism in active control systems. This article explores the application of SMA in the adaptive cable dome structure to enhance structural form control, improve control accuracy, reduce control complexity and controller weight, and facilitate intelligent control. Methods: This paper uses the Geiger cable dome structure as a case study. First, a three-dimensional finite element model is created using ANSYS APDL software to assess the structural control requirements. Next, uniaxial tensile tests are performed on SMA wires to evaluate their material properties. According to the identified control requirements and the material properties of the SMA wire, a tendon designed for active control is developed and manufactured. A key design criterion is to ensure that the SMA tendon produces a specific plastic strain under load, which must remain below 8%. Subsequently, experimental research is conducted to evaluate the recovery performance of the SMA tendon. The SMA tendon is connected in series with steel wire rope to create the active control unit, which then replaces the external diagonal cables in the cable dome structure for active control testing. The performance of the SMA-based control method is compared with that of mechanical control methods to assess its effectiveness. Results: When the initial loads were set at 2 000, 2 500, and 3 000 N, the strain in the SMA tendon reached 4.10%, 4.54%, and 4.67%, respectively. Upon heating to 120 ℃, the tendon generated a recovery strain per unit heated length of 0.1462, 0.1554, and 0.1655 m⁻¹, respectively. Additionally, the rate of recovery strain during heating depended on the martensite volume fraction, which varied with temperature. Compared with mechanical control methods, the cable dome structure controlled by SMA exhibited smaller errors, with smoother curves for element internal forces and nodal displacements. Furthermore, the finite element simulation closely aligned with the experimental results, effectively describing the control process of the structure. When the length of the external diagonal cable was shortened by 0.90 mm, the internal force in the structural spine cable increased by more than 25%. Conclusions: This research demonstrates that SMA can function as an active control driver for cable elements in cable dome structures, providing a stable and reliable control process. Compared with mechanical control methods, the SMA control method is more convenient and easier to manage in terms of accuracy; however, the control rate is dependent on the martensite volume fraction. The SMA tendon used in this study is relatively thick, causing temperature transmission from the exterior to the core, which results in a lag effect and requires a certain stabilization time. Adjusting the inclined cables outside the cable dome can effectively control the shape of the cable dome structure and alleviate the relaxation of the spine cables.
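For reference, the dependence of recovery strain on the martensite volume fraction noted above is commonly written with a Liang-Rogers type reverse-transformation model; the relations below are this generic textbook form, not necessarily the constitutive model used in the study.

```latex
% Generic Liang--Rogers type reverse transformation during heating (A_s \le T \le A_f);
% parameters are not taken from the study.
\xi(T) = \frac{\xi_0}{2}\left\{\cos\!\left[a_A\,(T - A_s)\right] + 1\right\},
\qquad a_A = \frac{\pi}{A_f - A_s},
\qquad \varepsilon_{\mathrm{rec}}(T) = \varepsilon_L\left[\xi_0 - \xi(T)\right]
```

Here ξ₀ is the initial martensite fraction, A_s and A_f are the austenite start and finish temperatures, and ε_L is the maximum recoverable strain; this is why the recovery rate tracks the temperature-dependent martensite volume fraction.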
Significance: The construction industry in China is a major contributor to carbon emissions, creating substantial environmental challenges. In response, the construction sector is intensifying efforts to reduce its carbon footprint. Among the various strategies implemented, building information modeling (BIM) technology has emerged as a key digital tool with transformative potential to lower building-related carbon emissions. BIM technology enhances design precision and operational efficiency while enabling comprehensive analysis and optimization of building systems. This capability facilitates carbon emission reductions throughout the lifecycle of a building. However, there remains a notable lack of systematic documentation and synthesis on effectively leveraging BIM technology for carbon emission control in construction. This gap is further exacerbated by the lack of comprehensive analyses of potential future research directions and practical application scenarios for BIM in carbon reduction. Progress: Therefore, the present study investigates the specific application of BIM to reduce carbon emissions across the design, production, and operation phases of a building's lifecycle. Through bibliometric methods that entail quantitative analysis of published research, the study seeks to identify key technologies and emerging trends within this domain. This research is organized into two main components. First, a comparative literature review combined with a market survey is conducted to map advancements in BIM-based research related to the whole life cycle carbon emissions of buildings. This comprehensive review aims to consolidate existing knowledge while identifying gaps or inconsistencies within the current body of research. Second, a detailed examination is conducted, focusing on the stages that have the most significant impact on carbon emissions, including building design, production, and operation. This analysis aims to identify major achievements and ongoing challenges within current research efforts and practical implementations and highlight potential directions for future advancements. Conclusions and Prospects: The findings reveal several key insights. Research on BIM technology has focused primarily on whole life cycle carbon emission analysis and the design phase of buildings. While these contributions are noteworthy, research targeting the production and operational phases remains comparatively underdeveloped. This imbalance is partly due to the limited exploration of BIM's application scenarios in these later stages of a building's lifecycle. Specifically, BIM's potential to optimize building production processes and enhance operational efficiency through real-time data analytics and predictive modeling has not been fully realized or integrated into practical projects. Therefore, future research should prioritize broadening BIM's application to cover all phases of a building's lifecycle comprehensively. This involves developing innovative BIM tools and methodologies that seamlessly integrate with building management systems to enable real-time monitoring and control of carbon emissions. Furthermore, fostering collaboration among academia, industry stakeholders, and policymakers is essential for advancing BIM-based carbon reduction strategies and ensuring their effective implementation in practical scenarios. 
By addressing these research and implementation gaps, the construction industry can fully leverage BIM technology to achieve substantial reductions in carbon emissions, thereby contributing to global sustainability efforts.
Objective: Architectural free-form surfaces often rely on nonuniform rational B-splines for delineation, posing significant challenges for grid partitioning owing to their diverse and irregular configurations. Prevailing grid partitioning methods are tailored to specific free-form geometries, leading to a lack of universality in extant explicit programming algorithms. Structural design, including grid partitioning, largely depends on the empirical knowledge and intuitive judgment of designers. As demands for interdisciplinary collaboration and efficient design processes grow, the need for accuracy and speed has increased. To alleviate the intrinsic limitations of explicit programming and mitigate overdependence on designer acumen, this paper proposes the use of a generative adversarial network model to elucidate and integrate the logical correlation between free-form surfaces and their corresponding grid structures. This approach enables the generation of grid structures from free-form surfaces as inputs to the generative adversarial network model. Methods: The process starts with a preprocessing regimen for free-form surfaces. To fit the two-dimensional input and output framework of the generative adversarial network model, a self-developed algorithm generates curvature and height cloud maps representing the free-form surface, which are used as inputs to the generative adversarial network model. The pix2pixHD model is modified to allow both curvature and height cloud maps to be input simultaneously into the generator and discriminator. These cloud maps are then fed into the grid generative adversarial network (GridGAN) model, which has been pretrained and validated to derive grid partitioning outputs. In the postprocessing phase, the two-dimensional grid data are transformed into three-dimensional grid structures by extracting nodal points and their topological relationships from the grid layout. This information is subsequently projected into three-dimensional space. The effectiveness of the proposed method is demonstrated through a comparative analysis with two existing explicit programming grid partitioning algorithms (one quadrilateral and one triangular). Multiple examples are used to evaluate the generative design approach, employing evaluation metrics based on grid geometric properties such as rod length factor and shape quality factor. Results: The case studies indicated that the intelligent free-form grid partitioning method proposed in this paper performed comparably to the triangular and quadrilateral grid partitioning algorithms used in explicit programming. The maximum relative error recorded was 2.908% for the mean rod length and 1.133% for the shape quality factor, both of which fell within acceptable limits. Conclusions: These findings confirm that the proposed approach achieves grid partitioning outcomes comparable to those of various explicit programming algorithms. It effectively handles free-form surfaces with diverse shapes. This research introduces a methodological perspective in architectural design and establishes a robust foundation for future research and applications. It has the potential to catalyze the evolution of design practices toward greater efficiency and intelligence.
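The geometric evaluation metrics mentioned above can be approximated with common surrogates; the sketch below computes the mean member length and a standard triangle shape-quality measure (equal to 1 for an equilateral triangle) over a toy grid patch. The exact definitions of the paper's rod length factor and shape quality factor may differ.

```python
# Common surrogates for grid quality: mean member length and the triangle shape-quality
# measure q = 4*sqrt(3)*A / (l1^2 + l2^2 + l3^2), which equals 1 for an equilateral triangle.
import numpy as np

def edge_lengths(tri):
    a, b, c = map(np.asarray, tri)
    return np.linalg.norm(b - a), np.linalg.norm(c - b), np.linalg.norm(a - c)

def triangle_quality(tri):
    l1, l2, l3 = edge_lengths(tri)
    s = 0.5 * (l1 + l2 + l3)
    area = np.sqrt(max(s * (s - l1) * (s - l2) * (s - l3), 0.0))   # Heron's formula
    return 4.0 * np.sqrt(3.0) * area / (l1**2 + l2**2 + l3**2)

grid = [  # toy triangular patch (three panels)
    [(0, 0, 0), (1, 0, 0), (0.5, 0.9, 0)],
    [(1, 0, 0), (2, 0, 0), (1.5, 0.8, 0.1)],
    [(0.5, 0.9, 0), (1, 0, 0), (1.5, 0.8, 0.1)],
]
lengths = [l for tri in grid for l in edge_lengths(tri)]
print(f"mean member length: {np.mean(lengths):.3f}, "
      f"mean shape quality: {np.mean([triangle_quality(t) for t in grid]):.3f}")
```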
Objective: The structural integrity of bridges is a critical concern as infrastructure ages, necessitating the development of reliable methods for detecting potential failures. Among these, the identification of small target cracks is particularly important, as these cracks often grow undetected until they result in severe damage. Traditional inspection methods, such as manual visual inspections, are hindered by their labor-intensive nature and susceptibility to human error, often resulting in the oversight of small but significant defects. Recent advancements in computer vision and deep learning technologies offer new opportunities to improve the accuracy and efficiency of bridge inspections. This study introduces an innovative approach for detecting small target cracks in bridge structures by employing an enhanced version of the You Only Look Once (YOLOv8) object detection model, a widely recognized algorithm known for its rapid processing capabilities and high detection accuracy. The enhanced YOLOv8 model is tailored to detect small-scale cracks on bridge surfaces that may not be easily identifiable by traditional inspection methods or earlier versions of computer vision models. Methods: The proposed algorithm modifies the standard YOLOv8 model to address the specific challenges associated with detecting small cracks on bridge surfaces. A key modification is the integration of efficient vision transformer (EfficientViT) into the backbone of the YOLOv8 model. EfficientViT is an advanced transformer-based architecture that reduces redundant parameters and optimizes the extraction of local features from high-resolution images, enabling more precise detection of subtle crack features. This enhancement is crucial, as small cracks often exhibit low contrast against their background and may be easily overlooked by less sophisticated models. In addition to EfficientViT, the proposed algorithm also incorporates large selective kernel network (LSKNet) within the C2f module of YOLOv8. LSKNet employs a dynamic kernel selection mechanism that allows the model to adaptively adjust the size of the convolutional kernels based on the input features, making it highly suitable for detecting cracks of varying sizes, orientations, and morphological characteristics. This adaptability ensures that the model can detect small cracks, regardless of their form. Furthermore, the model uses bidirectional feature pyramid network (BiFPN) to merge feature maps at different scales. Traditional models struggle with detecting small targets due to the loss of critical information during downsampling operations. BiFPN mitigates this issue by preserving high-resolution feature maps across multiple layers, enhancing the model's ability to detect small cracks that would otherwise be missed. The combined effect of these modifications improves the accuracy of small target crack detection while maintaining computational efficiency. Results: The effectiveness of the proposed model was validated using a dataset of crack images from a specific bridge, captured by unmanned aerial vehicles (UAVs). UAVs provided detailed images from areas that were often difficult or dangerous to access using traditional inspection methods. The experimental results demonstrated that the enhanced YOLOv8 model significantly outperformed the original version in terms of key performance metrics. 
Specifically, the modified model achieved improvements of 3.7%, 3.5%, 3.5%, 3.9%, and 7.4% in terms of the detection precision, recall, F1 score, mAP50, and mAP50-95, respectively. These results indicated a substantial improvement in the model's ability to detect small cracks that often had low contrast and irregular shapes, which were typical characteristics of cracks on bridge surfaces. Furthermore, compared to conventional methods, the proposed model was able to detect cracks with higher precision and fewer false positives, making it a promising tool for improving the efficiency of bridge inspections. Conclusions: In conclusion, the improved YOLOv8 algorithm introduced in this study represents a significant advancement in the detection of small target cracks in bridge structures. The modifications made to the original YOLOv8 model, including the integration of EfficientViT, LSKNet, and BiFPN, result in a more accurate and computationally efficient model for crack detection. This approach offers a practical and scalable solution for the widespread application of bridge health monitoring, particularly in areas that are difficult to inspect using traditional methods. By leveraging advanced surface data processing techniques, this research contributes to the development of modern methods for assessing the health of bridge structures, ultimately helping to ensure the safety and longevity of infrastructure systems.
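As background on one of the named modules, BiFPN merges multi-scale feature maps with a fast normalized weighted fusion; the sketch below shows that mechanism with illustrative weights and feature maps and is not the modified YOLOv8 implementation itself.

```python
# Illustration of BiFPN-style "fast normalized fusion", the mechanism used to merge
# feature maps of different scales without discarding small-target detail.
import numpy as np

def fast_normalized_fusion(features, weights, eps=1e-4):
    w = np.maximum(weights, 0.0)              # keep fusion weights non-negative (ReLU)
    w = w / (w.sum() + eps)                   # normalize so the weights sum to ~1
    return sum(wi * f for wi, f in zip(w, features))

p_in = np.random.rand(64, 40, 40)             # same-scale backbone feature map (toy)
p_up = np.random.rand(64, 40, 40)             # upsampled coarser feature map (toy)
fused = fast_normalized_fusion([p_in, p_up], weights=np.array([0.7, 1.3]))
print(fused.shape)                            # (64, 40, 40)
```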
Objective: The Shenzhen-Zhongshan Link, an expressway that connects the cities of Shenzhen and Zhongshan, has a total length of 6 845 m, of which the immersed tube section is 5 035 m long and uses a steel shell-concrete structure. The design and construction of the final joint posed notable challenges, given the complex sea conditions and site-selection constraints in the construction area. As a solution, the Shenzhen-Zhongshan Link innovatively adopted the "prefabricated push-type construction method" for the construction of the final joint, considerably increasing the construction efficiency. This paper conducts a detailed mechanical analysis of the procedures involved in the "prefabricated push-type construction method" employed in the Shenzhen-Zhongshan Link. Methods: First, the detailed construction process of the underwater push-out final joint in the link is described. The underwater construction of the final joint in the link is split into six main processes, covering key construction steps such as steel-shell transportation, water pumping and pressure fitting, and steel tie rod welding. Subsequently, this paper conducts a model verification of the final joint during the construction phase. Monolithic finite element models are established for the push-out part and expanded part, and finite element calculations are conducted on the basis of the loads of each working condition to confirm structural safety. Finally, a detailed mechanical analysis of the underwater push-out process is conducted; this paper observes that the process involves changes in the internal forces of the steel rods and the deformation of the GINA waterstop. Potential structural safety risks are identified in this process. Therefore, theoretical calculations and finite element model verifications are conducted for this special stress condition. A theoretical analysis model is established, and the effect of rail friction on the rebound amount of the GINA waterstop is studied via formula derivation. A refined finite element model is established to analyze changes in the internal forces of the steel rods during the underwater push-out process. Results: The results of the model verification during the construction phase indicated that under all working conditions, the maximum stress and floor deformation of the push-out part and expanded part were within the design safety range. This suggested that the structural design of the "prefabricated push-type construction method" is relatively reliable with a considerable safety margin. The results of the mechanical analysis of the underwater push-out process showed that rail friction caused greater rebound on the upper side than on the lower side, hence generating greater tensile forces in the upper steel rods. Furthermore, the underwater push-out process may lead to uneven spatial distribution of internal forces in the steel tie rods. Conclusions: The "prefabricated push-type construction method" adopted for the final joint in the Shenzhen-Zhongshan Link exhibits relatively favorable structural stress characteristics during the construction phase. This paper verifies the most unfavorable conditions in each construction process, and the results show that the relevant structural design is reasonable with a sufficient safety margin. During the underwater push-out process, uneven spatial forces can be generated in the rods because of the influence of rail friction and the spatial distribution of the steel tie rods on the cross section. 
This study suggests that, in similar construction processes, tie rod stress data should be monitored and anti-backward devices flexibly employed to ensure structural safety.
Objective: During the erection stage of suspension bridges, the critical flutter wind speed varies continuously owing to changes in structural dynamic properties, including mass and stiffness. Concurrently, the wind climate fluctuates seasonally or even monthly throughout the bridge erection process. To evaluate the flutter risk of suspension bridges exposed to varying wind conditions, a framework is proposed to optimize the deck erection timeline under complex wind climates, with a summary of the flutter risk flow. Methods: The aerodynamic flutter stability for each construction stage was tested by full-model wind tunnel tests. The time-varying wind climate is represented as an extreme wind speed annual exceedance probability distribution function for each month. Data for synoptic wind speed analysis were extracted from the National Oceanic and Atmospheric Administration (NOAA) Global Integrated Surface Dataset, while tropical cyclone wind speeds were modeled using Vickery's full-track Monte Carlo simulations. The mixed extreme wind speed, resulting from the combination of tropical cyclones and monsoons, was calculated using probability theory for independent events. An optimization algorithm was used to determine the optimal deck erection starting timeline to minimize on-site flutter probability during the entire deck erection construction process. Results: Two specific analytical methods and visualizations were proposed. The flutter risk flow illustrates the flutter performance of a single program, while the erection flutter risk box plot provides a horizontal comparison of different erection processes. The study also discussed how to choose the best deck erection plan for different erection procedure configurations. The Shenzhen-Zhongshan Link, built on the southern coastline of China, served as a case study. A full-bridge aeroelastic model wind tunnel test was conducted to determine the critical flutter speed at different erection stages. As the construction of the main beam progressed, the critical wind speed of the bridge generally increased. In addition, a negative angle of attack was detrimental when the main beam assembly rate was low, while a positive angle of attack was unfavorable at higher assembly rates. Extreme wind speeds from typhoons and synoptic winds were analyzed using Monte Carlo typhoon simulations and meteorological data statistics, respectively. The results indicated strong seasonality in coastal mixed climates owing to typhoon influence. The optimization algorithm identified an optimal timeline for the Lingdingyang Bridge, minimizing flutter risk. However, selecting the worst timeline could exponentially increase flutter risk. Conclusions: This paper proposes a construction process optimization strategy for suspension bridge girder erection that minimizes the flutter risk during the overall construction process. The mixed wind climate, including typhoons and benign winds, and the continuously time-varying bridge structural state are considered, and the time-varying construction period flutter probability is calculated. Research has shown that timeline optimization is a more economical and viable approach than structural and aerodynamic interventions, especially in mixed wind climates. Horizontal comparison of deck erection plans should consider not only the lowest flutter speed but also the flutter risk probability across all timelines. The erection risk box plot is an effective tool in this case. 
If the erection starting time is flexible, minimizing the flutter probability should be the main indicator. When the timeline is uncertain, greater attention should be given to the mean-value and quartile box indicators.
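The probabilistic combination and the timeline search can be sketched as follows, assuming illustrative monthly exceedance probabilities and stage-dependent critical-speed margins (none of the numbers come from the Lingdingyang Bridge study); the key line is the independent-events product used to mix typhoon and monsoon winds.

```python
# Illustrative combination of typhoon and monsoon winds as independent events, plus a
# brute-force search for the start month that minimizes total erection flutter risk.
import numpy as np

MONTHS = 12
p_exc_typhoon = 0.02 * np.sin(np.linspace(0.0, np.pi, MONTHS)) ** 2   # peaks in summer (toy)
p_exc_monsoon = np.full(MONTHS, 0.001)                                # small, constant (toy)

def stage_exceedance(month, margin):
    # A larger critical-speed margin (later erection stage) lowers both exceedance terms.
    ex_t = p_exc_typhoon[month % MONTHS] / margin
    ex_m = p_exc_monsoon[month % MONTHS] / margin
    # Independent events: combined non-exceedance is the product of the non-exceedances.
    return 1.0 - (1.0 - ex_t) * (1.0 - ex_m)

stage_margins = [1.0, 1.1, 1.3, 1.6, 2.0]   # one stage per month, risk decreasing (toy)

def total_flutter_risk(start_month):
    p_safe = 1.0
    for offset, margin in enumerate(stage_margins):
        p_safe *= 1.0 - stage_exceedance(start_month + offset, margin)
    return 1.0 - p_safe

best = min(range(MONTHS), key=total_flutter_risk)
worst = max(range(MONTHS), key=total_flutter_risk)
print(f"best start month: {best + 1} (risk {total_flutter_risk(best):.4f}), "
      f"worst: {worst + 1} (risk {total_flutter_risk(worst):.4f})")
```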
Objective: Traditional methods for floating and transporting immersed tunnel elements at sea often involve the use of tugboats for towing. This approach results in the vessel and the tunnel element moving independently, making it difficult to control the attitude of the immersed tube and leading to low navigation speeds. The Shenzhen-Zhongshan Link project in China, however, utilized an integrated vessel for the transportation and installation of immersed tubes. This specialized construction vessel combines the operations of floating, positioning, immersion, and installation of tunnel elements. The integrated vessel measures 190.40 m in length, 75.00 m in beam, 14.70 m in depth, and 23 200 t in weight. It is equipped with two main propulsion systems, each capable of delivering 9 280 kW, and eight side thrusters ranging from 2 600 to 3 000 kW. The integrated vessel, connected rigidly to the immersed tube through supports and cables, demonstrated rapid floating capabilities in the Shenzhen-Zhongshan Link project, achieving a maximum navigation speed of 5.8 kn and covering a 47.0 km floating route in just 7-8 h. While this high-speed floating navigation enhances operational efficiency, the safety of both the vessel and the transported elements during floating remains a significant concern. A notable issue observed is the synchronization of the attitude between the element and the vessel. During acceleration, a relatively significant longitudinal tilt occurs, necessitating in-depth analysis to understand the hydrodynamic mechanisms behind this trim occurrence during high-speed floating of oversized immersed tubes, as well as to assess the impact of sustained trim on the safety of floating navigation and the loss of propulsion efficiency for the vessel. Methods: This paper presents a theoretical analysis comparing the resistance distributions of immersed tunnel elements in calm water with those under navigation at specific speeds. In situ measurements were conducted to observe attitude changes during the floating process. A numerical model describing the floating condition of a single tube element was developed using FLOW-3D software to analyze the resistance distributions and attitude changes at approximately 4.0 kn. Additionally, a comprehensive numerical model of the vessel-tube connection was established using computational fluid dynamics methods, with a scaling ratio of 1∶40 for model-scale simulations. These models simulated the flow field changes around the integrated vessel and the immersed tube at navigation speeds of 4.0 and 6.0 kn. Results: Through theoretical analysis, in situ measurements, and numerical simulations, the following conclusions were drawn: (1) The geometric shape of the immersed tube, which was a nonstreamlined rectangular box, resulted in significantly greater end face (bow face) resistance than that of streamlined vessels. This end face resistance was the main component of the navigation resistance for the immersed tube. (2) At certain navigation speeds, a downward flow field formed by the water at the bottom of the bow end was identified as the primary cause of the bow-down tilt of the immersed tube. This vertical flow field decreased the water pressure in the area near the bow end, leading to a significant trim phenomenon. (3) The total frictional resistance caused by the viscosity of water was found to be only approximately 1.50% of the total resistance, making its impact almost negligible. 
Conclusions: Measurements of the integrated vessel's attitude during the rapid floating of immersed tubes indicate a significant longitudinal tilt. A relationship between the trim angle and navigation speed is established through these measurements. By combining numerical and theoretical analysis methods, it is possible to analyze the state of the flow field around the immersed tube under high-speed floating conditions. The analysis suggests that the longitudinal tilt of the immersed tube is related to the flow field formed at the bow of the immersed tube, which reduces the dynamic pressure at the bottom of the bow end. This reduction in pressure generates a rotational moment that tilts the bow of the immersed tube downward.
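A rough, order-of-magnitude decomposition consistent with finding (3) can be written down by estimating end-face pressure drag with a bluff-body drag coefficient and skin friction with the ITTC-1957 correlation line; the dimensions, drag coefficient, and wetted-surface assumption below are illustrative, not values from the study.

```python
# Order-of-magnitude decomposition of floating resistance for a rectangular tube:
# end-face (pressure) drag with an assumed bluff-body drag coefficient versus skin
# friction from the ITTC-1957 correlation line. All inputs are illustrative; the point
# is only that friction is a small share of the total.
import math

RHO, NU = 1025.0, 1.19e-6        # seawater density (kg/m^3), kinematic viscosity (m^2/s)
V = 4.0 * 0.5144                 # 4.0 kn in m/s
L, B, D = 160.0, 46.0, 10.0      # assumed tube length, beam, and submerged depth (m)

Cd_end = 0.9                                            # assumed bluff-body drag coefficient
R_pressure = 0.5 * RHO * V**2 * (B * D) * Cd_end        # end-face (bow face) resistance

Re = V * L / NU
Cf = 0.075 / (math.log10(Re) - 2.0) ** 2                # ITTC-1957 friction line
wetted_area = L * (B + 2.0 * D)                         # bottom and side walls (assumed)
R_friction = 0.5 * RHO * V**2 * wetted_area * Cf

total = R_pressure + R_friction
print(f"pressure share: {R_pressure / total:.1%}, friction share: {R_friction / total:.1%}")
```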
Objective: This study examines the wind-resistant design of the Shenzhen-Zhongshan Link located in the Pearl River Delta, an area prone to strong typhoons. The region can experience a maximum 10 min average wind speed of 43.0 m/s at a height of 10 m during a 100-year return period, posing significant challenges for bridge design. The aim is to propose a series of aerodynamic and mechanical control countermeasures to comprehensively mitigate wind-induced vibration risks during the operational lifespan of bridges. The main navigable span, the Lingdingyang Bridge, faces flutter risk, whereas the six-span nonnavigable span, featuring parallel twin box girders, is susceptible to vortex-induced vibration (VIV). Methods: The flutter performance of the two Lingdingyang Bridge design schemes is evaluated by conducting wind tunnel tests on a sectional model with a geometric scale of 1∶80. This research investigates the aerodynamic and aerostatic effects of upper central vertical stabilizers (UCVSs), lower central vertical stabilizers (LCVSs), horizontal stabilizers (HSs), and their combinations on two wide single box girders with an aspect ratio exceeding 12 at various angles of attack (AOAs). Wind tunnel tests optimize stabilizer layouts to improve flutter performance. For VIV, extensive wind tunnel experiments are conducted on a sectional model with a large geometric scale ratio of 1∶30. The experimental results investigate the VIV behavior of parallel twin box girders with different slot width ratios. The experiments consider wind angles of attack of ±3° and 0°, identifying 3° as the most unfavorable angle of attack. Results: The flutter tests highlighted the high sensitivity of the critical flutter wind speed to AOAs, with lower critical flutter wind speeds observed at AOAs of 1° and 2°. The application of UCVS was advantageous for increasing the critical flutter speed at positive angles of attack, whereas the utilization of LCVS was effective for the enhancement of the critical flutter speed at negative angles of attack. The combined use of central and horizontal vertical stabilizers effectively increased the critical flutter wind speed of wide single-box girders. The VIV experiments revealed significant aerodynamic interference among parallel twin box girders with different slot width ratios, notably at 3° of attack. The combination of wind fairings/winglets and skirt plates effectively suppressed VIV in parallel twin box steel girders, whereas a single aerodynamic countermeasure was insufficient. Conclusions: The comprehensive application of aerodynamic and mechanical control measures has significant potential for improving the operational stability of the Shenzhen-Zhongshan Link bridges throughout their lifecycle. Aerodynamic stabilizers effectively increased the critical flutter wind speed, whereas the combination of wind fairings and skirt plates suppressed VIV. Mechanical measures, including multiple tuned mass dampers (MTMDs), prove effective regardless of the girder's cross-section shape and wind environment. By considering mode coupling between VIV mode and non-VIV modes caused by MTMDs, optimal MTMD parameters can be designed for improved control performance. Implementing comprehensive wind-resistant designs and corresponding control measures across several bridge sections of the Shenzhen-Zhongshan Link has significant potential for improving the operational stability of bridges over their entire life cycle.
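For context on MTMD parameter selection, the classical Den Hartog optimum for a single tuned mass damper with mass ratio μ is often used as a starting point; the study's MTMD design, which accounts for coupling between VIV and non-VIV modes, would refine such values.

```latex
% Classical Den Hartog optimum for a single tuned mass damper with mass ratio \mu,
% shown as a conventional starting point only.
f_{\mathrm{opt}} = \frac{\omega_d}{\omega_s} = \frac{1}{1+\mu},
\qquad
\zeta_{\mathrm{opt}} = \sqrt{\frac{3\mu}{8\,(1+\mu)^{3}}}
```

Here ω_d and ω_s are the damper and structural circular frequencies, and ζ_opt is the optimal damper damping ratio.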
Objective: Accent is a significant challenge in speech recognition. To address the problem of multi-accent speech recognition and further explore the interaction between accent and speech recognition tasks, this paper proposes a multi-task learning method for accent and speech recognition based on spike features. The multi-task learning approach can simultaneously model both accent recognition and speech recognition, reducing system complexity and making the model more compact. In addition, the information shared between tasks can complement each other, improving the performance of each task. However, these methods also have several limitations. First, there should be more exploration and analysis of the accent information captured by different encoder layers and its effect on speech recognition performance. Second, the interaction between the two tasks has yet to be further explored, such as using speech recognition information to assist in accent recognition. Therefore, this paper conducts further research on the aforementioned issues. Methods: In this paper, a multi-task learning framework for accent and speech recognition is constructed within the ESPnet environment. By sharing parts of the encoder's underlying network, the accent information is used to implicitly enhance the acoustic features of a specific accent to improve the performance of accent and speech recognition. This paper analyzes the encoder's hidden layer features to further investigate the interaction between the two tasks. The results show that the accent features are mainly composed of blank frames, whereas the effective label frames, which better reflect accent differences, do not play a significant role. To address this problem, this paper proposes using connectionist temporal classification (CTC) spike features corresponding to valid labels as the input features for accent recognition within a multi-task learning framework to improve accent recognition performance. During forward propagation in model training, CTC pseudo-label alignment information is calculated, and the corresponding hidden layer encoding features are obtained through the indices of non-blank frames. These features are then subjected to statistical pooling, and the resulting features are used for accent recognition training. All the parameters are updated synchronously during the joint training process. Given that speech sequences are typically much longer than text sequences, experiments are conducted using Spike-Frame and Spike-Chunk features as the accent features. A Spike-Chunk extends each Spike-Frame index by a certain number of frames on both sides. Results: Experiments conducted on English Common Voice and AESRC2020 datasets demonstrated that the proposed method improved speech recognition performance by an absolute 0.6% and 1.0%, respectively, compared with the model trained with all the data mixed directly. In accent recognition, the performance based on CTC spike features improved by an absolute 0.7% and 1.9%, respectively. Conclusions: This paper proposes a model for both accent recognition and speech recognition within a multi-task learning framework. The experiments show that the accent information can enhance the hidden layer features of the encoder to some extent, thereby improving the speech recognition performance. Meanwhile, the spike features of valid labels in speech recognition also enhance the accent recognition performance to some degree. 
This study further explores the interaction between the accent and speech recognition tasks, improves the performance of both, and achieves the expected results.
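A hedged sketch of the spike-feature extraction described above: a greedy CTC pseudo-alignment selects the non-blank (spike) frames, encoder hidden states at those frames (optionally widened into chunks) are gathered, and mean-std statistical pooling yields the accent feature. The shapes, blank index, and fallback rule are assumptions; the study's exact pipeline inside ESPnet may differ.

```python
# Hedged sketch of the spike-feature pipeline: greedy CTC pseudo-alignment selects the
# non-blank (spike) frames, encoder states at those frames (optionally widened to chunks)
# are gathered, and mean-std pooling yields the accent feature.
import numpy as np

def spike_accent_feature(ctc_log_probs, encoder_states, blank_id=0, chunk=0):
    frame_labels = ctc_log_probs.argmax(axis=-1)             # greedy CTC alignment, shape (T,)
    spike_idx = np.flatnonzero(frame_labels != blank_id)     # non-blank "spike" frames
    if chunk > 0:                                            # Spike-Chunk: widen each spike
        widened = {j for i in spike_idx
                   for j in range(max(i - chunk, 0), min(i + chunk + 1, len(frame_labels)))}
        spike_idx = np.array(sorted(widened))
    if spike_idx.size == 0:                                  # degenerate utterance: use all frames
        spike_idx = np.arange(len(frame_labels))
    selected = encoder_states[spike_idx]                     # (N_spike, D)
    return np.concatenate([selected.mean(axis=0), selected.std(axis=0)])   # statistical pooling

T, D, V = 120, 256, 50
feature = spike_accent_feature(np.random.randn(T, V), np.random.randn(T, D), chunk=2)
print(feature.shape)   # (2*D,) pooled accent feature
```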
Objective: Adolescent depression has become an urgent global health issue, reflecting a considerable increase in mental health problems among young individuals. Traditional diagnostic methods for depression often rely on self-reported symptoms and subjective evaluations, which can lead to underdiagnosis or misdiagnosis, especially in teenagers who may conceal their symptoms due to stigma. This study aims to fill this gap by developing a multimodal physiological signal database designed specifically for adolescent depression. This database incorporates various physiological signals, including speech and heart rate data, to enhance the objectivity of depression diagnosis. The goal of this work is to provide a tool that improves diagnostic accuracy and offers insights into the autonomic nervous system (ANS) dysfunctions associated with depression, thus paving the way for effective therapeutic interventions. Methods: This study recruited 86 adolescents aged between 12 and 20 years who were native Mandarin speakers. Data collection focused on multiple physiological modalities, including speech audio recordings, electrocardiogram (ECG) signals, and blood pressure readings. The participants were asked to engage in emotion elicitation tasks based on cognitive psychology principles. These tasks were designed to trigger a range of emotional responses, from neutral to positive and negative states, and allowed for real-time collection of vocal and physiological data under varying emotional conditions. ECG signals were analyzed to assess heart rate variability (HRV), a key marker of ANS function. Statistical methods, along with machine learning algorithms, were employed to analyze the relationship between vocal characteristics (pitch and speech energy) and physiological markers (HRV) to uncover potential patterns that differentiate depressed individuals from healthy controls. Results: Analysis revealed significant differences between the depressed and control groups in speech patterns and physiological responses. Adolescents with depression showed reduced pitch variability and lower speech energy, which were indicative of emotional blunting, a common symptom of depression. These vocal changes were strongly correlated with anomalies in HRV, specifically a reduction in HRV, which signals impaired ANS function. The integration of multimodal data types (speech and physiological signals) not only confirmed the presence of ANS dysregulation in depressed adolescents but also provided a new framework for identifying vocal biomarkers as reliable indicators of depression severity. Additionally, the present study demonstrated that using multimodal data improved the overall precision of depression diagnosis because the combination of physiological and vocal features yielded better discriminatory power than the use of either modality alone. Conclusions: The creation of the multimodal physiological database presented herein represents an important step forward in the objective diagnosis of adolescent depression. By combining speech analysis with physiological markers, such as HRV, this study offers a comprehensive tool that can be used to diagnose depression with increased accuracy. This database not only provides a valuable resource for clinicians and researchers but also opens new avenues for personalized treatment approaches based on objective physiological data. 
Furthermore, this work highlights the critical role of the ANS in the pathology of depression and underscores the importance of integrating multimodal data in future psychiatric diagnostics. In conclusion, this database has the potential to revolutionize how adolescent depression is diagnosed and treated, providing a nuanced understanding of the neurophysiological mechanisms underlying this disorder.
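The HRV markers referred to above are conventionally computed from R-R interval series; the sketch below shows two standard ones, SDNN and RMSSD, on illustrative interval values (not participant data), and does not reproduce the study's analysis.

```python
# Standard HRV markers computed from an R-R interval series (ms): SDNN for overall
# variability and RMSSD for short-term variability. Interval values are illustrative.
import numpy as np

def hrv_metrics(rr_ms):
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)                               # standard deviation of R-R intervals
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))          # root mean square of successive differences
    return {"SDNN_ms": sdnn, "RMSSD_ms": rmssd, "mean_HR_bpm": 60000.0 / rr.mean()}

print(hrv_metrics([812, 790, 845, 830, 805, 798, 860, 825]))
```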
Objective: Multi-channel speech enhancement (MCSE) focuses on recovering clean speech signals by leveraging both spatial relationships and temporal spectral information captured by microphone arrays. MCSE plays a crucial role in applications such as teleconferencing, human-computer interaction, and distant speech recognition, where enhancing speech quality and intelligibility is critical. However, existing approaches often struggle in challenging acoustic environments with low signal-to-noise ratios (SNRs) and high reverberation. While these methods excel at noise reduction and distortion minimization, they often fail to preserve the spatial and acoustic structure of speech in adverse environments. Specifically, methods that rely on networks to implicitly learn joint spectral and spatial information tend to struggle with accurately predicting beamforming weights, which are crucial for noise suppression. This challenge frequently results in speech distortion, such as loss of harmonic structures and decreased speech intelligibility. Methods: To address these issues, this study proposes a two-stage MCSE framework that explicitly incorporates the harmonic structure of speech. The first stage integrates a speech harmonic information extraction module into a UNet-based beamforming network; this module is designed to capture and retain the harmonic structure, which is fundamental to human auditory perception and speech clarity. The second stage introduces a residual iterative correction module to refine the speech signal; this stage focuses on addressing finer acoustic structures that were not fully processed during the initial stage. By progressively reducing residual distortions, this approach improves speech quality, even in environments with low SNRs and significant reverberation, conditions where traditional methods typically fall short. Results: This study evaluated the proposed framework on a dataset derived from LibriSpeech, simulating various challenging acoustic environments with varying SNR levels and reverberation conditions. The results demonstrated significant improvements over existing MCSE techniques in noise suppression and speech distortion reduction. Specifically, in low-SNR and highly reverberant environments, the beamforming network with harmonic structure information preserved the spatial and spectral characteristics of the speech signal better than traditional methods. Compared with approaches relying solely on implicitly learned spectral-spatial information, the proposed model showed greater effectiveness in retaining speech intelligibility and clarity. The inclusion of harmonic structure information allowed the proposed framework to achieve better differentiation between speech and noise, producing more robust and reliable enhancement results under adverse conditions. The residual iterative correction stage further improved model performance by refining unprocessed acoustic structures, thus reducing residual distortions and enhancing spectral richness in the enhanced speech signal. Conclusions: The proposed two-stage MCSE framework successfully addresses the limitations of traditional MCSE methods by explicitly modeling and preserving the harmonic structure of speech. By integrating this information, the framework enhances spatial-spectral processing, enabling superior noise suppression and restoration of speech clarity, particularly in complex acoustic environments. 
The findings of this study highlight the significance of harmonic structures in human auditory perception and how their preservation contributes to improved intelligibility. The inclusion of the residual iterative correction stage in the proposed framework ensures that finer acoustic details are addressed, allowing the model to perform reliably in low-SNR and highly reverberant scenarios where conventional approaches often fail. This research underscores the importance of incorporating explicit acoustic structure information into MCSE systems, paving the way for more advanced speech enhancement models that can reliably address acoustic challenges in real-world settings. The results of this study can serve as a reference for multi-channel speech enhancement schemes in extreme acoustic scenarios.
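As a rough illustration of the two-stage idea described above, the sketch below outlines, under heavy simplification, a filter-and-sum beamforming network conditioned on a harmonic feature map followed by a residual iterative correction module; the layer sizes, the crude filter-and-sum step, and all module names are placeholders rather than the paper's architecture.

```python
# Minimal two-stage multi-channel enhancement sketch in PyTorch; the harmonic branch,
# layer sizes, and filter-and-sum step are simplified placeholders.
import torch
import torch.nn as nn

class Stage1Beamformer(nn.Module):
    """Predicts per-channel filter weights from the multi-channel spectrogram
    concatenated with a harmonic feature map, then performs filter-and-sum."""
    def __init__(self, n_mics: int, harmonic_ch: int = 1):
        super().__init__()
        in_ch = 2 * n_mics + harmonic_ch              # real/imag per mic + harmonic map
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2 * n_mics, 3, padding=1),  # filter weights per mic (re/im stacked)
        )
        self.n_mics = n_mics

    def forward(self, spec, harmonic_map):
        # spec: (B, 2*M, F, T) stacked real/imag; harmonic_map: (B, 1, F, T)
        w = self.net(torch.cat([spec, harmonic_map], dim=1))
        # crude filter-and-sum: weight each mic channel and sum over mics
        weighted = (w * spec).view(spec.size(0), self.n_mics, 2, *spec.shape[2:])
        return weighted.sum(dim=1)                    # (B, 2, F, T) single-channel estimate

class Stage2ResidualCorrection(nn.Module):
    """Iteratively refines the stage-1 estimate by predicting additive residuals."""
    def __init__(self, n_iters: int = 2):
        super().__init__()
        self.refiner = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1),
        )
        self.n_iters = n_iters

    def forward(self, est):
        for _ in range(self.n_iters):
            est = est + self.refiner(est)             # residual correction step
        return est

# Example forward pass with random tensors (4 mics, 257 frequency bins, 100 frames).
spec = torch.randn(1, 8, 257, 100)
harmonic_map = torch.randn(1, 1, 257, 100)
stage1, stage2 = Stage1Beamformer(n_mics=4), Stage2ResidualCorrection()
enhanced = stage2(stage1(spec, harmonic_map))
print(enhanced.shape)  # torch.Size([1, 2, 257, 100])
```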
Methods: To more clearly describe the specific movements of each joint, the local product-of-exponentials (POE) method was introduced. For ease of analysis, the structure of the robot's passive arms was simplified using screw theory. A kinematic model for the Delta robot was established using the local POE method, and the error model of the robot was obtained through the differential mapping of the product of exponentials. Based on the derived error model, error sources were subdivided into three major categories: structural errors, actuation angle errors, and spherical joint clearance errors. An in-depth analysis was conducted on how each error source affects the end-effector positioning accuracy of the robot when it moves along the X, Y, and Z directions. A Delta robot with active arm lengths of 400 mm and passive arm lengths of 950 mm was selected as the subject for simulation analysis in MATLAB. The square root of the sum of squared errors in the X, Y, and Z directions was used as a composite error and served as the evaluation criterion. Results: The simulation results showed that, assuming all error sources had a magnitude of 0.100 units (lengths in mm; angles in degrees), actuation angle errors had the most significant impact on the end-effector positioning accuracy of the Delta parallel robot, causing a composite error ranging from 1.500 to 2.000 mm. Spherical joint clearance errors caused a composite error of 0.340 mm. Structural errors exhibited a relatively stable composite error fluctuating around 0.100 mm, with a variation range of approximately 0.010 mm, and can therefore be treated as approximately constant. Comprehensive analysis indicated that length errors in the active and passive arms significantly influenced end-effector positioning accuracy, with the induced error fluctuations notably larger than those from other sources. Additionally, when the magnitudes of error sources were 0.025 mm, 0.050 mm, 0.075 mm, and 0.100 mm, their impacts on robot positioning accuracy increased proportionally. Conclusions: The Delta robot error analysis model based on screw theory and the local POE method offers a more intuitive and comprehensive approach to analyzing the impact of major error sources on positioning accuracy than traditional error modeling methods, and it effectively avoids issues of singularity and incompleteness. It provides a theoretical reference for the error modeling and analysis of other parallel mechanisms. Based on the assessment of each error source's influence presented in this paper, subsequent error compensation can apply more precise corrections to the highly influential actuation angle errors, thereby improving the efficiency and effectiveness of overall error compensation.
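The following numerical sketch illustrates, on a toy two-joint serial chain rather than the full Delta mechanism, how a product-of-exponentials forward kinematics map can be perturbed by a small actuation-angle error and how the resulting composite error is formed; the geometry values are loosely inspired by the stated arm lengths but are otherwise placeholders.

```python
# Illustrative sketch (not the paper's full Delta model): propagating a small
# actuation-angle error through a product-of-exponentials (POE) forward kinematics
# map and forming the composite end-effector position error.
import numpy as np
from scipy.linalg import expm

def hat(xi):
    """Map a 6-vector twist (v, w) to its 4x4 se(3) matrix."""
    v, w = xi[:3], xi[3:]
    W = np.array([[0, -w[2], w[1]],
                  [w[2], 0, -w[0]],
                  [-w[1], w[0], 0]])
    T = np.zeros((4, 4))
    T[:3, :3], T[:3, 3] = W, v
    return T

def poe_fk(twists, thetas, g0):
    """Forward kinematics as the product of exponentials exp(hat(xi_i) * theta_i) * g0."""
    g = np.eye(4)
    for xi, th in zip(twists, thetas):
        g = g @ expm(hat(xi) * th)
    return g @ g0

# Toy chain: two revolute joints about z, the second offset 400 mm along x
# (placeholder geometry loosely inspired by the 400 mm / 950 mm arms, in mm).
twists = [np.array([0, 0, 0, 0, 0, 1.0]),
          np.array([0, -400.0, 0, 0, 0, 1.0])]
g0 = np.eye(4); g0[:3, 3] = [400.0 + 950.0, 0.0, 0.0]

theta = np.deg2rad([30.0, -20.0])
p_nom = poe_fk(twists, theta, g0)[:3, 3]

# Perturb the first actuation angle by 0.100 deg and form the composite error.
theta_err = theta + np.deg2rad([0.100, 0.0])
p_err = poe_fk(twists, theta_err, g0)[:3, 3]
composite_error = np.sqrt(np.sum((p_err - p_nom) ** 2))
print(f"composite position error: {composite_error:.3f} mm")
```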
Objective: Implementing differentiated toll discounts for expressway trucks can lead to a more balanced traffic flow across the road network. Expressway operators aim to maximize profits through toll charges, whereas truck groups focus on minimizing travel costs in terms of economics and time. A balance exists between the benefits of both parties; however, determining differentiated toll discounts for expressways to reach this balance is difficult. Methods: (1) Based on consumer surplus theory, key factors affecting freight route selection are identified using a preference survey of travel behavior, and a bi-level programming model is proposed for determining differentiated toll discounts, incorporating assumptions and constraints. (2) The upper model, considering truck cost and travel time, is a surplus maximization model for expressway operators. It is solved using a hybrid algorithm that combines a genetic algorithm with simulated annealing. In the upper model, a lower limit on the financial revenue targets of highway operating enterprises is included in the constraints to prevent revenue from falling below this lower bound during the iteration process. (3) The lower model leverages a logit-based stochastic user equilibrium allocation model for multiple vehicle types under elastic demand, solved using the Frank-Wolfe algorithm. A generalized impedance function considering economic and time costs is established in the lower model to capture the impacts of road conditions on truck travel. Cost weighting coefficients are introduced, and calculation methods and recommended values are proposed to integrate economic and time costs. (4) Detailed execution steps are provided for the solution algorithms of the upper and lower models. Convergence criteria are also introduced to improve the iteration efficiency of the solution algorithm. A fitness function is proposed based on the financial lower-bound target, and the upper model is transformed into a minimum value problem, eliminating the constraint on discount rates. Results: The feasibility of the model is validated using expressway toll collection data and highway link traffic data for three example highway sections. A reasonable range of differentiated toll discounts can attract trucks back to the expressway and increase the daily average traffic volume for each vehicle type. After 43 iterations, the upper model achieves a stable function value. The toll discount rates for small trucks, medium trucks, heavy trucks, and extra-heavy trucks on the example expressway fall within the ranges of 78.68%-86.27%, 55.82%-65.82%, 47.90%-54.81%, and 47.52%-48.31%, respectively; consequently, the average truck flow on the expressway increases by 12.24%. Conclusions: The case study demonstrates that the bi-level programming model can accurately determine the toll discount range for trucks on expressways; however, even with a discount rate of 4.7% for oversized trucks on nearly 100 km of the actual expressway, attracting all oversized trucks back to the expressway remains challenging. Fuel and toll fees significantly impact route selection within the generalized impedance function; moreover, the same toll discount produces notable differences in implementation effects across truck types. The research provides support for developing differentiated toll policies for expressways, as well as their subsequent optimization and adjustment.
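To illustrate the lower-level building block described above, the sketch below shows a generalized impedance that weights economic and time costs and a logit split of truck demand between an expressway route and a parallel highway route; all cost values, weighting coefficients, and the dispersion parameter are placeholder assumptions, not the calibrated values of the study.

```python
# Illustrative sketch of a generalized impedance function and logit route choice for
# one truck class; parameter values are placeholders, not the study's calibrated values.
import numpy as np

def generalized_impedance(toll, fuel_cost, travel_time_h, value_of_time,
                          w_econ=0.6, w_time=0.4):
    """Weighted sum of economic cost (toll + fuel) and time cost (hours * value of time)."""
    return w_econ * (toll + fuel_cost) + w_time * (travel_time_h * value_of_time)

def logit_split(impedances, theta=0.05):
    """Logit route-choice probabilities; theta is the dispersion parameter."""
    u = -theta * np.asarray(impedances, dtype=float)
    e = np.exp(u - u.max())
    return e / e.sum()

# Example: medium trucks choosing between the expressway (tolled, faster) and the
# parallel highway (toll-free, slower) under a hypothetical 60% toll discount rate.
base_toll, discount_rate = 200.0, 0.60              # placeholder monetary values
routes = {
    "expressway": generalized_impedance(base_toll * discount_rate, fuel_cost=150.0,
                                        travel_time_h=1.2, value_of_time=80.0),
    "highway":    generalized_impedance(0.0, fuel_cost=180.0,
                                        travel_time_h=2.0, value_of_time=80.0),
}
probs = logit_split(list(routes.values()))
for name, p in zip(routes, probs):
    print(f"{name}: impedance={routes[name]:.1f}, share={p:.2%}")
```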
Objective: Segment dislocation is prevalent in shield tunnel lining and, in severe cases, affects the structural safety and service performance of tunnels. Manual measurement of segment dislocation is subjective and time-consuming, while automated methods, such as 3D laser scanners, are not only expensive but also susceptible to environmental conditions. In recent years, advancements in camera technology and deep learning algorithms have significantly propelled the development of computer vision applications in civil engineering monitoring and inspection. Binocular vision technology has been utilized in crack detection and engineering quality evaluation due to its cost-effectiveness and reliable accuracy. This study introduces binocular vision technology to enable accurate measurement of segment dislocation. Methods: The internal and external parameters of consumer-grade binocular cameras are initially obtained through a calibration experiment, followed by capturing binocular images of the segment joints. The disparity information between the left and right images is then calculated using a deep learning stereo matching algorithm. Finally, a camera attitude correction method is proposed to calculate the segment dislocation accurately. The primary challenge in camera attitude correction is determining key points on the segment surface, which lacks evident texture features. Therefore, a series of image processing techniques, such as graying, binarization, dilation, erosion, and target localization, are initially employed on the left image to automatically identify the location of the segment joint. Then, the parallel thinning algorithm is applied to extract the skeleton of the segment joint (reducing the segment joint width from multiple pixels to a single pixel). The pixel coordinate data along the skeleton are extracted, and the least squares method is used to fit a straight line to the skeleton. Furthermore, the straight lines are shifted and flipped to form latitude and longitude lines, allowing the pixel and camera coordinates of the crossing point (key point) to be calculated in accordance with the triangulation principle. Finally, the segment coordinate system is established using these key points, and the actual segment dislocation is computed. Results: Field tests on the construction site of a shield tunnel were carried out to evaluate the effectiveness of the proposed approach. The results are as follows: (1) After camera attitude correction, the calculation results from vertical shooting with the binocular camera showed good consistency with the segment dislocation measurements obtained via weld seam gauging. Specifically, the absolute error did not exceed 1.0 mm, and the overall processing time for the segment image was approximately 10.00 s. (2) In multiple rounds of tests with left-leaning or right-leaning shooting scenarios, when the attitude angles of the camera coordinate system relative to the segment coordinate system around its Xc and Yc axes were within the ranges of (180.00±15.00)° and ±20.00°, respectively, the proposed method effectively corrected the calculation results with strong robustness. Conclusions: By combining deep learning algorithms with traditional image processing techniques, binocular vision technology is used to achieve rapid and accurate measurement of segment dislocation. This approach provides a reference for tunnel engineering inspection.
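The sketch below illustrates, in simplified form, the image-processing chain described above (graying, binarization, dilation and erosion, skeleton thinning, and a least-squares line fit) together with the basic disparity-to-depth relation used in triangulation; the thresholds, kernel sizes, file name, and camera parameters are placeholder assumptions rather than the calibrated values from the field tests.

```python
# Illustrative sketch of the segment-joint processing steps; thresholds, kernel sizes,
# and camera parameters are placeholders, not the study's calibrated values.
import cv2
import numpy as np
from skimage.morphology import skeletonize

def joint_line_from_left_image(bgr):
    """Locate the segment joint in the left image and fit a line to its skeleton."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    kernel = np.ones((5, 5), np.uint8)
    cleaned = cv2.erode(cv2.dilate(binary, kernel), kernel)   # dilation then erosion
    skeleton = skeletonize(cleaned > 0)                        # joint width -> single pixel
    ys, xs = np.nonzero(skeleton)
    slope, intercept = np.polyfit(xs, ys, 1)                   # least-squares line fit
    return slope, intercept

def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Triangulation for a rectified binocular pair: depth Z = f * B / d."""
    return focal_px * baseline_mm / disparity_px

# Placeholder usage (real inputs: a calibrated left image and a dense disparity map
# produced by the deep learning stereo matching stage).
left = cv2.imread("left_segment_joint.png")          # hypothetical file name
if left is not None:
    print(joint_line_from_left_image(left))
print(depth_from_disparity(disparity_px=48.0, focal_px=1400.0, baseline_mm=120.0))
```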
Objective: Non-flame-retardant pure acrylonitrile-butadiene-styrene (ABS), a material often used for passenger luggage, is highly flammable and easily ignited by open flames, posing risks to aviation operations. Therefore, in-depth research on the pyrolytic combustion characteristics of ABS at high temperatures and high radiation intensities is crucial for the safe operation of aircraft. Methods: This study evaluated the thermal stability and combustion characteristics of ABS under different heating rates and radiation intensity conditions using thermogravimetric analysis and cone calorimeter systems. This study also analyzed the variations in the characteristic parameters of ABS. Results: The results show that the pyrolysis process of ABS can be divided into an initial volatilization stage, a rapid decomposition stage, a residual combustion stage, and a pyrolysis termination stage. In the rapid decomposition stage, when ABS reaches temperatures of approximately 310 ℃ to 343 ℃, the main polymer chains of ABS undergo cleavage, breaking down into different components, such as acrylonitrile and polyethylene monomers. When heated, the main chain of ABS ruptures; the molecular structure of ABS contains components such as styrene and butadiene that are prone to decomposition and cross-linking reactions upon heating, driving the pyrolysis process. An increase in heating rate significantly shortens the pyrolysis time and enhances the maximum thermal decomposition rate. As the radiation intensity increases, the combustion process of ABS accelerates, with the heat release rate rising and the peak heat release rate increasing by 53%. The combustion and ignition times decrease by 32% and 78%, respectively, because the increase in material temperature and the intensified heat conduction and convection lead to a higher heat release rate. Under low radiation intensities, ABS cannot rapidly absorb energy to reach combustion conditions. However, as the radiation intensity increases, ABS can rapidly absorb sufficient energy for faster decomposition, thus shortening the combustion time. The generation of carbon monoxide (CO) and carbon dioxide (CO2) begins earlier, and the maximum generation amounts of CO2 and CO increase by 49% and 74%, respectively. Oxygen consumption increases and the consumption rate accelerates because thermal radiation intensifies molecular motion, leading to a faster reaction with oxygen in the air. Mass loss begins earlier, the remaining sample mass decreases, and the maximum mass loss rate increases by 53.8%. Based on the thermal penetration model, 2 mm thick ABS is classified as a thermally thin material, and this classification is verified. Based on the ignition time model, a critical radiative heat flux formula is established, and the critical radiative heat flux is calculated to be 16.255 kW/m2. Finally, according to the fire performance indicators, as the radiation intensity increases, the material combustion rate increases and more heat is released, leading to faster fire growth and development and thereby greater fire risk. The fire risk of ABS is positively correlated with the radiation intensity. Conclusions: This study concludes that ABS exhibits a high fire risk. This research provides crucial data and practical references on the fire risks associated with ABS material for safe aviation operations.
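As an illustration of the thermally thin ignition-time analysis mentioned above, the sketch below estimates a critical radiative heat flux as the x-intercept of a linear fit of 1/t_ig against incident heat flux, which is one common way such a formula is applied for thermally thin materials; the (heat flux, ignition time) pairs are hypothetical example values, not the cone calorimeter data of this study.

```python
# Illustrative sketch: for a thermally thin solid, 1/t_ig varies roughly linearly with
# incident heat flux, so the critical radiative heat flux can be estimated as the
# x-intercept of a linear fit of 1/t_ig versus heat flux. Data below are hypothetical.
import numpy as np

heat_flux = np.array([25.0, 35.0, 50.0, 75.0])      # kW/m2, hypothetical exposures
t_ig = np.array([180.0, 95.0, 52.0, 28.0])          # s, hypothetical ignition times

slope, intercept = np.polyfit(heat_flux, 1.0 / t_ig, 1)   # 1/t_ig = slope*q + intercept
q_critical = -intercept / slope                            # x-intercept of the fit
print(f"estimated critical radiative heat flux: {q_critical:.2f} kW/m2")
```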