
Top access

  • Public Safety
    Jiamei ZHOU, Wei LÜ, Jinghui WANG
    Journal of Tsinghua University(Science and Technology). 2025, 65(6): 1050-1059. https://doi.org/10.16511/j.cnki.qhdxxb.2025.22.015

    Objective: With increased globalization, multiple countries are involved in supply chains, forming complex supply networks. Frequent natural disasters, geopolitical instability, and global health crises pose unprecedented challenges to traditional supply chain management methods. Local disruptions in the supply chain can spread internally, causing a series of chain reactions. Enhancing supply chain risk resilience and robustness has therefore become a research focus for many scholars. The widespread use of the Internet has led to rapid information exchange between enterprises, and an increasing number of scholars have recognized the importance of early warning information in preventing supply chain disruptions. Therefore, understanding how information affects the propagation of risks within the supply chain and maximizing the early warning function of information have significant practical implications. Moreover, the heterogeneity in how enterprises respond to early warning information also requires attention. Methods: To capture the propagation of early warning information and disruption risks, a two-layer propagation model that couples risk and information is constructed. In this model, the upper layer represents the information layer and the lower layer represents the risk layer. Information about a disruption at a lower-layer enterprise is transmitted to upstream and downstream enterprises with a certain probability. After receiving the early warning information, an enterprise becomes an aware node, and this transition is reflected in the upper-layer network. In this model, the nodes in the network can be in one of five states. A microscopic Markov chain (MMC) method is used to analyze the state transitions between nodes and to calculate the risk propagation threshold of the system. Furthermore, the key factors influencing the propagation of disruption risk are analyzed.
An agent-based approach is used for case simulation to validate the model's effectiveness. Numerical analysis of the model reveals that the network structure, network size, extent of risk information propagation in the information layer, and the probability of disruption risk propagation are the key factors influencing the propagation of the risk. Financial data from Tesla's supply chain in China are also collected. In case simulation, an agent-based method is used to study the effects of the information layer network structure, information propagation rate, and risk propagation rate on the supply chain resilience. Results: The results show that for a low information propagation rate, the scale-free network structure accelerates information dissemination, allowing more enterprises to quickly obtain early warning information, thereby helping the supply chain resist risks and improve resilience. When the information propagation rate exceeds 0.4, the small-world network structures can propagate risks more efficiently because of their shorter average paths. Additionally, three disruption schemes are used to analyze system resilience, revealing that prioritizing the disruption of nodes with higher degrees has the greatest impact on the network, while deliberately attacking nodes with smaller degrees allows the supply chain to maintain higher operational efficiency. This finding suggests that maintaining the robustness of the key nodes in the supply chain is critical for enhancing the overall network resilience. Conclusions: Adjusting the supply chain network structure can help improve the risk resilience and robustness of the system. Enhancing risk awareness of enterprises and their response strategies can effectively improve supply chain resilience and suppress risk diffusion. Deliberate attacks on hub nodes with high degrees cause the greatest damage to the network system. 
Thus, this study provides theoretical support for supply chain management and can serve as a basis for decision-making to improve supply chain risk resilience and optimize management strategies.
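    As a rough illustration of how such a coupled two-layer model iterates, the sketch below implements a minimal MMC with only aware/unaware and operational/disrupted states rather than the paper's five-state model; the networks, rates, and parameter names (lam, delta, beta, gamma, mu) are all illustrative assumptions, not values from the study.

```python
import numpy as np

def mmc_step(a, r, A_info, A_risk, lam, delta, beta, gamma, mu):
    """One microscopic-Markov-chain step of a minimal two-layer model.

    a[i]: P(enterprise i holds the early warning information)
    r[i]: P(enterprise i is disrupted)
    lam  : information transmission probability per informed neighbour
    delta: probability of discarding / forgetting the warning
    beta : risk transmission probability per disrupted neighbour
    gamma: attenuation of beta for aware enterprises (0 < gamma < 1)
    mu   : recovery probability of a disrupted enterprise
    """
    # Warnings spread on the information layer from aware OR disrupted firms.
    src = a + r - a * r
    q_a = np.prod(1.0 - lam * A_info * src[None, :], axis=1)  # P(no warning reaches i)
    # Risk spreads on the supply layer; awareness attenuates beta by gamma.
    q_u = np.prod(1.0 - beta * A_risk * r[None, :], axis=1)          # unaware
    q_w = np.prod(1.0 - gamma * beta * A_risk * r[None, :], axis=1)  # aware
    a_next = (1.0 - a) * (1.0 - q_a) + a * (1.0 - delta)
    r_next = r * (1.0 - mu) + (1.0 - r) * (a * (1.0 - q_w) + (1.0 - a) * (1.0 - q_u))
    return a_next, r_next

rng = np.random.default_rng(0)
n = 50
A_info = (rng.random((n, n)) < 0.10).astype(float)  # information layer
A_risk = (rng.random((n, n)) < 0.06).astype(float)  # supply (risk) layer
for A in (A_info, A_risk):
    np.fill_diagonal(A, 0.0)

a, r = np.zeros(n), np.zeros(n)
r[0] = 1.0  # one initially disrupted enterprise
for _ in range(300):
    a, r = mmc_step(a, r, A_info, A_risk,
                    lam=0.3, delta=0.1, beta=0.2, gamma=0.3, mu=0.25)
print(round(float(r.mean()), 3), round(float(a.mean()), 3))
```

    Iterating the step to a fixed point gives the stationary disrupted fraction; sweeping beta while watching whether that fraction stays at zero is one way to locate a propagation threshold numerically.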

  • Advanced Ocean Energy Technology
    Libing ZOU, Mingjun ZHOU, Chao WANG, Xiangyuan ZHENG, Zouduan SU, Junwei LI
    Journal of Tsinghua University(Science and Technology). 2025, 65(8): 1377-1386. https://doi.org/10.16511/j.cnki.qhdxxb.2025.27.039

    Significance: Floating wind turbines (FWTs), as a revolutionary breakthrough in offshore renewable energy technology, are redefining the boundaries of ocean energy development through innovative technological solutions. Through the combined innovation of floating foundations and dynamic mooring systems, this technology has broken through the water-depth limitations of traditional fixed wind turbines, extending wind power development to deep-sea areas rich in high-wind-speed resources. Compared with offshore fixed wind turbines, FWTs not only significantly reduce marine ecological disturbances but also, through the potential for large-scale cluster deployment, provide a dual solution for the global energy transition that combines environmental friendliness and production efficiency. This article systematically reviews the current development status of floating wind power technology and analyzes in depth the core issues constraining its commercialization, including key technical challenges such as dynamic response control, mooring system durability, and life cycle cost optimization. Of particular note is the milestone breakthrough achieved in China: the "Mingyang-Tiancheng" floating platform, the world's largest single-unit-capacity floating wind turbine system, has opened up a new paradigm for the development of deep and distant offshore wind power and provides an important technical reference for the global iteration of floating wind power technology. Progress: Globally, floating wind power projects represented by Hywind (Spar) and WindFloat (semi-submersible) have completed the transition from experimental prototypes to small-scale commercial applications, and their technological level and industrial chain layout are in a world-leading position.
In contrast, China's floating wind power is still in the demonstration and verification stage, represented by the "Yinlinghao" (5.5 MW, 2021) and "Guanlanhao" (7.25 MW, 2023) units. Although key technological breakthroughs have been achieved, the maturity of the technology and the construction of supporting industrial chains still need improvement. The sector currently faces three development bottlenecks. Economically, floating wind power technology is not yet mature, research and application costs are high, and grid parity remains distant. Environmentally, the special working conditions of typhoon-prone areas demand greater adaptability from the units. In terms of industrial synergy, a cluster effect covering design, manufacturing, and operation and maintenance has not yet formed. Therefore, it is urgent to promote technological innovation to drive the development of related industrial chains, gradually reduce development costs, and achieve large-scale commercial applications. At the same time, it is necessary to promote the coordinated upgrading of offshore wind power equipment manufacturing and the marine engineering industry, build a full life cycle cost control system, and lay a technical and economic foundation for large-scale commercial applications. Conclusions and Prospects: To address these challenges, the "Mingyang-Tiancheng" floating wind power platform has made innovative breakthroughs in areas such as prestressed high-strength concrete technology, composite lightweight buoy design and construction, intelligent perception collaborative control, single-point mooring, dual wind turbine configuration, and typhoon resistance. The platform combines materials science breakthroughs, intelligent control systems, and ecological design principles, reflecting China's emerging leadership in floating wind technology.
Future progress will require sustained interdisciplinary collaboration and accelerated global deployment through industrialization to reduce costs. The "Mingyang-Tiancheng" provides valuable practical experience and technical reference for the future development of floating wind power.

  • Process Systems Engineering
    Dong QIU, Qiming ZHAO, Yijiong HU, Tong QIU
    Journal of Tsinghua University(Science and Technology). 2025, 65(5): 813-824. https://doi.org/10.16511/j.cnki.qhdxxb.2024.21.034

    Objective: In the petrochemical industry, molecular reconstruction is crucial for understanding and optimizing the compositions of complex crude oil and petroleum products. As the first step in process simulation, quality control, and economic evaluation, molecular reconstruction typically employs mathematical models to calculate molecular compositions of petroleum products that align with their macroscopic properties. Traditional molecular reconstruction methods employ the gamma distribution to represent the carbon number distributions of homologs, but the coupling between the shape (α) and scale (β) parameters poses notable challenges to interpretability and optimization efficiency. This study addresses these challenges by introducing a novel shape-decoupled parameter method that enhances the model's interpretability and simplifies the optimization process. Methods: The proposed shape-decoupled parameter method modifies the traditional gamma distribution by replacing the shape and scale parameters with two new independent variables: peak position (m) and variance (σ2). Notably, m provides direct control over the peak of the distribution, whereas σ2 independently determines its spread or width, effectively reducing the parameter coupling present in conventional gamma distribution models. To enhance stability and convergence speed during optimization, a multivariate linear regression (MLR) model was employed to estimate the initial parameter values. This regression model was trained on historical data of molecular compositions to provide reasonable initial values and decrease the probability of being trapped in local minima.
The molecule-type homologous series (MTHS) matrix is used to represent the molecular composition of hydrocarbons, namely paraffins, isoparaffins, olefins, naphthenes, and aromatics (PIONA), with a comprehensive depiction of their multiple homologs. Moreover, an optimization problem was developed to minimize the prediction errors of the macroscopic properties, including molecular weight, density, PIONA group composition, and true boiling point curves. Upon a comparative analysis of multiple deterministic and heuristic optimization techniques, the differential evolution (DE) algorithm was determined as a favorable optimization tool by virtue of its superior accuracy and robustness. Results: Experimental evaluations showed that the shape-decoupled parameter method outperformed traditional methods in accuracy and optimization efficiency. Specifically, the density error decreased from 0.012 to 0.0059 g/cm3, and the average percentage relative error for the PIONA group composition also exhibits notable reductions. Moreover, the decoupled approach achieves faster convergence, requiring fewer iterations—reducing from 1 000 to as few as 20—without compromising accuracy. This reduction highlights the computational efficiency of the proposed method, which is a notable advantage in industrial applications with limited computational resources and time. Moreover, the proposed method exhibits enhanced robustness in addressing extreme molecular composition distributions, maintaining low errors in peak position and molecular composition predictions. This robustness becomes particularly evident when managing scenarios considered challenging by conventional methods, such as distributions with narrow ranges or hydrocarbons with approximately zero components at the boundary. Furthermore, the decoupled method provides better interpretability via independent control strategies for peak position and distribution width. 
The overall optimization performance was enhanced by the appropriate integration of the DE algorithm and effective initial parameter estimation by the MLR model. Conclusions: Compared with traditional methods, the proposed shape-decoupled parameter method provides a more interpretable, efficient, and accurate approach to the molecular reconstruction of petroleum products. By reducing the coupling effect between the parameters controlling the peak position and distribution width, this method simplifies the optimization process and achieves superior prediction accuracy and faster convergence. The results indicate the feasibility of its application for complex or extreme homolog distributions of hydrocarbons, revealing its higher reliability and robustness compared with traditional approaches. Future work is expected to focus on incorporating advanced machine learning techniques to further increase the accuracy and applicability of the model across a wider range of petroleum compositions, potentially enabling real-time molecular reconstruction for dynamic process optimization.
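    The abstract does not spell out the transformation between (m, σ2) and (α, β). Assuming the peak position m denotes the mode of the gamma density (which exists for α > 1), the original parameters can be recovered from mode = (α − 1)β and variance = αβ2, as in this sketch; the numeric values are illustrative only.

```python
import math

def gamma_params_from_peak_variance(m, var):
    """Recover gamma shape/scale (alpha, beta) from peak position m (assumed
    to be the mode, requiring alpha > 1) and variance var, using
    m = (alpha - 1) * beta and var = alpha * beta**2.  Eliminating alpha
    gives beta**2 + m*beta - var = 0; the positive root is taken."""
    beta = (-m + math.sqrt(m * m + 4.0 * var)) / 2.0
    alpha = m / beta + 1.0
    return alpha, beta

alpha, beta = gamma_params_from_peak_variance(m=10.0, var=4.0)
# round-trip check: the mode and variance are reproduced
print(round((alpha - 1.0) * beta, 6), round(alpha * beta * beta, 6))  # → 10.0 4.0
```

    Because m and var enter independently, an optimizer can move the peak without disturbing the width, which is the decoupling property the method exploits.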

  • Intelligent Construction
    Peng LIN, Jianqi YIN, Yunfei XIANG, Chaoyi LI, Yong XIA, Houlei XU, Hua MAO
    Journal of Tsinghua University(Science and Technology). 2025, 65(7): 1173-1184. https://doi.org/10.16511/j.cnki.qhdxxb.2025.26.022

    Objective: High-altitude hydropower projects present significant challenges owing to harsh environmental conditions, project clustering, limited data availability, and high construction risks. Accurate carbon emission calculations are crucial in such environments to mitigate environmental impacts and promote sustainable development. This study targets the full lifecycle of carbon emissions during intelligent construction in high-altitude hydropower projects. Methods: This study establishes a comprehensive framework for calculating lifecycle carbon emissions tailored to the unique challenges of high-altitude hydropower construction. The methodology covers three primary stages: data collection, model formulation, and real-world implementation. Lifecycle boundaries and emission factors are established for material production, transportation, construction, and operational maintenance. Key emissions are identified based on quality, energy consumption, and cost criteria to build a detailed carbon inventory. To address altitude effects, an adjustment coefficient is derived by correlating field-monitored data with baseline values, accounting for altitude impacts on emission intensities. The carbon emission model incorporates a discrete event simulation (DES) to capture the dynamic characteristics of construction and equipment operations. This model couples static and dynamic elements, applying static calculations to stable phases such as material production and maintenance while using dynamic simulations for variable stages such as transportation and active construction. This DES approach simulates the sequential and interdependent nature of equipment operations, providing an accurate reflection of emission behavior over time. Furthermore, a network of onsite carbon monitoring devices was implemented across different construction sites in a case project, and real-time CO2 concentration data were collected. 
These data calibrate and validate emission factors within the model, ensuring accurate altitude-adjusted emission assessments. Results: The model was applied to the JX hydropower project in a high-altitude region with distinct climatic and geographical challenges. The findings indicated that material production and construction machinery were the largest carbon emitters, accounting for 65.7% and 27.4% of total emissions, respectively. Cement manufacturing was identified as the dominant emission source, emphasizing the need for greener materials and cement production. The DES model revealed that equipment states, such as idling and operation, significantly influence emission intensities, especially under reduced oxygen at high altitudes. By integrating the DES results with real-time monitoring, the model supports precise, responsive emission control strategies. The proposed mitigation measures included adopting cleaner fuels, optimizing equipment idle time, and enhancing operational efficiency through scheduled maintenance. The model reliability was demonstrated by the close alignment of the simulated results with actual onsite measurements. Conclusions: The developed model offers a structured approach to calculating lifecycle carbon emissions for intelligent hydropower construction in high-altitude regions. By addressing the unique characteristics of such projects, including altitude-induced effects on emission intensities and equipment behavior, the model serves as a reference for emission reduction in future high-altitude hydropower projects. This study advances the understanding and management of emissions in high-altitude construction, underscoring the potential of intelligent construction methods to drive sustainable hydropower development.
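    A minimal discrete event simulation in the spirit described above might track equipment state changes and integrate state-dependent emission rates scaled by an altitude adjustment coefficient. Every rate, event, and coefficient below is a hypothetical placeholder, not a value from the study.

```python
import heapq

# Hypothetical emission rates (kg CO2 per hour) by equipment state, and an
# assumed altitude adjustment coefficient (> 1 at high altitude, reflecting
# higher fuel burn per unit of work in thin air).
RATES = {"idle": 12.0, "operating": 55.0}
ALTITUDE_COEF = 1.18

def simulate(events, horizon):
    """events: list of (time_h, machine_id, new_state) tuples.
    All machines start 'idle' at t = 0.  Returns total emissions (kg CO2)
    accumulated up to `horizon` hours."""
    heapq.heapify(events)          # process state changes in time order
    state, last_t = {}, {}
    total = 0.0
    while events:
        t, mid, new_state = heapq.heappop(events)
        t = min(t, horizon)
        prev = state.get(mid, "idle")
        total += RATES[prev] * (t - last_t.get(mid, 0.0)) * ALTITUDE_COEF
        state[mid], last_t[mid] = new_state, t
    for mid, s in state.items():   # close out the final interval per machine
        total += RATES[s] * (horizon - last_t[mid]) * ALTITUDE_COEF
    return total

ev = [(1.0, "excavator", "operating"), (5.0, "excavator", "idle"),
      (2.0, "truck", "operating"), (6.0, "truck", "idle")]
print(round(simulate(ev, horizon=8.0), 1))  # → 632.5
```

    In a full model the event list would be generated by the construction schedule, and the altitude coefficient calibrated against onsite CO2 monitoring, as the abstract describes.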

  • Hydraulic Engineering
    Jiahong LIU, Mengxue ZHANG, Jia WANG, Chao MEI
    Journal of Tsinghua University(Science and Technology). 2025, 65(10): 1853-1867. https://doi.org/10.16511/j.cnki.qhdxxb.2025.21.038

    Objective: The frequency and intensity of global flood events are increasing, driven largely by climate change and human activities. The key risk factors of hazard, exposure, and vulnerability are interconnected and collectively influence the occurrence and progression of flood disasters. Therefore, the relationships between these three factors need to be understood, and a comprehensive indicator system for the integrated assessment of flood risk needs to be developed. This study aims to analyze the spatiotemporal trends of global flood events from 1965 to 2023. Moreover, based on the key risk factors of hazard, exposure, and vulnerability, it reveals the spatiotemporal characteristics of flood risk, providing a scientific basis for flood prevention and disaster mitigation decision-making. Methods: This study uses the Emergency Events Database, which includes global flood data, to conduct a trend analysis of global flood occurrence, affected population, and mortality per unit area from 1965 to 2023. Based on the affected population and mortality per unit area, floods on six continents are classified into light, moderate, and severe categories using the percentage method. Spatial analysis of the flood occurrence, affected population, and mortality per unit area is performed for each country, showing the spatial distribution and impact intensity of flood disasters in different regions. In addition, key risk indicators, such as geographic elevation, precipitation, population density, and urbanization rate, are selected to analyze the characteristics of flood risk. Elevation and precipitation represent hazard, population density indicates exposure, and urbanization rate reflects vulnerability. Trend analysis of these indicators is performed for three distinct periods, i.e., 1965-1984, 1985-2004, and 2005-2023.
To examine the spatial trends of these indicators across countries over the entire study period, the Theil-Sen slope estimation method was employed. The entropy weight method was applied to calculate the weight of each risk indicator, and the flood risk values of six continents from 1965 to 2023 were calculated. Results: The main results are as follows: (1) From 1965 to 2023, global flood events show a fluctuating upward trend, although the affected population and number of deaths have shown a downward trend since the 1990s. At the continental level, floods occur most frequently in Asia, Africa, and South America, with a total of 2 322, 1 266, and 1 084 events, respectively. At the national level, Haiti experiences the highest frequency of flood events per unit area, with 23 events per 10^4 km2. Bangladesh has the highest total number of flood-affected people per unit area, with 27.1 million people per 10^4 km2, and the highest record of cumulative deaths, with 3 313 deaths per 10^4 km2. (2) Flood hazard, exposure, and vulnerability vary significantly across six continents. Among the indicators, population density and precipitation show the greatest influence on flood risk, with weights of 0.33 and 0.30, respectively. From 1965 to 2023, an obvious regional variation in flood risk across six continents is detected. The flood risk in Asia is significantly higher than that in other continents, with the flood risk values of both Asia and Africa showing a significant increase. By contrast, the flood risk value of South America decreased after 2010. Europe and North America show relatively low and stable flood risk values. Oceania exhibits the lowest flood risk values with significant fluctuations. Conclusions: This study conducts not only a systematic analysis of global flood events over a long time series but also an analysis of the changes in risk indicators, such as precipitation, geographic elevation, population density, and urbanization rate, from 1965 to 2023.
Moreover, the relative impact of different indicators is quantified, which clarifies their respective contributions to flood risk. The results further revealed the comprehensively changing characteristics of flood risk. The findings provide guidance and evidence to inform flood prevention planning and disaster response strategies. In the future, exposure change based on population mobility and integrated adaptive capacity should be considered to reveal the dynamic characteristics of flood risk.
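    The entropy weight method used above admits a compact implementation: each indicator's weight grows with how unevenly its values are spread across samples. The indicator matrix below is hypothetical, so the resulting weights are illustrative rather than the study's reported values (0.33 and 0.30).

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method.  X is an (n_samples, n_indicators) matrix of
    non-negative indicator values.  Returns one weight per indicator,
    summing to 1; indicators that are uniform across samples get weight 0."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    P = X / X.sum(axis=0, keepdims=True)      # column-wise proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plogp.sum(axis=0) / np.log(n)        # normalized entropy per indicator
    d = 1.0 - e                               # degree of divergence
    return d / d.sum()

# Hypothetical indicator matrix: rows = regions, columns =
# (precipitation, elevation, population density, urbanization rate)
X = [[900, 200, 5200, 0.81],
     [400, 1500, 300, 0.42],
     [1200, 50, 8000, 0.93],
     [650, 800, 1200, 0.55]]
w = entropy_weights(X)
print(np.round(w, 3))
```

    The weighted sum of normalized indicators per continent and year then yields the flood risk values whose trends the study reports.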

  • Safety Science
    Siyuan MU, Quanyi LIU, Ruxuan YANG, Yi LIU, Rui YANG
    Journal of Tsinghua University(Science and Technology). 2025, 65(7): 1368-1376. https://doi.org/10.16511/j.cnki.qhdxxb.2025.22.001

    Objective: Non-flame-retardant pure acrylonitrile-butadiene-styrene (ABS), a material often used for passenger luggage, is highly flammable and easily ignited by open flames, posing risks to aviation operations. Therefore, in-depth research on the pyrolytic combustion characteristics of ABS at high temperatures and high radiation intensities is crucial for the safe operation of aircraft. Methods: This study evaluated the thermal stability and combustion characteristics of ABS under different heating rates and radiation intensities using thermogravimetric analysis and cone calorimeter systems, and analyzed the variations in the characteristic parameters of ABS. Results: The results show that the pyrolysis process of ABS can be divided into an initial volatilization stage, a rapid decomposition stage, a residual combustion stage, and a pyrolysis termination stage. In the rapid decomposition stage, at approximately 310 ℃ to 343 ℃, the main polymer chains of ABS undergo cleavage, breaking down into components such as acrylonitrile and polyethylene monomers. The molecular structure of ABS also contains components such as styrene and butadiene, which are prone to decomposition and cross-linking reactions upon heating, sustaining the pyrolysis process. An increase in heating rate significantly shortens the pyrolysis time and enhances the maximum thermal decomposition rate. As the radiation intensity increases, the combustion process of ABS accelerates: the heat release rate increases, with the peak heat release rate rising by 53%, and the combustion and ignition times decrease by 32% and 78%, respectively, because higher material temperatures and intensified heat conduction and convection raise the heat release rate. Under low radiation intensities, ABS cannot rapidly absorb enough energy to reach combustion conditions; as the radiation intensity increases, ABS rapidly absorbs sufficient energy to decompose faster, shortening the combustion time. Carbon monoxide (CO) and carbon dioxide (CO2) are generated earlier, and the maximum generation amounts of CO2 and CO increase by 49% and 74%, respectively. Oxygen consumption increases and its rate accelerates because thermal radiation intensifies molecular motion, speeding the reaction with oxygen in the air. Mass loss begins earlier, the remaining sample mass decreases, and the maximum mass loss rate increases by 53.8%. Based on the thermal penetration model, the 2 mm thick ABS material is classified as thermally thin, and this classification is verified. Based on the ignition time model, a critical radiative heat flux formula is established, and the critical radiative heat flux is calculated to be 16.255 kW/m2. Finally, according to the fire performance indicators, increasing radiation intensity raises the combustion rate and heat release, accelerating fire growth and development and thereby increasing fire risk. The fire risk of ABS is positively correlated with the radiation intensity. Conclusions: This study concludes that ABS exhibits a high fire risk. This research provides crucial data and practical references on the fire risks associated with ABS material for safe aviation operations.
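    One common way to estimate a critical radiative heat flux from ignition time measurements on a thermally thin solid is to exploit the approximately linear relation between 1/t_ig and the incident flux q" and read off the x-axis intercept. The sketch below uses synthetic data constructed around an assumed critical flux of 16 kW/m2; it illustrates the fitting procedure only, not the study's own model or its reported 16.255 kW/m2.

```python
import numpy as np

# Synthetic (heat flux kW/m^2, ignition time s) pairs following the
# thermally-thin relation 1/t_ig = C * (q" - q_cr), with q_cr = 16 kW/m^2
# and an arbitrary constant C = 0.004 chosen for illustration.
q = np.array([25.0, 35.0, 50.0, 75.0])
t_ig = 1.0 / (0.004 * (q - 16.0))

slope, intercept = np.polyfit(q, 1.0 / t_ig, 1)  # fit 1/t_ig against q"
q_cr = -intercept / slope                        # x-axis intercept
print(round(q_cr, 3))  # → 16.0
```

    With real cone calorimeter data the points scatter around the line, and the intercept gives the flux below which ignition is not expected.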

  • Public Safety
    Dingli LIU, Xiao LEI, Diping YUAN, Yanglong WU, Zhisheng XU
    Journal of Tsinghua University(Science and Technology). 2025, 65(6): 1009-1018. https://doi.org/10.16511/j.cnki.qhdxxb.2025.22.017

    Objective: The efficient allocation and dispatch of fire rescue resources are crucial to urban public safety. Traditional approaches assume continuous spatial distribution of fire service coverage areas and give less consideration to the impact of real-time traffic conditions on rescue route selection and response times. This study aims to introduce and define the concept of "rescue enclaves"—areas that, although not directly adjacent to fire stations, can be effectively covered by them—and proposes a method to identify and calculate these spatially discontinuous coverage areas. Methods: This study proposed a method for identifying and calculating spatially discontinuous coverage areas by mapping points to grids. Using this method: (1) fire truck travel times were calculated using real-time traffic data, (2) geographic coordinates were converted to universal transverse Mercator (UTM) coordinates, (3) the region was divided into fine grids, (4) grid coverage status was determined, (5) transition grids were processed through neighborhood analysis, and (6) rescue enclaves were identified using a breadth-first search (BFS) algorithm. The CS-XX urban fire station in a Chinese city was selected as a case study to validate the method. In this case study, 3 818 points of interest were identified as rescue demand points across 49 evaluation periods in one day, generating 187 082 valid data samples. A target response time of 4 min was established, and an 80% reduction coefficient was applied to convert regular vehicle travel times to fire truck travel times. 
    Results: The rescue enclave areas were successfully identified and calculated using the proposed method, which revealed the following key findings: (1) the dynamic coverage area of CS-XX varied from 1.83 to 4.57 km2, with the minimum fire service coverage of 1.83 km2 recorded during the morning peak at 8:00; (2) the calculated coverage area trends were consistent with the percentage of demand points accessible within 4 min, validating the reliability of the method; (3) critical rescue enclaves were identified near CS-XX, with enclave areas ranging from 0.25 to 1.12 km2, accounting for 12.20%-27.53% of the total coverage area; (4) the rescue enclaves occasionally extended beyond the traditional coverage of 7.00 km2 prescribed by standard area determination methods; and (5) coverage areas and rescue enclave areas varied synchronously with traffic conditions, with traffic congestion leading to a significant reduction in their sizes. Conclusions: This study defines the concept of rescue enclaves and demonstrates that they form a substantial part of fire service coverage areas. Using the proposed algorithmic framework, rescue enclaves are systematically identified and quantified, and they are found to comprise up to 27.53% of a fire station's coverage area. Integrating rescue enclaves into fire rescue jurisdiction planning can substantially improve resource allocation. Real-time traffic conditions and differing flow efficiencies across route types are identified as the primary drivers of enclave formation; further studies are needed to clarify the mechanisms and contributing factors behind enclave emergence and to establish quantitative metrics for rescue passage efficiency across diverse route configurations.
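    The grid-and-BFS identification of rescue enclaves described in the Methods can be sketched as a flood fill from the station's cell: covered cells left unvisited by that fill form the enclaves. The grid below is a toy example; in the actual method each cell's covered/uncovered status would come from travel time calculations on real-time traffic data.

```python
from collections import deque

def find_enclaves(covered, station):
    """covered: 2-D list of 0/1 cells reachable within the target time;
    station: (row, col) grid cell of the fire station.  Covered cells not
    4-connected to the station's contiguous coverage are enclaves."""
    rows, cols = len(covered), len(covered[0])
    seen = [[False] * cols for _ in range(rows)]

    def bfs(start):
        comp, dq = [], deque([start])
        seen[start[0]][start[1]] = True
        while dq:
            r, c = dq.popleft()
            comp.append((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols \
                        and covered[nr][nc] and not seen[nr][nc]:
                    seen[nr][nc] = True
                    dq.append((nr, nc))
        return comp

    bfs(station)  # mark the contiguous coverage around the station
    enclaves = []
    for r in range(rows):
        for c in range(cols):
            if covered[r][c] and not seen[r][c]:
                enclaves.append(bfs((r, c)))
    return enclaves

grid = [[1, 1, 0, 0, 1],
        [1, 1, 0, 0, 1],
        [0, 0, 0, 0, 0]]
enclaves = find_enclaves(grid, station=(0, 0))
print(len(enclaves), len(enclaves[0]))  # → 1 2
```

    Multiplying each enclave's cell count by the grid cell area gives the enclave areas reported in the case study.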

  • Hydraulic Engineering
    Shouguang WANG, Huaguang LIU, Pengyu MU, Qiang YANG, Yaoru LIU, Qianghui LIU, Chi LIU, Xingyu JIANG
    Journal of Tsinghua University(Science and Technology). 2025, 65(10): 1821-1837. https://doi.org/10.16511/j.cnki.qhdxxb.2025.21.016

    Objective: The construction of hydraulic tunnels in high-stress surrounding rock environments often leads to the occurrence of rock bursts, thereby posing a substantial threat to engineering safety. Among the various active prevention and control measures for rock bursts, drilling pressure relief in surrounding rocks is considered a relatively economical and effective method. By creating drilled holes in the rock mass, stress concentration can be redistributed, thereby reducing the likelihood of sudden failures and improving the overall stability of the tunnel structure. Methods: In order to investigate the mechanical properties and damage characteristics of sandstone in hydraulic tunnels under different combinations of drilling numbers and drilling depths, a series of uniaxial compression tests were conducted. These tests utilized an advanced uniaxial compression testing machine and the VIC-3D noncontact full-field strain measurement system. The experiment involved eight different combinations of drilling holes in the sandstone specimens. This study comprehensively analyzed key parameters such as compressive strength, the accumulation and release characteristics of elastic strain energy, and the residual volume rate of sandstone. A regression analysis was conducted to establish a quantitative relationship between the residual volume rate of sandstone and its compressive strength. In addition, the crack evolution and damage characteristics of sandstone under different drilling hole configurations were studied using digital image correlation (DIC) technology and fracture phase field simulation. Furthermore, numerical simulations based on finite element methods were performed to compare the effects of straight holes and 10° inclined holes on stress redistribution within the rock mass. 
Results: The experimental and numerical results led to the following key findings: (1) when the radius of the drilled holes remains constant, an increase in the drilling depth leads to a decrease in the compressive strength of sandstone. This finding indicates that deeper drilling can effectively weaken the rock mass and facilitate stress relief. (2) Under the condition of identical hole radius and depth, an increase in the number of drilled holes results in a discontinuous reduction in the compressive strength of sandstone. Moreover, the arrangement of the drilled holes plays a crucial role in determining the overall strength of sandstone. For instance, the specimens with asymmetrical three-borehole configurations exhibited lower compressive strength than those with symmetrical four-borehole configurations. This finding suggests that asymmetrical arrangements can enhance energy dissipation efficiency and reduce the overall stress level within the rock. (3) The elastic strain energy of sandstone exhibits a strong positive correlation with compressive strength. Moreover, as the ratio of loss energy to elastic strain energy approaches zero, the intensity of sandstone destruction considerably increases. This outcome highlights the role of energy release in the failure process of rock materials. (4) DIC strain field analysis and numerical simulations confirm that sandstone under uniaxial compression follows a characteristic butterfly-shaped damage pattern. The three-borehole asymmetric configuration showed lower compressive strength, greater far-field stress reduction, earlier failure onset, and higher economic feasibility for pressure relief applications than the four-borehole symmetric configurations. (5) Under identical rock formation and borehole depth conditions, the impact of straight and 10° inclined boreholes on stress redistribution is found to be similar. 
However, practical construction decisions should be made considering site-specific conditions and operational requirements. Conclusions: This study provides valuable insights for optimizing the design of borehole pressure relief schemes for hydraulic tunnels. The findings provide a reference for engineers seeking to improve tunnel stability through effective stress redistribution strategies. By systematically evaluating different drilling configurations, this study contributes to the development of more efficient and cost-effective methods for mitigating rock bursts in high-stress environments.
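The regression step described above, relating the residual volume rate to compressive strength, amounts to a least-squares linear fit. A minimal sketch follows; the data points are illustrative placeholders, not the paper's measurements:

```python
import numpy as np

# Placeholder (residual volume rate, compressive strength) pairs for
# illustration only; the paper's measured values are not reproduced here.
residual_volume_rate = np.array([0.90, 0.92, 0.94, 0.96, 0.98])
compressive_strength = np.array([41.0, 47.5, 54.0, 60.5, 67.0])  # MPa

# Least-squares linear fit: strength ≈ a * rate + b.
a, b = np.polyfit(residual_volume_rate, compressive_strength, 1)

def predict_strength(rate):
    """Predicted uniaxial compressive strength (MPa) from the fitted line."""
    return a * rate + b
```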

  • Lithium-Ion Battery
    Yuanhua HE, Xingchen SU, Liang ZHAO
    Journal of Tsinghua University(Science and Technology). 2025, 65(9): 1805-1820. https://doi.org/10.16511/j.cnki.qhdxxb.2025.22.046
    Abstract (444) PDF (91) HTML (303)   Knowledge map   Save

    Significance: Amid the rapid development of new productivity tools, the active thermal management system of power lithium-ion batteries is facing significant challenges, such as improving charge and discharge rates and adapting to harsh application scenarios. To keep the power system operating stably in its optimal state, the technical bottleneck of efficient, long-term heat dissipation must be overcome. At the same time, in the consumer market, the cost factors of engineering products, including design, materials, space volume, cooling refrigerants, and plumbing systems, need to be carefully considered. Therefore, the active thermal management system of power lithium-ion batteries, which is widely used and has great potential, needs to be systematically summarized. Progress: This paper comprehensively reviews research progress on the active thermal management of power lithium-ion batteries in recent years. First, we summarize the research status of single-phase thermal management methods, including forced air cooling, natural air cooling, immersion liquid cooling, and microchannel liquid cooling. In the context of low charge and discharge rates and lightweight engineering, air cooling still plays an important role. The main factors affecting battery temperature include air flow rate, air flow velocity, battery layout, and flow channel design. The air cooling system has unique engineering advantages because of its low cost. As charge and discharge rates increase, the effect of microchannel and immersion liquid cooling is significantly enhanced, which is beneficial for controlling the battery's temperature and temperature uniformity. Several factors, such as liquid flow rate and channel design, have notable effects on the battery's heat dissipation; however, the corresponding costs also increase. Second, we discuss advanced cooling techniques based on gas/liquid two-phase flow, such as submerged boiling cooling and spray-integrated cooling. 
In the context of increasing demand for batteries with high charge and discharge rates, these technologies provide efficient, flexible, and adaptable solutions to thermal management challenges. The cooling medium, flow rate, nozzle arrangement, and droplet size each affect battery temperature differently. The overall feasibility and the cost recovery required to maintain profitability and the long-term development of the enterprise also need to be considered. Conclusions and Prospects: Based on the literature review, this paper forecasts the development trend of active thermal management technology across multiple application scenarios to meet the needs of lithium-ion battery power in sea, land, and air applications. We believe that the development of active thermal management technology for the new generation of power lithium-ion batteries should fully consider practical engineering requirements, such as charge and discharge rates and harsh application scenarios. Future research and development should focus on improving heat transfer efficiency, system integration, and intelligent control capabilities while overcoming the challenges of reliability, cost, adaptability to extreme operating conditions, and energy consumption optimization.

  • Process Systems Engineering
    Xin LIU, Bing WANG, Chenxi CAO
    Journal of Tsinghua University(Science and Technology). 2025, 65(5): 833-843. https://doi.org/10.16511/j.cnki.qhdxxb.2024.21.033
    Abstract (426) PDF (99) HTML (202)   Knowledge map   Save

    Objective: High-pressure gaseous hydrogen storage systems, such as large-scale hydrogen tank farms and distributed hydrogen refueling stations, are prone to hydrogen leakage, fire, and explosion because of the unique physicochemical properties of hydrogen. These events could set off a series of more serious accidents, causing domino accidents. This study proposes a Bayesian network (BN)-based analysis method for assessing the internal domino risk distribution within such systems. Methods: First, event tree models were established for various leakage scenarios in hydrogen-related facilities. Thereafter, all potential domino accident scenarios within the area were enumerated using accident consequence assessment models for hydrogen facility leaks. Next, BN models were automatically constructed to describe the propagation of domino accidents for each potential initial accident device. Finally, the BN models were used to analyze the magnitude and sources of the overall system risk, as well as the patterns of accident propagation under different leakage scenarios. Results: The overall risk in hydrogen refueling stations mainly originates from the self-failure risk of compressors and the domino risk of hydrogen storage cylinders; jet fire (JF) and vapor cloud explosion (VCE) contribute 76% and 23.4% to the domino risk of all hydrogen cylinders, respectively. When the storage pressure in hydrogen tank farms is between 2 and 15 MPa, the domino risk comprises >25% of the overall risk, with explosions serving as the predominant accident type resulting in domino accidents. Causal reasoning indicates that a JF from a medium hole is the most probable domino accident scenario for both the hydrogen storage cylinders in the hydrogen refueling stations affected by the JF and the spherical tanks in the hydrogen tank farms affected by the explosion. 
Diagnostic reasoning for initial accident scenarios indicates that rupture and large-hole leakage of hydrogen spherical tanks and cylinders, respectively, are the most probable causes, provided that a multistage domino accident has occurred. Conclusions: Regarding the common 2-MPa hydrogen spherical tank employed in Chinese green hydrogen projects, the cumulative self-failure risk and domino risk of all tanks in the tank farms are 3.5×10⁻⁵ a⁻¹ and 1.88×10⁻⁵ a⁻¹, respectively, with the latter accounting for ~35%. In the future, decreasing the storage pressure to 1-1.7 MPa or increasing it to 10-15 MPa might lower the contribution of domino risk to <30% and maintain the cumulative self-failure risk at a level of 10⁻⁵ a⁻¹. At 70-MPa hydrogen refueling stations, the domino risks to hydrogen cylinders from the compressors and pipeline are ~2.9×10⁻⁴ a⁻¹ and ~4.4×10⁻⁵ a⁻¹, respectively. In the abovementioned hydrogen storage systems, explosions are a notable accident type that can trigger domino accidents. Therefore, the implementation of explosion-suppression measures to decrease the probability of ignition is a key focus for mitigating the overall risk of hydrogen storage systems. Our findings indicate that future quantitative risk assessments for high-pressure hydrogen storage systems should consider the possibility of domino accidents. We believe these results serve as notable references for the establishment of advanced quantitative risk assessment methods customized to high-pressure hydrogen storage systems.
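The share of domino risk in the overall risk follows directly from the two reported per-year frequencies; the sketch below simply reproduces that arithmetic check, using the figures quoted for the 2-MPa tank farm:

```python
def risk_shares(self_failure_risk, domino_risk):
    """Combine per-year failure frequencies and return (total, domino share)."""
    total = self_failure_risk + domino_risk
    return total, domino_risk / total

# Figures reported for the 2-MPa spherical-tank farm (units: per year, a^-1).
total, share = risk_shares(3.5e-5, 1.88e-5)
```

With these values the domino share comes out near 35%, consistent with the abstract.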

  • Vehicle and Traffic
    Peibao WU, Rongkang LUO, Zhihao YU, Zhichao HOU
    Journal of Tsinghua University(Science and Technology). 2025, 65(5): 930-939. https://doi.org/10.16511/j.cnki.qhdxxb.2025.21.005
    Abstract (407) PDF (114) HTML (205)   Knowledge map   Save

    Objective: In-wheel motor drive systems offer significant advantages for electric vehicles, including large chassis space, high transmission efficiency, and great control flexibility. However, in current mainstream in-wheel motor driving vehicles, the unsprung mass is significantly increased because the motor or the driving unit is rigidly connected to the wheel hub. The increased unsprung mass not only deteriorates vehicle ride comfort and road holding performance, but also results in heavy motor vibration. To mitigate these negative effects, configurations with a suspended motor or driving unit have been proposed. It is thus desirable to explore the potential of these new configurations. Methods: This paper aims to mitigate the negative effects of unsprung mass by optimizing vehicle and motor suspension parameters simultaneously. To this end, it examines two typical in-wheel motor drive configurations with motor suspension: the dynamic vibration absorber configuration and the two-stage suspension configuration. Half-vehicle models are established respectively for both configurations, and key indices for vehicle dynamic performance are selected or defined. Drawing on earlier studies on how the increased unsprung mass impacts vehicle performance at various speeds, and considering the trade-off among ride comfort, road holding, and motor vibration, a multiobjective optimization strategy is proposed for parameter optimization of vehicle suspension and motor suspension. In the strategy, the goal is to minimize body vertical acceleration, wheel dynamic load, and motor acceleration at medium speeds while reducing body pitch acceleration, wheel dynamic load, and motor acceleration at high speeds. Constraints include the natural frequency and dynamic deflection of the vehicle suspension. Using the NSGA-Ⅱ algorithm, Pareto optimal solution sets are derived respectively for the two configurations. 
The entropy weight method is then applied to determine the optimal parameters for vehicle and motor suspensions. With the optimal suspension parameters, dynamic simulations are conducted on a random road, and the dynamic performance is evaluated based on the predefined indices. Results: The results indicate that, compared to the fixed hub motor configuration, both motor suspension configurations achieve a substantial performance enhancement in vehicle ride comfort, road holding, and motor vibration. The dynamic vibration absorber configuration delivers greater enhancements in vehicle body vertical and pitch vibrations, as well as wheel dynamic load: it reduces body vertical and pitch accelerations by 36.9% and 33.09%, respectively, at medium and high speeds, and decreases the wheel dynamic load by 18.42% and 18.55% at medium and high speeds, respectively. By contrast, the two-stage suspension configuration excels in reducing motor vertical vibration, lowering motor vertical acceleration by 67.48% and 65.43% at medium and high speeds, respectively. Conclusions: This paper presents a passive control approach to address the negative effects of unsprung mass by utilizing motor suspension configurations. The in-wheel motor drive configurations with motor suspension demonstrate significant potential for improving vehicle dynamic performance. This research serves as a valuable resource for the design of in-wheel motor driving vehicles.
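The entropy weight step used to pick a compromise solution from the Pareto set can be sketched as follows; this assumes the standard entropy-weight formulation rather than the paper's exact variant, and the decision matrix below is made up for illustration:

```python
import numpy as np

def entropy_weights(X):
    """Standard entropy weight method: criteria whose values vary more
    across the alternatives receive larger weights."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    P = X / X.sum(axis=0)                    # column-wise proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    E = -(P * logs).sum(axis=0) / np.log(n)  # per-criterion entropy in [0, 1]
    d = 1.0 - E                              # degree of diversification
    return d / d.sum()                       # normalized weights

# Hypothetical 4-solution x 3-objective decision matrix
# (cost-type indices such as accelerations and dynamic loads).
X = [[1.2, 0.8, 2.0],
     [1.0, 1.1, 1.5],
     [0.9, 1.3, 1.0],
     [1.4, 0.7, 2.5]]
w = entropy_weights(X)
```

The weights can then score each Pareto solution, and the best-scoring one is taken as the final suspension parameter set.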

  • Jingjing CHEN, Xin HU, Xinke SHEN, Dan ZHANG
    Journal of Tsinghua University(Science and Technology). 2025, 65(12): 2341-2350. https://doi.org/10.16511/j.cnki.qhdxxb.2025.21.049
    Abstract (407) PDF (210) HTML (361)   Knowledge map   Save

    Significance: Empowering machines to understand human emotions remains one of the primary challenges in developing artificial intelligence (AI). The affective brain-computer interface (BCI), which decodes emotional states based on brain signals, is an emerging field combining psychology, neuroscience, and AI. Brain signals are inherently uncontrollable, contain rich emotion-specific information, and provide a promising physiological basis for developing computing systems that support continual emotion monitoring. Since its inception, affective BCI research has required close collaboration across various disciplines: it depends on information science for feature engineering and algorithm development, psychology for theoretical frameworks of emotion, and neuroscience for revealing the neural mechanisms underlying emotional processes. Such a demand for multidisciplinary cooperation forms the core focus of this review. Specifically, this paper focuses on the methods by which psychology- and neuroscience-based insights can inspire and advance affective BCI research. Progress: We summarize the current progress at three levels: theoretical, technical, and applied. At the theoretical level, recent advances in affective science offer new perspectives for shaping affective BCI paradigms. The traditional discrete and dimensional frameworks have laid the groundwork for emotion decoding but often overlook positive emotions and the dynamic intensity of affective experiences. Recent emotion theories emphasizing refined positive emotions, mixed emotions, and context-dependent emotions provide valuable directions for improving emotion representation. Affective computing should align with these developments, integrating them into computational models to enhance ecological validity. In turn, affective BCI research may also contribute to psychology by offering evidence to test and refine emotion theories, fostering reciprocal progress across disciplines. 
At the technical level, neuroscience provides crucial insights for building more robust affective BCIs. Findings on emotional valence lateralization and distributed emotion-associated brain representations can inform the design of models that better capture emotional processing complexity. Moreover, inter-subject brain synchronization research has revealed mechanisms that enhance model generalizability across users, suggesting that incorporating neuroscientific findings can substantially improve the performance and reliability of affective BCIs. At the application level, affective BCIs are expanding beyond emotion recognition toward understanding emotion-related individual differences. Variability between individuals, often treated as noise, may instead offer meaningful information about personality traits or mental health conditions. In the long term, the goal of affective BCI systems may evolve from accurately identifying emotions to comprehensively understanding each individual's psychological tendencies and dynamic affective patterns across multimodal neural and behavioral data. We advocate for stronger integration between affective BCI technologies and practical domains. Such integration allows practical demands to drive technological development, ensuring that affective BCI remains human-centered. Conclusions and Prospects: Finally, we discuss the technical challenges of affective BCI, including extending algorithms from controlled laboratory settings to real-world scenarios, advancing sensor technology for more convenient and reliable brain-signal acquisition, and leveraging large models to enhance the performance of affective BCIs. Specifically, we emphasize the vital role of ethical considerations: as affective BCIs move from passive emotion detection toward active emotional support or intervention, the responsibility of humans as rational moral agents in a future era of human-computer symbiosis must be considered, ensuring the autonomy of human emotions.

  • Traffic and Transportation
    Chengyong ZHAO, Fei MA, Ruiying CUI, Wei REN
    Journal of Tsinghua University(Science and Technology). 2025, 65(10): 1930-1944. https://doi.org/10.16511/j.cnki.qhdxxb.2025.27.021
    Abstract (404) PDF (84) HTML (295)   Knowledge map   Save

    Objective: The development of modern urban agglomerations requires cultivating integrated transportation networks, advancing transportation integration, and building resilient, comprehensive three-dimensional systems. Building a highly efficient and stable transportation network is fundamental to promoting the growth of modern metropolitan areas. Methods: This paper focuses on the metropolitan area multimodal transportation network (MA-MTN) and uses complex network methodologies to model different transportation network construction processes. By applying a load capacity model, it defines the load and capacity levels of the MA-MTN and develops a load redistribution model to facilitate network operations. Network resilience evaluation indicators are then developed along three dimensions: overall network, network structure, and network function. From an integrated perspective, a comprehensive resilience indicator for multimodal transportation networks in urban agglomerations is established. The Xi'an metropolitan area is selected as the research subject for simulation analyses of the multimodal transportation network's integrated resilience. The sensitivity levels of the different traffic subnetworks in the multimodal transportation network of the metropolitan area are assessed through scenario simulations. The simulations examine the performance loss stage, the recovery stage following the implementation of recovery strategies, and the dynamic recovery stage during network failures caused by attacks on the multimodal transportation network. The effectiveness of strategies aimed at improving the integrated resilience of the metropolitan multimodal transportation network is also compared and analyzed. Results: The results indicate that different transportation subnetworks play distinct roles in the multimodal transportation network of urban agglomerations. 
When the MA-MTN is attacked, the sharpest decline in network performance occurs before the failure rate of network nodes reaches 60%. The node capacity adjustment parameter β in the MA-MTN needs to be dynamically calibrated, with a value of 0.2 recommended for daily operations and 0.7 during extreme weather events or holiday peaks. Enhancing the resilience of urban multimodal transportation networks requires prioritizing transportation efficiency in the transfer of passenger flow between nodes; this approach improves the comprehensive resilience of the network. Recovery strategies based on node load capacity levels are the most effective in improving the integrated resilience level of these networks. Such strategies enhance resistance to attacks and significantly improve recovery capabilities during cascading failures, ensuring robust network performance even under dynamic recovery conditions. Conclusions: This paper measures the integrated resilience level of transportation networks against external disturbances from a multidimensional integrated perspective; moreover, it identifies effective strategies to improve the structural and functional resilience of multimodal transportation networks in urban areas. Based on the research results, this study emphasizes the importance of determining critical thresholds for network node failures. It recommends dynamically adjusting the node capacity parameter β according to the operational conditions of the MA-MTN and external factors. Furthermore, it advocates optimizing the transfer of node passenger flow load by prioritizing transportation efficiency during network operations. The study also highlights the significance of prioritizing node load capacity and developing effective recovery strategies based on the comprehensive importance of nodes. These approaches are crucial for promoting the construction of efficient, stable, and multimodal transportation networks in urban areas.
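A load capacity model of the kind described, where each node's capacity is (1 + β) times its initial load and a failed node's load is redistributed to surviving neighbors, can be sketched as follows; the toy network and even-split redistribution rule are illustrative assumptions, not the paper's calibrated model:

```python
def cascade(loads, beta, seed_failures, neighbors):
    """Load capacity cascade: node i has capacity (1 + beta) * initial load;
    a failed node's load is split evenly among its surviving neighbors."""
    capacity = {i: (1 + beta) * l for i, l in loads.items()}
    load = dict(loads)
    failed = set(seed_failures)
    frontier = list(seed_failures)
    while frontier:
        nxt = []
        for f in frontier:
            alive = [j for j in neighbors[f] if j not in failed]
            if not alive:
                continue  # load is shed if no neighbor can absorb it
            share = load[f] / len(alive)
            for j in alive:
                load[j] += share
                if load[j] > capacity[j] and j not in failed:
                    failed.add(j)
                    nxt.append(j)
        frontier = nxt
    return failed

# Toy 4-node line network; node 0 carries a lighter initial load.
loads = {0: 0.6, 1: 1.0, 2: 1.0, 3: 1.0}
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
daily = cascade(loads, beta=0.2, seed_failures=[0], neighbors=nbrs)
extreme = cascade(loads, beta=0.7, seed_failures=[0], neighbors=nbrs)
```

In this toy network the β = 0.2 case cascades through every node, while β = 0.7 confines the failure to the seed node, illustrating why a larger capacity margin is suggested for extreme conditions.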

  • Public Safety
    Xiang GUO, Weibiao HU
    Journal of Tsinghua University(Science and Technology). 2025, 65(6): 1027-1039. https://doi.org/10.16511/j.cnki.qhdxxb.2025.22.024
    Abstract (392) PDF (122) HTML (154)   Knowledge map   Save

    Objective: The increasing complexity of disaster rescue systems and frequent global natural disasters require efficient emergency command mechanisms to reduce life and property losses. Large-scale earthquakes present time-sensitive, multi-dimensional challenges, requiring rapid decisions, precise resource allocation, and cross-departmental coordination. However, current research does not quantitatively analyze how emergency command capabilities affect rescue efficiency in dynamic disaster scenarios. This study develops a multi-agent simulation model based on the 6.2-magnitude Jishishan earthquake in China to assess the impact of command strategies on rescue operations and optimize emergency response systems. Methods: A multi-agent simulation model is developed using the NetLogo platform to meet the research objectives. It represents the emergency command and rescue system with four types of agents: the on-scene chief command agent, the on-scene support command agent, the emergency rescue force agent, and the disaster-affected agent. Each agent has specific behavioral patterns and interaction rules. The on-scene chief command agent oversees coordination, decision-making, and resource allocation. The on-scene support command agent manages task planning, resource scheduling, and real-time information feedback. The emergency rescue force agent performs rescue tasks, while the disaster-affected agent represents victims awaiting rescue. The simulation model is designed to reflect real-world scenarios, focusing on key variables such as information completeness, decision-making capability, resource allocation efficiency, and coordination success rate. This study analyzes scenarios under different conditions: (1) Information incompleteness: limited communication and fragmented data; (2) Resource scarcity: imbalanced demand-supply distribution; and (3) Feedback delays: lagging information updates and decision adjustments. 
The rescue rate (R), defined as the ratio of rescued victims to total victims, is the primary performance metric. Comparative analyses adjust agent capabilities to identify optimal strategies. Results: The simulation results highlight key findings: (1) Critical role of command capabilities. The on-scene chief command agent's information organization and coordination control capabilities are crucial in accelerating early-stage rescue operations. When optimized, these capabilities increase R by 0.4 within the first five simulation ticks. The on-scene chief command agent's feedback adjustment capability becomes crucial in later stages, thus reducing task conflicts by 0.25 through dynamic strategy updates. (2) Scenario-specific optimization strategies. Under incomplete information conditions, improving the on-scene support command agent's resource scheduling speed increases R from 0.4 to 0.9 in 9 ticks. During resource scarcity, enhancing the on-scene support command agent's coordination ability minimizes allocation conflicts, thus achieving a stable R of 0.7 despite limited supplies. During feedback delays, enhancing the on-scene support command agent's task prioritization management reduces decision latency by 30%, thus increasing R from 0.5 to 0.68 in 12 ticks. (3) Role of lower-level command agents. This study emphasizes the significance of lower-level command agents, especially the on-scene support command agent, in enhancing rescue efficiency. Optimizing their resource scheduling and coordination abilities can significantly enhance the overall rescue operation, even under complex, challenging conditions. Conclusions: This study quantitatively confirms that effective emergency command is crucial for earthquake rescue efficiency. 
The on-scene chief command agent's information integration and macro-level coordination capabilities form the foundation for rapid response, while the on-scene support command agent's strategic optimizations are critical under resource constraints. A hierarchical, decentralized command structure is recommended to effectively balance decision-making authority with operational flexibility. Future research should combine dynamic disaster factors to evaluate the robustness of command strategies in unpredictable scenarios.
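The rescue rate metric R, the ratio of rescued victims to total victims tracked over simulation ticks, can be illustrated with a minimal tick-based sketch; the per-tick rescue capacities below are hypothetical stand-ins for the agent capability levels studied in the paper:

```python
def rescue_rate_trajectory(total_victims, per_tick_capacity, ticks):
    """Rescue rate R(t) = cumulative rescued victims / total victims,
    assuming a fixed rescue capacity per simulation tick."""
    rescued, trajectory = 0, []
    for _ in range(ticks):
        rescued = min(total_victims, rescued + per_tick_capacity)
        trajectory.append(rescued / total_victims)
    return trajectory

# Hypothetical capacities standing in for command-capability levels:
# an optimized command structure rescues more victims per tick.
baseline = rescue_rate_trajectory(100, per_tick_capacity=5, ticks=12)
optimized = rescue_rate_trajectory(100, per_tick_capacity=8, ticks=12)
```

Comparing such trajectories under different capability settings is the kind of analysis the scenario experiments perform at much finer granularity.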

  • Computational Linguistics
    Yuanlai WANG, Yu BAI, Peng LIAN
    Journal of Tsinghua University(Science and Technology). 2025, 65(5): 844-853. https://doi.org/10.16511/j.cnki.qhdxxb.2024.21.032
    Abstract (381) PDF (112) HTML (144)   Knowledge map   Save

    Objective: ID-based recommendation methods in recommender systems utilize unique identifiers of users or items to generate suggestions. However, these methods often encounter challenges such as data sparsity and cold-start problems, especially when using single-domain data. Cross-domain ID-based recommendations can help mitigate cold-start issues by relying on overlapping users or items across different domains. In practice, however, different domains often lack overlapping users or items. To address this, latent semantic patterns in behavioral networks across various recommender domains can be leveraged. This method aims to extract user preferences for items from discrete ID data, thereby overcoming the limited shared information between these domains. Methods: Based on the study of interaction behaviors, this paper assumes the existence of latent pattern correlations between user-item interactions across different domains. A potential factor connects users across domains, leading some users to exhibit similar interaction behaviors in different contexts. These shared characteristics are referred to as interaction behavior semantic patterns. The proposed pattern-enhanced ID recommendation method enhances ID-based recommendations by leveraging these semantic patterns. In the target domain recommendation task, auxiliary domain information is introduced, and information from both auxiliary and target domains is jointly encoded using a graph neural network. By incorporating interaction behavior semantic patterns, user-item interaction and item description information from the auxiliary domain are transferred to the target domain. This process enhances the semantics of interaction behaviors in ID-based recommendations within the target domain. Results: This study conducts experiments on nine public datasets. 
User-item ID interaction data from datasets such as Yelp2018, Amazon-Kindle, Alibaba-iFashion, Amazon-Electronic, Book Crossing, MovieLens10M, MovieLens20M, and MovieLens25M serve as target domain datasets. Meanwhile, item description data from the Citeulike-a dataset is used as the auxiliary domain dataset. There are no overlapping user or item IDs between these domains. Experimental results show that the proposed method outperforms the current state-of-the-art methods, showing improvements in Recall@20 by 3%-30% and in NDCG@20 by 1%-40%. Conclusions: This study proposes an ID recommendation method enhanced by interaction behavior semantic patterns based on the assumption of latent pattern correlations in user-item interactions across different domains. By introducing these semantic patterns, this method transfers user-item interaction information and item description information from the auxiliary domain to the target domain, thereby enhancing semantic understanding in ID-based recommendations within the target domain. Experimental results validate the ability of the proposed method to transfer semantic information in the absence of overlapping users and items across domains, yielding better recommendation performance. These findings validate the effectiveness of the proposed assumption and method. Additionally, experiments on ID recommendation tasks in multiple domains show that interaction behavior patterns between similar domains offer better transferability. The closer the auxiliary domain is to the target domain, the more notable the improvement in the target domain's ID recommendation results.
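The two evaluation metrics reported above can be sketched in a few lines; this assumes the standard binary-relevance definitions of Recall@K and NDCG@K rather than any paper-specific variant:

```python
import math

def recall_at_k(ranked_items, relevant, k=20):
    """Fraction of the user's relevant items appearing in the top-k list."""
    hits = sum(1 for item in ranked_items[:k] if item in relevant)
    return hits / len(relevant)

def ndcg_at_k(ranked_items, relevant, k=20):
    """Binary-relevance NDCG@k with log2 position discounting."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(ranked_items[:k]) if item in relevant)
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / ideal

# Toy ranking versus a user's held-out interactions.
ranked = ["a", "b", "c", "d", "e"]
relevant = {"a", "c", "f"}
r = recall_at_k(ranked, relevant, k=5)
n = ndcg_at_k(ranked, relevant, k=5)
```

In practice both metrics are averaged over all test users before comparing methods.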

  • Vehicle and Traffic
    Haoran LI, Yunpeng LU, Shucai XU, Sifa ZHENG, Chuan SUN
    Journal of Tsinghua University(Science and Technology). 2025, 65(5): 948-958. https://doi.org/10.16511/j.cnki.qhdxxb.2024.21.023
    Abstract (370) PDF (92) HTML (193)   Knowledge map   Save CSCD(1)

    Objective: In the context of the swift progression of autonomous driving technology, the widespread reliance of current systems on uniform behavioral models for decision-making and path planning is a crucial concern. This generalized approach often disregards variations in driving behavior among different drivers, making it challenging to achieve driving behavior that aligns with drivers' expectations in complex and dynamic traffic scenarios. Consequently, a decrease in comfort and trust is observed in autonomous vehicles. This study focuses on lane changing, a common yet critical driving maneuver, aiming to optimize planning strategies by incorporating drivers' characteristics to match individual driving styles. Methods: This study comprehensively analyzes data derived from naturalistic driving experiments. Kalman filtering is used to detect and eliminate anomalies in raw data, thereby reducing noise interference. The integration of temporal constraints into the fuzzy C-means clustering algorithm ensures the preservation of chronological order in the clustered data, which is essential for analyzing sequential events such as lane change maneuvers. Lane changing requires lateral and longitudinal vehicle control with distinct operational characteristics across different phases of the maneuver. By clustering the entire lane-changing process data into three major categories, C1, C2, and C3, representing the preparation, execution, and completion stages of lane changing, respectively, this study aims to analyze disparities in driver behavior during these distinct phases. According to the characteristics of lane-changing scenarios, relevant variables are selected for in-depth examination. Independent sample t-tests are then conducted among different drivers for each variable, and variables with a high proportion of insignificant t-values are eliminated. This process helps identify personalized indicators that reflect driver-specific traits during lane changing. 
Subsequently, an artificial potential field (APF) model is established for the lane-changing scenario. The APF method uses virtual attractive and repulsive forces to guide the vehicle toward a path of decreasing potential energy, effectively avoiding obstacles while moving toward the target position. Variations in the APF parameters lead to different planning paths. By leveraging the extracted personalized indicator, the APF model for lane changing is customized, yielding paths that align with individual driving styles. Another pivotal consideration is the planning of lane-changing speeds. Given the notable variations in the speed preferences of drivers, this study proposes a lane-changing speed planning algorithm based on a quintic polynomial function. This ensures that the mean duration of acceleration and the maximum acceleration limit during the execution phase align with each driver's speed control habits and that a smooth velocity profile is maintained throughout the lane-changing maneuver. Conclusions: This study proposes a lane-changing planning method for autonomous vehicles that considers driver differences. The simulation results confirm that the proposed personalized lane-changing planning approach not only produces paths that align with individual driving styles but also regulates lane-changing velocities in accordance with each driver's operational habits. By quantifying behavioral variations, developing personalized APF models, and implementing customized speed planning strategies, this study exemplifies how to tackle individualization challenges in autonomous driving. This study represents a step forward in advancing autonomous vehicle technology toward a human-centric and intelligent future.
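A quintic polynomial profile of the kind described can be constructed by matching position, velocity, and acceleration at both ends of the maneuver. The sketch below is a generic illustration of that boundary-value construction, not the paper's implementation; all boundary values are hypothetical:

```python
import numpy as np

def quintic_profile(s0, v0, a0, sT, vT, aT, T):
    """Coefficients of s(t) = c0 + c1*t + ... + c5*t^5 matching
    position, velocity, and acceleration at t = 0 and t = T."""
    A = np.array([
        [1, 0, 0,      0,       0,        0],
        [0, 1, 0,      0,       0,        0],
        [0, 0, 2,      0,       0,        0],
        [1, T, T**2,   T**3,    T**4,     T**5],
        [0, 1, 2*T,    3*T**2,  4*T**3,   5*T**4],
        [0, 0, 2,      6*T,     12*T**2,  20*T**3],
    ], dtype=float)
    b = np.array([s0, v0, a0, sT, vT, aT], dtype=float)
    return np.linalg.solve(A, b)

def evaluate(c, t):
    """Position, velocity, and acceleration of the profile at time t."""
    t = np.asarray(t, dtype=float)
    pos = sum(ci * t**i for i, ci in enumerate(c))
    vel = sum(i * ci * t**(i - 1) for i, ci in enumerate(c) if i >= 1)
    acc = sum(i * (i - 1) * ci * t**(i - 2) for i, ci in enumerate(c) if i >= 2)
    return pos, vel, acc
```

A planner could then sample the acceleration curve and adjust the duration T until its peak stays within a given driver's habitual acceleration limit, which is the role the paper assigns to the personalized speed constraints.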

  • Intelligent Construction
    Jinpei LI, Xiaolin MENG, Liangliang HU, Yan BAO, Shiyu ZHAO
    Journal of Tsinghua University(Science and Technology). 2025, 65(7): 1260-1271. https://doi.org/10.16511/j.cnki.qhdxxb.2025.26.023
    Abstract (364) PDF (89) HTML (216) CSCD(1)

    Objective: The structural integrity of bridges is a critical concern as infrastructure ages, necessitating the development of reliable methods for detecting potential failures. Among these, the identification of small target cracks is particularly important, as these cracks often grow undetected until they result in severe damage. Traditional inspection methods, such as manual visual inspections, are hindered by their labor-intensive nature and susceptibility to human error, often resulting in the oversight of small but significant defects. Recent advancements in computer vision and deep learning technologies offer new opportunities to improve the accuracy and efficiency of bridge inspections. This study introduces an innovative approach for detecting small target cracks in bridge structures by employing an enhanced version of the You Only Look Once v8 (YOLOv8) object detection model, a widely recognized algorithm known for its rapid processing capabilities and high detection accuracy. The enhanced YOLOv8 model is tailored to detect small-scale cracks on bridge surfaces that may not be easily identifiable by traditional inspection methods or earlier versions of computer vision models. Methods: The proposed algorithm modifies the standard YOLOv8 model to address the specific challenges associated with detecting small cracks on bridge surfaces. A key modification is the integration of the efficient vision transformer (EfficientViT) into the backbone of the YOLOv8 model. EfficientViT is an advanced transformer-based architecture that reduces redundant parameters and optimizes the extraction of local features from high-resolution images, enabling more precise detection of subtle crack features. This enhancement is crucial, as small cracks often exhibit low contrast against their background and may be easily overlooked by less sophisticated models.
In addition to EfficientViT, the proposed algorithm also incorporates a large selective kernel network (LSKNet) within the C2f module of YOLOv8. LSKNet employs a dynamic kernel selection mechanism that allows the model to adaptively adjust the size of the convolutional kernels based on the input features, making it highly suitable for detecting cracks of varying sizes, orientations, and morphological characteristics. This adaptability ensures that the model can detect small cracks, regardless of their form. Furthermore, the model uses a bidirectional feature pyramid network (BiFPN) to merge feature maps at different scales. Traditional models struggle with detecting small targets due to the loss of critical information during downsampling operations. BiFPN mitigates this issue by preserving high-resolution feature maps across multiple layers, enhancing the model's ability to detect small cracks that would otherwise be missed. The combined effect of these modifications improves the accuracy of small target crack detection while maintaining computational efficiency. Results: The effectiveness of the proposed model was validated using a dataset of crack images from a specific bridge, captured by unmanned aerial vehicles (UAVs). UAVs provided detailed images from areas that were often difficult or dangerous to access using traditional inspection methods. The experimental results demonstrated that the enhanced YOLOv8 model significantly outperformed the original version in terms of key performance metrics. Specifically, the modified model achieved improvements of 3.7%, 3.5%, 3.5%, 3.9%, and 7.4% in terms of the detection precision, recall, F1 score, mAP50, and mAP50-95, respectively. These results indicated a substantial improvement in the model's ability to detect small cracks that often had low contrast and irregular shapes, which were typical characteristics of cracks on bridge surfaces.
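BiFPN's multi-scale merging relies on a fast normalized weighted fusion of same-resolution feature maps. The NumPy snippet below sketches that fusion rule in the EfficientDet-style formulation; in practice the weights are learnable per-layer parameters, and the values used here are illustrative only:

```python
import numpy as np

def fast_normalized_fusion(features, weights, eps=1e-4):
    """BiFPN-style fusion: clamp weights to be non-negative, normalize
    them to (approximately) sum to one, then take a weighted sum of
    same-shape feature maps."""
    w = np.maximum(np.asarray(weights, dtype=float), 0.0)  # ReLU keeps w >= 0
    w = w / (w.sum() + eps)                                # fast normalization
    return sum(wi * f for wi, f in zip(w, features))
```

Because the normalization is a simple division rather than a softmax, the fusion stays cheap, which is part of why BiFPN preserves accuracy without a large computational cost.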
Furthermore, compared to conventional methods, the proposed model was able to detect cracks with higher precision and fewer false positives, making it a promising tool for improving the efficiency of bridge inspections. Conclusions: The improved YOLOv8 algorithm introduced in this study represents a significant advancement in the detection of small target cracks in bridge structures. The modifications made to the original YOLOv8 model, including the integration of EfficientViT, LSKNet, and BiFPN, result in a more accurate and computationally efficient model for crack detection. This approach offers a practical and scalable solution for the widespread application of bridge health monitoring, particularly in areas that are difficult to inspect using traditional methods. By leveraging advanced surface data processing techniques, this research contributes to the development of modern methods for assessing the health of bridge structures, ultimately helping to ensure the safety and longevity of infrastructure systems.

  • Advanced Ocean Energy Technology
    Yajun REN, Sheng LI, Wei SHI, Jungang HAO, Ling ZHU, Shuai LI
    Journal of Tsinghua University(Science and Technology). 2025, 65(8): 1387-1402. https://doi.org/10.16511/j.cnki.qhdxxb.2025.27.020
    Abstract (359) PDF (109) HTML (174)

    Significance: The global offshore wind power industry is experiencing rapid growth, with floating offshore wind energy technology emerging as a pivotal solution for exploiting wind resources in deep sea areas. The floating foundation, a critical component of floating offshore wind power systems, plays an essential role in ensuring the stability and safe operation of wind turbines. However, the design and analysis of these foundations are fraught with challenges due to their intricate system composition, distinctive dynamic characteristics, and the harsh marine environment they must endure. Traditional design methods, which rely heavily on experience and trial-and-error, are not only inefficient but also fail to integrate multidisciplinary theories, highlighting the need for more scientific design and optimization tools. Progress: As research has deepened, technological advances and accumulated development experience have led to the application of multidisciplinary optimization design and analysis techniques in the floating wind power sector. The field of floating offshore wind power foundation optimization has seen significant advancements in recent years, with a shift towards more sophisticated multidisciplinary, multi-objective optimization techniques. These techniques have been crucial in addressing the complex interplay between various factors such as structural mechanics, hydrodynamics, aerodynamics, and economic considerations. Multidisciplinary design analysis and optimization (MDAO) techniques, originally developed in aerospace, enable system-wide optimization by considering interdisciplinary interactions, which is crucial for managing the complex dynamics between wind turbines and environmental loads. In the realm of optimization algorithms, genetic algorithms, particularly the Non-Dominated Sorting Genetic Algorithm Ⅱ (NSGA-Ⅱ), have become prominent due to their ability to handle multiple conflicting objectives simultaneously.
These algorithms have been effectively utilized to identify a set of Pareto-optimal solutions, providing a range of options that balance different performance criteria such as cost, structural fatigue, motion response, and tower acceleration. The use of frequency domain analysis has been widespread for early-stage optimization research due to its efficiency in capturing key dynamic characteristics of the floating structures. However, the industry has also recognized the need for time-domain simulations to capture the nonlinear dynamics of the system, especially when precision is paramount. Hybrid methods that combine the benefits of both frequency- and time-domain analyses, as well as the application of surrogate models, are being developed to achieve a balance between computational efficiency and accuracy. These innovative techniques offer scientific guidance for the scale planning and optimization design of floating foundations, striving to achieve an optimal balance in cost, performance, and environmental adaptability. This paper provides a comprehensive review of the evolution and application of multi-objective, multidisciplinary optimization methods in the scale optimization of floating offshore wind power foundations. Conclusions: The integration of multi-objective, multidisciplinary optimization technology is of paramount importance for the optimized design of floating offshore wind power foundations. By merging structural optimization concepts with efficient optimization algorithms and precise simulation tools, it is possible to enhance design efficiency, shorten the design cycle, and more scientifically and swiftly obtain floating foundation designs that exhibit superior comprehensive performance. This approach not only streamlines the design process but also ensures that the final scheme is more robust and cost-effective, meeting the stringent requirements of the offshore wind power industry.
Looking ahead, the field is expected to see further integration of advanced computational methods, machine learning techniques, and high-fidelity simulations to push the boundaries of floating offshore wind power foundation design, leading to more efficient, cost-effective, and durable solutions that can withstand the test of time and the rigors of the marine environment.
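The Pareto-optimal selection underlying NSGA-Ⅱ can be illustrated by a plain non-dominated filter over candidate designs. This is a generic sketch for minimized objectives (e.g. cost and fatigue damage), not a full NSGA-Ⅱ implementation, and the example objective values are hypothetical:

```python
import numpy as np

def dominates(a, b):
    """Design a dominates design b (objectives minimized) if a is no worse
    in every objective and strictly better in at least one."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return bool(np.all(a <= b) and np.any(a < b))

def pareto_front(points):
    """Indices of the non-dominated designs (the first NSGA-II front)."""
    return [i for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]
```

NSGA-Ⅱ repeatedly applies this kind of non-dominated sorting, plus a crowding-distance measure, to rank a population; the surviving first front is the set of trade-off designs offered to the engineer.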

  • Process Systems Engineering
    Chaopeng TENG, Cheng JI, Fangyuan MA, Jingde WANG, Wei SUN
    Journal of Tsinghua University(Science and Technology). 2025, 65(5): 825-832. https://doi.org/10.16511/j.cnki.qhdxxb.2024.21.036
    Abstract (358) PDF (92) HTML (106)

    Objective: During chemical production, the different sampling frequencies of different variables generate a substantial amount of unlabeled data, which is challenging to use effectively, resulting in data waste. Additionally, distributed control systems frequently produce noisy data due to environmental interference and aging measurement instruments, complicating soft sensing modeling. Furthermore, in semi-supervised tasks, unsupervised components can undermine the accuracy of supervised tasks. To address these issues, this study proposes a semi-supervised soft sensing method for product quality based on a ladder network, enabling accurate, timely determination of key product quality and enhancing operational efficiency. Methods: A two-step variable screening method—maximal information coefficient (MIC) followed by minimum redundancy maximum relevance (mRMR)—was used to screen auxiliary variables. MIC was first applied to eliminate low-correlation variables, and mRMR was then used to remove redundant variables among the auxiliary set, yielding an optimal selection for modeling. The ladder network-based soft sensing method was then established, improving noise resistance by injecting disturbances into each encoder layer and reconstructing noise-free features layer by layer through the decoder. Skip connections were added between encoders and decoders to extract more information from unlabeled data, enhancing focus on supervised tasks and strengthening the model's robustness and generalization. Results: This method was applied to a methanol-to-olefin (MTO) process, DMTO. The MIC and mRMR screening reduced 203 auxiliary variables to an optimal 50. After preprocessing, several soft sensor models were established to compare outcomes. Results showed that unlabeled samples improved the effectiveness of supervised soft sensing tasks, with the proposed method enhancing various evaluation metrics.
Residual analysis further indicated that the predicted residuals of the ladder network-based semi-supervised method closely aligned with a standard normal distribution, validating the method's superiority. Conclusions: Compared with supervised and other semi-supervised learning methods, the ladder network demonstrates superior prediction accuracy and generalization in soft sensing ethylene products in the DMTO process. The proposed approach offers promising applications for real-time monitoring and control of product quality in chemical production.
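The MIC-then-mRMR screening can be sketched as a greedy selection over precomputed relevance and redundancy scores. The sketch below assumes those scores are supplied externally (e.g. MIC values of each variable against the target and between variable pairs); the function name and example values are hypothetical:

```python
import numpy as np

def mrmr_select(relevance, redundancy, k):
    """Greedy mRMR: at each step pick the candidate with the largest
    relevance minus mean redundancy with the already-selected set.

    relevance:  (n,) score of each variable vs. the target (e.g. MIC).
    redundancy: (n, n) pairwise variable-variable scores.
    """
    relevance = np.asarray(relevance, float)
    redundancy = np.asarray(redundancy, float)
    selected = [int(np.argmax(relevance))]   # start from the most relevant
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(len(relevance)):
            if j in selected:
                continue
            score = relevance[j] - redundancy[j, selected].mean()
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected
```

The MIC pre-filter in the paper would first drop low-relevance variables, so this greedy step only runs over candidates that already correlate with the product quality variable.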

  • Frontiers in New-Quality Communication Technology
    Hailong QIN, Jincheng DAI, Sixian WANG, Shengshi YAO, Kai NIU, Wenjun XU
    Journal of Tsinghua University(Science and Technology). 2025, 65(11): 2080-2094. https://doi.org/10.16511/j.cnki.qhdxxb.2025.27.046
    Abstract (357) PDF (627) HTML (321)

    Significance: End-to-end semantic communication leverages deep learning models to extract semantic features from data, enabling intent-driven communication processes that significantly enhance transmission efficiency. However, existing semantic communication paradigms based on discriminative models employ symbol-level rate-distortion optimization and perform maximum likelihood estimation solely based on received signals, failing to satisfy the perceptual requirements of users. To ensure the visual quality of transmitted data, a generative visual semantic communication paradigm has emerged, which adopts a rate-distortion-perception optimization framework to achieve alignment between data transmission and human perception through maximum a posteriori estimation. Diffusion models are advantageous for controlling visual generation and have thus become essential tools for this generative paradigm. Nevertheless, systematic organization of the technical roadmaps for empowering semantic communication using diffusion models is lacking in current research. Progress: This study addresses this gap by modeling the communication process as a mathematical inverse problem and elucidating the general methodology by which diffusion models solve data compression and transmission challenges through posterior sampling. The fundamental concepts, mathematical formulations, and sampling strategies underpinning diffusion models are systematically introduced. In addition, the general methods and key technologies employed for diffusion model-enabled generative compression and transmission are comprehensively reviewed from an inverse problem-solving perspective. Moreover, the performance metrics commonly used for objective assessment of the visual quality of transmitted data are summarized to provide a comprehensive evaluation framework. The core methodology demonstrates that generalized communication processes can be effectively modeled as inverse problems. 
The approach involves inferring the source data distribution using maximum a posteriori estimation based on channel measurements and forward operators composed of various signal processing operations. Through diffusion posterior sampling, diffusion models solve these communication inverse problems via a three-step process: first, diffusion models pre-trained on large-scale datasets are used to obtain diffusion priors; second, joint source-channel codecs are used to mitigate channel distortions in visual data transmission and construct proximal regularization terms; finally, measurement regularization terms are constructed based on channel measurements. By integrating these regularization terms for posterior estimation and distribution sampling, diffusion models can implicitly reconstruct source data through gradient descent, effectively overcoming transmission challenges caused by strong channel noise, nonlinear operators, and time-varying channel conditions. Conclusions and Prospects: The analysis reveals that compared to visual semantic communication approaches based on discriminative deep learning models, the generative visual semantic communication paradigm based on diffusion models can significantly improve transmission efficiency and resilience while ensuring perceptual quality and semantic consistency of visual information. This advancement represents a fundamental shift toward communication systems that prioritize human perceptual requirements alongside traditional distortion metrics. Open issues, including image realism modeling and acceleration of diffusion model sampling, are discussed. The report highlights the effectiveness of conditional diffusion models for enabling existing semantic communication architectures to recover sources at the receiver based on minimal tokens and highly degraded measurements, offering an intelligent and concise design philosophy for future generative visual semantic communication systems.
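The posterior-sampling recipe above can be illustrated by a single toy update: an unconditional reverse-diffusion proposal corrected by the gradient of the measurement residual. This sketch assumes a linear forward operator and a fixed guidance step size, which is a simplification of diffusion posterior sampling rather than any of the surveyed systems; all names are hypothetical:

```python
import numpy as np

def dps_step(x0_hat, prior_step, y, A, zeta=0.1):
    """One toy diffusion-posterior-sampling update.

    prior_step : unconditional reverse-diffusion proposal for x_{t-1}
    x0_hat     : denoised estimate of the clean signal at this step
    y, A       : channel measurement and linear forward operator

    The gradient of ||y - A x0_hat||^2 pulls the sample toward
    consistency with the measurement (the measurement regularization).
    """
    residual = y - A @ x0_hat
    grad = -2.0 * A.T @ residual          # d/dx0 ||y - A x0||^2
    return prior_step - zeta * grad       # guided reverse step
```

Real systems replace the analytic gradient with automatic differentiation through the denoiser and fold in the proximal term supplied by the joint source-channel codec.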

  • Mechanical Engineering
    Chuanhui ZHU, Zihao WANG, Zhiming ZHU, Tianyi ZHANG, Jichang GUO
    Journal of Tsinghua University(Science and Technology). 2025, 65(5): 882-890. https://doi.org/10.16511/j.cnki.qhdxxb.2024.21.024
    Abstract (351) PDF (108) HTML (172)

    Objective: Long-distance oil and gas transmission pipelines are important energy infrastructures. Currently, there are deficiencies in the automatic tracking accuracy and adaptability of external welding machines during pipeline construction. Operators often need to manually adjust external welding equipment (welding torch) to ensure the quality of the joints. Improving the intelligence of the welding process is an effective way to improve the efficiency and joint qualification rate during the on-site laying of long oil and gas pipelines. This study proposes a detection and control algorithm for the welding torch position and posture, applicable to all position welding of workpieces with arbitrary spatial postures. Methods: This study is the first to design a multisource sensor that combines laser-structured light vision sensing with dual-axis tilt sensing. This multisource sensor combines the advantages of both types of sensing, enabling it to detect the relative position information of the welding torch, as well as the posture information of the welding torch and workpiece. Using this multisource sensor, the algorithm performs integrated calculations of the welding groove size parameters and relative position and posture parameters of the welding torch under any workpiece posture through local groove surface reconstruction. This method fully uses laser line data from images to ensure stable, applicable, and accurate parameter calculations. Through coordinate transformation, the spatial posture (αw and βw) of the local workpiece can be obtained. These integrated feature parameters provide the basis for controlling the welding torch's spatial position and posture in any pipeline space all-position welding. Next, a pipeline intelligent welding system with five degrees of freedom based on multisource sensing is constructed. 
The system, combined with the designed algorithm, achieves real-time control of the welding torch position and posture (e, H, α, and β), meeting welding process requirements and enabling high-quality weld formation control during arc welding. Results: The experimental results show that the attitude angle feedback control error of the welding torch did not exceed 0.8°, the lateral position tracking deviation was within 0.25 mm, and the height tracking deviation did not exceed 0.63 mm during the pipeline all-position welding process. Compared to existing welding seam detection and tracking systems based on structured light-vision sensing, the proposed algorithm offers superior accuracy and stability. It detects not only the position deviation of the welding torch but also the posture of the welding joint on any unstructured surface with an unknown spatial posture. Conclusions: The proposed algorithm for detecting and controlling the position and posture of the welding torch can be used to achieve accurate control during pipeline space all-position welding. This advancement significantly improves the intelligence level of pipeline external welding equipment and provides technical support for controlling the position and posture of the welding torch when welding curved workpieces with unknown postures.
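The coordinate transformation from dual-axis tilt readings to a workpiece posture can be sketched as composing roll and pitch rotations. The convention below (rotation about x, then about y) is an assumption chosen for illustration, not necessarily the paper's frame definition, and the function names are hypothetical:

```python
import numpy as np

def tilt_rotation(alpha, beta):
    """Rotation from the sensor frame to the world frame given dual-axis
    tilt readings: alpha about x (roll) then beta about y (pitch), radians."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    return Ry @ Rx

def workpiece_posture(normal_local, alpha, beta):
    """World-frame surface normal of the local workpiece patch."""
    return tilt_rotation(alpha, beta) @ np.asarray(normal_local, float)
```

With the groove geometry reconstructed in the sensor frame, applying such a rotation yields the spatial posture of the local workpiece, which is what the torch position and posture controller then tracks.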

  • Public Safety
    Changkun CHEN, Yipeng BAO, Jian ZHANG, Rongfu YU
    Journal of Tsinghua University(Science and Technology). 2025, 65(6): 1019-1026. https://doi.org/10.16511/j.cnki.qhdxxb.2025.22.018
    Abstract (348) PDF (75) HTML (131)

    Objective: The safety of rescue personnel is a critical factor in determining the success of rescue operations. The ability to accurately identify the motion states of rescue personnel is key to ensuring their safety. However, monitoring their motion states in real time is challenging because of the complex and dangerous environment they operate in. This study aims to develop a method for identifying the motion states of rescue personnel based on triaxial motion data to enhance the efficiency of personnel safety monitoring during rescue missions. Methods: In this study, the MPU6050 sensor, an integrated triaxial accelerometer and gyroscope, was utilized to collect the motion data from the leg and waist of rescue personnel. This sensor was selected based on its low power consumption, automatic sleep mode, and power management features, making it suitable for long-duration rescue tasks. Before data collection, the sensors were calibrated using zero-bias calibration to reduce errors and ensure data reliability. These sensors were strategically placed on the waist and leg of the rescue personnel to capture their overall body dynamics and detailed movements. This study analyzed the acceleration data under four different motion states: standing still, working in a small area, walking, and running. The data were analyzed using time-domain feature analysis, focusing on the standard deviation of acceleration to quantify the fluctuation and stability of the motion states. This study proposed a classification mechanism based on the sum of the standard deviations of waist and leg accelerations to distinguish between different motion states. Results: The experimental results demonstrated that the proposed method effectively distinguished between different motion states. In the standing-still state, the total acceleration was close to zero, indicating no movement.
In the state of working in a small area, the acceleration was greater than zero but remained within a small range with stable fluctuations. In the walking state, there was a significant difference between the waist and leg total accelerations, with the latter showing larger fluctuations and clear peaks and valleys. In the running state, both waist and leg total accelerations showed larger fluctuations, with the latter having a greater amplitude. The method showed high accuracy and stability in real-time monitoring of rescue personnel's motion states, effectively identifying the changes in motion states within a 2-min test period. The standard deviation analysis revealed a clear hierarchical distribution, indicating significant differences in acceleration fluctuations between different motion states. The sum of the standard deviations of waist and leg accelerations provided a reliable basis for distinguishing between the four motion states. Conclusions: This study has provided a reliable method for monitoring the motion states of rescue personnel, which can substantially improve the safety and efficiency of rescue operations. The method's ability to accurately and stably identify different motion states in real time makes it a valuable tool for ensuring the safety of rescue personnel in complex and dangerous environments. The findings of this study contribute to the development of more effective monitoring systems for rescue operations, potentially reducing the risk of accidents and enhancing the overall success rate of rescue missions.
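The classification mechanism based on the sum of standard deviations can be sketched as a simple threshold rule over the two acceleration streams. The threshold values below are illustrative assumptions, not the calibrated values from the experiments:

```python
import numpy as np

def classify_motion(waist_acc, leg_acc, thresholds=(0.05, 0.5, 2.0)):
    """Classify the motion state from the sum of the standard deviations
    of waist and leg total accelerations (threshold values are
    placeholders for illustration).
    """
    s = np.std(waist_acc) + np.std(leg_acc)
    t1, t2, t3 = thresholds
    if s < t1:
        return "standing still"
    if s < t2:
        return "working in a small area"
    if s < t3:
        return "walking"
    return "running"
```

Running this rule over a sliding window of sensor samples gives the real-time state stream described in the experiments; the hierarchical spread of standard deviations is what makes the thresholds separable.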

  • Microgravity Combustion
    Yucheng LIU, Xingxian LI, Yuzhe WEN, Huilong ZHENG, Xiaofang YANG, Xiaowu ZHANG, Yufeng HE, Jiaokun CAO, Changshuai DU, Qiang YAO
    Journal of Tsinghua University(Science and Technology). 2025, 65(9): 1609-1620. https://doi.org/10.16511/j.cnki.qhdxxb.2024.27.039
    Abstract (348) PDF (171) HTML (169)

    Significance: Experimental conditions in microgravity differ considerably from those in Earth's normal gravity. Combustion experiments conducted in microgravity eliminate the effects of natural convection and simplify the complex factors of combustion processes. Such experiments can reveal many physical and chemical phenomena that are obscured under normal gravity conditions, providing significant insights for fundamental scientific research. Meanwhile, microgravity combustion experiments allow a deeper investigation into the fundamental physical phenomena of advanced combustion issues, serving as a crucial means for basic research. This research supports China's energy and power industries in addressing the needs related to energy conservation, emission reduction, and green energy transition, as well as those related to fire prevention on the ground and in space. Progress: The China Space Station (CSS) is planned to support combustion science experiments using multiple fuel types, including gaseous, liquid, and solid fuels, in orbit. The first series of CSS combustion experiments consisted of gaseous combustion experiments, a few of which were conducted in the combustion science rack (CSR). This article reviews the progress of microgravity jet flame research and introduces types of scientific research that can potentially be supported by the combustion science application system and gaseous combustion experiment insert (GCEI) in the CSR. The combustion science experiment system provides the GCEI with the necessary resources, such as water cooling, electricity, and gas emissions. The GCEI supports gas-flow regulation functions, allowing the adjustment of the gas type, flow rate, and ignition power based on the project's scientific objectives. The GCEI features a universal burner platform and can adjust the gas composition, flow rate, and ignition energy. Various types of flames can be generated by replacing the project burners.
Optical diagnostics conducted outside the optical windows of the combustion chamber provide data on the flame dynamics, flow fields, and spatial distributions of OH and CH. Currently, astronauts aboard the CSS have installed an igniter in the gas experiment module and mounted the GCEI in the CSR combustion chamber. The GCEI automatically completes a series of actions, including configuring the combustion environment gas, ejecting the fuel gas, heating the igniter, determining parameters, performing optical diagnostics, filtering and circulating, and exhausting waste gases. Because of the lack of buoyancy effects, microgravity flames exhibit considerable differences compared to normal gravity flames. After the experimental data are transmitted to the ground operation control center, the experimental conditions are controlled and monitored to confirm the normal operation of each subsystem. The fuel, oxidizer, and inert-gas flow rates are set according to predetermined delays and settings, demonstrating the normal operation of key modules, such as the GCEI's fuel gas cylinder module, gas-distribution solenoid valve, igniter, and oxidizer and diluent subsystems of the CSR. The image intensifier camera of the combustion diagnostic subsystem captures corresponding OH and CH emission images, demonstrating an increase in the flame width and a rapid decrease in the flame height until localized extinction occurs at the end of the non-premixed flame. Conclusions and Prospects: The present study verifies that the GCEI can effectively realize microgravity flames for gaseous experiments in orbit and provide a support and design basis for subsequent diversified combustion science experiments. The GCEI is expected to provide valuable data and platform support for subsequent microgravity experiments aboard the CSS.

  • Public Safety
    Jing LI, Qiyu FANG, Cheng GUAN, Zhizhen ZHANG, Xiao LI, Xuecai XIE
    Journal of Tsinghua University(Science and Technology). 2025, 65(6): 1079-1089. https://doi.org/10.16511/j.cnki.qhdxxb.2025.22.021
    Abstract (343) PDF (87) HTML (143)

    Objective: With the continuous expansion of power grid engineering and the rapid enhancement of information technology, the complexity of accident scenarios has increased significantly, leading to an explosive growth in monitoring data. This study aims to address the limitations of current power grid emergency plans in large-scale data querying and on-site guidance, assist emergency decision-makers in quickly generating response plans and accurately allocating emergency resources, and promote the digitalization of emergency plans. Toward this goal, this study proposes an improved method for constructing an ontology model in power grid emergency planning. Methods: First, the traditional seven-step ontology construction method is refined based on the Toronto virtual enterprise (TOVE) and skeletal methods. In the refinement process, an "application scenario analysis" phase is introduced in the initial step to enhance the relevance of the ontology construction. Additionally, after creating ontology instances, a "qualitative and quantitative analysis" phase is adopted to verify the scientific validity and feasibility of the ontology, thereby improving model quality. Subsequently, the improved method comprehensively implements the goal determination and construction processes of the ontology model. These processes include defining knowledge in the power grid emergency planning domain; evaluating the reuse of existing ontologies; clarifying key concepts from legislation, emergency scenarios, and enterprise planning systems; and establishing class hierarchies and attributes. Next, the Protégé tool is employed for model visualization. Taking the emergency plan of a provincial power company for typhoon disaster events as an example, a model was constructed comprising 39 ontology categories, 24 relationship categories, and 14 attribute categories, supplemented by 408 entities, 774 relationships, and 334 attributes.
Finally, the ontology model is applied to study the semantic network of emergency plans, and a schema for the knowledge graph of emergency plans for power enterprises is designed based on the resource description framework schema and web ontology language frameworks. The ontology model in the field of power grid emergency planning is visualized using Protégé. The richness and structural integrity of the model are evaluated using the HermiT 1.4.3.456 reasoning engine and an ontology quality analysis method. Results: The results indicate that the relationship richness of the model approaches 1, suggesting a rich relationship structure; the attribute richness value exceeds 1, indicating reasonable attribute settings; and the richness of major classes is 1, whereas that of minor classes is 0.9474, close to 1, demonstrating a high utilization rate of classes. Overall, the model exhibits good rationality and practicality. Conclusions: Empirical results demonstrate that this ontology model effectively addresses the issues of impracticality, usability, and relevance often encountered in emergency plans. It significantly enhances the efficiency of emergency personnel in response and decision-making while also improving the expressiveness and digital construction of knowledge related to power grid emergency planning.
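The class, relationship, and attribute richness figures above are ontology-quality metrics in the OntoQA style; a minimal Python sketch under that assumption follows (the paper's exact formulas may differ):

```python
def relationship_richness(num_relations, num_subclass_links):
    # Share of non-inheritance relations among all links between classes
    return num_relations / (num_relations + num_subclass_links)

def attribute_richness(num_attributes, num_classes):
    # Average number of attributes defined per class
    return num_attributes / num_classes

def class_richness(num_instantiated_classes, num_classes):
    # Share of classes that actually have instances
    return num_instantiated_classes / num_classes
```

Under these definitions, a relationship richness near 1 means inheritance links are rare relative to other relations, and a class richness of, say, 18/19 ≈ 0.9474 would correspond to one class out of nineteen having no instances.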

  • Mechanical Engineering
    Zhenjun YU, Ningbo LEI, Yu MO, Xiu LI, Biqing HUANG
    Journal of Tsinghua University(Science and Technology). 2025, 65(5): 901-911. https://doi.org/10.16511/j.cnki.qhdxxb.2024.22.034
    Abstract (341) PDF (122) HTML (155)

    Objective: Predicting the remaining useful life (RUL) of industrial equipment is critical for maintaining safe operations and minimizing maintenance costs. However, RUL prediction for edge devices faces several challenges. First, edge devices often lack the computational power and storage capacity required for complex RUL prediction algorithms, making such predictions difficult. Many RUL prediction algorithms require substantial resources, which are scarce on edge devices. Second, the limited data transmission rate between the cloud and edge devices causes high latency when transmitting large data sets to the cloud, affecting real-time predictions and increasing network bandwidth usage. Additionally, data sharing among all edge devices is often impractical owing to privacy, security issues, and potential conflicts of interest, limiting models to local data and reducing their accuracy. Methods: To address these challenges, this paper proposes a cloud-edge collaboration framework for RUL prediction based on federated learning. The framework comprises two main processes. In the first process, each training device trains a variational autoencoder (VAE) using its local data set. The trained encoders are then uploaded to the cloud and aggregated using a weighted average method (FedAVG), with the number of training samples as weights. The aggregated global encoder is then downloaded to all edge devices. In the second process, the aggregated encoder extracts hidden features from the local data sets on each edge device. These features are uploaded sequentially to the cloud to train the RUL predictor. Once trained, the predictor is sent back to the edge devices, completing one training cycle. This iterative process continues until a well-trained RUL prediction model, consisting of the global encoder and predictor, is achieved. 
During the testing stage, the global encoder is used to extract hidden features, while the RUL predictor performs deeper feature extraction and RUL prediction. In this framework, only local encoders and hidden features are uploaded to the server, significantly reducing communication overhead. Most of the training occurs on the server, with clients only performing the basic training of the shallow VAE, thereby effectively utilizing the server's powerful computational capabilities. Data privacy is maintained since the server receives hidden features and encoders rather than the original data, preventing data reconstruction. Results: To validate the proposed method's efficiency and practicality, different network structures were tested for RUL prediction on the commercial modular aero-propulsion system simulation (C-MAPSS) data set. Although there was a slight decline in prediction performance compared to the baseline, the difference was within acceptable limits. This minor trade-off in accuracy enabled RUL prediction under resource constraints. The proposed algorithm consistently and significantly reduced data transmission time after feature extraction across various data scales. In industrial scenarios with large data volumes, this reduction was even more pronounced. Further validation using nuclear power unit fault data sets showed a slight increase in root mean square error (RMSE) on the test set without a significant drop in prediction accuracy. These results demonstrate that the proposed cloud-edge collaboration framework is promising for fault diagnosis in nuclear power units, effectively addressing edge resource limitations. Conclusions: The proposed cloud-edge collaboration framework leverages federated learning to achieve RUL prediction on resource-constrained edge devices, thereby alleviating issues related to resource constraints and data privacy. 
By employing VAE-based feature extraction and federated learning for model training, the framework achieves efficient training while significantly reducing communication overhead with minimal impact on accuracy. Experimental validation on industrial simulation data sets and nuclear power unit fault data sets demonstrates the framework's practicality and effectiveness. This framework represents a useful approach to addressing challenges in fault diagnosis and RUL prediction within resource-constrained settings.
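The aggregation step described above, a weighted average with sample counts as weights (FedAVG), can be sketched as follows; NumPy arrays stand in for the encoder parameters, and the function name and shapes are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

def fedavg(client_weights, client_sample_counts):
    """Aggregate per-client parameter lists, weighted by sample count.

    client_weights: list of clients, each a list of np.ndarray parameters.
    client_sample_counts: number of training samples per client.
    """
    total = sum(client_sample_counts)
    agg = [np.zeros_like(w) for w in client_weights[0]]
    for weights, n in zip(client_weights, client_sample_counts):
        for i, w in enumerate(weights):
            agg[i] += (n / total) * w
    return agg
```

A client holding three times as much data as its peer then contributes three times the weight to the global encoder.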

  • Public Safety
    Tiantian WANG, Tiezhong LIU, Congcong LI
    Journal of Tsinghua University(Science and Technology). 2025, 65(6): 1040-1049. https://doi.org/10.16511/j.cnki.qhdxxb.2025.22.013
    Abstract (333) PDF (55) HTML (166)

    Objective: This study integrates social attributes of human behavior as an independent mechanism within the analytical framework of negative emotion propagation dynamics. It aims to provide a comprehensive understanding of how negative emotions spread across social networks and establish a scientific basis for effective public opinion management and crisis response. Methods: This study examines the distinct mechanisms of social reinforcement and individual regulation that differentiate the spread of negative emotions in social networks from that of traditional infectious diseases. A heterogeneous propagation threshold model, named the SI-SEIR (social reinforcement and individual regulation susceptible-exposed-infected-recovered) model, incorporates a dual influence mechanism of "social reinforcement-individual regulation". First, we develop a non-Markovian negative emotion propagation model, considering social reinforcement and variations in individual emotion regulation abilities. We then extend the edge-based compartmental theory to determine the theoretical outbreak threshold and final propagation scale, including both continuous and discontinuous phase transitions. Extensive numerical simulations are conducted based on data from the Weibo network, using the Hubei Province Red Cross Society incident at the early stage of the COVID-19 pandemic to validate the effectiveness of the SI-SEIR model. Results: The findings show that individual emotion regulation abilities and social reinforcement significantly impact the spread of negative emotions. Improving individuals' emotion regulation ability and decreasing social reinforcement intensity can help effectively reduce large-scale outbreaks of negative emotions during public crises. Moreover, the network's topological features significantly influence propagation outcomes. 
When individuals have relatively uniform emotion regulation abilities, a higher average degree of the network substantially raises the outbreak threshold, thereby reducing the likelihood of widespread diffusion. Increasing network heterogeneity can help increase the outbreak threshold and reduce the spread of negative emotions. Conclusions: Considering both social reinforcement and individual emotion regulation mechanisms is critical for accurately modeling and predicting the dynamics of negative emotion propagation in social networks.
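A toy, discrete-time illustration of the dual mechanism is sketched below: per-contact transmission is amplified by the number of infected neighbours (social reinforcement) and damped by a node-level regulation ability. The parameter names and update rule are assumptions for illustration only; the paper's non-Markovian, edge-based compartmental model is more elaborate:

```python
def step_si_seir(adj, state, beta, alpha, mu, reg, rng):
    """One synchronous step of a toy S-E-I-R spread on a graph.

    Social reinforcement: the per-contact rate beta is amplified by
    alpha for every infected neighbour beyond the first.
    Individual regulation: reg[v] in [0, 1] damps node v's adoption.
    """
    new = dict(state)
    for v, nbrs in adj.items():
        if state[v] == 'S':
            m = sum(1 for u in nbrs if state[u] == 'I')
            if m:
                per_contact = min(1.0, beta * (1 + alpha * (m - 1)))
                p = (1.0 - (1.0 - per_contact) ** m) * (1.0 - reg[v])
                if rng.random() < p:
                    new[v] = 'E'
        elif state[v] == 'E':
            new[v] = 'I'      # toy assumption: incubation lasts one step
        elif state[v] == 'I' and rng.random() < mu:
            new[v] = 'R'
    return new
```

Iterating this step from a small infected seed and sweeping beta, alpha, and the reg distribution reproduces, qualitatively, the threshold behaviour the abstract describes.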

  • Traffic and Transportation
    Qingchang LU, Rundong WANG, Pengcheng XU, Shixin WANG
    Journal of Tsinghua University(Science and Technology). 2025, 65(10): 1945-1956. https://doi.org/10.16511/j.cnki.qhdxxb.2025.21.014
    Abstract (332) PDF (77) HTML (211)

    Objective: The metro system, as a crucial component of modern urban transportation, relies heavily on the reliability of its traction power network to maintain stable operations. However, existing research on metro system resilience assessment often overlooks the complex coupling characteristics between the traction power network and the metro network. In particular, the many-to-one and one-to-many coupling characteristics of the traction power network significantly influence metro system resilience but remain underexplored. This study proposes a resilience assessment method for metro networks based on the network coupling characteristics, focusing on quantitatively evaluating the dynamic impact of traction power network failures on metro network operational performance under both partial and complete failure scenarios. Methods: This research constructs separate models for the traction power network and the metro network. Building on these foundational models, it incorporates the many-to-one and one-to-many power supply characteristics of the traction power network, establishing a coupling model that integrates both systems. Network efficiency, which considers passenger flow weighting and travel time impedance, forms the basis for assessing resilience. The Monte Carlo method is used to model the recovery process of the metro traction power network. Using the Xi'an metro network as a case study, different failure scenarios are simulated, enabling a comprehensive evaluation of the metro system's service capacity and resilience changes under various fault conditions. Results: The results of this study are as follows: (1) The many-to-one redundancy characteristic of the traction power network enhances metro network resilience by 6.8%-14.4%. However, ignoring the one-to-many characteristics of the traction power network may lead to an overestimation of resilience, as cascading failure effects are inadequately accounted for. 
(2) Traction power network failures in high passenger flow areas can cause efficiency losses of up to 50.2%, with corresponding resilience losses reaching 36.1%. (3) Resilience performance varies across metro stations and the overall network depending on the complexity of failure scenarios. More complex scenarios involve a greater number and broader distribution of repair targets, increasing the intricacy and time demand of recovery processes. Conclusions: The proposed metro network resilience assessment method based on network coupling characteristics provides a more accurate evaluation of the impact of traction power network failures. By accounting for both many-to-one and one-to-many coupling characteristics, the method realistically reflects the redundancy supply effect of the system and the cascading failure process. The study emphasizes that while adopting a decentralized layout, metro system operation and planning need to strengthen the redundancy design of traction substations and supply section networks. Furthermore, a coordinated emergency response across multiple departments is recommended to ensure rapid mobilization of repair resources and shuttle capacity, minimizing disruptions to passenger travel during emergencies. The findings of this study provide theoretical guidance for developing emergency response and recovery strategies in metro systems under power facility failure scenarios. Future research will expand the resilience assessment framework to multi-modal transportation systems, further improving the universality and practicality of the model.
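The performance measure above, network efficiency weighted by passenger flow and travel time impedance, with resilience taken over the recovery process, can be sketched minimally. The normalisation below (flow over shortest travel time, divided by total flow, with resilience as the area under the normalised performance curve) is one plausible form and not necessarily the paper's exact definition:

```python
import math

def floyd_warshall(times):
    # All-pairs shortest travel times from a dense time matrix
    n = len(times)
    d = [row[:] for row in times]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def weighted_efficiency(times, flow):
    # Flow-weighted efficiency: sum of w_ij / t_ij, normalised by total flow
    d = floyd_warshall(times)
    n = len(times)
    num = sum(flow[i][j] / d[i][j]
              for i in range(n) for j in range(n)
              if i != j and d[i][j] < math.inf)
    den = sum(flow[i][j] for i in range(n) for j in range(n) if i != j)
    return num / den

def resilience(efficiency_series, baseline):
    # Trapezoidal area under the normalised performance curve over recovery
    e = [x / baseline for x in efficiency_series]
    return sum((e[i] + e[i + 1]) / 2 for i in range(len(e) - 1)) / (len(e) - 1)
```

Monte Carlo recovery sampling then amounts to drawing failure/repair sequences, recomputing `weighted_efficiency` at each step, and averaging `resilience` over the sampled series.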

  • Hydraulic Engineering
    Bei YI, Xiaolian LIU, Xueni WANG, Leike ZHANG, Yu TIAN, Weiwei GUO
    Journal of Tsinghua University(Science and Technology). 2025, 65(10): 1868-1879. https://doi.org/10.16511/j.cnki.qhdxxb.2025.27.030
    Abstract (329) PDF (92) HTML (172)

    Objective: For long-distance and complex water conveyance systems that combine pressurized water and gravity flow, achieving effective water hammer control through individual valve regulation is challenging. It is crucial to implement joint pump-and-valve operation to ensure the system's overall safety performance. Existing research often overlooks water level fluctuations in storage facilities and fails to evaluate the effects of coordinated pump-and-valve operations on the overall system. Furthermore, the interdependence of parameters in multivariable pump-and-valve control creates difficulties in multi-objective optimization and decision-making. Methods: The joint optimal operation model for pumps and valves is developed based on hydraulic calculations, considering three key objectives: minimizing the maximum water hammer pressure, maximizing the minimum water hammer pressure, and minimizing water level fluctuations in elevated pools. The model is solved using the non-dominated sorting-based multi-objective coati optimization algorithm (NSCOA), while optimization schemes are realized through improved ideal-point-based decision (IIPBD) derived from the computationally generated Pareto front. A long-distance water transmission project serves as the research object, where NSCOA, NSGA-Ⅱ, and NSSA are used to solve its joint optimal pump-and-valve operation. Reasonable parameter settings for NSCOA are determined to solve the model. The optimal solution is obtained using IIPBD, validating the superiority of the NSCOA-IIPBD optimization decision-making method. Results: The performance of NSCOA, NSGA-Ⅱ, and NSSA was evaluated using hypervolume (HV) and spacing (SP) indices across ZDT1 to ZDT4 and ZDT6 test functions, confirming the superiority of NSCOA. 
At the same time, the NSCOA parameters for computing the joint optimal operation of pumps and valves are reasonably set: a population size of 50, an external archive size of 50, and 75 iterations. Comparisons with NSGA-Ⅱ and NSSA further demonstrated the effectiveness of NSCOA in solving this problem. On this basis, the optimal scheme is determined from the Pareto frontier calculated by NSCOA using IIPBD. Compared with the current scheme, the optimal scheme shortened the fast-closing time, extended the slow-closing time of the valve after the pump, and coordinated pump time intervals with the terminal valve response. These adjustments resulted in a 14.40% reduction in maximum pressure and a 70.37% decrease in water level fluctuation in the elevated pool. These findings confirm the reliability of NSCOA for optimizing the joint optimal operation model of pumps and valves under phased shutdown of pumps. Conclusions: A widely distributed Pareto frontier solution set can be obtained by solving the joint optimal operation model of pumps and valves with NSCOA. Using IIPBD, an optimal scheme with significantly lower pressure and water level fluctuations compared to the current scheme was achieved. The NSCOA-IIPBD method provides a more efficient and feasible scheme for the multi-objective solution and decision-making of the joint optimal operation of pumps and valves in long-distance complex water transmission systems.
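The ideal-point decision step can be illustrated generically: normalise each objective over the Pareto front and pick the solution nearest the all-minimised ideal point. This is a plain sketch of the idea, not the paper's improved IIPBD weighting:

```python
def ideal_point_choice(front):
    """Index of the normalised Pareto solution nearest the ideal point.

    front: list of objective tuples, all objectives to be minimised.
    """
    m = len(front[0])
    lo = [min(s[k] for s in front) for k in range(m)]
    hi = [max(s[k] for s in front) for k in range(m)]

    def dist2(s):
        # Squared distance to the ideal point (origin after normalisation)
        return sum(((s[k] - lo[k]) / (hi[k] - lo[k])) ** 2
                   if hi[k] > lo[k] else 0.0
                   for k in range(m))

    return min(range(len(front)), key=lambda i: dist2(front[i]))
```

On a front with two conflicting objectives, the method favours balanced compromise solutions over the extremes.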

  • Advanced Ocean Energy Technology
    Yuqi JIAO, Dongsheng QIAO, Guoqiang TANG, Lin LÜ, Jinping OU
    Journal of Tsinghua University(Science and Technology). 2025, 65(8): 1455-1464. https://doi.org/10.16511/j.cnki.qhdxxb.2025.27.029
    Abstract (328) PDF (51) HTML (232)   CSCD(1)

    Objective: Large-diameter monopiles are the primary foundations for offshore wind turbines. However, in challenging marine hydrodynamic environments, flow disturbances around these monopiles often cause significant scour in the adjacent sandy seabed. This scour reduces the effective embedment depth, increases the length of the cantilever section of the monopiles, initiates sediment transport, and modifies the consolidation state of the underlying soil. These changes weaken monopiles' lateral bearing capacity and affect wind turbines' overall dynamic responses. Consequently, developing an accurate and efficient method to assess scour effects on the lateral bearing capacities and dynamic responses of monopiles is imperative. Methods: In this research, finite element models of pile-soil interactions after scour equilibrium were developed in Abaqus; these models integrate a cyclic dynamic hypoplastic constitutive model that captures the mechanical behavior of sand under complex loading paths and accounts for soil consolidation states. Turbulent wind loads and irregular wave loads acting on wind turbine foundations were computed using OpenFAST and Abaqus/Aqua, respectively. The numerical simulation unfolds in three phases: 1) The first phase involves assigning the initial stress fields and applying gravity loads to the complete pile-soil model to achieve geostatic equilibrium with the soil in a normally consolidated state. 2) The second phase involves removing soil elements within a predefined scour depth to simulate the unloading process, shifting the underlying soil to an over-consolidated state. 3) The third phase involves imposing the turbulent wind and irregular wave load on the monopiles to evaluate horizontal dynamic responses, accounting for scour effects. The pile-soil interaction model was validated using centrifuge test data. 
Based on this model, the soil flow mechanisms of monopiles under horizontal cyclic loads after scour equilibrium were analyzed, revealing the impacts of changing stress histories in remaining soils and local scour depths on the horizontal bearing capacity responses of cyclically loaded monopiles. Results: Numerical analysis results reveal the following key findings: 1) Scour significantly accelerates deformation accumulation in monopiles and reduces the lateral stiffness of pile-soil interactions. At identical scour depths, peak horizontal displacement at the mudline is twice as high for global scour compared to local scour. 2) Scour-induced changes in soil consolidation states enhance the remaining soil's shear strength and compressive resistance. Assessing post-scour horizontal displacement responses using pile-soil interaction stiffness derived from pre-scour soil parameters overestimates peak displacement by approximately 23%. 3) The influence of scour depth and lateral extent on pile-soil interactions is confined to a wedge-shaped failure zone surrounding the monopile. The zone's width and depth scale linearly with increasing local scour depth. Conclusions: The finite element analysis models of pile-soil interactions developed in this study are effective for evaluating scour impacts on the dynamic response of monopile foundations under cyclic loading. Unlike API and DNV standards, which only account for scour by simply reducing foundation embedment depth, this study highlights the critical role of scour-induced changes in soil consolidation state; incorporating them further reduces monopile displacement responses.

  • Mechanical Engineering
    Meng LI, Zehua YANG, Rukang WU, Yu CHEN, Bijun WU, Yanqin ZHANG
    Journal of Tsinghua University(Science and Technology). 2025, 65(5): 912-920. https://doi.org/10.16511/j.cnki.qhdxxb.2024.21.029
    Abstract (324) PDF (70) HTML (136)

    Objective: The square-shaped dual-chamber floating oscillating water column (OWC) wave energy converter is designed to convert wave energy through the heave motion of its floating structure. This device uses airflow channeled from the dual chambers to drive a turbine generator, making it essential to investigate its primary energy conversion characteristics. Methods: Numerical calculations and experimental tests were conducted to study the model's capture performance. Hydrodynamic software was used to simulate the response of the dual-chamber floating OWC wave energy model under different wave conditions. Regular wave experiments verified the accuracy of these simulations and evaluated the model's performance, while irregular wave experiments assessed its capture performance in real marine environments. Results: The numerical analysis indicated that the motion response of the dual-chamber wave energy model peaks near the heave natural period of the floating body, optimizing energy capture. It was also found that the angle between the incident wave direction and the model significantly affects performance. When the chambers are aligned front to back (0° angle), energy capture is maximized, suggesting this arrangement is the most effective. To verify the numerical calculations and assess the actual performance of the wave energy model, regular wave experiments were carried out. These experiments demonstrated that when the dual chambers of the floating OWC wave energy model are arranged front to back, the capture performance is superior compared to the left-right arrangement. The optimal capture performance periods for the front and back chambers of the model are not aligned, allowing the front-to-back chamber arrangement to broaden the range of optimal response periods, thereby enhancing the system's overall energy capture efficiency. 
Additionally, to evaluate the capture performance of the dual-chamber floating OWC wave energy model in real marine environments, irregular wave experiments were conducted. The experimental results showed a maximum capture width ratio of 41.84% under irregular wave conditions, approximately 84% of the model's performance under regular waves. This indicates that the dual-chamber wave energy model maintains strong energy capture capability and stability even in challenging marine conditions. Conclusions: Combining the results of numerical calculations and experimental tests, the dual-chamber floating OWC wave energy model exhibits excellent energy conversion performance across different wave conditions. The innovative front-to-back arrangement of the dual chambers significantly enhances capture performance and broadens the range of optimal response periods. This research provides new ideas and methods for the development of wave energy conversion technology. The results have significant implications for optimizing and practically applying wave energy solutions, and they are expected to promote the development and utilization of marine renewable energy, thereby contributing positively to the advancement of green energy.
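The capture width ratio reported above relates absorbed power to the incident wave energy flux over the device width. A sketch using the standard deep-water flux per unit crest width, P = ρg²H²T/(32π), follows; the experiments' exact wave theory and normalisation may differ:

```python
import math

def wave_power_per_width(H, T, rho=1025.0, g=9.81):
    """Deep-water regular-wave energy flux per unit crest width (W/m).

    H: wave height (m); T: wave period (s); rho: seawater density.
    """
    return rho * g**2 * H**2 * T / (32 * math.pi)

def capture_width_ratio(P_absorbed, H, T, device_width):
    # Absorbed pneumatic power over incident flux across the device width
    return P_absorbed / (wave_power_per_width(H, T) * device_width)
```

A ratio of 0.4184 thus means the device absorbs the energy arriving on roughly 42% of the wave crest length it occupies.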

  • Hydraulic Engineering
    Mingchao LI, Yuangeng LÜ, Qiubing REN, Leping LIU, Zhiyong QI, Dan TIAN
    Journal of Tsinghua University(Science and Technology). 2025, 65(10): 1838-1852. https://doi.org/10.16511/j.cnki.qhdxxb.2025.21.020
    Abstract (322) PDF (85) HTML (188)

    Objective: Timely hazard detection during the construction of super high arch dams is crucial for reducing engineering accidents and ensuring project safety. Hazards in such settings are often hidden and diverse, making them difficult to detect during early-stage conventional inspections. However, fragmented hazard records at construction sites are crucial for identifying and detecting issues early, helping management personnel to promptly assess potential engineering risks. Methods: This study proposes an intelligent hazard identification method for super high arch dam construction using enhanced semisupervised contrastive learning. A multisource classification model for hazard text is developed to categorize and assess hazard types and levels from fragmented hazard texts, establishing a systematic hazard inspection framework. The model is built on the Transformer architecture, effectively capturing the semantic and positional relationships inherent in hazard descriptions. A contrastive learning module improves the Transformer by leveraging interclass relationships to amplify the differences between dissimilar samples. This significantly enhances classification accuracy, especially for multi-source attribute hazard categories. The method integrates self-supervised and supervised learning, emphasizing interclass distinctions while making use of label content. A memory bank mechanism decouples training batches, enabling comprehensive collection of negative samples, thereby enhancing the performance of semisupervised contrastive learning. Finally, the hazard category and level identification results are combined to visualize safety hazard distributions. Latent Dirichlet allocation (LDA) is used to extract latent clues for hazard risk inspection, constructing structured hazard inspection tables for different levels of risk. These tables allow managers to prioritize inspections in high-risk areas, enhancing the efficiency and precision of hazard detection. 
Results: The results show that the proposed classification model significantly improves performance on hazard type and hazard level recognition tasks, with F1 score improvements of 4.9% and 3.3%, respectively. Multidimensional experiments were conducted to validate its significant advantages: 1) Analyzing the influence of different memory bank sizes on model performance highlighted the importance of decoupling training batches and of selecting a sufficient number of negative samples; 2) Ablation experiments validated the contribution of each module to the model's performance improvement; 3) Dimensionality reduction clustering using t-SNE visually confirmed the contrastive learning module's ability to effectively group similar classification samples; 4) A comparison of infoNCE loss between this model and the base Transformer demonstrated the practical benefits of the contrastive learning module during training; 5) Performance comparisons with common classification models showed the proposed model's significant advantages in overall accuracy. The hazard category and level identification results are then used to extract key topic information with the LDA topic model, revealing the potential risks present in the current hazard categories and levels. Taking "High-altitude fall" as an example, key topic clustering was applied to compile a complete hazard inspection clue table structured by hazard levels. Conclusions: The method enhances the precision and systematization of hazard identification during the construction of super high arch dams. It introduces a refined multi-source attribute hazard identification method, providing a novel approach to intelligent safety management in engineering and promoting the development of hazard management toward automation and intelligence.
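The infoNCE comparison mentioned above can be sketched for a single anchor, with memory-bank entries supplying the negatives; the vector shapes and temperature value here are illustrative assumptions:

```python
import numpy as np

def info_nce(anchor, positive, bank, tau=0.07):
    """InfoNCE loss for one anchor against one positive and bank negatives.

    All vectors are L2-normalised; similarity is the dot product.
    """
    def unit(v):
        return v / np.linalg.norm(v)

    a, p = unit(anchor), unit(positive)
    negs = np.stack([unit(n) for n in bank])
    logits = np.concatenate([[a @ p], negs @ a]) / tau
    logits -= logits.max()                    # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())
```

The memory bank lets the number of negatives grow beyond the batch size, which is the decoupling effect the ablation above examines.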

  • Mechanical Engineering
    Tao GUO, Wengang GAN, Haiyang WANG, Siyuan LIU
    Journal of Tsinghua University(Science and Technology). 2025, 65(5): 921-929. https://doi.org/10.16511/j.cnki.qhdxxb.2024.27.041
    Abstract (320) PDF (76) HTML (108)

    Objective: The distributing pipe in a Pelton turbine serves as a crucial water supply component responsible for regulating flow and inducing diversion. Its special structure, however, can lead to adverse effects such as flow separation and Dean vortices, causing hydraulic losses; these losses can vary with changes in the upstream head, further affecting the incoming flow conditions. Traditionally, the pressure drop method has been primarily utilized to assess these losses, yet it fails to pinpoint the exact locations where significant hydraulic losses occur. Methods: This study investigates the hydraulic and loss characteristics of the distributing pipe. Utilizing the SST (shear stress transport) k-ω turbulence model, we simulate the flow inside the distributing pipe and analyze the entropy production distribution based on entropy production theory. Then, according to the distribution of the entropy production rate and the flow pattern, the causes of the hydraulic loss in the main channel and bifurcation 2 are analyzed in detail. Entropy production—indicative of irreversible dissipative effects during fluid flow—effectively highlights high hydraulic loss areas by converting lost mechanical energy into internal energy. Results: Results show a remarkable increase in total entropy production within the pipe, with values rising from 210.999 to 4 614.980. Specifically, entropy production in the main channel increases from 145.549 to 3 477.351, and in bifurcation 2 from 38.857 to 717.608. Under high-speed flow conditions, the separation between internal and external flows becomes distinct, particularly when fluid navigates bends. The hydraulic loss is dominated by fluctuation entropy production, which accounts for more than 50% of the total. The main flow zone and bifurcation 2 are the primary sites of hydraulic loss, accounting for approximately 90% of the total loss, whereas bifurcations 1 and 3 experience relatively small losses. 
Conclusions: Comparative analysis of entropy generation rate contours, streamline plots, and pressure fluctuation curves highlights that high entropy generation areas experience significant pressure pulsations, accompanied by adverse flow phenomena such as Dean vortices and flow separation. At bifurcation 2, high-speed fluid is diverted and squeezed outward, creating a low-pressure vortex on the inner side and inducing significant hydraulic loss. At the bend position, the fluid tends to flow outward, producing high pressure on the outer side and low pressure on the inner side of the ring pipe, and hence high hydraulic loss on the inside. These phenomena create large pressure gradients and significant pressure fluctuations, affecting flow stability. Furthermore, optimization strategies are proposed for the distributing pipe design: adding flow-diversion baffles at bifurcation points to stabilize flow patterns and reduce vortices, and alleviating flow separation by increasing the number of nozzles and reducing curvature. This study employs numerical computation to investigate the mechanisms of hydraulic loss generation within the distributing pipe and meticulously delineates areas of high hydraulic losses, offering hydro turbine developers optimization strategies.
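The component values reported above allow the "approximately 90%" share to be checked directly from the high-flow case:

```python
# Entropy production values reported for the high-flow condition
total = 4614.980
main_channel = 3477.351
bifurcation_2 = 717.608

share = (main_channel + bifurcation_2) / total
print(round(share, 3))  # ≈ 0.909, i.e. roughly 90% of the total loss
```

The remainder, just over 9%, is what bifurcations 1 and 3 and the other regions contribute together.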

  • Public Safety
    Qiushuang YAN, Chenqing FAN, Xintong ZHAO, Jie ZHANG
    Journal of Tsinghua University(Science and Technology). 2025, 65(6): 1090-1101. https://doi.org/10.16511/j.cnki.qhdxxb.2025.22.029
    Abstract (310) PDF (49) HTML (187)

    Objective: The Northwest Pacific Ocean is prone to typhoons; hence, high-resolution wind field data are essential for understanding their formation and aiding disaster prevention in China's coastal areas. Spaceborne synthetic aperture radar (SAR) is currently the only means for detecting large-scale, fine-grained typhoon wind fields. Traditional SAR techniques have limitations, such as challenges in determining wind direction and reliance on external inputs for wind speed. Most methods utilize single-polarization data, restricting their ability to capture a broad range of wind speeds. Although deep learning has shown promise in SAR wind field retrieval, research in this area remains preliminary and often neglects the benefits of dual polarization images or the specific challenges posed by typhoon conditions. Therefore, it is necessary to further explore the application potential of dual-polarization SAR data and deep learning technology in obtaining sea surface wind fields in typhoon sea areas covering a wide range of wind speeds. Methods: In this paper, we propose a wind field retrieval model based on deep learning using dual-polarization Sentinel-1 SAR data. We attempt to effectively capture spatial features at various positions in SAR images, enhance the significance of key features, mitigate interference from irrelevant information, and improve retrieval efficiency. We integrate attention mechanisms such as SKNet, ECANet, and CBAM into the ResNet18 architecture to develop an E-SKNet_wind model. The performance of the developed models under different polarizations (VV, VH, and VV+VH) is systematically evaluated through comparisons with the ResNet18_wind models and the results reported in the literature. Results: The statistical results show that the precision of both types of deep learning models (E-SKNet_wind and ResNet18_wind) is higher in VV+VH dual polarization data than in either VV or VH single polarization data. 
Further, the dual-polarization E-SKNet_wind model performs better than the dual-polarization ResNet18_wind model. For wind speed retrieval, the root mean square error (RMSE) of the dual-polarization E-SKNet_wind model is 1.49 m·s⁻¹, which is smaller than that of the dual-polarization ResNet18_wind model (1.86 m·s⁻¹). For wind direction retrieval, the dual-polarization E-SKNet_wind model has an RMSE of 19.03°, which is smaller than that of the dual-polarization ResNet18_wind model (22.38°). In addition, the dual-polarization E-SKNet_wind model performs better than nearly all traditional methods and most existing machine learning and deep learning wind speed retrieval models. However, a few models may outperform ours, potentially because of a narrower wind speed range or the inclusion of external wind direction data as an input parameter. The results of a case analysis show that the retrieval results of the typhoon wind field from the dual-polarization E-SKNet_wind model follow a trend consistent with the wind field from the ERA5 reanalysis across almost all regions. However, in regions characterized by exceptionally low wind speeds, such as near the center of the typhoon, there is a significant overestimation of wind speed values. This discrepancy results in a discontinuity in the retrieved wind speed profile. Future research is necessary to address these issues. Conclusions: The dual-polarization E-SKNet_wind model effectively leverages spatial texture features from dual-polarization SAR images to precisely extract wind speeds and directions without any external input, thus overcoming the limitations of single-polarization data. This model accurately extracts sea surface wind fields from SAR images for the Northwest Pacific Ocean typhoon sea area.
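A wind-direction RMSE such as the 19.03° reported above only makes sense if angular differences are wrapped; otherwise a retrieval of 359° against a reference of 1° would count as a 358° error. A minimal Python sketch of such a circular RMSE (illustrative values only; the paper's evaluation code is not shown here):

```python
import math

def angular_error(pred_deg, ref_deg):
    """Smallest signed difference between two bearings, wrapped into [-180, 180)."""
    return (pred_deg - ref_deg + 180.0) % 360.0 - 180.0

def direction_rmse(pred, ref):
    """RMSE over paired wind directions, respecting the 360-degree wrap."""
    errs = [angular_error(p, r) ** 2 for p, r in zip(pred, ref)]
    return math.sqrt(sum(errs) / len(errs))

# 359 deg vs. 1 deg is a 2-degree error once wrapped, not 358 degrees.
print(direction_rmse([359.0, 10.0], [1.0, 12.0]))  # 2.0
```

Because the differences are squared, the wrap convention at exactly ±180° has no effect on the resulting RMSE.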

  • Intelligent Construction
    Haichen ZHANG, Jinyu LU, Zhicheng SHA, Haiying ZHANG, Jun ZOU
    Journal of Tsinghua University(Science and Technology). 2025, 65(7): 1229-1238. https://doi.org/10.16511/j.cnki.qhdxxb.2025.26.027
    Abstract (305) PDF (165) HTML (121)

    Objective: Active control is a critical aspect of adaptive structures. The cable dome structure is a predominant form of large-span spatial architecture, with its equilibrium state representing the interaction between force and form. Consequently, the dome structure is controllable and serves as an ideal model for adaptive structures. Shape memory alloy (SMA), a typical smart material, demonstrates excellent shape memory effects and is frequently utilized as a driving mechanism in active control systems. This article explores the application of SMA in the adaptive cable dome structure to enhance structural form control, improve control accuracy, reduce control complexity and controller weight, and facilitate intelligent control. Methods: This paper uses the Geiger cable dome structure as a case study. First, a three-dimensional finite element model is created using ANSYS APDL software to assess the structural control requirements. Next, uniaxial tensile tests are performed on SMA wires to evaluate their material properties. According to the identified control requirements and the material properties of the SMA wire, a tendon designed for active control is developed and manufactured. A key design criterion is to ensure that the SMA tendon produces a specific plastic strain under load, which must remain below 8%. Subsequently, experimental research is conducted to evaluate the recovery performance of the SMA tendon. The SMA tendon is connected in series with steel wire rope to create the active control unit, which then replaces the external diagonal cables in the cable dome structure for active control testing. The performance of the SMA-based control method is compared with mechanical control methods to assess its effectiveness. Results: When the initial loads were set at 2 000, 2 500, and 3 000 N, the strain in the SMA tendon reached 4.10%, 4.54%, and 4.67%, respectively. 
Upon heating to 120 ℃, the tendon generated a recovery strain per unit heated length of 0.1462, 0.1554, and 0.1655 m⁻¹, respectively. Additionally, the rate of recovery strain during heating depended on the martensite volume fraction, which varied with temperature. Compared with mechanical control methods, the cable dome structure controlled by SMA exhibited smaller errors, with smoother curves for internal forces of units and displacements of nodes. Furthermore, the finite element simulation closely aligned with the experimental results, effectively describing the control process of the structure. When the length of the external diagonal cable was shortened by 0.90 mm, the internal force in the structural spine cable increased by more than 25%. Conclusions: This research demonstrates that SMA can function as an active control driver for cable elements in cable dome structures, providing a stable and reliable control process. Compared with mechanical control methods, the SMA control method is more convenient and easier to manage in terms of accuracy; however, the control rate is dependent on the martensite volume fraction. The SMA tendon used in this study is relatively thick, so temperature propagates from the exterior to the core with a lag, requiring a certain stabilization time. Adjusting the inclined cables outside the cable dome can effectively control the shape of the cable dome structure and alleviate the relaxation of the spine cables.
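The temperature dependence of the martensite volume fraction noted above is commonly described by cosine transformation kinetics (the Liang-Rogers model). A Python sketch of the heating (reverse-transformation) branch, with hypothetical austenite start/finish temperatures rather than the values measured for this tendon:

```python
import math

def martensite_fraction(T, A_s=70.0, A_f=110.0, xi0=1.0):
    """Liang-Rogers cosine model for the reverse (heating) transformation.

    T    : current temperature (deg C)
    A_s  : austenite start temperature (deg C)  -- hypothetical value
    A_f  : austenite finish temperature (deg C) -- hypothetical value
    xi0  : martensite fraction before heating begins
    Returns the martensite volume fraction, in [0, xi0].
    """
    if T <= A_s:
        return xi0
    if T >= A_f:
        return 0.0
    return 0.5 * xi0 * (math.cos(math.pi * (T - A_s) / (A_f - A_s)) + 1.0)
```

The fraction stays at its initial value below A_s, falls along a cosine between A_s and A_f, and vanishes above A_f, which is why the recovery-strain rate varies during heating.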

  • Medical Equipment
    Kui WANG, Xiangbao ZHOU, Tianhao ZHOU, Huajun LI, Yuhang QIU, Qingyang WEI
    Journal of Tsinghua University(Science and Technology). 2025, 65(5): 1000-1008. https://doi.org/10.16511/j.cnki.qhdxxb.2025.22.003
    Abstract (299) PDF (77) HTML (92)

    Objective: Nuclear medicine imaging is a dynamic imaging technique that enables researchers to analyze physiological and pathological processes, especially in the brain. When imaging awake and unconstrained mice, the free movement of their heads can cause motion artifacts in the nuclear medicine images. These artifacts reduce image resolution, decrease the apparent concentration of the tracer in the region of interest, and affect the quantification of the standardized uptake value and the estimation of tracer kinetic model parameters. Therefore, the elimination of head motion artifacts is crucial for improving the quality of brain positron emission tomography (PET) images. In recent years, some researchers have been using markers attached to the mice's heads to track their movement. However, attaching markers to the mice's heads may cause discomfort and anxiety. In addition, the freedom of movement of the head during imaging can lead to relative sliding or detachment of the markers, resulting in incorrect motion estimation. Methods: In this study, we design a mouse head motion tracking system based on the you only look once (YOLO) v5 algorithm. This system can accurately monitor the position and posture of the mice's heads in real time, providing precise motion information for motion correction in nuclear medicine images. In contrast to traditional motion tracking systems, this system does not require markers attached to the mice's heads, effectively addressing the limitations of previous tracking methods. The proposed motion tracking system consists of three main stages, namely feature point recognition and positioning, three-dimensional reconstruction of feature points, and calculation of the rotation and translation parameters. First, the YOLO v5 algorithm automatically identifies and locates existing feature points on the mice's heads to obtain the pixel coordinates of each feature point. 
Then, using the parallax effect and triangulation principles, we reconstruct the three-dimensional coordinates of the feature points in the world coordinate system. Finally, we calculate the Euler angles of the mice's heads using the symmetry of the feature points and utilize inter-frame pose differential methods to compute the translation and rotation parameters of the head pose change between adjacent frames. Results: To verify the performance of the designed motion tracking system, we place a mouse phantom in a stationary position and measure the changes in its head position and posture angles using the designed system. The experimental results show that for the X, Y, and Z axes, the root-mean-square errors of the translational degrees of freedom are 0.04, 0.19, and 0.03 mm, whereas the root-mean-square errors of the rotational degrees of freedom are 0.58°, 0.34°, and 2.03°. We use MATLAB to compute histogram statistics of the detected translation and rotation parameters, all of which conform to a normal distribution. Conclusions: The results indicate that the designed motion tracking system can accurately monitor the movement of the mice's heads during nuclear medicine imaging. The detected parameters of six degrees of freedom conform to a normal distribution, further confirming the reliability of the system. Moreover, this system does not rely on markers, effectively avoiding the risk of marker detachment. The motion data obtained through this system can be used to compensate for and correct motion artifacts of the mice's heads in nuclear medicine imaging, thereby enhancing the quality of nuclear medicine images.
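The parallax-based reconstruction step can be illustrated for the simplest case of a rectified stereo pair, where depth follows from disparity in closed form. A Python sketch with hypothetical calibration values (the system's actual camera model and parameters are not given here); pixel coordinates are assumed to be measured from the principal point:

```python
def triangulate(u_left, u_right, v, f_px, baseline_m):
    """Recover a 3-D point from a rectified stereo pair via the parallax effect.

    u_left, u_right : horizontal pixel coordinates of the same feature
    v               : shared vertical pixel coordinate after rectification
    f_px            : focal length in pixels (hypothetical calibration)
    baseline_m      : camera separation in meters (hypothetical calibration)
    """
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("feature must lie in front of both cameras")
    z = f_px * baseline_m / disparity  # depth from similar triangles
    x = u_left * z / f_px              # lateral offset
    y = v * z / f_px                   # vertical offset
    return x, y, z

# With a 700 px focal length and a 10 cm baseline, a 35 px disparity
# places the feature at a depth of 2 m.
point = triangulate(70.0, 35.0, 0.0, 700.0, 0.10)
```

Repeating this for several feature points per frame yields the 3-D point sets from which inter-frame rotation and translation parameters can then be computed.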

  • Public Safety
    Xing LI, Hao WU, Mengqi YUAN
    Journal of Tsinghua University(Science and Technology). 2025, 65(6): 1060-1069. https://doi.org/10.16511/j.cnki.qhdxxb.2025.22.016
    Abstract (293) PDF (54) HTML (128)

    Objective: An important issue that fiber-reinforced composites face in applications is their sensitivity to low-velocity impact. Low-velocity impact not only causes material penetration and perforation but also introduces internal damage in the forms of delamination, matrix cracking, and fiber breakage. This reduces the damage tolerance of fiber-reinforced composites, posing a potential threat to their protective performance. The research objective of this paper is to enhance the impact resistance through fiber hybridization and to explore the influence of the fabric structure on the impact resistance of carbon-Kevlar fiber intraply hybrid composites. Methods: In this study, carbon-Kevlar fiber intraply hybrid composites with different fabric structures were fabricated using the vacuum assisted resin infusion (VARI) process. Through low-velocity impact tests using a drop hammer under 30- and 60-J energy, the reinforcement effect of intraply hybridization on the composites and the influence of the fabric structures on the mechanical properties and impact resistance of carbon-Kevlar fiber intraply hybrid composites were investigated. Low-velocity impact was simulated through an impact simulation model of hybrid fiber composites, and the damage mechanism of the specimens under impact loads was explored. The impact response behaviors and impact resistance of each specimen were comparatively analyzed using the load-displacement, load-time, and energy-time curves. Results: Results show that: (1) At an energy of 30 J, the specimen with the maximum peak load is the plain warp carbon weft Kevlar (P-CC/KK) specimen, reaching 3.30 kN, which is 72.77% and 106.25% higher than the maximum peak loads of plain carbon (P-C) and twill warp carbon weft Kevlar (T-CC/KK) specimens (with the minimum peak load), respectively. 
At an energy of 60 J, the specimen with the maximum peak load is also P-CC/KK, reaching 3.68 kN, which is 97.85% higher than the maximum peak load of the P-C specimen (with the minimum peak load). (2) Comparing the maximum deflection of rebounded specimens, the T-CC/KK specimen has the largest deflection at 30- and 60-J energy, reaching 24.74 and 31.92 mm, respectively. (3) From the energy absorption curve, the energy absorption of the hybrid specimens is significantly increased compared with that of P-C. At an energy of 30 J, the plain carbon-Kevlar alternate (P-CK/CK) specimen just stops the impactor, absorbing the most energy (29.54 J), which is 132% higher than the energy absorbed by P-C. At an energy of 60 J, the T-CC/KK specimen absorbs the most energy (59.65 J), which is 324% higher than the energy absorbed by P-C, suffering greater bending deformation and internal damage. (4) The low-velocity impact simulation study based on the finite element model on the P-CC/KK specimens shows that the load-time and load-displacement curves from the tests and simulation results have a high degree of consistency, and errors in the peak loads of the curves and maximum deflections of the specimens are less than 10%. Conclusions: The low-velocity impact resistance of the carbon-Kevlar hybrid composites is significantly improved compared to that of pure carbon fiber. The energy absorption of the hybrid specimens is greatly enhanced compared to that of P-C. During impact, P-CC/KK has the highest peak load and T-CC/KK has the largest deflection. The warp carbon weft Kevlar (CC/KK) structure has better impact resistance than the carbon-Kevlar alternate (CK/CK) structure, and the plain weave structure has superior low-velocity impact resistance compared to the twill weave structure. The established finite element model has relatively high accuracy, and the numerical simulation model well reflects the in-plane damage of the composites under the nonpenetration condition.
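The absorbed-energy values quoted above correspond to the area under a specimen's load-displacement curve; with loads in kN and displacements in mm, the product is already in joules. A minimal trapezoidal-rule sketch in Python (illustrative data, not the measured curves):

```python
def absorbed_energy(loads_kN, displacements_mm):
    """Area under a sampled load-displacement curve (kN x mm = J)."""
    energy = 0.0
    for i in range(1, len(loads_kN)):
        mean_load = 0.5 * (loads_kN[i] + loads_kN[i - 1])
        energy += mean_load * (displacements_mm[i] - displacements_mm[i - 1])
    return energy

# A triangular load pulse peaking at 2 kN over a 20 mm stroke absorbs 20 J.
print(absorbed_energy([0.0, 2.0, 0.0], [0.0, 10.0, 20.0]))  # 20.0
```

For a rebounded specimen, the displacement samples retrace toward the origin, so the signed trapezoids automatically subtract the elastically returned energy.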

  • Mechanical Engineering
    Xiaobing FENG, Jun ZHENG, Shangxian YANG, Baiwa PAN
    Journal of Tsinghua University(Science and Technology). 2025, 65(5): 867-881. https://doi.org/10.16511/j.cnki.qhdxxb.2024.22.047
    Abstract (288) PDF (70) HTML (129)

    Objective: Accurate weld seam recognition and automatic tracking control are crucial for ensuring the welding quality and operational efficiency of crawling robots. To achieve the efficient and automatic tracking of curved surface welds on large structural components, this work proposes a crawling robot bidirectional automatic tracking technology based on a single laser sensor and an adaptive weight welding gun cascade control method. Methods: A kinematic model was established for a crawling robot. The methods for estimating distance deviation between the laser system and the weld seam and correcting the angle between the robot and the weld seam were analyzed. By dynamically adjusting the position of the crawling robot with respect to the weld seam, the robot achieved bidirectional automatic tracking along the weld seam. Based on the welding process parameters and weld position information, the welding gun posture and end position were determined. The motion displacement value of the welding gun transmission joint was obtained by solving the inverse kinematics model of the actuator, and the joint motor was adjusted based on the motion displacement value for real-time welding gun calibration. Results: The influence of the distance between the laser system and the center of the robot on the straight weld path tracking was simulated and analyzed. Distances between 35 and 50 cm enabled rapid tracking of the weld seam by the laser system and center of the robot. The initial distance deviation had a small impact on the deviation between the laser system and the weld seam but had a significant impact on the angle correction between the robot and the weld seam. The stability conditions of the cascade control system were analyzed, and the bidirectional tracking performance of the robot along the weld seam was tested at the 5G and 6G welding positions. 
The distance deviation curve between the laser system and the weld seam during the tracking process and the angle correction curve between the robot and the weld seam were obtained. The distance deviation between the laser system and the weld seam was less than 2 cm, and the angle correction between the robot and the weld seam was approximately 1°. Conclusions: To ensure the stability of the cascade control system, the distance deviation between the laser system and the weld seam should be converted to the distance deviation of the robot tail for proportional-integral-derivative (PID) input. The crawling robot motion control system achieves bidirectional automatic tracking along the weld seam in the 5G and 6G test scenarios, and the system has accurate welding gun positioning capability. Prealignment of the weld should be done before the welding operation of the crawling robot to further ensure operating stability.
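The PID input conversion described in the conclusions feeds a standard discrete proportional-integral-derivative law. A Python sketch with hypothetical gains (the tuned gains of the actual controller are not reproduced here):

```python
class PID:
    """Discrete proportional-integral-derivative controller."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        """Return the control output for the current deviation sample."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# With purely proportional action, the steering command is simply a
# scaled copy of the converted robot-tail distance deviation.
controller = PID(kp=2.0, ki=0.0, kd=0.0, dt=0.1)
```

In the cascade arrangement, the output of this loop becomes the setpoint for the inner joint-motor loop rather than a direct actuator command.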

  • Intelligent Construction
    Chenyu LIU, Jing WU, Yunfan GU, Luqi XIE
    Journal of Tsinghua University(Science and Technology). 2025, 65(7): 1209-1220. https://doi.org/10.16511/j.cnki.qhdxxb.2025.26.028
    Abstract (281) PDF (50) HTML (180)

    Objective: Approximately 70% of the time spent during the construction of a modular building is devoted to the onsite assembly of the prefabricated components. Because of the limited accuracy of traditional crane motion control, insufficient rigidity of the boom and rope, and susceptibility of cranes to outdoor environments, accurate placement of components in the target area using a crane alone is difficult. Repeated positioning of a component with the assistance of multiple workers is required before installing the component. Repeated steps of lifting and lowering components rely on a considerable amount of labor and affect construction efficiency. To solve the problems of traditional installation methods that rely excessively on manual labor for the positional adjustment of lifted components, a robot-assisted component installation system is proposed in this study. Methods: Construction sites are far more complex than structured factory manufacturing environments; therefore, construction robots have distinct technical characteristics compared with industrial robots. A robot-assisted installation method was designed based on an analysis of the technical characteristics of construction robots. After the initial alignment of a component using a crane, the two robots cooperate to adjust the position and orientation of the component accurately. Thus, automatic installation of the components can be realized. The procedure for conducting the entire installation-assisted task of the robot was illustrated in pseudocode as a series of actions, including positioning the robots and the reserved hole, threading the hole, and pushing the components. 
To reduce the cycle time and costs for the development of such a robot, a prototype model of a robot-assisted component installation system was created on a computer, and virtual prototyping was used to simulate and analyze the kinematic and dynamic properties of the model within a virtual system with real environmental properties. Moreover, the calculation results of the virtual prototype provided important data support for equipment selection and component design in subsequent test sessions. Results: The motion states of the two robots and the changes in contact force between the tools mounted on their end flanges and the prefabricated component during task execution were accurately modeled. Based on the simulation results, the feasibility of the robots replacing workers to complete the positional adjustment of components and the effective reduction of the robots' end loads were verified. With the robot-assisted component installation system built in the laboratory, which was identical to the virtual prototype model, the test of installing a precast panel in a predetermined area was conducted several times. The installation position deviations observed in all tests were less than the limit values of the quality acceptance standards in China. Conclusions: The actions of the two robots for assisting in the accurate installation of components can be smoothly realized. The simulation calculations and experimental results demonstrate the advancement of the proposed robot-assisted component installation system for improving component installation accuracy and saving labor. Because of severe labor shortages and sharp increases in labor costs, the proposed method provides economic benefits that become more evident as the number of component installation tasks increases. This research on installation-assisted robots offers substantial reference value for ongoing work on construction robots and promising application prospects for practical engineering problems. 
With the future development of intelligent control systems, the automation level of the proposed method can be considerably improved to realize truly unmanned construction.
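The cooperative installation procedure summarized above is, at its core, a fixed sequence of actions executed by the two robots. A Python sketch of that sequence against a mock robot interface; every action and method name here is a hypothetical placeholder rather than the authors' actual API:

```python
class MockRobot:
    """Stand-in for a real robot controller; every action succeeds and is logged."""

    def __init__(self, name):
        self.name = name
        self.log = []

    def do(self, action, target):
        self.log.append((action, target))
        return True  # a real controller would report success or failure

def assisted_installation(robot_a, robot_b):
    """Run the installation-assisted sequence; abort on the first failed action."""
    plan = [
        (robot_a, "position", "reserved_hole"),  # position the robots
        (robot_b, "position", "reserved_hole"),  # relative to the hole
        (robot_a, "thread", "reserved_hole"),    # thread the hole
        (robot_a, "push", "component"),          # cooperatively push the
        (robot_b, "push", "component"),          # component into place
    ]
    for robot, action, target in plan:
        if not robot.do(action, target):
            return False
    return True
```

Encoding the task as an explicit plan keeps the abort-on-failure logic in one place, mirroring how a supervisory controller would sequence the two arms.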

  • Aerospace and Engineering Mechanics
    Fan CHEN, Manyu ZHANG, Zhipeng YANG, Xu ZHU, Yi MO, Hua ZHOU, Zhuyin REN
    Journal of Tsinghua University(Science and Technology). 2025, 65(10): 2000-2016. https://doi.org/10.16511/j.cnki.qhdxxb.2025.27.040
    Abstract (279) PDF (59) HTML (173)

    Objective: Turbulent combustion models embedded in current combustion simulation software have successfully predicted performance across various combustor configurations. However, their accuracy is highly sensitive to model parameters, which must be iteratively adjusted and optimized in engineering practice to accommodate diverse combustor geometries, fuel types, and operating conditions. Thus, analyzing the main control mechanisms of turbulent combustion and establishing effective model parameter calibration and optimization processes are crucial for enhancing the prediction accuracy and reliability of combustion simulation software. Methods: This study selects an unconfined, strongly swirling lean premixed flame as the research object. It uses the active subspace method to analyze the effects of the key model parameters of the delayed detached eddy simulation turbulence model and dynamically thickened flame combustion model on the simulation errors of flame temperature and axial velocity. By identifying the high-dimensional mapping direction with the maximum gradient and retaining the main influencing directions in the parameter space, this method reduces dimensionality, based on which a low-dimensional response surface can be constructed for the multidimensional parameter input space. Furthermore, this study proposes a multimodel parameter optimization method that combines the active subspace method, a simple genetic algorithm, and the nondominated sorting genetic algorithm Ⅱ (NSGA-Ⅱ). Specifically, in the analysis of the main control mechanisms, the active subspace method is applied to the TECFLAM flame with the typical swirling premixed characteristics of combustors. Simulation calculations are performed under different values of parameters such as the turbulent dissipation coefficient and the maximum flame thickening factor. 
This helps in identifying the main control mechanisms by which these parameters affect simulation accuracy for target variables such as flame temperature and axial velocity, thereby revealing the parameter optimization direction for improving the accuracy of turbulent combustion simulations. Moreover, the proposed multimodel parameter optimization method for combustor simulations is used to optimize seven key turbulent combustion model parameters of the delayed detached eddy simulation turbulence model and dynamically thickened flame combustion model, including the turbulent dissipation coefficient and maximum flame thickening factor, for a typical swirling premixed flame simulation case. Results: Results show that (1) the maximum flame thickening factor is the primary model parameter controlling the temperature and axial velocity errors of the swirling premixed flame. (2) Calibrating the key parameters of the turbulent combustion model via the optimization process reduces the average temperature error at critical sections by 7.58% and the average axial velocity error by 42.60% when using the simple genetic algorithm alone. When using elitist NSGA-Ⅱ, the average temperature error at critical sections decreases further by 1.08%, and the average axial velocity error reduces by 2.96%. (3) The optimized model parameters significantly enhance simulation accuracy for typical swirling premixed flames, verifying the effectiveness of the proposed method. Conclusions: The proposed multimodel parameter optimization method effectively improves simulation accuracy for typical swirling premixed flames. It not only resolves the parameter optimization problems of turbulent combustion models with swirling premixed flame characteristics but also offers a new approach for circumventing multiparameter optimization issues in more complex two-phase simulations of combustors, including atomization, evaporation, turbulence, and combustion models.
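At its core, the active subspace method eigendecomposes the averaged outer product of output gradients in parameter space; the dominant eigenvector marks the direction along which the simulation error varies most. A two-parameter Python illustration solved in closed form (toy gradients, not the seven-parameter combustor study):

```python
import math

def active_direction(gradients):
    """Top eigenpair of C = mean(g g^T) for two-parameter gradients.

    Returns the unit eigenvector (the active direction) and its eigenvalue,
    using the closed-form solution for a symmetric 2x2 matrix.
    """
    n = len(gradients)
    c11 = sum(g[0] * g[0] for g in gradients) / n
    c12 = sum(g[0] * g[1] for g in gradients) / n
    c22 = sum(g[1] * g[1] for g in gradients) / n
    # Largest eigenvalue of [[c11, c12], [c12, c22]].
    lam = 0.5 * (c11 + c22 + math.hypot(c11 - c22, 2.0 * c12))
    if abs(c12) > 1e-12:
        v = (c12, lam - c11)       # eigenvector from (C - lam*I) v = 0
    elif c11 >= c22:
        v = (1.0, 0.0)             # already diagonal: pick dominant axis
    else:
        v = (0.0, 1.0)
    norm = math.hypot(v[0], v[1])
    return (v[0] / norm, v[1] / norm), lam

# If the output only varies along (3, 4)/5, every gradient is parallel to
# that direction and the active subspace is one-dimensional along it.
grads = [(3.0, 4.0), (-1.5, -2.0), (0.6, 0.8)]
direction, lam = active_direction(grads)
```

In the full method, sorting all eigenpairs by eigenvalue and truncating the small ones yields the low-dimensional subspace over which the response surface is built.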

  • Mechanical Engineering
    Deyong SHANG, Zhan PAN, Shuangfu SUO, Fan ZHANG
    Journal of Tsinghua University(Science and Technology). 2025, 65(7): 1336-1346. https://doi.org/10.16511/j.cnki.qhdxxb.2025.27.016
    Abstract (278) PDF (56) HTML (162)   CSCD(1)

    Methods: To more clearly describe the specific movements of each joint, the local product of exponentials (POE) method was introduced. For ease of analysis, the structure of the robot's passive arms was simplified using screw theory. A kinematic model for the Delta robot was established using the local POE method. The error model of the robot was obtained through the differential mapping of the exponential product. Based on the derived error model, error sources were subdivided into three major categories: structural errors, actuation angle errors, and spherical joint clearance errors. An in-depth analysis was conducted on how each error source affects the end-effector positioning accuracy of the robot when it moves along the X, Y, and Z directions. A Delta robot with active arm lengths of 400 mm and passive arm lengths of 950 mm was selected as the subject for simulation analysis in MATLAB. The square root of the sum of squared errors in the X, Y, and Z directions was used as a composite error to serve as an evaluation criterion. Results: The simulation results showed that assuming all error sources have a magnitude of 0.100 units (lengths in mm; angles in degrees), actuation angle errors had the most significant impact on the end-effector positioning accuracy of the Delta parallel robot, causing a composite error ranging from 1.500 to 2.000 mm. Spherical joint clearance errors caused a composite error of 0.340 mm in the robot. Structural errors exhibited a relatively stable composite error fluctuating around 0.100 mm, with a variation range of approximately 0.010 mm, which can be considered a constant value. Comprehensive analysis indicated that length errors in the active and passive arms significantly influenced end-effector positioning accuracy, with the induced error fluctuations notably larger than those from other sources. 
Additionally, when the magnitudes of error sources were 0.025 mm, 0.050 mm, 0.075 mm, and 0.100 mm, their impacts on robot positioning accuracy increased proportionally. Conclusions: The Delta robot error analysis model based on screw theory and utilizing the local POE method offers a more intuitive and comprehensive approach to analyzing the impact of major error sources on positioning accuracy compared to traditional error modeling methods. This approach effectively avoids issues of singularity and incompleteness. It provides a theoretical reference for the error modeling and analysis of other parallel mechanisms. Based on the assessment of each error source's influence presented in this paper, subsequent error compensation can correct the dominant actuation angle errors more precisely, thereby effectively improving the efficiency and effectiveness of overall error compensation.
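The composite-error criterion above is the Euclidean norm of the axis errors, and a linear error model implies that it scales in proportion to the magnitude of the error sources. A small Python sketch with hypothetical axis deviations (not the simulated Delta robot values):

```python
import math

def composite_error(ex_mm, ey_mm, ez_mm):
    """Square root of the sum of squared X/Y/Z errors, in mm."""
    return math.sqrt(ex_mm * ex_mm + ey_mm * ey_mm + ez_mm * ez_mm)

# Under a linear error model, halving every axis deviation halves the
# composite error, matching the proportional growth observed for
# error-source magnitudes from 0.025 to 0.100 mm.
full = composite_error(1.2, 0.9, 0.6)
half = composite_error(0.6, 0.45, 0.3)
```

Evaluating this norm over a grid of end-effector poses and taking the worst case is a common way to turn per-axis error maps into a single accuracy figure.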