
Top access


  • Advanced Ocean Energy Technology
    Libing ZOU, Mingjun ZHOU, Chao WANG, Xiangyuan ZHENG, Zouduan SU, Junwei LI
    Journal of Tsinghua University(Science and Technology). 2025, 65(8): 1377-1386. https://doi.org/10.16511/j.cnki.qhdxxb.2025.27.039
    Abstract (601) | PDF (218) | HTML (226)

    Significance: Floating wind turbines (FWTs), a revolutionary breakthrough in offshore renewable energy technology, are redefining the boundaries of ocean energy development through innovative technological solutions. Through the collaborative innovation of floating foundations and dynamic anchoring systems, this technology has overcome the water-depth limitations of traditional fixed wind turbines, expanding wind power development into deep-sea areas rich in high-wind-speed resources. Compared with offshore fixed wind turbines, FWTs not only significantly reduce marine ecological disturbance but also, through their potential for large-scale cluster deployment, offer the global energy transition a solution that combines environmental friendliness with production efficiency. This article systematically reviews the current development status of floating wind power technology and analyzes in depth the core pain points constraining its commercialization, including key technical challenges such as dynamic response control, mooring system durability, and life-cycle cost optimization. Of particular note is the milestone breakthrough achieved by China in this field: the "Mingyang-Tiancheng" floating platform, the world's largest single-unit-capacity floating wind turbine system, has opened a new paradigm for far-offshore wind power development and provides an important technical reference for the global iteration of floating wind technology. Progress: Globally, floating wind projects represented by Hywind (spar) and WindFloat (semi-submersible) have completed the transition from experimental prototypes to small-scale commercial application, and their technological level and industrial chain layout lead the world.
In contrast, China's floating wind power remains in the demonstration and verification stage, represented by the 5.5 MW "Yinlinghao" (2021) and the 7.25 MW "Guanlanhao" (2023). Although key technological breakthroughs have been achieved, technological maturity and the supporting industrial chain still need improvement. The field currently faces three development bottlenecks: at the economic level, floating wind technology is not yet mature, research and application costs are high, and grid parity remains distant; at the environmental level, the special working conditions of typhoon-prone areas demand higher unit adaptability; and at the level of industrial synergy, an industrial cluster effect covering design, manufacturing, and operation and maintenance has not yet formed. It is therefore urgent to drive the related industrial chains through technological innovation, gradually reduce development costs, and achieve large-scale commercial application, while promoting the coordinated upgrading of offshore wind power equipment manufacturing and the marine engineering industry and building a full-life-cycle cost control system to lay a technical and economic foundation for commercialization at scale. Conclusions and Prospects: To address these challenges, the "Mingyang-Tiancheng" floating wind platform has made innovative breakthroughs in prestressed high-strength concrete technology, composite lightweight buoy design and construction, intelligent-perception collaborative control, single-point mooring, dual wind turbine technology, and typhoon resistance, reflecting China's emerging leadership in floating wind technology. It combines materials science breakthroughs, intelligent control systems, and ecological design principles.
Future progress will require sustained interdisciplinary collaboration and accelerated global deployment through industrialization to reduce costs. The "Mingyang-Tiancheng" provides valuable practical experience and a technical reference for the future development of floating wind power.

  • Public Safety
    Jiamei ZHOU, Wei LÜ, Jinghui WANG
    Journal of Tsinghua University(Science and Technology). 2025, 65(6): 1050-1059. https://doi.org/10.16511/j.cnki.qhdxxb.2025.22.015
    Abstract (574) | PDF (85) | HTML (368)

    Objective: With increasing globalization, multiple countries participate in supply chains, forming complex supply networks. Frequent natural disasters, geopolitical instability, and global health crises pose unprecedented challenges to traditional supply chain management methods. Local disruptions in the supply chain can spread internally, causing a series of chain reactions, so enhancing supply chain risk resilience and robustness has become a research focus for many scholars. The widespread use of the Internet has enabled rapid information exchange between enterprises, and an increasing number of scholars have recognized the importance of early warning information in preventing supply chain disruptions. Therefore, understanding how information affects the propagation of risks within the supply chain and maximizing the early warning function of information have significant practical implications. Moreover, the heterogeneity in how enterprises respond to early warning information also needs attention. Methods: To capture the propagation of early warning information and disruption risks, a two-layer propagation model coupling risk and information is constructed, in which the upper layer represents the information layer and the lower layer represents the risk layer. Information about a disruption at a lower-layer enterprise is transmitted to upstream and downstream enterprises with a certain probability. After receiving the early warning information, an enterprise transitions into an aware node, and this transition is reflected in the upper-layer network. In this model, nodes in the network can be in five possible states. A microscopic Markov chain (MMC) method is used to analyze the state transitions between nodes and to calculate the risk propagation threshold of the system. Furthermore, the key factors influencing the propagation of disruption risk are analyzed.
An agent-based approach is used for case simulation to validate the model's effectiveness. Numerical analysis of the model reveals that the network structure, network size, extent of risk information propagation in the information layer, and probability of disruption risk propagation are the key factors influencing risk propagation. Financial data from Tesla's supply chain in China are also collected. In the case simulation, an agent-based method is used to study the effects of the information layer network structure, information propagation rate, and risk propagation rate on supply chain resilience. Results: The results show that at low information propagation rates, the scale-free network structure accelerates information dissemination, allowing more enterprises to quickly obtain early warning information, thereby helping the supply chain resist risks and improve resilience. When the information propagation rate exceeds 0.4, small-world network structures propagate risks more efficiently because of their shorter average path lengths. Additionally, three disruption schemes are used to analyze system resilience, revealing that prioritizing the disruption of nodes with higher degrees has the greatest impact on the network, while deliberately attacking nodes with smaller degrees allows the supply chain to maintain higher operational efficiency. This finding suggests that maintaining the robustness of the key nodes in the supply chain is critical for enhancing overall network resilience. Conclusions: Adjusting the supply chain network structure can help improve the risk resilience and robustness of the system. Enhancing enterprises' risk awareness and response strategies can effectively improve supply chain resilience and suppress risk diffusion. Deliberate attacks on hub nodes with high degrees cause the greatest damage to the network system.
Thus, this study provides theoretical support for supply chain management and can serve as a basis for decision-making to improve supply chain risk resilience and optimize management strategies.
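The two-layer dynamic described above (disruption risk spreading on the supply network while early-warning information spreads on an information layer, with warned firms resisting infection) can be sketched as a minimal agent-based simulation. Everything below is an illustrative assumption rather than a parameter from the paper: an Erdos-Renyi graph stands in for the supply topology, the two layers share one graph for brevity, and the rates BETA, LAMBDA, and the awareness discount GAMMA are invented.

```python
import random

random.seed(7)

N = 200          # number of firms (illustrative)
P_EDGE = 0.04    # Erdos-Renyi edge probability (assumed network model)
BETA = 0.30      # per-link disruption transmission probability (assumed)
LAMBDA = 0.50    # per-link early-warning transmission probability (assumed)
GAMMA = 0.40     # awareness multiplies infection probability by this (assumed)

# Build one random supply network; the information layer reuses the same
# topology here, a simplification of the paper's two-layer structure.
adj = [[] for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        if random.random() < P_EDGE:
            adj[i].append(j)
            adj[j].append(i)

disrupted = {0}          # seed a disruption at firm 0
aware = set()

for _ in range(30):      # synchronous update steps
    next_disrupted = set(disrupted)
    next_aware = set(aware)
    for u in disrupted:
        for v in adj[u]:
            if random.random() < LAMBDA:
                next_aware.add(v)        # v receives early-warning info
            p = BETA * GAMMA if v in aware else BETA
            if v not in disrupted and random.random() < p:
                next_disrupted.add(v)    # disruption propagates to v
    disrupted, aware = next_disrupted, next_aware

print(f"disrupted firms: {len(disrupted)}/{N}, aware firms: {len(aware)}/{N}")
```

Sweeping LAMBDA against the final disrupted count reproduces, qualitatively, the protective role of the information layer that the MMC threshold analysis formalizes.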

  • Process Systems Engineering
    Dong QIU, Qiming ZHAO, Yijiong HU, Tong QIU
    Journal of Tsinghua University(Science and Technology). 2025, 65(5): 813-824. https://doi.org/10.16511/j.cnki.qhdxxb.2024.21.034
    Abstract (554) | PDF (188) | HTML (76)

    Objective: In the petrochemical industry, molecular reconstruction is crucial for understanding and optimizing the compositions of complex crude oil and petroleum products. As the first step of process simulation, quality control, and economic evaluation, precise molecular reconstruction approaches usually employ mathematical models to calculate the molecular compositions of petroleum products that align with their macroscopic properties. Traditional molecular reconstruction methods employ the gamma distribution to represent the carbon number distributions of homologs, but the coupling effects between the shape (α) and scale (β) parameters pose notable challenges to interpretability and optimization efficiency. This study addresses these challenges by introducing a novel shape-decoupled parameter method that enhances the model's interpretability and simplifies the optimization process. Methods: The proposed shape-decoupled parameter method modifies the traditional gamma distribution by replacing the shape and scale parameters with two new independent variables: peak position (m) and variance (σ^2). Notably, m provides direct control over the peak of the distribution, whereas σ^2 independently determines its spread or width, effectively eliminating the parameter coupling present in conventional gamma distribution models. To enhance stability and convergence speed during optimization, a multivariate linear regression (MLR) model was employed to estimate the initial parameter values. This regression model was trained on historical molecular composition data to provide reasonable initial values and decrease the probability of being trapped in local minima.
The molecule-type homologous series (MTHS) matrix is used to represent the molecular composition of hydrocarbons, namely paraffins, isoparaffins, olefins, naphthenes, and aromatics (PIONA), with a comprehensive depiction of their multiple homologs. Moreover, an optimization problem was formulated to minimize the prediction errors of the macroscopic properties, including molecular weight, density, PIONA group composition, and true boiling point curves. Upon a comparative analysis of multiple deterministic and heuristic optimization techniques, the differential evolution (DE) algorithm was selected as the optimization tool by virtue of its superior accuracy and robustness. Results: Experimental evaluations showed that the shape-decoupled parameter method outperformed traditional methods in accuracy and optimization efficiency. Specifically, the density error decreased from 0.012 to 0.0059 g/cm^3, and the average percentage relative error for the PIONA group composition also exhibited notable reductions. Moreover, the decoupled approach achieved faster convergence, requiring far fewer iterations (from 1 000 down to as few as 20) without compromising accuracy. This reduction highlights the computational efficiency of the proposed method, a notable advantage in industrial applications with limited computational resources and time. The proposed method also exhibited enhanced robustness in addressing extreme molecular composition distributions, maintaining low errors in peak position and molecular composition predictions. This robustness is particularly evident in scenarios considered challenging by conventional methods, such as distributions with narrow ranges or hydrocarbons with approximately zero components at the boundary. Furthermore, the decoupled method provides better interpretability via independent control of peak position and distribution width.
The overall optimization performance was enhanced by the appropriate integration of the DE algorithm and effective initial parameter estimation by the MLR model. Conclusions: Compared with traditional methods, the proposed shape-decoupled parameter method provides a more interpretable, efficient, and accurate approach to the molecular reconstruction of petroleum products. By reducing the coupling effect between the parameters controlling the peak position and distribution width, this method simplifies the optimization process and achieves superior prediction accuracy and faster convergence. The results indicate the feasibility of its application for complex or extreme homolog distributions of hydrocarbons, revealing its higher reliability and robustness compared with traditional approaches. Future work is expected to focus on incorporating advanced machine learning techniques to further increase the accuracy and applicability of the model across a wider range of petroleum compositions, potentially enabling real-time molecular reconstruction for dynamic process optimization.
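To make the decoupling concrete: for a gamma distribution with shape α > 1 and scale β, the mode is m = (α - 1)β and the variance is σ^2 = αβ^2. Solving this pair for (α, β) gives one consistent mapping from the independent parameters (m, σ^2) back to the conventional ones; the paper's exact reparameterization may differ, so treat the sketch below as illustrative.

```python
import math

def gamma_from_mode_variance(m, var):
    """Invert mode m = (alpha - 1) * beta and variance var = alpha * beta**2
    to recover the gamma shape/scale pair (alpha, beta). Requires m, var > 0;
    the quadratic root with alpha > 1 is taken so the mode exists."""
    alpha = (2.0 * var + m * m + m * math.sqrt(m * m + 4.0 * var)) / (2.0 * var)
    beta = m / (alpha - 1.0)
    return alpha, beta

# Round trip: a carbon-number distribution peaking at C10 with variance 4
alpha, beta = gamma_from_mode_variance(10.0, 4.0)
print((alpha - 1.0) * beta, alpha * beta ** 2)  # reproduces 10.0 and 4.0
```

Because m moves only the peak and σ^2 only the width, an optimizer such as DE can perturb them independently, without the compensating shape/scale adjustments that couple the conventional parameters.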

  • Intelligent Construction
    Peng LIN, Jianqi YIN, Yunfei XIANG, Chaoyi LI, Yong XIA, Houlei XU, Hua MAO
    Journal of Tsinghua University(Science and Technology). 2025, 65(7): 1173-1184. https://doi.org/10.16511/j.cnki.qhdxxb.2025.26.022
    Abstract (488) | PDF (165) | HTML (122)

    Objective: High-altitude hydropower projects present significant challenges owing to harsh environmental conditions, project clustering, limited data availability, and high construction risks. Accurate carbon emission calculations are crucial in such environments to mitigate environmental impacts and promote sustainable development. This study targets full-lifecycle carbon emission calculation for the intelligent construction of high-altitude hydropower projects. Methods: This study establishes a comprehensive framework for calculating lifecycle carbon emissions tailored to the unique challenges of high-altitude hydropower construction. The methodology covers three primary stages: data collection, model formulation, and real-world implementation. Lifecycle boundaries and emission factors are established for material production, transportation, construction, and operational maintenance. Key emission sources are identified based on quality, energy consumption, and cost criteria to build a detailed carbon inventory. To address altitude effects, an adjustment coefficient is derived by correlating field-monitored data with baseline values, accounting for the impact of altitude on emission intensities. The carbon emission model incorporates a discrete event simulation (DES) to capture the dynamic characteristics of construction and equipment operations. This model couples static and dynamic elements, applying static calculations to stable phases such as material production and maintenance while using dynamic simulations for variable stages such as transportation and active construction. This DES approach simulates the sequential and interdependent nature of equipment operations, providing an accurate reflection of emission behavior over time. Furthermore, a network of onsite carbon monitoring devices was implemented across different construction sites in a case project, and real-time CO2 concentration data were collected.
These data calibrate and validate emission factors within the model, ensuring accurate altitude-adjusted emission assessments. Results: The model was applied to the JX hydropower project in a high-altitude region with distinct climatic and geographical challenges. The findings indicated that material production and construction machinery were the largest carbon emitters, accounting for 65.7% and 27.4% of total emissions, respectively. Cement manufacturing was identified as the dominant emission source, emphasizing the need for greener materials and cement production. The DES model revealed that equipment states, such as idling and operation, significantly influence emission intensities, especially under reduced oxygen at high altitudes. By integrating the DES results with real-time monitoring, the model supports precise, responsive emission control strategies. The proposed mitigation measures included adopting cleaner fuels, optimizing equipment idle time, and enhancing operational efficiency through scheduled maintenance. The model reliability was demonstrated by the close alignment of the simulated results with actual onsite measurements. Conclusions: The developed model offers a structured approach to calculating lifecycle carbon emissions for intelligent hydropower construction in high-altitude regions. By addressing the unique characteristics of such projects, including altitude-induced effects on emission intensities and equipment behavior, the model serves as a reference for emission reduction in future high-altitude hydropower projects. This study advances the understanding and management of emissions in high-altitude construction, underscoring the potential of intelligent construction methods to drive sustainable hydropower development.
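The static/dynamic split described above can be illustrated with a toy tally: stable stages use a quantity-times-factor calculation, while construction-stage equipment events are summed per state and scaled by an altitude adjustment coefficient. All quantities, factors, and the coefficient below are invented for illustration (the paper calibrates its coefficient from field monitoring), and a full DES would additionally schedule these events on an event queue.

```python
# Hypothetical altitude adjustment coefficient (the paper derives this by
# correlating field-monitored data with baseline values).
ALTITUDE_COEF = 1.25

# Static stages: stage -> (activity quantity, emission factor); units are
# made up here (t of material, tCO2 per t).
static_stages = {
    "material_production": (50_000, 0.8),
    "maintenance": (12_000, 0.05),
}

# Dynamic construction events: (equipment, state, hours, base rate tCO2/h).
# Idling and operating states carry different base rates.
events = [
    ("excavator", "operating", 6.0, 0.12),
    ("excavator", "idling", 2.0, 0.03),
    ("truck", "operating", 8.0, 0.09),
]

static_total = sum(q * f for q, f in static_stages.values())
dynamic_total = sum(h * r * ALTITUDE_COEF for _, _, h, r in events)
total = static_total + dynamic_total
print(f"static={static_total:.1f} t, dynamic={dynamic_total:.3f} t, "
      f"total={total:.3f} t")
```

Replacing the fixed event list with state transitions drawn from monitored equipment schedules is what turns this accounting into the dynamic simulation the model uses.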

  • Fire in Forests
    Fangpu LI, Xue RUI, Zijun LI, Weiguo SONG
    Journal of Tsinghua University(Science and Technology). 2025, 65(4): 655-663. https://doi.org/10.16511/j.cnki.qhdxxb.2025.27.004
    Abstract (452) | PDF (137) | HTML (108) | CSCD(1)

    Objective: Fires are highly destructive disaster events, and fire monitoring is one of the most effective measures for reducing the casualties and economic losses they cause. Compared with traditional fire monitoring methods, target detection has shown strengths in both cost and outcome, and many researchers have proposed new algorithms to improve its efficiency, yielding numerous algorithms suited to fire monitoring applications. However, these typically lack the capacity to detect small targets, the main characteristic of flame targets in incipient fires. To enhance small-target detection capability for fire target detection, this paper improves the YOLOv5 algorithm and trains a model on correspondingly collected datasets. Methods: First, a fire image dataset with small-target scene conditions is prepared for model training and performance testing; the validation set is divided into eight mutually exclusive sub-datasets by environmental condition for performance testing. Second, three improvements are introduced into the YOLOv5 algorithm: a) expansion of the multiscale detection layer to improve its receptive resolution; b) enhancement of the multiscale feature extraction capability by embedding the Swin Transformer module, which also reduces the computational cost of algorithm deployment; and c) optimization of the postprocessing step by replacing the original algorithm with the soft-NMS algorithm to retain more potential adjacent targets. On this basis, an improved model, YOLOv5s-SSS (Swin Transformer with soft-NMS for small targets), is proposed. To verify the effect of each improvement and its contribution to the final model, the new model is evaluated using four sets of ablation experiments.
After parameter optimization, a set of fire images is fed into the models in the ablation experiments to compare and verify their outputs. Results: The ablation results indicate that all the improvements introduced into the algorithm are effective. The average accuracy of the improved model is 16.3% higher than that of the original algorithm on flame image targets under challenging scene conditions and 5.9% higher on normal-sized image targets. Verification shows that, compared with the original model, the improved model markedly tightens the localization of fire targets, reduces missed detections of small and densely distributed fire targets, and clearly separates densely packed or overlapping fire targets. Conclusions: The dataset prepared in this paper effectively supports the training and testing of the improved fire detection model. The proposed improvements are shown to work effectively under reliable performance testing, providing a new improvement scheme for fire image detection technology. The model can also serve as a reference for improving efficiency in applications such as accurately locating fire points in incipient forest fires and remote sensing monitoring of large-scale fires. However, the overall accuracy of the improved model is relatively low, possibly because the images in the validation set were deliberately limited to small targets to assess the model's improvement. In the future, further improvements should be introduced to enhance detection under various scenarios, such as low-light conditions, so that the model is adequate for industrial applications.
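Improvement c) replaces hard NMS with soft-NMS, which decays the scores of overlapping boxes instead of deleting them, so adjacent small flame detections survive postprocessing. The sketch below is the standard Gaussian-decay soft-NMS in plain Python, not code from the paper; the box format, sigma, and score threshold are assumptions.

```python
import math

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian soft-NMS: each kept box down-weights the scores of the boxes
    overlapping it instead of removing them outright."""
    scores = list(scores)                 # do not mutate the caller's list
    order = list(range(len(boxes)))
    keep = []
    while order:
        order.sort(key=lambda i: scores[i], reverse=True)
        best = order.pop(0)
        keep.append(best)
        for i in order:                   # decay overlapping neighbors
            scores[i] *= math.exp(-iou(boxes[best], boxes[i]) ** 2 / sigma)
        order = [i for i in order if scores[i] > score_thresh]
    return keep, scores

# Two heavily overlapping flame boxes plus one distant box: hard NMS at
# IoU 0.5 would drop box 1; soft-NMS keeps it with a reduced score.
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
keep, new_scores = soft_nms(boxes, scores)
print(keep, [round(s, 3) for s in new_scores])
```

This survival of down-weighted neighbors is what lets the improved model separate densely packed or overlapping fire targets.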

  • Hydraulic Engineering
    Jiahong LIU, Mengxue ZHANG, Jia WANG, Chao MEI
    Journal of Tsinghua University(Science and Technology). 2025, 65(10): 1853-1867. https://doi.org/10.16511/j.cnki.qhdxxb.2025.21.038
    Abstract (448) | PDF (155) | HTML (229)

    Objective: The frequency and intensity of global flood events are increasing, driven largely by climate change and human activities. The key risk factors of hazard, exposure, and vulnerability are interconnected and collectively influence the occurrence and progression of flood disasters. Therefore, the relationships between these three factors need to be understood, and a comprehensive indicator system for the integrated assessment of flood risk needs to be developed. This study aims to analyze the spatiotemporal trends of global flood events from 1965 to 2023 and, based on the key risk factors of hazard, exposure, and vulnerability, to reveal the spatiotemporal characteristics of flood risk, providing a scientific basis for flood prevention and disaster mitigation decision-making. Methods: This study uses the Emergency Events Database, which includes global flood data, to analyze flood events from 1965 to 2023. A trend analysis of global flood occurrence, affected population, and mortality per unit area from 1965 to 2023 is conducted. Based on the affected population and mortality per unit area, floods on six continents are classified into light, moderate, and severe categories using the percentage method. Spatial analysis of flood occurrence, affected population, and mortality per unit area was performed for each country, showing the spatial distribution and impact intensity of flood disasters in different regions. In addition, key risk indicators, namely geographic elevation, precipitation, population density, and urbanization rate, are selected to analyze the characteristics of flood risk: elevation and precipitation represent hazard, population density indicates exposure, and urbanization rate reflects vulnerability. Trend analysis of these indicators was performed for three distinct periods, i.e., 1965-1984, 1985-2004, and 2005-2023.
To examine the spatial trends of these indicators across countries over the entire study period, the Theil-Sen slope estimation method was employed. The entropy weight method was applied to calculate the weight of each risk indicator, and the flood risk values of six continents from 1965 to 2023 were calculated. Results: The main results are as follows: (1) From 1965 to 2023, global flood events show a fluctuating upward trend, although the affected population and number of deaths have shown a downward trend since the 1990s. At the continental level, floods occur most frequently in Asia, Africa, and South America, with a total of 2 322, 1 266, and 1 084 events, respectively. At the national level, Haiti experiences the highest frequency of flood events per unit area, with 23 events per 10^4 km^2. Bangladesh has the highest total number of flood-affected people per unit area, with 27.1 million people per 10^4 km^2, and the highest record of cumulative deaths, with 3 313 deaths per 10^4 km^2. (2) Flood hazard, exposure, and vulnerability vary significantly across the six continents. Among the indicators, population density and precipitation show the greatest influence on flood risk, with weights of 0.33 and 0.30, respectively. From 1965 to 2023, an obvious regional variation in flood risk across the six continents is detected. The flood risk in Asia is significantly higher than that in other continents, with the flood risk values of both Asia and Africa showing a significant increase. By contrast, the flood risk value of South America decreased after 2010. Europe and North America show relatively low and stable flood risk values. Oceania exhibits the lowest flood risk values with significant fluctuations. Conclusions: This study conducts not only a systematic analysis of global flood events over a long time series but also an analysis of the changes in risk indicators, such as precipitation, geographic elevation, population density, and urbanization rate, from 1965 to 2023.
Moreover, the relative impact of different indicators is quantified, which clarifies their respective contributions to flood risk. The results further revealed the comprehensively changing characteristics of flood risk. The findings provide guidance and evidence to inform flood prevention planning and disaster response strategies. In the future, exposure change based on population mobility and integrated adaptive capacity should be considered to reveal the dynamic characteristics of flood risk.
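The two estimators named in the Methods are compact enough to sketch directly: the Theil-Sen slope is the median of all pairwise slopes (robust to outlier years), and the entropy weight method assigns larger weights to indicators whose values are more dispersed across samples. The data in the demonstration are invented.

```python
import math
from statistics import median

def theil_sen_slope(years, values):
    """Median of all pairwise slopes: a robust trend estimate."""
    slopes = [(values[j] - values[i]) / (years[j] - years[i])
              for i in range(len(years)) for j in range(i + 1, len(years))]
    return median(slopes)

def entropy_weights(matrix):
    """Entropy weight method on a samples-by-indicators matrix of positive
    values: indicators with more dispersion across samples get more weight."""
    n, m = len(matrix), len(matrix[0])
    raw = []
    for j in range(m):
        col = [matrix[i][j] for i in range(n)]
        s = sum(col)
        p = [v / s for v in col]
        entropy = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
        raw.append(1.0 - entropy)         # divergence of indicator j
    total = sum(raw)
    return [w / total for w in raw]

# Invented demo: a rising precipitation series, and a two-indicator matrix
# where the first column is constant (zero weight) and the second varies.
slope = theil_sen_slope([2000, 2001, 2002, 2003], [1.0, 2.0, 2.9, 4.1])
weights = entropy_weights([[1, 1], [1, 2], [1, 4]])
print(round(slope, 4), [round(w, 4) for w in weights])
```

A constant indicator carries no discriminating information, so its entropy weight vanishes; this is the mechanism behind the paper's weights of 0.33 and 0.30 for population density and precipitation.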

  • Hydraulic Engineering
    Shouguang WANG, Huaguang LIU, Pengyu MU, Qiang YANG, Yaoru LIU, Qianghui LIU, Chi LIU, Xingyu JIANG
    Journal of Tsinghua University(Science and Technology). 2025, 65(10): 1821-1837. https://doi.org/10.16511/j.cnki.qhdxxb.2025.21.016
    Abstract (429) | PDF (152) | HTML (217)

    Objective: The construction of hydraulic tunnels in high-stress surrounding rock environments often leads to rock bursts, posing a substantial threat to engineering safety. Among the various active prevention and control measures for rock bursts, drilling pressure relief in the surrounding rock is considered a relatively economical and effective method. By creating drilled holes in the rock mass, stress concentrations can be redistributed, reducing the likelihood of sudden failures and improving the overall stability of the tunnel structure. Methods: To investigate the mechanical properties and damage characteristics of sandstone in hydraulic tunnels under different combinations of drilling numbers and drilling depths, a series of uniaxial compression tests were conducted. These tests utilized an advanced uniaxial compression testing machine and the VIC-3D noncontact full-field strain measurement system. The experiments involved eight different drilling-hole configurations in the sandstone specimens. This study comprehensively analyzed key parameters such as compressive strength, the accumulation and release characteristics of elastic strain energy, and the residual volume rate of sandstone. A regression analysis was conducted to establish a quantitative relationship between the residual volume rate of sandstone and its compressive strength. In addition, the crack evolution and damage characteristics of sandstone under different drilling hole configurations were studied using digital image correlation (DIC) technology and fracture phase field simulation. Furthermore, numerical simulations based on finite element methods were performed to compare the effects of straight holes and 10° inclined holes on stress redistribution within the rock mass.
Results: The experimental and numerical results led to the following key findings: (1) when the radius of the drilled holes remains constant, an increase in the drilling depth leads to a decrease in the compressive strength of sandstone. This finding indicates that deeper drilling can effectively weaken the rock mass and facilitate stress relief. (2) Under the condition of identical hole radius and depth, an increase in the number of drilled holes results in a discontinuous reduction in the compressive strength of sandstone. Moreover, the arrangement of the drilled holes plays a crucial role in determining the overall strength of sandstone. For instance, the specimens with asymmetrical three-borehole configurations exhibited lower compressive strength than those with symmetrical four-borehole configurations. This finding suggests that asymmetrical arrangements can enhance energy dissipation efficiency and reduce the overall stress level within the rock. (3) The elastic strain energy of sandstone exhibits a strong positive correlation with compressive strength. Moreover, as the ratio of loss energy to elastic strain energy approaches zero, the intensity of sandstone destruction considerably increases. This outcome highlights the role of energy release in the failure process of rock materials. (4) DIC strain field analysis and numerical simulations confirm that sandstone under uniaxial compression follows a characteristic butterfly-shaped damage pattern. The three-borehole asymmetric configuration showed lower compressive strength, greater far-field stress reduction, earlier failure onset, and higher economic feasibility for pressure relief applications than the four-borehole symmetric configurations. (5) Under identical rock formation and borehole depth conditions, the impact of straight and 10° inclined boreholes on stress redistribution is found to be similar. 
However, practical construction decisions should be made considering site-specific conditions and operational requirements. Conclusions: This study provides valuable insights for optimizing the design of borehole pressure relief schemes for hydraulic tunnels. The findings offer a reference for engineers seeking to improve tunnel stability through effective stress redistribution strategies. By systematically evaluating different drilling configurations, this study contributes to the development of more efficient and cost-effective methods for mitigating rock bursts in high-stress environments.

  • Public Safety
    Dingli LIU, Xiao LEI, Diping YUAN, Yanglong WU, Zhisheng XU
    Journal of Tsinghua University(Science and Technology). 2025, 65(6): 1009-1018. https://doi.org/10.16511/j.cnki.qhdxxb.2025.22.017
    Abstract (420) | PDF (141) | HTML (77)

    Objective: The efficient allocation and dispatch of fire rescue resources are crucial to urban public safety. Traditional approaches assume continuous spatial distribution of fire service coverage areas and give less consideration to the impact of real-time traffic conditions on rescue route selection and response times. This study aims to introduce and define the concept of "rescue enclaves"—areas that, although not directly adjacent to fire stations, can be effectively covered by them—and proposes a method to identify and calculate these spatially discontinuous coverage areas. Methods: This study proposed a method for identifying and calculating spatially discontinuous coverage areas by mapping points to grids. Using this method: (1) fire truck travel times were calculated using real-time traffic data, (2) geographic coordinates were converted to universal transverse Mercator (UTM) coordinates, (3) the region was divided into fine grids, (4) grid coverage status was determined, (5) transition grids were processed through neighborhood analysis, and (6) rescue enclaves were identified using a breadth-first search (BFS) algorithm. The CS-XX urban fire station in a Chinese city was selected as a case study to validate the method. In this case study, 3 818 points of interest were identified as rescue demand points across 49 evaluation periods in one day, generating 187 082 valid data samples. A target response time of 4 min was established, and an 80% reduction coefficient was applied to convert regular vehicle travel times to fire truck travel times. 
Results: The rescue enclave areas were successfully identified and calculated using the proposed method, revealing the following key findings: (1) the dynamic coverage area of CS-XX varied from 1.83 to 4.57 km², with the minimum fire service coverage of 1.83 km² recorded during the morning peak at 8:00; (2) the calculated coverage area trends were consistent with the percentage of demand points accessible within 4 min, validating the reliability of the method; (3) critical rescue enclaves were identified near CS-XX, with enclave areas ranging from 0.25 to 1.12 km², accounting for 12.20%-27.53% of the total coverage area; (4) the rescue enclaves occasionally extended beyond the traditional coverage of 7.00 km² prescribed by standard area determination methods; and (5) coverage areas and rescue enclave areas varied synchronously with traffic conditions, with traffic congestion significantly reducing both. Conclusions: This study elucidates the concept of rescue enclaves and substantiates, through rigorous analysis, that they constitute a substantial part of fire service coverage areas. The rescue enclaves are systematically identified and quantified via the proposed algorithmic framework, and such enclaves can comprise up to 27.53% of a fire station's coverage area. Integrating rescue enclaves into fire rescue jurisdiction planning protocols can substantially improve resource allocation.
While real-time traffic conditions and differing flow efficiencies across route types are identified as the primary determinants of enclave formation, further investigation is warranted to elucidate the mechanisms and contributing factors governing rescue enclave emergence and to establish quantitative metrics for rescue passage efficiency across diverse route configurations.
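The grid-coverage steps above lend themselves to a compact illustration. The sketch below finds spatially discontinuous covered regions with a breadth-first search, as in step (6) of the method; the grid encoding, the 4-neighborhood, and the function names are assumptions, and the real method additionally handles transition grids via neighborhood analysis.

```python
from collections import deque

def find_enclaves(covered, station):
    """Identify 'rescue enclaves': connected covered regions that are
    spatially separate from the region containing the fire station.

    covered : 2D list of bool, True if the grid cell is reachable within
              the target response time (step 4 of the method)
    station : (row, col) of the grid cell holding the fire station
    Returns a list of enclaves, each a list of (row, col) cells.
    """
    rows, cols = len(covered), len(covered[0])
    seen = [[False] * cols for _ in range(rows)]

    def bfs(start):
        comp, q = [], deque([start])
        seen[start[0]][start[1]] = True
        while q:
            r, c = q.popleft()
            comp.append((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols \
                        and covered[nr][nc] and not seen[nr][nc]:
                    seen[nr][nc] = True
                    q.append((nr, nc))
        return comp

    bfs(station)  # mark the contiguous region containing the station
    enclaves = []
    for r in range(rows):
        for c in range(cols):
            if covered[r][c] and not seen[r][c]:
                enclaves.append(bfs((r, c)))  # each unseen covered patch is an enclave
    return enclaves
```

Summing cell areas over each returned component would then give the enclave areas reported above.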

  • Safety Science
    Siyuan MU, Quanyi LIU, Ruxuan YANG, Yi LIU, Rui YANG
    Journal of Tsinghua University(Science and Technology). 2025, 65(7): 1368-1376. https://doi.org/10.16511/j.cnki.qhdxxb.2025.22.001
    Abstract (415) PDF (241) HTML (130)   Knowledge map   Save CSCD(1)

    Objective: Owing to the high flammability of non-flame-retardant pure acrylonitrile-butadiene-styrene (ABS), a material often used for passenger luggage, it is easily ignited by open flames, posing risks to aviation operations. Therefore, in-depth research on the pyrolytic combustion characteristics of ABS at high temperatures and high radiation intensities is crucial for the safe operation of aircraft. Methods: This study evaluated the thermal stability and combustion characteristics of ABS under different heating rates and radiation intensities using thermogravimetric analysis and cone calorimeter systems, and analyzed the variations in the characteristic parameters of ABS. Results: The results show that the pyrolysis process of ABS can be divided into an initial volatilization stage, a rapid decomposition stage, a residual combustion stage, and a pyrolysis termination stage. In the rapid decomposition stage, at approximately 310 ℃ to 343 ℃, the main polymer chains of ABS undergo cleavage and break down into different components, such as acrylonitrile and polyethylene monomers, decomposing the polymer molecules. Upon heating, the main chain of ABS ruptures; components in its molecular structure, such as styrene and butadiene, are prone to decomposition and cross-linking reactions, driving the pyrolysis process. An increase in heating rate significantly shortens the pyrolysis time and enhances the maximum thermal decomposition rate. As the radiation intensity increases, the combustion process of ABS accelerates: the heat release rate increases, with the peak heat release rate rising by 53%, while the combustion and ignition times decrease by 32% and 78%, respectively, because higher material temperatures and intensified heat conduction and convection raise the heat release rate.
Under low radiation intensities, ABS cannot rapidly absorb enough energy to reach combustion conditions. However, as the radiation intensity increases, ABS rapidly absorbs sufficient energy for faster decomposition, thus shortening the combustion time. The generation of carbon monoxide (CO) and carbon dioxide (CO2) begins earlier, and the maximum generation amounts of CO2 and CO increase by 49% and 74%, respectively. The oxygen consumption increases and the oxygen consumption rate accelerates because thermal radiation intensifies molecular motion, leading to a faster reaction with oxygen in the air. Mass loss begins earlier, the remaining sample mass decreases, and the maximum mass loss rate increases by 53.8%. Based on the thermal penetration model, 2 mm thick ABS is classified as a thermally thin material, and this classification is verified. Based on the ignition time model, a critical radiative heat flux formula is established, and the critical radiative heat flux is calculated to be 16.255 kW/m². Finally, according to the fire performance indicators, as the radiation intensity increases, the material burns faster and releases more heat, leading to faster fire growth and development and thereby increasing fire risk. The fire risk of ABS is positively correlated with the radiation intensity. Conclusions: This study concludes that ABS exhibits a high fire risk. This research provides crucial data and practical references on the fire risks associated with ABS for safe aviation operations.
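The ignition-time reasoning above can be illustrated with a generic thermally thin model: the inverse ignition time is roughly linear in the incident heat flux, so the critical radiative heat flux can be estimated by extrapolating a least-squares line to 1/t_ig = 0. This is a common textbook formulation, not necessarily the exact model used in the paper; the function name and the synthetic data are assumptions.

```python
def critical_heat_flux(fluxes, ignition_times):
    """Estimate the critical radiative heat flux for a thermally thin solid.

    For thermally thin materials, 1/t_ig is approximately linear in the
    incident flux q'': 1/t_ig = (q'' - q_cr) / (rho * c * delta * (T_ig - T0)),
    so extrapolating a least-squares line to 1/t_ig = 0 yields q_cr.
    fluxes in kW/m^2, ignition_times in s.
    """
    inv_t = [1.0 / t for t in ignition_times]
    n = len(fluxes)
    mx = sum(fluxes) / n
    my = sum(inv_t) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(fluxes, inv_t)) / \
            sum((x - mx) ** 2 for x in fluxes)
    intercept = my - slope * mx
    return -intercept / slope  # flux at which 1/t_ig extrapolates to zero

# synthetic data generated with an assumed q_cr of 16 kW/m^2
fluxes = [25.0, 35.0, 50.0, 75.0]
times = [1.0 / (0.002 * (q - 16.0)) for q in fluxes]
```

With real cone calorimeter data, the same fit would recover a value comparable to the 16.255 kW/m² reported above.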

  • Fire in Buildings and Timber Structures
    Qiang MA, Hongrui JIANG, Ke WANG, Bo WANG, Long DING, Jie JI
    Journal of Tsinghua University(Science and Technology). 2025, 65(4): 625-633. https://doi.org/10.16511/j.cnki.qhdxxb.2024.27.027
    Abstract (396) PDF (129) HTML (81)   Knowledge map   Save

    Objective: Most historic buildings in China are wood or brick-wood structures; consequently, they have low fire resistance ratings and large fire loads. Furthermore, firefighting problems associated with these buildings include high building density, insufficient fire separation distances, and narrow fire passages; thus, historical buildings face a high risk of fire damage. Therefore, to prevent and control fires in historical buildings, efficient early fire detection and alarm systems must be explored. Image fire detectors enable rapid identification and localization of fires; thus, they are used in early fire detection and alarm systems for historic buildings. However, studies that accurately calculate the coverage of image fire detectors and evaluate their placement are lacking. Currently, the placement of outdoor image fire detectors depends on semiquantitative approaches such as engineers' experience and existing regulations. Methods: Consequently, this study proposes a placement optimization methodology for outdoor image fire detectors in historical buildings based on the set covering and maximum covering models. First, a three-dimensional model of the target area is constructed by integrating the structural information of historical buildings with mesh division. Second, the field of view of an image fire detector is determined from its model and specifications to analyze its viewshed; subsequently, a set of candidate detector positions is constructed using a selection rule that favors more important and less occluded areas, supporting the reasonable placement of image fire detectors. Furthermore, the extent to which the candidate detectors cover the target area grid is determined by constructing a binary observation matrix that maps the positions and directions of image fire detectors in the target area.
Third, with the optimization of the cost and coverage of image fire detectors as the goal, a mathematical model for the placement optimization of image fire detectors is developed based on the set covering and maximum covering models. Finally, based on the genetic algorithm, the optimal placement scheme of image fire detectors is obtained using the previously constructed placement optimization model. Results: To demonstrate the feasibility and effectiveness of the proposed methodology, this study considers the joint area composed of the Hall of Supreme Harmony, Hall of Central Harmony, and Hall of Preserving Harmony of The Imperial Palace as a case study. Under the given target area coverage and predetermined cost, the optimal placement plans of image fire detectors are obtained, i.e., {11, 13, 16, 20, 23, 25, 33, 35, 48, 55, 58, 60, 62, 64, 67, 68} and {11, 22, 26, 29, 35, 44, 48, 59, 64, 67}, with joint area coverages of 98.17% and 92.41%, respectively. Conclusions: Compared with the existing semiquantitative approaches to placing image fire detectors, the proposed methodology simplifies several manual calculation processes and can meet different requirements for cost and coverage within the detection area, thus optimizing the placement of image fire detectors. In addition, this methodology can quickly and accurately determine the position and direction of each detector, which can be used for installing and calibrating outdoor image fire detectors in historical buildings. Thus, the proposed methodology can be implemented in the construction of early fire detection and alarm systems in historical buildings to prevent massive economic and cultural losses owing to fire damage.
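The binary observation matrix described above maps candidate detector positions/directions to covered grid cells. As a simple illustration of the set covering idea, the sketch below greedily selects detectors until a target coverage fraction is met; the paper solves the full optimization with a genetic algorithm, and all names here are assumptions.

```python
def greedy_cover(obs_matrix, target_coverage=1.0):
    """Greedy baseline for the detector-placement set covering problem.

    obs_matrix[i][j] = 1 if candidate detector position/direction i covers
    grid cell j (the binary observation matrix). Greedily pick the detector
    covering the most uncovered cells until the target coverage fraction is
    reached. Returns (chosen detector indices, achieved coverage fraction).
    """
    n_cells = len(obs_matrix[0])
    uncovered = set(range(n_cells))
    chosen = []
    while len(uncovered) > (1.0 - target_coverage) * n_cells:
        # candidate with the largest marginal gain in covered cells
        best = max(range(len(obs_matrix)),
                   key=lambda i: sum(1 for j in uncovered if obs_matrix[i][j]))
        gain = {j for j in uncovered if obs_matrix[best][j]}
        if not gain:  # remaining cells cannot be observed by any candidate
            break
        chosen.append(best)
        uncovered -= gain
    return chosen, 1.0 - len(uncovered) / n_cells
```

A genetic algorithm, as used in the paper, can escape the local choices a greedy pass makes and trade cost against coverage more flexibly.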

  • Lithium-Ion Battery
    Yuanhua HE, Xingchen SU, Liang ZHAO
    Journal of Tsinghua University(Science and Technology). 2025, 65(9): 1805-1820. https://doi.org/10.16511/j.cnki.qhdxxb.2025.22.046
    Abstract (379) PDF (86) HTML (248)   Knowledge map   Save

    Significance: Amid the rapid development of new productive forces, the active thermal management systems of power lithium-ion batteries face significant challenges, such as increasing charge and discharge rates and adapting to harsh application scenarios. To keep the power system operating stably in its optimal state, the technical bottleneck of efficient, long-term heat dissipation must be overcome. At the same time, in the consumer market, the cost factors of engineering products, including design, materials, space volume, cooling refrigerants, and plumbing systems, need to be carefully considered. Therefore, active thermal management systems for power lithium-ion batteries, which are widely used and have great potential, need to be systematically summarized. Progress: This paper comprehensively reviews recent research progress on the active thermal management of power lithium-ion batteries. First, we summarize the research status of single-phase thermal management methods, including forced air cooling, natural air cooling, immersion liquid cooling, and microchannel liquid cooling. In the context of low charge and discharge rates and lightweight engineering, air cooling still plays an important role. The main factors affecting battery temperature include air flow rate, air flow velocity, battery layout, and flow channel design. The air cooling system has unique engineering advantages because of its low cost. As charge and discharge rates increase, the effectiveness of microchannel and immersion liquid cooling is significantly enhanced, which benefits the control of battery temperature and temperature uniformity. Several factors, such as liquid flow rate and channel design, notably affect the battery's heat dissipation; however, the corresponding costs also increase. Second, we discuss advanced cooling techniques based on gas/liquid two-phase flow, such as submerged boiling cooling and spray-integrated cooling.
In the context of increasing demand for batteries with high charge and discharge rates, these technologies provide efficient, flexible, and adaptable solutions to thermal management challenges. The cooling medium, flow rate, nozzle arrangement, and droplet size all affect battery temperature in different ways. The feasibility of recovering overall and market costs, which is necessary to maintain profits and the long-term development of the enterprise, also needs to be considered. Conclusions and Prospects: Based on the literature review, this paper forecasts the development trend of active thermal management technology across multiple application scenarios to meet the needs of lithium-ion battery power at sea, on land, and in the air. We believe that the development of active thermal management technology for the new generation of power lithium-ion batteries should fully consider practical engineering requirements, such as charge and discharge rates and harsh application scenarios. Future research and development should focus on improving heat transfer efficiency, system integration, and intelligent control capabilities while overcoming the challenges of reliability, cost, adaptability to extreme operating conditions, and energy consumption optimization.

  • Traffic and Transportation
    Chengyong ZHAO, Fei MA, Ruiying CUI, Wei REN
    Journal of Tsinghua University(Science and Technology). 2025, 65(10): 1930-1944. https://doi.org/10.16511/j.cnki.qhdxxb.2025.27.021
    Abstract (376) PDF (83) HTML (270)   Knowledge map   Save

    Objective: The development of modern urban agglomerations requires cultivating integrated transportation networks, advancing transportation integration, and building resilient, comprehensive three-dimensional systems. Building a highly efficient and stable transportation network is fundamental to promoting the growth of modern metropolitan areas. Methods: This paper focuses on the metropolitan area multimodal transportation network (MA-MTN) and uses complex network methodologies to model different transportation network construction processes. By applying a load capacity model, it defines the load and capacity levels of the MA-MTN and develops a load redistribution model to facilitate network operations; it then develops network resilience evaluation indicators across three levels: overall network, network structure, and network function. From an integrated perspective, a comprehensive resilience indicator for multimodal transportation networks in urban agglomerations is established. The Xi'an metropolitan area is selected as the research subject for simulation analyses of the multimodal transportation network's integrated resilience. The sensitivity levels of different traffic subnetworks in the metropolitan area's multimodal transportation network are assessed through scenario simulation. Simulations examine the performance loss stage, the recovery stage following the implementation of recovery strategies, and the dynamic recovery stage during network failures caused by attacks on the multimodal transportation network. The effectiveness of strategies aimed at improving the integrated resilience of the metropolitan multimodal transportation network is also compared and analyzed. Results: The research results indicate that different transportation subnetworks play distinct roles in the multimodal transportation network of urban agglomerations.
When the MA-MTN is attacked, the sharpest decline in network performance occurs before the failure rate of network nodes reaches 60%. The node capacity adjustment parameter β in the MA-MTN needs to be dynamically calibrated, with a value of 0.2 recommended for daily operations and 0.7 during extreme weather events or holiday peaks. Enhancing the resilience of urban multimodal transportation networks requires prioritizing transportation efficiency in the transfer of passenger flow between nodes; this approach improves the network's comprehensive resilience. Recovery strategies based on node load capacity levels are the most effective in improving the integrated resilience level of these networks. Such strategies enhance resistance to attacks and significantly improve recovery capabilities during cascading failures, ensuring robust network performance even under dynamic recovery conditions. Conclusions: This paper measures the integrated resilience level of transportation networks against external disturbances from a multidimensional integrated perspective; moreover, it identifies effective strategies to improve the structural and functional resilience of multimodal transportation networks in urban areas. Based on the research results, this study emphasizes the importance of determining critical thresholds for network node failures. It recommends dynamically adjusting the node capacity parameter β according to the operational conditions of the MA-MTN and external factors. Furthermore, it advocates optimizing the transfer of node passenger flow load by prioritizing transportation efficiency during network operations. The study also highlights the significance of prioritizing node load capacity and developing effective recovery strategies based on the comprehensive importance of nodes. These approaches are crucial for promoting the construction of efficient, stable, multimodal transportation networks in urban areas.
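The role of the load capacity model and the parameter β can be sketched with a common load-capacity formulation in which each node's capacity is (1 + β) times its initial load; a failed node's load is redistributed to surviving neighbors, possibly triggering further failures. This is an illustrative simplification (Motter-Lai style), not the paper's exact MA-MTN redistribution model, and all names are assumptions.

```python
def cascade(neighbors, init_load, beta, attacked):
    """Minimal load-capacity cascading failure sketch.

    Capacity C_i = (1 + beta) * L_i, where L_i is the initial load.
    When a node fails, its load is split equally among surviving neighbors;
    any neighbor pushed above its capacity fails in the next round.
    Returns the set of failed nodes.
    """
    cap = {n: (1 + beta) * init_load[n] for n in init_load}
    load = dict(init_load)
    failed = {attacked}
    frontier = [attacked]
    while frontier:
        nxt = []
        for f in frontier:
            alive = [n for n in neighbors[f] if n not in failed]
            if not alive:
                continue  # load of an isolated failure is simply lost
            share = load[f] / len(alive)
            for n in alive:
                load[n] += share
                if load[n] > cap[n] and n not in failed:
                    failed.add(n)
                    nxt.append(n)
        frontier = nxt
    return failed
```

In this toy setting, a small β (tight capacity margin) lets a single attack propagate through the whole chain, whereas a large β absorbs it, which mirrors the recommendation above to raise β under extreme conditions.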

  • Process Systems Engineering
    Xin LIU, Bing WANG, Chenxi CAO
    Journal of Tsinghua University(Science and Technology). 2025, 65(5): 833-843. https://doi.org/10.16511/j.cnki.qhdxxb.2024.21.033
    Abstract (365) PDF (93) HTML (144)   Knowledge map   Save

    Objective: High-pressure gaseous hydrogen storage systems, such as large-scale hydrogen tank farms and distributed hydrogen refueling stations, are prone to hydrogen leakage, fire, and explosion because of the unique physicochemical properties of hydrogen. These events could set off a chain of more serious accidents, i.e., domino accidents. This study proposes a Bayesian network (BN)-based analysis method for assessing the internal domino risk distribution within such systems. Methods: First, event tree models were established for various leakage scenarios in hydrogen-related facilities. Thereafter, all potential domino accident scenarios within the area were enumerated using accident consequence assessment models for hydrogen facility leaks. Next, BN models were automatically constructed to describe the propagation of domino accidents for each potential initial accident device. Finally, the BN models were used to analyze the magnitude and sources of the overall risk of these systems, as well as the patterns of accident propagation and leakage scenarios. Results: The overall risk in hydrogen refueling stations mainly originates from the self-failure risk of compressors and the domino risk of hydrogen storage cylinders; jet fire (JF) and vapor cloud explosion (VCE) contribute 76% and 23.4%, respectively, to the domino risk of all hydrogen cylinders. When the storage pressure in hydrogen tank farms is between 2 and 15 MPa, the domino risk comprises >25% of the overall risk, with explosions being the predominant accident type leading to domino accidents. Causal reasoning indicates that a JF from a medium hole is the most probable domino accident scenario both for the hydrogen storage cylinders in hydrogen refueling stations affected by the JF and for the spherical tanks in hydrogen tank farms affected by the explosion.
Diagnostic reasoning for initial accident scenarios indicates that rupture of hydrogen spherical tanks and large-hole leakage of hydrogen cylinders are the most probable causes, provided that a multistage domino accident has occurred. Conclusions: For the common 2-MPa hydrogen spherical tank employed in Chinese green hydrogen projects, the cumulative self-failure risk and domino risk of all tanks in the tank farms are 3.5×10⁻⁵ a⁻¹ and 1.88×10⁻⁵ a⁻¹, respectively, with the latter accounting for ~35%. In the future, decreasing the storage pressure to 1-1.7 MPa or increasing it to 10-15 MPa might lower the contribution of domino risk to <30% while maintaining the cumulative self-failure risk at a level of 10⁻⁵ a⁻¹. At 70-MPa hydrogen refueling stations, the domino risks to hydrogen cylinders from the compressors and pipeline are ~2.9×10⁻⁴ a⁻¹ and ~4.4×10⁻⁵ a⁻¹, respectively. In the abovementioned hydrogen storage systems, explosions are a notable accident type that can trigger domino accidents. Therefore, implementing explosion-suppression measures to decrease the probability of ignition is a key focus for mitigating the overall risk of hydrogen storage systems. Our findings indicate that future quantitative risk assessments for high-pressure hydrogen storage systems should consider the possibility of domino accidents. We believe these results serve as notable references for establishing advanced quantitative risk assessment methods customized to high-pressure hydrogen storage systems.
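The event-tree/BN propagation described above ultimately aggregates escalation probabilities into per-unit domino frequencies. The minimal sketch below shows only the first escalation level, assuming rare, independent primary events whose frequency contributions simply add; the scenario names, numbers, and function are illustrative assumptions, not values from the paper.

```python
def domino_risk(primary_freq, escalation):
    """Aggregate first-level domino frequencies.

    primary_freq[i]  : frequency (1/a) of primary accident scenario i
                       (e.g. a jet fire from a medium hole on unit i)
    escalation[i][j] : probability that scenario i escalates to target unit j
    Returns {target unit: cumulative domino frequency}, assuming rare,
    independent primary events so that frequencies add linearly.
    """
    risk = {}
    for i, freq in primary_freq.items():
        for j, p_esc in escalation.get(i, {}).items():
            risk[j] = risk.get(j, 0.0) + freq * p_esc
    return risk
```

A full BN additionally handles multistage propagation and supports the causal and diagnostic reasoning reported above, which this one-level sum cannot express.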

  • Vehicle and Traffic
    Peibao WU, Rongkang LUO, Zhihao YU, Zhichao HOU
    Journal of Tsinghua University(Science and Technology). 2025, 65(5): 930-939. https://doi.org/10.16511/j.cnki.qhdxxb.2025.21.005
    Abstract (364) PDF (111) HTML (170)   Knowledge map   Save

    Objective: In-wheel motor drive systems offer significant advantages for electric vehicles, including large chassis space, high transmission efficiency, and great control flexibility. However, in current mainstream in-wheel motor-driven vehicles, the unsprung mass is significantly increased because the motor or driving unit is rigidly connected to the wheel hub. The increased unsprung mass not only deteriorates vehicle ride comfort and road-holding performance but also results in heavy motor vibration. To mitigate these negative effects, configurations with a suspended motor or driving unit have been proposed, and it is desirable to explore their potential in this regard. Methods: This paper aims to mitigate the negative effects of unsprung mass by simultaneously optimizing vehicle and motor suspension parameters. To this end, it examines two typical in-wheel motor drive configurations with motor suspension: the dynamic vibration absorber configuration and the two-stage suspension configuration. Half-vehicle models are established for each configuration, and key indices for vehicle dynamic performance are selected or defined. Drawing on earlier studies of how increased unsprung mass affects vehicle performance at various speeds, and considering the trade-off among ride comfort, road holding, and motor vibration, a multiobjective optimization strategy is proposed for the parameter optimization of the vehicle and motor suspensions. In this strategy, the goal is to minimize body vertical acceleration, wheel dynamic load, and motor acceleration at medium speeds while reducing body pitch acceleration, wheel dynamic load, and motor acceleration at high speeds. Constraints include the natural frequency and dynamic deflection of the vehicle suspension. Using the NSGA-Ⅱ algorithm, Pareto optimal solution sets are derived for the two configurations.
The entropy weight method is then applied to determine the optimal parameters for the vehicle and motor suspensions. With the optimal suspension parameters, dynamic simulations are conducted on a random road, and the dynamic performance is evaluated based on the predefined indices. Results: The results indicate that, compared with the fixed hub motor configuration, both motor suspension configurations achieve substantial enhancements in vehicle ride comfort, road holding, and motor vibration. The dynamic vibration absorber configuration delivers greater improvements in vehicle body vertical and pitch vibrations, as well as wheel dynamic load: it reduces body vertical and pitch accelerations by 36.9% and 33.09% at medium and high speeds, respectively, and decreases the wheel dynamic load by 18.42% and 18.55% at medium and high speeds, respectively. By contrast, the two-stage suspension configuration excels in reducing motor vertical vibration, cutting motor vertical acceleration by 67.48% and 65.43% at medium and high speeds, respectively. Conclusions: This paper presents a passive control approach to addressing the negative effects of unsprung mass by utilizing motor suspension configurations. The in-wheel motor drive configurations with motor suspension demonstrate significant potential for improving vehicle dynamic performance. This research serves as a valuable resource for the design of in-wheel motor-driven vehicles.
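The entropy weight method used above to select one solution from the Pareto set is a standard procedure: criteria whose normalized values vary more across the alternatives receive larger weights. A minimal sketch, assuming the decision matrix is already normalized to positive values (function and variable names are ours):

```python
import math

def entropy_weights(matrix):
    """Entropy weight method for objective criterion weighting.

    matrix[i][j] is the normalized, positive score of alternative i
    (e.g. a Pareto solution) on criterion j (e.g. body acceleration).
    Criteria with higher information entropy (more uniform columns)
    carry less information and thus get smaller weights.
    """
    m, n = len(matrix), len(matrix[0])
    weights = []
    for j in range(n):
        col = [matrix[i][j] for i in range(m)]
        total = sum(col)
        p = [v / total for v in col]                     # column proportions
        e = -sum(v * math.log(v) for v in p if v > 0) / math.log(m)
        weights.append(1.0 - e)                          # divergence degree
    s = sum(weights)
    return [w / s for w in weights]                      # normalized weights
```

Scoring each Pareto solution by the weighted sum of its criteria then picks the compromise design, as done above for the suspension parameters.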

  • Public Safety
    Xiang GUO, Weibiao HU
    Journal of Tsinghua University(Science and Technology). 2025, 65(6): 1027-1039. https://doi.org/10.16511/j.cnki.qhdxxb.2025.22.024
    Abstract (356) PDF (109) HTML (118)   Knowledge map   Save

    Objective: The increasing complexity of disaster rescue systems and frequent global natural disasters require efficient emergency command mechanisms to reduce losses of life and property. Large-scale earthquakes present time-sensitive, multidimensional challenges that demand rapid decisions, precise resource allocation, and cross-departmental coordination. However, current research does not quantitatively analyze how emergency command capabilities affect rescue efficiency in dynamic disaster scenarios. This study develops a multi-agent simulation model based on the magnitude-6.2 Jishishan earthquake in China to assess the impact of command strategies on rescue operations and optimize emergency response systems. Methods: A multi-agent simulation model is developed on the NetLogo platform to meet the research objectives. It represents the emergency command and rescue system with four types of agents: the on-scene chief command agent, the on-scene support command agent, the emergency rescue force agent, and the disaster-affected agent. Each agent has specific behavioral patterns and interaction rules. The on-scene chief command agent oversees coordination, decision-making, and resource allocation. The on-scene support command agent manages task planning, resource scheduling, and real-time information feedback. The emergency rescue force agent performs rescue tasks, while the disaster-affected agent represents victims awaiting rescue. The simulation model is designed to reflect real-world scenarios, focusing on key variables such as information completeness, decision-making capability, resource allocation efficiency, and coordination success rate. This study analyzes scenarios under different conditions: (1) information incompleteness: limited communication and fragmented data; (2) resource scarcity: imbalanced demand-supply distribution; and (3) feedback delays: lagging information updates and decision adjustments.
The rescue rate (R), defined as the ratio of rescued victims to total victims, is the primary performance metric. Comparative analyses adjust agent capabilities to identify optimal strategies. Results: The simulation results highlight key findings: (1) Critical role of command capabilities. The on-scene chief command agent's information organization and coordination control capabilities are crucial in accelerating early-stage rescue operations. When optimized, these capabilities increase R by 0.4 within the first five simulation ticks. The on-scene chief command agent's feedback adjustment capability becomes crucial in later stages, thus reducing task conflicts by 0.25 through dynamic strategy updates. (2) Scenario-specific optimization strategies. Under incomplete information conditions, improving the on-scene support command agent's resource scheduling speed increases R from 0.4 to 0.9 in 9 ticks. During resource scarcity, enhancing the on-scene support command agent's coordination ability minimizes allocation conflicts, thus achieving a stable R of 0.7 despite limited supplies. During feedback delays, enhancing the on-scene support command agent's task prioritization management reduces decision latency by 30%, thus increasing R from 0.5 to 0.68 in 12 ticks. (3) Role of lower-level command agents. This study emphasizes the significance of lower-level command agents, especially the on-scene support command agent, in enhancing rescue efficiency. Optimizing their resource scheduling and coordination abilities can significantly enhance the overall rescue operation, even under complex, challenging conditions. Conclusions: This study quantitatively confirms that effective emergency command is crucial for earthquake rescue efficiency. 
The on-scene chief command agent's information integration and macro-level coordination capabilities form the foundation for rapid response, while the on-scene support command agent's strategic optimizations are critical under resource constraints. A hierarchical, decentralized command structure is recommended to effectively balance decision-making authority with operational flexibility. Future research should combine dynamic disaster factors to evaluate the robustness of command strategies in unpredictable scenarios.
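As a toy illustration of the study's primary metric, the rescue rate R = rescued victims / total victims can be traced over simulation ticks, with a single dispatch-capability parameter standing in for the command capabilities analyzed above. This is far simpler than the NetLogo multi-agent model, and every name here is an assumption:

```python
def rescue_rate_curve(total_victims, rescues_per_tick, ticks):
    """Trace the rescue rate R = rescued / total victims over ticks.

    rescues_per_tick is a crude stand-in for overall command capability
    (information organization, scheduling speed, coordination); a higher
    value raises R earlier, mirroring the early-stage effect reported above.
    """
    rescued, curve = 0, []
    for _ in range(ticks):
        rescued = min(total_victims, rescued + rescues_per_tick)
        curve.append(rescued / total_victims)
    return curve
```

Comparing curves for different capability values reproduces, in miniature, the kind of R-versus-tick comparisons the scenario simulations report.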

  • Fire in Forests
    Li Deng, Jin Zhou, Quanyi Liu
    Journal of Tsinghua University(Science and Technology). 2025, 65(4): 681-689. https://doi.org/10.16511/j.cnki.qhdxxb.2024.27.036
    Abstract (348) PDF (85) HTML (133)   Knowledge map   Save

    Objective: With the rapid and continuous advancement of urbanization, fire accidents are occurring with increasing frequency globally. A sudden fire outbreak has a high probability of causing extensive and severe harm to society. Research on image-based fire detection algorithms is valuable for extracting the detailed morphological features of fire and smoke, helping to improve the efficiency of fire warnings. Methods: This study introduces an improved version of the YOLOv8 algorithm. Initially, the neck network of the algorithm is strengthened by integrating the SlimNeck lightweight module. Then, the inference framework of the YOLOv8 algorithm is replaced with slicing-aided hyper inference (SAHI) to further enhance the algorithm's capability to detect small targets. Moreover, fire and smoke are two crucial target categories in fire scenarios. Given the inherent complexity of fire image backgrounds, which frequently contain interference from nonfire categories, the fire dataset targets are classified as fire, smoke, and default. Results: Experimental results indicate that the SlimNeck-YOLOv8 algorithm delivers superior fire detection performance compared with other related advanced algorithms. Relative to the YOLOv8 algorithm, its recall rate is 2.7% higher, its mean average precision (mAP) is 0.2% higher, and its detection speed is 35 frames/s faster, while its computational burden is effectively reduced. Conclusions: By integrating SlimNeck and SAHI to optimize the network structure and inference framework of the YOLOv8 algorithm, respectively, the improved algorithm is utilized for detecting fire and smoke and has, to a certain extent, remedied the shortcomings of the YOLOv8 algorithm for this purpose.
To verify the performance and effectiveness of the proposed algorithm, the model is trained not only on the fire dataset but also on the COCO128 dataset under precisely the same training epochs and parameters, with the specific aim of conducting comprehensive tests to accurately evaluate model performance. The improved algorithm proposed in this study achieves the expected goals of significantly enhancing the mAP, recall, and speed of the YOLOv8 algorithm for detecting fire and smoke while reducing the rates of missed and false detections. This advancement holds great promise for enhancing the reliability and effectiveness of fire detection systems, providing earlier and more accurate warnings to minimize the potential losses and damages caused by fires. The combination of innovative techniques and targeted optimizations presented in this research offers valuable insights and practical solutions for fire safety and related applications.
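Slicing-aided hyper inference, adopted above, runs the detector on overlapping image tiles so that small fire and smoke targets occupy more pixels, then merges the per-tile detections back into full-image coordinates. The sketch below only generates the tile boxes; the tile size, overlap ratio, and function name are assumptions (the SAHI library handles slicing and merging itself).

```python
def slice_boxes(img_w, img_h, tile=640, overlap=0.2):
    """Generate overlapping tile boxes (x0, y0, x1, y1) covering an image.

    Each tile is at most tile x tile pixels; consecutive tiles overlap by
    roughly the given ratio so that objects straddling a boundary are still
    fully visible in at least one tile.
    """
    step = int(tile * (1 - overlap))  # stride between tile origins
    boxes = []
    y0 = 0
    while True:
        y1 = min(y0 + tile, img_h)
        x0 = 0
        while True:
            x1 = min(x0 + tile, img_w)
            boxes.append((x0, y0, x1, y1))
            if x1 >= img_w:
                break
            x0 += step
        if y1 >= img_h:
            break
        y0 += step
    return boxes
```

The detector is then run on each box's crop, and per-tile detections are shifted by (x0, y0) and de-duplicated (e.g. via non-maximum suppression) to form the final predictions.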

  • Fire in Subterranean Spaces and Tunnels
    Nie YANG, Caiyi XIONG, Jiaqi CHENG
    Journal of Tsinghua University(Science and Technology). 2025, 65(4): 714-720. https://doi.org/10.16511/j.cnki.qhdxxb.2024.27.055
    Abstract (348) PDF (251) HTML (142)   Knowledge map   Save

    Objective: Tunnel fires pose remarkable challenges for evacuation and fire rescue operations due to inadequate ventilation and associated hazards, such as smoke accumulation, elevated temperatures, rapid heat release rates (HRRs), and severely reduced visibility. While various monitoring techniques, such as thermocouples, optical fibers, and CCTV cameras, have been proposed to monitor fire development trends and assist in firefighting and evacuation efforts, obtaining critical tunnel fire information, specifically the real-time fire HRR and fire source location, remains challenging. These difficulties arise mainly because conventional detection methods are often disrupted by high temperatures or obstructed by dense smoke, hindering effective information transmission. Hence, an improved method to predict tunnel fires is urgently needed. Methods: In this study, external smoke images, i.e., the smoke structure observed from outside the tunnel gate, and CNN-based deep-learning algorithms are used to predict the real-time fire HRR and location within the tunnel. A 100-m full-scale tunnel is selected as the target, and its behavior is simulated using the Fire Dynamics Simulator (FDS) to form an image database. During simulation, fire parameters such as maximum HRR, soot yield rate, and location are varied based on typical vehicle types found in real tunnels, resulting in approximately 900 different tunnel models that generate diverse external smoke morphologies. The simulated smoke images are captured at 1 s intervals from four observation angles: front and side views at the left and right tunnel gates. As a result, approximately 388,800 smoke images are collected in the database. For the deep-learning algorithm, the VGG16 model, proposed by the Visual Geometry Group at the University of Oxford, is employed as the target AI model for tunnel fire prediction. During training, the VGG16 model continuously refines its internal parameters to minimize the error between its predictions and the FDS simulation results. 
Results: Results show that the proposed method can effectively predict real-time variations in fire HRR and location. The model trained using front-view images from both tunnel gates achieves the highest prediction accuracy, with an HRR error of less than 25% and a location error of less than 10 m. Additional tunnel simulations were conducted to further validate the robustness of the proposed method. In these simulations, the fire source is not stationary but moves continuously within the tunnel at velocities ranging from 0 to 2 m/s, simulating a scenario in which a vehicle catches fire but does not stop immediately. The results show that, although trained on stationary fire cases, the AI model still maintains high accuracy in predicting the moving fire source, with small HRR and location errors, confirming the effectiveness of the smoke image-based detection method. Conclusions: Notably, further efforts are still necessary before this method can be applied in real tunnels, because the current work considers neither the complex background interference in actual smoke images nor the impacts of environmental factors such as wind, sprinklers, and exhaust systems on the external smoke structure. Nevertheless, this study represents an important first step toward predicting tunnel fires from external smoke, which could play a valuable role in future smart fire prediction and firefighting applications.
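The database bookkeeping implied by these figures can be checked directly; the per-view frame count of 108 (about 108 s of footage sampled at 1 s intervals) is inferred arithmetically from the reported totals rather than stated in the abstract:

```python
models = 900            # FDS tunnel fire scenarios
views = 4               # external observation angles per scenario
total_images = 388_800  # reported size of the smoke-image database

# Frames per scenario per view implied by the reported totals.
frames_per_view = total_images // (models * views)
assert models * views * frames_per_view == total_images
print(frames_per_view)  # 108
```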

  • Vehicle and Traffic
    Haoran LI, Yunpeng LU, Shucai XU, Sifa ZHENG, Chuan SUN
    Journal of Tsinghua University(Science and Technology). 2025, 65(5): 948-958. https://doi.org/10.16511/j.cnki.qhdxxb.2024.21.023
    Abstract (342) PDF (91) HTML (168)   Knowledge map   Save CSCD(1)

    Objective: With the swift progression of autonomous driving technology, the widespread reliance of current systems on uniform behavioral models for decision-making and path planning has become a crucial concern. This generalized approach disregards variations in driving behavior among drivers, making it difficult to produce driving behavior that aligns with drivers' expectations in complex and dynamic traffic scenarios and thereby reducing comfort and trust in autonomous vehicles. This study focuses on lane changing, a common yet critical driving maneuver, aiming to optimize planning strategies by incorporating driver characteristics to match individual driving styles. Methods: This study comprehensively analyzes data derived from naturalistic driving experiments. Kalman filtering is used to detect and eliminate anomalies in the raw data, reducing noise interference. Integrating temporal constraints into the fuzzy C-means clustering algorithm preserves the chronological order of the clustered data, which is essential for analyzing sequential events such as lane change maneuvers. Lane changing requires lateral and longitudinal vehicle control with distinct operational characteristics across the phases of the maneuver. By clustering the entire lane-changing process into three major categories, C1, C2, and C3, representing the preparation, execution, and completion stages, respectively, this study analyzes disparities in driver behavior during these phases. According to the characteristics of lane-changing scenarios, relevant variables are selected for in-depth examination. Independent-sample t-tests are then conducted between drivers for each variable, and variables with a high proportion of insignificant t-values are eliminated. This process identifies personalized indicators that reflect driver-specific traits during lane changing. 
Subsequently, an artificial potential field (APF) model is established for the lane-changing scenario. The APF method uses virtual attractive and repulsive forces to guide the vehicle toward a path of decreasing potential energy, effectively avoiding obstacles while moving toward the target position. Variations in the APF parameters lead to different planned paths. By leveraging the extracted personalized indicators, the APF model for lane changing is customized, yielding paths that align with individual driving styles. Another pivotal consideration is the planning of lane-changing speeds. Given the notable variation in drivers' speed preferences, this study proposes a lane-changing speed planning algorithm based on a quintic polynomial function. This ensures that the mean duration of acceleration and the maximum acceleration limit during the execution phase align with each driver's speed control habits and that a smooth velocity profile is maintained throughout the lane-changing maneuver. Conclusions: This study proposes a lane-changing planning method for autonomous vehicles that considers driver differences. Simulation results confirm that the proposed personalized lane-changing planning approach not only produces paths that align with individual driving styles but also regulates lane-changing velocities in accordance with each driver's operational habits. By quantifying behavioral variations, developing personalized APF models, and implementing customized speed planning strategies, this study exemplifies how to tackle individualization challenges in autonomous driving, representing a step toward a human-centric and intelligent future for autonomous vehicle technology.
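The quintic-polynomial backbone of such a speed planner can be illustrated with the classic minimum-jerk form: with zero velocity and acceleration at both ends, the position follows s(tau) = s0 + ds(10 tau^3 - 15 tau^4 + 6 tau^5). This is a generic sketch of the polynomial itself, not the paper's personalized formulation; the driver-specific acceleration limits and duration tuning are omitted:

```python
def quintic_profile(s0, sf, T, n=5):
    """Quintic (minimum-jerk) position profile over duration T with zero
    start/end velocity and acceleration; returns samples of (t, s, v)."""
    ds = sf - s0
    samples = []
    for i in range(n + 1):
        t = T * i / n
        tau = t / T  # normalized time in [0, 1]
        s = s0 + ds * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
        v = ds / T * (30 * tau**2 - 60 * tau**3 + 30 * tau**4)
        samples.append((t, s, v))
    return samples

# Example: a 3.5 m lateral shift completed in 4 s.
profile = quintic_profile(0.0, 3.5, 4.0)
```

The boundary conditions make the profile start and end at rest, which is what yields the smooth velocity curve the abstract describes.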

  • Computational Linguistics
    Yuanlai WANG, Yu BAI, Peng LIAN
    Journal of Tsinghua University(Science and Technology). 2025, 65(5): 844-853. https://doi.org/10.16511/j.cnki.qhdxxb.2024.21.032
    Abstract (342) PDF (90) HTML (107)   Knowledge map   Save

    Objective: ID-based recommendation methods in recommender systems use unique identifiers of users or items to generate suggestions. However, these methods often encounter challenges such as data sparsity and cold-start problems, especially when using single-domain data. Cross-domain ID-based recommendation can mitigate cold-start issues by relying on overlapping users or items across domains; in practice, however, such overlap is often absent. To address this, latent semantic patterns in the behavioral networks of different recommender domains can be leveraged, extracting user preferences for items from discrete ID data and thereby compensating for the limited shared information between domains. Methods: Based on the study of interaction behaviors, this paper assumes the existence of latent pattern correlations between user-item interactions across different domains. A latent factor connects users across domains, leading some users to exhibit similar interaction behaviors in different contexts. These shared characteristics are referred to as interaction behavior semantic patterns. The proposed pattern-enhanced ID recommendation method strengthens ID-based recommendation by leveraging these semantic patterns. In the target-domain recommendation task, auxiliary-domain information is introduced, and information from both the auxiliary and target domains is jointly encoded using a graph neural network. By incorporating interaction behavior semantic patterns, user-item interaction and item description information from the auxiliary domain are transferred to the target domain, enriching the semantics of interaction behaviors in ID-based recommendation within the target domain. Results: This study conducts experiments on nine public datasets. 
User-item ID interaction data from the Yelp2018, Amazon-Kindle, Alibaba-iFashion, Amazon-Electronic, Book Crossing, MovieLens10M, MovieLens20M, and MovieLens25M datasets serve as target-domain data, while item description data from the Citeulike-a dataset serve as the auxiliary domain. There are no overlapping user or item IDs between these domains. Experimental results show that the proposed method outperforms current state-of-the-art methods, with improvements of 3%-30% in Recall@20 and 1%-40% in NDCG@20. Conclusions: This study proposes an ID recommendation method enhanced by interaction behavior semantic patterns, based on the assumption of latent pattern correlations in user-item interactions across domains. By introducing these semantic patterns, the method transfers user-item interaction information and item description information from the auxiliary domain to the target domain, thereby enhancing semantic understanding in ID-based recommendation within the target domain. Experimental results validate the ability of the proposed method to transfer semantic information in the absence of overlapping users and items, yielding better recommendation performance and supporting the proposed assumption. Additionally, experiments on ID recommendation tasks in multiple domains show that interaction behavior patterns transfer better between similar domains: the closer the auxiliary domain is to the target domain, the more notable the improvement in the target domain's ID recommendation results.
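The two reported metrics have compact standard definitions; a minimal sketch of Recall@K and binary-relevance NDCG@K over a single ranked list, independent of the paper's full evaluation protocol:

```python
import math

def recall_at_k(ranked, relevant, k):
    """Fraction of the relevant items that appear in the top-k ranking."""
    hits = sum(1 for item in ranked[:k] if item in relevant)
    return hits / len(relevant)

def ndcg_at_k(ranked, relevant, k):
    """Binary-relevance NDCG: DCG of the ranking over the ideal DCG."""
    dcg = sum(1 / math.log2(i + 2)
              for i, item in enumerate(ranked[:k]) if item in relevant)
    ideal = sum(1 / math.log2(i + 2)
                for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal else 0.0
```

Recall@K only counts hits, while NDCG@K also rewards placing relevant items near the top, which is why the two can improve by different margins.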

  • Fire in Buildings and Timber Structures
    Yunfa WU, Yanqing Wang, Sarula Chen, Zehao Chen, Chao Ding
    Journal of Tsinghua University(Science and Technology). 2025, 65(4): 644-654. https://doi.org/10.16511/j.cnki.qhdxxb.2024.27.045
    Abstract (340) PDF (88) HTML (124)   Knowledge map   Save

    Objective: Fire risk has consistently posed a significant challenge in traditional villages owing to their diverse layouts and architectural elements, complex geographical environments, and the inherent flammability of their building materials. Most research on fire spread models focuses on individual buildings or forest fires, neglecting the unique conditions of traditional villages, and safety concerns limit field studies. It is therefore crucial to study fire spread dynamics in Huizhou's traditional villages and develop a quantitative model that enhances both fire safety and emergency response capabilities in these communities. Methods: This paper constructs a disaster field model using cellular automata to simulate fire spread under village conditions, considering factors such as fire dynamics, environmental characteristics, building materials, and layout. The model was validated against the Miaozhai fire incident in Wenquan Village, Jianhe County, Guizhou Province (February 2016), analyzing how ignition points, wind speeds, and wind directions influence potential ignition sites, spread paths, and impact zones. Computer simulation avoids the risks associated with live-fire experiments while improving efficiency. By incorporating multiple influencing factors into the prediction framework, the approach models how fires spread through village environments, using initial conditions describing village elements and fire states to produce visual data of fire spread patterns. Because most cellular automata studies address forest fires and are thus unsuitable for traditional villages, this paper specifically investigates Hongcun, a densely built village in Anhui Province, and proposes a model tailored to its fire risks that is applicable to similar settings in the region. Results: Findings indicate that simulated outcomes align closely with real-world observations, confirming the method's feasibility. 
Fires follow distinct pathways depending on the ignition source, and densely clustered buildings are more susceptible to extensive fire spread than sparsely built areas over the timelines analyzed. Layout configuration significantly influences both the paths flames take and the extent of their spread after ignition. Higher building densities are linked to faster fire spread, and wind plays a crucial role in determining how far the fire reaches. Wind direction strongly shapes the fire path, largely pushing it downwind and away from the ignition point. Under prevailing easterly or southerly winds, however, the burned range is considerably reduced, limiting the overall impact despite ongoing combustion nearby. Conversely, strategically placed roadways and water systems effectively slow fire progression and limit its lateral spread, protecting surrounding areas beyond the immediately affected zone. Conclusions: The developed modeling framework offers detailed risk analysis for clustered traditional settlements, facilitating informed decision-making. It supports preventative measures, including hazard identification protocols and isolation strategies targeting identified vulnerabilities, and provides valuable insights for future planning, optimizing the allocation of firefighting resources and supporting sustainable development practices.
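The cellular-automaton core of such a model can be sketched as a grid update rule: a flammable, unburned cell ignites when a burning neighbor can reach it, and burning cells burn out. This is a minimal deterministic sketch; the single wind-offset term is an assumed placeholder for the paper's multi-factor ignition model (materials, density, firebreaks, probabilistic spread):

```python
def step(grid, flammable, wind=(0, 0)):
    """One CA update: 0 = unburned, 1 = burning, 2 = burned out.
    A flammable unburned cell ignites if a (wind-shifted) orthogonal
    neighbor is burning; burning cells burn out after one step.
    A nonzero wind offset biases ignition toward cells downwind."""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:
                new[r][c] = 2
            elif grid[r][c] == 0 and flammable[r][c]:
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    nr, nc = r + dr - wind[0], c + dc - wind[1]
                    if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 1:
                        new[r][c] = 1
                        break
    return new
```

Marking road or water cells as non-flammable reproduces, in miniature, the firebreak effect the results describe.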

  • Process Systems Engineering
    Chaopeng TENG, Cheng JI, Fangyuan MA, Jingde WANG, Wei SUN
    Journal of Tsinghua University(Science and Technology). 2025, 65(5): 825-832. https://doi.org/10.16511/j.cnki.qhdxxb.2024.21.036
    Abstract (338) PDF (91) HTML (87)   Knowledge map   Save

    Objective: During chemical production, the different sampling frequencies of different variables generate a substantial amount of unlabeled data, which is challenging to use effectively, resulting in data waste. Additionally, distributed control systems frequently produce noisy data due to environmental interference and aging measurement instruments, complicating soft sensing modeling. Furthermore, in semi-supervised tasks, unsupervised components can undermine the accuracy of supervised tasks. To address these issues, this study proposes a semi-supervised soft sensing method for product quality based on a ladder network, enabling accurate and timely determination of key product quality and enhancing operational efficiency. Methods: A two-step variable screening method, maximal information coefficient (MIC) screening followed by minimum redundancy maximum relevance (mRMR) selection, was used to screen auxiliary variables. MIC was first applied to eliminate low-correlation variables, and mRMR was then used to remove redundant variables within the auxiliary set, yielding an optimal selection for modeling. The ladder network-based soft sensing method was then established, improving noise resistance by injecting disturbances into each encoder layer and reconstructing noise-free features layer by layer through the decoder. Skip connections were added between the encoder and decoder to extract more information from unlabeled data, sharpening the focus on supervised tasks and strengthening the model's robustness and generalization. Results: This method was applied to the methanol-to-olefin (MTO) process, termed DMTO. The MIC and mRMR screening reduced 203 auxiliary variables to an optimal 50. After preprocessing, several soft sensor models were established for comparison. Results showed that unlabeled samples improved the effectiveness of supervised soft sensing tasks, with the proposed method improving various evaluation metrics. 
Residual analysis further indicated that the prediction residuals of the ladder network-based semi-supervised method closely followed a standard normal distribution, validating the method's superiority. Conclusions: Compared with supervised and other semi-supervised learning methods, the ladder network demonstrates superior prediction accuracy and generalization in the soft sensing of ethylene products in the DMTO process. The proposed approach offers promising applications for real-time monitoring and control of product quality in chemical production.
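The greedy mRMR loop behind the second screening step can be sketched as follows. Plain Pearson correlation stands in for the MIC relevance score, and the tiny named variables are hypothetical, so this is an illustrative approximation of the screening procedure rather than the paper's implementation:

```python
import statistics as st

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    mx, my = st.fmean(x), st.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def mrmr_select(candidates, target, k):
    """Greedy mRMR: at each step add the variable with the highest
    relevance to the target minus mean redundancy with chosen ones."""
    chosen = []
    while len(chosen) < k and len(chosen) < len(candidates):
        def score(name):
            rel = abs(pearson(candidates[name], target))
            red = (st.fmean(abs(pearson(candidates[name], candidates[c]))
                            for c in chosen)
                   if chosen else 0.0)
            return rel - red
        best = max((n for n in candidates if n not in chosen), key=score)
        chosen.append(best)
    return chosen
```

An exact duplicate of an already-chosen variable scores poorly (relevance minus redundancy of 1), so the loop prefers an informative but less redundant variable instead, which is the behavior the 203-to-50 reduction relies on.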

  • Advanced Ocean Energy Technology
    Yajun REN, Sheng LI, Wei SHI, Jungang HAO, Ling ZHU, Shuai LI
    Journal of Tsinghua University(Science and Technology). 2025, 65(8): 1387-1402. https://doi.org/10.16511/j.cnki.qhdxxb.2025.27.020
    Abstract (332) PDF (108) HTML (151)   Knowledge map   Save

    Significance: The global offshore wind power industry is experiencing rapid growth, with floating offshore wind energy technology emerging as a pivotal solution for exploiting wind resources in deep-sea areas. The floating foundation, a critical component of floating offshore wind power systems, plays an essential role in ensuring the stability and safe operation of wind turbines. However, the design and analysis of these foundations are fraught with challenges due to their intricate system composition, distinctive dynamic characteristics, and the harsh marine environment they must endure. Traditional design methods, which rely heavily on experience and trial and error, are not only inefficient but also fail to integrate multidisciplinary theories, highlighting the need for more scientific design and optimization tools. Progress: Deepening research, technological advances, and accumulated development experience have led to the application of multidisciplinary optimization design and analysis techniques in the floating wind power sector. The field of floating offshore wind power foundation optimization has seen significant advances in recent years, with a shift toward more sophisticated multidisciplinary, multi-objective optimization techniques. These techniques have been crucial in addressing the complex interplay among factors such as structural mechanics, hydrodynamics, aerodynamics, and economic considerations. Multidisciplinary design analysis and optimization (MDAO) techniques, which originated in aerospace engineering, enable system-wide optimization by considering interdisciplinary interactions, which is crucial for managing the complex dynamics between wind turbines and environmental loads. Among optimization algorithms, genetic algorithms, particularly the non-dominated sorting genetic algorithm II (NSGA-II), have become prominent owing to their ability to handle multiple conflicting objectives simultaneously. 
These algorithms have been effectively used to identify sets of Pareto-optimal solutions, providing a range of options that balance performance criteria such as cost, structural fatigue, motion response, and tower acceleration. Frequency-domain analysis has been widely used for early-stage optimization research because of its efficiency in capturing the key dynamic characteristics of floating structures. However, the industry has also recognized the need for time-domain simulations to capture the nonlinear dynamics of the system, especially when precision is paramount. Hybrid methods that combine the benefits of frequency- and time-domain analyses, together with surrogate models, are being developed to balance computational efficiency and accuracy. These techniques offer scientific guidance for the scale planning and optimization design of floating foundations, striving for an optimal balance of cost, performance, and environmental adaptability. This paper provides a comprehensive review of the evolution and application of multi-objective, multidisciplinary optimization methods in the scale optimization of floating offshore wind power foundations. Conclusions: The integration of multi-objective, multidisciplinary optimization technology is of paramount importance for the optimized design of floating offshore wind power foundations. By merging structural optimization concepts with efficient optimization algorithms and precise simulation tools, it is possible to enhance design efficiency, shorten the design cycle, and more scientifically and swiftly obtain floating foundation designs with superior comprehensive performance. This approach not only streamlines the design process but also ensures that the final scheme is more robust and cost-effective, meeting the stringent requirements of the offshore wind power industry. 
Looking ahead, the field is expected to see further integration of advanced computational methods, machine learning techniques, and high-fidelity simulations to push the boundaries of floating offshore wind power foundation design, leading to more efficient, cost-effective, and durable solutions that can withstand the test of time and the rigors of the marine environment.
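The Pareto machinery at the heart of NSGA-II can be illustrated by its first step, non-dominated sorting. A minimal sketch for a two-objective minimization problem (for example, cost versus peak motion response), independent of any specific floating-foundation model:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (minimization convention)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (cost, motion-response) candidates for illustration.
candidates = [(1, 9), (3, 7), (5, 5), (9, 1), (6, 6), (4, 8)]
```

The surviving front is exactly the "range of options" the review describes: no member can be improved in one objective without worsening another.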

  • Microgravity Combustion
    Yucheng LIU, Xingxian LI, Yuzhe WEN, Huilong ZHENG, Xiaofang YANG, Xiaowu ZHANG, Yufeng HE, Jiaokun CAO, Changshuai DU, Qiang YAO
    Journal of Tsinghua University(Science and Technology). 2025, 65(9): 1609-1620. https://doi.org/10.16511/j.cnki.qhdxxb.2024.27.039
    Abstract (329) PDF (150) HTML (151)   Knowledge map   Save

    Significance: Experimental conditions in microgravity differ considerably from those in Earth's normal gravity. Combustion experiments conducted in microgravity eliminate the effects of natural convection and simplify the complex factors governing combustion processes. Such experiments can reveal many physical and chemical phenomena that are masked by buoyancy under normal gravity, providing significant insights for fundamental scientific research. Microgravity combustion experiments also allow deeper investigation into the fundamental physical phenomena underlying advanced combustion issues, serving as a crucial means of basic research. This research supports China's energy and power industries in addressing needs related to energy conservation, emission reduction, and the green energy transition, as well as fire prevention on the ground and in space. Progress: The China Space Station (CSS) is planned to support in-orbit combustion science experiments using multiple fuel types, including gaseous, liquid, and solid fuels. The first series of CSS combustion experiments consisted of gaseous combustion experiments, several of which were conducted in the combustion science rack (CSR). This article reviews the progress of microgravity jet flame research and introduces the types of scientific research that can potentially be supported by the combustion science application system and the gaseous combustion experiment insert (GCEI) in the CSR. The combustion science experiment system provides the GCEI with necessary resources such as water cooling, electricity, and gas exhaust. The GCEI supports gas-flow regulation, allowing adjustment of the gas type, flow rate, and ignition power according to each project's scientific objectives. The GCEI features a universal burner platform and can adjust the gas composition, flow rate, and ignition energy, and various types of flames can be generated by replacing the project burners. 
Optical diagnostics conducted outside the optical windows of the combustion chamber provide data on flame dynamics, flow fields, and the spatial distributions of OH and CH. To date, astronauts aboard the CSS have installed an igniter in the gas experiment module and mounted the GCEI in the CSR combustion chamber. The GCEI automatically completes a series of actions, including configuring the combustion environment gas, ejecting the fuel gas, heating the igniter, determining parameters, performing optical diagnostics, filtering and circulating the chamber gas, and exhausting waste gases. Because of the absence of buoyancy, microgravity flames differ considerably from normal gravity flames. After the experimental data are transmitted to the ground operation control center, the experimental conditions are controlled and monitored to confirm the normal operation of each subsystem. The fuel, oxidizer, and inert-gas flow rates are set according to predetermined delays and settings, demonstrating the normal operation of key modules such as the GCEI's fuel gas cylinder module, gas-distribution solenoid valve, and igniter, as well as the oxidizer and diluent subsystems of the CSR. The image-intensifier camera of the combustion diagnostic subsystem captures the corresponding OH and CH emission images, showing an increase in flame width and a rapid decrease in flame height until localized extinction occurs at the end of the non-premixed flame. Conclusions and Prospects: The present study verifies that the GCEI can effectively realize microgravity gaseous flames in orbit and provides a support and design basis for subsequent diversified combustion science experiments. The GCEI is expected to provide valuable data and platform support for subsequent microgravity experiments aboard the CSS.

  • People Evacuation and Risk Assessment
    Dongyue ZHAO, Qian CHEN, Shibin NIE, Changkun CHEN, Wei PENG, Shihua REN
    Journal of Tsinghua University(Science and Technology). 2025, 65(4): 732-741. https://doi.org/10.16511/j.cnki.qhdxxb.2025.27.006
    Abstract (329) PDF (138) HTML (126)   Knowledge map   Save

    Objective: The operational safety resilience of urban lifeline systems refers to their ability to maintain critical functions and recover swiftly when faced with multiple disruptive events during their operational period. This resilience is crucial for ensuring the safe and uninterrupted operation of these critical systems. However, existing resilience evaluation methods often risk inaccuracies under multi-event scenarios, making it challenging to accurately assess the systems' overall resilience. To address this issue, this paper proposes a comprehensive resilience evaluation model and methodology tailored for urban lifeline systems during their operational phase. The approach considers multi-event contexts and is built on a detailed analysis of resilience mechanisms. It incorporates key factors such as resistance, recovery, and adaptability. Methods: This study presents an evaluation method for assessing the operational safety resilience of urban lifeline systems under multi-event scenarios. The methodology is structured around three main steps: analyzing resilience mechanisms, constructing a resilience evaluation model for multi-event contexts, and applying the method in a case study. First, using resilience curve theory, scenario assumptions and theoretical deductions were used to analyze the specific roles and effects of resistance, recovery, and adaptability during operational performance loss and recovery under both single and multiple disruptive event scenarios. This analysis facilitated a systematic understanding of the resilience mechanisms governing urban lifeline systems. Second, building on existing resilience metrics, the study identified misjudgment risks and limitations in current resilience evaluation models. This led to the development of an enhanced model that integrates the advantages of existing methods. The improved model accounts for resilience mechanisms, computational efficiency, and reasonable distribution. 
An exponential function was utilized to establish an evaluation model for urban lifeline system resilience during the operational period. Key components, including system performance indicator settings, performance calculations, the resilience evaluation model, and a comprehensive resilience judgment matrix, were incorporated to form a "four-step" evaluation process tailored to multi-event scenarios. Finally, the methodology was validated through a case study on the Hong Kong (China) MTR East Rail Line from 2005 to 2009. Service performance indicators for the metro system were identified, and the causes of annual changes in comprehensive resilience levels were calculated and analyzed. The results demonstrated the feasibility and effectiveness of the proposed evaluation method. Results: The research results revealed the following insights: (1) During the operational period, the resilience mechanisms of urban lifeline systems differ significantly between single and multiple disruptive events. In single-event scenarios, the system primarily relies on resistance to mitigate or eliminate disruptions and uses recovery to restore functionality. In contrast, in multiple-event scenarios, the system leverages adaptability, continuously optimizing resistance and post-event recovery. This enhances its capacity to respond to regular, extreme, or unknown events. (2) The proposed evaluation model, incorporating key resilience factors like resistance, recovery, and adaptability, assesses the comprehensive resilience level of the system throughout its operational period. It also measures the average resilience in single-event scenarios and cumulative resilience under multiple events. (3) Practical application demonstrates that a high average resilience in single-event scenarios does not necessarily correlate with high cumulative or comprehensive resilience. 
Enhancing comprehensive resilience is a long-term, dynamic process that relies on adaptability to repeatedly refine system resistance and recovery, minimizing the impact of disruptive events and accelerating recovery. These findings validate the feasibility and effectiveness of the proposed evaluation method. Conclusions: This method enhances the safe operation of urban lifeline systems and serves as a valuable methodological reference for future research on resilience enhancement strategies. It also holds significant potential for application in the resilience analysis of complex coupled systems.
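The resilience-curve idea underlying such evaluations can be sketched numerically: with system performance P(t) normalized to 1, a generic resilience index over an observation window is the mean retained performance, so deeper losses and slower recoveries both lower the score. This trapezoidal sketch illustrates only the generic curve-based index; the paper's actual model uses an exponential formulation and a comprehensive judgment matrix, which are omitted here:

```python
def resilience_index(performance, dt=1.0):
    """Mean normalized performance over the window (trapezoidal rule),
    for samples of P(t) in [0, 1] taken every dt time units."""
    area = sum((performance[i] + performance[i + 1]) / 2 * dt
               for i in range(len(performance) - 1))
    return area / (dt * (len(performance) - 1))

# Two disruption-and-recovery histories: the second absorbs the shock
# better and recovers faster, so its index is higher.
slow = [1.0, 0.4, 0.5, 0.6, 0.8, 1.0]
fast = [1.0, 0.7, 0.9, 1.0, 1.0, 1.0]
```

Comparing such indices across successive events is one simple way to see the adaptability effect the results describe: a system that learns from earlier disruptions shows progressively shallower, shorter performance dips.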

  • Jingjing CHEN, Xin HU, Xinke SHEN, Dan ZHANG
    Journal of Tsinghua University(Science and Technology). 2025, 65(12): 2341-2350. https://doi.org/10.16511/j.cnki.qhdxxb.2025.21.049
    Abstract (328) PDF (190) HTML (284)   Knowledge map   Save

    Significance: Empowering machines to understand human emotions remains one of the primary challenges in developing artificial intelligence (AI). The affective brain-computer interface (BCI), which decodes emotional states from brain signals, is an emerging field combining psychology, neuroscience, and AI. Brain signals are inherently uncontrollable, contain rich emotion-specific information, and provide a promising physiological basis for developing computing systems that support continual emotion monitoring. Since its inception, affective BCI research has required close collaboration across disciplines: information science for feature engineering and algorithm development, psychology for theoretical frameworks of emotion, and neuroscience for revealing the neural mechanisms underlying emotional processes. This demand for multidisciplinary cooperation forms the core focus of this review. Specifically, this paper focuses on how insights from psychology and neuroscience can inspire and advance affective BCI research. Progress: We summarize current progress at three levels: theoretical, technical, and applied. At the theoretical level, recent advances in affective science offer new perspectives for shaping affective BCI paradigms. The traditional discrete and dimensional frameworks laid the groundwork for emotion decoding but often overlook positive emotions and the dynamic intensity of affective experiences. Recent emotion theories emphasizing refined positive emotions, mixed emotions, and context-dependent emotions provide valuable directions for improving emotion representation. Affective computing should align with these developments, integrating them into computational models to enhance ecological validity. In turn, affective BCI research may also contribute to psychology by offering evidence to test and refine emotion theories, fostering reciprocal progress across disciplines. 
At the technical level, neuroscience provides crucial insights for building more robust affective BCIs. Findings on emotional valence lateralization and distributed emotion-associated brain representations can inform the design of models that better capture emotional processing complexity. Moreover, inter-subject brain synchronization research has revealed mechanisms that enhance model generalizability across users, suggesting that incorporating neuroscientific findings can substantially improve the performance and reliability of affective BCIs. At the application level, affective BCIs are expanding beyond emotion recognition toward understanding emotion-related individual differences. Variability between individuals, often treated as noise, may instead offer meaningful information about personality traits or mental health conditions. In the long term, the goal of affective BCI systems may evolve from accurately identifying emotions to comprehensively understanding each individual's psychological tendencies and dynamic affective patterns across multimodal neural and behavioral data. We advocate for stronger integration between affective BCI technologies and practical domains. Such integration allows practical demands to drive technological development, ensuring that affective BCI remains human-centered. Conclusions and Prospects: Finally, we discuss the technical challenges of affective BCI, including extending algorithms from controlled laboratory settings to real-world scenarios, advancing sensor technology for more convenient and reliable brain-signal acquisition, and leveraging large models to enhance affective BCI performance. Specifically, we emphasize the vital role of ethical considerations: as affective BCIs move from passive emotion detection toward active emotional support or intervention, the responsibility of humans as rational moral agents in a future era of human-computer symbiosis must be considered, ensuring the autonomy of human emotions.

  • Fire in Buildings and Timber Structures
    Huiling JIANG, Leiyin YANG, Liang ZHOU
    Journal of Tsinghua University(Science and Technology). 2025, 65(4): 634-643. https://doi.org/10.16511/j.cnki.qhdxxb.2024.27.034
    Abstract (324) PDF (66) HTML (112)

    Objective: The unique design and complex morphological characteristics of roof structures in the "purlin-type" ancient buildings of the Ming and Qing dynasties significantly influenced the spread of smoke. However, the existing fire detection systems used in ancient structures lack sufficient consideration of the impact of roof architecture on the placement of smoke detectors, which makes it challenging to realize early fire detection effectively and accurately. Therefore, further research on the response time of fire detectors considering purlin height is necessary to strengthen fire prevention and control in ancient buildings. Methods: Through a job-site survey of several gable-roofed buildings in the Forbidden City, the existing smoke detectors were found to be primarily installed on the purlins below the ridge or on both sides of the ceiling. However, the scientific basis for these installation positions remains unclear. Thus, this study selected typical gable-roofed buildings with a slope of 27.41° constructed during the Ming and Qing dynasties as the research object. A fire dynamics simulator was utilized for numerical fire simulation to analyze smoke movement under different fire source positions (center, edge, and corner) and various purlin heights (low: 10-20 cm; middle: 30-40 cm; high: 50-60 cm). The research further explored the influence of each fire position and purlin height on smoke detector response performance by setting up a 9 × 11 smoke detector array and several slices of smoke mass fraction. In this model, as the purlin height increased, the detectors arranged under them maintained their x and y coordinates, while the z coordinate moved down the corresponding height. The position of the detectors on the ceiling remained unchanged. Results: The findings showed the following: 1) The variation law of the smoke propagation path: In the scenarios of central and edge fires, smoke primarily spread along the horizontal direction of the main ridge. 
In corner fire scenarios, when the purlin height was low, the smoke tended to expand upward along the sloping roof; when the purlin height was high, the smoke propagated along the purlin in a "stepped" path. 2) The response time sequence of smoke detectors: For the center and edge fires, the difference in response time of smoke detectors caused by purlin height was approximately 30 s. For the corner fire, when the purlin height was below 20 cm, the detector at the main ridge responded within 60 s, and the variation in the response time of smoke detectors caused by the purlin height could reach up to 45 s. 3) The locations suggested for detector installation: When the purlin height is below 30 cm, the smoke detector should be installed at the center of the main ridge; when the purlin height is above 30 cm, the detectors should be placed on the ceiling near the center of the main ridge on both sides. Conclusions: These findings provide technical support for the rational placement of fire detectors in similar "purlin style" buildings from the Ming and Qing dynasties to achieve comprehensive and timely early fire detection and warning in ancient buildings.
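The placement recommendation above reduces to a simple height-based rule. The sketch below is an illustrative restatement only: the 30 cm threshold comes from the abstract, while the function name and placement labels are descriptive placeholders, not exact coordinates from the study.

```python
def detector_placement(purlin_height_cm):
    """Suggested smoke-detector locations for a gable-roofed
    'purlin-type' building, given the purlin height in cm."""
    if purlin_height_cm < 30:
        # Low purlins: a single detector at the center of the main ridge.
        return ["center of the main ridge"]
    # High purlins: two detectors on the ceiling near the ridge center,
    # one on each side.
    return ["ceiling near ridge center, side A",
            "ceiling near ridge center, side B"]
```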

  • Mechanical Engineering
    Chuanhui ZHU, Zihao WANG, Zhiming ZHU, Tianyi ZHANG, Jichang GUO
    Journal of Tsinghua University(Science and Technology). 2025, 65(5): 882-890. https://doi.org/10.16511/j.cnki.qhdxxb.2024.21.024
    Abstract (322) PDF (101) HTML (145)

    Objective: Long-distance oil and gas transmission pipelines are important energy infrastructures. Currently, there are deficiencies in the automatic tracking accuracy and adaptability of external welding machines during pipeline construction. Operators often need to manually adjust external welding equipment (welding torch) to ensure the quality of the joints. Improving the intelligence of the welding process is an effective way to improve the efficiency and joint qualification rate during the on-site laying of long oil and gas pipelines. This study proposes a detection and control algorithm for the welding torch position and posture, applicable to all-position welding of workpieces with arbitrary spatial postures. Methods: This study is the first to design a multisource sensor that combines laser-structured light vision sensing with dual-axis tilt sensing. This multisource sensor combines the advantages of both types of sensing, enabling it to detect the relative position information of the welding torch, as well as the posture information of the welding torch and workpiece. Using this multisource sensor, the algorithm performs integrated calculations of the welding groove size parameters and relative position and posture parameters of the welding torch under any workpiece posture through local groove surface reconstruction. This method fully uses laser line data from images to ensure stable, applicable, and accurate parameter calculations. Through coordinate transformation, the spatial posture (αw and βw) of the local workpiece can be obtained. These integrated feature parameters provide the basis for controlling the welding torch's spatial position and posture in all-position welding of pipelines in any spatial orientation. Next, an intelligent pipeline welding system with five degrees of freedom based on multisource sensing is constructed. 
The system, combined with the designed algorithm, achieves real-time control of the welding torch position and posture (e, H, α, and β), meeting welding process requirements and enabling high-quality weld formation control during arc welding. Results: The experimental results show that the attitude angle feedback control error of the welding torch did not exceed 0.8°, the lateral position tracking deviation was within 0.25 mm, and the height tracking deviation did not exceed 0.63 mm during the pipeline all-position welding process. Compared to existing welding seam detection and tracking systems based on structured light-vision sensing, the proposed algorithm offers superior accuracy and stability. It detects not only the position deviation of the welding torch but also the posture of the welding joint on any unstructured surface with an unknown spatial posture. Conclusions: The proposed algorithm for detecting and controlling the position and posture of the welding torch can be used to achieve accurate control during all-position pipeline welding. This advancement significantly improves the intelligence level of pipeline external welding equipment and provides technical support for position and posture control of the welding torch when welding curved workpieces with unknown postures.
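The real-time correction of the four controlled quantities (e, H, α, β) can be illustrated with a minimal clipped proportional-control sketch. The abstract does not specify the actual control law, so the function name, gains, and actuator limits below are hypothetical placeholders.

```python
def torch_correction(e, H, alpha, beta, targets, gains, limits):
    """One control cycle for the welding torch: proportional corrections
    toward the target position/posture (e, H, alpha, beta), each clipped
    to a per-axis actuator limit. A hedged sketch, not the paper's
    actual controller."""
    def step(value, target, k, lim):
        # Proportional term, saturated at +/- lim.
        return max(-lim, min(lim, k * (target - value)))
    return tuple(step(v, t, k, lim) for v, t, k, lim
                 in zip((e, H, alpha, beta), targets, gains, limits))
```

A correction of -0.5 on the first axis, for example, means the lateral deviation exceeded what one cycle may correct and was clipped to the actuator limit.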

  • Public Safety
    Changkun CHEN, Yipeng BAO, Jian ZHANG, Rongfu YU
    Journal of Tsinghua University(Science and Technology). 2025, 65(6): 1019-1026. https://doi.org/10.16511/j.cnki.qhdxxb.2025.22.018
    Abstract (315) PDF (75) HTML (101)

    Objective: The safety of rescue personnel is a critical factor in determining the success of rescue operations. The ability to accurately identify the motion states of rescue personnel is key to ensuring their safety. However, monitoring their motion states in real time is challenging because of the complex and dangerous environment they operate in. This study aims to develop a method for identifying the motion states of rescue personnel based on triaxial motion data to enhance the efficiency of personnel safety monitoring during rescue missions. Methods: In this study, the MPU6050 sensor, an integrated triaxial accelerometer and gyroscope, was utilized to collect the motion data from the leg and waist of rescue personnel. This sensor was selected based on its low power consumption, automatic sleep mode, and power management features, making it suitable for long-duration rescue tasks. Before data collection, the sensors were calibrated using zero-bias calibration to reduce errors and ensure data reliability. These sensors were strategically placed on the waist and leg of the rescue personnel to capture their overall body dynamics and detailed movements. This study analyzed the acceleration data under four different motion states: standing still, working in a small area, walking, and running. The data were analyzed using time-domain feature analysis, focusing on the standard deviation of acceleration to quantify the fluctuation and stability of the motion states. This study proposed a classification mechanism based on the sum of the standard deviations of waist and leg accelerations to distinguish between different motion states. Results: The experimental results demonstrated that the proposed method effectively distinguished between different motion states. In the standing-still state, the total acceleration was close to zero, indicating no movement. 
In the state of working in a small area, the acceleration was greater than zero but remained within a small range with stable fluctuations. In the walking state, there was a significant difference between the waist and leg total accelerations, with the latter showing larger fluctuations and clear peaks and valleys. In the running state, both waist and leg total accelerations showed larger fluctuations, with the latter having a greater amplitude. The method showed high accuracy and stability in real-time monitoring of rescue personnel's motion states, effectively identifying the changes in motion states within a 2-min test period. The standard deviation analysis revealed a clear hierarchical distribution, indicating significant differences in acceleration fluctuations between different motion states. The sum of the standard deviations of waist and leg accelerations provided a reliable basis for distinguishing between the four motion states. Conclusions: This study has provided a reliable method for monitoring the motion states of rescue personnel, which can substantially improve the safety and efficiency of rescue operations. The method's ability to accurately and stably identify different motion states in real time makes it a valuable tool for ensuring the safety of rescue personnel in complex and dangerous environments. The findings of this study contribute to the development of more effective monitoring systems for rescue operations, potentially reducing the risk of accidents and enhancing the overall success rate of rescue missions.
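The classification mechanism described above, thresholding the sum of the standard deviations of the waist and leg total accelerations, can be sketched as follows. The threshold values here are illustrative placeholders, not the study's calibrated cut points.

```python
import numpy as np

def classify_motion(waist_acc, leg_acc, thresholds=(0.05, 0.5, 2.0)):
    """Classify one window of triaxial acceleration data into one of
    four motion states. waist_acc and leg_acc have shape (n_samples, 3);
    thresholds are hypothetical cut points on the summed standard
    deviation feature."""
    # Total (resultant) acceleration magnitude per sample.
    waist_mag = np.linalg.norm(waist_acc, axis=1)
    leg_mag = np.linalg.norm(leg_acc, axis=1)
    # Time-domain feature: sum of the two standard deviations.
    score = waist_mag.std() + leg_mag.std()
    t1, t2, t3 = thresholds
    if score < t1:
        return "standing still"
    if score < t2:
        return "working in a small area"
    if score < t3:
        return "walking"
    return "running"
```

In deployment the same feature would be recomputed over a sliding window, so a change of state shows up within one window length.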

  • Public Safety
    Jing LI, Qiyu FANG, Cheng GUAN, Zhizhen ZHANG, Xiao LI, Xuecai XIE
    Journal of Tsinghua University(Science and Technology). 2025, 65(6): 1079-1089. https://doi.org/10.16511/j.cnki.qhdxxb.2025.22.021
    Abstract (307) PDF (83) HTML (109)

    Objective: With the continuous expansion of power grid engineering and the rapid advancement of information technology, the complexity of accident scenarios has increased significantly, leading to an explosive growth in monitoring data. This study aims to address the limitations of current power grid emergency plans in large-scale data querying and on-site guidance, and assist emergency decision-makers in quickly generating response plans, accurately allocating emergency resources, and promoting the digitalization of emergency plans. Toward this goal, this study proposes an improved method for constructing an ontology model in power grid emergency planning. Methods: First, the traditional seven-step ontology construction method is refined based on the Toronto virtual enterprise (TOVE) and skeletal methods. In the refinement process, an "application scenario analysis" phase is introduced in the initial step to enhance the relevance of the ontology construction. Additionally, after creating ontology instances, a "qualitative and quantitative analysis" phase is adopted to verify the scientific validity and feasibility of the ontology, thereby improving model quality. Subsequently, the improved method comprehensively implements the goal determination and construction processes of the ontology model. These processes include defining knowledge in the power grid emergency planning domain; evaluating the reuse of existing ontologies; clarifying key concepts from legislation, emergency scenarios, and enterprise planning systems; and establishing class hierarchies and attributes. Next, the Protégé tool is employed for model visualization. For the example of the emergency plan for typhoon disaster events from a provincial power company, a model was constructed comprising 39 ontology categories, 24 relationship categories, and 14 attribute categories, supplemented by 408 entities, 774 relationships, and 334 attributes. 
Finally, the ontology model is applied to study the semantic network of emergency plans, designing a schema for the knowledge graph of emergency plans for power enterprises based on the resource description framework schema and web ontology language frameworks. The ontology model in the field of power grid emergency planning is visualized using Protégé. The richness and structural integrity of the model are evaluated using the HermiT 1.4.3.456 reasoning engine and the ontology quality analysis method. Results: The results indicate that the relationship richness of the model approaches 1, suggesting a rich relationship structure; the attribute richness value exceeds 1, indicating reasonable attribute settings; and the richness of major classes is 1, whereas that of minor classes is 0.9474, close to 1, demonstrating a high utilization rate of classes. Conclusions: Empirical results demonstrate that this ontology model effectively addresses the impracticality, poor usability, and weak relevance often encountered in emergency plans. It significantly enhances the efficiency of emergency personnel in response and decision-making while also improving the expressiveness and digital construction of knowledge related to power grid emergency planning.
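The richness metrics quoted above are consistent with the standard OntoQA-style definitions. A minimal sketch, assuming relationship richness = non-inheritance relations / (non-inheritance relations + subclass links), attribute richness = attributes per class, and class richness = the share of classes that have instances; the abstract does not state its exact formulas, so these are labeled assumptions.

```python
def relationship_richness(n_relations, n_subclass_links):
    # RR approaches 1 when non-inheritance relations dominate the
    # pure subclass hierarchy links.
    return n_relations / (n_relations + n_subclass_links)

def attribute_richness(n_attributes, n_classes):
    # AR > 1 means, on average, more than one attribute per class.
    return n_attributes / n_classes

def class_richness(n_instantiated_classes, n_classes):
    # CR = fraction of defined classes that actually have instances.
    return n_instantiated_classes / n_classes
```

For example, with the reported 334 attributes over 39 classes, attribute richness comfortably exceeds 1.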

  • Frontiers in New-Quality Communication Technology
    Hailong QIN, Jincheng DAI, Sixian WANG, Shengshi YAO, Kai NIU, Wenjun XU
    Journal of Tsinghua University(Science and Technology). 2025, 65(11): 2080-2094. https://doi.org/10.16511/j.cnki.qhdxxb.2025.27.046
    Abstract (304) PDF (502) HTML (273)

    Significance: End-to-end semantic communication leverages deep learning models to extract semantic features from data, enabling intent-driven communication processes that significantly enhance transmission efficiency. However, existing semantic communication paradigms based on discriminative models employ symbol-level rate-distortion optimization and perform maximum likelihood estimation solely based on received signals, failing to satisfy the perceptual requirements of users. To ensure the visual quality of transmitted data, a generative visual semantic communication paradigm has emerged, which adopts a rate-distortion-perception optimization framework to achieve alignment between data transmission and human perception through maximum a posteriori estimation. Diffusion models are advantageous for controlling visual generation and have thus become essential tools for this generative paradigm. Nevertheless, systematic organization of the technical roadmaps for empowering semantic communication using diffusion models is lacking in current research. Progress: This study addresses this gap by modeling the communication process as a mathematical inverse problem and elucidating the general methodology by which diffusion models solve data compression and transmission challenges through posterior sampling. The fundamental concepts, mathematical formulations, and sampling strategies underpinning diffusion models are systematically introduced. In addition, the general methods and key technologies employed for diffusion model-enabled generative compression and transmission are comprehensively reviewed from an inverse problem-solving perspective. Moreover, the performance metrics commonly used for objective assessment of the visual quality of transmitted data are summarized to provide a comprehensive evaluation framework. The core methodology demonstrates that generalized communication processes can be effectively modeled as inverse problems. 
The approach involves inferring the source data distribution using maximum a posteriori estimation based on channel measurements and forward operators composed of various signal processing operations. Through diffusion posterior sampling, diffusion models solve these communication inverse problems via a three-step process: first, diffusion models pre-trained on large-scale datasets are used to obtain diffusion priors; second, joint source-channel codecs are used to mitigate channel distortions in visual data transmission and construct proximal regularization terms; finally, measurement regularization terms are constructed based on channel measurements. By integrating these regularization terms for posterior estimation and distribution sampling, diffusion models can implicitly reconstruct source data through gradient descent, effectively overcoming transmission challenges caused by strong channel noise, nonlinear operators, and time-varying channel conditions. Conclusions and Prospects: The analysis reveals that compared to visual semantic communication approaches based on discriminative deep learning models, the generative visual semantic communication paradigm based on diffusion models can significantly improve transmission efficiency and resilience while ensuring perceptual quality and semantic consistency of visual information. This advancement represents a fundamental shift toward communication systems that prioritize human perceptual requirements alongside traditional distortion metrics. Open issues, including image realism modeling and acceleration of diffusion model sampling, are discussed. The review highlights the effectiveness of conditional diffusion models for enabling existing semantic communication architectures to recover sources at the receiver based on minimal tokens and highly degraded measurements, offering an intelligent and concise design philosophy for future generative visual semantic communication systems.
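The three-step posterior sampling described above rests on the standard score decomposition used in diffusion posterior sampling. Sketched in hedged form, with A the forward operator, y the channel measurement, x̂₀(x_t) the denoised (Tweedie) estimate of the clean source, and ζ a guidance step size (the symbols are generic, not the paper's notation):

```latex
% Posterior score splits into a prior term (supplied by the pre-trained
% diffusion model) and a likelihood term (the measurement regularization):
\nabla_{x_t} \log p(x_t \mid y)
  = \nabla_{x_t} \log p(x_t) + \nabla_{x_t} \log p(y \mid x_t)

% The intractable likelihood gradient is commonly approximated through
% the denoised estimate \hat{x}_0(x_t), with guidance weight \zeta:
\nabla_{x_t} \log p(y \mid x_t)
  \approx -\zeta \, \nabla_{x_t}
    \bigl\| y - \mathcal{A}\bigl(\hat{x}_0(x_t)\bigr) \bigr\|_2^2
```

Adding this guidance gradient to each reverse-diffusion step is what lets the prior "fill in" source detail that strong channel noise or a nonlinear operator has destroyed.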

  • Energy and Industrial Thermal Safety
    Jiachen WANG, Haitao LI, Li CHANG, Shoutong DIAO, Yihao YAO, Gege HU, Minggao YU
    Journal of Tsinghua University(Science and Technology). 2025, 65(4): 759-768. https://doi.org/10.16511/j.cnki.qhdxxb.2025.27.002
    Abstract (303) PDF (124) HTML (112)

    Objective: High-pressure hydrogen leakage is a common safety concern in hydrogen refueling stations, significantly affecting the safe operation of these facilities. Accurate and timely identification of the leak source location and continuous monitoring of hydrogen concentration are essential for preventing explosions and ensuring the safety of refueling operations. Methods: In this study, we propose a novel deep learning-based hydrogen leak detection model that provides a smart, real-time detection solution. The model leverages computational fluid dynamics (CFD) simulations to construct a proprietary database of high-pressure hydrogen leaks under various conditions, including different leak locations, flow rates, and wind directions. Results: Our analysis revealed that wind direction plays the most significant role in influencing hydrogen dispersion patterns, which is crucial for accurately identifying leak sources and predicting the affected area. We compared six deep learning models: a backpropagation neural network (BPNN) based on a multi-layer perceptron (MLP), convolutional neural network (CNN), long short-term memory (LSTM) network, convolutional long short-term memory (CNN-LSTM), bidirectional long short-term memory (BiLSTM), and convolutional neural network bidirectional long short-term memory (CNN-BiLSTM). Among these models, the CNN-BiLSTM model exhibited the highest performance in leak detection tasks. By combining the strong local feature extraction capabilities of CNN with the long-term dependency capturing abilities of BiLSTM, this hybrid model significantly outperformed other models, achieving an accuracy and F1 score exceeding 98%. These results highlight the model's ability to handle complex temporal data efficiently, making it particularly effective for identifying hydrogen leaks in real-time industrial environments. The study also explores key factors influencing the performance of the detection models. 
We conducted sensitivity analyses on two critical hyperparameters: batch size and the number of training iterations. We found that a batch size of 16 and 400 iterations provided the optimal trade-off between convergence speed and detection accuracy. In addition, the model demonstrated robustness, maintaining high accuracy even under complex conditions such as changes in wind direction and leak strength. The model has a high localization accuracy of more than 98.00% when detecting most leakage sources. The detection accuracy is slightly lower only in the hydrogen unloading region and the hydrogen storage region, mainly due to limitations of the sensor layout. Furthermore, the study introduced data preprocessing techniques, including normalization, data dimension reduction, and feature selection, which significantly improved the efficiency of the detection process. By minimizing the dimensionality of input data, the computational load was reduced, enabling faster detection without sacrificing accuracy. Notably, the CNN-BiLSTM model also excelled in detecting rare but dangerous leak events, enhancing the overall safety monitoring capabilities of hydrogen refueling stations. Conclusions: This study's findings not only provide a theoretical foundation for hydrogen leak detection in refueling stations but also present a practical solution that improves both detection accuracy and operational efficiency. The proposed CNN-BiLSTM model offers a robust and intelligent approach for monitoring hydrogen leaks, significantly enhancing real-time safety measures in complex industrial settings. Future work will focus on expanding the model's generalizability to broader industrial applications and exploring further optimization of the feature extraction and classification processes to support the development of intelligent safety monitoring systems.
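The preprocessing pipeline mentioned above (normalization followed by dimension reduction) can be sketched in generic form. PCA via SVD stands in here for whichever reduction method the study actually used; the function name is illustrative.

```python
import numpy as np

def preprocess(X, n_components):
    """Min-max normalize sensor readings per feature, then reduce
    dimensionality with PCA (computed via SVD). One plausible
    realization of the normalization + reduction steps, not the
    study's exact pipeline."""
    # Min-max normalization, guarding against constant columns.
    span = X.max(axis=0) - X.min(axis=0)
    span[span == 0] = 1.0
    Xn = (X - X.min(axis=0)) / span
    # PCA: center, then project onto the leading right-singular vectors.
    Xc = Xn - Xn.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T
```

Reducing, say, hundreds of sensor channels to a handful of components is what cuts the per-inference computational load without discarding the dominant variance.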

  • Fire in Subterranean Spaces and Tunnels
    Tao CHEN, Zhaijun LU, Dan Zhou
    Journal of Tsinghua University(Science and Technology). 2025, 65(4): 707-713. https://doi.org/10.16511/j.cnki.qhdxxb.2024.27.050
    Abstract (299) PDF (63) HTML (91)

    Objective: When a train catches fire in a tunnel, in order to facilitate the evacuation and rescue of passengers, the train should continue to operate until it exits the tunnel and arrives at the next station or the emergency rescue station. To deeply understand the evolution process of moving train plumes in tunnels and provide theoretical guidance for the operation safety of railway tunnels, this study conducted moving model experiments to compare and analyze the differences in fire plumes between stationary and moving trains in a tunnel. Methods: Based on the existing moving model test platform, a 1:10 train-tunnel model was designed using the Froude similarity criterion. The head of the train model adopted a streamlined design to avoid the influence of the vortex caused by boundary layer separation on the experimental results. The front surface of the tunnel was designed with fire-resistant transparent tempered glass to facilitate the observation of the train plume behavior. A thermocouple array was used to measure the temperature distribution at the top of the train and under the tunnel ceiling, and the total and radiative heat flux at the top of the train were measured by heat flux gauges. Based on the biharmonic spline interpolation algorithm, a MATLAB script was written to reconstruct the discrete temperature values on the top of the train, and the two-dimensional temperature distribution contour on the top of the train was obtained. Results: The results show that, unlike stationary train fires, the fire plume of the moving train moves forward by sweeping the roof of the tunnel. This paper defines this type of flow as "ceiling sweep" for the first time and divides its evolution into three stages: (1) Rise stage. The train fire plume rises and inclines to the upstream of the fire source under the coupling of inertia and viscous forces; (2) Contact stage. 
Depending on the heat release rate, train speed, and tunnel height, this stage includes direct flame contact with the ceiling and smoke plume contact with the ceiling. (3) Sweep stage. After contacting the tunnel ceiling, the fire plume expands below it. At the same time, due to the relative motion between the train and the tunnel, the fire plume sweeps the tunnel ceiling and moves forward. Due to heat accumulation and thermal feedback of the tunnel, the maximum temperature, maximum total heat flux, and maximum radiative heat flux at the top of the train under the action of the moving train fire plume in the tunnel are greater than those in the open-line scenario. The maximum temperature under the tunnel ceiling decreases significantly, and the longitudinal temperature presents an asymmetric distribution, higher upstream of the fire source and lower downstream. Conclusions: The above results show that the threat of the fire plume of moving trains to the train body increases, but the threat to the tunnel decreases. This study provides a reference for the design of flame-retardant materials and emergency operation strategies of trains.
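The temperature-field reconstruction step can be reproduced with a compact biharmonic spline interpolator, following the classical 2-D biharmonic Green's function formulation g(r) = r²(ln r − 1). The study's MATLAB script is not reproduced here; this is an independent sketch with illustrative function names.

```python
import numpy as np

def biharmonic_green(r):
    # 2-D biharmonic Green's function g(r) = r^2 (ln r - 1), with g(0) = 0.
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * (np.log(r[nz]) - 1.0)
    return out

def biharmonic_spline(points, values, query):
    """Interpolate scattered 2-D data (e.g., discrete thermocouple
    readings on the train roof) with a biharmonic spline. points:
    (n, 2) sensor coordinates; values: (n,) readings; query: (m, 2)
    grid points for the contour."""
    # Pairwise distances between data points, then solve for weights.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    w = np.linalg.solve(biharmonic_green(d), values)
    # Evaluate the spline at the query locations.
    dq = np.linalg.norm(query[:, None, :] - points[None, :, :], axis=-1)
    return biharmonic_green(dq) @ w
```

Because the system is solved exactly, the spline reproduces the measured temperatures at the thermocouple positions and interpolates smoothly between them.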

  • Hydraulic Engineering
    Mingchao LI, Yuangeng LÜ, Qiubing REN, Leping LIU, Zhiyong QI, Dan TIAN
    Journal of Tsinghua University(Science and Technology). 2025, 65(10): 1838-1852. https://doi.org/10.16511/j.cnki.qhdxxb.2025.21.020
    Abstract (299) PDF (85) HTML (166)

    Objective: Timely hazard detection during the construction of super high arch dams is crucial for reducing engineering accidents and ensuring project safety. Hazards in such settings are often hidden and diverse, making them difficult to detect during early-stage conventional inspections. However, fragmented hazard records at construction sites are crucial for identifying and detecting issues early, helping management personnel to promptly assess potential engineering risks. Methods: This study proposes an intelligent hazard identification method for super high arch dam construction using enhanced semisupervised contrastive learning. A multisource classification model for hazard text is developed to categorize and assess hazard types and levels from fragmented hazard texts, establishing a systematic hazard inspection framework. The model is built on the Transformer architecture, effectively capturing the semantic and positional relationships inherent in hazard descriptions. A contrastive learning module improves the Transformer by leveraging interclass relationships to amplify the differences between dissimilar samples. This significantly enhances classification accuracy, especially for multi-source attribute hazard categories. The method integrates self-supervised and supervised learning, emphasizing interclass distinctions while making use of label content. A memory bank mechanism decouples training batches, enabling comprehensive collection of negative samples, thereby enhancing the performance of semisupervised contrastive learning. Finally, the hazard category and level identification results are combined to visualize safety hazard distributions. Latent Dirichlet allocation (LDA) is used to extract latent clues for hazard risk inspection, constructing structured hazard inspection tables for different levels of risk. These tables allow managers to prioritize inspections in high-risk areas, enhancing the efficiency and precision of hazard detection. 
Results: The results show that the proposed classification model significantly improves hazard type and hazard level recognition tasks, with F1 score improvements of 4.9% and 3.3%, respectively. Multidimensional experiments were conducted to validate its significant advantages: 1) Analyzing the influence of different memory bank sizes on model performance highlighted the importance of decoupling training batches and selecting a sufficient number of negative samples; 2) Ablation experiments validated the contribution of each module to the model's performance improvement; 3) Dimensionality reduction clustering using t-SNE visually confirmed the contrastive learning module's ability to effectively group similar classification samples; 4) A comparison of InfoNCE loss between this model and the base Transformer demonstrated the practical benefits of the contrastive learning module during training; 5) Performance comparisons with common classification models showed the proposed model's significant advantages in overall accuracy. The hazard category and level identification results are used to extract key topic information using the LDA topic model, revealing the potential risks present in the current hazard categories and levels. Taking "High-altitude fall" as an example, key topic clustering was applied to compile a complete hazard inspection clue table structured by hazard levels. Conclusions: The method enhances the precision and systematization of hazard identification during the construction of super high arch dams. It introduces a refined multisource attribute hazard identification method, providing a novel approach to intelligent safety management in engineering and promoting the development of hazard management toward automation and intelligence.
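The InfoNCE comparison mentioned above uses the standard contrastive objective. A minimal single-anchor sketch follows, where in the described model the negatives would be drawn from the memory bank; the embedding shapes and the temperature value are illustrative.

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.07):
    """InfoNCE loss for one anchor embedding: the positive is a
    same-class sample, negatives come from other classes (e.g., a
    memory bank). All inputs are assumed L2-normalized vectors;
    negatives has shape (k, dim)."""
    pos = np.exp(anchor @ positive / temperature)
    neg = np.exp(anchor @ negatives.T / temperature).sum()
    # Loss is small when the anchor is close to its positive and far
    # from every negative.
    return -np.log(pos / (pos + neg))
```

Minimizing this loss pulls same-class hazard texts together while the memory bank supplies a broad pool of negatives without enlarging the training batch.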

  • Traffic and Transportation
    Qingchang LU, Rundong WANG, Pengcheng XU, Shixin WANG
    Journal of Tsinghua University(Science and Technology). 2025, 65(10): 1945-1956. https://doi.org/10.16511/j.cnki.qhdxxb.2025.21.014
    Abstract (297) PDF (75) HTML (181)

    Objective: The metro system, as a crucial component of modern urban transportation, relies heavily on the reliability of its traction power network to maintain stable operations. However, existing research on metro system resilience assessment often overlooks the complex coupling characteristics between the traction power network and the metro network. In particular, the many-to-one and one-to-many coupling characteristics of the traction power network significantly influence metro system resilience but remain underexplored. This study proposes a resilience assessment method for metro networks based on the network coupling characteristics, focusing on quantitatively evaluating the dynamic impact of traction power network failures on metro network operational performance under both partial and complete failure scenarios. Methods: This research constructs separate models for the traction power network and the metro network. Building on these foundational models, it incorporates the many-to-one and one-to-many power supply characteristics of the traction power network, establishing a coupling model that integrates both systems. Network efficiency, which considers passenger flow weighting and travel time impedance, forms the basis for assessing resilience. The Monte Carlo method is used to model the recovery process of the metro traction power network. Using the Xi'an metro network as a case study, different failure scenarios are simulated, enabling a comprehensive evaluation of the metro system's service capacity and resilience changes under various fault conditions. Results: The results of this study are as follows: (1) The many-to-one redundancy characteristic of the traction power network enhances metro network resilience by 6.8%-14.4%. However, ignoring the one-to-many characteristics of the traction power network may lead to an overestimation of resilience, as cascading failure effects are inadequately accounted for. 
(2) Traction power network failures in high passenger flow areas can cause efficiency losses of up to 50.2%, with corresponding resilience losses reaching 36.1%. (3) Resilience performance varies across metro stations and the overall network depending on the complexity of failure scenarios. More complex scenarios involve a greater number and broader distribution of repair targets, increasing the intricacy and time demand of recovery processes. Conclusions: The proposed metro network resilience assessment method based on network coupling characteristics provides a more accurate evaluation of the impact of traction power network failures. By accounting for both many-to-one and one-to-many coupling characteristics, the method realistically reflects the redundancy supply effect of the system and the cascading failure process. The study emphasizes that while adopting a decentralized layout, metro system operation and planning need to strengthen the redundancy design of traction substations and supply section networks. Furthermore, a coordinated emergency response across multiple departments is recommended to ensure rapid mobilization of repair resources and shuttle capacity, minimizing disruptions to passenger travel during emergencies. The findings of this study provide theoretical guidance for developing emergency response and recovery strategies in metro systems under power facility failure scenarios. Future research will expand the resilience assessment framework to multi-modal transportation systems, further improving the universality and practicality of the model.
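The efficiency-based resilience measure described above can be sketched as follows. The exact weighting used in the paper is not specified, so `weighted_efficiency` and `resilience` below are hypothetical helpers: passenger flow weights the inverse travel-time impedance, and resilience is the area under the degraded efficiency curve relative to the no-failure baseline.

```python
import numpy as np

def weighted_efficiency(travel_time, flow):
    """Passenger-flow-weighted network efficiency (one plausible form).

    travel_time[i, j]: shortest travel time between stations i and j
    (np.inf when j is unreachable after a failure).
    flow[i, j]: OD passenger demand, used as the weight.
    """
    n = travel_time.shape[0]
    total, weight = 0.0, 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            weight += flow[i, j]
            if np.isfinite(travel_time[i, j]) and travel_time[i, j] > 0:
                total += flow[i, j] / travel_time[i, j]
    return total / weight

def resilience(perf_curve, baseline):
    """Area under the degraded performance curve divided by the
    baseline area (trapezoidal rule over unit time steps)."""
    p = np.asarray(perf_curve, dtype=float)
    area = ((p[:-1] + p[1:]) / 2.0).sum()
    return area / (baseline * (len(p) - 1))
```

A Monte Carlo study would sample failure/repair sequences, recompute `weighted_efficiency` at each step, and feed the resulting curve to `resilience`.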

  • Hydraulic Engineering
    Bei YI, Xiaolian LIU, Xueni WANG, Leike ZHANG, Yu TIAN, Weiwei GUO
    Journal of Tsinghua University(Science and Technology). 2025, 65(10): 1868-1879. https://doi.org/10.16511/j.cnki.qhdxxb.2025.27.030
    Abstract (297) PDF (84) HTML (141)

    Objective: For long-distance and complex water conveyance systems that combine pressurized water and gravity flow, achieving effective water hammer control through individual valve regulation is challenging. It is crucial to implement joint pump-and-valve operation to ensure the system's overall safety performance. Existing research often overlooks water level fluctuations in storage facilities and fails to evaluate the effects of coordinated pump-and-valve operations on the overall system. Furthermore, the interdependence of parameters in multivariable pump-and-valve control creates difficulties in multi-objective optimization and decision-making. Methods: The joint optimal operation model for pumps and valves is developed based on hydraulic calculations, considering three key objectives: minimizing the maximum water hammer pressure, maximizing the minimum water hammer pressure, and minimizing water level fluctuations in elevated pools. The model is solved using the non-dominated sorting-based multi-objective coati optimization algorithm (NSCOA), while optimization schemes are realized through improved ideal-point-based decision (IIPBD) derived from the computationally generated Pareto front. A long-distance water transmission project serves as the research object, where NSCOA, NSGA-Ⅱ, and NSSA are used to solve its joint optimal pump-and-valve operation. Reasonable parameter settings for NSCOA are determined to solve the model. The optimal solution is obtained using IIPBD, validating the superiority of the NSCOA-IIPBD optimization decision-making method. Results: The performance of NSCOA, NSGA-Ⅱ, and NSSA was evaluated using hypervolume (HV) and spacing (SP) indices across ZDT1 to ZDT4 and ZDT6 test functions, confirming the superiority of NSCOA. 
At the same time, reasonable NSCOA parameter settings for the joint optimal operation of pumps and valves are determined: a population size of 50, an external archive size of 50, and 75 iterations. Comparisons with NSGA-Ⅱ and NSSA further demonstrated the effectiveness of NSCOA in solving this problem. On this basis, the optimal scheme is selected from the NSCOA-computed Pareto front using IIPBD. Compared with the current scheme, the optimal scheme shortened the fast-closing time, extended the slow-closing time of the valve after the pump, and coordinated pump time intervals with the terminal valve response. These adjustments resulted in a 14.40% reduction in maximum pressure and a 70.37% decrease in water level fluctuation in the elevated pool. These findings confirm the reliability of NSCOA for optimizing the joint optimal operation model of pumps and valves under phased shutdown of pumps. Conclusions: A widely distributed Pareto front solution set can be obtained by solving the joint optimal operation model of pumps and valves with NSCOA. Using IIPBD, an optimal scheme with significantly lower pressure and water level fluctuations compared with the current scheme was achieved. The NSCOA-IIPBD method provides a more efficient and feasible scheme for the multi-objective solution and decision-making of the joint optimal operation of pumps and valves in long-distance complex water transmission systems.
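The ideal-point decision step can be illustrated with a generic (unimproved) rule: normalize each objective to [0, 1] and pick the Pareto solution closest to the normalized ideal point. All objectives are assumed to be minimizations (a maximization objective, such as the minimum water hammer pressure, can be negated first); the paper's IIPBD refinements are not reproduced here.

```python
import numpy as np

def ideal_point_decision(pareto_front):
    """Pick one compromise solution from a Pareto front (rows =
    solutions, columns = objectives, all to be minimized) by Euclidean
    distance to the normalized ideal point (the origin)."""
    f = np.asarray(pareto_front, dtype=float)
    lo, hi = f.min(axis=0), f.max(axis=0)
    # Min-max normalization; degenerate columns (hi == lo) map to 0.
    norm = (f - lo) / np.where(hi > lo, hi - lo, 1.0)
    dist = np.linalg.norm(norm, axis=1)
    return int(np.argmin(dist))
```

Applied to the NSCOA output, the selected row would be the recommended pump-and-valve operation scheme.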

  • Mechanical Engineering
    Meng LI, Zehua YANG, Rukang WU, Yu CHEN, Bijun WU, Yanqin ZHANG
    Journal of Tsinghua University(Science and Technology). 2025, 65(5): 912-920. https://doi.org/10.16511/j.cnki.qhdxxb.2024.21.029
    Abstract (295) PDF (70) HTML (108)

    Objective: The square-shaped dual-chamber floating oscillating water column (OWC) wave energy converter is designed to convert wave energy through the heave motion of its floating structure. This device uses airflow channeled from the dual chambers to drive a turbine generator, making it essential to investigate its primary energy conversion characteristics. Methods: Numerical calculations and experimental tests were conducted to study the model's capture performance. Hydrodynamic software was used to simulate the response of the dual-chamber floating OWC wave energy model under different wave conditions. Regular wave experiments verified the accuracy of these simulations and evaluated the model's performance, while irregular wave experiments assessed its capture performance in real marine environments. Results: The numerical analysis indicated that the motion response of the dual-chamber wave energy model peaks near the heave natural period of the floating body, optimizing energy capture. It was also found that the angle between the incident wave direction and the model significantly affects performance. When the chambers are aligned front to back (0° angle), energy capture is maximized, suggesting this arrangement is the most effective. To verify the numerical calculations and assess the actual performance of the wave energy model, regular wave experiments were carried out. These experiments demonstrated that when the dual chambers of the floating OWC wave energy model are arranged front to back, the capture performance is superior compared to the left-right arrangement. The optimal capture performance periods for the front and back chambers of the model are not aligned, allowing the front-to-back chamber arrangement to broaden the range of optimal response periods, thereby enhancing the system's overall energy capture efficiency. 
Additionally, to evaluate the capture performance of the dual-chamber floating OWC wave energy model in real marine environments, irregular wave experiments were conducted. The experimental results showed a maximum capture width ratio of 41.84% under irregular wave conditions, which is close to 84% of its performance under regular waves. This indicates that the dual-chamber wave energy model maintains strong energy capture capability and stability even in challenging marine conditions. Conclusions: Combining the results of numerical calculations and experimental tests, the dual-chamber floating OWC wave energy model exhibits excellent energy conversion performance across different wave conditions. The innovative front-to-back arrangement design of the dual chambers significantly enhances capture performance and broadens the range of optimal response periods. This research provides new ideas and methods for the development of wave energy conversion technology. The results have significant implications for optimizing and practically applying wave energy solutions, and they are expected to promote the development and utilization of marine renewable energy, thereby contributing positively to the advancement of green energy.
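The capture width ratio quoted above can be computed from the textbook deep-water wave-power flux. The sketch below assumes an irregular sea state characterized by significant wave height and energy period; the paper's exact sea-state parameters and definition may differ.

```python
import math

RHO, G = 1025.0, 9.81   # seawater density (kg/m^3), gravity (m/s^2)

def wave_power_per_metre(h_s, t_e):
    """Deep-water wave energy flux per metre of crest (W/m) for an
    irregular sea state: rho * g^2 * Hs^2 * Te / (64 * pi),
    with significant wave height h_s (m) and energy period t_e (s)."""
    return RHO * G**2 * h_s**2 * t_e / (64.0 * math.pi)

def capture_width_ratio(p_captured, h_s, t_e, device_width):
    """Captured pneumatic power divided by the incident wave power
    across the device width (dimensionless)."""
    return p_captured / (wave_power_per_metre(h_s, t_e) * device_width)
```

A ratio of 0.4184 (41.84%) means the device extracts the wave power arriving over roughly 42% of its own width.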

  • Mechanical Engineering
    Tao GUO, Wengang GAN, Haiyang WANG, Siyuan LIU
    Journal of Tsinghua University(Science and Technology). 2025, 65(5): 921-929. https://doi.org/10.16511/j.cnki.qhdxxb.2024.27.041
    Abstract (295) PDF (74) HTML (84)

    Objective: The distributing pipe in a Pelton turbine serves as a crucial water supply component responsible for regulating flow and inducing diversion. Its special structure, however, can lead to adverse effects such as flow separation and Dean vortices causing hydraulic losses; these losses can vary with changes in the upstream head, further affecting the incoming flow conditions. Traditionally, the pressure drop method has been primarily utilized to assess these losses, yet it fails to pinpoint the exact locations where significant hydraulic losses occur. Methods: This study investigates the hydraulic and loss characteristics of the distributing pipe. Utilizing the SST (shear stress transport) k-ω turbulence model, we simulate the flow inside the distributing pipe and analyze the entropy production distribution based on entropy production theory. Then, according to the distribution of the entropy production rate and the flow pattern, the causes of hydraulic loss in the main channel and bifurcation 2 are analyzed in detail. Entropy production, which indicates irreversible dissipative effects during fluid flow, effectively highlights high hydraulic loss areas by quantifying the mechanical energy lost to internal energy. Results: Results show a remarkable increase in total entropy production within the pipe, with values rising from 210.999 to 4 614.980. Specifically, entropy production in the main channel increases from 145.549 to 3 477.351, and in bifurcation 2 from 38.857 to 717.608. Under high-speed flow conditions, the separation between internal and external flows becomes distinct, particularly when fluid navigates bends. The hydraulic loss is dominated by fluctuation entropy production, accounting for >50%. The main flow zone and bifurcation 2 are the primary sites of hydraulic loss, accounting for approximately 90% of the total loss, whereas bifurcations 1 and 3 experience relatively small losses. 
Conclusions: Comparative analysis of entropy generation rate contours, streamline plots, and pressure fluctuation curves highlights that high entropy generation areas experience significant pressure pulsations, accompanied by adverse flow phenomena such as Dean vortices and flow separation. At bifurcation 2, high-speed fluid is diverted and squeezed outward, creating a low-pressure vortex on the inner side, inducing significant hydraulic loss. At the bend position, the fluid tends to flow outward, resulting in high external pressure and low internal pressure distribution at the ring pipe and further in high hydraulic loss on the inside. These phenomena create large pressure gradients and significant pressure fluctuations, affecting flow stability. Furthermore, optimization strategies are proposed for the distributing pipe design, including the addition of flow-diversion baffles at bifurcation points to stabilize flow patterns, reduce vortices, and alleviate flow separation by increasing the number of nozzles and reducing curvature. This study employs numerical computation to investigate the mechanisms of hydraulic loss generation within the distributing pipe and meticulously delineates areas of high hydraulic losses, offering hydro turbine developers optimization strategies.
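The entropy production analysis above can be illustrated with the Kock-Herwig-style decomposition commonly paired with the SST k-ω model: a direct (viscous) part from mean velocity gradients and a fluctuation (turbulent) part modelled from the turbulence variables. The 2-D incompressible form and the symbol names below are illustrative assumptions, not the paper's exact formulation.

```python
def direct_entropy_rate(mu, T, dudx, dudy, dvdx, dvdy):
    """Viscous (direct) entropy production rate per unit volume for a
    2-D incompressible mean flow: (mu / T) * dissipation function."""
    phi = 2.0 * (dudx**2 + dvdy**2) + (dudy + dvdx) ** 2
    return mu / T * phi

def turbulent_entropy_rate(rho, k, omega, T, beta=0.09):
    """Fluctuation (turbulent) entropy production rate modelled from
    the SST k-omega variables as beta * rho * k * omega / T
    (Kock-Herwig style closure; beta is a model constant)."""
    return beta * rho * k * omega / T
```

Summing both rates over every cell of the CFD mesh gives the total entropy production figures reported in the abstract, and their spatial distribution marks the high-loss regions.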

  • Intelligent Construction
    Jinpei LI, Xiaolin MENG, Liangliang HU, Yan BAO, Shiyu ZHAO
    Journal of Tsinghua University(Science and Technology). 2025, 65(7): 1260-1271. https://doi.org/10.16511/j.cnki.qhdxxb.2025.26.023
    Abstract (292) PDF (82) HTML (145)

    Objective: The structural integrity of bridges is a critical concern as infrastructure ages, necessitating the development of reliable methods for detecting potential failures. Among these, the identification of small target cracks is particularly important, as these cracks often grow undetected until they result in severe damage. Traditional inspection methods, such as manual visual inspections, are hindered by their labor-intensive nature and susceptibility to human error, often resulting in the oversight of small but significant defects. Recent advancements in computer vision and deep learning technologies offer new opportunities to improve the accuracy and efficiency of bridge inspections. This study introduces an innovative approach for detecting small target cracks in bridge structures by employing an enhanced version of the You Only Look Once (YOLOv8) object detection model, a widely recognized algorithm known for its rapid processing capabilities and high detection accuracy. The enhanced YOLOv8 model is tailored to detect small-scale cracks on bridge surfaces that may not be easily identifiable by traditional inspection methods or earlier versions of computer vision models. Methods: The proposed algorithm modifies the standard YOLOv8 model to address the specific challenges associated with detecting small cracks on bridge surfaces. A key modification is the integration of efficient vision transformer (EfficientViT) into the backbone of the YOLOv8 model. EfficientViT is an advanced transformer-based architecture that reduces redundant parameters and optimizes the extraction of local features from high-resolution images, enabling more precise detection of subtle crack features. This enhancement is crucial, as small cracks often exhibit low contrast against their background and may be easily overlooked by less sophisticated models. 
In addition to EfficientViT, the proposed algorithm also incorporates a large selective kernel network (LSKNet) within the C2f module of YOLOv8. LSKNet employs a dynamic kernel selection mechanism that allows the model to adaptively adjust the size of the convolutional kernels based on the input features, making it highly suitable for detecting cracks of varying sizes, orientations, and morphological characteristics. This adaptability ensures that the model can detect small cracks, regardless of their form. Furthermore, the model uses a bidirectional feature pyramid network (BiFPN) to merge feature maps at different scales. Traditional models struggle with detecting small targets due to the loss of critical information during downsampling operations. BiFPN mitigates this issue by preserving high-resolution feature maps across multiple layers, enhancing the model's ability to detect small cracks that would otherwise be missed. The combined effect of these modifications improves the accuracy of small target crack detection while maintaining computational efficiency. Results: The effectiveness of the proposed model was validated using a dataset of crack images from a specific bridge, captured by unmanned aerial vehicles (UAVs). UAVs provided detailed images from areas that were often difficult or dangerous to access using traditional inspection methods. The experimental results demonstrated that the enhanced YOLOv8 model significantly outperformed the original version in terms of key performance metrics. Specifically, the modified model achieved improvements of 3.7%, 3.5%, 3.5%, 3.9%, and 7.4% in terms of the detection precision, recall, F1 score, mAP50, and mAP50-95, respectively. These results indicated a substantial improvement in the model's ability to detect small cracks that often had low contrast and irregular shapes, which were typical characteristics of cracks on bridge surfaces. 
Furthermore, compared to conventional methods, the proposed model was able to detect cracks with higher precision and fewer false positives, making it a promising tool for improving the efficiency of bridge inspections. Conclusions: In conclusion, the improved YOLOv8 algorithm introduced in this study represents a significant advancement in the detection of small target cracks in bridge structures. The modifications made to the original YOLOv8 model, including the integration of EfficientViT, LSKNet, and BiFPN, result in a more accurate and computationally efficient model for crack detection. This approach offers a practical and scalable solution for the widespread application of bridge health monitoring, particularly in areas that are difficult to inspect using traditional methods. By leveraging advanced surface data processing techniques, this research contributes to the development of modern methods for assessing the health of bridge structures, ultimately helping to ensure the safety and longevity of infrastructure systems.
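The BiFPN fusion step mentioned above uses fast normalized fusion: same-resolution feature maps are combined with learnable nonnegative weights, so no map dominates and no softmax is required. A NumPy sketch of that formula (the feature maps and weights here are placeholders):

```python
import numpy as np

def bifpn_fuse(features, weights, eps=1e-4):
    """BiFPN fast normalized fusion of same-shape feature maps:
    out = sum_i (relu(w_i) / (sum_j relu(w_j) + eps)) * F_i."""
    w = np.maximum(np.asarray(weights, dtype=float), 0.0)  # ReLU clip
    return sum(wi * f for wi, f in zip(w, features)) / (w.sum() + eps)
```

In the full network the weights are trained per fusion node, letting the model learn how much each resolution contributes to detecting small cracks.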

  • Public Safety
    Tiantian WANG, Tiezhong LIU, Congcong LI
    Journal of Tsinghua University(Science and Technology). 2025, 65(6): 1040-1049. https://doi.org/10.16511/j.cnki.qhdxxb.2025.22.013
    Abstract (289) PDF (53) HTML (124)

    Objective: This study integrates social attributes of human behavior as an independent mechanism within the analytical framework of negative emotion propagation dynamics. It aims to provide a comprehensive understanding of how negative emotions spread across social networks and establish a scientific basis for effective public opinion management and crisis response. Methods: This study examines the distinct mechanisms of social reinforcement and individual regulation that differentiate the spread of negative emotions in social networks from that of traditional infectious diseases. A heterogeneous propagation threshold model, named the SI-SEIR (social reinforcement and individual regulation susceptible-exposed-infected-recovered) model, incorporates a dual influence mechanism of "social reinforcement-individual regulation". First, we develop a non-Markovian negative emotion propagation model, considering social reinforcement and variations in individual emotion regulation abilities. We then extend the edge-based compartmental theory to determine the theoretical outbreak threshold and final propagation scale, including both continuous and discontinuous phase transitions. Extensive numerical simulations are conducted based on data from the Weibo network, using the Hubei Province Red Cross Society incident at the early stage of the COVID-19 pandemic to validate the effectiveness of the SI-SEIR model. Results: The findings show that individual emotion regulation abilities and social reinforcement significantly impact the spread of negative emotions. Improving individuals' emotion regulation ability and decreasing social reinforcement intensity can help effectively reduce large-scale outbreaks of negative emotions during public crises. Moreover, the network's topology feature significantly influences propagation outcomes. 
When individuals have relatively uniform emotion regulation abilities, a higher average degree of the network substantially raises the outbreak threshold, thereby reducing the likelihood of widespread diffusion. Increasing network heterogeneity can help increase the outbreak threshold and reduce the spread of negative emotions. Conclusions: Considering both social reinforcement and individual emotion regulation mechanisms is critical for accurately modeling and predicting the dynamics of negative emotion propagation in social networks.
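The roles of reinforcement and regulation can be made concrete with a toy threshold-based simulation; this is a simplified stand-in, not the paper's SI-SEIR dynamics. Higher personal thresholds stand in for stronger emotion regulation, and the need for repeated exposures stands in for social reinforcement; all names and parameters are illustrative.

```python
import random

def simulate(adj, thresholds, p_active=0.5, p_recover=0.2,
             seed_node=0, steps=50, rng=None):
    """Toy threshold-based spread of negative emotion on a network.

    adj: node -> list of neighbours. A susceptible node becomes
    infected once its accumulated exposures from infected neighbours
    reach its personal threshold. Returns the final outbreak size
    (nodes that were ever infected)."""
    rng = rng or random.Random(42)
    state = {v: "S" for v in adj}
    state[seed_node] = "I"
    exposures = {v: 0 for v in adj}
    for _ in range(steps):
        infected = [v for v, s in state.items() if s == "I"]
        for v in infected:
            for u in adj[v]:
                if state[u] == "S" and rng.random() < p_active:
                    exposures[u] += 1                 # reinforcement
                    if exposures[u] >= thresholds[u]:  # regulation
                        state[u] = "I"
            if rng.random() < p_recover:
                state[v] = "R"
    return sum(s != "S" for s in state.values())
```

Raising the thresholds (stronger regulation) shrinks the final outbreak size, mirroring the abstract's conclusion that improved emotion regulation suppresses large-scale outbreaks.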

  • Mechanical Engineering
    Zhenjun YU, Ningbo LEI, Yu MO, Xiu LI, Biqing HUANG
    Journal of Tsinghua University(Science and Technology). 2025, 65(5): 901-911. https://doi.org/10.16511/j.cnki.qhdxxb.2024.22.034
    Abstract (287) PDF (105) HTML (106)

    Objective: Predicting the remaining useful life (RUL) of industrial equipment is critical for maintaining safe operations and minimizing maintenance costs. However, RUL prediction for edge devices faces several challenges. First, edge devices often lack the computational power and storage capacity required for complex RUL prediction algorithms, making such predictions difficult. Many RUL prediction algorithms require substantial resources, which are scarce on edge devices. Second, the limited data transmission rate between the cloud and edge devices causes high latency when transmitting large data sets to the cloud, affecting real-time predictions and increasing network bandwidth usage. Additionally, data sharing among all edge devices is often impractical owing to privacy, security issues, and potential conflicts of interest, limiting models to local data and reducing their accuracy. Methods: To address these challenges, this paper proposes a cloud-edge collaboration framework for RUL prediction based on federated learning. The framework comprises two main processes. In the first process, each training device trains a variational autoencoder (VAE) using its local data set. The trained encoders are then uploaded to the cloud and aggregated using a weighted average method (FedAVG), with the number of training samples as weights. The aggregated global encoder is then downloaded to all edge devices. In the second process, the aggregated encoder extracts hidden features from the local data sets on each edge device. These features are uploaded sequentially to the cloud to train the RUL predictor. Once trained, the predictor is sent back to the edge devices, completing one training cycle. This iterative process continues until a well-trained RUL prediction model, consisting of the global encoder and predictor, is achieved. 
During the testing stage, the global encoder is used to extract hidden features, while the RUL predictor performs deeper feature extraction and RUL prediction. In this framework, only local encoders and hidden features are uploaded to the server, significantly reducing communication overhead. Most of the training occurs on the server, with clients only performing the basic training of the shallow VAE, thereby effectively utilizing the server's powerful computational capabilities. Data privacy is maintained since the server receives hidden features and encoders, not the original data, preventing data reconstruction. Results: To validate the proposed method's efficiency and practicality, different network structures were tested for RUL prediction on the commercial modular aero-propulsion system simulation (C-MAPSS). Although there was a slight decline in prediction performance compared to the baseline, the difference was within acceptable limits. This minor trade-off in accuracy enabled RUL prediction under resource constraints. The proposed algorithm consistently and significantly reduced data transmission time after feature extraction across various data scales. In industrial scenarios with large data volumes, this reduction was even more pronounced. Further validation using nuclear power unit fault data sets showed only a slight increase in root mean square error (RMSE) on the test set, with no significant drop in prediction accuracy. These results demonstrate that the proposed cloud-edge collaboration framework is promising for fault diagnosis in nuclear power units, effectively addressing edge resource limitations. Conclusions: The proposed cloud-edge collaboration framework leverages federated learning to achieve RUL prediction on resource-constrained edge devices, thereby alleviating issues related to resource constraints and data privacy. 
By employing VAE-based feature extraction and federated learning, the framework achieves efficient model training while significantly reducing communication overhead, with minimal impact on accuracy. Experimental validation on industrial simulation data sets and nuclear power unit fault data sets demonstrates the framework's practicality and effectiveness. This framework represents a useful approach to addressing challenges in fault diagnosis and RUL prediction within resource-constrained settings.
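The FedAVG aggregation step described in this abstract is a weighted average of client encoder parameters, with each client's local training-sample count as its weight. A minimal sketch, assuming each client's encoder is represented as a list of parameter arrays (an illustrative layout, not the paper's exact model structure):

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """FedAVG aggregation: weighted average of per-client parameter
    lists, with weights proportional to local sample counts.

    client_params: list of clients, each a list of layer arrays with
    identical shapes across clients."""
    sizes = np.asarray(client_sizes, dtype=float)
    w = sizes / sizes.sum()
    n_layers = len(client_params[0])
    return [sum(w[c] * np.asarray(client_params[c][l])
                for c in range(len(client_params)))
            for l in range(n_layers)]
```

The aggregated layer list is the global encoder that is broadcast back to every edge device at the start of the next round.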