Top access

  • SPECIAL SECTION: BIG DATA
    WU Houyue, LI Xianwei, ZHANG Shunxiang, ZHU Honghao, WANG Ting
    Journal of Tsinghua University(Science and Technology). 2024, 64(12): 1997-2006. https://doi.org/10.16511/j.cnki.qhdxxb.2024.21.027
    Abstract views: 887 | PDF downloads: 359 | HTML views: 0
    [Objective] The generation of adversarial samples in text represents a significant area of research in natural language processing. The process is employed to test the robustness of machine learning models and has gained widespread attention from scholars. Owing to the complex nature of Chinese semantics, generating Chinese adversarial samples remains a major challenge. Traditional methods for generating Chinese adversarial samples mainly involve word replacement, deletion/insertion, and word order adjustment. These methods often produce samples that are easily detectable and have low attack success rates, and thus, the methods struggle to balance attack effectiveness and semantic coherence. To address these limitations, this study introduces DiffuAdv, a novel method for generating Chinese adversarial samples. This approach enhances the generation process by simulating the data distribution during the adversarial attack phase. The gradient changes between adversarial and original samples are used as guiding conditions during the model's reverse diffusion phase in pre-training, resulting in the generation of more natural and effective adversarial samples. [Methods] DiffuAdv entails the introduction of diffusion models into the generation of adversarial samples to improve attack success rates while ensuring the naturalness of the generated text. This method utilizes a gradient-guided diffusion process, leveraging gradient information between original and adversarial samples as guiding conditions. It consists of two stages: forward diffusion and reverse diffusion. In the forward diffusion stage, noise is progressively added to the original data until a noise-dominated state is achieved. The reverse diffusion stage involves the reconstruction of samples, in which the gradient changes between adversarial and original samples are leveraged to maximize the adversarial objective. 
During the pre-training phase, data capture and feature learning occur under gradient guidance, with the aim of learning the data distribution of original samples and analyzing the deviations from adversarial samples. In the reverse diffusion generation phase, adversarial perturbations are constructed using gradients and integrated into the reverse diffusion process, ensuring that at each step of reverse diffusion, samples evolve toward greater adversarial effectiveness. To validate the effectiveness of the proposed method, extensive experiments are conducted across multiple datasets and various natural language processing tasks, and the performance of the method is compared with those of seven existing state-of-the-art methods. [Results] Compared with existing methods for generating Chinese adversarial samples, DiffuAdv demonstrates higher attack success rates across three tasks: text sentiment classification, causal relation extraction, and sentiment cause extraction. Ablation experiments confirm the effectiveness of using gradient changes between original and adversarial samples to guide the generation of adversarial samples and improve their quality. Perplexity (PPL) measurements indicate that the adversarial samples generated by DiffuAdv have an average PPL value of only 0.518, demonstrating that these samples are superior in rationality and readability compared with the samples generated by other methods. [Conclusions] DiffuAdv effectively generates high-quality adversarial samples that closely resemble real text in terms of fluency and naturalness. The adversarial samples produced by this method not only achieve high attack success rates but also exhibit strong robustness. The introduction of DiffuAdv enhances the research perspective on generating adversarial text samples and broadens the approaches for tasks such as text sentiment classification, causal relationship extraction, and emotion-cause pair extraction.
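The guided reverse-diffusion step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the noise-schedule values, the `denoiser`, and the `guidance_scale` are placeholders, and a real DiffuAdv step would operate on text embeddings with a learned model.

```python
import numpy as np

def reverse_diffusion_step(x_t, t, denoiser, adv_gradient, guidance_scale=0.1):
    """One gradient-guided reverse diffusion step (schematic).
    denoiser(x_t, t) predicts the noise component; adv_gradient is the
    gradient of the attack objective with respect to the sample embedding,
    used to steer generation toward adversarial regions."""
    beta_t, alpha_t, alpha_bar_t = 0.02, 0.98, 0.5  # schedule values at step t (placeholders)
    eps_hat = denoiser(x_t, t)
    # DDPM-style posterior mean estimate ...
    mean = (x_t - beta_t / np.sqrt(1.0 - alpha_bar_t) * eps_hat) / np.sqrt(alpha_t)
    # ... shifted along the adversarial gradient, classifier-guidance style.
    return mean + guidance_scale * adv_gradient

# Toy usage on a 4-dimensional embedding with a dummy denoiser.
x_t = np.ones(4)
grad = np.array([0.5, -0.5, 0.0, 0.0])
x_prev = reverse_diffusion_step(x_t, t=10,
                                denoiser=lambda x, t: np.zeros_like(x),
                                adv_gradient=grad)
```

At each step the sample thus moves both toward the learned data distribution (via the denoiser) and toward greater adversarial effectiveness (via the gradient term), which is the trade-off the abstract describes.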
  • Special Section: Construction Management
    Hong ZHANG, Zhijun BI
    Journal of Tsinghua University(Science and Technology). 2025, 65(1): 1-11. https://doi.org/10.16511/j.cnki.qhdxxb.2024.22.038
    Abstract views: 737 | PDF downloads: 487 | HTML views: 15

    Objective: The β coefficient is a critical indicator for stock sector investment, and its stability is essential for making informed future investment decisions based on historical data. The real estate sector, known for its high investment risks and stock fluctuations, plays a crucial role in many investors' portfolios. Although there is a growing body of literature on the β coefficient of the real estate sector, research on its systematic calculation and stability remains limited. This paper analyzes the changes and stability of the β coefficient in the real estate sector, providing valuable insights for investors. Methods: Through method screening, this paper uses the single index equation to calculate the monthly and annual β coefficients of the Chinese A-share real estate sector from 2013 to 2022. After confirming data stationarity, daily data are processed through least squares regression analysis to obtain accurate and reliable monthly and annual β coefficients. The stability of the β coefficient is assessed using the Chow test for adjacent calendar months and years, and statistical analysis is conducted on the results. Finally, the study includes a comparative analysis among the real estate, financial, and construction sectors to provide a comprehensive understanding of the β coefficient characteristics. Results: The research results reveal the following: (1) The monthly and annual mean β coefficients of the real estate sector are close to but less than 1. Monthly β coefficients show significant variability, while the annual β coefficient initially increases and then decreases. (2) The monthly β coefficient demonstrates stronger stability compared with the annual β coefficient. (3) The trajectories of the β coefficient in both the real estate and construction sectors are highly similar, with the stability of the β coefficient in the real estate sector being lower than that of the construction sector but higher than that of the financial sector.
Conclusions: There are clear differences in the stability characteristics of the monthly and annual β coefficients in the real estate sector, and these differences vary across sectors. This paper makes the following suggestions: (1) Short-term investors should monitor changes in monthly β coefficients to predict market volatility. (2) For long-term investment decisions based on the real estate sector's β coefficients, timely adjustments should be made according to macroeconomic factors and other variables. (3) When investing across different stock sectors, investors should focus on the volatility relationship among the construction, financial, and real estate sectors, and adopt appropriate risk hedging strategies to reasonably diversify investment risks.
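The single-index regression and Chow stability test described above can be sketched with ordinary least squares. The synthetic returns and the "true" beta of 0.9 below are invented for illustration; the paper's inputs are daily A-share returns.

```python
import numpy as np

def ols_beta(r_asset, r_market):
    """Single-index model r_asset = a + b * r_market + e; returns (a, b, rss)."""
    X = np.column_stack([np.ones_like(r_market), r_market])
    coef, *_ = np.linalg.lstsq(X, r_asset, rcond=None)
    rss = float(np.sum((r_asset - X @ coef) ** 2))
    return float(coef[0]), float(coef[1]), rss

def chow_stat(r1, m1, r2, m2):
    """Chow test F-statistic for a structural break between two adjacent periods."""
    k = 2  # estimated parameters: intercept and beta
    _, _, rss1 = ols_beta(r1, m1)
    _, _, rss2 = ols_beta(r2, m2)
    _, _, rss_pooled = ols_beta(np.concatenate([r1, r2]), np.concatenate([m1, m2]))
    n = len(r1) + len(r2)
    return ((rss_pooled - rss1 - rss2) / k) / ((rss1 + rss2) / (n - 2 * k))

# Synthetic daily returns with a "true" beta of 0.9 (illustrative only).
rng = np.random.default_rng(0)
market = rng.normal(0.0, 0.01, 250)
stock = 0.0002 + 0.9 * market + rng.normal(0.0, 0.002, 250)
alpha, beta, _ = ols_beta(stock, market)
F = chow_stat(stock[:125], market[:125], stock[125:], market[125:])
```

Under the null hypothesis of a stable beta, F follows an F(k, n-2k) distribution; the paper applies this test to pairs of adjacent calendar months and years.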

  • SPECIAL SECTION: BIG DATA
    LI Jiayi, HUANG Ruizhang, CHEN Yanping, LIN Chuan, QIN Yongbin
    Journal of Tsinghua University(Science and Technology). 2024, 64(12): 2007-2018. https://doi.org/10.16511/j.cnki.qhdxxb.2024.21.028
    Abstract views: 604 | PDF downloads: 241 | HTML views: 2
    [Objective] The increasing maturity of large language model technology has facilitated its widespread application in downstream tasks across various vertical fields. Large language models have exhibited strong performance in text summarization tasks in general fields, such as news and art. However, the highly specific language style in the judicial field and the unique complexity of judicial documents in terms of structure and logic make it difficult for large language models to generate judicial document summaries. This study aims to combine prompt learning with large language models to explore their performance in summarizing judicial documents. Prompt templates containing structural information and judicial documents are used as inputs for fine-tuning large language models. As a result, large language models can generate judicial document summaries that adhere to judicial language styles and the structural and logical complexities of judicial documents. [Methods] This study proposes a judicial document summary method that combines prompt learning and the Qwen large language model. Judicial document data are used as the input for fine-tuning a large language model using supervised fine-tuning technology to enhance its applicability in the judicial field. Simultaneously, prompt templates that incorporate structural information and role instructions are designed to optimize summary generation to more accurately reflect the structural characteristics and logical relationships of documents. According to the characteristics of the pretraining data format of the large language model, the fine-tuning data are constructed in the form of question-answer pairs. [Results] The experimental results show that the proposed method improves on the baseline model's F1 scores by 21.44%, 28.50%, and 28.97% in ROUGE-1, ROUGE-2, and ROUGE-L, respectively, and outperforms all comparison models.
The ablation experiment demonstrated that the summary generation method using prompt learning was superior to the method without prompt learning for all indicators, and the performance of summarization generated by the large language model utilizing prompt learning was significantly enhanced. The case demonstration reveals that after prompt learning is used to enhance the perception of structural information in the judgment document by the large language model, the judgment document summary generated by this model can better capture and retain key information in the judgment document. Moreover, the language style of this model is closer to that of a real judgment document summary, which further illustrates the effectiveness of the proposed method. [Conclusions] This study integrates the structural information of a judgment document into the task of generating a judgment document summary using a large language model in the form of prompt templates. Prompt templates containing structural information are used to assist the large language model in summarization generation. Therefore, the model can focus on the key information in the judgment document and capture deeper semantic logical relationships. The results demonstrate that after fine-tuning the large language model with judicial document data and introducing structural information, the model demonstrated excellent performance and great application potential in the judicial document summary task. The proposed method can effectively enhance the capability of a large language model in the field of judicial document summaries.
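A minimal sketch of how a structure-aware prompt and question-answer fine-tuning pair might be assembled. The section names and role instruction below are hypothetical, not the paper's actual template.

```python
SECTIONS = ["Facts", "Reasoning", "Judgment"]  # hypothetical section names

def build_finetune_record(doc_sections, reference_summary):
    """Wrap a structured judgment document into a question-answer pair:
    the prompt carries a role instruction plus explicit section markers,
    so the model can attend to the document's structure."""
    body = "\n".join(f"[{name}]\n{doc_sections[name]}" for name in SECTIONS)
    prompt = (
        "You are a judicial assistant. Summarize the judgment document below, "
        "preserving its structure (facts, reasoning, judgment result):\n" + body
    )
    return {"instruction": prompt, "output": reference_summary}

record = build_finetune_record(
    {"Facts": "…", "Reasoning": "…", "Judgment": "…"},
    "Reference summary.",
)
```

Records of this instruction/output form match the question-answer pretraining format the abstract mentions and can be fed directly to a supervised fine-tuning pipeline.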
  • Special Section: Construction Management
    Xinxiang JIN, Xiao LIN, Xinru YU, Hongling GUO
    Journal of Tsinghua University(Science and Technology). 2025, 65(1): 35-44. https://doi.org/10.16511/j.cnki.qhdxxb.2025.22.006
    Abstract views: 490 | PDF downloads: 196 | HTML views: 22 | CSCD citations: 1

    Objective: Progress management is an important part of construction management that helps effectively reduce the risk of project delays. Its main objective is to monitor actual construction progress and compare it with the construction plan. The traditional method of progress updating relies on manual checking and recording, which is not only slow but also prone to recording errors. With the development of building information modeling (BIM), technologies such as the internet of things (IoT), point clouds, and visual images have gradually been applied to construction progress identification and plan comparison. However, these methods require additional acquisition equipment, and point cloud acquisition equipment is costly. In addition, image processing is easily affected by factors such as occlusion, light, and weather. Therefore, the present study proposes a construction progress updating method based on BIM and large language models (LLMs). This approach enables construction personnel to verbally report progress information to the LLM, allowing a three-dimensional (3-D) building model to be updated accordingly. Methods: This research develops a system that automatically extracts relevant information from natural language, identifies the corresponding components using the planned construction times in the component database, and visualizes the progress status of the 3-D building model in Blender. The system does not require detailed information such as precise component IDs; instead, it completes the progress update by recognizing fuzzy information (e.g., construction section, floor, and other relevant information). Specifically, this study first parses the industry foundation classes (IFC) format BIM file and the construction schedule to extract and correlate component IDs, location information, and scheduled construction times. It then constructs a database of building components.
Subsequently, the LLM is enhanced through prompt engineering so that it can generate accurate information query instructions from natural language inputs, retrieve component information from the database, assess the progress status, and generate corresponding model update instructions to achieve dynamic updates in Blender. Results: This study tested the accuracy and consistency of the proposed method using a BIM model with 716 components and a dataset of 200 progress reports in various natural language formats. The testing results showed that after prompt fine-tuning, the LLM-based method achieved an average accuracy of 96% in progress assessment and model updating, a 62.5% improvement over the non-fine-tuned model. Consistency reached 87%, an increase of 68% over the non-fine-tuned model, demonstrating the effectiveness and feasibility of this method for construction progress updating. Conclusions: This study has successfully combined BIM and LLMs to develop a construction progress updating method, including the construction of a component retrieval database and an LLM-based schedule updating process. The case studies show that the method effectively improves the accuracy and consistency of the LLM in generating progress update instructions without requiring additional equipment or significant computing costs. The method allows construction personnel to describe progress information in natural language and achieves accurate progress updating of the 3-D model of a building, meeting the demand for visualizing and updating progress information on construction sites. However, this study has certain limitations. Because the LLM is adapted only through prompt fine-tuning, consistency remains a challenge. Future work is expected to improve the model's accuracy and consistency by training a local model.
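A toy version of the component database and fuzzy-information lookup described above. The schema, component IDs, and helper names are invented; in the paper the query instructions are generated by the prompt-engineered LLM and the model updates happen in Blender.

```python
import sqlite3

# Illustrative component database mirroring the fields described above
# (IDs, location, planned construction window); schema and IDs are invented.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE components (
    id TEXT PRIMARY KEY, section TEXT, floor INTEGER,
    planned_start TEXT, planned_end TEXT, status TEXT)""")
con.executemany("INSERT INTO components VALUES (?, ?, ?, ?, ?, ?)", [
    ("W-001", "A", 3, "2024-05-01", "2024-05-10", "planned"),
    ("W-002", "A", 3, "2024-05-05", "2024-05-15", "planned"),
    ("C-101", "B", 1, "2024-04-01", "2024-04-08", "planned"),
])

def resolve_components(section, floor):
    """Resolve fuzzy spoken information ('section A, floor 3') to component
    IDs, as an LLM-generated query instruction would."""
    rows = con.execute("SELECT id FROM components WHERE section = ? AND floor = ?",
                       (section, floor))
    return sorted(r[0] for r in rows)

def mark_completed(component_ids):
    """Apply a progress update; in the real system this step would also
    trigger a visual update of the 3-D model in Blender."""
    con.executemany("UPDATE components SET status = 'completed' WHERE id = ?",
                    [(cid,) for cid in component_ids])
```

The key point the abstract makes is visible here: no precise component ID is required from the reporter, only enough location information to resolve the affected components.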

  • SPECIAL SECTION: NUCLEAR WASTE WATER AND GAS
    YANG Li, ZHANG Yujie, FANG Sheng, SONG Jiayue, LI Xinpeng, CHEN Yixue
    Journal of Tsinghua University(Science and Technology). 2024, 64(12): 2019-2030. https://doi.org/10.16511/j.cnki.qhdxxb.2024.27.029
    Abstract views: 472 | PDF downloads: 188 | HTML views: 0 | CSCD citations: 1
    [Objective] Local-scale atmospheric dispersion modeling of radionuclides is crucial for nuclear emergency response during the early phase. The Lagrangian puff dispersion model excels in accurately and rapidly reproducing radioactive fields at this scale by accounting for natural turbulence and integrating wind fields with spatial and temporal variations. Given that nuclear power plants (NPPs), especially Chinese NPPs, are often located in heterogeneous terrains, which lead to channeling and slope flows, puff splitting in the puff dispersion model is necessary to accurately represent plume splitting and layer decoupling. Despite its importance, the threshold values for puff splitting have not been adequately studied. In addition, the complex terrain around NPP sites generates highly complicated flows, necessitating the use of a diagnostic wind field model coupled with the atmospheric dispersion model to improve the accuracy of dispersion simulations. [Methods] To provide an effective atmospheric dispersion modeling capability and establish threshold values of puff splitting for the Lagrangian puff dispersion model, the local-scale Lagrangian splitting puff dispersion model (SPUFF) was developed and fully integrated with the California meteorological model (CALMET). Two local-scale dispersion simulations were conducted using CALMET to drive SPUFF: one against the Sanmen NPP wind tunnel experiments with east (E) and northeast (NE) wind directions and another simulating the Fukushima Daiichi nuclear accident. These simulations aimed to validate SPUFF's performance and practicality. Furthermore, a comprehensive sensitivity analysis was performed to determine the credible range of horizontal threshold values for puff splitting.
The dispersion results were evaluated using multiple statistical metrics: the fraction of simulations within a factor of two/five/ten of the observations (FAC2/5/10), fractional mean bias (FB), normalized mean-square error (NMSE), normalized absolute difference (NAD), and geometric mean bias (MG). [Results] Validation results indicated that plumes generated by SPUFF effectively covered the majority of measurement sites, with coverage rates reaching 99.60% and 97.54% in the E and NE directions, respectively. All four crucial statistical metrics for SPUFF met acceptable criteria (FAC2: 0.52, FB: -0.17, NMSE: 0.75, NAD: 0.31 in the E direction; FAC2: 0.48, FB: 0.37, NMSE: 1.28, NAD: 0.39 in the NE direction), indicating remarkable performance. Practical evaluations demonstrated that SPUFF reproduces more measurements at the Futaba station than the Lagrangian particle model (LAPMOD). SPUFF also successfully captured the concentration peaks resulting from the reactor events during the Fukushima nuclear accident. Sensitivity analysis suggested that omitting the puff splitting module may be sufficient for complex terrains with constant meteorological conditions (constant wind fields). However, puff splitting becomes crucial in complex terrains with variable meteorological conditions. For local-scale dispersion scenarios involving NPPs, the recommended threshold values for puff splitting range between 700 m and 1 100 m. [Conclusions] This paper provides a comprehensive evaluation of the Lagrangian splitting puff dispersion model (SPUFF) and demonstrates its practical application. The results strongly indicate that SPUFF is a valuable tool for future nuclear emergency responses. Additionally, this paper proposes a credible range of threshold values for puff splitting, offering guidelines for applying the puff model in local-scale dispersion scenarios at NPP sites.
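The evaluation metrics listed above have standard definitions in the dispersion-modeling literature, which can be computed as follows. Sign conventions for FB and the ratio direction in MG vary between studies; this sketch uses the common observed-versus-predicted forms.

```python
import numpy as np

def dispersion_metrics(obs, pred):
    """Standard evaluation metrics for dispersion model validation.
    FAC2: fraction of pairs with 0.5 <= pred/obs <= 2.
    FB:   fractional bias, (mean_obs - mean_pred) / (0.5 * (mean_obs + mean_pred)).
    NMSE: mean squared error normalized by the product of the means.
    NAD:  sum of absolute differences over the sum of all values.
    MG:   geometric mean bias, exp(mean(ln obs) - mean(ln pred))."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    mo, mp = obs.mean(), pred.mean()
    ratio = pred / obs
    return {
        "FAC2": float(np.mean((ratio >= 0.5) & (ratio <= 2.0))),
        "FB": float((mo - mp) / (0.5 * (mo + mp))),
        "NMSE": float(np.mean((obs - pred) ** 2) / (mo * mp)),
        "NAD": float(np.sum(np.abs(obs - pred)) / np.sum(obs + pred)),
        "MG": float(np.exp(np.mean(np.log(obs)) - np.mean(np.log(pred)))),
    }
```

A perfect model gives FAC2 = 1, FB = NMSE = NAD = 0, and MG = 1, which is the baseline against which the reported SPUFF values can be read.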
  • Process Systems Engineering
    Dong QIU, Qiming ZHAO, Yijiong HU, Tong QIU
    Journal of Tsinghua University(Science and Technology). 2025, 65(5): 813-824. https://doi.org/10.16511/j.cnki.qhdxxb.2024.21.034
    Abstract views: 472 | PDF downloads: 182 | HTML views: 15

    Objective: In the petrochemical industry, molecular reconstruction is crucial for understanding and optimizing the compositions of complex crude oil and petroleum products. As the first step in process simulation, quality control, and economic evaluation, molecular reconstruction typically employs mathematical models to calculate molecular compositions of petroleum products that align with their macroscopic properties. Traditional molecular reconstruction methods employ the gamma distribution to represent the carbon number distributions of homologs, but the coupling effects between the shape (α) and scale (β) parameters pose notable challenges in achieving the desired interpretability and optimization efficiency. This study addresses these challenges by introducing a novel shape-decoupled parameter method that enhances the model's interpretability and simplifies the optimization process. Methods: The proposed shape-decoupled parameter method modifies the traditional gamma distribution by replacing the shape and scale parameters with two new independent variables: peak position (m) and variance (σ²). Notably, m provides direct control over the peak of the distribution, whereas σ² independently determines the spread, or width, of the distribution, effectively reducing the parameter coupling issue that exists in conventional gamma distribution models. To enhance stability and convergence speed during optimization, a multivariate linear regression (MLR) model was employed to estimate the initial parameter values. This regression model was trained on historical data of molecular compositions to provide reasonable initial values and decrease the probability of being trapped in local minima.
The molecule-type homologous series (MTHS) matrix is used to represent the molecular composition of hydrocarbons, namely paraffins, isoparaffins, olefins, naphthenes, and aromatics (PIONA), with a comprehensive depiction of their multiple homologs. Moreover, an optimization problem was formulated to minimize the prediction errors of the macroscopic properties, including molecular weight, density, PIONA group composition, and true boiling point curves. After a comparative analysis of multiple deterministic and heuristic optimization techniques, the differential evolution (DE) algorithm was selected as the optimization tool owing to its superior accuracy and robustness. Results: Experimental evaluations showed that the shape-decoupled parameter method outperformed traditional methods in accuracy and optimization efficiency. Specifically, the density error decreased from 0.012 to 0.0059 g/cm³, and the average percentage relative error for the PIONA group composition also exhibited notable reductions. Moreover, the decoupled approach achieved faster convergence, requiring fewer iterations (from 1 000 to as few as 20) without compromising accuracy. This reduction highlights the computational efficiency of the proposed method, a notable advantage in industrial applications with limited computational resources and time. In addition, the proposed method exhibits enhanced robustness in addressing extreme molecular composition distributions, maintaining low errors in peak position and molecular composition predictions. This robustness becomes particularly evident in scenarios considered challenging for conventional methods, such as distributions with narrow ranges or hydrocarbons with approximately zero components at the boundary. Furthermore, the decoupled method provides better interpretability via independent control of peak position and distribution width.
The overall optimization performance was enhanced by the appropriate integration of the DE algorithm and effective initial parameter estimation by the MLR model. Conclusions: Compared with traditional methods, the proposed shape-decoupled parameter method provides a more interpretable, efficient, and accurate approach to the molecular reconstruction of petroleum products. By reducing the coupling effect between the parameters controlling the peak position and distribution width, this method simplifies the optimization process and achieves superior prediction accuracy and faster convergence. The results indicate the feasibility of its application for complex or extreme homolog distributions of hydrocarbons, revealing its higher reliability and robustness compared with traditional approaches. Future work is expected to focus on incorporating advanced machine learning techniques to further increase the accuracy and applicability of the model across a wider range of petroleum compositions, potentially enabling real-time molecular reconstruction for dynamic process optimization.
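The decoupling idea can be illustrated with the standard gamma parameterization. Assuming m is the distribution mode (the paper's exact definition may differ), the relations m = (α-1)β and σ² = αβ² invert in closed form:

```python
import math

def decoupled_to_gamma(m, var):
    """Map (peak position m, variance var) back to gamma shape a and
    scale b, assuming m is the mode: m = (a - 1)*b and var = a*b**2.
    Substituting a = var/b**2 into the mode equation gives
    b**2 + m*b - var = 0, whose positive root is taken."""
    b = (-m + math.sqrt(m * m + 4.0 * var)) / 2.0
    a = var / (b * b)
    return a, b

# Example: a homolog distribution peaking at carbon number 5 with variance 4.
a, b = decoupled_to_gamma(5.0, 4.0)
```

Because (m, σ²) now map one-to-one onto (α, β), the optimizer can move the peak without disturbing the width and vice versa, which is the decoupling the abstract describes.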

  • Jiliang MO, Qixiang ZHANG, Wei CHEN, Quan WANG, Zhiwei WANG
    Journal of Tsinghua University(Science and Technology). 2025, 65(2): 201-214. https://doi.org/10.16511/j.cnki.qhdxxb.2025.21.007
    Abstract views: 433 | PDF downloads: 169 | HTML views: 28

    Significance: Rapid expansion of global rail transit requires higher operating speeds for high-speed trains, posing considerable challenges to the safety and reliability of braking systems, particularly under demanding conditions such as long, continuous, steep slopes. In such scenarios, stable speed regulation requires prolonged mechanical friction braking to complement electrical braking. This extended braking action causes a rapid temperature increase at the brake disc/pad interface, and this temperature often reaches extreme levels. Such thermal stress leads to significant degradation in braking performance and reduced reliability of brake components, with a greater risk of brake failure. Despite the critical nature of these issues, research on the tribological behavior and dynamic responses of braking systems under prolonged slope conditions remains insufficient. Progress: This review synthesizes experimental studies and numerical simulations to elucidate the mechanisms underlying braking system performance degradation on long steep slopes. Experimental research reveals wear patterns and damage behaviors of friction pairs during prolonged braking, highlighting the roles of heat accumulation and friction reduction at the brake disc/pad interface. Fully coupled thermal-mechanical-wear finite element models have been employed to explore the interrelated effects of temperature, stress, and wear throughout the braking process. Lumped parameter models offer a detailed characterization of contact behaviors in friction pairs and their impact on dynamic system responses, incorporating principles from fractal theory and Hertz contact theory to develop mathematical models for contact stiffness and damping. Additionally, two-degree-of-freedom models have been utilized to analyze braking system stability under realistic operational conditions. 
Furthermore, dynamic models incorporating wheel/rail adhesion have been developed to examine the coupled torsional interactions between the brake disc/pad subsystem and the wheel/rail subsystem, as well as the impacts of the interactions on system vibration behavior. These models also assess the influence of diverse service conditions, brake disc/pad friction properties, and wheel/rail adhesion characteristics on system stability and vibration dynamics, thereby revealing the interaction mechanisms among various components of the braking system. Conclusions and Prospects: Future research should account for complex environmental factors encountered at high altitudes and on steep slopes and elucidate the mechanisms of braking degradation under multi-factor coupling. Particular attention should be given to the effects of low temperatures, snow, and low pressure on the performance of friction pairs. Moreover, the integration of intelligent monitoring and predictive technologies will be crucial for developing efficient real-time monitoring systems capable of dynamically assessing braking performance and identifying potential failure risks at an early stage. These advancements will enhance the safe operation of trains under challenging conditions and provide a robust theoretical and technical foundation for improving the braking performance and safety of high-speed trains while contributing to the sustainable growth of the rail industry.
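As one concrete ingredient of the Hertz-theory contact models mentioned above, the normal stiffness of a sphere-on-flat Hertzian contact follows from F = (4/3)E*√R·δ^(3/2). The material values below are illustrative, not taken from the reviewed studies.

```python
import math

def hertz_contact_stiffness(R, delta, E1, nu1, E2, nu2):
    """Normal force and contact stiffness of a Hertzian sphere-on-flat
    contact: F = (4/3)*E_star*sqrt(R)*delta**1.5 and its derivative
    k = dF/d(delta) = 2*E_star*sqrt(R*delta). SI units throughout."""
    E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)  # effective modulus
    force = (4.0 / 3.0) * E_star * math.sqrt(R) * delta**1.5
    stiffness = 2.0 * E_star * math.sqrt(R * delta)
    return force, stiffness

# Illustrative numbers: 1 mm asperity radius, 1 um penetration,
# a steel pad backing (210 GPa) against a cast-iron disc (110 GPa).
F, k = hertz_contact_stiffness(1e-3, 1e-6, 210e9, 0.3, 110e9, 0.28)
```

Stiffness values of this kind feed the lumped-parameter and two-degree-of-freedom models as the contact spring between disc and pad; fractal theory then extends the single-asperity result to rough surfaces.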

  • Special Section: Construction Management
    Zhiyuan GUO, Jian LI, Wei WANG, Jiangping MA, Xirui CHENG, Hanbin LUO
    Journal of Tsinghua University(Science and Technology). 2025, 65(1): 22-34. https://doi.org/10.16511/j.cnki.qhdxxb.2024.22.039
    Abstract views: 431 | PDF downloads: 171 | HTML views: 21

    Objective: Promoting urban renewal activities using digital technologies is essential to achieving high-quality economic development. This study examines how digital technology can be integrated into the various stages of urban renewal, such as real estate rights, planning, construction, completion, and operational management. The objective is to connect all data in the urban renewal process and protect rights related to the resources, assets, and capital involved in these projects. Furthermore, this study drives the industrialization of digital technology, accelerates the digitalization-driven transformation of traditional industries, and explores the transition from a land-based economy to a digital economy, thereby continuously strengthening, enhancing, and expanding China's digital economy. Methods: This study first examines the existing issues in the whole-process management of urban renewal in China and the state of research on digital platforms for urban renewal both domestically and internationally. It then outlines construction concepts for an urban renewal big data platform, proposes a data-driven approach to managing urban renewal, and designs a platform system comprising support environment, data resource, foundational platform, and application system layers. Finally, the issue of information non-shareability is addressed by integrating metadata-based multi-source data fusion technology to create an urban renewal data system. In addition, data attribute-based model lightweighting technologies and the integration of building information modeling (BIM) and geographic information systems (GIS) are used to design and develop the urban renewal big data platform. Results: The big data platform for urban renewal developed using the methods outlined in this study has been effectively applied to urban renewal projects across Wuhan.
The platform, which is customized to meet the specific needs of each area, features several fundamental functional modules, including basic information queries, demolition management, asset maintenance, renewal project management, digital delivery, and smart community operational management. The platform embodies an innovative examination of urban renewal strategies and methodologies supported by digital technology and facilitates digital management throughout the urban renewal process. It has been deployed in various urban renewal projects, such as Sanyang Design City, Wuhan Station, and Wuchang South Station, to effectively manage and control the quality of urban renewal projects and their financial investments. Conclusions: Urban renewal activities are crucial for assessing the inventory of resources, assets, and capital within urban districts, and detailed data on these holdings are essential for supporting urban renewal initiatives. The big data platform constructed using advanced digital technologies ensures data interconnectivity throughout the urban renewal process and facilitates the quantitative tracking and analytical evaluation of resources, assets, and capital. Future studies should further examine data standards, data governance, data collection, and platform promotion mechanisms to improve these processes. This study contributes significantly to digitizing the traditional urban renewal industry, supports local governments in exploring new economic development models with a transition from a land-based economy to a digital economy, and promotes high-quality economic development. Through in-depth analysis and practical application, this study demonstrates the transformative potential of digital platforms in urban renewal, setting a benchmark for future urban renewal developments in China's digital economy.
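A toy sketch of metadata-based multi-source fusion as described above: records from different source systems are mapped onto one shared schema and merged by parcel ID. All field names, sources, and values are invented for illustration.

```python
# Shared metadata schema; field names are invented for illustration.
SCHEMA = {"parcel_id", "source", "area_m2", "status"}

def normalize(record, field_map, source):
    """Map a source-specific record onto the shared metadata schema."""
    out = {"source": source}
    for src_key, dst_key in field_map.items():
        out[dst_key] = record[src_key]
    unknown = set(out) - SCHEMA
    if unknown:
        raise ValueError(f"fields outside schema: {unknown}")
    return out

def fuse(records):
    """Group normalized records by parcel_id; later sources fill gaps
    and overwrite shared keys, giving one fused view per parcel."""
    fused = {}
    for r in records:
        fused.setdefault(r["parcel_id"], {}).update(
            {k: v for k, v in r.items() if v is not None})
    return fused

rec_land = normalize({"pid": "P1", "sqm": 120.0},
                     {"pid": "parcel_id", "sqm": "area_m2"}, "land-registry")
rec_demo = normalize({"code": "P1", "state": "demolished"},
                     {"code": "parcel_id", "state": "status"}, "demolition-db")
fused = fuse([rec_land, rec_demo])
```

The metadata layer (here, `field_map` per source) is what lets otherwise non-shareable systems contribute to a single interconnected data record, the role the abstract assigns to metadata-based fusion.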

  • Advanced Ocean Energy Technology
    Libing ZOU, Mingjun ZHOU, Chao WANG, Xiangyuan ZHENG, Zouduan SU, Junwei LI
    Journal of Tsinghua University(Science and Technology). 2025, 65(8): 1377-1386. https://doi.org/10.16511/j.cnki.qhdxxb.2025.27.039
    Abstract (411) PDF (151) HTML (72)

    Significance: Floating wind turbines (FWTs), a revolutionary breakthrough in offshore renewable energy technology, are redefining the boundaries of ocean energy development through innovative technological solutions. Through the collaborative innovation of floating foundations and dynamic mooring systems, this technology has overcome the water-depth limitations of traditional fixed wind turbines, extending wind power development to deep-sea areas rich in high-wind-speed resources. Compared with fixed offshore wind turbines, FWTs not only significantly reduce disturbance to marine ecosystems but also, through their potential for large-scale cluster deployment, offer the global energy transition a solution that combines environmental friendliness with production efficiency. This article systematically reviews the current development status of floating wind power technology and analyzes in depth the core obstacles constraining its commercialization, including key technical challenges such as dynamic response control, mooring system durability, and life-cycle cost optimization. Of particular note is the milestone achieved by China's innovators in this field: the "Mingyang-Tiancheng" floating platform, the world's largest single-unit-capacity floating wind turbine system, has opened a new paradigm for the development of far-offshore wind power and provides an important technical reference for the global iteration of floating wind power technology. Progress: Globally, floating wind power projects represented by Hywind (spar) and WindFloat (semi-submersible) have completed the transition from experimental prototypes to small-scale commercial application, and their technological level and industrial chain layout lead the world. 
In contrast, China's floating wind power remains in the demonstration and verification stage, represented by the 5.5 MW "Yinlinghao" (2021) and 7.25 MW "Guanlanhao" (2023) units. Although key technological breakthroughs have been achieved, technological maturity and the supporting industrial chain still need improvement. The sector currently faces three development bottlenecks. Economically, floating wind power technology is not yet mature, research and application costs are high, and grid parity remains a distant goal. Environmentally, the special working conditions of typhoon-prone areas demand greater adaptability from the units. In terms of industrial synergy, an industrial cluster effect covering design, manufacturing, and operation and maintenance has not yet formed. It is therefore urgent to promote technological innovation that drives the development of related industrial chains, gradually reduces development costs, and enables large-scale commercial application. At the same time, the coordinated upgrading of offshore wind power equipment manufacturing and the marine engineering industry must be promoted, and a full life-cycle cost control system must be built, to lay the technical and economic foundation for large-scale commercial application. Conclusions and Prospects: To address these challenges, the "Mingyang-Tiancheng" floating wind power platform has made innovative breakthroughs in areas such as prestressed high-strength concrete technology, composite lightweight buoy design and construction, intelligent-perception collaborative control, single-point mooring, dual-wind-turbine technology, and typhoon resistance, reflecting China's emerging leadership in floating wind technology. It combines breakthroughs in materials science, intelligent control systems, and ecological design principles. 
Future progress will require sustained interdisciplinary collaboration and accelerated global deployment through industrialization to reduce costs. The "Mingyang-Tiancheng" provides valuable practical experience and technical reference for the future development of floating wind power.

  • Research Article
    HUANG Ran, ZHU Shiyou, HE Mengchen, LI Ruoyu, GE Xinru, WANG Qiao, CHEN Juan, LO Jacqueline T. Y., MA Jian
    Journal of Tsinghua University(Science and Technology). 2025, 65(3): 479-494. https://doi.org/10.16511/j.cnki.qhdxxb.2025.26.013
    Abstract (389) PDF (152)
    [Objective] In the event of a fire, a train moving through a tunnel section might lose power and come to an emergency halt while approaching the next station for emergency rescue. Predicting the distribution of smoke and temperature in tunnels under such fire scenarios is difficult because of the influence of the moving train. This difficulty can seriously threaten passenger safety and impede metro evacuation management. [Methods] To address this problem, this study considers influencing factors including train speed, fire source location, and fire heat release rate, and designs and simulates 75 different fire scenarios. Train movement in the simulated scenarios is realized using the equivalent piston wind method. The simulated smoke and temperature distributions, collected by sensors near the tunnel ceiling, are used to construct a dataset for deep learning. A deep learning model comprising long short-term memory networks, a convolution (Conv) module, and a deconvolution (DeConv) module is then proposed for rapid prediction of the temperature distribution in tunnels under moving-train fire conditions. The train speed, train braking time, and temperature time-series information from the sensors are fed together as inputs to the model. [Results] The results indicated that: (1) Under various train movement states, the model predicted the temperature distribution on the lateral evacuation platform in the tunnel 30 s in advance using current sensor data, with a mean absolute error (MAE) of only 2.2 ℃ and a mean absolute percentage error (MAPE) of 4%, indicating high accuracy. (2) In stark contrast to the week-long computation needed to obtain the temperature distribution in a fire dynamics simulator (FDS), this deep learning model made a prediction within only 0.08 s, a computational efficiency improvement of four orders of magnitude over the computational fluid dynamics method. 
(3) Validation with fire scenarios included in none of the training, validation, or test datasets resulted in model MAE and MAPE values of 3.1 ℃ and 5%, respectively, indicating strong generalization ability. (4) Considering the possibility of sensor failure within tunnels, this study investigated the influence of simulated sensor failures on the model's prediction accuracy by varying the sensor spacing. The model continued to exhibit effective predictive ability even at a sensor spacing of 4.00 m. At a sensor spacing of 8.00 m, the model's errors were larger, albeit at very few time frames (with a maximum MAE below 10.0 ℃ and a maximum MAPE below 15%). For all other sensor spacings, the model's MAE and MAPE were less than 5.0 ℃ and 10%, respectively. Hence, the model has strong robustness. [Conclusions] This study constructs a comprehensive dataset for a tunnel under moving-train fire conditions using an FDS and leverages deep neural networks to fully exploit the extensive information within the dataset, ultimately yielding a high-precision, robust model for rapid prediction of the temperature distribution in tunnels under moving-train fire conditions. These advancements are highly important for effective emergency management and response planning in tunnels under these challenging conditions.
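As a hedged illustration of the workflow described in this abstract, the sketch below pairs each sensor time-series window with the value a fixed horizon ahead (e.g., 30 steps for a 30 s lead at 1 s sampling) and scores predictions with MAE and MAPE. The function names, window sizes, and sampling assumptions are illustrative, not the authors' code.

```python
# Hypothetical sketch: build (history window -> horizon-ahead target) pairs
# from a sensor temperature series, plus the two error metrics reported above.

def make_windows(series, window, horizon):
    """Pair each `window`-step history with the value `horizon` steps ahead."""
    pairs = []
    for t in range(len(series) - window - horizon + 1):
        x = series[t:t + window]          # model input: recent sensor readings
        y = series[t + window + horizon - 1]  # target: reading `horizon` steps later
        pairs.append((x, y))
    return pairs

def mae(pred, true):
    """Mean absolute error, in the same units as the data (here, degrees C)."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(pred)

def mape(pred, true):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs(p - t) / abs(t) for p, t in zip(pred, true)) / len(pred)
```

In the paper's setting, the windows would be fed (with train speed and braking time) to the LSTM/Conv/DeConv network; the metrics match the reported 2.2 ℃ MAE and 4% MAPE figures.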
  • Editorial
    Journal of Tsinghua University(Science and Technology). 2025, 65(3): 413-413.
    Abstract (381) PDF (154)
  • Intelligent Construction
    Peng LIN, Jianqi YIN, Yunfei XIANG, Chaoyi LI, Yong XIA, Houlei XU, Hua MAO
    Journal of Tsinghua University(Science and Technology). 2025, 65(7): 1173-1184. https://doi.org/10.16511/j.cnki.qhdxxb.2025.26.022
    Abstract (380) PDF (148) HTML (26)

    Objective: High-altitude hydropower projects present significant challenges owing to harsh environmental conditions, project clustering, limited data availability, and high construction risks. Accurate carbon emission calculations are crucial in such environments to mitigate environmental impacts and promote sustainable development. This study targets the full lifecycle of carbon emissions during intelligent construction in high-altitude hydropower projects. Methods: This study establishes a comprehensive framework for calculating lifecycle carbon emissions tailored to the unique challenges of high-altitude hydropower construction. The methodology covers three primary stages: data collection, model formulation, and real-world implementation. Lifecycle boundaries and emission factors are established for material production, transportation, construction, and operational maintenance. Key emissions are identified based on quality, energy consumption, and cost criteria to build a detailed carbon inventory. To address altitude effects, an adjustment coefficient is derived by correlating field-monitored data with baseline values, accounting for altitude impacts on emission intensities. The carbon emission model incorporates a discrete event simulation (DES) to capture the dynamic characteristics of construction and equipment operations. This model couples static and dynamic elements, applying static calculations to stable phases such as material production and maintenance while using dynamic simulations for variable stages such as transportation and active construction. This DES approach simulates the sequential and interdependent nature of equipment operations, providing an accurate reflection of emission behavior over time. Furthermore, a network of onsite carbon monitoring devices was implemented across different construction sites in a case project, and real-time CO2 concentration data were collected. 
These data calibrate and validate emission factors within the model, ensuring accurate altitude-adjusted emission assessments. Results: The model was applied to the JX hydropower project in a high-altitude region with distinct climatic and geographical challenges. The findings indicated that material production and construction machinery were the largest carbon emitters, accounting for 65.7% and 27.4% of total emissions, respectively. Cement manufacturing was identified as the dominant emission source, emphasizing the need for greener materials and cement production. The DES model revealed that equipment states, such as idling and operation, significantly influence emission intensities, especially under reduced oxygen at high altitudes. By integrating the DES results with real-time monitoring, the model supports precise, responsive emission control strategies. The proposed mitigation measures included adopting cleaner fuels, optimizing equipment idle time, and enhancing operational efficiency through scheduled maintenance. The model reliability was demonstrated by the close alignment of the simulated results with actual onsite measurements. Conclusions: The developed model offers a structured approach to calculating lifecycle carbon emissions for intelligent hydropower construction in high-altitude regions. By addressing the unique characteristics of such projects, including altitude-induced effects on emission intensities and equipment behavior, the model serves as a reference for emission reduction in future high-altitude hydropower projects. This study advances the understanding and management of emissions in high-altitude construction, underscoring the potential of intelligent construction methods to drive sustainable hydropower development.
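The static/dynamic split described above can be sketched as follows: static stages (material production, maintenance) are computed as quantity times emission factor, while a tiny event-driven simulation integrates machinery emission rates over timed state changes, scaled by an altitude adjustment coefficient. The coefficient, state names, and rates are illustrative assumptions, not values from the JX project.

```python
# Illustrative sketch of a static + discrete-event carbon accounting scheme.
ALTITUDE_COEF = 1.25  # assumed uplift of emission intensity at high altitude

def static_emissions(quantities, factors):
    """Static stages: sum of quantity_i * emission_factor_i (e.g., t cement * kgCO2/t)."""
    return sum(quantities[k] * factors[k] for k in quantities)

def simulate_machine(events, rates, altitude_coef=ALTITUDE_COEF):
    """Dynamic stages: `events` is a time-sorted list of (time_h, new_state)
    state changes for one machine; `rates` maps state -> kgCO2 per hour.
    Integrates the rate over each interval between consecutive state changes."""
    total = 0.0
    for (t0, state), (t1, _) in zip(events, events[1:]):
        total += rates[state] * (t1 - t0)
    return altitude_coef * total
```

A fuller discrete-event model would also queue interdependent equipment events; this sketch only shows how state-dependent rates (e.g., idling versus operating) enter the total.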

  • Liang JIANG, Yuan WU, Yongshun ZHANG, Jiaxin ZHENG, Yushan CHEN, Xia ZHONG, Liao ZHOU, Yuting WEI, Lei CHEN, Linmao QIAN
    Journal of Tsinghua University(Science and Technology). 2025, 65(2): 215-232. https://doi.org/10.16511/j.cnki.qhdxxb.2025.21.010
    Abstract (367) PDF (137) HTML (31)

    Significance: This review highlights the progress and challenges in chip atomic layer polishing. Chips are fundamental to the modern information society. According to Moore's Law, the chip feature size is shrinking and approaching its physical limit. At the same time, advanced packaging technologies such as hybrid bonding continue to evolve. These trends create a pressing need for atomic layer polishing, a technique that enables extremely precise material removal at the atomic-layer level, to achieve surfaces with atomic-level precision for demanding processes such as photolithography and bonding. Currently, chemical mechanical polishing (CMP) is the only key technology in chip manufacturing capable of simultaneously achieving local and global planarization of the wafer surface, with the potential to realize atomic layer polishing. This review provides a systematic summary of the mechanisms and processes of CMP for chip substrate surfaces and interconnect heterogeneous surfaces. Progress: For the monocrystalline silicon substrate, significant progress has been made in controlled removal with single-atomic-layer precision at the microscopic level and in CMP with surface roughness close to the theoretical limit at the macroscopic level. These advances highlight the extreme precision processing capability of CMP for the wafer surface. Furthermore, ongoing developments in multi-field assisted CMP and energetic particle beam polishing hold promise for enabling atomic layer polishing of new substrates such as GaN, SiC, and diamond. For interconnect heterogeneous surfaces, two material removal modes in CMP are summarized from a tribological perspective based on the interactions between the abrasive and material surfaces: mechanical plowing and chemical bonding. Copper, cobalt, and nickel are mainly removed through the mechanical plowing mode, while tantalum, ruthenium, and titanium are mainly removed through chemical bonding. 
Building on this foundation, control principles and methods for achieving equivalent removal of heterogeneous surfaces are proposed based on the different material removal modes. In the mechanical plowing removal mode, corrosion and its impact on the mechanical strength of the material surface can be adjusted by modulating the effects of oxidation, complexation, and corrosion inhibition, as well as their synergistic effects, thereby controlling the material removal rate (MRR). In the chemical bonding removal mode, the number of reactive sites and their influence on interfacial chemical bonds can be regulated by adjusting pH, oxidation, and ionic strength, along with their synergistic effects, thus controlling the MRR. Guided by these control principles, a systematic summary of the planarization processes for interconnect heterogeneous surfaces, such as copper/tantalum, copper/cobalt, and copper/ruthenium, is provided. Finally, based on existing research progress, it is proposed to leverage the synergistic effects of mechanical, chemical, and electrical/optical/plasma/energy-beam actions to confine chemical and mechanochemical reactions to the surface atomic layer, thereby achieving atomic layer polishing. Specifically, for the mechanical plowing removal mode, the focus is on designing the molecular structures of chemical additives to precisely modulate the effects of oxidation, complexation, and corrosion inhibition. This approach confines corrosion behavior to the outermost atomic layer while controlling the mechanical action of the abrasive to enable precise, controllable atomic layer removal. 
For the chemical bonding material removal mode, the investigation focuses on confining chemical bonding reactions to the outermost atomic layer, weakening back bonds, and simultaneously controlling the mechanical action of the abrasive to disrupt chemical bonds between the outermost and sub-surface atoms, thus achieving controlled atomic layer removal. Conclusions and Prospects: This review highlights the growing demand for atomic-level manufacturing of high-end chips and the development of atomic layer polishing. It provides a systematic summary of the mechanisms, progress, and challenges in atomic layer polishing, with the aim of offering critical theoretical and technical support for the atomic-level manufacturing of advanced chips. The findings from this research have potential applications in key areas such as high-end optical components and superlubricity devices.

  • Research Article
    XIANG Yunfei, LUO Yiming, NING Zeyu, LIU Yuanguang, YANG Zuobin, LI Zichang, LIN Peng
    Journal of Tsinghua University(Science and Technology). 2025, 65(3): 433-445. https://doi.org/10.16511/j.cnki.qhdxxb.2025.26.014
    Abstract (358) PDF (144)
    [Objective] Hydropower underground engineering encounters significant safety management challenges owing to overlapping construction activities, diverse process stages, and dynamic resource flows. Safety management involves multidisciplinary tasks, such as safety hazard identification and rectification, emergency response, and regulatory compliance checks, which require specialized domain knowledge. In this context, safety management knowledge, which spans expert experience, patterns and characteristics, and management codes, is intricate and dispersed across multimodal data formats, including text, tables, and images. Efficient knowledge extraction from these multimodal data sources can significantly enhance data utility and support intelligent safety management. However, owing to the diverse data formats, the complexity of the knowledge system, and the variety of management scenarios, current research struggles with limited knowledge sources, acquisition difficulties, and poor generalization. [Methods] This study proposes a method for constructing a multimodal knowledge graph (KG) for safety management in hydropower underground engineering. (1) A large-scale, high-quality, multisource heterogeneous dataset is built from safety hazard identification and rectification records, regulations, and images. (2) Knowledge modeling employs top-down and bottom-up approaches to define the entities, relationships, attributes, and events pertinent to safety management in hydropower underground engineering. (3) Entity and relationship information is extracted from text data using a large language model (LLM) tuned with domain knowledge and enriched with specific examples for each entity type to handle small sample sizes. This approach uses demonstrations to provide the model with prior knowledge. (4) Instance segmentation is used to annotate safety hazard images. The entities identified in the images are then converted into vectors. 
Image and text data are linked based on semantic similarity. Image data are integrated into the textual KG, enabling the transformation from multimodal data to multimodal knowledge. (5) The multimodal KG is stored in Neo4j, an open-source graph database management system. (6) A scenario-specific knowledge acquisition method addresses the specific needs of safety management scenarios, integrating the KG with LLMs to enable retrieval-augmented generation and interpretable knowledge reasoning. [Results] (1) This paper collected more than 120 000 safety hazard records, 30 regulatory documents, and 300 000 images of safety hazards. Leveraging these comprehensive data, this paper constructed a large-scale, high-quality, multisource heterogeneous dataset specifically designed for managing safety in hydropower underground engineering projects. (2) Taking a hydropower underground engineering project as an example, the constructed multimodal KG was applied to intelligent recommendations for safety hazard rectification and to compliance checks. (3) The workflow for generating intelligent recommendations for safety hazard rectification measures involved the following steps. After users input safety hazard information, the scene-KG was extracted from the multimodal KG and fed into an LLM to generate appropriate rectification measures. (4) Based on the scene-KG, an inference retrieval method extended neighboring nodes and constructed an inference-KG for compliance checks. By integrating the inference-KG with an LLM, the system retrieved relevant content from regulatory documents based on user input. [Conclusions] The proposed method effectively extracts domain knowledge from multimodal data and applies it to safety management in hydropower underground engineering. 
The results serve as a reference for transitioning infrastructure construction safety management from a data-driven approach to a knowledge-driven approach.
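The image-to-text linking step described in this abstract (vectorized image entities matched to text entities by semantic similarity) can be sketched in a few lines. The entity names, vectors, and the 0.8 threshold below are illustrative assumptions, not the study's actual embeddings or parameters.

```python
import math

# Hypothetical sketch: link each image-derived entity vector to the most
# similar text-entity vector by cosine similarity, if it clears a threshold.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def link_image_entities(image_vecs, text_vecs, threshold=0.8):
    """For each image entity, attach the most similar text entity above the
    threshold; unlinked images are simply omitted from the result."""
    links = {}
    for img, iv in image_vecs.items():
        best, score = None, threshold
        for ent, tv in text_vecs.items():
            s = cosine(iv, tv)
            if s >= score:
                best, score = ent, s
        if best is not None:
            links[img] = best
    return links
```

In the full pipeline, the resulting links would then be written into the Neo4j graph as relationships between image nodes and text-entity nodes.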
  • Review
    ZHOU Wendong, CUI Yanwei, WANG Hetang, REN Gehui, CUI Xinyue, WANG Hao, SHEN Aojie
    Journal of Tsinghua University(Science and Technology). 2025, 65(3): 414-432. https://doi.org/10.16511/j.cnki.qhdxxb.2025.26.017
    Abstract (344) PDF (128)   CSCD(1)
    [Significance] Coal remains the most essential fossil energy source in China, with production from underground coal mines accounting for more than 80%. In underground coal mining operations, dust is a pervasive and hazardous material that can significantly compromise the safety and health of miners. High dust concentrations are associated with an increased incidence of pneumoconiosis and a heightened risk of catastrophic events, such as coal dust or gas explosions. The primary source of dust is the cutting processes of excavators and roadheaders during the extraction of coal and rock. Therefore, it is essential to comprehensively elucidate the mechanism of dust generation during cutting and heading operations for effective dust control in underground coal mines. [Progress] This paper presents an overview of the latest research developments in the field of dust generation, with a particular emphasis on three key areas: the behavior of dust generation during cutting, research methodology, and influencing mechanisms. First, regarding the behavior of dust generation during cutting by excavators and roadheaders, researchers have proposed several theoretical models to elucidate both the fragmentation processes of coal and rock bodies and the generation of dust under the influence of cutting. These models are based not only on the principle of energy conversion but also on the influence of the geometry of the cutting pick during the dust generation process. This provides a solid theoretical basis for understanding the physical nature of dust generation. Regarding the research tools employed, researchers have simulated the dust generation phenomena when mining machinery cuts coal and rock bodies through physical experiments conducted on self-designed experimental platforms. Researchers have also conducted numerical simulations using finite element or discrete element methods. 
These advanced experimental techniques elucidate the actual cutting conditions and offer a robust analytical tool for investigating the dust generation mechanism in depth. Additionally, this study provides a comprehensive analysis of the mechanisms influencing dust generation during cutting, examining both internal and external factors. These factors include the physicochemical properties of coal and rock, such as coal rank, moisture content, pore characteristics, strength, and brittleness, as well as the parameters of the cutting conditions, such as cutting depth, advance speed, drum rotation speed, and the morphology and arrangement of picks. [Conclusions and Prospects] Current research on the dust generation mechanism during cutting reveals several contradictions. Existing models often rely on simplified assumptions, neglecting the anisotropy of coal and the actual cutting conditions in the field, leading to discrepancies between the calculated results and experimental observations. Moreover, existing experimental platforms struggle to accurately replicate the motion of the cutting pick during actual operations. Although many studies have focused on the properties of coal and rock and dust characteristics, some of the conclusions are conflicting. Future research should prioritize the construction of full-scale experimental platforms and the development of high-precision monitoring technologies. Comprehensively investigating the dust generation characteristics of complex coal seams and quantifying the energy conversion mechanisms during the cutting process are crucial. These efforts are essential for improving the efficiency of cutting operations and achieving more effective dust control.
  • Public Safety
    Dingli LIU, Xiao LEI, Diping YUAN, Yanglong WU, Zhisheng XU
    Journal of Tsinghua University(Science and Technology). 2025, 65(6): 1009-1018. https://doi.org/10.16511/j.cnki.qhdxxb.2025.22.017
    Abstract (344) PDF (131) HTML (16)

    Objective: The efficient allocation and dispatch of fire rescue resources are crucial to urban public safety. Traditional approaches assume continuous spatial distribution of fire service coverage areas and give less consideration to the impact of real-time traffic conditions on rescue route selection and response times. This study aims to introduce and define the concept of "rescue enclaves"—areas that, although not directly adjacent to fire stations, can be effectively covered by them—and proposes a method to identify and calculate these spatially discontinuous coverage areas. Methods: This study proposed a method for identifying and calculating spatially discontinuous coverage areas by mapping points to grids. Using this method: (1) fire truck travel times were calculated using real-time traffic data, (2) geographic coordinates were converted to universal transverse Mercator (UTM) coordinates, (3) the region was divided into fine grids, (4) grid coverage status was determined, (5) transition grids were processed through neighborhood analysis, and (6) rescue enclaves were identified using a breadth-first search (BFS) algorithm. The CS-XX urban fire station in a Chinese city was selected as a case study to validate the method. In this case study, 3 818 points of interest were identified as rescue demand points across 49 evaluation periods in one day, generating 187 082 valid data samples. A target response time of 4 min was established, and an 80% reduction coefficient was applied to convert regular vehicle travel times to fire truck travel times. 
Results: The rescue enclave areas were successfully identified and calculated using the proposed method, revealing the following key findings: (1) the dynamic coverage area of CS-XX varied from 1.83 to 4.57 km2, with the minimum fire service coverage of 1.83 km2 recorded during the morning peak at 8:00; (2) the calculated coverage area trends were consistent with the percentage of demand points accessible within 4 min, validating the reliability of the method; (3) critical rescue enclaves were identified near CS-XX, with enclave areas ranging from 0.25 to 1.12 km2, accounting for 12.20%-27.53% of the total coverage area; (4) the rescue enclaves occasionally extended beyond the traditional coverage of 7.00 km2 prescribed by standard area determination methods; and (5) coverage areas and rescue enclave areas varied synchronously with traffic conditions, with traffic congestion leading to a significant reduction in their sizes. Conclusions: This study elucidates the concept of rescue enclaves and demonstrates their substantial presence within fire service coverage areas. The enclaves are systematically identified and quantified via an algorithmic framework, and such enclaves are found to comprise up to 27.53% of a fire station's coverage area. Integrating rescue enclaves into fire rescue jurisdiction planning protocols can substantially improve resource allocation. Real-time traffic conditions and differing flow efficiencies across route types are identified as the primary determinants of enclave formation. Further investigations are warranted to elucidate the precise mechanisms and contributing factors governing rescue enclave emergence and to establish quantitative metrics for rescue passage efficiency across diverse route configurations.
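Steps (3)-(6) of the method (gridding, coverage flags, and BFS-based enclave identification) can be sketched as follows. The grid contents, 4-neighbour connectivity, and cell representation are illustrative assumptions; the study's actual implementation also handles transition grids and real-time travel times.

```python
from collections import deque

# Hypothetical sketch: given the set of grid cells covered within the target
# response time, find the contiguous component containing the station by BFS;
# any remaining covered components are "rescue enclaves".

def _bfs(cells, seed):
    """Return the connected component of `seed` within `cells` (4-neighbourhood)."""
    seen, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (nr, nc) in cells and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen

def find_enclaves(covered, station):
    """covered: set of (row, col) cells reachable within the target time.
    Enclaves are covered components not connected to the station's component."""
    main = _bfs(covered, station)   # contiguous coverage around the station
    rest = covered - main
    enclaves = []
    while rest:                     # peel off remaining components one by one
        comp = _bfs(rest, next(iter(rest)))
        enclaves.append(comp)
        rest -= comp
    return enclaves
```

Multiplying each enclave's cell count by the grid-cell area then gives the enclave areas reported above (0.25-1.12 km2).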

  • Fire in Forests
    Fangpu LI, Xue RUI, Zijun LI, Weiguo SONG
    Journal of Tsinghua University(Science and Technology). 2025, 65(4): 655-663. https://doi.org/10.16511/j.cnki.qhdxxb.2025.27.004
    Abstract (344) PDF (125) HTML (26)

    Objective: Fires are highly destructive disaster events, and fire monitoring is one of the most effective measures for reducing the casualties and economic losses they cause. Compared with traditional fire monitoring methods, target detection has shown strengths in both cost and outcome. Many researchers have proposed new algorithms to improve the efficiency of target detection, and numerous algorithms suited to fire monitoring applications now exist. However, these typically lack the capacity to detect small targets, which is the main characteristic of flame targets in incipient fires. To enhance small-target detection for fire detection, this paper improves the YOLOv5 algorithm and trains a model on purpose-built datasets. Methods: First, a fire image dataset with small-target scene conditions is prepared for model training and performance testing. The validation set is divided into eight mutually exclusive sub-datasets of environmental conditions for performance testing. Second, three improvements are introduced into the YOLOv5 algorithm: a) expansion of the multiscale detection layer to improve its receptive resolution; b) enhancement of the multiscale feature extraction capability by embedding the Swin Transformer module, thereby reducing the computational cost of algorithm deployment; and c) optimization of the postprocessing function by replacing the original non-maximum suppression with the soft-NMS algorithm to retain more potential adjacent targets. Next, the improved model YOLOv5s-SSS (Swin Transformer with soft-NMS for small targets) is proposed. To verify the effect of each improvement and its contribution to the final model, the new model is evaluated using four sets of ablation experiments. 
After parameter optimization, a set of fire images is input into the models of the ablation experiments to compare and verify their outputs. Results: The ablation results indicate that all of the introduced improvements are valid. The average accuracy of the improved model is 16.3% higher than that of the original algorithm on flame image targets under challenging scene conditions and 5.9% higher on normal-sized image targets. The verification results show that, compared with the original model, the improved model markedly tightens the predicted locations of fire targets, reduces missed detections of small and densely distributed fire targets, and clearly separates densely packed or overlapping fire targets. Conclusions: The dataset prepared in this paper effectively supports the training and testing of the improved fire detection model. The proposed improvements, together with the reliable performance tests, are shown to work effectively, providing a new improvement scheme for fire image detection technology. The model can also serve as a reference for improving efficiency in applications such as the accurate positioning of fire points in incipient forest fires and the remote sensing monitoring of large-scale fires. However, the overall accuracy of the improved model is relatively low, possibly because the images in the validation set were deliberately limited to small targets to assess the model's improvement. In the future, further improvements should be introduced to enhance the model's detection ability in scenarios such as low-light conditions so that it becomes adequate for industrial applications.
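The soft-NMS postprocessing named in this abstract replaces hard suppression with score decay, so adjacent or overlapping detections are kept with reduced confidence rather than discarded. Below is a minimal pure-Python sketch of the linear-decay variant; the box format, thresholds, and decay rule are standard soft-NMS conventions, not the paper's exact settings.

```python
# Minimal sketch of linear soft-NMS. Boxes are (x1, y1, x2, y2).

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def soft_nms(boxes, scores, iou_thresh=0.5, score_thresh=0.001):
    """Linear soft-NMS: overlapping boxes have their scores decayed by
    (1 - IoU) instead of being removed outright; boxes whose decayed score
    falls below `score_thresh` are dropped. Returns kept indices."""
    scores = list(scores)  # work on a copy
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        i = order.pop(0)          # current highest-scoring box
        if scores[i] < score_thresh:
            continue
        keep.append(i)
        for j in order:           # decay scores of boxes overlapping box i
            ov = iou(boxes[i], boxes[j])
            if ov >= iou_thresh:
                scores[j] *= (1.0 - ov)
        order.sort(key=lambda k: -scores[k])
    return keep
```

With hard NMS, the second of two heavily overlapping flame boxes would be deleted; here it survives with a decayed score, which is why soft-NMS helps separate densely distributed fire targets.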

  • MECHANICAL ENGINEERING
    XIE Jiaqi, ZHANG Han, ZHU Zhiming
    Journal of Tsinghua University(Science and Technology). 2024, 64(12): 2068-2083. https://doi.org/10.16511/j.cnki.qhdxxb.2024.22.043
    Abstract (339) PDF (136) HTML (0)   Knowledge map   Save
    [Significance] Rotary friction welding (RFW) is an extensively studied and applied solid-state welding method that can achieve high-quality welding between similar or dissimilar materials. RFW involves complex thermal, mechanical, and metallurgical processes, with heat generation occurring due to intense thermomechanical coupling. The stress and temperature histories at the friction interface considerably affect the microstructure evolution and mechanical properties of welded joints. Consequently, a primary focus of RFW research is deeply exploring the mechanisms and processes of thermomechanical coupling, obtaining the stress and temperature histories during welding to guide process-parameter selection and welded-joint-microstructure regulation. However, because of high-speed rotation and substantial plastic deformation during RFW, the temperature evolution and plastic deformation at the welding interface cannot be directly measured experimentally. Thus, mathematical models must be developed to study RFW. Currently, numerical simulation has become the predominant method for RFW theoretical research. With the development of computational technology, various numerical simulation methods have emerged, further elucidating the evolution laws of various physical fields during RFW and supporting theoretical research on the thermomechanical coupling behavior of RFW. [Progress] This paper reviews the research progress on the thermomechanical coupling behavior and numerical simulation technologies of RFW. It encompasses the theoretical underpinnings of the friction behavior of RFW, the development of heat-generation models, and the discussion of prevalent analytical and numerical methods for calculating temperature and stress fields during RFW. RFW has a long history of research and development, resulting in the establishment of three friction-behavior theories: the slide, stick, and slide-stick friction theories.
These theories have informed the development of various thermomechanical coupling heat-generation models as well as material models. Analytical methods directly employ the thermomechanical coupling model to compute analytical solutions for the temperature and stress fields. These methods offer high computational efficiency and provide intuitive insights into heat generation and transfer processes, as well as material flow and deformation characteristics during RFW. However, analytical methods face challenges, such as their reliance on one-dimensional assumptions and simplified boundary conditions. Different stages of friction require distinct mathematical physics equations, complicating the achievement of coherent calculations for the entire welding process. Meanwhile, numerical simulation methods are more varied, mainly including thermal conduction numerical models and the finite element method (FEM). As a mainstream numerical simulation method, the FEM can simulate material flow models and friction models, extending from two-dimensional to three-dimensional analyses. This allows detailed information on temperature, stress and strain fields, residual stress distribution after welding, interface contact, and joint formation during RFW to be obtained. In addition, the FEM can be effectively integrated with other simulation and prediction methods, such as microstructure evolution simulations and neural networks, offering comprehensive guidance for welded-joint-microstructure regulation and process-parameter selection. [Conclusions and Prospects] Presently, the simulation accuracy of RFW highly depends on material parameters and boundary condition settings, and the prediction capability of simulation models remains limited.
Therefore, further research on welding mechanisms and the introduction of various computational methods to enhance the efficiency and accuracy of numerical simulation technologies while reducing computational costs represent current challenges and developmental directions for RFW simulations.
  • MECHANICAL ENGINEERING
    DAI Jingzhou, TIAN Ling, HAN Tianlin
    Journal of Tsinghua University(Science and Technology). 2024, 64(12): 2092-2104. https://doi.org/10.16511/j.cnki.qhdxxb.2024.27.025
    Abstract (327) PDF (123) HTML (0)   Knowledge map   Save
    [Objective] Sliding bearings are critical components of modern machinery, and their proper functioning is essential. However, inadequate lubrication can cause significant wear and degradation of the bearing contact surfaces, posing serious safety risks. Monitoring the wear state of sliding bearings during operation and predicting their remaining useful life (RUL) is crucial for ensuring equipment safety and reducing maintenance costs. Despite this need, the current online diagnostic and prognostic methods for sliding bearings are lacking. To address this issue, this study proposes an intelligent diagnostic and prognostic method for sliding bearing wear based on multidomain features and relevance-vector-based iterative exponential degradation (RV-IED). [Methods] This study designed and built a test rig to simulate real-world operating conditions of sliding bearings and collected data under different wear conditions. Wear degradation and vibration measurement tests were conducted to measure the maximum wear depth (MWD) and vibration signals during the tests. For feature engineering, multidomain features combining the time domain, frequency domain, and nonlinear characteristics were constructed. From the energy of the vibration signals, time-domain features were derived. A Fourier transform was then applied to these signals to obtain frequency-domain waveforms, which were decomposed into multiple normal distributions. Calculating the intensity values of the top ten peaks' $\widetilde{\mu} \pm \widetilde{\sigma}$ provided a ten-dimensional frequency-domain vector, which was then reduced to one-dimensional frequency-domain features using principal component analysis. To handle the high dimensionality of the feature space, dynamic time warping was used to compute distances between different spectra as nonlinear features.
These multidomain features served as input vectors for diagnosis, with the corresponding MWDs used as labels for training the relevance vector machine (RVM). New samples were diagnosed by outputting the current MWD. After each diagnosis, another RVM extracted sparse relevance vectors and corresponding weights from historical diagnosis data, fitting an exponential model using nonlinear least squares. This model predicts the bearing RUL by extending the trend to a preset threshold. [Results] The proposed method was evaluated against traditional time-domain and frequency-domain features combined with SVM/RVM methods as a control group. The experimental results showed the following: (1) In diagnosing wear depth, the proposed method achieved a diagnostic error within 10%, outperforming the control group; (2) Although the diagnostic error increased as the training set size decreased, the changes were minimal beyond a reduction of 30%, making the method suitable for small sample sizes. We recommend a dataset size that does not exceed 1×10^4; (3) For RUL prediction, the proposed method's cumulative relative accuracy was 0.59, compared to 0.36 for the control group. [Conclusions] By leveraging the constructed sliding bearing wear test rig and monitoring data, multidomain features were created to accurately reflect bearing wear degradation. A diagnostic and prognostic method based on RV-IED for sliding bearing wear provides accurate diagnostics and RUL predictions, even for small sample sizes. This method surpasses traditional approaches and effectively supports intelligent diagnostics and predictive maintenance of sliding bearing wear states. This innovative approach holds promise for further advancement in fault prediction and health management in mechanical systems.
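The RUL extrapolation step, fitting an exponential degradation curve to the diagnosed wear depths and extending it to the preset failure threshold, can be sketched as below. The paper fits relevance vectors by nonlinear least squares; this dependency-free sketch instead linearizes the exponential model with a log transform, and all variable names and values are illustrative:

```python
import numpy as np

def predict_rul(times, depths, threshold):
    """Fit the MWD history with y = a * exp(b * t) (log-linearized least
    squares) and extrapolate the trend to the preset failure threshold."""
    b, log_a = np.polyfit(times, np.log(depths), 1)  # slope b, intercept ln(a)
    t_fail = (np.log(threshold) - log_a) / b         # depth reaches threshold here
    return max(t_fail - times[-1], 0.0)              # remaining useful life
```

The RUL is simply the time at which the fitted curve crosses the threshold minus the time of the most recent diagnosis, clipped at zero.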
  • Special Section: Construction Management
    Zhao ZHANG, Zhehao YAN, Yihua MAO
    Journal of Tsinghua University(Science and Technology). 2025, 65(1): 12-21. https://doi.org/10.16511/j.cnki.qhdxxb.2024.22.051
    Abstract (323) PDF (127) HTML (14)   Knowledge map   Save

    Objective: Constrained by the particular stage of social development in which they were built, most existing communities in China were designed with young and middle-aged people as the target residents, neglecting the housing needs of the elderly. In addition, there is a considerable lack of elderly facilities and support services in community living areas. Therefore, expediting the development of age-friendly communities is a crucial initiative to improve the living environment of elderly residents and address the effects of aging. The development of age-friendly communities is an urgent topic that demands immediate attention. Unfortunately, there is still no systematic evaluation of the effects of establishing age-friendly communities. Methods: Using data from a survey of residents of Beijing's first batch of national model age-friendly communities, we examined the effect of age-friendly community establishment on the life satisfaction of the elderly, as well as the mediating role of their mental health. We also tested the robustness of the results using a propensity score matching model. In addition, subjective and objective indicators of the seven functions of an age-friendly community were used to test the effectiveness of the age-friendly community functions. Finally, we employed grouped regression to test the heterogeneity of age-friendly community well-being among different elderly groups. Results: The results demonstrate the following: (1) The establishment of age-friendly communities enhances the life satisfaction and mental health of the elderly, and their mental health mediates the relationship between the establishment of age-friendly communities and their life satisfaction. (2) The seven functions of age-friendly communities affect the life satisfaction of the elderly to varying degrees, and only four functions—community atmosphere, community service, smart elderly care, and social participation—improve the mental health of the elderly.
(3) The establishment of age-friendly communities more strongly affects the life satisfaction and mental health of elderly people who are younger, more capable of self-care, more educated, and in lower income groups. Conclusions: The abovementioned results show that overall, China's current age-friendly communities have achieved some success; however, some aspects still require attention during the follow-up promotion of age-friendly communities. First, when promoting the establishment of age-friendly communities, we should not only focus on improving the physical environment of the community but also consider various aspects of the social environment to help alleviate the mental health problems of the elderly. Concurrently, we should give more attention to the needs of vulnerable groups, such as the elderly with low education levels, limited self-care ability, and low incomes. We should implement policies such as home care, group assistance, and simplified and inexpensive procedures so that the benefits of age-friendly community establishment can be enjoyed by as many people as possible.

  • Fire in Buildings and Timber Structures
    Qiang MA, Hongrui JIANG, Ke WANG, Bo WANG, Long DING, Jie JI
    Journal of Tsinghua University(Science and Technology). 2025, 65(4): 625-633. https://doi.org/10.16511/j.cnki.qhdxxb.2024.27.027
    Abstract (320) PDF (121) HTML (20)   Knowledge map   Save

    Objective: Most historic buildings in China are wood or brick-wood structures; consequently, they have low fire resistance ratings and large fire loads. Furthermore, firefighting problems associated with these buildings include high building density, insufficient fire separation distances, and narrow fire passages; thus, historical buildings face a high risk of damage owing to fire. Therefore, to prevent and control fires in historical buildings, exploring efficient early fire detection and alarm systems for these buildings is necessary. Image fire detectors enable rapid identification of fires and provide the location of fires; thus, they are used in early fire detection and alarm systems for historic buildings. However, there is a lack of studies that accurately calculate the coverage and evaluate the placement of image fire detectors. Currently, the placement of outdoor image fire detectors depends on semiquantitative approaches such as engineers' experience and existing regulations. Methods: Consequently, this study proposes a placement optimization methodology for outdoor image fire detectors in historical buildings based on the set covering and maximum covering models. First, a three-dimensional model of the target area is constructed by integrating the structural information of historical buildings and mesh division. Second, the field of view of an image fire detector is determined based on its model and specifications to analyze its viewshed; subsequently, a set of candidate detector positions is constructed using a selection rule that favors more important and less occluded areas, thus providing support for the reasonable placement of image fire detectors. Furthermore, the extent of coverage of the target area grid by the candidate detectors is determined by constructing a binary observation matrix that maps the position and direction of image fire detectors in the target area.
Third, taking the optimization of the cost and coverage of image fire detectors as the goal, a mathematical model for the placement optimization of image fire detectors is developed based on the set covering model and maximum covering model. Finally, based on the genetic algorithm, the optimal placement scheme of image fire detectors is obtained using the previously constructed placement optimization model. Results: To demonstrate the feasibility and effectiveness of the proposed methodology, this study considers the joint area composed of the Hall of Supreme Harmony, Hall of Central Harmony, and Hall of Preserving Harmony of The Imperial Palace as a case study. Under the given target area coverage and predetermined cost, the optimal placement plans of image fire detectors are obtained, i.e., {11, 13, 16, 20, 23, 25, 33, 35, 48, 55, 58, 60, 62, 64, 67, 68} and {11, 22, 26, 29, 35, 44, 48, 59, 64, 67}. In addition, the joint area coverage reached 98.17% and 92.41%, respectively. Conclusions: Compared with the existing semiquantitative approaches to placing image fire detectors, the proposed methodology simplifies several manual calculation processes and can meet different requirements for cost and coverage within the detection area, thus optimizing the placement of image fire detectors. In addition, this methodology can quickly and accurately obtain the position and direction of the detectors, which can be used for installing and calibrating outdoor image fire detectors in historical buildings. Thus, the proposed methodology can be implemented in the construction of early fire detection and alarm systems in historical buildings to prevent massive economic and cultural losses owing to fire damage.
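The binary observation matrix and covering objective described above can be illustrated with a small greedy baseline. The paper optimizes the set covering and maximum covering models with a genetic algorithm, so the function below is only a simplified stand-in for the same formulation, and the matrix entries are synthetic:

```python
import numpy as np

def greedy_set_cover(cover, target_frac=1.0):
    """cover[i, j] = 1 if candidate detector position i sees grid cell j.
    Greedily pick positions until target_frac of the cells are covered.
    Returns the chosen position indices and the achieved coverage."""
    n_pos, n_cells = cover.shape
    covered = np.zeros(n_cells, dtype=bool)
    chosen = []
    while covered.mean() < target_frac:
        # Marginal gain of each candidate: newly covered cells only
        gains = (cover.astype(bool) & ~covered).sum(axis=1)
        best = int(np.argmax(gains))
        if gains[best] == 0:
            break  # remaining cells are unreachable by any candidate
        chosen.append(best)
        covered |= cover[best].astype(bool)
    return chosen, float(covered.mean())
```

Lowering target_frac corresponds to the maximum covering variant (best coverage under a detector budget), while target_frac = 1.0 corresponds to the set covering variant (full coverage at minimum cost).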

  • Special Section: Public Safety
    Shengyu WEI, Yue ZHAI, Nian ZHAO
    Journal of Tsinghua University(Science and Technology). 2025, 65(1): 174-185. https://doi.org/10.16511/j.cnki.qhdxxb.2025.22.004
    Abstract (309) PDF (116) HTML (23)   Knowledge map   Save

    Objective: Extreme precipitation events have increased globally in recent years, leading to more frequent and intense urban flooding that seriously threatens public safety. Reliable models for assessing storm waterlogging vulnerability and studies on spatial differentiation are essential for effective disaster prevention and mitigation. However, traditional models often face two main limitations. First, traditional models frequently overlook human adaptive responses to flooding and rely on single-weight calculation methods, reducing the accuracy of their insights. Second, traditional models generally apply to larger scales, such as cities or regions, and fail to capture smaller-scale spatial differences in vulnerability. This study introduces a refined storm waterlogging vulnerability assessment model that includes exposure, sensitivity, and coping capacity. The model allows researchers to reveal the spatial clustering of storm waterlogging vulnerability in more detail, providing more in-depth insights into the areas most prone to flooding. Methods: To establish a comprehensive assessment system, this study combined city-specific conditions with human adaptive responses. Nine key indicators, such as annual rainfall, were selected to capture urban-specific vulnerability. Subjective weights were assigned based on an improved expert scoring method, effectively incorporating expert insights. To enhance objectivity, the entropy method was used to calculate objective weights. Then, these subjective and objective weights were combined and optimized using the Nash equilibrium equation to achieve a balanced vulnerability evaluation. Multisource data and ArcGIS software enabled the visualization of storm waterlogging vulnerability on a 1 km grid scale in Xi'an. Global Moran's I and local indicators of spatial association (LISA) clustering were used to analyze spatial patterns in storm waterlogging vulnerability, revealing clusters and trends across the city.
In addition, a vulnerability triangle classified vulnerability levels into eight distinct types, highlighting dominant factors across regions and supporting targeted resilience planning. Results: The vulnerability assessment showed that areas with high and relatively high vulnerability to urban storm waterlogging mainly clustered in the central old city within the Third Ring Road. This area primarily consisted of various functional zones with hard-paved surfaces, dense construction, intensive development, and high population density. In contrast, areas with low and relatively low vulnerability were mainly clustered in Chang'an, Huyi, Lintong, Yanliang, Zhouzhi, and Lantian Districts, which consisted mostly of forest and agricultural lands, providing high ecological resilience. LISA clustering analysis revealed that storm waterlogging vulnerability had a clear spatial clustering pattern with a significant positive spatial correlation. Eight storm waterlogging vulnerability types were identified: strongly integrated vulnerability (ESC), sensitivity-dominated (S), exposure and sensitivity-dominated (ES), sensitivity and coping capability-dominated (SC), coping capability-dominated (C), exposure and coping capability-dominated (EC), weakly integrated vulnerability (O), and exposure-dominated (E) types. The ESC type mainly appeared in the northwest and northern areas of Xi'an; the S and ES types were concentrated within the central urban area; the SC types appeared around the city center; the C types were found in the eastern regions; the EC types occupied much of the southern area of the city; and the O and E types were less common and appeared sporadically in various locations. In lower vulnerability areas, the C types were predominant. In lower to moderate vulnerability areas, the EC types were the most common. In higher vulnerability areas, the SC and ESC types were the most frequent.
Conclusions: The proposed method provides a reliable scientific approach to assessing storm waterlogging vulnerability. It effectively visualizes quantitative data and identifies key factors influencing vulnerability across regions. The findings support the creation of storm waterlogging risk maps and indicate targeted disaster prevention and mitigation strategies.
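The objective-weighting step of the assessment can be illustrated with the standard entropy method; the subjective expert weights and their Nash-equilibrium combination are omitted here, and the indicator matrix below is synthetic:

```python
import numpy as np

def entropy_weights(X):
    """Objective indicator weights via the entropy method.
    X: (n_samples, n_indicators) matrix with all entries positive.
    Indicators whose values are more spread out carry more information
    and therefore receive larger weights."""
    P = X / X.sum(axis=0)                              # share of each sample
    e = -(P * np.log(P)).sum(axis=0) / np.log(len(X))  # entropy, in [0, 1]
    d = 1.0 - e                                        # divergence per indicator
    return d / d.sum()                                 # normalized weights
```

An indicator that takes the same value for every grid cell has maximum entropy and thus receives (near) zero weight, which is the intended behavior: it cannot discriminate between cells.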

  • SPECIAL SECTION: NUCLEAR WASTE WATER AND GAS
    CHEN Jiachen, CHEN Hailong, LIAN Bing, WANG Yan
    Journal of Tsinghua University(Science and Technology). 2024, 64(12): 2053-2058. https://doi.org/10.16511/j.cnki.qhdxxb.2024.27.033
    Abstract (305) PDF (114) HTML (1)   Knowledge map   Save
    [Objective] Radioactive materials may enter the soil through atmospheric deposition, surface water, and groundwater pathways, resulting in radioactive contamination. Physical disturbances caused by natural winds or anthropogenic activities may cause particulate matter in the soil to be resuspended from the ground into the air. Some nuclides (e.g., uranium) that are difficult to transfer to the human body through the food chain are likely to be inhaled in the form of resuspended particulate matter and cause internal irradiation. Therefore, the resuspension behavior of radioactive particulate matter must be investigated to support the control of suspended particles. CFD (computational fluid dynamics) studies have mostly been carried out on the resuspension of particulate matter from pipe walls; only a few CFD numerical simulation studies have addressed the resuspension release of particulate matter from contaminated soil. [Methods] In this work, a CFD simulation of the resuspension release of radioactive aerosol particles from a contaminated site is carried out. First, the resuspension of radioactive aerosol particles caused by the air inlet is achieved, and the influence of the air inlet and exhaust outlet settings on the distribution characteristics of the airflow field is then analyzed. In the simulation, the air supply port is set as the velocity inlet, and the air velocity of the air inlet is set to 10 m/s. The air exhaust port is set as the outflow boundary, the wall is set as the solid-wall boundary condition, and the ground surface is set as the surface source for generating aerosol particles. The generated aerosol particles have a size of 2.5 μm and a density of 1.65 g/cm3. The discrete phase model is chosen to solve the gas-solid two-phase flow, and the transient state is used to calculate the motion-diffusion characteristics of the resuspended aerosol particles.
To determine the state of particulate matter and the trajectory of its motion with time and space, the SIMPLE pressure-velocity coupling algorithm is selected with the standard pressure scheme and the second-order upwind discretization scheme, and the standard k-ε turbulence model is adopted. According to the resuspension motion of the aerosol particles, the ventilation of the air inlet violently disturbs the airflow in the box, forming an approximately left-right symmetric airflow field in the box. [Results] Simulation results show that: 1) when the inlet wind speed is 10 m/s, the airflow inside the resuspension box is uniformly distributed, and the vortex flow is sufficient to resuspend the radioactive aerosol particles on the ground. In addition, the amount of resuspended particles is large, which is convenient for aerosol sampling and measurement. 2) The simulated wind speed of the resuspension box near the ground is within the range of the ground surface wind speed in the area and thus can be applied to characterize the effect of surface wind speed on the resuspension of particles in this area. [Conclusions] This simulation provides a basis for the experimental design of the deposition and resuspension of radioactive aerosol particles. The findings provide guidance for the radiation protection of people working or moving in radiation-contaminated areas.
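The discrete-phase particle motion solved in the simulation can be illustrated with a minimal one-way-coupled Stokes-drag update using the stated particle size (2.5 μm) and density (1.65 g/cm3). The air viscosity, the explicit Euler scheme, and the neglect of turbulence are simplifying assumptions, so this is a didactic sketch rather than the discrete phase model itself:

```python
import numpy as np

RHO_P = 1650.0   # particle density, kg/m^3 (1.65 g/cm^3, as in the simulation)
D_P = 2.5e-6     # particle diameter, m (2.5 um, as in the simulation)
MU = 1.8e-5      # assumed dynamic viscosity of air, Pa*s
G = 9.81         # gravitational acceleration, m/s^2

# Particle relaxation time in the Stokes drag regime
TAU = RHO_P * D_P ** 2 / (18.0 * MU)

def step(v_p, v_air, dt):
    """Explicit Euler update of particle velocity [vx, vy] under
    Stokes drag toward the local air velocity, plus gravity."""
    return v_p + dt * ((v_air - v_p) / TAU - np.array([0.0, G]))
```

Iterating step shows the particle relaxing to the airflow velocity horizontally and to the terminal settling velocity TAU * G (about 0.3 mm/s here) vertically, which illustrates why a 10 m/s inlet flow easily resuspends such fine particles.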
  • Safety Science
    Siyuan MU, Quanyi LIU, Ruxuan YANG, Yi LIU, Rui YANG
    Journal of Tsinghua University(Science and Technology). 2025, 65(7): 1368-1376. https://doi.org/10.16511/j.cnki.qhdxxb.2025.22.001
    Abstract (304) PDF (124) HTML (25)   Knowledge map   Save

    Objective: Owing to the high flammability of non-flame-retardant pure acrylonitrile-butadiene-styrene (ABS), a material often used for passenger luggage, it is easily ignited by open flames, posing risks to aviation operations. Therefore, in-depth research on the pyrolytic combustion characteristics of ABS at high temperatures and high radiation intensities is crucial for the safe operation of aircraft. Methods: This study evaluated the thermal stability and combustion characteristics of ABS under different heating rates and radiation intensity conditions using thermogravimetric analysis and cone calorimeter systems. This study also analyzed the variations in the characteristic parameters of ABS. Results: The results show that the pyrolysis process of ABS can be divided into an initial volatilization stage, a rapid decomposition stage, a residual combustion stage, and a pyrolysis termination stage. In the rapid decomposition stage, when ABS reaches temperatures of approximately 310 ℃ to 343 ℃, the main polymer chains of ABS undergo cleavage, breaking down into different components, such as acrylonitrile and polyethylene monomers, leading to the decomposition of polymer molecules. When heated, the main chain of ABS ruptures. The molecular structure of ABS contains different components, such as styrene and butadiene, which are prone to decomposition and cross-linking reactions upon heating, resulting in the occurrence of the pyrolysis process. An increase in heating rate significantly shortens the pyrolysis time and enhances the maximum thermal decomposition rate. As the radiation intensity increases, the combustion process of ABS accelerates, with the heat release rate rising and the peak heat release rate increasing by 53%. The combustion and ignition times decrease by 32% and 78%, respectively, because the increase in material temperature and the intensification of heat conduction and convection lead to an increase in the heat release rate.
Under low radiation intensities, ABS cannot rapidly absorb enough energy to reach combustion conditions. However, as the radiation intensity increases, ABS rapidly absorbs sufficient energy for faster decomposition, thus shortening the combustion time. The generation of carbon monoxide (CO) and carbon dioxide (CO2) begins earlier, and the maximum generation amounts of CO2 and CO increase by 49% and 74%, respectively. Oxygen consumption increases and the oxygen consumption rate accelerates because thermal radiation intensifies molecular motion, leading to a faster reaction with oxygen in the air. Mass loss begins earlier, the remaining sample mass decreases, and the maximum mass loss rate increases by 53.8%. Based on the thermal penetration model, the 2 mm thick ABS material is classified as a thermally thin material, and this classification is verified. Based on the ignition time model, a critical radiative heat flux formula is established, and the critical radiative heat flux is calculated to be 16.255 kW/m2. Finally, according to the fire performance indicators, as the radiation intensity increases, the material combustion rate increases, releasing more heat and leading to faster fire growth and development, thereby increasing the fire risk. The fire risk of ABS is positively correlated with the radiation intensity. Conclusions: This study concludes that ABS exhibits a high fire risk. This research provides crucial data and practical references on the fire risks associated with ABS for safe aviation operations.
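The critical-heat-flux estimate can be sketched from the thermally thin ignition time model: 1/t_ig varies linearly with the external heat flux, so the critical flux is the intercept where 1/t_ig reaches zero. The data below are synthetic and only illustrate the fitting procedure, not the paper's measurements (which yield 16.255 kW/m2):

```python
import numpy as np

def critical_heat_flux(q_ext, t_ig):
    """Thermally thin ignition model: 1/t_ig is linear in the external
    radiative heat flux, so the critical flux is where the fitted line
    crosses 1/t_ig = 0.
    q_ext: external heat fluxes (kW/m^2); t_ig: ignition times (s)."""
    slope, intercept = np.polyfit(q_ext, 1.0 / np.asarray(t_ig, float), 1)
    return -intercept / slope  # flux at which ignition time diverges
```

Below the returned flux, the fitted model predicts an unbounded ignition time, i.e., the sample never ignites, matching the observation that ABS cannot absorb enough energy to reach combustion conditions at low radiation intensities.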

  • SPECIAL SECTION: NUCLEAR WASTE WATER AND GAS
    WANG Xuebin, WU Dong, YIN Yuguo, LIU Yaming, RUAN Hao
    Journal of Tsinghua University(Science and Technology). 2024, 64(12): 2031-2044. https://doi.org/10.16511/j.cnki.qhdxxb.2024.27.019
    Abstract (300) PDF (105) HTML (0)   Knowledge map   Save CSCD(1)
    [Significance] In nuclear reactors and spent fuel reprocessing plants, the production of tritiated light water is unavoidable, amounting to thousands of tons annually. The direct discharge of this byproduct into the environment poses significant ecological risks. Consequently, strict tritium emission standards have been established worldwide, propelling the development of detritiation technologies. Among these, combined electrolysis and catalytic exchange (CECE) technology stands out because of its high detritiation factor and mild operating conditions, positioning it as a focal point in global research. This study explores the current state of CECE technology, highlights the three key technologies that underpin it, and addresses the challenges faced in its engineering application, thereby promoting its practical implementation. [Progress] CECE technology comprises liquid-phase catalytic exchange (LPCE), electrolysis, and hydrogen-oxygen recombination processes. LPCE technology is instrumental in the operation of CECE technology. The LPCE column, a critical component, operates on complicated principles, and its efficiency is influenced by various factors such as temperature, pressure, and packing material. Research conducted over the years has shed light on the effect of these elements on the performance of LPCE columns. Electrolysis technology serves as the bottom reflux mechanism within the CECE, with alkaline electrolyzers and proton exchange membrane (PEM) electrolyzers as the main devices. Alkaline electrolyzers, characterized by their limited liquid inventory, good tritium radiation resistance, and high operational stability, are widely regarded as mature technologies. Efforts are currently being directed toward increasing gas production capabilities. PEM electrolyzers represent a new area of development. Compared to alkaline electrolyzers, their notable advantage lies in the absence of alkaline electrolytes. 
However, their susceptibility to tritium poses significant challenges to their widespread application. Hydrogen-oxygen recombination technology is the top reflux technology of the CECE technology, with the recombiner device playing a pivotal role. Recent advancements have seen a transition from hydrophilic to hydrophobic catalysts within the recombiners, coupled with a reduction in the reaction temperature by over 100℃ while maintaining an efficiency rate exceeding 99.9%. Concurrently, theoretical simulations of CECE technology have evolved with the development of models such as two-film mass transfer and three-fluid models, alongside simulation programs such as FLOSHEET and EVIO. These tools have been instrumental in guiding the design of the CECE process by combining theoretical simulations and experimental analyses. With the development of theoretical simulations and key technologies underpinning CECE, several countries have designed processing schemes to remove tritium from tritiated light water using CECE technology. This study details the process proposed by Canada and Japan. [Conclusions and Prospects] Advances in the key technologies of CECE demonstrate significant advantages in removing tritium from tritiated light water. Moreover, there is substantial potential for further development in engineering applications. Furthermore, the efficiency and cost-effectiveness of CECE technology can be further improved in several ways: the development of more efficient and economical catalysts, the enhancement of PEM electrolyzers to offer better resistance to tritium irradiation, increased gas production, and advancements in hydrogen fuel cell technology.
  • HYDRAULIC ENGINEERING
    YANG Fang, HU Yuying, SONG Lixiang, ZHAO Jianshi
    Journal of Tsinghua University(Science and Technology). 2024, 64(12): 2132-2143. https://doi.org/10.16511/j.cnki.qhdxxb.2024.22.032
    Abstract (296) PDF (108) HTML (0)   Knowledge map   Save
    [Objective] The two-dimensional (2D) hydrodynamic model can simulate the process of flood inundation and evolution. This model is widely used in flood forecasting. The number of grids and time steps considerably affect the computational efficiency of this model. Various methods have been proposed to improve the computational efficiency of the 2D hydrodynamic model. Some researchers use the dynamic grid strategy to calculate only effective grid cells to reduce the impact of the increased number of grids. Others use the local time step (LTS) technology to decrease the time consumption caused by small time steps. Whether the efficiency of model computation can be improved by combining the advantages of the two strategies requires further research. [Methods] Herein, a hybrid algorithm that combines the dynamic grid strategy and LTS technology is proposed to further improve the model performance based on the self-developed flood simulation model, HydroMPM. Technically, the grid cells that actually assist in the flux calculation are first selected as effective cells. Then, the LTS technology is applied to these cells to further optimize the flux calculation and update strategy. The calculation accuracy and efficiency of the hybrid algorithm are compared and analyzed using an ideal dam break case and a typical flood simulation scenario in the Nangang River basin, Guangdong Province, China. [Results] The dynamic grid strategy can accelerate model computation by computing only the effective grid cells. However, the effective cells actually contain all computation grids when the computation area is completely submerged. In this case, the dynamic grid strategy may lead to a high computation amount and low computation efficiency owing to the dynamic update mechanism on all grid cells. The LTS technology can improve the average time step by hierarchically updating the grid cells. 
However, the performance of this technology largely depends on the differences in grid scale and flow velocity distribution. The urban flood process often has a scattered distribution of the local inundation area, which is suitable for the application of the dynamic grid strategy. At the same time, local mesh refinement is also required in urban flood simulation. This refinement enables the model to better describe the topographic variation in local waterlogging-vulnerable areas. However, it also leads to a large difference in the maximum time step allowed by different grid scales, which requires the application of the LTS technology. By combining the two strategies, the hybrid dynamic grid and LTS technology can further enhance the model performance. In the ideal dam break case, the hybrid algorithm can reduce computation time by 13.1%-64.8%. In the practical application case of the Nangang River basin, the computation time is reduced by approximately 60%. [Conclusions] The hybrid algorithm successfully combines the advantages of the original dynamic grid strategy and LTS technology to further improve the computational efficiency of the model. However, the performance of this hybrid algorithm varies depending on the application scenario. If the 2D hydrodynamic model is used to simulate only river floods, estuaries, and other areas, the effect of the dynamic grid strategy may not be obvious. The LTS technology may have only a limited effect when the grid distribution is uniform and the flow state is stable. The hybrid algorithm can combine the advantages of the two abovementioned algorithms. However, it also inherits their shortcomings. In practical applications, a suitable algorithm should be chosen depending on the specific application scenario to achieve both good simulation accuracy and high computational efficiency.
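The hierarchical update at the heart of the LTS strategy can be sketched in a few lines. The level assignment and update counts below are illustrative only, not HydroMPM's actual scheme: each cell gets a time-step level from its own CFL-allowable step, and coarser-level cells are updated less often than cells pinned to the global minimum step.

```python
import math

def lts_levels(dt_allowed, dt_min):
    """Assign each cell a time-step level m so its step is dt_min * 2**m,
    capped by the cell's own CFL-allowable step (illustrative, not HydroMPM's scheme)."""
    return [int(math.floor(math.log2(dt / dt_min))) for dt in dt_allowed]

def updates_per_sync(levels):
    """Flux updates over one synchronization period of the coarsest level,
    compared with uniform stepping at the global minimum step."""
    max_level = max(levels)
    period = 2 ** max_level                      # steps of dt_min in one sync cycle
    lts = sum(period // (2 ** m) for m in levels)
    uniform = period * len(levels)               # every cell updated every dt_min step
    return lts, uniform

# Four cells with allowable steps spanning a factor of 8 (hypothetical values):
levels = lts_levels([0.1, 0.2, 0.4, 0.8], 0.1)
lts, uniform = updates_per_sync(levels)          # fewer updates than uniform stepping
```

The gain grows with the spread of allowable steps, which is why a uniform grid with a stable flow state benefits little from LTS.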
  • Hydraulic Engineering
    Shouguang WANG, Huaguang LIU, Pengyu MU, Qiang YANG, Yaoru LIU, Qianghui LIU, Chi LIU, Xingyu JIANG
    Journal of Tsinghua University(Science and Technology). 2025, 65(10): 1821-1837. https://doi.org/10.16511/j.cnki.qhdxxb.2025.21.016
    Abstract (296) PDF (113) HTML (115)   Knowledge map   Save

    Objective: The construction of hydraulic tunnels in high-stress surrounding rock environments often leads to the occurrence of rock bursts, thereby posing a substantial threat to engineering safety. Among the various active prevention and control measures for rock bursts, drilling pressure relief in surrounding rocks is considered a relatively economical and effective method. By creating drilled holes in the rock mass, stress concentration can be redistributed, thereby reducing the likelihood of sudden failures and improving the overall stability of the tunnel structure. Methods: In order to investigate the mechanical properties and damage characteristics of sandstone in hydraulic tunnels under different combinations of drilling numbers and drilling depths, a series of uniaxial compression tests were conducted. These tests utilized an advanced uniaxial compression testing machine and the VIC-3D noncontact full-field strain measurement system. The experiment involved eight different combinations of drilling holes in the sandstone specimens. This study comprehensively analyzed key parameters such as compressive strength, the accumulation and release characteristics of elastic strain energy, and the residual volume rate of sandstone. A regression analysis was conducted to establish a quantitative relationship between the residual volume rate of sandstone and its compressive strength. In addition, the crack evolution and damage characteristics of sandstone under different drilling hole configurations were studied using digital image correlation (DIC) technology and fracture phase field simulation. Furthermore, numerical simulations based on finite element methods were performed to compare the effects of straight holes and 10° inclined holes on stress redistribution within the rock mass. 
Results: The experimental and numerical results led to the following key findings: (1) when the radius of the drilled holes remains constant, an increase in the drilling depth leads to a decrease in the compressive strength of sandstone. This finding indicates that deeper drilling can effectively weaken the rock mass and facilitate stress relief. (2) Under the condition of identical hole radius and depth, an increase in the number of drilled holes results in a discontinuous reduction in the compressive strength of sandstone. Moreover, the arrangement of the drilled holes plays a crucial role in determining the overall strength of sandstone. For instance, the specimens with asymmetrical three-borehole configurations exhibited lower compressive strength than those with symmetrical four-borehole configurations. This finding suggests that asymmetrical arrangements can enhance energy dissipation efficiency and reduce the overall stress level within the rock. (3) The elastic strain energy of sandstone exhibits a strong positive correlation with compressive strength. Moreover, as the ratio of loss energy to elastic strain energy approaches zero, the intensity of sandstone destruction considerably increases. This outcome highlights the role of energy release in the failure process of rock materials. (4) DIC strain field analysis and numerical simulations confirm that sandstone under uniaxial compression follows a characteristic butterfly-shaped damage pattern. The three-borehole asymmetric configuration showed lower compressive strength, greater far-field stress reduction, earlier failure onset, and higher economic feasibility for pressure relief applications than the four-borehole symmetric configurations. (5) Under identical rock formation and borehole depth conditions, the impact of straight and 10° inclined boreholes on stress redistribution is found to be similar. 
However, practical construction decisions should be made considering site-specific conditions and operational requirements. Conclusions: This study provides valuable insights for optimizing the design of borehole pressure relief schemes for hydraulic tunnels. The findings provide a reference for engineers seeking to improve tunnel stability through effective stress redistribution strategies. By systematically evaluating different drilling configurations, this study contributes to the development of more efficient and cost-effective methods for mitigating rock bursts in high-stress environments.

  • HYDRAULIC ENGINEERING
    LI Dan, QIN Chao, XUE Yuan, XU Mengzhen, LIU Yingjie
    Journal of Tsinghua University(Science and Technology). 2024, 64(12): 2122-2131. https://doi.org/10.16511/j.cnki.qhdxxb.2024.21.021
    Abstract (292) PDF (126) HTML (1)   Knowledge map   Save
    [Objective] Rivers carry water and material transport within certain boundary forms. Owing to the difficulties of in-situ measurement, the limited number of hydrological observation stations, and the precision constraints of digital elevation models (DEMs), there is a significant scarcity of information on small river cross-sections distributed in river source areas, mountainous regions, and remote areas, which hinders research on river hydrology and hydraulic processes. Since the beginning of this century, the International Association of Hydrological Sciences (IAHS) has been advocating for solutions to the challenges of hydrological prediction in ungauged basins (PUB). Although large-scale remote sensing technology is increasingly applied to the extraction of river hydraulic parameters, the spatial resolution of satellite altimetry data is too low for small rivers (river width less than 150 m), which account for a high proportion of river networks. The National Aeronautics and Space Administration (NASA) launched the ICESat-2 (Ice, Cloud and land Elevation Satellite-2) satellite in 2018. This satellite was equipped for the first time with the Advanced Topographic Laser Altimeter System (ATLAS), a photon-counting LiDAR system. The light spot (footprint) it projects onto the Earth's surface has a diameter of about 17 m, with a center-to-center distance between spots of only 0.7 m. This allows for the acquisition of photon point cloud data with smaller spots and higher density along the track. The high-density photon point cloud provided by ICESat-2 offers the possibility of extracting hydrological parameters of narrow rivers with high precision. This study focuses on small rivers with a width of less than 10 m in the Huangfu River basin, a first-order tributary of the middle reaches of the Yellow River, which is a data-scarce region. [Methods] A method for extracting the cross-sectional morphology of small rivers using ICESat-2 ATL03 data is proposed.
First, photons with medium to high confidence levels are selected to eliminate most of the noise. Then, a smoothing filter is applied for precise de-noising. Finally, the precisely de-noised point cloud is manually edited to generate a DEM. The cross-sectional morphology of small rivers was extracted at three different locations based on the DEM and compared with unmanned aerial vehicle (UAV) in-situ measurement results. [Results] The results show that: (1) the method proposed in this study, which combines the selection of medium to high confidence point clouds with filtering denoising, can effectively remove the noise from photon point clouds, with a denoising rate above 63%; (2) the completeness and richness of ground points extracted based on ATL03 data are superior to those obtained by reclassifying ATL03 using the ATL08 product; (3) the river cross-sections extracted based on ATL03 data are basically consistent with the UAV in-situ measurement results (R2>0.96, RMSE=0.69 m). [Conclusions] The research results preliminarily demonstrate the feasibility of using ICESat-2 altimetry data to extract cross-sections of small rivers in data-scarce areas, partially supplying three-dimensional spatiotemporal information on small rivers in data-deficient areas and providing technical support for the construction of three-dimensional river networks across entire basins and the simulation of hydrological and hydraulic processes. This also indicates that ICESat-2 altimetry data have research prospects for obtaining hydrological parameters of rivers in data-scarce areas.
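The two-step cleanup described in [Methods] — confidence screening followed by smoothing — can be sketched as below. The confidence threshold, window size, and photon record layout are assumptions for illustration, not the paper's exact parameters:

```python
import statistics

def denoise_photons(photons, min_conf=3, window=3):
    """Two-step cleanup mirroring the paper's idea (illustrative thresholds):
    1) keep photons whose confidence flag is medium/high (>= min_conf),
    2) smooth the remaining elevations with a sliding median filter."""
    kept = [(x, h) for x, h, conf in photons if conf >= min_conf]
    kept.sort(key=lambda p: p[0])                    # order along-track
    half = window // 2
    smoothed = []
    for i, (x, _) in enumerate(kept):
        lo, hi = max(0, i - half), min(len(kept), i + half + 1)
        smoothed.append((x, statistics.median(h for _, h in kept[lo:hi])))
    return smoothed

# (along_track_m, elevation_m, confidence 0-4); the spike at x=2 is noise
raw = [(0, 10.0, 4), (1, 10.1, 3), (2, 25.0, 4), (3, 10.2, 3), (4, 0.0, 1)]
clean = denoise_photons(raw)
```

The low-confidence photon is dropped outright, while the high-confidence noise spike is suppressed by the median filter — the division of labor the abstract describes.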
  • Special Section: Construction Management
    Dingyuan MA, Yixin LI, Xiaodong LI
    Journal of Tsinghua University(Science and Technology). 2025, 65(1): 53-61. https://doi.org/10.16511/j.cnki.qhdxxb.2024.22.049
    Abstract (290) PDF (114) HTML (13)   Knowledge map   Save

    Objective: To achieve energy-saving and emission reduction in buildings, green building design is increasingly gaining attention. However, traditional design methods often rely heavily on the designer's experience, which complicates the consideration of multidimensional factors such as technical strategies and costs, thus limiting decision-making efficiency. Mining tacit knowledge to support green building design decisions and improve decision-making efficiency presents a significant challenge. Methods: This study proposes a two-stage intelligent decision-making model for energy-saving building strategies based on tacit knowledge. The first stage employs a case-based reasoning (CBR) model to determine energy-saving technical strategies. A case library containing 147 green-certified buildings provides reference strategies using attributes from the preliminary design phase, such as building type, structure, number of floors, height, orientation, shape coefficient, floor area, and green certification level. Cosine similarity helps retrieve relevant cases and identify technical strategies like window-to-wall ratios, heat transfer coefficients of the building envelope, heat pump loads, and renewable energy use. The second stage involves an incremental cost prediction model that uses machine learning algorithms. A 2∶8 split of the case library into test and training sets enables comparison across four machine learning algorithms: artificial neural network, extreme gradient boosting (XGBoost), support vector machine, and random forest. Each model's prediction accuracy, precision, and F1 score (the harmonic mean of precision and recall) are evaluated. The model takes the technical strategies identified in the first stage and the known information from the preliminary design phase as input feature parameters. The import_plot module analyzes feature importance to eliminate redundant features. 
The two-stage model is validated on buildings from regions with hot summers and cold winters. Results: Findings indicate the following: (1) The CBR model effectively identifies and reuses the most similar energy-saving technical strategies, thereby improving decision-making efficiency. Most target cases achieve a similarity greater than 0.8 in the case library. (2) Among the machine learning models, the XGBoost-based incremental cost prediction model exhibits the highest accuracy, achieving 72.41%. (3) By applying the synthetic minority oversampling technique to balance samples and remove outliers, the prediction accuracies for four types of costs reach approximately 70%. However, the prediction accuracy for the fifth type of incremental cost is lower owing to varying owner preferences and requirements. Conclusions: The proposed two-stage intelligent decision-making model successfully integrates the CBR model with machine learning algorithms. The proposed model optimizes the use of limited known information available during the preliminary design stage to predict both technical strategies and incremental costs. This model enhances the scientific rigor and efficiency of energy-saving decision-making, providing significant support for green building design.
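The first-stage CBR retrieval can be illustrated with a minimal cosine-similarity sketch. The attribute encoding and case fields here are hypothetical, not the paper's 147-case library:

```python
import math

def cosine(u, v):
    """Cosine similarity between two attribute vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def retrieve(case_library, query, top_k=1):
    """Return the top_k most similar cases; attributes are assumed to be
    already numerically encoded and scaled (the encoding scheme is not given
    in the abstract)."""
    ranked = sorted(case_library, key=lambda c: cosine(c["attrs"], query), reverse=True)
    return ranked[:top_k]

# Hypothetical encoded cases (e.g., building type, shape coefficient, floors):
library = [
    {"id": "case-01", "attrs": [1.0, 0.2, 0.9], "strategy": "low window-to-wall ratio"},
    {"id": "case-02", "attrs": [0.1, 0.9, 0.1], "strategy": "ground-source heat pump"},
]
best = retrieve(library, [0.9, 0.3, 0.8])[0]   # most similar case is reused
```

The retrieved case's technical strategies then become input features for the second-stage incremental cost prediction.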

  • CIVIL ENGINEERING
    ZHAO Huanshuai, PAN Yongtai, YU Chao, QIAO Xin, CAO Xingjian, NIU Xuechao
    Journal of Tsinghua University(Science and Technology). 2024, 64(12): 2155-2165. https://doi.org/10.16511/j.cnki.qhdxxb.2024.22.029
    Abstract (288) PDF (108) HTML (0)   Knowledge map   Save CSCD(3)
    [Objective] In rock-crushing processes, external loading methods are important factors affecting the mechanical properties and fracture behavior of rocks. Among these loading methods, vibration and impact methods are the most common ones. However, previous research has mainly focused on macroscopic failure features and energy dissipation properties under the singular loading of vibration or impact. Research on the composite loading of vibration and impact is relatively scarce, and few studies have investigated the influence of vibration loading on the microscopic fracture characteristics and energy evolution during rock impacts. In particular, quantitative characterization studies are lacking. Research on the influence of vibration loading on the propagation of impact cracks and the energy utilization efficiency in rocks has significant academic value and engineering relevance for meeting the needs of modern mine construction for high-efficiency, energy-saving, and green production. [Methods] The quasi-brittle green sandstone material, commonly used in rock-crushing operations, was taken as the research object. The macro/micromechanical response relationship of green sandstone was established by integrating indoor experiments with microscopic parameter calibration. The parallel bonding model was adopted, and two loading methods — impact and composite loading of vibration and impact — were compared and analyzed to investigate the influence of vibration loading on the propagation of impact cracks and the energy utilization efficiency in the failure process of green sandstone. The analysis was conducted using the particle flow code (PFC). [Results] The research results indicate that under the same impact velocity, increasing the frequency or amplitude of vibration leads to an increasing trend in the number of cracks in green sandstone.
Under the two loading methods, the maximum number of cracks in green sandstone shows a nearly linear increase as the impact velocity increases, with the majority being tensile cracks. The distribution of cracks exhibits an X-shaped conjugate pattern. However, the growth rate of cracks is relatively high under composite loading of vibration and impact. A quantitative characterization of the increase in the number of cracks with impact velocity under vibration loading is established. Under equivalent impact velocity, as the frequency and amplitude increase, there is a corresponding increase in both the proportion of vibration input energy and the energy utilization efficiency in green sandstone. However, as the impact velocity increases, the proportion of vibration input energy within the total input energy in green sandstone decreases. Concurrently, the maximum energy utilization efficiency shows a trend of rapid increase followed by a decrease, with the maximum increase reaching 0.725%. [Conclusions] In practical rock-crushing applications, appropriately increasing vibration loading can exacerbate the damage and deterioration of rocks. This process significantly enhances the energy utilization efficiency with lower energy input. This study preliminarily explores the impact of vibration loading on the propagation of impact cracks and the energy utilization efficiency in green sandstone to provide a reference for the rational selection of parameters in rock-crushing processes.
  • PUBLIC SAFETY
    ZHANG Ying, Wu Lingli, Zhang Bo
    Journal of Tsinghua University(Science and Technology). 2024, 64(12): 2177-2184. https://doi.org/10.16511/j.cnki.qhdxxb.2024.27.020
    Abstract (284) PDF (107) HTML (0)   Knowledge map   Save CSCD(1)
    [Objective] To investigate the impact of thermal aging on the fire propagation characteristics of conductors, the fire propagation behavior of polyethylene wire, including flame spread and dripping behavior, was experimentally studied under different equivalent aging ages. [Methods] First, the expected service life of the test wire in an actual scenario was derived from the empirical formula of the aging model investigated in previous studies, and the required test conditions were accordingly calculated. [Results] The experimental results show that with the extension of thermal aging time, the flame height, width, and spread speed decrease. The frequency and total mass of the melt droplets decreased with increasing equivalent aging time, whereas the mass of individual droplets increased. According to the experimental results and theoretical analysis, a thermal aging law for the wire fire propagation characteristics was established. An analysis of the effect mechanism of thermal aging on fire spread behavior reveals that an increase in the equivalent aging age mainly affects the elongation at break and melting point of the thermal insulation layer. With the increase in the equivalent aging time, the elongation at break of the polyethylene insulation layer decreases, which increases the surface tension of the layer. Under the influence of high temperature, the melting of the solid insulation layer is accelerated, and the layer more easily transitions into a liquid state. The liquid melt on the upper surface of the insulation layer flows down the surface under the action of gravity.
Once the self-gravity of the accumulated molten material overcomes the constraint of its surface tension, individual droplets form and drip, and this dynamic constraint process becomes longer as the equivalent aging age increases. This reduces the frequency of dripping molten droplets. With the accumulation of the melt on the surface of the insulation layer, the volume of the melt accumulation increases, which increases the heat loss of the melt in the combustion zone. The flame above the insulation layer decreases in width and height because of the loss of fuel on the surface of the insulation layer, thus reducing the fire spread speed. The self-constructed fire spread model reveals that the change in flame width directly affects the convective heat transfer from the flame to the insulation layer in the pyrolysis zone. Moreover, less heat is transferred via convection, which results in a slower spread of the fire. [Conclusions] Overall, this study revealed the internal mechanism of the fire spread and dripping behavior of heat-aged wire, which can inspire engineers to develop more flame-retardant materials, promote the innovation of refractory materials, enhance the fire safety of various electrical equipment, and mitigate the harm caused by aging wire fires to human life and property.
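The force balance behind the dripping behavior — droplet weight against surface-tension retention — can be illustrated with a Tate's-law style sketch. This is a generic stand-in for the paper's model, and the surface-tension values and neck geometry are assumed numbers, not the paper's measured data:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def detached_drop_mass(sigma, neck_diameter):
    """Tate's-law style balance (an illustrative stand-in for the paper's model):
    a droplet detaches once its weight m*g exceeds the surface-tension
    retention force pi * d * sigma around the melt neck."""
    return math.pi * neck_diameter * sigma / G

# Aging raises the melt's surface tension, so each droplet must grow heavier
# before it can detach -> fewer, larger drips, as observed in the experiments.
# sigma in N/m and neck_diameter in m are hypothetical values:
m_fresh = detached_drop_mass(sigma=0.030, neck_diameter=0.002)
m_aged = detached_drop_mass(sigma=0.045, neck_diameter=0.002)
```

With the same neck geometry, the higher surface tension of aged insulation yields a larger critical droplet mass, consistent with the reported decrease in drip frequency and increase in individual droplet mass.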
  • Special Section: Construction Management
    Xiaozhe WANG, Xinxiang JIN, Xiao LIN, Zhubang LUO, Hongling GUO, Rongxiang LAN
    Journal of Tsinghua University(Science and Technology). 2025, 65(1): 45-52. https://doi.org/10.16511/j.cnki.qhdxxb.2025.22.009
    Abstract (281) PDF (112) HTML (12)   Knowledge map   Save

    Objective: Crane lifting is a critical process in steel and concrete structure construction, but safety accidents occur frequently, necessitating effective preventive measures. Traditional supervision relies on human experience and on-site judgment, which are vulnerable to operator fatigue and distraction. As information technology advances, it plays a vital role in crane safety risk monitoring. However, current research mainly focuses on monitoring lifted objects, with insufficient comprehensive studies on the cranes' overall workspaces. In terms of worker monitoring, existing wireless sensing methods perform poorly in steel structure construction scenarios due to significant interference, while visual methods mostly focus on tracking workers' locations in the image, lacking their accurate real-world coordinates. This study aims to propose a spatial collision monitoring method that integrates building information modeling, crane sensing, and computer vision technologies to enhance safety monitoring of crane-worker interactions in steel structure construction scenarios. Methods: The specific research framework and methods are illustrated as follows: First, a crane's workspace is categorized into low-risk, medium-risk, and high-risk areas, and equations for defining medium- and high-risk boundaries are established. Then, the crane's location and posture are monitored in real time using sensors. Simultaneously, using the YOLO11-OBB model and perspective transformation, the actual location of the workers on the floor is calculated based on precalibrated reference points and their location in the image. Finally, the on-site monitoring data are integrated into a 3-D management platform, which calculates and visualizes the spatial collision risks between the crane and the workers in real time. Results: A case study from a steel structure construction project in Xi'an was used to test the accuracy and feasibility of the proposed method. 
The test results showed that the monitoring errors for the medium-risk and high-risk crane workspaces were ±2.600 and ±2.611 m, respectively. The accuracy of the YOLO11-OBB model for worker localization and recognition was 0.968, with a recall rate of 0.969, a mean average precision at intersection over union (IoU) 50% (mAP50) of 0.980, and a mean average precision at IoU 50%-95% (mAP50-95) of 0.864. The mean absolute error, mean relative error, and root mean square error for the calculation of the actual locations of the workers were 67.44 mm, 4.32%, and 86.16 mm, respectively. During the 9-month monitoring of the site, the frequency of workers entering high-risk areas showed a fluctuating decline, demonstrating the feasibility of the method in enhancing safety warnings on construction sites. Conclusions: This study presents a spatial collision monitoring method for crane and worker interaction in steel structure construction scenarios, including the definition and monitoring of crane hazardous workspaces and worker locations. A 3-D visualization platform is used to monitor the collision situation between cranes and workers. The case study indicates that, using only regular cameras and without requiring workers to wear additional monitoring devices, the method can effectively ensure the accuracy of worker localization within a distance of 20 m from the cameras. The method also enables differentiated responses to crane operations in spaces with varying risk levels, allowing managers to intuitively view the spatial collision status of the crane and workers through the 3-D model on-site.
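The perspective transformation that maps a worker's image location to floor coordinates from precalibrated reference points can be sketched as a direct linear transform. The calibration points below are hypothetical, and this mirrors the paper's perspective-transformation step in spirit only:

```python
import numpy as np

def fit_homography(img_pts, floor_pts):
    """Estimate the 3x3 perspective transform from >=4 precalibrated reference
    points (image pixels -> floor metres) via the direct linear transform:
    each correspondence contributes two rows of the homogeneous system A h = 0."""
    A = []
    for (u, v), (x, y) in zip(img_pts, floor_pts):
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    _, _, vt = np.linalg.svd(np.asarray(A, float))
    return vt[-1].reshape(3, 3)          # null-space vector = homography entries

def to_floor(H, u, v):
    """Map one detected image point to floor coordinates."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

# Hypothetical calibration: image corners of a 4 m x 3 m floor patch.
img = [(100, 400), (500, 420), (520, 100), (80, 90)]
floor = [(0, 0), (4, 0), (4, 3), (0, 3)]
H = fit_homography(img, floor)
```

A detected worker's bounding-box foot point would then be passed through `to_floor` to obtain real-world coordinates for the collision check.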

  • Mintang LIU, Lei LEI, Jing ZHENG, Zhonghang ZHAO, Qian CAO
    Journal of Tsinghua University(Science and Technology). 2025, 65(2): 233-248. https://doi.org/10.16511/j.cnki.qhdxxb.2024.21.040
    Abstract (276) PDF (96) HTML (26)   Knowledge map   Save

    Significance: The tribological properties of ceramic materials are crucial for the long-term reliability of ceramic components. Understanding the friction and wear mechanisms of ceramics is essential for designing, optimizing, and improving their operating performance. Numerical simulation methods, because of their low cost and high efficiency, are valuable for analyzing tribological behavior. They allow for real-time analysis of stress, temperature, cracks, and molecular motion during friction and wear. These capabilities make numerical simulations a widely discussed approach in tribology research. However, most simulation studies on the tribological behavior of ceramics remain fragmented and lack systematic synthesis. Progress: This paper categorizes numerical simulations of ceramic tribological behavior into three main methods: the finite element method (FEM), molecular dynamics (MD), and the discrete element method (DEM). The applicable scenarios, research status, and limitations of each method are reviewed. FEM uses mathematical approximations to solve differential equations, simulating real-world physical systems. Initially, it was applied to study elastic stress distribution on ceramic surfaces during friction, serving primarily as an experimental support tool. Over time, FEM has advanced to incorporate surface fracture analysis, thermomechanical coupling, and wear modeling. Recent developments allow FEM to investigate subsurface crack initiation, crack propagation, and temperature distribution at friction interfaces under high-stress conditions, such as those in ceramic cutting tools and machining. Furthermore, FEM-based wear models can quantitatively estimate the wear volume of ceramic surfaces; however, they are highly dependent on experimental data, limiting their general applicability.
MD simulations, based on Newton's laws of motion, track the trajectories of atoms and molecules during ceramic friction and wear processes by modeling interatomic interactions. This method provides a detailed view of the microfriction and wear mechanisms in ceramics. However, current research is primarily focused on SiC ceramics, with limited research on other ceramics. DEM simulations model ceramics as a collection of discrete elements and predict their tribological behavior based on interactions between these elements. This approach overcomes the continuous medium assumption and provides insights into microcrack initiation and propagation during ceramic friction and wear. However, its application is limited, primarily focusing on ceramic cutting tools and grinding wheels. Conclusions and Prospects: Numerical simulations are crucial for understanding the tribological behavior and mechanisms of ceramic materials and components. While their use is increasingly widespread, existing studies often focus on specific scales and boundary conditions, hindering a comprehensive understanding of the tribological mechanisms of ceramics. Moreover, a single numerical simulation method cannot completely account for the complex physical and chemical boundary conditions involved. Therefore, the development of multiscale, multifield simulation methods is essential. Additionally, tribological information methods based on machine learning and artificial intelligence can enhance data correlations, improve empirical parameter exploration, and accelerate numerical simulations with approximate calculations. Integrating these advanced techniques with traditional numerical methods can create more efficient and innovative computational tools for ceramic tribology.

  • Special Section: Public Safety
    Haobo ZHANG, Kejun LI, Peng CHEN, Nan JIA
    Journal of Tsinghua University(Science and Technology). 2025, 65(1): 186-199. https://doi.org/10.16511/j.cnki.qhdxxb.2024.22.055
    Abstract (271) PDF (99) HTML (17)   Knowledge map   Save

    Objective: Hot events often lead to rampant online rumor spread. To prevent the incitement of public sentiment and the exacerbation of social contradictions, government departments must conduct timely and accurate situation assessment and response efficiency analysis before the outbreak of an online rumor crisis. In this regard, this paper investigates the trigger mechanism and congestion effects of online rumor crises. Methods: By analyzing the evolution system of online rumors, a model for the trigger mechanism and congestion effects of online rumor crises is constructed using the improved susceptible exposed infectious recovered (SEIR) model and the stochastic Petri net (SPN). The constructed trigger model, SE(ER)IR-SPN, is refined by delineating the involved latent population group into exaggerators and rational spreaders. The equilibrium system state and precise trigger timing are obtained by analyzing transmission equilibrium points, trigger thresholds, and the density change trends of different characteristic groups. The congestion effects of emergency responses to crisis events after the outbreak of rumors are analyzed based on the busy rates of places and the utilization rates of transitions. Finally, the model applicability is verified using a medical and health event in City A as a case study. Results: The research indicates that the SE(ER)IR-SPN model can detect high-risk online rumor events early, providing decision support for government departments during the disposal phase based on the busy rates of places and the utilization rates of transitions. The model effectively captures the dynamics of rumor spread and the subsequent congestion effects in emergency response processes. Conclusions: The SE(ER)IR-SPN model is a valuable tool for the early identification of online rumor crises, enabling government departments to make informed decisions during the disposal phase.
Detailed analysis of the model components, including the busy rates of places and the utilization rates of transitions, offers insights into the optimization of emergency response workflows. The case study considered herein confirms the practical utility of the model, highlighting the potential for broad application in managing and mitigating the impact of online rumor crises.
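The improved SEIR dynamics, with the latent group split into exaggerators and rational spreaders, can be sketched as a simple Euler integration. All rate constants below are illustrative, not the paper's calibrated values, and the SPN congestion layer is omitted:

```python
def se_er_ir_step(state, params, dt=0.01):
    """One Euler step of a simplified SE(ER)IR system in which the latent (E)
    group is split into exaggerators (Ee) and rational spreaders (Er).
    Rate constants are illustrative, not the paper's calibrated values."""
    S, Ee, Er, I, R = state
    beta, p, k_e, k_r, gamma = params
    new_exposed = beta * S * I
    dS = -new_exposed
    dEe = p * new_exposed - k_e * Ee           # share p become exaggerators
    dEr = (1 - p) * new_exposed - k_r * Er     # the rest spread rationally
    dI = k_e * Ee + k_r * Er - gamma * I       # both latent groups feed spreaders
    dR = gamma * I                             # spreaders eventually recover
    return [x + dt * dx for x, dx in zip(state, (dS, dEe, dEr, dI, dR))]

state = [0.99, 0.0, 0.0, 0.01, 0.0]            # group densities sum to 1
params = (0.8, 0.6, 0.3, 0.1, 0.2)             # (beta, p, k_e, k_r, gamma)
for _ in range(1000):                          # integrate 10 time units
    state = se_er_ir_step(state, params)
```

Tracking the density trajectories of the two latent subgroups against a trigger threshold is what lets the model flag the crisis outbreak time early.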

  • CIVIL ENGINEERING
    ZHOU Jiaxing, WANG Jin-an, LI Fei
    Journal of Tsinghua University(Science and Technology). 2024, 64(12): 2166-2176. https://doi.org/10.16511/j.cnki.qhdxxb.2024.21.018
    Abstract (271) PDF (100) HTML (0)   Knowledge map   Save
    [Objective] As the depth of coal mining increases, the activation of discontinuous structures, such as faults, poses a significant risk to the safe and efficient mining of coal seams. Therefore, acquiring precise knowledge of the distribution of in-situ stress is paramount for the design, construction, and disaster prevention of mining engineering. [Methods] This study proposes an inversion method for in-situ stress fields applicable to discontinuous zones of deep coal seams. (1) Given the discontinuity characteristics of the deep in-situ stress field, stability discriminant equations for normal, reverse, and strike-slip fault zones are derived based on the lateral pressure coefficients of in-situ stress. (2) A long short-term memory neural network algorithm is adopted to sequentially learn the in-situ stress field data formed in different periods to effectively solve the nonlinearity, discreteness, and multi-noise problems of the measured deep in-situ stress data, ensuring that high-quality in-situ stress information is retained over long sequences and that low-quality information is discarded in time. [Results] This study considers the main and auxiliary well areas of Yingjun's second mining area in Shanghai Miao as the research background and establishes a long short-term memory neural network model. Given the distribution characteristics of the in-situ stress field in fault areas at different scales, an inversion calculation of the in-situ stress field in discontinuous areas of deep coal seams was conducted. [Conclusions] The correlation coefficient between the inverted and measured stress fields was 0.945, with an average error of 12.897%. The standard deviation of the stress difference is 2.000. The magnitude and direction of the regional stress field near the faults also change.
Compared with the regional stress field, the in-situ stress field in the DF15 and SF15 large-scale fault zones is approximately 5 MPa lower, and its direction is deflected counterclockwise. The in-situ stress in the surrounding rock at the top and bottom of the coal seams adjacent to the DF15 and SF15 large-scale fault zones is relatively small, and no stress concentration area is detected. The overlying No. 8 coal seam was tilted toward the horizontal in-situ stress by extrusion during deposition, and a concentration of in-situ stress was detected in the top rock. Therefore, for deep coal seam mining in the well field, the protective-layer mining method can be adopted, i.e., the lower No. 15 coal seam can be mined first to provide protection and pressure relief for mining the upper No. 8 coal seam, ensuring the safety and reliability of deep mining in the well field. The distribution of the horizontal in-situ stress field is mainly controlled by folds: the in-situ stress of the rock mass in the axial part of the anticline and on its inner arc increases, while that on the outer arc of the syncline is relatively small. Therefore, the inversion method proposed in this study can offer a new perspective for reconstructing stress fields in deep discontinuous areas.
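The sequential learning scheme described in the abstract, retaining informative stress measurements in long-term memory while forgetting noisy ones, is exactly what an LSTM's gates implement. The sketch below shows a single LSTM cell step in NumPy; the dimensions, random weights, and input sequence are illustrative placeholders, not the paper's actual configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x, h_prev, c_prev, W, U, b):
    """One LSTM step. W: (4H, D), U: (4H, H), b: (4H,).
    Gate order: forget, input, candidate, output."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    f = sigmoid(z[0:H])        # forget gate: discards noisy stress data
    i = sigmoid(z[H:2*H])      # input gate: admits informative data
    g = np.tanh(z[2*H:3*H])    # candidate cell state
    o = sigmoid(z[3*H:4*H])    # output gate
    c = f * c_prev + i * g     # long-term memory update
    h = o * np.tanh(c)         # hidden state
    return h, c

rng = np.random.default_rng(0)
D, H = 3, 8                    # e.g. 3 stress components per epoch, 8 hidden units
W = rng.normal(0, 0.1, (4*H, D))
U = rng.normal(0, 0.1, (4*H, H))
b = np.zeros(4*H)

h, c = np.zeros(H), np.zeros(H)
for t in range(5):             # 5 sequential "periods" of stress measurements
    x = rng.normal(size=D)
    h, c = lstm_cell(x, h, c, W, U, b)
```

In practice the final hidden state would feed a regression head that maps to boundary stress parameters; that head and the training loop are omitted here.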
  • MECHANICAL ENGINEERING
    YAN Bo, HU Jinchun, ZHU Yu, WEN Tingrui, XU Dengfeng
    Journal of Tsinghua University(Science and Technology). 2024, 64(12): 2059-2067. https://doi.org/10.16511/j.cnki.qhdxxb.2024.21.025
    [Objective] Accurate pose measurement is crucial for precise motion control in multi-degree-of-freedom motors. Traditional methods for pose measurement often rely on external sensors such as encoders, inertial measurement units, and optical sensors, which can be complex and less reliable. This paper introduces a three-degree-of-freedom pose measurement technique that uses redundant magnetic field information. By utilizing the motor's intrinsic magnetic field, this approach aims to simplify the system design and improve its robustness. [Methods] The proposed method employs an array of redundant magnetic sensors to detect the magnetic field generated by the motor's rotor permanent magnets. A measurement algorithm is developed to quickly calculate the motor's multi-degree-of-freedom pose from the detected magnetic flux density data. In addition, a calibration algorithm is introduced to ensure real-time, precise measurement of the motor's pose by accurately aligning the magnetic field model parameters with the actual magnetic field generated by the motor. This calibration process leverages simulation data and a linear magnetic field model to achieve accurate alignment. To further enhance accuracy, spatial transformation matrices are used to construct a mathematical model of the magnetic sensor signals. This allows the system to effectively map the detected magnetic field information to the motor's pose. The Gauss-Newton method is then employed to solve the overdetermined equations arising from the redundant information provided by multiple sensors, and multi-degree-of-freedom pose measurement and position parameter calibration are realized. [Results] A significant advantage of this method is its reliance on the motor's intrinsic magnetic field for pose measurement. This intrinsic measurement approach simplifies the overall system structure and enhances its robustness. 
By measuring the pose based on the motor's magnetic field, the system can simultaneously determine the installation position of the sensor array and calibrate the parameter offsets of the permanent magnets. This dual functionality is achieved without any special requirements for the sensor arrangement, providing versatility and adaptability to different motor configurations. In addition, this method is less susceptible to external environmental factors such as vibrational noise and dust contamination. Traditional external sensors can be significantly affected by such factors, resulting in measurement errors and reduced reliability. In contrast, the proposed method provides a more reliable and precise measurement system that can be easily integrated into various motor configurations and withstand demanding environmental conditions. [Conclusions] To validate the proposed method, experiments were conducted on a self-built physical system that included the Real-Time eXtension (RTX) operating system, a Windows software platform, a magnetic sensor array, and an FPGA hardware platform. These experiments covered permanent magnet parameter offset calibration, stability tests, and comparative performance evaluations. The results from both simulations and experiments show that the proposed method can rapidly and accurately measure the pose of a three-degree-of-freedom rotary motor. The root-mean-square errors in orientation measurement for the three rotational axes were 0.031, 0.025, and 0.056 rad, respectively, with a resolution of approximately 1.77×10⁻⁴ rad. These results confirm the method's capability to provide high-precision pose measurements, positioning it as a promising solution for advanced motion control applications in multi-degree-of-freedom motors.
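The Gauss-Newton solution of the overdetermined equations from redundant sensors can be sketched generically. The toy problem below recovers a two-parameter "pose" from six redundant nonlinear sensor readings; the sensor model, matrix values, and starting point are invented for illustration and are not the paper's magnetic field model.

```python
import numpy as np

def gauss_newton(residual, p0, iters=20, eps=1e-6):
    """Minimise ||residual(p)||^2 via Gauss-Newton with a numerical Jacobian."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = residual(p)
        J = np.empty((r.size, p.size))
        for j in range(p.size):                 # forward-difference Jacobian
            dp = np.zeros_like(p); dp[j] = eps
            J[:, j] = (residual(p + dp) - r) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)  # least-squares step
        p = p + step
        if np.linalg.norm(step) < 1e-10:
            break
    return p

# Toy redundant-sensor problem: 6 sensors observe a nonlinear function of a
# 2-parameter "pose"; the 6x2 system is overdetermined, as in the paper.
true_p = np.array([0.4, -0.2])
A = np.array([[1.0, 0.5], [0.3, 1.2], [0.9, -0.4],
              [0.2, 0.8], [1.1, 0.1], [0.6, 0.7]])
meas = np.sin(A @ true_p)                       # noiseless sensor readings

p_hat = gauss_newton(lambda p: np.sin(A @ p) - meas, p0=[0.0, 0.0])
```

With noiseless readings the iteration converges to the exact pose; with noisy readings it returns the least-squares estimate, which is why redundancy improves robustness.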
  • MECHANICAL ENGINEERING
    LI Zheng, ZHENG Shigang, DANG Xiaoyong, JI Wen, LIU Qu, CAI Zhipeng
    Journal of Tsinghua University(Science and Technology). 2024, 64(12): 2084-2091. https://doi.org/10.16511/j.cnki.qhdxxb.2024.22.030
[Objective] The primary objective of this research is to meticulously examine how pulse magnetic field assisted deep cryogenic (MDC) treatment affects the transformation and stabilization of retained austenite in Cr4Mo4V bearing steel. This study aims to elucidate the underlying mechanisms by which the pulse magnetic field influences the microstructural changes in bearing steel, particularly focusing on the stabilization of retained austenite, which plays a crucial role in determining the mechanical properties and overall performance of the steel. [Methods] To achieve a comprehensive understanding of how retained austenite transformed under various treatment conditions, this study utilized several material characterization techniques, including X-ray diffraction (XRD), vibrating sample magnetometry (VSM), and electron backscatter diffraction (EBSD). The EBSD analysis allows a detailed comparison of variations in dislocation density among samples processed under different conditions. For comparative analysis, the experimental setup was divided into two distinct treatment processes: the conventional deep cryogenic (DC) treatment and the MDC treatment. Following these treatments, the samples were subjected to high-temperature tempering to evaluate the thermal stability of the retained austenite. [Results] The XRD analysis revealed a reduction in the volume fraction of retained austenite from (23.8±0.6)% to (21.5±0.9)% following the DC process. A relatively smaller reduction, to (22.5±0.5)%, was observed with the MDC process. These results, supported by VSM and EBSD analyses, highlight the capacity of the pulse magnetic field to partially inhibit the transformation of retained austenite.
Further examination of the high-temperature stability of austenite in samples treated with DC and MDC revealed that the MDC samples demonstrated improved retention, maintaining 7.1% retained austenite after high-temperature tempering, compared with 4.9% in the DC-treated samples. This indicates that the retained austenite in Cr4Mo4V bearing steel exhibits improved high-temperature stability following the MDC process. Furthermore, the dislocation density analysis revealed that the DC process led to a 9.8% increase in dislocation density, whereas the MDC process moderated this increase to only 6.5%. This difference suggests that the magnetic field inhibits dislocation diffusion, which reduces martensite nucleation sites and thereby stabilizes retained austenite. The change in dislocation density after high-temperature tempering of the DC- and MDC-treated samples validates this point: the dislocation density in the DC-treated samples was approximately 1.23×10¹⁵ m⁻², while it decreased to 1.13×10¹⁵ m⁻² in the MDC-treated samples. The dislocation density change reflects the extent of phase transformation. [Conclusions] This study provides a thorough analysis that clearly demonstrates the significant impact of applying a pulse magnetic field during deep cryogenic treatment on the microstructural evolution of Cr4Mo4V bearing steel. The magnetic field not only moderates the increase in dislocation density but also enhances the mobility of dislocations. This contributes to the stabilization of retained austenite, which is crucial for improving the mechanical properties and performance of bearing steel. The findings lay a solid foundation for optimizing heat treatment processes using magnetic field assisted deep cryogenic treatment.
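Retained-austenite volume fractions such as those above are commonly derived from integrated XRD peak intensities by the direct-comparison method (in the style of ASTM E975). The sketch below applies that standard single-peak formula with purely illustrative intensities and theoretical R factors; the paper's exact measurement procedure is not detailed here.

```python
def austenite_fraction(I_gamma, I_alpha, R_gamma, R_alpha):
    """Direct-comparison estimate of the retained austenite volume fraction.

    I_gamma, I_alpha: integrated intensities of an austenite and a martensite
    diffraction peak; R_gamma, R_alpha: their theoretical intensity factors.
    V_gamma = (I_g/R_g) / (I_g/R_g + I_a/R_a)
    """
    g = I_gamma / R_gamma
    a = I_alpha / R_alpha
    return g / (g + a)

# Illustrative numbers only; these are not measurements from the paper.
frac = austenite_fraction(I_gamma=120.0, I_alpha=450.0, R_gamma=35.0, R_alpha=50.0)
```

In practice several peak pairs are averaged and the result carries an uncertainty, which is why the fractions in the abstract are quoted as (value±error)%.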
  • Hydraulic Engineering
    Jiahong LIU, Mengxue ZHANG, Jia WANG, Chao MEI
    Journal of Tsinghua University(Science and Technology). 2025, 65(10): 1853-1867. https://doi.org/10.16511/j.cnki.qhdxxb.2025.21.038

[Objective] The frequency and intensity of global flood events are increasing, driven in large part by climate change and human activities. The key risk factors of hazard, exposure, and vulnerability are interconnected and collectively influence the occurrence and progression of flood disasters. Therefore, the relationships among these three factors need to be understood, and a comprehensive indicator system for the integrated assessment of flood risk needs to be developed. This study aims to analyze the spatiotemporal trends of global flood events from 1965 to 2023 and, based on the key risk factors of hazard, exposure, and vulnerability, reveal the spatiotemporal characteristics of flood risk, providing a scientific basis for flood prevention and disaster mitigation decision-making. [Methods] This study uses the Emergency Events Database, which includes global flood data, to analyze flood events from 1965 to 2023. A trend analysis of the global flood occurrence, affected population, and mortality per unit area over this period is conducted. Based on the affected population and mortality per unit area, floods on six continents are classified into light, moderate, and severe categories using the percentage method. Spatial analysis of the flood occurrence, affected population, and mortality per unit area was performed for each country, showing the spatial distribution and impact intensity of flood disasters in different regions. In addition, key risk indicators, such as geographic elevation, precipitation, population density, and urbanization rate, are selected to analyze the characteristics of flood risk: elevation and precipitation represent hazard, population density indicates exposure, and urbanization rate reflects vulnerability. Trend analysis of these indicators was performed for three distinct periods, i.e., 1965-1984, 1985-2004, and 2005-2023.
To examine the spatial trends of these indicators across countries over the entire study period, the Theil-Sen slope estimation method was employed. The entropy weight method was applied to calculate the weight of each risk indicator, and the flood risk values of the six continents from 1965 to 2023 were calculated. [Results] The main results are as follows: (1) From 1965 to 2023, global flood events show a fluctuating upward trend, although the affected population and number of deaths have shown a downward trend since the 1990s. At the continental level, floods occur most frequently in Asia, Africa, and South America, with totals of 2 322, 1 266, and 1 084 events, respectively. At the national level, Haiti experiences the highest frequency of flood events per unit area, with 23 events per 10⁴ km². Bangladesh has the highest total number of flood-affected people per unit area, with 27.1 million people per 10⁴ km², and the highest cumulative death toll, with 3 313 deaths per 10⁴ km². (2) Flood hazard, exposure, and vulnerability vary significantly across the six continents. Among the indicators, population density and precipitation show the greatest influence on flood risk, with weights of 0.33 and 0.30, respectively. From 1965 to 2023, an obvious regional variation in flood risk across the six continents is detected. The flood risk in Asia is significantly higher than that in other continents, with the flood risk values of both Asia and Africa showing a significant increase. By contrast, the flood risk value of South America decreased after 2010. Europe and North America show relatively low and stable flood risk values. Oceania exhibits the lowest flood risk values with significant fluctuations. [Conclusions] This study conducts not only a systematic analysis of global flood events over a long time series but also an analysis of the changes in risk indicators, such as precipitation, geographic elevation, population density, and urbanization rate, from 1965 to 2023.
Moreover, the relative impact of the different indicators is quantified, clarifying their respective contributions to flood risk, and the results further reveal how flood risk has evolved over the study period. The findings provide guidance and evidence to inform flood prevention planning and disaster response strategies. In the future, exposure changes based on population mobility and integrated adaptive capacity should be considered to reveal the dynamic characteristics of flood risk.
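The entropy weight calculation used to combine the risk indicators can be sketched as follows. The continent-by-indicator matrix is hypothetical, invented only to demonstrate the method; it does not reproduce the paper's data or its reported weights of 0.33 and 0.30.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: X is (n_samples, n_indicators), non-negative,
    already normalised so that larger means riskier. Returns one objective
    weight per indicator; indicators with more dispersion get more weight."""
    P = X / X.sum(axis=0)                       # column-wise proportion matrix
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    E = -plogp.sum(axis=0) / np.log(n)          # normalised entropy per indicator
    d = 1.0 - E                                 # degree of divergence
    return d / d.sum()                          # weights sum to 1

# Hypothetical continent-by-indicator matrix (elevation hazard, precipitation,
# population density, urbanization rate); values are illustrative only.
X = np.array([[0.2, 0.9, 0.8, 0.6],
              [0.4, 0.7, 0.5, 0.4],
              [0.3, 0.5, 0.2, 0.8],
              [0.6, 0.3, 0.1, 0.9],
              [0.5, 0.6, 0.3, 0.7],
              [0.1, 0.4, 0.05, 0.8]])
w = entropy_weights(X)
risk = X @ w                                    # weighted flood-risk score per row
```

The same weighted sum, evaluated per continent and per year, would yield the risk time series the Results section describes.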

  • Special Section: Public Safety
    Jia WANG, Mengyao HUANG, Shizhe JIA, Weixi WANG, Lei ZHANG, Xin PEI, Shifei SHEN
    Journal of Tsinghua University(Science and Technology). 2025, 65(1): 125-134. https://doi.org/10.16511/j.cnki.qhdxxb.2024.22.048

[Objective] Driving under the influence of drugs, or drugged driving, refers to operating a vehicle after consuming certain drugs, posing a significant risk to public safety. While international research on drugged driving is extensive, domestic studies are lacking. This paper aims to bridge this gap by reviewing international research progress and summarizing specific research directions and achievements to guide domestic research. [Methods] To thoroughly assess the research progress on drugged driving, data were collected from the Web of Science Core Collection database. The search used keywords such as "drug (medicine)" and "drive (driving)", limiting the research direction to "transportation" and the publication period to 1999-2023. A total of 264 research articles were gathered. The mapping knowledge domain (MKD) method was used to analyze the annual distribution, source publications, keyword co-occurrence, and other relevant aspects of the literature, providing specific insights into progress in drugged driving research. [Results] The results show that international research on drugged driving has been extensive and diverse since the 1990s. Qualitative and quantitative studies have explored various aspects of the issue, including the types of drugs affecting drivers, their impact on driving abilities, the risks associated with drugged driving, the prevalence of drugged driving, driver attitudes and perceptions, drug detection technologies, and relevant legislation. To promote governance and prevent drugged driving incidents in China, several projects need attention: classifying drugs that impair driving and understanding their pharmacological effects, developing drug detection technologies, conducting epidemiological investigations on the prevalence of drugged driving among drivers, and carrying out empirical analysis and legislative research on drugged driving cases.
[Conclusions] This paper employs structured network analysis methods to comprehensively review international research achievements in drugged driving over the past 30 years. The analysis of annual publication distribution, source publications, and keyword co-occurrence supplements existing literature reviews. This study offers valuable guidance for future research and governance strategies related to drugged driving in the domestic domain.
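The keyword co-occurrence analysis at the heart of the MKD method reduces to counting unordered keyword pairs across articles. A minimal sketch follows; the article keyword lists are invented stand-ins, not records from the 264-article corpus.

```python
from collections import Counter
from itertools import combinations

# Hypothetical per-article keyword lists standing in for the bibliographic records.
articles = [
    ["cannabis", "driving performance", "simulator"],
    ["cannabis", "crash risk", "epidemiology"],
    ["oral fluid", "drug detection", "roadside testing"],
    ["crash risk", "epidemiology", "cannabis"],
    ["legislation", "drug detection", "roadside testing"],
]

cooc = Counter()
for kws in articles:
    for a, b in combinations(sorted(set(kws)), 2):   # each unordered pair once
        cooc[(a, b)] += 1

top = cooc.most_common(3)          # strongest keyword associations
```

The resulting pair counts are exactly the edge weights of the co-occurrence network that tools such as CiteSpace or VOSviewer visualize.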

  • Research Article
    AN Ruinan, LIN Peng, WANG Xin, LI Zichang, LIU Yuanguang, HE Pinjie
    Journal of Tsinghua University(Science and Technology). 2025, 65(3): 446-454. https://doi.org/10.16511/j.cnki.qhdxxb.2025.26.016
[Objective] Intelligent on-demand ventilation is crucial for ensuring environmental safety in underground cavern groups and for the high-quality construction and development of hydropower projects. Ventilation systems for large underground cavern groups during construction frequently exhibit complex three-dimensional layouts, different air loads across regions, and dynamic demand under varying regulation conditions. [Methods] To achieve spatial node extraction, branch correlation decoupling, and stable joint adjustment of complex flow fields, this paper examines the development characteristics of fluids under construction ventilation in extensive spatial structures. It demonstrates the necessity of constructing a graph structure based on the ventilation flow characteristics for analyzing and adjusting ventilation system parameters. The regional modeling theory is discussed, detailing the principles and methods of node extraction for one-dimensional tube-bundle fluids (network) and three-dimensional spatial flow field elements (field): regions where fluid parameters change mainly along the principal airflow direction employ network node extraction, while regions with multi-directional, complex flow paths use the three-dimensional field node extraction method. Virtual branches address the network-field coupling problem using the nodal pressure approach, which treats the nodal pressure as the unknown variable and the airflow deviation as the assessment criterion. Nodes with known pressure values serve as reference nodes for solving the pressure at all network nodes, which are then assigned to the field simulation boundaries. By numerically simulating the three-dimensional spatial flow field, the virtual branch airflow rates are iteratively fed back into the air network calculation for a coupled solution.
This paper also introduces the node-property-edge triplet, which effectively reflects the structure, performance, and behavioral characteristics of nodes. Furthermore, to optimize the ventilation coordination efficiency, a hypergraph structure for joint adjustment, with edges as the analysis object, displays the coupling interactions between the ventilation branches and loops. Considering the joint adjustment sensitivity, an optimal resistance control method is proposed, which involves constructing target and response node sets, setting response efficiency constraints, and optimizing to form a ventilation adjustment plan. An intelligent ventilation coordination platform integrates the resistance control model of coupling interactions, including modules for network design, ventilation design, field-network integration, loop generation, and optimization analysis. Within this framework, the network design module is dedicated to reconstructing the physical model of the ventilation system, while the ventilation design and field-network integration modules are used to assign basic fluid characteristic parameters of ventilation to the established model. The loop generation and optimization analysis modules are employed for solving the overall wind network parameters, including air volume, air pressure, and wind resistance. [Results] The field-network coupling method using nodal pressure eliminated the need for loop identification and effectively addressed the interdependent coupling between network nodes and flow field boundaries. The intelligent ventilation coordination platform was integrated with online environmental monitoring devices to automatically gather critical ventilation environment parameters, thereby enabling real-time calculations of the ventilation system based on environmental monitoring data and providing 3D visualization and early warning capabilities. 
[Conclusions] The ventilation design parameters of an engineering project are used to implement targeted air volume control deployment. The integrated control system exhibits high responsiveness. On the premise that the air volume of each unit meets the threshold requirements, the air volume adjustment efficiency of the target unit and the overall stability of the air distribution network can always fulfill the specified requirements. The results indicate a timely and stable system response and can provide a reference for similar projects.
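The nodal pressure approach described in the Methods, with nodal pressures as unknowns and airflow imbalance as the convergence criterion, can be sketched on a toy four-node network. The topology, resistances, fixed pressures, and square-law branch model below are illustrative assumptions, not the project's actual ventilation network.

```python
import numpy as np

def branch_flow(dp, R):
    """Square-law branch model: Q = sign(dp) * sqrt(|dp| / R)."""
    return np.sign(dp) * np.sqrt(np.abs(dp) / R)

# Toy network: node 0 is the fan (pressure fixed at 1000 Pa), node 3 is the
# outlet (0 Pa); nodes 1 and 2 have unknown pressures. Branches are
# (from, to, resistance); all values are illustrative.
branches = [(0, 1, 2.0), (1, 2, 5.0), (1, 3, 8.0), (2, 3, 3.0)]
fixed = {0: 1000.0, 3: 0.0}
unknown = [1, 2]

def imbalance(p_unknown):
    """Net airflow into each unknown node; zero at the solution."""
    p = dict(fixed)
    p.update(zip(unknown, p_unknown))
    net = {n: 0.0 for n in unknown}
    for i, j, R in branches:
        q = branch_flow(p[i] - p[j], R)
        if i in net: net[i] -= q   # flow leaves node i
        if j in net: net[j] += q   # flow enters node j
    return np.array([net[n] for n in unknown])

# Damped Newton iteration on the nodal pressures with a numerical Jacobian.
p = np.array([500.0, 250.0])
for _ in range(50):
    f = imbalance(p)
    if np.max(np.abs(f)) < 1e-9:
        break
    J = np.empty((2, 2))
    for j in range(2):
        dp = np.zeros(2); dp[j] = 1e-4
        J[:, j] = (imbalance(p + dp) - f) / 1e-4
    p = p - 0.8 * np.linalg.solve(J, f)
```

In the paper's scheme, the solved nodal pressures would be handed to the CFD field boundaries and the virtual branch flows fed back, repeating this network solve inside the outer field-network coupling loop.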