
06 March 2026, Volume 66 Issue 3
    

    Space Launch Support Technology and Engineering Application
  • Zhiyong SHU, Tianxiang WANG, Gang LEI, Qiang CHEN, Hua QIAN, Wenqing LIANG
    Journal of Tsinghua University(Science and Technology). 2026, 66(3): 417-428. https://doi.org/10.16511/j.cnki.qhdxxb.2026.26.006

    Significance: With the rapid development of routine launches, deep space exploration, lunar and Mars missions, and commercial spaceflights, increasingly stringent demands have been placed on the safety, reliability, and efficiency of propulsion systems. Cryogenic propellants, such as liquid hydrogen, liquid oxygen, and liquid methane, have become core working fluids for next-generation launch vehicles and deep space missions owing to their high specific impulse and clean combustion products. However, under conditions of a high flow rate and rapid filling, severe water hammer and complex transient flow phenomena are likely to occur in propellant feed lines; these phenomena include large-amplitude pressure oscillations, rapid multiphase transitions, and strong coupling between thermal and structural fields. If uncontrolled, such nonequilibrium transient processes may induce pipeline vibrations, valve malfunctions, tank structural damage, or even catastrophic failures. Therefore, elucidating the mechanisms of water hammer during the rapid filling of cryogenic propellants is not only of great scientific importance but also an urgent requirement for ensuring spacecraft mission safety and improving the design of modern launch sites. Progress: In recent years, extensive research has been conducted worldwide on the transient flow characteristics of cryogenic propellant filling, leading to a series of significant advances. Theoretical modeling has evolved from traditional one-dimensional water hammer equations to multiphase flow models that incorporate thermodynamic nonequilibrium effects, phase-change dynamics, and interfacial heat transfer, thereby providing a more accurate representation of cryogenic operating conditions. 
Numerical simulation methods, including high-resolution finite-volume methods, multiscale direct numerical simulations, and coupled fluid-thermal-structural approaches, have been developed, enabling more accurate simulations of pressure oscillations, gas-liquid interface evolution, and heat transfer phenomena. In terms of experimental validation, given the stringent requirements during liquid hydrogen and liquid oxygen testing, researchers have employed substitution experiments with liquid nitrogen, liquid helium, and cryogenic simulation test facilities to obtain key transient data; some of these experimental results have already been successfully applied to assess the safety of rocket ground fuel systems. From an engineering perspective, organizations such as the National Aeronautics and Space Administration (NASA) and SpaceX have introduced energy dissipators, buffer devices, and active control strategies into next-generation launch site construction to effectively suppress local cavitation, flashing, and water hammer risks. Collectively, these efforts indicate that the research focus is gradually shifting from single-flow modeling to an integrated development paradigm, encompassing mechanistic understanding, model development, experimental validation, and engineering applications. Conclusions and Prospects: Despite these advances, critical challenges remain in the study of cryogenic propellant filling. First, the nonequilibrium nature of phase-change processes under cryogenic conditions still lacks a complete and unified quantitative description. Second, the mechanisms of fluid-thermal-structural multiphysics coupling are highly complex, and the current models show limitations in terms of boundary adaptability and predictive accuracy. Third, experimental validation remains limited, highlighting the urgent need for safe and controllable substitution methods and advanced testing platforms. 
Future research should establish a closed-loop framework of "mechanistic understanding-model development-experimental validation", promote the integration of high-fidelity multiphase flow models with multiscale simulation techniques, and leverage artificial intelligence and big data approaches to achieve intelligent prediction and real-time control of transient fueling processes. The coordinated advancement of theory, experimentation, and engineering practice in this field is essential to provide solid theoretical support and technological assurance for the safe implementation of next-generation routine launch operations and high-frequency mission scenarios.
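The classical one-dimensional water hammer model that the multiphase approaches described above generalize can be sketched with the frictionless method of characteristics (MOC). The scenario below (reservoir upstream, instantaneous valve closure downstream) and all parameters are hypothetical illustrations, not the filling conditions studied here.

```python
# Minimal sketch (hypothetical parameters): classical frictionless 1-D water
# hammer solved by the method of characteristics on a uniform grid with
# Courant number 1 (dt = dx / a). Reservoir upstream, instantaneous valve
# closure downstream.

def moc_water_hammer(n=20, steps=200, L=100.0, a=1000.0, V0=1.0, H0=50.0, g=9.81):
    """Return the piezometric head history at the closed valve (last node)."""
    B = a / g                       # characteristic impedance of the pipe
    H = [H0] * (n + 1)              # head at each node (steady, frictionless)
    V = [V0] * (n + 1)              # velocity at each node
    valve_head = []
    for _ in range(steps):
        Hn, Vn = H[:], V[:]
        for i in range(1, n):
            Cp = H[i - 1] + B * V[i - 1]      # C+ characteristic from node i-1
            Cm = H[i + 1] - B * V[i + 1]      # C- characteristic from node i+1
            Hn[i] = 0.5 * (Cp + Cm)
            Vn[i] = (Cp - Cm) / (2.0 * B)
        Hn[0] = H0                            # reservoir holds the head fixed
        Vn[0] = (H0 - (H[1] - B * V[1])) / B  # C- into the upstream node
        Vn[n] = 0.0                           # closed valve: no flow
        Hn[n] = H[n - 1] + B * V[n - 1]       # C+ into the valve node
        H, V = Hn, Vn
        valve_head.append(Hn[n])
    return valve_head
```

At the valve the head alternates between H0 ± aV0/g (the Joukowsky surge); the negative excursion would trigger cavitation in a real cryogenic line, which is precisely the unphysical artifact that motivates the cavitation- and phase-change-aware multiphase models discussed above.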

  • Yongyi ZHOU, Junlin LIU, Zhicheng ZHANG, Zhanpeng CUI
    Journal of Tsinghua University(Science and Technology). 2026, 66(3): 429-439. https://doi.org/10.16511/j.cnki.qhdxxb.2025.26.035

    Significance: With the expansion of deep space exploration endeavors and increasing demand for satellite constellation networking, the global aerospace industry is experiencing a strategic shift from singular payload launches toward a high-cadence and cost-effective transportation paradigm. Consequently, launch vehicle technology has exhibited a discernible trend toward heavy-lift capacity and reusability. To achieve the technical objective of obtaining substantial takeoff thrust, heavy-lift rocket primary engines typically employ a power architecture involving multiple engines operating in parallel. Heavy-lift rockets are poised to assume a pivotal role in forthcoming space transportation missions. As a fundamental component of a rocket launch system, the primary function of the deflector is to direct the controlled diffusion of high-temperature and high-velocity gas jets through a precisely engineered aerodynamic configuration. This mitigates aerodynamic forces exerted on the rocket body and payload resulting from shock wave reflection and attenuates the thermal loading induced by gas backflow. The deflector ensures the structural integrity of the rocket body. Consequently, future heavy-lift rocket launch missions present unprecedented technical challenges for launch site deflectors. These deflectors must achieve large-scale gas evacuation, high-reliability thermal protection, and rapid reusability capabilities in an extreme gas jet environment. The formulation of a deflector design amenable to future heavy-lift rockets necessitates the resolution of several key scientific questions and the circumvention of prevailing technical limitations. Progress: In the field of the flow characteristics of multinozzle gas jets, the multiphysical field coupling process is the main focus. The interaction between the jets and the free flow forms a complex flow structure comprising shear layers, recirculation vortices, and impingement-reversed flows. 
The pressure ratio of the combustion chamber to the ambient pressure has a decisive influence on the interaction mode of gas jets. The coupling of the reignition reaction and jet interaction can induce the formation of local high-temperature regions, and the entrainment effect of the vortex structure on high-temperature gas will intensify thermal erosion, which has important guiding significance for the design of thermal protection systems. In the field of the impingement flow of gas jets on inclined plates, the typical flow pattern under the jet-wall interaction is studied. The factors influencing the flow pattern include the impingement height, impingement angle, jet core Mach number, jet underexpansion ratio, and shock system structural parameters. In the field of flame deflectors, the basic configuration is discussed. The core parameters—impingement height, impingement angle, and deflection direction—are investigated, and the performance characteristics of different configurations of flame deflectors are compared. In the field of flame deflector thermal protection technology, two methods, namely active water-cooling technology and passive material thermal protection technology, are discussed. The cooling efficiency of water sprays is influenced by multiple parameters, such as the spray nozzle layout, spray angle, and mass flow rate. To cope with high-frequency and low-cost launch requirements, the rapid post-launch repair of launch facilities is also an important step in the research of thermal protection technology. Conclusions and Prospects: In terms of flow mechanism, the coupling effects of densely clustered nozzle gas jets should be analyzed, and the laws of the evolution of the jet mixing boundary layers and water spray mixing should be explored. For thermal protection technology, it is necessary to continue developing high-efficiency and low-cost materials for flame deflector thermal protection to improve the reusability and maintainability of diversion facilities.
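As a worked illustration of how the chamber-to-ambient pressure ratio sets the jet underexpansion ratio mentioned above, the following sketch uses ideal isentropic perfect-gas relations; the specific-heat ratio, area ratio, and pressures are hypothetical stand-ins, not data from the studies reviewed.

```python
# Hedged sketch under ideal isentropic perfect-gas assumptions; gamma = 1.2
# (typical order for hot combustion gas), the nozzle area ratio, and the
# pressures below are hypothetical illustrations.

import math

def area_ratio(M, gamma=1.2):
    """Isentropic area ratio A/A* at Mach number M."""
    t = 1.0 + (gamma - 1.0) / 2.0 * M * M
    return (1.0 / M) * (2.0 / (gamma + 1.0) * t) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))

def exit_mach(ar, gamma=1.2, lo=1.0, hi=20.0):
    """Supersonic root of the area-Mach relation, found by bisection."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if area_ratio(mid, gamma) < ar:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def underexpansion_ratio(p_chamber, p_ambient, ar, gamma=1.2):
    """Exit-to-ambient pressure ratio; > 1 means the jet is underexpanded."""
    Me = exit_mach(ar, gamma)
    p_exit = p_chamber / (1.0 + (gamma - 1.0) / 2.0 * Me * Me) ** (gamma / (gamma - 1.0))
    return p_exit / p_ambient
```

For example, with a chamber-to-ambient pressure ratio of 50 and an area ratio giving an exit Mach number near 2, the exit pressure exceeds the ambient pressure, so the jet leaves underexpanded and continues expanding through a shock-cell structure downstream of the nozzle.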

  • Ruotong WANG, Zhiyong SHU, Gang LEI, Wenqing LIANG, Qiang CHEN, Haiyang LIU, Xiaohong ZHENG, Hua QIAN
    Journal of Tsinghua University(Science and Technology). 2026, 66(3): 440-451. https://doi.org/10.16511/j.cnki.qhdxxb.2026.26.011

    Objective: Although liquid hydrogen provides high energy density and clean combustion, it poses significant safety risks due to its low boiling point, high diffusivity, and flammability. Therefore, addressing the safety challenges associated with large-scale liquid hydrogen leaks is essential for operational safety and risk management in hydrogen infrastructure, particularly in large-scale storage areas. This study investigates the dispersion characteristics of hydrogen vapor clouds and subsequent fire hazards following catastrophic leakage events, providing critical insights into safety implications under diverse environmental conditions. Methods: A numerical model based on the fire dynamics simulator (FDS)—an advanced computational fluid dynamics tool developed by the U.S. National Institute of Standards and Technology—was developed. The applicability and accuracy of FDS in hydrogen safety simulations were systematically validated through comparisons with experimental data, quantitative error analysis, and cross-validation using peer studies. The simulations examined leakage scenarios involving evaporation, vapor dispersion, and fire development, incorporating fluid dynamics, phase change, combustion, and thermal radiation processes. Environmental factors such as overhead obstructions and solar radiation were also considered. The leakage source was modeled as a liquid hydrogen storage tank located at the center of a large-scale storage area. The lateral dispersion behavior of the hydrogen cloud under varying wind speeds and its impact on structures located off the downwind direction were analyzed. Subsequently, three comparative scenarios were established: a baseline case, a case with overhead confinement, and one with solar radiation. The effects of ceiling obstructions and solar heating on hazardous exposure times and cloud dispersion were evaluated. 
    Moreover, the fire development and potential hazards in the vicinity of the storage area following immediate ignition after leakage were examined. Results: Simulation results indicated that: (1) Buildings located offset from the main wind direction were not affected by the lateral dispersion of the flammable hydrogen cloud. (2) The ceiling structure suppressed the vertical dispersion of the hydrogen cloud during the initial leakage phase and increased the horizontal transport rate, leading to a large gas-to-ground contact area and a prolonged hazardous exposure duration in near-field regions, which increased the hazardous exposure time at Building 2 by 5 s. (3) Solar radiation played a minimal role in the early-stage evaporation and dispersion of liquid hydrogen. Its thermal effect grew with time, prolonging the hazardous exposure time at Building 2 by 12 s through accelerated cloud dispersion, with a corresponding delay in far-field arrival due to enhanced dilution. (4) In the event of ignition, the liquid hydrogen fire developed extremely rapidly, reaching a flame height of 30 m and a lateral spread of 15 m within 9 s; the core flame temperature exceeded 700℃. In the simulated fire scenario, the thermal radiation intensity in the ceiling monitoring area consistently exceeded the safety threshold: it exceeded 12.5 kW/m² for over 70% of the total simulation time, potentially causing structural damage such as steel failure and concrete spalling. Conclusions: Quantitative studies on overhead confinement and solar radiation provide new insights into the potential risks of hydrogen facilities in complex environments, while research on the dynamic characteristics of accidental fires offers valuable information for the prevention of daily fires and the development of appropriate strategies to suppress fires.
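A post-processing step like the exceedance statistic quoted above (thermal radiation beyond 12.5 kW/m² for over 70% of the simulation time) can be computed from a monitored flux time series as follows; the sampling scheme and readings are illustrative, not FDS output.

```python
# Illustrative post-processing (made-up samples, not FDS output): fraction of
# simulated time during which the incident heat flux at a monitoring point
# exceeds a structural-damage threshold of 12.5 kW/m^2.

def exceedance_fraction(times, flux, threshold=12.5):
    """times: sample instants [s]; flux: heat flux at each instant [kW/m^2].
    Each sample interval is weighted by its duration (left-endpoint rule)."""
    total = times[-1] - times[0]
    over = 0.0
    for i in range(len(times) - 1):
        if flux[i] > threshold:
            over += times[i + 1] - times[i]
    return over / total
```

For instance, a trace that sits above the threshold in three of four one-second intervals yields an exceedance fraction of 0.75.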

  • Ke HUANG, Qiang DONG, Yuanqing XIA, Xin XIE, Qiang CHEN, Cheng GU
    Journal of Tsinghua University(Science and Technology). 2026, 66(3): 452-462. https://doi.org/10.16511/j.cnki.qhdxxb.2026.26.017

    Objective: The low-gravity simulation test platform serves as a pivotal and indispensable component of facilities dedicated to spacecraft landing and takeoff verification. Considering the rapid diversification and increasing complexity of modern aerospace missions—including crewed lunar landings, Mars sample return, and deep-space exploration—the need for in-depth research on digital design methodologies for low-gravity simulation systems tailored for manned lunar modules has reached an unprecedented level. Equally critical is the construction of ultralarge-load low-gravity simulation test facilities capable of meeting the stringent technical demands of next-generation heavy-duty spacecraft. However, a prominent theoretical bottleneck persists in the current research landscape: a complete and accurate theoretical description of the force-displacement characteristics of core mechanisms, such as parallel cable drives in low-gravity simulation test platforms, remains elusive. This bottleneck limits the understanding of key performance parameters and characteristics of parallel cable drive systems. Examples include torque transmission efficiency, displacement response characteristics, and the intricate correlations of performance parameters with environmental interference factors (e.g., temperature fluctuations and ground vibrations) and variable load conditions. Consequently, practical guidance for engineering applications remains limited and fragmented. This hinders further improvement in position and force control accuracy during spacecraft low-gravity simulation tests and in the precise design of future ultralarge-load low-gravity simulation test platforms, such as those required for landing and takeoff validation of manned lunar modules. 
    Methods: To address major national strategic needs, such as the construction of specialized low-gravity simulation test platforms for manned lunar modules, this study focused on the core technical challenge that the force-displacement characteristics of parallel cable drive systems in low-gravity simulation test platforms are difficult to predict and control. First, a rigorous, systematic force-displacement characteristic model of the parallel cable drive system of a three-dimensional servo platform was established, covering the full effective workspace. This model incorporated key influencing factors, including cable elasticity, geometric layout constraints, and dynamic coupling between the servo platform and the payload. Furthermore, a high-fidelity model for simulating the multibody dynamics of the parallel cable drive system was developed using MATLAB/Simulink. The simulation model reproduced the physical model of the parallel cable drive system with a strict 1:1 ratio and was composed of 18 independent cable drive mechanisms. These mechanisms were categorized into three subsystems: upper, middle, and lower diagonal cable systems, each responsible for controlling specific degrees of freedom of the servo platform. Results: Using the established simulation model, researchers could accurately simulate real three-dimensional working scenarios related to typical spacecraft operating conditions, such as high-dynamic landing with variable impact loads and ignition takeoff with thrust vector control. The model enabled comprehensive quantitative analysis of multiple critical performance indicators, including the real-time motor motion state (position, velocity, acceleration, and torque output) of the parallel cable drive system, dynamic motion trajectories and tension distribution characteristics of each cable in the parallel cable drive system, and directional stiffness of the cables at any arbitrary position within the workspace. 
Conclusions: This study fulfills two major objectives. First, it enables rapid identification of the root cause of reduced positioning and speed control accuracy in lunar surface detectors during low-gravity simulation tests. This, in turn, provides targeted technical guidance for the detectors to conduct a full range of comprehensive verification tests—such as hovering stability, obstacle avoidance maneuvering, and controlled slow descent—on the low-gravity simulation test platform. Second, the established theoretical model and simulation framework offer a solid theoretical reference and technical support for the design of future ultralarge-load low-gravity simulation test platforms, such as those intended for manned lunar modules. This lays the foundation for the successful implementation of subsequent deep-space exploration missions.
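As a much-simplified illustration of how cable geometry shapes the directional stiffness analyzed above, the sketch below projects each taut cable's axial stiffness EA/L onto a query direction for a single attachment point; the paper's 18-cable, multi-degree-of-freedom model is far richer, and all geometry and stiffness values here are hypothetical.

```python
# Much-simplified sketch (hypothetical geometry and EA; a single attachment
# point rather than the paper's 18-cable platform): directional stiffness as
# the sum of each taut cable's axial stiffness EA/L projected onto d.

import math

def directional_stiffness(anchors, platform, EA, d):
    """anchors: (x, y, z) cable exit points; platform: common attachment point;
    EA: cable axial rigidity [N]; d: direction to query (any nonzero vector)."""
    norm = math.sqrt(sum(c * c for c in d))
    d = [c / norm for c in d]
    k = 0.0
    for a in anchors:
        vec = [a[i] - platform[i] for i in range(3)]
        L = math.sqrt(sum(c * c for c in vec))
        u = [c / L for c in vec]                  # unit vector along the cable
        cos = sum(u[i] * d[i] for i in range(3))  # alignment with query direction
        k += (EA / L) * cos * cos                 # axial-only contribution of a taut cable
    return k
```

A symmetric four-cable suspension, for example, contributes half of each cable's axial stiffness in the vertical direction when every cable is inclined at 45°, which shows why cable layout angles dominate the stiffness map over the workspace.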

  • Junwei WANG, Mingxin SHAO, Chun LEI, Qiang CHEN
    Journal of Tsinghua University(Science and Technology). 2026, 66(3): 463-471. https://doi.org/10.16511/j.cnki.qhdxxb.2025.26.051

    Objective: With the rapid proliferation of spacecraft, on-orbit satellite resources have become highly congested, resulting in increasingly scarce and non-repeatable experimental windows. This bottleneck severely impedes the iterative validation of emerging optical payloads, synthetic aperture radar (SAR) architectures, and interference countermeasure technologies. To address this challenge, this paper proposes the use of an unmanned aerial vehicle (UAV) system as a controllable and functionally equivalent surrogate for a satellite. By developing a high-fidelity UAV trajectory emulation framework, the study systematically evaluates the UAV's substitutability in three representative mission scenarios (optical imaging, SAR imaging, and SAR jamming), thus providing a low-cost, repeatable, and risk-controllable ground-air integrated validation pathway for future satellite experiments and enabling a rapid-prototyping technology loop for forthcoming space missions. Methods: The proposed methodology comprises the following components: (1) Trajectory feasibility analysis: a straight-line emulated trajectory, combined with a piecewise constant velocity flight pattern, is configured to simulate a satellite orbit. Simulation results are used to optimize the UAV's flight path for improved fidelity and reliability. (2) UAV-based satellite trajectory planning: the two-line element (TLE) set of the target satellite is parsed to obtain orbital parameters. The satellite's azimuth-elevation profile relative to a ground-based jammer is derived and converted using an angle-equivalent mapping function into a sequence of UAV waypoints. This mapping accounts for airspace constraints, yielding a practical waypoint-generation protocol. (3) Joint imaging-jamming experiment: the UAV is equipped with optical and SAR payloads to measure discrepancies between its imagery and satellite benchmarks. 
    Simultaneously, the electromagnetic field intensity of SAR jamming signals at the UAV location is assessed to validate equivalence. Results: The optimized UAV trajectory enabled the aircraft to follow predefined routes with constant or piecewise variable speeds. The entry-point timing deviations remained within 2 s, and waypoint position errors stayed below 5.00 m, ensuring consistent alignment within the jammer-to-satellite antenna beam. This demonstrated the system's viability for small-area emulation with high fidelity and reliability. The experimental analysis confirmed that UAV-based satellite observation was technically feasible. The grayscale root-mean-square error between UAV imagery and satellite references remained below 5%, supporting the effectiveness of the emulation strategy. Conclusions: In the absence of an operational satellite, a rotary-wing UAV equipped with optical and SAR sensors can emulate a satellite's observational and jamming behavior when flown along a predefined trajectory. The piecewise constant velocity flight pattern allows the UAV to remain aligned along the satellite-target axis, with temporal deviations kept within acceptable limits despite minor speed variations. Following flight control optimization and electromagnetic hardening, the UAV-based emulation platform proves feasible for real-world deployment. Within a restricted operational zone, the UAV's onboard payloads generate spatially resolved data comparable to satellite outputs. Moreover, a co-tracking jammer can effectively intercept SAR signals at the UAV location, thereby achieving an equivalent satellite observation scenario to a significant extent.
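The angle-equivalent mapping idea (placing the UAV on the jammer-to-satellite line of sight at a shorter, airspace-compliant range) can be sketched as below. The ENU frame, the altitude ceiling, and the ranges are illustrative assumptions, not the paper's actual waypoint-generation protocol.

```python
# Hedged sketch of an angle-equivalent mapping: put the UAV on the same
# azimuth-elevation ray (as seen from the ground jammer) that the satellite
# occupies, at a short slant range. The 500 m ceiling and the East-North-Up
# frame are hypothetical illustrations.

import math

def angle_equivalent_waypoint(az_deg, el_deg, slant_range_m, max_alt_m=500.0):
    """Return an East-North-Up waypoint on the jammer-to-satellite ray.
    If the waypoint altitude exceeds max_alt_m, shrink the range so the UAV
    stays inside the permitted airspace while keeping the same angles."""
    az = math.radians(az_deg)
    el = math.radians(el_deg)
    up = slant_range_m * math.sin(el)
    if up > max_alt_m:                   # airspace ceiling constraint
        slant_range_m *= max_alt_m / up
        up = max_alt_m
    horiz = slant_range_m * math.cos(el)
    east = horiz * math.sin(az)          # azimuth measured clockwise from north
    north = horiz * math.cos(az)
    return east, north, up
```

Because only the range is rescaled, the azimuth and elevation seen by the jammer are unchanged, which is the property the angle-equivalent emulation relies on.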

  • Qiang CHEN, Yanwei LIANG, Zhao ZHANG, Nan PENG, Liqiang LIU, Gang LEI
    Journal of Tsinghua University(Science and Technology). 2026, 66(3): 472-482. https://doi.org/10.16511/j.cnki.qhdxxb.2025.26.037

    Objective: Liquid hydrogen, owing to its remarkably high specific energy and broad application prospects, has attracted widespread attention in research and development, particularly in aerospace propulsion and sustainable clean energy storage and transport. However, the practical large-scale use of liquid hydrogen faces notable and demanding safety challenges. Liquid hydrogen has an exceedingly low boiling point; once a leak occurs, it rapidly vaporizes to form a hydrogen cloud. At flammable concentrations, this cloud can ignite or even explode upon encountering a flame or a static spark, posing serious safety hazards. Therefore, an in-depth investigation of liquid hydrogen leakage, diffusion, and vaporization behavior in open environments is crucial for devising effective safety protection strategies and emergency-response mechanisms. Owing to the inherently high risk, technical complexity, and substantial economic cost of liquid hydrogen experiments, publicly available data on liquid hydrogen release remain extremely scarce, severely limiting further progress in this field. Methods: To address this research gap, a large-scale open-environment liquid hydrogen release experiment was executed, aiming to systematically obtain the diffusion characteristics and evolution process of the released liquid hydrogen. For this purpose, a comprehensive experimental platform was built, integrating a liquid hydrogen storage and release system, a sensor-array system, a remote monitoring and control system, and video and infrared observation systems. This platform collects key parameters in real time—including hydrogen volume fraction, temperature, wind speed, and wind direction—and records the hydrogen cloud evolution from multiple angles, ensuring thorough experimental data acquisition. 
By fully considering diverse environmental and operational parameters—such as wind speed, ambient temperature, release angle, ground-cover type, and release flow rate—the experiment completed 12 sets of valid release conditions. Results: Given the large data volume, this study focused on presenting the detailed results and analysis of the second release experiment. During this experiment, multiple high-definition cameras were placed on the ground, and gimbaled aerial cameras captured the real-time spatial distribution of the hydrogen cloud from multiple perspectives. Moreover, the sensor array recorded the dynamic changes in the hydrogen volume fraction at each measurement point. Based on these data, the liquid hydrogen release process was divided into four characteristic stages—ground spread, buoyant transition, stable plume, and cloud dissipation—and the evolution features of each stage were systematically summarized and analyzed. For the quantitative analysis, the sensor data were used to track the hydrogen cloud movement. Furthermore, this study employed the open-source computational fluid dynamics (CFD) code OpenFOAM to perform three-dimensional, transient numerical simulations under the second experimental condition. The deviations between the simulated results and experimental observations fell within acceptable tolerances. Conclusions: This study not only provides large-scale experimental data on liquid hydrogen leakage in open environments—offering valuable foundational data for subsequent research—but also provides robust theoretical support through advanced CFD simulations coupled with empirical experiments. The integrated findings hold considerable reference value and practical importance for advancing the safe, reliable, and sustainable development of the hydrogen energy industry worldwide.
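Sensor-array data like those described above are typically screened against hydrogen's flammable range in air (roughly 4-75 vol%). The following minimal sketch flags sensors inside that envelope; the sensor ids and readings are invented for illustration.

```python
# Minimal screening sketch; hydrogen's flammable range in air is roughly
# 4-75 vol%. Sensor ids and readings are made up for illustration.

LFL, UFL = 0.04, 0.75   # lower/upper flammability limits of H2 in air (vol. fraction)

def flammable_sensors(readings):
    """readings: dict mapping sensor id -> H2 volume fraction.
    Returns the sorted ids whose reading lies inside the flammable envelope."""
    return sorted(sid for sid, x in readings.items() if LFL <= x <= UFL)
```

In practice such a flag would be combined with the measurement-point coordinates to delimit the time-varying flammable cloud envelope tracked in the four-stage analysis above.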

  • Yi SUI, Hao WU, Yuheng WANG, He JIA
    Journal of Tsinghua University(Science and Technology). 2026, 66(3): 483-497. https://doi.org/10.16511/j.cnki.qhdxxb.2025.26.046

    Objective: Mars sample-return missions require high-precision ground testing of the Mars Ascent Vehicle thermal launch process to verify key technologies in a low-gravity environment. This task requires a simulation mechanism with high operating speed, large stroke, and heavy load, which aligns well with the advantages of cable-driven mechanisms. However, although dynamic error analysis is critical for the early-stage design of such cable-driven systems, it has not been fully discussed—specifically regarding trajectory accuracy under the influence of multiple time-varying error sources. This study aims to design a low-gravity ground test facility and develop a dynamic error calculation method for evaluating the facility's feasibility for Mars Ascent Vehicle thermal launch simulations. Methods: In this study, a test facility that is driven by five cables and equipped with a horizontal follow-up mechanism is presented. The system consists of a test tower, a fixture for inclined launch, a cable-suspension gravity unloading system, and a cable-driven thrust-simulation system. Kinematic and dynamic models are developed to depict the six-degree-of-freedom motion of the Mars Ascent Vehicle mockup. The dynamic error propagation model is obtained through first-order perturbation expansion around the nominal trajectory, considering displacement errors of the two horizontal platforms, cable force fluctuations, and external disturbances as error sources. A model-predictive approach based on linear time-varying system discretization and linear programming is used to determine dynamic error bounds for trajectory deviations. The dynamic error propagation model is verified using the error calculation results of the model under single and multiple error sources, with a multibody dynamics model established in Maplesim as a reference. 
Tests for a random sine wave error input are also conducted through simulation to verify the effectiveness of dynamic error bounds calculated by the proposed method. Results: Theoretical analysis showed that an approximately linear propagation relationship existed between the error sources and the Mars Ascent Vehicle trajectory error in the test facility, which enabled the decomposition of error analysis problems under complex error sources into individual analysis problems under single-error-source conditions. When each error source acted independently, the maximum displacement error for each degree of freedom during the Mars Ascent Vehicle thermal launch process mostly increased over time, which was consistent with the error accumulation effect. The effects of different error sources displayed significant magnitude variations. The simulation results further revealed the following: (1) Under the conditions of single- and multiple-error sources, the trajectory errors computed by the dynamic error propagation model agreed well with those from the dynamics simulations, which suggested the effectiveness of the model in characterizing the error transmission mechanisms of the test facility. (2) When random sine wave error sources within specified amplitude ranges were applied, all the Mars Ascent Vehicle trajectory errors remained within the theoretical error bounds predicted by the proposed method, which demonstrated its capability to produce reliable dynamic error range estimates for uncertain environments. Conclusions: This study offers a design scheme for cable-driven ground test facilities for spacecraft. The dynamic error analysis of the facility reveals its error propagation mechanism, which lays a foundation for its error synthesis, optimization of mechanical structures, and control method design. In addition, this research also develops a generalizable theoretical framework for dynamic error analysis in similar cable-driven parallel mechanisms.
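The worst-case bound idea behind such methods can be illustrated on a scalar linear time-varying error model: with each bounded source pushed to its limit in the sign of its influence coefficient (the linear-programming optimum under interval constraints), the extreme trajectory deviation is a weighted sum of the source bounds. The scalar dynamics below are a toy stand-in for the facility's multi-degree-of-freedom model.

```python
# Toy scalar stand-in for a discretized linear time-varying error model:
# e_{k+1} = a_k * e_k + b_k * w_k, e_0 = 0, with bounded sources |w_k| <= w_max.

def worst_case_error(a_seq, b_seq, w_max):
    """Largest possible |e_N|: each bounded source is set to its limit with the
    sign of its influence coefficient, which is the linear-programming optimum
    for an interval-constrained linear objective."""
    N = len(a_seq)
    bound = 0.0
    for k in range(N):
        phi = 1.0
        for j in range(k + 1, N):   # state-transition factor from step k+1 to N
            phi *= a_seq[j]
        bound += abs(phi * b_seq[k]) * w_max
    return bound
```

The near-linear propagation reported above is what licenses this superposition: the bound under multiple sources is the sum of the single-source bounds, mirroring the decomposition used in the study.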

  • Liqian ZHANG
    Journal of Tsinghua University(Science and Technology). 2026, 66(3): 498-509. https://doi.org/10.16511/j.cnki.qhdxxb.2025.26.041

    Objective: During a rocket launch, the rocket barrel generates a strong impact load on the runway, and the load-bearing performance of the runway needs to meet specific requirements. The most critical control indicator of this performance is the runway settlement value at the impact center. Earlier launch test data revealed that, if the runway settlement at the impact center can be controlled within 0.070 m, its impact on launch will be negligible. Accurately predicting the load-bearing performance of the runway pavement and selecting a launch pavement that meets the requirements have become prominent issues that need to be solved. Hence, studying the load-bearing capacity of the pavement under impact loads is of great significance. Methods: Existing analyses of the pavement impact dynamic response are mostly based on elastic or elastoplastic layered systems, focusing on the surface and base layers and simplifying the soil foundation as an elastic medium. However, one of the most important factors causing the settlement and deformation of a site is the insufficient bearing capacity of its soil foundation. The reasonable selection of the basic soil structure model and the sensitivity analysis of parameters are important in evaluating the feasibility of unsupported launch. This article proposes an accurate nonlinear numerical simulation model to study the settlement deformation law of pavement under impact loads, conducts sensitivity analysis of key parameters, identifies key factors controlling site deformation, provides key control indicators for site selection, proposes targeted pavement strengthening schemes, and conducts numerical verification. Results: This study explored the deformation behavior and bearing capacity of two typical low-grade pavements, namely cement concrete pavement and asphalt concrete pavement, under impact loads. The following four conclusions were drawn from this study. 
First, the most important factor affecting the settlement of the impact center was the strength of the soil foundation, and this settlement was mainly caused by the settlement deformation of the soil foundation. The settlement deformations of the surface and base layers had little effect. Second, for cement concrete pavement, a surface layer thickness of 15 cm, a base layer thickness of 20 cm, and a soil foundation rebound modulus of over 30 MPa could meet the impact settlement requirements. For soil foundations with a rebound modulus below 25 MPa, the settlement requirements could not be met. Third, for asphalt concrete pavement, a surface layer thickness of 7 cm, a base layer thickness of 20 cm, and a soil foundation rebound modulus of 35 MPa or above could meet the impact settlement requirements. For soil foundations with a rebound modulus below 30 MPa, the settlement requirements could not be met. Finally, the simple steel pad reinforcement effect in the pavement strengthening scheme was limited. Improving the stiffness of the steel pad itself was the key to controlling the impact settlement. The reinforcement effect of the 10 cm thick empty-box lattice steel pad was ideal, which could significantly reduce the impact settlement. Conclusions: Regardless of regional differences, the soil foundation rebound modulus of domestic medium- and high-grade pavements can generally reach 30 MPa. Even if there is a soft foundation, necessary foundation treatment measures are usually taken in the design stage to control post-construction settlement. Considering that after years of operation, the roadbed will be further consolidated and compacted, the soil foundation rebound modulus will subsequently increase. Therefore, medium- and high-grade pavements can meet the requirements of an impact load. 
Attention should be given to the situation of a soft foundation under the surface "hard shell" of a low-grade pavement, which needs to be confirmed through site investigation and testing (such as geophysical exploration). In case of an emergency launch, a steel pad reinforcement plan should be supplemented to ensure safety.

  • Yufei GU, Wenan ZHONG, Shaojiang CHEN, Wen YANG, Ke HUANG
    Journal of Tsinghua University(Science and Technology). 2026, 66(3): 510-518. https://doi.org/10.16511/j.cnki.qhdxxb.2025.26.42
    Abstract ( ) Download PDF ( ) HTML ( )   Knowledge map   Save

    Objective: High-density space launch operations require reliable environmental control in spacecraft testing facilities, where energy-intensive heating, ventilation, and air-conditioning (HVAC) systems account for more than 70.00% of the total power consumption in low-latitude launch sites. Conventional strategies based on proportional-integral-derivative (PID) control fail to address systemic inefficiencies, including frequent equipment switching, uncoordinated chiller and air handling unit (AHU) operations, and sensitivity to extreme outdoor conditions. This study aims to develop an integrated data-driven control framework that synergizes load prediction and dynamic optimization to achieve two goals: maintaining temperature stability within ±0.5℃ and relative humidity (RH) stability within ±3% and reducing annual HVAC energy consumption by at least 15.00% in large-volume industrial facilities exceeding 300 000 m3. To address the dual objectives of environmental stability and energy efficiency in green spaceport construction, this study proposes an advanced air-conditioning control framework integrating an improved long short-term memory (LSTM) prediction model with hybrid PID-model predictive control (MPC) algorithms. Methods: Focusing on the large-scale complex HVAC systems of spacecraft testing facilities, this study systematically tackles the high energy consumption caused by independent PID control strategies, frequent equipment switching, and insufficient coordination between chillers and AHUs. A crucial innovation lies in the development of an LSTM prediction model enhanced by temporal attention mechanisms to address the limitations of conventional LSTM models, such as overfitting, weak long-term dependency capture, and poor generalization. 
By analyzing six key load-influencing factors (i.e., ambient temperature, relative humidity, solar radiation, occupancy, equipment heat, and lighting) through Pearson correlation and weight allocation, the model prioritizes temperature (27% weight) and solar radiation (18% weight) as the dominant variables. Trained on 8 760 h of transient system simulation data covering seasonal and diurnal variations, the enhanced LSTM architecture incorporates eight hidden layers, six-dimensional inputs, and a 0.2 dropout rate, achieving exceptional prediction accuracy. Results: Comparative evaluations against baseline LSTM models showed significant improvements: the explained variance score increased from 0.999 0 to 0.999 5, the mean squared error decreased from 5.163 0 to 2.086 1, and the prediction accuracy within ±3 kW error bounds improved from 83.74% to 96.76%. The proposed model excelled in tracking irregular load fluctuations and long-term trends, as visualized in multiscale prediction-output comparisons. Building on these predictions, a PID-MPC hybrid control system was designed to synergize chiller plant optimization and AHU dynamic adjustments. The PID controller regulated chilled water supply parameters based on real-time load predictions, whereas the MPC module compensated for air-conditioning process delays and disturbances from volatile outdoor conditions. Field tests at a representative testing hall with a volume exceeding 300 000 m3 revealed that the integrated strategy achieved a 15.00% annual energy reduction compared with conventional PID controls, with monthly peak savings reaching 17.55%. The system mitigated issues such as frequent compressor cycling, uneven load distribution, and excessive reheating energy consumption, maintaining temperature and humidity within ±0.5℃ and ±3% RH thresholds, respectively. Conclusions: This study introduces three pivotal contributions to intelligent HVAC systems. 
First, the attention-augmented LSTM model establishes a new benchmark for industrial load forecasting, particularly in capturing nonlinear interactions between external weather and internal thermal loads. Second, the PID-MPC hybrid strategy effectively bridges the local actuator responsiveness and the global energy optimization, outperforming single-method approaches in terms of stability and efficiency. Third, validated in ultralarge spacecraft test facilities, the framework offers a replicable solution for comparable high-bay environments, such as semiconductor fabrication facilities or aerospace assembly plants. Practically, the 15.00% energy reduction means that launch sites can save hundreds of millions of kilowatt-hours of electricity annually, significantly advancing the development of green launch facilities.
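
    The model comparison above rests on three standard forecasting error metrics: explained variance, mean squared error, and the share of predictions within a ±3 kW band. A minimal sketch of how such metrics are computed (in Python, on synthetic data; the function name, peak band, and load curve are illustrative assumptions, not the paper's data or code) is:

    ```python
    import numpy as np

    def load_forecast_metrics(y_true, y_pred, band_kw=3.0):
        """Explained variance score, mean squared error, and the share of
        predictions falling within +/- band_kw of the true load."""
        err = y_true - y_pred
        ev = 1.0 - np.var(err) / np.var(y_true)          # explained variance score
        mse = float(np.mean(err ** 2))                   # mean squared error
        within = float(np.mean(np.abs(err) <= band_kw))  # fraction inside the band
        return ev, mse, within

    # Synthetic hourly load curve (illustrative only)
    rng = np.random.default_rng(0)
    y_true = 100.0 + 10.0 * np.sin(np.linspace(0.0, 6.0, 200))
    y_pred = y_true + rng.normal(0.0, 1.0, size=y_true.shape)
    ev, mse, within = load_forecast_metrics(y_true, y_pred)
    ```

    A more accurate predictor drives `ev` toward 1, `mse` toward 0, and `within` toward 1, which is the direction of the improvements reported in the abstract.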

  • Power Grid Disaster Emergency Science
  • Yuchen LIN, Xinwei ZHANG, Sihang ZHANG, Zhi YANG, Jiting GU, Qiujie SUN, Maohua ZHONG
    Journal of Tsinghua University(Science and Technology). 2026, 66(3): 519-529. https://doi.org/10.16511/j.cnki.qhdxxb.2026.26.015
    Abstract ( ) Download PDF ( ) HTML ( )   Knowledge map   Save

    Objective: Typhoons, characterized by sudden onset, extensive geographic impact, and considerable destructive power, pose recurring threats to the stability and safety of power grid systems, particularly in China's coastal regions. As extreme weather events become more frequent due to climate change, conventional emergency management approaches are inadequate. These methods often suffer from fragmented knowledge sources, inefficient information extraction, and limited support for intelligent decision-making. Hence, this article proposes an integrated technical framework that combines knowledge graphs with large language models (LLMs). This study aimed to improve risk perception, enhance decision-making accuracy, and bolster emergency response effectiveness in typhoon-triggered power grid incidents. Zhejiang Province, a coastal area frequently impacted by typhoons, was selected as the demonstration case for the framework. Methods: The framework of knowledge graph construction included the design of two graph types: one derived from accident reports and the other derived from emergency plans. The accident-based knowledge graph was structured according to the Triangular Framework for Public Security Science and Technology at the schema layer. It organized knowledge into three primary dimensions: emergency events, affected infrastructure, and corresponding emergency management strategies. Meanwhile, the emergency-plan-based knowledge graph was structured based on the electric power production life cycle, covering key stages such as power transmission, power transformation, power distribution, power utilization, and energy storage. Both graphs worked together to support emergency planning. The system employed a hybrid approach at the data processing layer that integrated BERT with a bidirectional long short-term memory network. This hybrid model performed named entity recognition and relationship extraction. 
The extracted entities and relationships were visualized to improve model interpretability, enabling domain experts to validate and understand the underlying information. An enhanced discriminative similarity algorithm was introduced in the knowledge fusion process. Initially, cosine similarity and Pearson correlation filtered out low-relevance entity pairs. High-similarity entities were then semantically validated using the LLM, ensuring accurate fusion and reducing erroneous entity alignments. Experimental results showed a 10.11% improvement in accuracy compared to conventional methods. The final knowledge graphs were stored in the Neo4j graph database, which supported interactive visualization and real-time query functionalities. The system enabled intelligent reasoning for handling real-world disaster scenarios in the application stage. Using the Cypher query language, this study conducted a fuzzy query based on disaster descriptions. Relevant information was retrieved from the knowledge graph as a structured knowledge base. The ECO-STAR prompting template was used to guide the model in generating targeted risk analyses and emergency recommendations. Results: A case study was conducted in Zhejiang Province to validate the proposed framework. The results showed that integrating knowledge graphs and LLMs improved semantic precision. The integration also enhanced the relevance of decision support outputs. In addition, it reduced hallucination phenomena that often occurred when general-purpose LLMs were applied in specialized domains. Conclusions: This study highlights the value of leveraging LLMs in constructing an emergency knowledge graph for power grids during typhoons. The proposed method offers a scalable, intelligent solution for managing power grid emergencies during typhoons and serves as a valuable reference for enhancing the disaster resilience of energy systems.
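
    The two-stage knowledge-fusion filter described above (cosine similarity plus Pearson correlation to discard low-relevance pairs, with survivors passed to the LLM for semantic validation) can be sketched for the numeric prefiltering step. The embeddings and thresholds below are made-up illustrations; the abstract does not specify the embedding model or threshold values:

    ```python
    import numpy as np

    def cosine(u, v):
        """Cosine similarity between two embedding vectors."""
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    def pearson(u, v):
        """Pearson correlation between two embedding vectors."""
        return float(np.corrcoef(u, v)[0, 1])

    def prefilter_pairs(emb_a, emb_b, cos_th=0.95, pear_th=0.95):
        """Keep only entity pairs that pass BOTH similarity gates; in the
        described pipeline, the surviving high-similarity pairs would then
        be semantically validated by the LLM before fusion."""
        keep = []
        for i, u in enumerate(emb_a):
            for j, v in enumerate(emb_b):
                if cosine(u, v) >= cos_th and pearson(u, v) >= pear_th:
                    keep.append((i, j))
        return keep
    ```

    The cheap numeric gates prune the quadratic candidate space so that the expensive LLM check only runs on a small set of plausible matches.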

  • Wei ZHU, Shaobo ZHONG, Danqing ZHAO, Xin MEI
    Journal of Tsinghua University(Science and Technology). 2026, 66(3): 530-541. https://doi.org/10.16511/j.cnki.qhdxxb.2025.26.049
    Abstract ( ) Download PDF ( ) HTML ( )   Knowledge map   Save

    Objective: Conducting resilience assessments of transmission corridors under wildfire scenarios is crucial for enhancing the power grid's ability to respond to wildfire disasters. Given the lack of a unified definition for the resilience of transmission corridors under wildfire scenarios, the insufficient consideration of key resilience dimensions throughout the process of disaster incubation, occurrence, and development, and the complexity of existing assessment methods, this study aims to establish a quantifiable and operable dynamic resilience assessment framework for transmission corridors under wildfire scenarios. Methods: Combining resilience theory with the characteristics of wildfire incidents affecting transmission corridors, this study clarified the core connotations of resilience in wildfire scenarios. On this basis, a resilience assessment index system was developed considering five dimensions, namely preventive, resistance, absorptive, recovery, and adaptive capabilities, and 37 indicators, such as relative air humidity, temperature, vegetation coverage, operating voltage of power transmission lines, and centrality degree of transmission lines. By incorporating geographic information systems and remote sensing technologies, multisource data were integrated, and the data of the indicators were extracted. The analytic hierarchy process-entropy weight combination method was employed to determine indicator weights, and a systematic resilience assessment framework was developed. The framework was applied to the Tai'an region as a case study, with spatiotemporal analysis and sensitivity analysis conducted on the resilience assessment results. Results: (1) The spatiotemporal analysis revealed that on the whole, more than 70.00% of the transmission corridors in Tai'an presented high or very high resilience levels. The resilience levels of transmission corridors in different months and regions fluctuated to varying degrees. 
The resilience level of the transmission corridors at the border between the Daiyue District and Xintai City remained low for a long time. In May and September 2024, the proportions of transmission corridors with low resilience levels in Tai'an were 0.66% and 0.02%, respectively. Compared with that in the entire year, the resilience level of the transmission corridors significantly decreased in May and September 2024, indicating that these months are the key periods for strengthening the management of system resilience levels. (2) The sensitivity analysis showed that indicators such as the centrality degree of transmission lines, the density of the power department, vegetation coverage, distributed power sources, and transmission-line material significantly affect the overall resilience level of a system. Management measures for enhancing the resilience level of transmission corridors can be formulated based on such indicators. Conclusions: The research results revealed the following: (1) The proposed assessment framework systematically and quantitatively evaluated the resilience status of transmission corridors throughout wildfire disasters using geographic information systems and remote sensing technologies. (2) This framework shows good operability and applicability and can effectively support the spatiotemporal analysis of resilience levels and the identification of key influencing factors. (3) This framework can also promptly identify weak links in the resilience of transmission corridors, providing decision support for the formulation of targeted resilience improvement strategies.
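
    The analytic hierarchy process-entropy weight combination used for indicator weighting can be sketched as follows. The entropy half is the standard objective-weighting calculation; the linear blending coefficient `alpha` is an assumption for illustration, since the abstract does not state how the subjective and objective weight sets are merged:

    ```python
    import numpy as np

    def entropy_weights(x):
        """Entropy weight method. Rows = evaluated corridors, columns =
        indicators (already normalized so larger is better, all > 0)."""
        p = x / x.sum(axis=0)                            # sample proportions
        p = np.where(p <= 0, 1e-12, p)                   # guard against log(0)
        n = x.shape[0]
        e = -np.sum(p * np.log(p), axis=0) / np.log(n)   # entropy per indicator
        d = 1.0 - e                                      # divergence degree
        return d / d.sum()

    def combined_weights(w_ahp, w_ent, alpha=0.5):
        """Linear AHP-entropy combination; alpha balances the subjective
        (AHP) and objective (entropy) halves and is an assumed value."""
        w = alpha * np.asarray(w_ahp) + (1.0 - alpha) * np.asarray(w_ent)
        return w / w.sum()
    ```

    An indicator whose values barely vary across corridors carries little information, so the entropy method pushes its weight toward zero, while the AHP term preserves expert judgment.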

  • Jun LIU, Ling HAO, Lei CHEN, Yong MIN, Fei XU
    Journal of Tsinghua University(Science and Technology). 2026, 66(3): 542-552. https://doi.org/10.16511/j.cnki.qhdxxb.2025.26.045
    Abstract ( ) Download PDF ( ) HTML ( )   Knowledge map   Save

    Objective: The excellent electrochemical stability, high safety, and long service life of lithium iron phosphate (LiFePO4) batteries have led to their widespread use in energy storage systems. However, in practical applications, large-capacity energy storage batteries often suffer from significant inhomogeneity of the internal current. This uneven current distribution can lead to localized temperature variations, alter the shape of the battery's terminal voltage curve, and accelerate degradation processes such as the formation of lithium dendrites and thermal stress. Existing models fail to comprehensively account for the coupled thermal-electrical-aging characteristics when modeling LiFePO4 batteries, rendering them incapable of accurately reflecting the variations in the voltage curve caused by inconsistencies in the internal current distribution. Furthermore, such models lack credible and systematic validation across multiple operating conditions. Methods: Thus, in this study, a 280.000 Ah LiFePO4 battery was selected as the research target for investigating a thermal-electrical-aging coupling model for large-capacity LiFePO4 batteries that considers changes in the shape of the voltage curve. By modeling the battery as a parallel combination of multiple subcells with varying electrical characteristics, the internal inhomogeneity of the battery and the resulting influence on deformation of the voltage curve are effectively simulated, where the voltage of the parallel battery pack serves as the terminal voltage output by the model. A temperature estimation module is integrated to simulate the processes of heat generation and transfer, while an aging module is introduced to capture the evolution of capacity degradation. Together, these modules form a comprehensive thermal-electrical-aging coupling model. 
By constructing a comprehensive thermal-electrical-aging coupling model, the terminal voltage, temperature, and capacity of the battery can be obtained directly from the input current. The model is then validated under constant current discharge conditions with variations in temperature, dynamic operating conditions, and multiple charge-discharge cycle conditions. Results: Experimental validation using LiFePO4 batteries demonstrated that the proposed model could accurately predict the voltage, temperature, and aging state with relatively low computational complexity. Specifically, during constant current discharging of LiFePO4 batteries, the root mean square error (RMSE) of the temperature estimation remained below 1.0 ℃ across different temperatures and discharge rates, except under the highest rate condition. The increased RMSE at the highest discharge rate was attributed to a larger internal-external temperature gradient, which reduced the accuracy of the estimation. Additionally, faster heat dissipation at lower temperatures further reduced the precision of the temperature prediction. Because the thermal and electrical models were coupled, their respective errors were compounded. Across the four dynamic conditions, the absolute error in the voltage simulated by the model was below 20.0 mV for all intervals, except at the current-switching points, and the absolute error for the temperature estimation was below 1.0 ℃. The RMSEs for capacity degradation estimated through simulation using the coupled model were 0.223, 1.640, 1.320, and 2.700 Ah for cycling aging at 35.0 ℃/0.5 C, 35.0 ℃/1.0 C, 45.0 ℃/0.5 C, and 45.0 ℃/1.0 C, respectively—all below 1% of the total capacity—demonstrating the strong ability of the model to accurately simulate the battery capacity after aging. 
Conclusions: The proposed thermal-electrical-aging coupling model effectively addresses the limitations of traditional equivalent circuit models, which often lack the capability to account for inhomogeneity in the internal current, temperature variations, and aging effects. This model thus provides a solid theoretical foundation and practical methodology for estimating the state of the battery in real-time, diagnosing faults, and predicting the lifetime of energy storage systems.
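
    The parallel-subcell idea at the core of the model (subcells with differing electrical characteristics sharing one terminal voltage, so an uneven current split emerges naturally) can be illustrated with a simple Rint-type sketch. The real model adds RC dynamics plus the thermal and aging modules, all omitted here; the numeric values are illustrative:

    ```python
    def parallel_subcell_voltage(ocv, r, i_total):
        """Terminal voltage of N subcells in parallel (simple Rint sketch).
        ocv[i], r[i]: open-circuit voltage (V) and internal resistance (ohm)
        of subcell i; i_total: pack current (A, positive = discharging).
        All subcells share one terminal voltage, and the current split
        follows from Kirchhoff's laws."""
        g = [1.0 / ri for ri in r]                            # conductances
        v = (sum(o * gi for o, gi in zip(ocv, g)) - i_total) / sum(g)
        branch_i = [(o - v) * gi for o, gi in zip(ocv, g)]    # per-subcell current
        return v, branch_i

    # Two unequal subcells of a large-capacity cell (illustrative values)
    v, branch = parallel_subcell_voltage([3.30, 3.30], [1.0e-3, 2.0e-3], 140.0)
    ```

    The lower-resistance subcell carries more of the load, which is exactly the internal current inhomogeneity that deforms the terminal voltage curve in the full model.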

  • Junjie PENG, Weiguang AN, Chang LIU
    Journal of Tsinghua University(Science and Technology). 2026, 66(3): 553-562. https://doi.org/10.16511/j.cnki.qhdxxb.2025.26.048
    Abstract ( ) Download PDF ( ) HTML ( )   Knowledge map   Save

    Objective: In recent years, global climate change and intensified human activities have significantly increased surface fire risk in power transmission corridors spanning mountainous and forested areas, posing a serious threat to the safe and stable operation of modern power grids. Surface vegetation serves as the primary fuel for ground fires and wildfires, directly influencing the flame spread velocity, fire development stages, and combustion intensity. Therefore, investigating the combustion characteristics of surface vegetation is crucial for improving wildfire prevention systems, enhancing the accuracy of fire behavior predictions, and ensuring the operational reliability of transmission infrastructures. However, current research on the combustibility and fire behavior of various surface fuels in transmission corridors remains insufficient, particularly in terms of comparative analyses across different vegetation types and accumulation thicknesses. There is a pressing need for systematic experimental studies to reveal the heat release mechanisms, fire growth, and gas emission characteristics of these vegetation types. Methods: This study aimed to experimentally explore the combustion characteristics of typical surface vegetation in transmission corridors under varying accumulation thicknesses, compare differences in fire behavior among vegetation types, and identify key parameters, including flame spread patterns, mass loss rate evolution, temperature distributions, and gas volume fraction dynamics. The combustion characteristics of different types of surface vegetation in transmission corridors were investigated through small-scale experiments. A self-designed 1 m2 small-scale combustion platform was constructed and equipped with cameras, a thermocouple array, a high-precision electronic balance, and gas sensors to collect key combustion data, including flame behavior, temperature distribution, mass loss, and gas volume fractions during the tests. 
Three typical vegetation types—shrubs, coniferous litter (pine needles), and broadleaf litter (maple leaves)—were selected as the research objects. For each vegetation type, three accumulation thicknesses (10, 15, and 20 cm) were specified. Under an ambient wind speed of 1 m/s, key parameters, including flame morphology, flame height, mass loss rate, temperature distributions, and smoke gas volume fraction, were recorded and analyzed to reveal the spatiotemporal evolution of their combustion characteristics. Results: The results showed that the combustion process of typical surface vegetation in transmission corridors could be divided into four stages: initial, development, peak, and extinguishment. Flame propagation exhibited an arc-shaped outward expansion pattern. As the accumulation thickness increased, the flame height, mass loss rate, and peak temperature also increased. The CO2 volume fraction followed a "sharp rise-gradual decline" trend, whereas the volume fraction of CO exhibited a phased characteristic of concentrated release in the initial and extinguishing stages. The volume fractions of both CO and CO2 increased with increasing accumulation thickness. At the same accumulation thickness, flame heights ranked from highest to lowest as coniferous litter, shrubs, and broadleaf litter. Among them, coniferous litter exhibited the highest combustion intensity, greatest flame height, maximum mass loss rate, and most pronounced multipeak fluctuation behavior. Conclusions: This study reveals the combustion characteristics of typical surface vegetation in transmission corridors under varying accumulation thicknesses. This study also systematically analyzes the differences in flame spread characteristics, thermal behavior, and gas emissions across vegetation types. The findings provide a robust experimental basis for assessing surface fire risk, modeling fire behavior, and developing wildfire warning strategies for power transmission corridors.

  • Feng WANG, Zhuoyi ZOU, Sen LI, Jiaying WANG, Jinghai PAN, Li WANG, Xiaowei HUANG
    Journal of Tsinghua University(Science and Technology). 2026, 66(3): 563-576. https://doi.org/10.16511/j.cnki.qhdxxb.2025.26.040
    Abstract ( ) Download PDF ( ) HTML ( )   Knowledge map   Save

    Objective: Existing specifications inadequately address the determination of wind vibration coefficients for flexible photovoltaic supports, and research on interference effects in multi-row structures as well as coupled vibrations with multiple degrees of freedom is limited. This study investigates the wind-induced vibration response and fluctuating wind loads on flexible photovoltaic supports through wind tunnel experiments. Furthermore, a method for determining wind vibration coefficients is developed by considering the coupling effects of torsion, vertical, and bending modes, and the results are compared with those of five international codes. The aim is to provide a highly accurate scientific basis for evaluating wind vibration coefficients in practical engineering applications and to support the wind-resistant structural design of highly flexible photovoltaic systems. Methods: An aeroelastic model of a double-row, three-span photovoltaic system was designed, fabricated, and installed in a wind tunnel test environment. Vibration measurement tests were performed under a wide range of inclination angles (from -30° to 30°), attack angles (from -60° to 60°), and flow conditions, including uniform and turbulent flow fields. The tests were designed to systematically examine the interference effects among upstream and downstream photovoltaic rows and the influence of flow characteristics on the wind-induced vibration behavior of the structure. Using vertical and torsional displacements as primary indicators, the wind vibration coefficient was determined by the envelope value method to capture the maximum response range. Results: For nonzero inclination angles, the presence of upstream photovoltaic panels reduced the average displacement response of downstream panels by approximately 30.00% because of shielding and aerodynamic interference. 
Under turbulent flow conditions, the average displacement trend of the photovoltaic supports was highly complex and irregular, but overall response amplitudes were relatively small. The pulsating displacement amplitudes in the turbulent fields were generally greater than those under uniform flow conditions. Torsional vibration dominated the dynamic response characteristics, while the wind vibration coefficient exhibited a distinctive "decrease-increase-decrease" pattern with varying inclination angles, peaked at attack angles of ±45°, and showed a clear symmetrical distribution. The measured wind vibration coefficients ranged from 1.50 to 3.00 at different inclination angles, mostly falling between the Japanese and British code values. The values under varying attack angles ranged from 4.50 to 5.50, which significantly exceeded those calculated according to the existing specifications. Conclusions: The wind vibration coefficient value prescribed by the Chinese standard (1.00) is significantly underestimated and does not reflect the actual wind-induced dynamic behavior of flexible photovoltaic structures. For highly flexible photovoltaic systems, it is strongly recommended that wind tunnel tests or dynamic time-history analyses be conducted to determine the wind vibration coefficient accurately. In addition, adequate safety redundancy must be incorporated into the structural design process. Existing design codes should incorporate structural flexibility correction factors to improve the accuracy and applicability of the prescribed wind vibration coefficient values. In regions frequently affected by typhoons, conservative values from Japanese standards are recommended as a reference. In typical wind zones, an intermediate value between experimental values and those provided by British and American standards is advised for reliable and efficient design.
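
    The envelope value method described in the Methods can be sketched with a gust-response-style estimate of the wind vibration coefficient from a measured displacement time history. The peak factor and the signals below are illustrative assumptions, not the paper's measured data or exact procedure:

    ```python
    import numpy as np

    def wind_vibration_coefficient(disp, peak_factor=2.5):
        """Gust-response-style estimate beta = (|mean| + g*sigma) / |mean|;
        the peak factor g is an assumed value, not taken from the paper."""
        mean = abs(float(np.mean(disp)))
        sigma = float(np.std(disp))
        return (mean + peak_factor * sigma) / mean

    def envelope_coefficient(cases, peak_factor=2.5):
        """Envelope value method: take the maximum coefficient over all
        measured cases (inclination angles, attack angles, flow fields)."""
        return max(wind_vibration_coefficient(d, peak_factor) for d in cases)
    ```

    A purely static response gives a coefficient of 1 (the value in the Chinese code), and stronger fluctuation around the mean pushes the coefficient upward, which is why flexible supports far exceed the code value.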

  • Chao ZHOU, Jun REN, Kunpeng JI, Junhui LI, Li LI
    Journal of Tsinghua University(Science and Technology). 2026, 66(3): 577-585. https://doi.org/10.16511/j.cnki.qhdxxb.2025.26.044
    Abstract ( ) Download PDF ( ) HTML ( )   Knowledge map   Save

    Objective: Considering the characteristics of thermal de-icing of transmission lines, which is safe but time-consuming, and those of mechanical impact de-icing, which is simple and fast but may damage the transmission line components, this article proposes a combined de-icing technology involving initial thermal and mechanical impact de-icing. Methods: The combined de-icing technology first uses thermal de-icing to melt the ice layer at the contact surface between the ice and the conductor, thereby reducing the adhesion force between the ice and the conductor to nearly 0. Mechanical impact de-icing is then initiated, where only a small impact force is required to exceed the cohesion force of the ice and cause the ice to fall off. To verify the effectiveness of the combined de-icing technology, a complete numerical model for de-icing of ice-covered conductors was established in stages. Using FLUENT, a heat transfer model for melting ice in ice-covered conductors (considering factors such as Joule heating and latent heat of the phase change) was established, and key parameters (such as ambient temperature, ice thickness, and the de-icing current) were set. Through transient thermal flow coupling calculations, the initial melting process was simulated, and the time threshold for initial thermal ice melting under different working conditions was determined. Subsequently, using ABAQUS, a de-icing model for ice-covered conductors considering the anisotropic mechanical properties of the ice layer was established based on the ice melting model of ice-covered conductors. The cohesive element was introduced, and the failure behavior of the ice cohesive force was simulated through the maximum stress and the ice shedding criterion. A calculation of the de-icing process was conducted, considering only the ice cohesive force. 
In terms of load application, the explicit dynamics analysis method was adopted, and the critical impact acceleration (impact force) for ice shedding was obtained by applying transient impact loads. Results: The results showed that during the initial thermal ice melting stage, the ice melting time increased with lower ambient temperature, higher wind speed, and smaller de-icing current. The ice thickness had no significant effect on the initial thermal ice melting time. Under the condition of no adhesion force, an increase in the cohesive strength of the ice increased the impact force required for ice shedding. The de-icing time of the combined de-icing technology was approximately 10.00%-20.00% less than that of thermal ice melting alone, and the impact force (critical acceleration) was approximately 40.00% less than that of mechanical impact de-icing alone. Conclusions: This study proposes a combined de-icing technology that achieves efficient and low-damage de-icing through staged coordinated action. This technology first uses the Joule heating effect to cause a phase change at the conductor-ice interface and generate a water film, reducing the interface adhesion force to below the critical value. A mechanical impact load is then applied, and only the cohesive force of the ice needs to be overcome to achieve ice shedding, reducing both the de-icing time and the impact force of mechanical impact de-icing. The mechanical impact de-icing test results under the condition of no adhesion force verify the feasibility of this combined de-icing technology for transmission lines. De-icing is achieved in a short time with negligible damage to transmission lines, and no residual ice is left on the conductor surface following de-icing. As such, the economy and safety of de-icing operations are significantly improved.

  • Kai WANG, Jianming HAO, Hongjie ZHANG, Shouhao YUAN
    Journal of Tsinghua University(Science and Technology). 2026, 66(3): 586-596. https://doi.org/10.16511/j.cnki.qhdxxb.2026.26.002
    Abstract ( ) Download PDF ( ) HTML ( )   Knowledge map   Save

    Objective: Although existing research on conductor fatigue life has established a corresponding theoretical basis and provided core support for subsequent calculation work, two key issues remain in engineering applications. First, although the combined dynamic effects of wind speed and direction in the field environment are critical to the actual fatigue evolution process and service life of conductors, existing models insufficiently consider this factor, leading to discrepancies between life predictions based on theoretical calculations and actual engineering observations. Second, the existing theoretical system for fatigue life calculation is complex, requires specialized calculation methods, and demands a high level of professional and technical knowledge from users. As a result, wide promotion and efficient application in the power industry—especially in grassroots operation and maintenance units—are difficult, restricting the transformation of theoretical results into engineering practice. To address these problems, this study proposes a method for evaluating conductor fatigue life based on the probability distributions of wind speed and direction. Methods: Transmission lines in Northwest China were selected as the research object. More than 10 years of meteorological data from this region were systematically collected and analyzed, and the probability density functions of wind speed and wind direction were accurately obtained. Considering key factors such as conductor type, span, and tension, more than 40 000 finite element models of conductor-insulator string suspension systems were constructed. Three typical conductors corresponding to maximum, median, and minimum frequencies were selected for fatigue analysis based on the first-order positive symmetrical side-bending frequency of the conductor. Nonlinear dynamic response analysis was then carried out for these conductors, and their stress time histories were extracted. 
Finally, the annual fatigue performance of different conductors was evaluated using the Wöhler safety curve, the rain flow counting method, and Miner's linear cumulative damage theory based on the measured wind speed distribution. Results: The analysis showed the following results: (1) Monitoring of local meteorological conditions revealed that the dominant wind direction in the study area was southeast, and the wind speed distribution conformed to the generalized extreme value distribution with a location parameter of 2.507, a shape parameter of 0.080, and a scale parameter of 1.440. (2) Stress time histories were extracted at the suspension point edge (40 cm), the 1/4-span position, and the midspan position. The stress level at the suspension point edge was considerably higher than that at other positions, and the average stress of the conductor increased with decreasing natural frequency and increasing wind speed. (3) The degree of wind-induced fatigue damage was closely related to the conductor's natural frequency and the ambient wind speed. The lower the natural frequency was, the greater the fatigue damage, especially at higher wind speeds. A fatigue performance database of transmission lines was finally established based on conductor parameters. Conclusions: The findings of this study provide an efficient and practical tool for transmission line operation and maintenance. By entering key parameters such as conductor type, tension, and span, and combining them with regional wind speed and direction data, the method enables rapid estimation of conductor fatigue life, scientific assessment of remaining service life, proactive maintenance of high-risk line segments, and significant improvement in the operational efficiency of transmission line maintenance. Its application contributes to ensuring the safety and stability of power system operation.
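The damage-accumulation chain described above (rainflow cycle extraction, a Wöhler S-N relation, and Miner's linear summation) can be sketched in a few lines of Python. This is a simplified three-point rainflow that closes the residual excursions into full cycles, and the S-N constants C and m are illustrative placeholders, not the conductor curve used in the study:

```python
def rainflow_ranges(series):
    """Simplified three-point rainflow: returns the list of full-cycle
    stress ranges (residual excursions are closed into full cycles)."""
    tp = [series[0]]                          # turning points of the history
    for x in series[1:]:
        if len(tp) >= 2 and (tp[-1] - tp[-2]) * (x - tp[-1]) > 0:
            tp[-1] = x                        # same direction: extend excursion
        elif x != tp[-1]:
            tp.append(x)
    ranges, stack = [], []
    for p in tp:
        stack.append(p)
        while len(stack) >= 3:
            x_rng = abs(stack[-1] - stack[-2])
            y_rng = abs(stack[-2] - stack[-3])
            if x_rng < y_rng:
                break
            ranges.append(y_rng)              # count the inner cycle ...
            del stack[-3:-1]                  # ... and remove its two points
    return ranges

def miner_damage(ranges, C=1.0e12, m=3.0):
    """Miner's rule with a Wöhler curve N(S) = C * S**(-m); each counted
    cycle of range S consumes S**m / C of the fatigue life."""
    return sum(s**m / C for s in ranges)

cycles = rainflow_ranges([0, 5, 1, 7, 2, 6, 0])   # extracted ranges [4, 4, 7]
damage = miner_damage(cycles)                      # fraction of life consumed
```

Summing such damage fractions over a year of wind-driven stress histories, weighted by the wind speed distribution, gives the annual damage and hence an estimated life of 1/damage years.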

  • Xiao ZHU, Qian WANG, Qianbo XIAO, Haitao WU, Huiying XIANG
    Journal of Tsinghua University(Science and Technology). 2026, 66(3): 597-607. https://doi.org/10.16511/j.cnki.qhdxxb.2025.26.047

Objective: With the continuous development of ultrahigh-voltage (UHV) power transmission projects, transmission corridors often traverse mountainous regions that are prone to seasonal icing and deicing. Ice shedding on transmission lines, especially nonuniform ice shedding, can cause severe tension imbalances, large vertical jumps of ground wires, and transient dynamic forces acting on suspension insulator-hardware string systems. These effects pose significant threats to structural integrity, including geometric interference, insulator deflection, and potential mechanical failure. Moreover, conventional static-based design approaches often underestimate these dynamic events, as they neglect transient load amplification and complex system interactions. However, the mechanical modeling of suspension systems under such abrupt loading conditions remains underdeveloped, especially in the context of unbalanced tension resulting from nonuniform ice shedding. Methods: To address this issue, this study develops a detailed finite element model of a two-span UHV transmission line segment using the OpenSeesPy platform. This model simulates a coupled system comprising ground wires, suspension insulators, and hardware components, introducing time-dependent ice-shedding loads to reproduce realistic deicing scenarios. The modeling framework accounts for nonlinear cable behavior, interactions among components, and geometric nonlinearity due to large displacements to capture the full dynamic response. Full-span instantaneous ice shedding on the large-span side, recognized as the most critical and unfavorable scenario due to severe transient excitations, is specifically analyzed. To ensure the accuracy of the results, the model was validated against comparable experimental benchmarks prior to extensive parametric simulations.
Results: Through dynamic simulations, this study evaluated the responses of the suspension string system under various initial wire tensions, ice thicknesses, and structural damping ratios. Parametric sensitivity analysis revealed that increasing the initial tension increased the axial stiffness and reduced insulator swing and unbalanced horizontal force but magnified the vertical jump amplitude of the ground wires. Thicker ice significantly amplified the dynamic responses, including larger insulator deflections and increased unbalanced forces. Although greater damping effectively reduced jump heights, it had a limited influence on the maximum deflection angle and tension imbalance of the insulator during the initial impact phase. In addition, two failure criteria based on experimental benchmarks were established: a maximum allowable unbalanced horizontal force (5.98 kN) and a critical insulator deflection angle (10.58°), beyond which geometric interference occurred between components. The simulation results showed that for certain combinations of low initial tension and high ice thickness, these thresholds were exceeded, indicating high failure risk. Response surfaces were constructed to visualize how different parameter combinations approached or surpassed these limits, providing intuitive references for assessing safety margins. Conclusions: This study highlights that current design standards may underestimate the dynamic effects of ice shedding on suspension systems. The findings emphasize the necessity of integrating mechanical interference checks and parametric robustness assessments into the design process for suspension insulator assemblies in UHV systems. The modeling framework and sensitivity results provide guidance for configuring damping, wire tension, and hardware design to mitigate dynamic risks during extreme weather conditions.
Overall, this study provides a comprehensive mechanical analysis and failure risk evaluation of UHV transmission line suspension systems under ice-shedding events and offers validated simulation tools, insights into governing parameters, and practical recommendations to improve the resilience of power transmission infrastructure in cold-climate regions.
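The screening step implied by the two failure criteria, checking each simulated parameter combination against the quoted thresholds, might look as follows in Python. The peak-response numbers in `cases` are placeholders standing in for simulation output, not values from the study:

```python
def exceeds_failure_criteria(unbalanced_force_kN, deflection_deg,
                             force_limit_kN=5.98, angle_limit_deg=10.58):
    # the two defaults are the experimentally based criteria quoted above
    return (unbalanced_force_kN > force_limit_kN
            or deflection_deg > angle_limit_deg)

# (tension, ice) cases with placeholder peak responses (force in kN, angle
# in degrees) standing in for dynamic-simulation output
cases = {
    ("low tension", "thick ice"): (6.4, 11.2),
    ("high tension", "thick ice"): (4.1, 7.9),
    ("low tension", "thin ice"): (3.2, 5.5),
}
at_risk = [combo for combo, (f, a) in cases.items()
           if exceeds_failure_criteria(f, a)]
```

Sweeping such a check over a grid of tensions and ice thicknesses is one way to trace the response surfaces and safety margins mentioned above.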

  • Haitao WU, Qian WANG, Anxin ZOU, Jia LIU, Bin WU, Sihua GUO, Gaohui HE
    Journal of Tsinghua University(Science and Technology). 2026, 66(3): 608-616. https://doi.org/10.16511/j.cnki.qhdxxb.2025.26.038

Objective: Icing on overhead transmission lines presents a serious risk to the safety and stability of power grids, particularly amid the rapid expansion of ultra-high voltage networks in China. Accurate simulation of conductor icing is crucial for effective disaster prevention and mitigation. However, many existing models primarily depend on the median volume diameter (MVD) of droplets, often overlooking droplet size distribution (DSD) characteristics and leading to simulation errors. This study addresses this challenge by developing an optimized method for selecting droplet size parameters in icing simulations, thereby improving computational accuracy and efficiency. This work is essential for enhancing the reliability of icing predictions and reinforcing the resilience of power infrastructure under extreme weather conditions. Methods: This study employs two primary methodologies: the analytical and Eulerian methods. The analytical method, based on Finstad's model, calculates the droplet collision coefficient (α1) using MVD, whereas the Eulerian method leverages computational fluid dynamics to simulate air-droplet two-phase flow, incorporating DSD for higher precision. A comparative analysis of these methods is conducted to evaluate their efficiency and accuracy. In addition, this study investigates the impact of environmental parameters (wind speed, MVD, and conductor diameter) and droplet dispersion on α1 errors. A dynamic selection strategy is proposed to determine when MVD suffices or when DSD is necessary based on predefined error thresholds. Results: The key findings included the following: the analytical method outperformed the Eulerian method in computational speed but tended to overestimate α1 due to unaccounted turbulence effects. Meanwhile, owing to the influence of DSD, directly using MVD to calculate α1 in conductor icing simulations also introduced a certain error. The error diminished with higher wind speeds and larger MVD values.
Using MVD alone introduced errors (Δα1) in α1 calculations, which exhibited a nonlinear trend: Δα1 initially decreased to zero and then increased as MVD, wind speed, or conductor diameter increased. To avoid calculation errors in the conductor's α1, one might consider using DSD instead of MVD for computing α1. However, this method involved significantly greater computational requirements and was therefore unsuitable for rapid assessment of conductor icing accumulation. This study identified critical thresholds where MVD could replace DSD without significant accuracy loss, optimizing computational resources. In detail, leveraging the high computational efficiency of the analytical method, Δα1 was calculated. When this error was less than or equal to the maximum allowable error, MVD could be used in place of DSD, thereby achieving an optimal balance between computational efficiency and accuracy. The results were validated through case studies using experimental data from Pavlo et al. Conclusions: This study highlights the limitations of MVD-based icing simulations and underscores the importance of droplet dispersion characteristics. By integrating analytical and Eulerian approaches, this study provides a practical framework for dynamically selecting droplet size parameters, ensuring accurate and efficient icing predictions. The results show that, although MVD suffices under specific conditions, DSD is indispensable for scenarios involving highly dispersed droplets or smaller conductors. This study advances the field by offering a scalable solution for power grid resilience against icing hazards, with implications for academic research and industrial applications.
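The dynamic selection strategy, computing Δα1 cheaply and falling back to the full DSD only when the MVD shortcut exceeds the allowed error, can be sketched as below. The collision-coefficient function here is a hypothetical monotone surrogate, not Finstad's published parameterization, and the unit conventions are purely illustrative:

```python
def collision_coeff(d_um, wind_ms, cond_diam_mm):
    # hypothetical monotone surrogate for the droplet collision coefficient;
    # NOT Finstad's published fit, it merely rises with droplet size and wind
    k = d_um**2 * wind_ms / (1.0e4 * cond_diam_mm)
    return k / (1.0 + k)

def alpha_from_dsd(bins, wind_ms, cond_diam_mm):
    # bins: (diameter in um, liquid-water mass fraction), fractions sum to 1
    return sum(w * collision_coeff(d, wind_ms, cond_diam_mm) for d, w in bins)

def choose_parameter(bins, mvd_um, wind_ms, cond_diam_mm, tol=0.02):
    """Dynamic selection: use the cheap MVD shortcut when the error it
    introduces stays within `tol`; otherwise fall back to the full DSD."""
    a_mvd = collision_coeff(mvd_um, wind_ms, cond_diam_mm)
    a_dsd = alpha_from_dsd(bins, wind_ms, cond_diam_mm)
    return ("MVD" if abs(a_mvd - a_dsd) <= tol else "DSD"), a_mvd, a_dsd
```

For a monodisperse spectrum the two estimates coincide and MVD is chosen; for a dispersed spectrum with the same MVD, a tight tolerance forces the DSD path.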

  • Demand-Responsive Customized Bus
  • Yuhang GUO, Kun AN, Wanjing MA
    Journal of Tsinghua University(Science and Technology). 2026, 66(3): 617-626. https://doi.org/10.16511/j.cnki.qhdxxb.2025.26.034

Objective: Transportation hubs, such as airports and high-speed rail stations, frequently experience taxi shortages, especially at night, when conventional public transit is unavailable. This challenge diminishes passenger satisfaction and reduces the efficiency of passenger dispersal from these hubs. To address this issue, this study proposes an optimization framework for static customized shuttle bus routes that incorporates collaborative taxi services. Unlike traditional door-to-door customized bus services, the proposed approach utilizes customized shuttle buses to transport passengers from the hub to strategic stops closer to their destinations, where taxis are more accessible to complete the final leg of their journeys. Methods: A mixed-integer nonlinear programming model was developed to optimize shuttle bus routes and service frequencies. The objective function minimizes the weighted total costs, encompassing the operational costs for companies and the travel time costs for passengers. The model accounts for the possibility that passengers may depart within a 60.0 min window after making a reservation, owing to delays such as baggage claim. Route design is based on estimated passenger demand patterns. Passengers select services according to the performance level of the collaborative mode, ultimately achieving an equilibrium state. This study integrates passenger choice behavior through a mode choice model that estimates the proportion of travelers opting for either collaborative service or direct taxi service from the hub. To solve this complex model, this study developed a customized algorithm that initially relaxes the problem through fixed proportions for passenger mode choices. The algorithm then designs the customized shuttle bus system, calculates the actual passenger choice probabilities, and iteratively updates the choice proportions until convergence is reached.
This study evaluated the proposed model using data from Shanghai Hongqiao Hub and analyzed the system performance under various scenarios, including different service modes, pricing strategies, and vehicle types. Results: A case study of Shanghai Hongqiao Hub revealed the following findings: (1) The optimization yielded four customized shuttle bus routes that efficiently dispersed passenger flow from the hub. The implementation of the collaborative service reduced the system's hourly comprehensive cost by 47% compared with a taxi-only service. The average taxi waiting time at the hub decreased dramatically by 71%, from 30.0 min to 8.7 min. (2) The collaborative approach clearly outperformed traditional door-to-door customized shuttle bus services, offering substantial advantages in both operational cost savings and passenger appeal. (3) Sensitivity analysis revealed an optimal price point of 1.50 yuan/km that balances operator profitability with service attractiveness to passengers. Additionally, by selecting differentiated vehicle types based on demand density, the profitability of customized shuttle bus services can be significantly increased while also improving service quality. Conclusions: The optimized collaborative service model effectively resolves taxi shortage problems at transportation hubs by integrating strategically designed customized shuttle bus routes with taxi services. This integration ensures an optimal balance between operational efficiency and passenger convenience. The optimization framework and solution algorithm developed in this study provide a practical approach for planning static customized shuttle bus routes and schedules while incorporating cooperation with taxi services. These findings offer valuable guidance for transportation planners and hub managers seeking to improve passenger dispersal efficiency and the overall travel experience through innovative intermodal solutions.
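The iterative equilibrium loop, estimating mode shares, updating the taxi waiting time they imply, and repeating until the shares converge, can be sketched with a binary logit model. The cost terms, the congestion relation, and all parameter values below are illustrative assumptions, not the calibrated Hongqiao model:

```python
import math

def logit_share(cost_collab, cost_taxi, theta=0.1):
    # binary logit: probability that a passenger picks the collaborative mode
    e_collab = math.exp(-theta * cost_collab)
    e_taxi = math.exp(-theta * cost_taxi)
    return e_collab / (e_collab + e_taxi)

def solve_equilibrium(bus_time=25.0, last_leg=6.0, direct_ride=20.0,
                      base_wait=30.0, tol=1e-6):
    """Fixed-point loop: mode shares determine the taxi queue at the hub,
    which feeds back into the shares (all costs in minutes, illustrative)."""
    p = 0.5
    for _ in range(1000):
        wait = base_wait * (1.0 - p)       # toy congestion relation at the hub
        cost_collab = bus_time + last_leg  # shuttle leg plus short taxi leg
        cost_taxi = wait + direct_ride     # queue at the hub plus direct ride
        p_new = logit_share(cost_collab, cost_taxi)
        if abs(p_new - p) < tol:
            return p_new
        p = 0.5 * (p + p_new)              # damped update for stable convergence
    return p
```

As expected, lengthening the shuttle leg lowers the equilibrium share of the collaborative mode, which is the behavioral feedback the solution algorithm described above must iterate against.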

  • Zhaohui DING, Xiaoning ZHU, Liujiang KANG
    Journal of Tsinghua University(Science and Technology). 2026, 66(3): 627-637. https://doi.org/10.16511/j.cnki.qhdxxb.2025.26.043

Objective: In public transportation, the "last mile" challenge encountered by residents of large residential communities remains a persistent issue. Existing feeder-bus systems operating within such areas often encounter issues, such as high rates of empty vehicles, traffic congestion, and inadequate capacity during peak hours, primarily stemming from suboptimal route designs and inflexible scheduling. To address these challenges, this study aims to optimize the route design and timetable of modular microcirculation buses that shuttle passengers to subways within large residential areas. Methods: First, a mixed-integer nonlinear programming model considering various constraints, such as route generation, passenger assignment, and vehicle utilization, is constructed to minimize total cost, which encompasses the operating expenses of the company, the reservation time of the passengers, and the in-transit time. To increase the solution efficiency of the constructed model, auxiliary variables are introduced to reduce the degree of the high-order terms in the objective function and constraints. Second, an improved hybrid genetic algorithm is designed to overcome the shortcomings of commercial solvers in obtaining exact solutions for large-scale problem models. The improved algorithm has the following features: passenger-assignment and route-repair operators are embedded to accelerate the solution process and ensure the feasibility of offspring individuals; an elitism preservation strategy is adopted; and simulated annealing operators are integrated into the genetic algorithm. These features improve the optimization efficiency of the algorithm and prevent premature convergence to local optima. Finally, a case study is conducted on real regional road networks and generated passenger demands, followed by a series of sensitivity tests.
Results: The results of the case study revealed the following: (1) The driving speed of the modular buses had a significant effect. As the bus driving speed increased from 33.00 to 36.00 km/h, the total system cost decreased significantly owing to the reduced number of deployed vehicles. Conversely, as the driving speed exceeded 39.00 km/h, the total system cost exhibited diminished sensitivity to further variations in speed. (2) The total system cost generally decreased linearly with the relaxation of the tolerance for reservation-time errors. When the tolerance for reservation-time errors was relaxed from 10.00 to 13.00 min, the number of deployed vehicles decreased from eight to five. (3) When the fixed costs were set at ¥1100.00, ¥1050.00, ¥850.00, and ¥800.00, the numbers of deployed modular buses in all the cases exceeded five, and the average reservation-time errors of the passengers in all four experiments were significantly smaller than those in the other experiments. Conclusions: The following conclusions can be drawn from the case study results: (1) Increasing speed within a certain range can lead to reduced operational costs. However, beyond this range, the cost-reduction effect diminishes and safety risks increase, requiring a balance between efficiency and risk. (2) Strict tolerance of reservation time errors reduces passengers' average reservation errors but increases operational costs and passengers' in-transit time. Therefore, setting an appropriate driving speed and error tolerance is crucial for maximizing system benefits. (3) Flexible parameter settings help maintain greater population diversity during the early and middle stages of algorithm execution, thereby enriching the types of individuals in the system and creating favorable conditions for the algorithm to obtain improved solutions.
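The hybrid structure described in the methods, a genetic algorithm with elitism plus a simulated-annealing acceptance test to avoid premature convergence, can be illustrated on a toy bitstring problem. The encoding, operators, and parameters here are illustrative stand-ins; the paper's operators additionally repair routes and assign passengers:

```python
import math
import random

def hybrid_ga(fitness, init_pop, n_gen=200, elite_k=2, t0=1.0, cooling=0.98):
    """Minimize `fitness` with a GA that keeps elites and applies a
    simulated-annealing acceptance test to offspring (toy-scale sketch)."""
    rng = random.Random(0)
    pop = list(init_pop)
    temp = t0
    for _ in range(n_gen):
        pop.sort(key=fitness)
        next_pop = pop[:elite_k]          # elitism: best individuals survive
        rejects = 0
        while len(next_pop) < len(pop):
            a, b = rng.sample(pop[: len(pop) // 2], 2)  # parents, better half
            cut = rng.randrange(1, len(a))
            child = a[:cut] + b[cut:]                   # one-point crossover
            i = rng.randrange(len(child))
            child = child[:i] + (child[i] ^ 1,) + child[i + 1:]  # bit flip
            worse = fitness(child) - fitness(a)
            # SA acceptance: take improvements, sometimes accept worse
            # offspring early on; fall back to acceptance after many rejects
            if (worse <= 0 or rejects >= 30
                    or rng.random() < math.exp(-worse / max(temp, 1e-9))):
                next_pop.append(child)
                rejects = 0
            else:
                rejects += 1
        pop, temp = next_pop, temp * cooling  # cool the acceptance temperature
    return min(pop, key=fitness)

# toy instance: recover an all-ones string standing in for a route encoding
target = tuple([1] * 12)
seed_rng = random.Random(42)
pop0 = [tuple(seed_rng.randrange(2) for _ in range(12)) for _ in range(20)]
best = hybrid_ga(lambda x: sum(t != g for t, g in zip(target, x)), pop0)
```

Elitism guarantees the best solution found is never lost, while the cooling temperature gradually narrows acceptance to improving offspring only.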

  • Di HUANG, Ziyu LIU, Yue LIU, Zhiyuan LIU
    Journal of Tsinghua University(Science and Technology). 2026, 66(3): 638-650. https://doi.org/10.16511/j.cnki.qhdxxb.2025.26.039

    Objective: Demand-responsive customized bus (CB) services, an emerging mode of public transportation, offer flexible routing for passengers with similar travel preferences. However, existing route planning algorithms for CB services do not fully utilize the extensive route data generated during daily operations. Owing to the large data size and variability in passenger travel demands, directly using all historical route planning results makes it difficult to extract effective information, which reduces algorithm efficiency. Common similarity analysis methods include the Jaccard and mean squared error (MSE) algorithms. The Jaccard algorithm measures similarity by calculating the ratio of the intersection to the union of two sets of stops, but it considers only the presence of bus stops and overlooks differences in passenger demand, potentially resulting in routes that risk vehicle overload. The common MSE algorithm calculates the squared differences in passenger numbers at each stop between historical and current data, offering a highly accurate reflection of data discrepancies. However, these methods do not consider the practical feasibility of historical routes, such as vehicle capacity and passenger waiting times. Methods: To address these issues, this study proposes a modified MSE algorithm based on Lagrangian relaxation that incorporates penalty terms for vehicle capacity and time window constraints into the objective function. By comparing historical data with current passenger demand, the modified MSE identifies similar and feasible historical routes for route planning. This approach comprehensively accounts for operational constraints to ensure that the selected routes are similar and feasible. Lagrangian relaxation is also applied in the iterative process of the adaptive large neighborhood search (ALNS) algorithm to ensure that the generated route plans meet the capacity and time requirements. 
In addition, a mathematical model based on historical route similarity is established using a sequence similarity matching algorithm to evaluate the similarity between historical and current routes. Stops are treated as elements in sequences, with similarity measured by the longest common subsequence of stops appearing in the same order. For multiple historical routes, the similarity between a current route and the historical set is defined as the maximum sequence similarity across all historical routes. To accelerate the model's solution process, a modified ALNS is developed, which constructs initial solutions based on historical data and integrates novel removal and insertion operators that consider route similarity to achieve better solutions. To prevent premature convergence, ALNS incorporates simulated annealing, accepting inferior solutions in the early stages and gradually limiting acceptance as iterations progress. ALNS terminates after a fixed number of iterations or when no improvement is observed. Additionally, a path-relinking process improves efficiency by fixing routes that exactly match historical results and carry equal or more passengers, thereby excluding them from further iterations. Results: The proposed approach leveraged high-quality historical data to generate highly competitive results. Case studies using real travel data from Nanjing demonstrated that the modified ALNS algorithm, incorporating route similarity, outperformed the common ALNS in terms of effectiveness, convergence speed, and route similarity. By using the modified MSE algorithm that accounted for historical scenarios, the proposed algorithm improved route feasibility, stability, and quality compared with common similarity analysis methods. Moreover, selecting historical data from scenarios that closely matched the current situation helped ensure high-quality routing.
Increasing the time window length reduced the total travel cost while maintaining a consistent level of route similarity. As the weight of the similarity cost parameter increased, the similarity between current and historical routes steadily rose until reaching a maximum value of one, whereas the total travel cost gradually decreased, highlighting the algorithm's ability to balance cost efficiency and route consistency effectively. Conclusions: Using historical route data that are similar to those used in current scenarios improves the quality of CB route planning solutions. This study not only advances research on CB route planning but also identifies gaps that serve as potential directions for future research. Exploring heterogeneous fleets, accounting for uncertain travel times, and incorporating additional heuristic techniques may further enhance the operational efficiency and practical applicability of the results. The findings can also serve as a reference for solving the CB route planning problem using historical information.
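The sequence-similarity measure described in the methods, longest common subsequence over ordered stop lists, maximized over the historical set, can be written directly as a dynamic program. Normalizing by the longer route's length is one reasonable choice, not necessarily the paper's exact definition:

```python
def lcs_length(route_a, route_b):
    """Longest common subsequence of two stop sequences, O(m*n) DP."""
    m, n = len(route_a), len(route_b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if route_a[i - 1] == route_b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1   # matching stop extends the LCS
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def similarity_to_history(current, history):
    """Similarity of a candidate route to a set of historical routes:
    the best normalized LCS over the set, as described above."""
    return max(lcs_length(current, h) / max(len(current), len(h))
               for h in history)
```

A current route identical to some historical route scores 1.0, the maximum the similarity weight drives the search toward.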

  • Hanqing WANG, Enjian YAO, Tianyu ZHANG, Yang YANG, Shasha LIU
    Journal of Tsinghua University(Science and Technology). 2026, 66(3): 651-660. https://doi.org/10.16511/j.cnki.qhdxxb.2026.26.012

Objective: The rapid adoption of electric vehicles has highlighted the urgent need for efficient and reliable recharging infrastructure. Battery swapping technology offers advantages over traditional plug-in charging, including faster turnaround times, grid-friendliness, and improved space efficiency, and has garnered increasing attention. A centralized-charging battery swapping network comprising battery swapping stations (BSSs) for battery replacement and centralized-charging stations (CCSs) for dedicated battery charging can optimize grid load distribution and reduce infrastructure costs. Nonetheless, existing planning approaches focus on facility siting and construction costs, largely overlooking spatiotemporal resource constraints caused by battery distribution and its impact on network operation costs. This oversight results in suboptimal resource allocation, increased operational costs, and conflicts between operator investments and user service quality. To address these challenges, this study proposes a joint planning method for centralized-charging battery swapping networks. This approach integrates dynamic battery logistics with infrastructure siting, aiming to minimize the total cost, which comprises infrastructure costs, operational expenses, and user time losses. Methods: This study develops a mixed-integer optimization model to formalize the co-location and capacity planning of BSSs and CCSs. The objective function minimizes the annual comprehensive cost, which includes construction and equipment costs for CCSs and BSSs, battery procurement and logistics costs, and user-related costs such as taxi operational losses due to travel, queuing, and swapping times. Constraints include proximity-based demand allocation using Voronoi partitioning, maximum queue length limits to ensure service quality, and a CCS-BSS linkage that ensures each BSS is served by its nearest CCS.
A nested simulation framework couples planning with dynamic operations to capture the operational intricacies. Battery logistics are modeled as a multi-vehicle routing problem with hard time windows, which is solved using insertion heuristics after virtual-node transformations to accommodate dynamic delivery requests. Delivery costs include distance-based fuel and lease expenses. CCS charging follows the "shortest time charge first" scheduling, with charger counts derived from daily demand and charging rates. Battery inventories are updated through the operational simulation of BSS and CCS, which calculates minimum procurement thresholds based on non-negative stock levels. The model utilizes an elite-preserving genetic algorithm to optimize siting decisions, iteratively refining planning based on simulation feedback, including delivery costs and battery inventory requirements. Results: The framework was applied to a case study in Tianjin Binhai New Area, where the optimal configuration consisted of 7 BSSs and 2 CCSs. The cost breakdown revealed that battery procurement was the dominant expense, followed by infrastructure and user time losses, while logistics costs were minimal. Model validation against static logistics baselines demonstrated a 13.60% reduction in battery procurement and a 5.98% decrease in total cost, resulting from the integration of planning and dynamic logistics. Sensitivity analysis revealed that the battery configuration at swap stations, including battery reserve quantity and delivery request thresholds, had a significant impact on the operation of the entire battery swapping network. Additionally, the maximum queue length constraint balanced service level and station construction cost; a smaller queue length required more stations and higher costs, while the total battery purchase quantity varied with queue length to maintain service levels. 
Conclusions: This paper integrates the operation and planning of battery swapping stations and centralized-charging stations into a unified model that dynamically links site selection with battery delivery costs and user time loss. This approach addresses the shortcomings of traditional planning, which often overlooks delivery costs and the quantity of batteries purchased. Experimental results show a reduction in battery purchases and in total cost compared with plans that ignore dynamic battery delivery, demonstrating the model's effectiveness in resource optimization and cost control. Additionally, this study's detailed co-simulation model, which captures battery charging, swapping, and delivery processes, enables multidimensional coordination of charging scheduling, battery reserves, and delivery route planning. Moreover, sensitivity analysis confirms that considering dynamic delivery guides reasonable site selection and resource allocation, while further controlling the scale of battery purchases and reducing comprehensive costs.
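The proximity-based allocation constraints, Voronoi partitioning of demand to stations and a nearest-CCS linkage for each BSS, amount to nearest-site assignment. A minimal sketch follows; planar Euclidean distance stands in for road-network distance, and the coordinates are toy values, not the Tianjin case data:

```python
import math

def assign_nearest(points, sites):
    """Voronoi-style allocation: each named point goes to its nearest site.
    Euclidean distance is a stand-in for road-network travel distance."""
    return {name: min(sites, key=lambda s: math.dist(p, sites[s]))
            for name, p in points.items()}

# toy layout (coordinates are illustrative)
bss = {"B1": (0, 0), "B2": (10, 0), "B3": (10, 10)}
ccs = {"C1": (2, 1), "C2": (9, 9)}
demand = {"d1": (1, 1), "d2": (8, 2), "d3": (9, 9)}

demand_to_bss = assign_nearest(demand, bss)  # taxi demand -> nearest BSS
bss_to_ccs = assign_nearest(bss, ccs)        # each BSS served by nearest CCS
```

In the full model these assignments feed the queueing and battery-delivery simulations, so moving a station changes both the demand it serves and the logistics routes that supply it.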

  • Tianyang GAO, Dawei HU
    Journal of Tsinghua University(Science and Technology). 2026, 66(3): 661-676. https://doi.org/10.16511/j.cnki.qhdxxb.2026.26.018

    Objective: The mismatch between vehicle supply and passenger demand remains a persistent challenge in public transportation. Modular, zonal-based flexible bus services, as an innovative urban public transit mode, can adjust vehicle capacity and routes in response to passenger demand. However, the simultaneous optimization of vehicle speed, route, and capacity has not been adequately addressed, limiting the system's overall efficiency and flexibility. To address these challenges and minimize total system costs, this study aims to jointly optimize vehicle speed, route, and capacity allocation for modular zonal-based flexible bus services. Methods: First, a mixed-integer nonlinear programming (MINLP) model was developed to minimize total costs, integrating decisions across three interrelated dimensions: vehicle speed regulation, capacity allocation, and route planning. This model considers various constraints, including route generation, operating time, and adjustments to vehicle capacity. Furthermore, to improve computational efficiency, the MINLP model was linearized into a mixed-integer linear programming model by introducing auxiliary variables and constructing non-negative integer sequences. This linearization facilitated the use of standard optimization solvers for small-scale instances. Second, a hybrid heuristic algorithm combining adaptive large neighborhood search and speed optimization algorithms was designed to solve large-scale real-world problems. To validate the proposed model and algorithm, numerical experiments were conducted using the established Sioux Falls traffic network. Subsequently, a real-world case study of the Xi'an regional road network was performed, comparing the proposed model with a baseline that did not consider speed optimization, followed by a series of sensitivity tests. 
Results: The results revealed the following: First, compared with the baseline, the proposed model reduced total system costs by 25.03%, with vehicle and passenger time costs decreasing by 25.24% and 24.79%, respectively. These improvements primarily resulted from dynamic speed adjustment, which aligns vehicle arrivals with passenger time windows, thereby reducing waiting time and improving efficiency. Second, incorporating speed optimization reduced the number of deployed buses from 13 to 9 and shortened total travel distances from 90.28 to 75.54 km, demonstrating improved resource utilization. Third, the total costs and route numbers initially decreased and then stabilized as the maximum operating time increased. When the maximum operating time was short, more buses were required to meet demand, leading to higher total costs. Appropriately relaxing this parameter could effectively expand the service coverage of individual routes and improve vehicle utilization efficiency, thereby reducing total costs. However, once the parameter exceeded a certain threshold, further increases in the operating time would no longer yield optimization benefits due to constraints imposed by passenger time costs. Conclusions: The following conclusions can be drawn from the study's findings: (1) The proposed model demonstrates superior performance in minimizing total system costs compared with baseline models. (2) Integrating speed optimization significantly reduces passenger waiting times and operational expenses. (3) Sensitivity analysis reveals the diminishing marginal returns of maximum operating time, identifying a critical threshold for balanced service efficiency. These findings provide a validated theoretical framework to enhance the efficiency and sustainability of modular autonomous vehicle systems in flexible public transit.
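One standard device for the linearization step described in the methods, replacing the product of a binary and a bounded continuous variable with an auxiliary variable and four linear constraints, can be checked numerically. The specific nonlinear terms of the paper's MINLP are not reproduced here; this shows only the generic big-M construction:

```python
def linearize_binary_times_continuous(x, y, U):
    """Big-M linearization of z = x*y with x in {0,1} and 0 <= y <= U.
    The four linear constraints
        z <= U*x,   z <= y,   z >= y - U*(1 - x),   z >= 0
    pin the auxiliary variable z to exactly x*y; this function computes the
    feasible interval they define and verifies it collapses to x*y."""
    lo = max(0.0, y - U * (1 - x))   # lower bounds on z
    hi = min(U * x, y)               # upper bounds on z
    assert abs(lo - hi) < 1e-12 and abs(lo - x * y) < 1e-12
    return lo
```

Applying such substitutions to every high-order term yields the mixed-integer linear program that standard solvers can handle, as the abstract notes for small-scale instances.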