
Objective: Understanding how multicomponent fuel droplets combust in microgravity environments is crucial for characterizing real fuels during spray combustion. Constructing surrogate fuels that represent real fuels is essential for balancing the accurate representation of complex, multicomponent fuels against acceptable computational costs. Existing surrogate fuels often match the overall properties of real fuels; however, this approach can introduce inaccuracies because components of differing volatility vaporize at different rates. This article aims to study the differences in droplet evaporation and autoignition behaviors among hydrocarbon fuels with various carbon numbers and volatilities in both single- and multi-component scenarios. Methods: We developed a numerical simulation model for single droplet combustion of multicomponent fuels under the assumption of spherical symmetry. This model solves the one-dimensional convective diffusion equations for heat and mass transfer and the continuity equations for the gas and liquid phases. We examined single-component n-alkanes with carbon numbers ranging from 9 to 13 and four multicomponent mixtures with an average carbon number of 10, designed to represent surrogate fuels. For single-component fuels, our analysis focused on the zero-dimensional ignition delay time, the ignition delay time for droplet combustion, and droplet radius changes over time. For multicomponent fuels, we also studied the component distribution in the gas and liquid phases to understand the coupling among evaporation, diffusion, and reaction during droplet combustion. Our quantitative analysis of ignition delay time involved simulating the zero-dimensional ignition time under the local temperature and composition at every point in space throughout the droplet combustion process.
Results: The results showed that although zero-dimensional ignition delay times did not differ significantly among components with different carbon numbers, fuel droplets with lower carbon numbers had shorter ignition delay times because of their higher volatility. In multicomponent mixtures, despite similar molecular structures, average carbon numbers, and zero-dimensional ignition delay times between the four mixtures and n-decane, there were significant differences in droplet ignition delay times and droplet radius evolution. The analysis revealed that in multicomponent fuels, high-volatility components evaporate first and low-volatility components evaporate later. Because liquid-phase mass transfer rates are limited, low-volatility components accumulate at the droplet surface. This accumulation reduces surface volatility, extends the ignition delay time, and prolongs the droplet combustion lifetime. Our quantitative analysis found that droplet evaporation influences ignition in two ways: it mixes components and propagates the reaction, and it accelerates the ignition process through continuous fuel evaporation. Both processes are closely related to preferential vaporization. Conclusions: Therefore, surrogate fuel models that match the overall physical and chemical properties of real fuels may exhibit droplet combustion properties different from those of real fuels. This finding highlights the importance of accurately modeling the complex evaporation and combustion processes of multicomponent fuels in microgravity environments to improve spray combustion simulation accuracy.
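As background to the droplet-lifetime trends discussed above, the classical d²-law for single-droplet evaporation can be sketched in a few lines. This is an illustrative simplification only, not the article's one-dimensional multicomponent model; the initial diameter and evaporation constant below are assumed values.

```python
# Classic d^2-law for single-droplet evaporation: d(t)^2 = d0^2 - K*t.
# Illustrative sketch; d0 and the evaporation constant K are assumed values.

def droplet_diameter(d0_mm, K_mm2_per_s, t_s):
    """Droplet diameter at time t under the d^2-law (0 once fully evaporated)."""
    d_sq = d0_mm**2 - K_mm2_per_s * t_s
    return max(d_sq, 0.0) ** 0.5

def droplet_lifetime(d0_mm, K_mm2_per_s):
    """Time for complete evaporation: t_life = d0^2 / K."""
    return d0_mm**2 / K_mm2_per_s

d0, K = 1.0, 0.8  # mm, mm^2/s (assumed illustrative values)
print(droplet_lifetime(d0, K))        # 1.25 s
print(droplet_diameter(d0, K, 0.5))   # diameter halfway through burning
```

A higher evaporation constant K (higher volatility) shortens the lifetime, which is consistent with the shorter ignition delays reported for low-carbon-number droplets.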
Objective: Studying near-limit flames under microgravity conditions offers significant advantages because the elimination of gravity-driven natural convection removes buoyancy effects, which simplifies the analysis of flame dynamics. Key objectives include examining the influence of nitrogen dilution and gravity on flame lift-off height and the mechanisms governing flame stability. Methods: This research investigates the controlling factors in lifted laminar diffusion flames of dimethyl ether (DME) under both normal gravity and microgravity environments, utilizing the Tsinghua University Freefall Facility and its dedicated jet flame apparatus. Results: The experimental results reveal that an increased nitrogen dilution ratio shifts flame lift-off behavior from buoyancy-dominated to inertia-dominated. This shift is attributed to reduced chemical heat release and diminished buoyancy-driven convection. Additionally, as the DME volume fraction increases, the range between the upper and lower limits of lift-off height narrows, suggesting a critical DME concentration. Beyond this critical point, the flame struggles to stabilize, indicating fundamental constraints on flame behavior at certain fuel concentrations. Under microgravity, flames behave differently than under normal gravity. At the same lift-off height, flames display a higher flow velocity at the intersection of the stoichiometric contour and the horizontal lift-off plane. This outcome is attributed to reduced momentum dissipation in the absence of gravity, leading to a concentrated and faster flow field. Furthermore, the flame's relative surface area increases, leading to enhanced heat loss and a subsequent reduction in flame temperature. Consequently, the flame propagation speed decreases, causing the flame to lift higher to maintain stability. The study also explores the theoretical aspects of flame stability by balancing flow velocity with flame propagation speed along the stoichiometric contour.
In normal gravity, buoyancy acts as a key stabilizing force. However, in microgravity, where buoyancy is negligible, stability relies on chemical reactions, radiation, and forced convection. Theoretical models support these findings, showing how flow fields and flame propagation dynamics differ across gravity environments. The relative time scales of buoyancy, chemical reactions, and heat transfer are critical in determining flame behavior. Under normal gravity, the time scale of buoyancy-induced convection aligns with that of chemical reactions, making buoyancy a major factor in flame stability. However, in microgravity, the buoyancy time scale becomes much larger, rendering buoyancy effects negligible and allowing other factors, such as chemical kinetics and heat loss, to dominate. The findings of this study have broader implications for combustion science, particularly in space exploration and propulsion system design. Understanding flame behavior in microgravity is crucial for designing safe and efficient combustion systems for spacecraft, where fire safety and fuel efficiency are paramount. Additionally, these insights have terrestrial applications, contributing to cleaner and more efficient engine designs. Conclusions: This research sheds light on the transition from buoyancy-dominated to inertia-dominated flame lift-off behavior with increasing nitrogen dilution, highlighting the role of a critical DME volume fraction that limits flame lift-off behavior. The study enhances our understanding of DME flame dynamics under microgravity, with potential applications in spacecraft fire safety, combustion optimization, and propulsion systems. The results underscore the critical role of gravity in designing and analyzing combustion systems, particularly for space-related applications. Future work could extend these findings to other fuels and more complex combustion scenarios, further advancing our understanding of combustion in extreme environments.
Objective: Large-scale hydropower projects generate substantial amounts of heterogeneous data dispersed across design, construction, and supervision units. The interoperability among stakeholders is suboptimal due to the heterogeneity of data structures and professional contexts. Consequently, information sharing remains inefficient. Existing studies have typically focused on specific data types or lifecycle stages, lacking a unifying framework to facilitate comprehensive, full-cycle data sharing. To address this issue, this study proposes the development of a data-sharing agent tailored to the needs of hydropower engineering. The proposed agent is designed to accommodate structured, semi-structured, and unstructured data, and it integrates external tools such as time-series databases, knowledge graphs, and text vector databases. This integration enables accurate, on-demand data retrieval. By enhancing the tool-learning capabilities of large language models, the agent bridges data silos, enhances cross-domain collaboration, and lays a solid technical foundation for intelligent construction in complex hydropower projects. Methods: The research commences with a systematic analysis of data-sharing requirements across the full lifecycle of hydropower projects, encompassing time-series monitoring data, technical documentation, and parametric design files. Based on this analysis, a comprehensive agent framework is designed to support multi-modal data interoperability. To ensure its practicality, a supporting tool system is constructed that integrates intelligent modules for database retrieval, knowledge graph querying, and rule-based inference. Furthermore, an action-planning dataset comprising over 4000 samples is developed to train the agent in decision-making and tool invocation. 
Two versions of the DeepSeek-R1-Distill-Qwen model (1.5B and 7.0B parameters) are fine-tuned using this dataset to enhance structured parameter extraction, multi-step reasoning, and action planning capabilities. To assess performance, a benchmark testing dataset comprising hundreds of real-world business queries derived from hydropower project workflows is established and manually annotated to ensure fairness and reproducibility. Results: Experimental results demonstrated that the fine-tuned models substantially improved planning and reasoning performance. A comparative analysis revealed that the 1.5B and 7.0B models achieved 270% and 104% improvements in planning accuracy, respectively, compared with their pre-fine-tuning counterparts. On the business query test set, the overall output accuracies were 65.83% and 90.83%, respectively, thereby confirming a significant enhancement in model reliability and practical utility through fine-tuning. Notably, the 7.0B model consistently outperformed the smaller version, highlighting the larger model's capacity to handle complex, multi-step reasoning tasks. A practical deployment of the agent-based data-sharing platform was conducted for a real hydropower project in a representative watershed. Under static and structured data-sharing conditions, the agent maintained an average response time of less than 20 s. In contrast, dynamic monitoring scenarios involving high-frequency data streams exhibited average latencies exceeding 30 s, with peaks exceeding 60 s under intensive analytical loads. Conclusions: This study proposes a comprehensive framework for constructing a data-sharing agent that effectively addresses critical challenges in current hydropower data-sharing practices, particularly in high-altitude, data-scarce environments.
By aligning agent design with engineering-specific requirements and integrating a highly refined large language model with a domain-oriented tool ecosystem, the proposed method significantly enhances the efficiency, intelligence, and semantic interoperability of data sharing. The agent reduces cross-disciplinary access barriers, improves system responsiveness, and supports knowledge-driven decision-making. The results from field applications confirm its considerable potential for practical implementation in intelligent construction platforms. Furthermore, the findings of this study provide a scalable, generalizable technical foundation for the future development of data-driven management and intelligent decision-support systems in complex hydropower projects.
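The on-demand retrieval idea described in Methods, routing each business query to a time-series database, knowledge graph, or text vector store, can be caricatured with a simple keyword rule. The actual agent uses a fine-tuned LLM for action planning and tool invocation; every tool name and keyword below is an assumption made for illustration only.

```python
# Minimal stand-in for the agent's tool routing: map a business query to one
# of three external tools. The real system plans actions with a fine-tuned
# LLM; this keyword rule and all names here are illustrative assumptions.

TOOL_KEYWORDS = {
    "timeseries_db": ["monitoring", "sensor", "trend"],       # time-series data
    "knowledge_graph": ["relation", "component", "depends"],  # entity queries
    "vector_store": ["document", "report", "specification"],  # text retrieval
}

def route_query(query):
    """Return the first tool whose keywords match the query."""
    q = query.lower()
    for tool, words in TOOL_KEYWORDS.items():
        if any(w in q for w in words):
            return tool
    return "vector_store"  # fall back to semantic document search

print(route_query("Show the sensor trend for dam section 3"))  # timeseries_db
```

The point of the sketch is the dispatch structure, not the rule itself: the fine-tuned model replaces the keyword table with learned action planning over the same tool set.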
Objective: Self-protected underwater concrete (SPUC), characterized by strong anti-washout performance and adaptability to underwater construction, offers a promising solution for scour protection of offshore wind turbine monopile foundations. Conventional riprap protection systems rely on loose granular materials and are therefore susceptible to stone displacement, progressive scour, and structural instability under long-term wave-current interactions, leading to limited durability and high maintenance costs. Cementing riprap using underwater concrete has been proposed to improve the integrity and stability of the protection layer; however, the formation mechanisms of cemented rockfill under underwater conditions remain insufficiently understood. In particular, the effects of key construction and material parameters on the filling behavior and final morphology of cemented rockfill have not been systematically quantified. Methods: Systematic physical model experiments were conducted to investigate the free filling and cementation behavior of SPUC in rockfill under unconfined conditions. Rockfill with particle sizes of 10–15 cm, representative of offshore wind engineering practice, was used. Three key parameters (pouring height, concrete slump-flow diameter, and aggregate particle size) were varied to examine their effects on cemented rockfill morphology and cementation characteristics under static water conditions. Nine static-water test cases were designed. After curing, uncemented surrounding stones were removed to expose the cemented rockfill bodies. A layered measurement method based on equivalent diameters was applied to quantitatively characterize the spatial distribution, slope angle, and fully cemented height. In addition, large-scale flume experiments were performed to assess SPUC feasibility under dynamic water conditions representative of offshore construction environments.
Results: Pouring height was identified as a critical construction parameter controlling concrete discharge behavior and penetration depth. At a pouring height of 15 cm, outlet blockage occurred, causing concrete accumulation near the rockfill surface and resulting in hourglass-shaped or inverted conical cemented structures. When the pouring height was increased to 25 cm or greater, concrete flowed freely into rockfill pores, forming typical pyramidal cemented bodies with wider bases. Increasing pouring height enhanced shear-induced mixing between concrete and water, increasing the apparent water-cement ratio and reducing viscosity and yield stress, thereby improving penetration capacity. The slump-flow diameter governed cemented rockfill morphology through the competition between surface spreading and downward penetration. A slump-flow diameter of approximately 500 mm achieved an optimal balance between low yield stress and moderate plastic viscosity, promoting deeper penetration and producing the largest cemented base area. In contrast, excessively large or small slump-flow diameters intensified surface spreading and inhibited penetration, resulting in steeper cemented slopes. Aggregate particle size controlled the filling mechanism through a particle-to-throat size ratio effect. Concrete with smaller aggregates (5–10 mm) readily passed through rockfill pore channels and achieved integral cementation, whereas larger aggregates (10–20 mm) induced pronounced particle blocking, leading to surface retention and inverted conical cemented structures. Dynamic water experiments demonstrated that SPUC application remains feasible at a flow velocity of approximately 0.63 m/s. Although flowing water enhanced surface spreading and reduced penetration depth relative to static conditions, stable cemented rockfill structures were still achieved through appropriate aggregate selection and continuous anti-washout admixture supply.
Conclusions: This study advances the mechanistic understanding of cemented rockfill formation by quantitatively elucidating penetration-diffusion competition and particle exclusion thresholds. Pouring height, slump-flow diameter, and aggregate size were identified as key design parameters governing SPUC performance. Large-scale flume experiments confirmed the feasibility of SPUC under representative offshore flow conditions. The results provide practical guidance for material design, construction parameter optimization, and offshore engineering applications.
Objective: Scour around pile foundations, induced by marine currents and waves, greatly threatens the structural integrity of offshore wind turbines and submarine cables. While various scour protection measures have been employed to mitigate these effects, conventional methods such as riprap and sandbags exhibit poor durability under complex hydrodynamic conditions, necessitating frequent maintenance and incurring high long-term costs. Consequently, there is a pressing demand for the development of more efficient and durable scour protection technologies. Methods: This study proposes a scour protection technology using a gridded cemented riprap system for offshore wind power projects, detailing its protective mechanism, material selection, and structural design. Large-scale tests were performed to explore the protective effectiveness and scour resistance of both gridded cemented and traditional riprap systems. To further confirm its engineering applicability, pre-construction casting tests were performed to determine the optimal mix ratio and flowability of self-protecting underwater concrete. In parallel, an investigation was conducted to determine the effects of different pouring parameters on concrete diffusion. Additionally, a case study was performed to compare the performances of the gridded cemented and traditional riprap systems using a newly developed evaluation method for construction quality and long-term scour protection durability. First, the annular protective structure around the foundation was divided into grid units. Then, key indicators such as average elevation, mean elevation difference, and elevation standard deviation within each grid were calculated and analyzed using field monitoring data.
By comparing pre- and post-construction (or pre- and post-operation) monitoring data, these parameters were assessed to confirm compliance with protection specifications, with a focus on examining riprap geometry, thickness distribution, and structural uniformity. Results: (1) The large-scale flume tests revealed that the cemented riprap system presented excellent anti-scour performance. Compared with the unprotected condition, the maximum scour depth decreased from one pile diameter (D) to 0.18D, and the maximum scour range decreased from 2.00D to 0.42D. By contrast, the traditional riprap system showed that more than 50% of the stone particles were displaced by water flow, leading to structural instability and damage. (2) Pre-construction casting tests demonstrated that the diffusion range of underwater concrete (Ds) on the riprap surface was closely related to the pouring pipe diameter (Dt) and volume (Vm). The ratio Ds/Dt was inversely proportional to the pipe diameter until reaching a minimum value of 4.5 and was linearly correlated with the pouring volume. (3) Construction quality assessment by the new evaluation method, which was based on average elevation, elevation difference, and standard deviation within grid units, showed that the scour protection for the offshore foundation met design specifications. Specifically, 88% of grids achieved a backfill thickness exceeding three times the median particle size, and post-pouring elevation standard deviations remained within 0.63, satisfying uniformity requirements. (4) Long-term performance comparisons indicated that the gridded cemented riprap system significantly outperformed the traditional system in extreme scour resistance and durability.
Under combined wave-current conditions, the gridded cemented riprap system exhibited no structural erosion (local or global) over a one-year monitoring period, with a maximum elevation loss of only 0.30 m and a total scour volume of 18 m³, which was significantly lower than that of the conventional riprap system. Conclusions: This study comprehensively investigates the application of gridded cemented riprap technology in offshore wind turbine scour protection through experimental investigations, construction methodology, and performance evaluation. The results demonstrate its superior erosion resistance and long-term durability compared with the traditional riprap system, offering valuable insights for future engineering applications.
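The grid-unit indicators on which the evaluation method rests (average elevation, mean pre/post elevation difference, and elevation standard deviation) can be sketched as follows. Only the indicator definitions follow the text; the sample elevations are made-up values, and the 0.63 threshold is the uniformity bound quoted in the Results.

```python
# Per-grid quality indicators for the scour protection layer, as described
# in Methods: average elevation, mean pre/post elevation difference (a
# backfill-thickness proxy), and elevation standard deviation (uniformity).
# Sample elevations are illustrative assumptions.
from statistics import mean, pstdev

def grid_metrics(pre, post):
    """Indicators for one grid unit from pre- and post-pouring elevations (m)."""
    diffs = [a - b for a, b in zip(post, pre)]
    return {
        "avg_elevation": mean(post),  # average post-pouring elevation
        "mean_diff": mean(diffs),     # mean elevation gain (thickness proxy)
        "std_dev": pstdev(post),      # spread of elevations within the grid
    }

pre = [-12.4, -12.6, -12.5, -12.7]   # pre-construction seabed elevations (m)
post = [-11.2, -11.5, -11.3, -11.6]  # post-pouring elevations (m)
m = grid_metrics(pre, post)
print(m)
# A grid would pass the uniformity check if m["std_dev"] <= 0.63.
```

Aggregating these per-grid values over the annular protection zone gives the compliance statistics reported in the Results (e.g., the share of grids exceeding the required backfill thickness).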
Objective: With the continuous expansion and increasing complexity of water diversion tunnels in hydropower projects, their long-term structural safety has become a critical engineering challenge. Conventional safety evaluation methods often rely on qualitative assessments or multi-index systems, which are highly subjective and fail to adequately account for progressive material deterioration and time-dependent deformation. This study proposes an integrated quantitative framework that combines real-time inversion of mechanical parameters and deformation evolution analysis to dynamically evaluate the structural safety of tunnels. Methods: The proposed framework integrates four major components: data preprocessing, parameter inversion, deformation simulation, and safety evaluation. First, the raw deformation monitoring data are preprocessed by imputing missing values using the K-nearest neighbors (KNN) algorithm, identifying and correcting outliers with a sliding-window Z-score method, and reducing noise through logarithmic trend fitting. Second, a physics-informed inversion approach combining deep learning architectures—fully connected layers (FCL) and gated recurrent units (GRUs)—with Bayesian optimization is established to infer the current mechanical parameters of the tunnel from preprocessed deformation data. Third, an elastoviscoplastic damage creep constitutive model based on internal variable thermodynamics is employed to simulate deformation behavior under various material-degradation scenarios, represented by different strength reduction coefficients (Kr). Finally, based on the analysis of material creep behavior and deformation evolution patterns, a time-dependent "3S" safety evaluation index system is established. This system comprises the long-term deformation acceleration safety coefficient (S1), the nonlinear deformation initiation safety coefficient (S2), and the short-term deformation acceleration safety coefficient (S3). 
The safety state of the tunnel structure is quantified according to the relevant deformation evolution characteristics using the proposed 3S safety coefficients. The physical implications of these indices are as follows: S1 characterizes the critical point at which the structure transitions into an accelerated creep phase under continuous strength attenuation, indicating the long-term instability risk; S2 reflects the onset of deviation from the linear response during initial deformation, marking the beginning of dominance by nonlinear mechanical behavior; and S3 indicates the threshold for notable acceleration of deformation within a defined short-term period, serving as an indicator of potential sudden instability. Results: The proposed method was implemented in the JLL Tunnel, a 20 km long underground structure located in Hunan Province, China, which features complex geological conditions. The mechanical parameters were successfully inverted from field monitoring data, with simulated deformation curves showing high agreement with the measured values. Numerical simulations under different Kr conditions revealed distinct deformation patterns. For Kr ≥ 0.7, deformation stabilized after initial convergence. When 0.4 ≤ Kr ≤ 0.6, deformation exhibited slow growth, followed by an acceleration phase. For Kr ≤ 0.3, deformation accelerated rapidly within a short time. The computed 3S safety coefficients were S1 = 2.4–3.0, S2 = 3.7–4.6, and S3 = 5.7–7.3, indicating that the tunnel is currently in a safe state with sufficient safety margins. These results validated the method's effectiveness in distinguishing between short- and long-term risks and in providing early safety warnings through deformation trajectory analysis. Conclusions: This study proposes an integrated quantitative framework for tunnel structural safety evaluation that effectively combines real-time monitoring data, physics-based modeling, and deformation evolution analysis.
The established 3S index system provides refined insight into structural behavior under material degradation and enables safety assessment across multiple time scales. Compared with conventional methods, the proposed framework enhances objectivity, supports the dynamic prediction of time-dependent performance, and facilitates lifecycle safety management and preventive maintenance of tunnel structures. The methodology demonstrates strong generalizability and offers considerable practical value for risk prevention and sustainable operation in tunnel engineering.
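One step of the preprocessing pipeline described in Methods, sliding-window Z-score outlier correction of the deformation series, can be sketched as follows. The window size, threshold, and replace-with-window-mean strategy are assumptions for illustration; the pipeline also includes KNN imputation and logarithmic trend denoising, which are not shown.

```python
# Sliding-window Z-score correction sketch: a point is flagged as an
# outlier when it deviates from the statistics of its local window
# (excluding the point itself) by more than `thresh` standard deviations,
# and is replaced by the local window mean. Window/threshold are assumed.
from statistics import mean, pstdev

def zscore_correct(series, window=7, thresh=3.0):
    """Return a copy of `series` with local outliers replaced."""
    out = list(series)
    half = window // 2
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        win = series[lo:i] + series[i + 1:hi]  # neighbors only
        if len(win) < 2:
            continue
        mu, sigma = mean(win), pstdev(win)
        if sigma > 0 and abs(series[i] - mu) / sigma > thresh:
            out[i] = mu
    return out

raw = [2.0, 2.05, 2.1, 9.9, 2.15, 2.2, 2.25]  # 9.9 mimics a spurious reading
print(zscore_correct(raw))
```

Excluding the center point from the window statistics matters: a single large spike otherwise inflates the local standard deviation enough to hide itself.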
Objective: Accurate identification of road sections that critically impact the efficiency of urban road network operation is an important prerequisite for optimizing traffic resource allocation and improving road network resilience. Traditional identification methods often rely on topological attributes and ignore the time-varying characteristics of traffic states, which makes it difficult to capture key sections across different periods. Therefore, this study proposes a key link identification model that integrates structural and feature information, aiming to provide a more accurate basis for decision-making in urban traffic control and emergency management. Methods: In this study, a key road segment recognition model based on a graph attention network (GAT-KSI) is constructed. First, given the time-varying nature of traffic, GAT is used to obtain a feature influence score that quantifies the influence intensity of the running state between neighboring road sections. Second, by combining topological information from neighboring road sections, a structural influence score quantifying the potential for traffic flow to propagate between them is obtained. Then, the feature influence score and the structural influence score are combined to measure the feature-structure fusion influence between neighboring road sections in the road network. On this basis, the PageRank model is improved by using the fusion influence score between neighboring road sections to rank key road sections. Finally, based on road network data from Shenzhen, China, a load-capacity model of link cascading failure is constructed, and the congestion failure process of key links under two typical traffic situations, peak and off-peak, is simulated. The network efficiency index is used to evaluate the impact of congestion failures in key sections identified by different methods on the operation efficiency of the road network, thereby reflecting the ability of each method to identify these sections.
Results: The experimental results show that the top 20 key sections identified by the GAT-KSI model can cause the road network efficiency to decrease by 55.9% during the peak period and 50.0% during the off-peak period when congestion failures occur. By contrast, the top 20 critical links identified by baseline methods such as degree ranking, betweenness centrality ranking, the traditional PageRank model, the gravity model, graph convolutional network, graph sample and aggregate (GraphSAGE), and GAT v2 only resulted in a decrease in road network efficiency of 20.0% to 43.9%. This result verifies that the GAT-KSI model can more accurately identify key road sections that have a significant impact on the operation of the road network under different traffic conditions, demonstrating stronger scene adaptability and recognition stability. Conclusions: The GAT-KSI model proposed in this study achieves accurate identification of key road sections by integrating road network topology and traffic time-varying characteristics. The model not only considers the dynamic propagation characteristics of traffic states but also accounts for the potential impact mechanisms of the road network structure, and it shows superior recognition performance across different traffic scenarios. The model can provide effective technical support for key node identification, congestion prevention, and emergency response in urban traffic management, offering important theoretical value and practical benefits.
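The ranking step at the core of GAT-KSI, a PageRank variant whose transition weights are the fused feature-structure influence scores between neighboring links, can be sketched as follows. The influence matrix below is made up for illustration; in the model, these scores come from the GAT outputs and the topological analysis.

```python
# Weighted PageRank over fused influence scores: each link distributes its
# rank to neighbors in proportion to the pairwise influence. The 3x3
# influence matrix here is an illustrative assumption, not model output.

def weighted_pagerank(weights, d=0.85, iters=100):
    """weights[i][j]: fused influence of link i on neighbor j (0 if none)."""
    n = len(weights)
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1 - d) / n] * n
        for i in range(n):
            total = sum(weights[i])
            if total == 0:
                continue  # dangling node: rank not redistributed
            for j in range(n):
                if weights[i][j] > 0:
                    new[j] += d * rank[i] * weights[i][j] / total
        rank = new
    return rank

# Three road links; e.g., link 0 influences link 2 strongly (assumed scores).
W = [[0.0, 0.2, 0.8],
     [0.5, 0.0, 0.5],
     [0.3, 0.7, 0.0]]
r = weighted_pagerank(W)
print(sorted(range(3), key=lambda i: -r[i]))  # links ordered by criticality
```

Replacing the uniform transition weights of classical PageRank with the fused scores is what lets the ranking track time-varying traffic states rather than topology alone.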
Objective: This study aims to propose a vehicle carbon emission calculation model for provincial highway networks. Drawing on actual traffic volume observation data, the model seeks to integrate vehicle type differences, gross vehicle weight, driving conditions, and a theoretical fuel consumption framework to enable refined, road-segment-specific emissions estimation. Methods: 1) A "bottom-up" approach was adopted. Based on a maximum entropy OD backpropagation model, the OD matrix was redistributed to transform the observed traffic data of some road sections into the traffic distribution of the entire road network. 2) The average speed of road sections, estimated using the function proposed by the Bureau of Public Roads, was divided into free-flow and non-free-flow states, and the driving cycles of highway vehicles in different speed intervals were obtained. 3) CATC series driving cycles were deconstructed based on vehicle type, total mass, and average speed. Typical operating condition segments were extracted, and new driving cycles adapted to provincial highway networks were reconstructed using Python. 4) Using the operating condition curves and driving volume, the fuel consumption and electricity consumption of each vehicle type were calculated and then converted to carbon emissions based on the carbon balance principle. 5) Because the fuel consumption rate model parameters were difficult to obtain, fuel consumption rates were calculated based on energy consumption values per second under different operating conditions in the MOVES open-source database. 6) The results of the proposed model were compared with those of simulations based on MOVES 4.0 and COPERT 5.8, as well as various fuel consumption data released by the provincial bureau of statistics, and the model's feasibility was verified in terms of carbon emission deviation rate and vehicle energy consumption reference value.
Results: Considering the Ningxia Hui Autonomous Region as a case study, the monthly total carbon emissions of the provincial highway network were calculated, and the distribution characteristics of carbon emissions on the highway sections were analyzed. The results showed that the total carbon emission of the provincial highway network in November 2019 was 195 900 tons. Trucks were the main source of carbon emissions, accounting for as much as 87.85%. Heavy-duty trucks made a particularly significant contribution, constituting 76.47% of the carbon emissions from trucks. The carbon emissions from the operation of small passenger cars accounted for only 12.15% and 82.93% of all vehicle types and passenger vehicles, respectively. Conclusions: Through the case study, the proposed model was verified to have high calculation accuracy. The reconstructed driving cycles are suitable for the complex road networks in China. The proposed model considered the driving cycles of various vehicle types in different speed ranges, avoiding reliance on international carbon emission software, such as MOVES and COPERT, that is inadequate for these conditions. This research will support the precise calculation of vehicle carbon emissions in provincial highway networks.
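The carbon balance principle used in step 4 of the Methods converts fuel burned into CO2 emitted: under complete combustion, essentially all fuel carbon leaves as CO2, so m_CO2 = m_fuel * w_C * 44/12, where w_C is the fuel's carbon mass fraction and 44/12 is the CO2-to-C molar mass ratio. The carbon fractions below are typical textbook values, not the calibrated factors used in the study.

```python
# Carbon-balance conversion from fuel mass to CO2 mass:
#   m_CO2 = m_fuel * w_C * (44/12)
# The carbon mass fractions are typical values (assumptions), not the
# study's calibrated factors.

CARBON_FRACTION = {"gasoline": 0.866, "diesel": 0.862}  # kg C / kg fuel

def co2_from_fuel(fuel_kg, fuel_type):
    """CO2 mass (kg) from complete combustion of fuel_kg of the given fuel."""
    return fuel_kg * CARBON_FRACTION[fuel_type] * 44.0 / 12.0

print(round(co2_from_fuel(1.0, "diesel"), 3))  # 3.161 kg CO2 per kg diesel
```

Summing this conversion over each vehicle type's fuel consumption on each road segment yields the segment-level and network-level totals reported in the Results.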
Objective: With the rapid development of vehicular networks (vehicle-to-everything) and autonomous driving technologies, cooperative perception has become a crucial technology to enhance the environmental perception capability of connected and autonomous vehicles (CAVs). Individual perception information is shared among CAVs, and cooperative perception can effectively expand the sensing range, reduce occlusion effects, and improve perception redundancy. However, vehicle localization errors are unavoidable in real driving scenarios due to sensor noise, environmental interference, and communication uncertainty. Localization errors often lead to spatial misalignment among point clouds from multiple vehicles, thereby reducing the performance of multivehicle cooperative perception. Mitigating the impact of localization errors on cooperative perception and improving computational efficiency for on-board deployment remain challenging problems. Methods: To address the above issues, this paper proposed a cooperative perception method of CAVs based on point feature fine alignment. First, a lightweight point feature extraction module was designed using PointConvFormer to process point cloud data collected by individual vehicles. By integrating PointConvFormer layers into bottleneck residual blocks, the proposed feature extraction module preserves the three-dimensional spatial structure of the point cloud while capturing local geometric features and global contextual information. Second, the cross-vehicle point feature hierarchical fine alignment module was designed to address spatial misalignment in cross-vehicle data fusion. This module used the global poses of multiple CAVs, collected from positioning systems, to achieve coarse alignment of point features between the surrounding CAVs and the ego-vehicle. 
The fine-grained alignment strategy was further implemented using local overlapping point-cloud registration to improve the spatial feature consistency of the aggregated point cloud, and the point feature similarity within overlapping regions was exploited to maximize cross-vehicle feature correspondence and alleviate feature alignment deviation caused by localization errors. Furthermore, the multiscale feature fusion module was built to integrate local fine-grained features with global contextual information; it employed multiscale mask sampling to retain the structural information of the aligned aggregated point cloud at various spatial resolutions. Results: Extensive experiments and ablation studies were conducted on the V2V4Real and V2XSet datasets to comprehensively evaluate the performance of the proposed method. The experimental results demonstrated that the proposed approach achieved superior perception accuracy and robustness compared to other state-of-the-art methods across traffic scenarios with varying levels of localization errors. Moreover, the proposed method maintained high computational efficiency and satisfied the real-time requirements of on-board deployment. Conclusions: The proposed cooperative perception method, based on point feature fine alignment, integrates a lightweight point feature extraction module, a cross-vehicle point feature fine alignment module, and a multiscale feature fusion module. It effectively addressed the perception performance degradation problem caused by vehicle localization errors and improved the accuracy and robustness of cooperative perception among CAVs. In future work, we will enhance the collaborative perception performance of CAVs in complex scenarios, such as rain and fog, by integrating information from multimodal sensors, including cameras and millimeter-wave radars.
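The coarse-alignment step described above, projecting a neighboring CAV's point cloud into the ego frame using the two vehicles' global poses, can be sketched with homogeneous transforms (a minimal geometric illustration only; the fine registration and feature-level operations of the actual module are omitted):

```python
import numpy as np

# Minimal sketch of pose-based coarse alignment: express a neighboring
# CAV's points in the ego frame via 4x4 homogeneous pose matrices from
# the positioning system. The subsequent fine alignment in the paper
# refines this with overlapping-region registration, not shown here.

def coarse_align(points_cav, T_world_cav, T_world_ego):
    """points_cav: (N, 3) points in the neighbor's frame -> ego frame."""
    T_ego_cav = np.linalg.inv(T_world_ego) @ T_world_cav
    homo = np.hstack([points_cav, np.ones((len(points_cav), 1))])
    return (homo @ T_ego_cav.T)[:, :3]
```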
Objective: Urban expressway interchanges with closely spaced diverging and merging areas are typical recurrent bottlenecks under high demand. Mandatory divergence toward off-ramps and merging from on-ramps induce intensive weaving conflicts, and local disturbances may propagate across adjacent bottlenecks, resulting in system-level congestion. Conventional traffic management approaches (e.g., ramp metering and variable speed limits) are usually designed at a macroscopic level and often target isolated bottleneck segments; therefore, they can mitigate local congestion but may not effectively suppress congestion coupling between successive diverge–merge areas within an interchange network. With the development of intelligent connected vehicles (ICVs) and cloud control systems, vehicle-level cooperative decision-making has become feasible. Nevertheless, existing studies predominantly address either a single merging zone or a single diverging zone, while integrated interchange-level coordination of diverging and merging operations remains insufficient. To bridge this gap, this paper proposes an interchange-level cloud-based cooperative optimization framework that jointly coordinates diverging and merging operations. The objective of this study is to compute a near-optimal passing sequence and trajectory plan that minimizes the weighted sum of travel delay (WD) for all vehicles while ensuring safety and comfort. The main challenge lies in the combinatorial complexity caused by multilane interactions and mixed mandatory and discretionary lane changes. Methods: To improve computational scalability, a rolling traversal scheduling (RTS) mechanism is developed. Instead of solving a single large-scale optimization for all vehicles at the interchange, the RTS constructs a rolling decision group comprising the current leading vehicle (or platoon) from each relevant lane. 
At each decision step, the cloud controller formulates a mixed-integer linear programming (MILP) subproblem to determine which candidate to serve next in the conflict area, along with the corresponding discrete sequence decisions. Once the decision is fixed, the selected candidate leaves the group, the next vehicle in that lane enters, and another MILP is solved. Through this rolling update, the interchange-level scheduling task is decomposed into a sequence of tractable MILP subproblems, while the cost design accounts for the delay impacts on other vehicles to preserve near-global optimality. To further enhance multilane utilization, a free lane-change strategy for through vehicles is proposed that jointly searches for discretionary lane-change positions alongside mandatory lane-change requirements to create gaps and reduce conflicts. In addition, a dual-mode trajectory planning method is introduced to translate the optimized sequence into implementable motion: an optimal-control-based trajectory is generated for efficiency and smoothness, while a car-following-model-based trajectory is retained as a safety-guaranteed fallback. Results: A simulation platform is implemented on an interchange segment with stochastic arrivals under 720, 1 080, and 1 440 pla·h⁻¹·ln⁻¹. Compared with a baseline strategy that optimizes diverging and merging areas separately, the proposed method reduces WD by 22.5% in an illustrative case, exhibits considerably better resilience at 1 080 pla·h⁻¹·ln⁻¹ (delay increase 33.7% versus 320.0%), and maintains a relatively fluent state at 1 440 pla·h⁻¹·ln⁻¹ with a 24.6% higher average speed and markedly lower average delay (~15.5 s versus 55.0 s). Conclusions: Overall, the proposed framework integrates rolling MILP scheduling, discretionary lane-change coordination, and robust trajectory planning, demonstrating the potential of cloud-controlled ICV coordination to mitigate congestion coupling and enhance interchange operational efficiency.
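The rolling update can be illustrated with a toy scheduler. The sketch below replaces each MILP subproblem with a myopic weighted-delay rule purely for readability; the actual framework's cost design, conflict constraints, and trajectory coupling are not modeled:

```python
from collections import deque

# Toy stand-in for the rolling traversal scheduling (RTS) loop: the
# decision group holds the leading vehicle of each lane; at each step
# the paper solves a small MILP, which we replace here with a greedy
# least-added-weighted-delay rule for illustration only.

def rts_schedule(lanes, headway=2.0):
    """lanes: {lane_id: list of (veh_id, arrival_time, weight)}.
    Returns the passing sequence and total weighted delay."""
    queues = {k: deque(v) for k, v in lanes.items()}
    t_free, order, wd = 0.0, [], 0.0
    while any(queues.values()):
        group = {k: q[0] for k, q in queues.items() if q}
        # serve the group member whose passage adds the least weighted delay
        lane = min(group, key=lambda k: group[k][2] * max(0.0, t_free - group[k][1]))
        veh, arr, w = queues[lane].popleft()   # candidate leaves the group
        start = max(t_free, arr)               # conflict area serves one at a time
        wd += w * (start - arr)
        t_free = start + headway
        order.append(veh)
    return order, wd
```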
Objective: As China gradually phases out its subsidy policy for offshore wind power (OWP), the industry has entered a decisive cost reduction and efficiency enhancement stage. Unlike onshore wind projects, OWP projects still face substantially higher construction and operating costs, which makes competitiveness under grid-parity conditions highly challenging. However, the theoretical framework for managing the smart lifecycle of OWP projects remains underdeveloped, with incomplete asset management and evaluation methods. The smart lifecycle management of OWP projects is in its nascent stage, with fragmented data systems and poor integration across planning, construction, and operations. Consequently, there is an urgent need for an innovative, lifecycle-oriented management approach to support the OWP industry's high-quality and sustainable development. Methods: This study proposes a closed-loop smart management mechanism for OWP projects, focusing on four key dimensions: perception, analysis, real-time control, and optimization. This study summarizes the digital transformation and smart management pathways of OWP assets throughout their lifecycle, from feasibility to construction, operation, and final decommissioning. This study classifies management elements into cost, efficiency, and risk, creating a closed-loop evaluation model for OWP assets. This model enables a dynamic representation of each asset's status in terms of cost, efficiency, and risk levels at any moment. Furthermore, this study identifies and analyzes key smart management technologies relevant to various phases of the lifecycle. Accordingly, a smart lifecycle management platform has been designed with a five-layer architecture: data acquisition, data management, modular functional applications, decision support, and interactive visualization. A prototype system was developed to address project development and design, smart construction, and smart operation and maintenance. 
The system was applied in a large offshore wind farm in Jiangsu Province, China, and the asset conditions were compared before and after its implementation. Results: A comparative analysis based on actual operational data showed significant improvements: (1) Effective cost control was achieved during the construction period, along with rational planning for operation and maintenance, reduced downtime, and improved generation efficiency. Thus, the lifecycle levelized cost of energy decreased from 0.92 yuan/(kW·h) to 0.76 yuan/(kW·h), a reduction of approximately 17.4%. (2) When evaluated against three key efficiency indicators—compliance rate of power generation, effective operation time assurance rate, and average failure rate—the poorly performing turbines (#3, #10, and #16) showed significant improvements after rectification, with lower osculating values. (3) Comprehensive risk assessment identified anomalies in the transmission and blade systems as primary concerns, which allowed for targeted maintenance interventions, improved equipment availability, and reduced unplanned maintenance requirements. Conclusions: The proposed closed-loop smart lifecycle management system offers a new technical solution for reducing costs and improving efficiency in OWP projects. Practical application shows that the system can enhance project planning and design, reduce development and operational costs, improve asset equipment reliability, extend the power generation lifecycle, and ensure ongoing value creation throughout the lifecycle. The study findings offer valuable guidance for smart management in similar offshore renewable energy projects, thereby contributing to the sustainable development of the wind power industry through digital transformation.
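The quoted cost reduction follows directly from the two levelized-cost figures reported above:

```python
# Arithmetic check of the reported LCOE improvement (values from the results).
lcoe_before = 0.92   # yuan/(kW·h), before system implementation
lcoe_after = 0.76    # yuan/(kW·h), after system implementation
reduction = (lcoe_before - lcoe_after) / lcoe_before   # fractional reduction
# reduction * 100 is approximately 17.4, matching the figure in the text
```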
Objective: With the rapid advancement of artificial intelligence and high-performance computing, data centers are increasingly challenged to achieve effective thermal management and high energy efficiency. Traditional air-cooling methods struggle to handle rising power density, while liquid cooling offers superior heat transfer capabilities but often lacks deployment flexibility. To fully leverage the advantages of air and liquid cooling systems, this study proposes two server-level phase-change air-assisted liquid cooling (AALC) systems to enhance cooling performance, maintain modular deployment, and reduce energy consumption in high-density data centers. Methods: Two technical configurations were investigated: an air-cooling retrofitting scheme (System 1) and a hybrid solution incorporating a heat pipe composite air conditioner (System 2). Both systems utilized a pump-free, gas-liquid phase-change heat transfer loop to enable passive fluid circulation and simplify the structural design. System 1 was designed to combine the structural simplicity and deployment flexibility of air-cooling systems with the high heat transfer efficiency of liquid cooling. It enabled seamless upgrading of high-density cabinets within limited space in existing air-cooled data centers without requiring large-scale infrastructure modifications. An experimental platform simulating a 50 kW cabinet heat load was developed using five electric heating modules. The thermal performance of System 1 was evaluated by monitoring the average temperature and surface uniformity across multiple evaporators. To further improve the energy-saving capability and year-round deployment potential across different climatic zones of China, System 2 was developed based on System 1 by incorporating a heat pipe composite air conditioner on the cold source side. This configuration retained the core advantages of passive two-phase cooling while enhancing the effectiveness of natural cooling. 
For System 2, annual power usage effectiveness (PUE) was simulated using hourly meteorological data from five representative Chinese cities. The total power consumption of the cooling system—including terminal fan power, outdoor fan power, and compressor power—was modeled using empirical polynomial-based expressions. In addition, economic metrics, such as annual energy savings and payback period, were evaluated. Results: System 1 exhibited stable operation under a 50 kW cabinet heat load. The average temperature of the two-phase loop reached 39.85 ℃, and the maximum surface temperature difference among five vertical evaporators was only 1.9 ℃, demonstrating excellent thermal uniformity and heat dissipation capacity. The simulation results of System 2 showed up to 30% energy savings over conventional computer room air handler (CRAH) systems because of the enhanced use of natural cooling. In cities such as Beijing and Xi'an, the annual PUE was reduced to as low as 1.16. The hybrid operation considerably lowered fan and compressor energy consumption. Economic analysis indicated a payback period of approximately 1.89 years, confirming the financial viability of System 2. Conclusions: The proposed AALC systems provide a practical and scalable solution for thermal management in high-density data centers. System 1 verifies the feasibility of passive phase-change cooling for flexible cabinet-level retrofitting.
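The annual PUE evaluation amounts to summing modeled cooling power over the hourly outdoor-temperature series and dividing total facility energy by IT energy. The sketch below uses invented placeholder coefficients and a hypothetical natural-cooling threshold, not the paper's fitted polynomials:

```python
# Illustrative PUE model for a hybrid cooling system. All coefficients and
# the 15 °C natural-cooling threshold are assumptions for demonstration,
# not the empirical polynomial expressions fitted in the study.

def cooling_power_kw(t_out, it_load_kw):
    fan = 0.02 * it_load_kw                      # terminal + outdoor fans
    if t_out < 15.0:                             # assumed natural-cooling mode
        compressor = 0.0
    else:                                        # hybrid/compressor mode
        compressor = it_load_kw * (0.05 + 0.004 * (t_out - 15.0))
    return fan + compressor

def annual_pue(hourly_t_out, it_load_kw=50.0):
    it_kwh = it_load_kw * len(hourly_t_out)
    cool_kwh = sum(cooling_power_kw(t, it_load_kw) for t in hourly_t_out)
    return (it_kwh + cool_kwh) / it_kwh          # PUE = total / IT energy
```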
Objective: The wide application of proton exchange membrane (PEM) electrolyzers is limited by their high capital costs, which are primarily due to the extensive use of precious metal iridium. However, reducing iridium loading still poses challenges related to the efficiency and stability of PEM electrolyzers. Although the catalysts prepared using manganese dioxide loaded with iridium have achieved a high iridium mass activity, the design and preparation of iridium-manganese oxide catalyst electrodes for PEM electrolysis continue to face other challenges, including catalyst layer (CL) dead zones, reduced three-phase interfaces, and poor stability. This study reports an oxygen evolution reaction (OER) electrode with an ultralow iridium load dispersed by manganese dioxide carriers. The highly dispersed iridium is anchored on the surface of manganese dioxide with a high electrochemical surface area and low ohmic resistance, thereby achieving high performance and stability. Methods: Manganese dioxide electrodes prepared by pulsed electrodeposition (PED), continuous electrodeposition (CED), and pulse-gradient electrodeposition (PGED) were fabricated on titanium felt, followed by reaction with iridium precursor solutions for iridium loading to yield PED-IrMnO2/porous transport layer (PTL), CED-IrMnO2/PTL, and PGED-IrMnO2/PTL, respectively. The physical properties of the samples were characterized by scanning electron microscopy (SEM), high-resolution transmission electron microscopy (HR-TEM), X-ray diffraction (XRD), Brunauer-Emmett-Teller (BET), and X-ray photoelectron spectroscopy (XPS). The electrochemical performance of the samples was evaluated in a three-electrode system and a PEM electrolyzer. Linear sweep voltammetry, cyclic voltammetry, polarization curves, and electrochemical impedance spectroscopy (EIS) were also analyzed. Results: SEM results showed that PED increased the MnO2 particle size and decreased the number of particles per unit area. 
The BET values were the lowest in the PED sample, followed by the PGED sample, and the highest in the CED sample at 44.78 m²·g⁻¹. SEM analysis of the PED sample revealed that a small duty cycle resulted in uniform MnO2 growth, indicating the presence of many contact sites with the PTL. Thus, the PED sample had the lowest resistance. Compared with the CED sample, the PGED sample had reduced resistance; compared with the PED sample, it had an increased specific surface area. XRD and TEM results revealed the (131) crystal plane corresponding to γ-MnO2. XPS results showed that PGED-IrMnO2/PTL had the highest Ir valence state of +5.73, indicating its high OER activity. In the three-electrode system, PGED-IrMnO2/PTL exhibited the highest iridium mass activity, reaching 112.32 A·g⁻¹ at 1.53 V versus the reversible hydrogen electrode (RHE), which was 4.41 times that of IrO2/PTL. The stability test results showed that the potential change of PGED-IrMnO2/PTL was negligible throughout a 100 h test in 0.5 mol·L⁻¹ H2SO4. In the PEM electrolyzer, the PGED sample exhibited the best performance, with a voltage of only 1.859 V at 2 A·cm⁻² and the lowest Tafel slope of 74.3 mV·dec⁻¹. EIS analysis indicated that the improved OER performance of the PGED sample was due to a balance between ohmic losses and charge transfer losses. Conclusions: This study develops a preparation technique for 3D gradient-ordered OER electrodes for PEM electrolyzers, reducing iridium usage to 1% of commercial levels while maintaining high performance. The prepared gradient-ordered OER electrodes demonstrate enhanced electron conduction sites between the CL and the PTL. The gradient design increases the specific surface area, allowing PGED-IrMnO2/PTL to achieve an iridium mass activity of up to 1.1 × 10⁵ A·g⁻¹ in the PEM electrolyzer. 
These findings contribute to the development of low-cost, high-performance, and stable ultralow-iridium-loading OER electrodes, thereby advancing the terawatt-scale application of PEM electrolyzers.
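Iridium mass activity, as quoted throughout the abstract, is simply the delivered current normalized by the iridium mass; an ultralow loading therefore turns an ordinary geometric current density into a very high mass activity. The loading used below is an assumed illustrative value, not a figure from the paper:

```python
# Mass activity = current per gram of catalyst metal. The 20 µg·cm⁻²
# iridium loading below is an assumed illustration chosen to show the
# order of magnitude, not a measured value from the study.

def ir_mass_activity(j_a_cm2, loading_g_cm2):
    """Geometric current density (A·cm⁻²) / Ir loading (g·cm⁻²) -> A·g⁻¹."""
    return j_a_cm2 / loading_g_cm2

# e.g. 2 A·cm⁻² at an assumed 20 µg·cm⁻² loading gives ~1e5 A·g⁻¹,
# the order of magnitude quoted in the abstract
activity = ir_mass_activity(2.0, 20e-6)
```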
Objective: Generative artificial intelligence (GAI), exemplified by models such as ChatGPT, is driving disruptive technological transformations. Its rapid and widespread adoption presents dual challenges: job displacement and skill renewal, placing unprecedented pressure on the public's sense of occupational security. To clarify the mechanisms through which GAI adoption affects occupational security, this study analyzes public commentary data from major social media platforms. Sentiment analysis and topic modeling are employed to identify key influencing factors, map their causal relationships, and construct a hierarchical structure. The aim is to offer targeted mitigation strategies to address the occupational security challenges arising from GAI adoption. Methods: Public comments related to the occupational impact of GAI were primarily collected from TikTok, with additional data obtained from Weibo and bilibili, all of which are widely used social media platforms in China. After data cleaning and manual filtering, a bidirectional encoder representations from Transformers (BERT)-based sentiment classification model was employed to extract comments expressing negative sentiment, resulting in a perception-based corpus focused on occupational insecurity triggered by GAI adoption. The biterm topic model was then used for topic modeling, identifying eight core themes, including employment and societal dynamics and human-AI collaboration. Semantic analysis of topic keywords distilled seven critical influencing factors. The decision-making trial and evaluation laboratory (DEMATEL) method was applied to construct an influence matrix quantifying the strength and direction of causal relationships among these factors. Finally, interpretive structural modeling (ISM) was used to build a hierarchical structure of the influencing factors, revealing their stratified distribution and transmission pathways. 
Results: The impact of GAI adoption on public occupational security was found to result from interconnected, multi-level structural factors. Based on DEMATEL analysis, the seven influencing factors were categorized into four functional zones: core driving, auxiliary support, adaptive adjustment, and comprehensive transmission. Human-AI collaboration was placed in the core driving zone, exerting a strong influence on other factors. Employment market changes, situated in the comprehensive transmission zone, showed the highest centrality. This factor was significantly influenced by other variables, indicating its pivotal role in the overall impact mechanism. ISM further revealed a three-level hierarchical structure. The foundational level included human-AI collaboration and legal/ethical and privacy safeguards, which initiated the occupational security shock. The middle level comprised factors such as future development prospects, industrial restructuring, and infrastructure development, which reflected structural adjustments driven by technological change and functioned as transitional nodes. The top level encompassed digital literacy education and employment market changes, representing the most direct pathways through which GAI impacted the public's occupational security. Conclusions: Based on these findings, this study proposes a multi-level response framework from the perspectives of individuals, enterprises, social organizations, and government actors to address the occupational security challenges posed by GAI adoption. Furthermore, the analytical framework developed herein provides a theoretical foundation and practical reference for future research on occupational risk assessment and governance strategies in the era of rapid GAI advancement, thereby supporting the coordinated development of technological progress and societal stability.
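The DEMATEL step that underlies the zoning and centrality results can be sketched compactly: normalize the direct-influence matrix, form the total relation matrix, and read off centrality (r + c) and causality (r - c) for each factor. The 3 x 3 matrix below is a toy example, not the study's expert-scored data:

```python
import numpy as np

# Core DEMATEL computation: normalize the direct influence matrix D,
# obtain the total relation matrix T = N(I - N)^(-1), and derive each
# factor's centrality (r + c) and causality (r - c). The input matrix
# is a toy example for illustration only.

def dematel(D):
    D = np.asarray(D, dtype=float)
    # standard normalization by the larger of max row sum / max column sum
    N = D / max(D.sum(axis=1).max(), D.sum(axis=0).max())
    T = N @ np.linalg.inv(np.eye(len(D)) - N)   # total relation matrix
    r, c = T.sum(axis=1), T.sum(axis=0)         # influence given / received
    return T, r + c, r - c                      # matrix, centrality, causality

T, centrality, causality = dematel([[0, 2, 3],
                                    [1, 0, 2],
                                    [1, 1, 0]])
```

Factors with high centrality act as hubs of the impact mechanism (as "employment market changes" does in the study), while the sign of causality separates driving factors from driven ones.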
Objective: To elucidate the evolution patterns and hazard characteristics of overheating-induced short-circuit faults in energized conductors under external thermal radiation, this study systematically investigated critical heat flux, characteristic temperatures, fault initiation time, insulation resistance evolution, short-circuit current-voltage waveform characteristics, and arc energy variation. The objective was to identify key parameters for the early warning of electrical fires during conductor short-circuit failure under varying thermal radiation intensities. The findings aim to provide experimental evidence for risk identification and assessment of electrical circuit failures in high-temperature environments. Methods: Using an electrical fault simulation apparatus, stable thermal radiation intensities of 23–32 kW·m⁻² were applied to ZR-RVVB conductors operating under rated current conditions. Parameters including surface and internal temperatures, insulation resistance, pre- and post-short-circuit current and voltage waveforms, and fault occurrence times were recorded simultaneously. Thermal failure stages were defined using temperature-time curves. Short-circuit types were classified through waveform and time-frequency domain analyses, and short-circuit arc energy was calculated based on voltage-current integration. Comparative analyses were conducted to determine parameter variation patterns across different thermal radiation intensities. Results: Experimental findings indicated that 23 kW·m⁻² represents the minimum critical thermal radiation intensity that causes short-circuit failures in ZR-RVVB conductors under rated current conditions. With increasing thermal radiation intensity, the conductor temperature rise showed four stages: transient thermal shock, accelerated pyrolysis, critical failure, and thermal steady state. 
The initial pyrolysis temperature (T1), peak temperature (T2), short-circuit trigger temperature (Tsc), and steady-state temperature (T3) increased approximately linearly with increasing heat flux. Meanwhile, the duration of each stage decreased with increasing heat flux, showing a power-law relationship. This reduction is associated with faster heating and accelerated insulation degradation under higher thermal radiation intensities. Notably, the short-circuit trigger time shortened from ~1 053.4 s to 172.4 s. At a heat flux of 32 kW·m⁻², the insulation resistance dropped rapidly to ~0 GΩ within 180 s. Overall, insulation resistance declined significantly with increasing thermal radiation intensity. A reduction below ~1 GΩ signaled imminent insulation failure. Fault mechanisms transitioned from metallic short circuits to carbonization path-type and arc-type faults as the thermal radiation intensity increased. At a heat flux of 25 kW·m⁻², metallic short circuits were the dominant failure mode, accounting for ~70% of failures. When the heat flux exceeded 26 kW·m⁻², the frequency of carbonization path-type faults increased significantly, peaking near 31 kW·m⁻². Time-frequency energy analysis indicated that arc-type short circuits exhibited the highest high-frequency energy characteristic parameters, with a high-frequency energy peak (HHF) of 0.860 and a high-frequency energy ratio (RHF) of 0.416, both of which were significantly higher than those of the other two fault types. The energy released after a short circuit increased significantly with increasing thermal radiation intensity; arc-type faults released the highest energy (~9 324.89 J), followed by carbonization path-type faults. Meanwhile, metallic short circuits released the least energy. This indicated that higher thermal radiation intensities lead to greater short-circuit energy release and increased destructive potential. 
Conclusions: This study characterized the temperature rise behavior, insulation resistance evolution, fault type transitions, and energy release characteristics of energized conductors under varying thermal radiation intensities. The findings provide a foundation for rapid short-circuit fault classification and the development of early-warning models.
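The arc-energy figures above come from integrating the instantaneous voltage-current product over the fault duration, E = ∫ u(t)·i(t) dt. A minimal sketch with trapezoidal quadrature on a synthetic constant waveform (illustrative only, not the measured data):

```python
import numpy as np

# Arc energy by voltage-current integration, as described in the methods:
# E = ∫ u(t)·i(t) dt, evaluated here with trapezoidal quadrature on
# sampled waveforms. The constant waveform below is synthetic.

def arc_energy(t, u, i):
    p = np.asarray(u) * np.asarray(i)            # instantaneous power, W
    t = np.asarray(t)
    return float(np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(t)))   # joules

# a constant 100 V across a constant 10 A arc for 1 s releases 1000 J
t = np.linspace(0.0, 1.0, 1001)
e = arc_energy(t, np.full_like(t, 100.0), np.full_like(t, 10.0))
```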
Objective: Reactive power compensation is crucial for enhancing fault ride-through capability in new energy field clusters, making the effective configuration of reactive power compensation devices increasingly important. This involves selecting compensation points and determining reactive power compensation capacity. Traditional site selection methods for reactive power compensation devices overlook the nonlinear characteristics of continuous power flow in the sending end system of the new energy field group and the impact of later compensation points on previous compensation points, resulting in inaccurate site selection. Methods: Considering the nonlinear characteristics of the sending end system of new energy field groups, based on the traditional first-order network loss reactive power sensitivity method, this study develops a second-order network loss reactive power sensitivity calculation method. To eliminate the influence of later compensation points on previous compensation points, this paper adopts the method of traversing each node. First, the second-order network loss reactive power sensitivity of the current node is calculated and then compared, in a loop, with the sensitivities calculated for the preceding nodes. After traversal, the compensation points are selected based on the sensitivity ranking of all nodes. To meet the fault ride-through requirements of the sending end system of new energy field groups and reduce the calculation consumption in optimizing the compensation capacity of reactive power compensation devices, based on the undervoltage area of low-voltage ride-through, a fault ride-through risk index for new energy field groups is proposed. The proposed index is developed in the context of low-voltage and high-voltage ride-through requirements for new energy sources such as photovoltaics and wind power. 
By scanning the faults in the sending end system of new energy field groups and calculating the fault ride-through risk index under different faults based on the time-series voltage values, key fault sets are selected by sorting the fault ride-through risk index values under each fault, thereby reducing the flow sample data in capacity optimization. In terms of optimizing the capacity of reactive power compensation devices, an optimization model was established using the equal annual value method, with the comprehensive investment cost of the static var generator (SVG) and synchronous condenser (SC), together with a fault ride-through penalty, as the optimization objective. To meet the time-series voltage requirements of the sending end system of new energy field groups during fault ride-through, an optimization iterative algorithm with electromechanical transient simulation in the loop was designed. Through electromechanical transient simulation, the time-series voltage is verified to meet the fault ride-through requirements. If the time-series voltage does not meet the requirements, the initial value of the optimization model is updated with the optimized configuration capacity of the reactive power compensation devices, and the next optimization step is carried out until the time-series voltage requirements during fault ride-through are met. Results: During the verification of the calculation example, under the same compensation capacity, the system network loss increment and network loss reactive power sensitivity calculated using the traditional first-order network loss reactive power sensitivity method and the second-order network loss reactive power sensitivity method proposed in this paper were compared. 
Compared with the network loss increment obtained by power flow calculation, the network loss increment calculated by the proposed method has a smaller error, and the second-order network loss sensitivity correction method selects the locations of reactive power compensation devices more accurately. Four key faults were selected using the fault ride-through risk index. While optimizing the capacity of the reactive power compensation device, a comparison was made between the economic value of two configuration schemes: "SVG" and "SVG+SC". Conclusions: The results showed that the "SVG+SC" configuration scheme can fully utilize the overvoltage suppression ability of SVG during fault recovery and the voltage support ability of SC during faults, thereby reducing the investment cost by 72.1 million yuan. This validates the feasibility and effectiveness of the reactive power compensation device capacity optimization method proposed in this paper.
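The economic comparison rests on the equal annual value method, which spreads a capital cost over the service life via the capital recovery factor. A sketch with placeholder figures (the rate, lifetime, and cost below are assumptions for illustration, not the paper's SVG/SC data):

```python
# Equal annual value (annualized cost) via the capital recovery factor:
# A = C · r(1+r)^n / ((1+r)^n - 1). The 100-million-yuan cost, 8% rate,
# and 20-year life below are placeholder assumptions.

def equal_annual_value(capital, r, n):
    crf = r * (1 + r) ** n / ((1 + r) ** n - 1)   # capital recovery factor
    return capital * crf

# e.g. a 100 million yuan device over 20 years at 8% discount rate
annual = equal_annual_value(100.0, 0.08, 20)      # million yuan per year
```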
Objective: The two-section gas centrifuge rotor connected via bellows is a technical approach aimed at enhancing the separative power of a single centrifuge unit. However, current numerical studies on this type of centrifuge design exhibit notable deficiencies. On the one hand, the cross-section of the bellows is often oversimplified as rectangular in computational models, leading to remarkable deviations in simulating gas flow near the bellows and reducing the accuracy of separative power calculations. On the other hand, boundary conditions in flow field simulations typically assume idealized sidewall temperature distributions—specifically, a linear temperature profile along the rotor sidewall—that substantially deviate from actual operational temperature distributions. In addition, existing models of the temperature field often oversimplify radiative heat transfer processes and overlook the coupled interactions between heat transfer and fluid dynamics within the rotor. To address these issues, this paper proposes two improvements to the numerical calculation method for the internal flow field of a two-section gas centrifuge rotor. Methods: First, a coupled-iterative numerical calculation method is developed to concurrently solve the flow field, temperature field, and radiative heat transfer. Leveraging the unique structural characteristics of gas centrifuges, the computational domain is divided into fluid and solid regions, with coupled-iterative solutions achieved through continuity conditions at the regional boundaries. The fluid region is solved using a predictor-corrector homotopy algorithm, an algorithm that is extensively applied and rigorously validated for its robustness and accuracy in modeling disturbed and strong swirling flows within gas centrifuge rotors. 
Convective heat transfer boundary conditions (of the third kind) are applied to simulate heat exchange among the outer casing of the gas centrifuge, the external environment, and cooling water. Radiative view factors between complex surfaces are calculated using integral methods to enable high-accuracy modeling of radiative heat transfer. This approach yields accurate flow and temperature field distributions under actual operating conditions and supports multiparameter optimization of the separative power of a gas centrifuge. Second, a staircase approximation method is introduced to simulate the complex cross-sectional shape of the bellows. This enhancement improves the accuracy of flow field simulations near the bellows and the accuracy of separative power calculations while enabling geometric optimization of the bellows design. Results: The application of the improved calculation method yields the following results: (1) the actual temperature distribution along the sidewall of the gas centrifuge rotor notably deviates from the ideal linear profile. Elevated temperatures at the product end generate reverse circulation, which adversely affects isotope separation efficiency. (2) Optimizing the emissivity of radiation heat transfer surfaces improves the temperature distribution along the rotor sidewall, thereby substantially enhancing separative power. (3) Incorporating staircase approximations for the bellows cross-section improves the accuracy of separative power calculations. (4) The staircase approximation facilitates shape optimization of the bellows, with a six-level approximation satisfying high-precision computational requirements. Conclusions: The improved numerical calculation method proposed in this paper for analyzing the flow field within the rotor of a two-section gas centrifuge enables a coupled solution of the flow field, temperature field, and radiative heat transfer under actual operating conditions. 
This approach markedly enhances computational accuracy for capturing flow dynamics near the bellows and for determining separative power. By leveraging this calculation method, multiparameter optimization of the centrifuge separative power, as well as shape optimization of the bellows cross-section, can be effectively achieved.
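The staircase idea itself is simple to sketch: replace a smooth wall profile r(z) with N axial levels of constant radius, which a structured grid can represent exactly. The sinusoidal profile below stands in for the actual convoluted bellows cross-section, which is not given here:

```python
import numpy as np

# Staircase approximation of a smooth wall profile r(z): N axial levels
# of constant radius. The sinusoidal "bellows" profile is an assumed
# stand-in for the real convoluted cross-section.

def staircase(profile, z0, z1, levels):
    """Return (z_edges, radii): a piecewise-constant approximation of profile."""
    z_edges = np.linspace(z0, z1, levels + 1)        # level boundaries
    centers = 0.5 * (z_edges[:-1] + z_edges[1:])     # level midpoints
    return z_edges, profile(centers)                 # constant radius per level

bellows = lambda z: 0.25 + 0.02 * np.sin(8 * np.pi * z)   # assumed wall shape, m
z_edges, radii = staircase(bellows, 0.0, 1.0, 6)          # six-level case
```

Refining the number of levels trades grid size against geometric fidelity; the study reports that six levels already meet high-precision requirements for the separative power calculation.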