
Objective: Against the backdrop of accelerated urbanization and climate change, accurately assessing flood resilience holds significant implications for the sustainable development of cities. Previous studies on flood resilience assessment have employed classical statistical methods, which are relatively sensitive to small-sample data and fail to consider the effect of spatial correlation on urban flood resilience. To mitigate disaster risks and enhance cities' capacity to withstand urban floods, this study focuses on examining the intrinsic spatial structures of flood resilience while accounting for spatial correlation and space-time heterogeneity. Methods: This study employed the Bayesian space-time interaction model (BSTIM) based on the Markov chain Monte Carlo method to achieve an accurate assessment of flood resilience. The model incorporated the spatial information of adjacent cities into the space-time evolution of flood resilience, enabling the revelation of the coupled space-time dynamics. In addition, a comparative study of the model was conducted to verify the effectiveness of the selected parameter. Finally, BSTIM was applied to assess flood resilience in 41 cities within the Yangtze River Delta region, and targeted policy recommendations were proposed. Results: The results showed that the space-time interpretation degree of BSTIM for urban flood resilience exceeded 93%, demonstrating excellent performance in fitting space-time data and enabling an accurate assessment of the space-time characteristics. (1) Flood resilience in the Yangtze River Delta region showed an upward trend during the study period, with the highest annual growth rate of 17.89% in 2017 and the lowest of 14.71% in 2009. (2) Significant differences existed in the spatial distribution of flood resilience across the region, forming an overall "low-high-medium" spatial pattern. 
(3) Local change trends varied considerably, presenting a characteristic pattern of "high in the east, low in the west, and medium in the north." (4) The spatial pattern of urban flood resilience exhibited significant differences in cold-hot spot clustering, with local changes forming a "cold in the west, hot in the east" pattern. Over half of the areas in the Yangtze River Delta region displayed "hot-hot" and "cold-hot" agglomeration characteristics, with "hot-hot" being the most prevalent. Conclusion: BSTIM provided an accurate assessment of space-time interaction effects in the space-time evolution of flood resilience. In addition, the model identified spatial distribution patterns, local change trends, cold-hot agglomeration distribution, and future development trends in flood resilience. Based on the results, the study captured the development dynamics of flood resilience in various cities of the Yangtze River Delta region, highlighting key priorities for flood prevention and mitigation. This research provides important reference value and theoretical support for effectively identifying critical areas for urban flood resilience development, thereby enhancing the flood resistance and disaster relief capabilities of urban agglomerations.
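As a minimal illustration of the spatial-correlation idea above (not the BSTIM itself), the sketch below computes a spatial lag, i.e. each city's neighborhood-average resilience, from an adjacency matrix; the adjacency structure and resilience scores are invented for illustration:

```python
import numpy as np

# Hypothetical adjacency among 4 cities (1 = shares a border).
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

# Hypothetical flood-resilience scores for one year.
r = np.array([0.42, 0.55, 0.61, 0.38])

# Row-normalize A so each row averages over a city's neighbors,
# then compute the spatial lag: each city's neighborhood mean.
W = A / A.sum(axis=1, keepdims=True)
spatial_lag = W @ r

print(spatial_lag)
```

A space-time model in the BSTIM family would feed such neighborhood information into city-level random effects rather than use the raw lag directly.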
Objective: Given the increasing deployment of photovoltaic (PV) power generation systems worldwide, ensuring the safe and reliable operation of these systems is of paramount importance. Among the various safety concerns, DC series arc faults have emerged as a significant threat to the stability and performance of PV systems. These faults, often the result of disconnections or substandard connections within the PV circuit, can potentially result in fires and system failures if not identified promptly. The objective of this study is to accurately identify DC series arc faults in PV systems, which would greatly enhance the safety measures and operational reliability of these systems. In addition, the study investigates the characteristics of these faults under different experimental conditions, focusing on the impact of electrode shapes and current intensity on fault characteristics. Finally, a robust model for detecting such faults is developed based on extracted time-domain and frequency-domain features of fault signals. Methods: A series of experiments were conducted to simulate DC series arc faults in PV systems. The experiments utilized electrodes of various shapes to study the effect of electrode design on arc characteristics and were performed under various current intensities to simulate real-world operating conditions of PV systems. For each fault condition, the voltage-current characteristics of the arcs were recorded to gain an initial understanding of the fault dynamics. The data were then analyzed using both time-domain and frequency-domain methods. Specifically, the Fast Fourier Transform (FFT) was used to convert time-domain fault signals into the frequency domain for further feature extraction. 
Mathematical statistics techniques were also applied to analyze the spectral energy distribution across different frequency bands, with a particular focus on the 0–50 kHz frequency range, which has been identified as critical for distinguishing different arc fault signatures. Based on these extracted features, a hybrid model combining FFT and deep learning techniques was developed. This model integrates the FFT with a 1D Convolutional Neural Network (1DCNN) and a Long Short-Term Memory (LSTM) network. This architecture identifies arc fault types based on their frequency-domain characteristics. Results: The experimental results revealed that the frequency characteristics of DC series arc faults highly depend on the shape of the electrodes and the current intensity. Specifically, faults generated by electrodes of different shapes exhibited distinct features in the frequency domain, with significant variations observed in the spectral energy distribution within the 0–50 kHz frequency range. These results imply that electrode shape plays a significant role in determining the frequency signature of arc faults, which can be used for fault identification purposes. The FFT-based feature extraction technique successfully isolated the most relevant frequency components indicative of arc faults. The FFT-1DCNN-LSTM model was then trained using these features and achieved an accuracy rate of 99.87% in correctly classifying arc faults generated by different electrode shapes. This result demonstrates the model's robustness and potential for real-world applications, as it can effectively differentiate various fault scenarios in PV systems. Furthermore, the model's high accuracy indicates its potential for the early detection of arc faults, which can significantly improve the safety and reliability of PV systems. Conclusions: This study introduces an effective and innovative method for detecting DC series arc faults in PV systems. 
By analyzing fault characteristics under different electrode shapes and current intensities, the study provides valuable insights into the role of these parameters in detection. The FFT-based feature extraction method, combined with an advanced deep learning model (FFT-1DCNN-LSTM), achieves exceptional performance in accurately identifying arc faults. The model's high classification accuracy highlights its potential for practical deployment in real PV systems, where early arc fault detection is critical for preventing potential hazards such as fires. The study's findings contribute to ongoing efforts to enhance system safety and provide a reliable technical framework for arc fault detection and mitigation. Future research may focus on refining the model by including additional fault scenarios and exploring its scalability to larger, more complex PV system configurations.
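The FFT band-energy feature extraction described above can be sketched as follows; the sampling rate, synthetic signal, and 10 kHz band width are assumptions for illustration, not the study's settings:

```python
import numpy as np

fs = 200_000          # sampling rate, Hz (assumed)
t = np.arange(0, 0.01, 1 / fs)
# Synthetic "arc" signal: a DC level plus a 12 kHz component and noise,
# standing in for a measured fault current (illustrative only).
rng = np.random.default_rng(0)
signal = 5.0 + 0.3 * np.sin(2 * np.pi * 12_000 * t) + 0.05 * rng.standard_normal(t.size)

spec = np.abs(np.fft.rfft(signal)) ** 2          # spectral energy per bin
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)

# Spectral energy per 10 kHz band inside the 0-50 kHz range.
bands = [(lo, lo + 10_000) for lo in range(0, 50_000, 10_000)]
features = [spec[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]
print([round(f, 1) for f in features])
```

A feature vector of this kind is what a downstream classifier (here, the 1DCNN-LSTM) would consume.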
Objective: Alternating current (AC) series fault arcs pose significant fire hazards in residential and industrial settings because of their high energy density and ability to ignite combustible materials. Environmental factors, such as ambient temperature and relative humidity, influence the initiation, development, and energy release of fault arcs. Existing studies have primarily utilized simplified experimental conditions or qualitative observations, providing limited quantitative evidence on the effects of ambient temperature and relative humidity on arc characteristics. This study aims to systematically quantify the effects of ambient temperature and relative humidity on the electrical characteristics and energy release of AC series fault arcs. Methods: A multiparameter fault arc experimental platform was built by combining a temperature- and humidity-controlled chamber with high-precision electrical signal acquisition instruments. The setup included an AC power supply, a 40 Ω resistive load, voltage and current probes, an oscilloscope, and a manual electrode separation mechanism. Series fault arcs were generated between a copper cone electrode and a fixed carbon electrode under controlled separation conditions. Two series of experiments were conducted: (1) the ambient temperature was varied from 5 to 30 ℃ at 45% relative humidity, and (2) the relative humidity was varied from 20% to 80% at 25 ℃. For each condition, multiple repetitions of the experiment were performed to ensure statistical reliability. Electrical signals were recorded and then processed by a discrete wavelet transform to remove noise while preserving the transient characteristics. The root mean square (RMS) values of current and voltage, as well as the instantaneous power integrals, were calculated to quantify the electrical characteristics and energy release. 
Results: The results revealed that RMS current generally increased from 4.64 A to 4.85 A as the ambient temperature increased from 5 to 30 ℃, indicating enhanced ionization and reduced gas density in the arc channel; meanwhile, the RMS voltage decreased from 30.55 V to 21.68 V, indicating lower arc impedance at higher temperatures. Increasing the relative humidity caused slight reductions in RMS voltage and energy release, while the RMS current remained largely stable, suggesting that higher humidity suppresses arc stability through enhanced cooling and electron recombination. These findings indicate that ambient temperature has a dominant influence on arc current, whereas voltage and energy release are moderately or weakly affected by environmental factors. Conclusions: This study establishes a robust experimental and analytical framework to quantify the impact of ambient temperature and relative humidity on AC series fault arcs. The results demonstrate that the electrical characteristics and energy release of fault arcs are sensitive to environmental parameters and exhibit nonlinear and condition-specific responses. These findings provide quantitative evidence for understanding the mechanisms underlying fault arc behavior and highlight the importance of considering environmental factors in fire risk assessments. The framework can support the development of tailored arc-fault mitigation strategies and improve the design of electrical systems to reduce fire hazards across diverse residential and industrial environments.
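A minimal sketch of the RMS and energy calculations, assuming an idealized purely resistive 50 Hz waveform with the reported RMS values (the real signals were wavelet-denoised measurements with arc distortion):

```python
import numpy as np

fs = 100_000                      # sampling rate, Hz (assumed)
t = np.arange(0, 0.1, 1 / fs)    # five full 50 Hz cycles
# Synthetic in-phase current and voltage (amplitude = RMS * sqrt(2)),
# using the paper's reported RMS figures purely as example numbers.
i = 4.85 * np.sqrt(2) * np.sin(2 * np.pi * 50 * t)
v = 21.68 * np.sqrt(2) * np.sin(2 * np.pi * 50 * t)

rms_i = np.sqrt(np.mean(i ** 2))
rms_v = np.sqrt(np.mean(v ** 2))
# Energy over the record: discrete integral of instantaneous power.
energy = np.sum(i * v) / fs
print(round(rms_i, 2), round(rms_v, 2), round(energy, 4))
```

For this idealized in-phase case the energy reduces to RMS current times RMS voltage times duration.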
Objective: In complex mountainous environments, unmanned aerial vehicle (UAV) coverage search tasks often encounter two core challenges: path redundancy and terrain obstructions. Although fixed-pattern search methods offer convenience and high efficiency in simple scenarios, they struggle to effectively avoid dead points and obstructions in complex terrains due to their rigid pre-planned trajectories. As a result, path repetition and reduced search efficiency become particularly prominent. To address the challenges of path redundancy and terrain obstructions in UAV coverage search tasks within complex mountainous environments, this study proposes a hybrid strategy that integrates traditional fixed-pattern search with an improved particle swarm optimization (PSO) algorithm. This strategy optimizes return path planning, minimizes path redundancy, and enhances adaptability in complex terrains. Methods: This research adopts a grid-based modeling approach to discretize complex terrains, constructing a simulation environment using real-world digital elevation model data from a specific area of Luding County, Sichuan Province, China. During data preprocessing, high-precision terrain data are converted into 3D surfaces via bilinear interpolation, and threshold segmentation algorithms create binary representations of obstacle zones and passable areas. To address the challenge of dead points in fixed-pattern searches, this study introduces a hybrid backtracking mechanism that integrates queue-based and stack-based backtracking. When encountering dead points, an improved PSO algorithm with adaptive inertia weights is introduced to plan safe and efficient cross-regional paths. In the early iterations, the algorithm assigns larger inertia weights to enhance global exploration. Subsequently, these weights are reduced to refine local searches. 
In addition, path safety is ensured through various constraint functions, including mathematical models to avoid terrain blockages, maintain safe distances from obstacles, and ensure path continuity. Results: The experimental results indicate that the proposed hybrid strategy exhibits significant advantages in complex mountainous environments. This strategy, which combines queue-based backtracking and stack-based backtracking, reduces total path length by 0.66% and 21.1%, respectively. Path coverage gradually increases from initial levels to full coverage (100%), demonstrating robust performance across various terrain conditions. Notably, in highly complex environments, the improved PSO algorithm exhibits faster convergence speed and higher path-planning accuracy than the traditional PSO and the artificial bee colony algorithms. Comparative analysis reveals that stack-based backtracking performs better in complex terrains, whereas queue-based backtracking is more suitable for regions with greater local connectivity. Furthermore, this research is the first to demonstrate that the hybrid strategy can automatically adjust the number of backtrackings without prior information, ensuring flight safety while achieving optimal coverage. The overall optimization reaches 21.1%. Conclusions: This paper presents a hybrid-strategy-based UAV coverage search method for complex mountainous areas and validates its applicability and superiority across various terrain features through experiments. The findings reveal that the hybrid strategy maintains strong terrain adaptability while balancing efficiency and feasibility. In addition, the selection of backtracking methods directly influences the frequency of heuristic algorithm invocations and ultimately affects the quality of path planning. 
The successful application of the improved PSO algorithm demonstrates its potential for multi-objective optimization in complex environments, laying a foundation for further exploration of more intelligent and flexible UAV path planning technologies. This study holds significant implications for UAV applications in critical scenarios such as emergency rescue and disaster reconnaissance and provides new perspectives for autonomous UAV navigation.
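The adaptive-inertia mechanism described above (large weights early for global exploration, small weights late for local refinement) can be sketched compactly; the hyperparameters and test function below are illustrative, not the paper's:

```python
import numpy as np

def pso_adaptive(f, dim=2, n=20, iters=60, w_max=0.9, w_min=0.4, seed=0):
    """Minimal PSO sketch with a linearly decreasing inertia weight."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))          # particle positions
    v = np.zeros((n, dim))                    # particle velocities
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    for k in range(iters):
        # Adaptive inertia: decays from w_max to w_min over the run.
        w = w_max - (w_max - w_min) * k / (iters - 1)
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (gbest - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, float(pbest_f.min())

sphere = lambda p: float(np.sum(p ** 2))      # stand-in objective
best, best_val = pso_adaptive(sphere)
print(best_val)
```

The paper's version additionally evaluates terrain-blockage, safe-distance, and continuity constraints inside the objective.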
Objective: The evolution of accidents during road transportation of hazardous chemicals based on the conditions of vehicles, roads, and the environment is highly complex and poses a serious threat to public safety. Clarifying the intensity of the interaction and the coupling characteristics of different risk factors in the transportation system is crucial for preventing related accidents and enhancing transportation safety. However, traditional models rarely explain the mechanism of interaction between risk factors at the micro level, and the risk calculation process generally requires substantial high-quality data and information support while exhibiting limitations in handling event uncertainty. Methods: To address these issues and accurately describe the dynamic characteristics of accident risk evolution, this paper proposes a risk evolution assessment method for the road transportation of hazardous chemicals based on a hybrid model. First, risk factors are identified based on the accident database, and risk coupling types are determined. Second, the network scale-interaction degree (N-K) model is used to calculate the coupling degree for different risk types, quantifying the interaction intensity among risk factors. Thereafter, a dynamic Bayesian network (DBN) containing coupling nodes is constructed based on the different coupling types and degrees. The node parameters are determined based on multiple databases, including a comprehensive database of hazardous chemical road transportation accidents, an environmental monitoring database for key road sections, and existing knowledge. DBN analysis is performed considering time steps. Finally, the time characteristics and potential mechanisms of risk evolution across different risk coupling types are revealed using risk-level indicators, based on which risk-prevention strategies are proposed. 
Results: Analysis of coupling degree shows that the coupling degree of multi-factor coupling nodes is significantly higher than that of dual-factor coupling nodes. In the dual-factor coupling nodes, the degree of the driver-environment coupling node is the highest. Driver factors are more likely to couple with other factors, thereby increasing the risk of traffic accidents. The DBN analysis shows that the probability of risk coupling nodes gradually increases over time. Illegal operations and bad weather significantly affect the corresponding risk coupling nodes. Among all the risk coupling nodes, driver-environment coupling has the highest probability, indicating that driver-environment risk coupling is the most likely factor to cause accidents in the transportation of hazardous chemicals. The risk level calculation shows that the driver-environment coupling factor has the greatest impact on accident risk. Although the hazardous chemical factor does not have the highest degree of coupling, it must still be considered because of its relatively high probability of occurrence. Furthermore, although the coupling degree of multi-factor coupling nodes is higher, due to the relatively low probability of multiple factors occurring simultaneously, the risk intensity level is actually lower. Conclusions: The proposed hybrid model reveals the interaction intensity and coupling characteristics of different risk factors at the micro level. By considering the time step, the model effectively captures the evolution of accident probability driven by different types of risk coupling. Applying the hybrid model can address the challenge of assessing accident likelihood in scenarios involving the coupling of different risk factors, providing a reference to help decision-makers and safety managers effectively prevent related accidents.
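The N-K coupling degree described above can be sketched as the mutual information between two risk factors; the joint probabilities below are hypothetical, not drawn from the accident database:

```python
import math

# Hypothetical joint frequencies of driver (D) and environment (E)
# risk states: P[d][e], with d, e in {0: no risk, 1: risk}.
P = [[0.60, 0.15],
     [0.10, 0.15]]

def coupling_degree(P):
    """N-K style coupling degree T(D, E): the mutual information
    between the two risk factors, in bits."""
    pd = [sum(row) for row in P]               # marginal of D
    pe = [sum(col) for col in zip(*P)]         # marginal of E
    return sum(P[i][j] * math.log2(P[i][j] / (pd[i] * pe[j]))
               for i in range(2) for j in range(2) if P[i][j] > 0)

T = coupling_degree(P)
print(round(T, 4))
```

A larger T indicates stronger interaction between the factors; multi-factor couplings extend the same sum over higher-dimensional joint distributions.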
Objective: To address the problem of insufficient optimization of energy consumption in unmanned aerial vehicle (UAV) path planning, a three-dimensional (3D) path planning method based on a tangent map and considering energy consumption is proposed. Methods: First, to ensure safe UAV flight, an ellipsoidal obstacle modeling approach is introduced. This approach represents irregular obstacles using a safety envelope, ensuring a minimum safe distance between the UAV and obstacles. Unlike conventional envelope-based methods, the proposed approach eliminates path redundancy, thereby lowering computational complexity and enhancing planning efficiency and flight safety. Second, the traditional elliptic tangent graph method is improved by incorporating a bidirectional search strategy and a node screening mechanism. These enhancements generate optimized two-dimensional (2D) reference path points, notably reducing the number of turning points along the path and shortening the overall path length. Finally, the proposed method integrates the 2D reference path points with an energy consumption model to enable 3D path planning. The 3D reference path points are derived from their 2D counterparts. When the start and end points of the UAV lie at the same altitude, a dimensionality reduction strategy is applied to convert the 3D planning problem into a 2D planar one, which is then solved using the elliptic tangent graph method. In cases involving height differences between the start and end points, an energy evaluation model is used to compare the energy costs of two strategies (horizontal flyover and vertical climb). The path point with the lowest energy consumption is selected, and cubic B-spline curves are applied to smooth the path. To evaluate the performance of the proposed method, three test scenarios with varying obstacle densities and layouts are designed. 
Comparative experiments are conducted against four benchmark algorithms: A*, rapidly-exploring random trees (RRT), particle swarm optimization (PSO), and the vector field histogram (VFH). Results: Results demonstrate that, in 2D environments, the improved elliptic tangent graph method consistently generates the shortest paths with the fewest turns, regardless of obstacle distribution. Its performance advantage becomes increasingly evident as environmental complexity rises. In complex 3D environments, the method not only delivers shorter and smoother flight paths but also substantially reduces the overall energy consumption of UAV operations. Specifically, compared with the A*, RRT, PSO, and VFH algorithms, the proposed method achieves average reductions in path length of 8.7%, 18.7%, 13.4%, and 4.1%, respectively; reductions in the number of turns of 68.8%, 82.1%, 82.8%, and 75.0%; and reductions in energy consumption of 51.6%, 34.0%, 59.1%, and 55.3%. Additionally, comparative experiments conducted with varying safety distances (2, 4, and 6 m) reveal that appropriately increasing the safety distance can improve flight safety without compromising path optimality. However, excessively large safety distances may lead to inefficient use of free space and reduced planning efficiency. Conclusions: These improvements effectively overcome the traditional tradeoffs between path length, motion smoothness, and energy efficiency, offering a solution that combines theoretical innovation with engineering practicality to enhance UAV mission endurance and operational safety.
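A toy version of the energy evaluation between candidate strategies for a start-end height difference, in the spirit of the flyover-versus-climb comparison above; the power and speed constants are invented for illustration and stand in for the paper's energy consumption model:

```python
import math

# Illustrative power draws (W) and speeds (m/s); not measured values.
P_CRUISE, P_CLIMB, P_SLANT = 200.0, 350.0, 260.0
V_H, V_Z, V_SLANT = 10.0, 3.0, 8.0

def energy_climb_then_cruise(d, h):
    """Climb vertically by h, then cruise the horizontal distance d."""
    return P_CLIMB * h / V_Z + P_CRUISE * d / V_H

def energy_slant(d, h):
    """Fly the direct slanted path of length sqrt(d^2 + h^2)."""
    return P_SLANT * math.hypot(d, h) / V_SLANT

d, h = 100.0, 30.0   # hypothetical horizontal and vertical offsets, m
e1 = energy_climb_then_cruise(d, h)
e2 = energy_slant(d, h)
best = "slant" if e2 < e1 else "climb-then-cruise"
print(round(e1, 1), round(e2, 1), best)
```

The planner would evaluate such costs per candidate path point and keep the minimum-energy option before B-spline smoothing.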
Objective: In modern chemical production, process monitoring is important for ensuring operational safety, improving product quality, and enhancing economic benefits. In recent years, the deep integration and widespread application of distributed control systems, the Internet of Things, and large-scale data storage infrastructure have significantly improved the capability to acquire industrial process data in real time. Within this context, data-driven process monitoring systems have gradually become a major branch of research in the chemical production field because they can operate without relying on mechanistic models and have strong capabilities for learning from historical data. However, practical industrial processes generally exhibit significant non-stationary characteristics. Unlike stationary processes, the statistical characteristics of non-stationary process variables change over time, making it difficult to effectively distinguish between actual process faults and normal non-stationary variations using traditional multivariate statistical process monitoring methods, such as principal component analysis (PCA). Consequently, the rate of false alarms increases, the fault detection sensitivity declines, and the reliability and accuracy of monitoring systems are severely affected. Furthermore, complex physicochemical reactions and production equipment result in nonlinear relationships among process variables, which further complicates process monitoring. Conventional linear methods are inadequate for capturing the complex nonlinear interactions between variables, resulting in unsatisfactory performance of models and monitoring systems. Methods: To address these challenges, an integrated process monitoring method that combines stationary subspace analysis (SSA) with kernel principal component analysis (KPCA) is proposed in this study. First, SSA is used to process the original data, decomposing it into stationary and non-stationary subspaces. 
By extracting and retaining the stationary components, the approach effectively eliminates interference caused by non-stationary trends and supplies data with stable statistical characteristics for subsequent analysis. The processed stationary data are then input into the KPCA model. Using a kernel function, the data are implicitly mapped into a high-dimensional feature space, where linear PCA is performed, substantially enhancing the ability to capture complex nonlinear relationships. This monitoring strategy effectively overcomes the limitations of conventional methods in handling both non-stationarity and nonlinearity. The effectiveness of the proposed method was validated by application in an industrial case study involving continuous catalytic reforming. Results: SSA successfully separated the stationary source signals, providing an ideal input for KPCA, fully leveraging its advantages in nonlinear feature extraction. The proposed method achieved effective fault detection while maintaining a low false-alarm rate. Comparative experiments with traditional methods, such as PCA and cointegration analysis, further highlight the superiority of the proposed approach. Conclusions: Conventional methods are ineffective for handling the combined effects of non-stationarity and nonlinearity and thus exhibit limited fault identification capability and high false-alarm rates. In contrast, the proposed method maintains an extremely low false-alarm rate under normal operating conditions while enabling rapid and accurate alarms, significantly improving the precision and reliability of process monitoring, demonstrating superior overall monitoring performance. Such improvements in practical industrial applications can greatly reduce unnecessary production interventions and shutdowns caused by false alarms, avoiding substantial economic losses and providing reliable technical support for achieving safe, efficient, and stable production.
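The KPCA step described above (implicit mapping via a kernel, then linear PCA in feature space) can be sketched with numpy; here an RBF kernel with an assumed width is applied to random data, whereas the study would feed in the SSA-extracted stationary components:

```python
import numpy as np

def kpca(X, n_components=2, gamma=0.5):
    """Minimal kernel PCA sketch: RBF kernel matrix, double centering
    in feature space, then projection onto the leading eigenvectors."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one   # center the kernel
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]  # top components
    alphas = vecs[:, idx] / np.sqrt(vals[idx])   # standard normalization
    return Kc @ alphas                           # projected scores

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))                 # stand-in process data
Z = kpca(X)
print(Z.shape)
```

Monitoring statistics such as T² and SPE would then be computed on these scores against control limits learned from normal operation.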
Objective: Chemical plant layout design involves inherently high levels of hazards, strong constraint coupling, and conflicting multi-objective requirements, making it a long-standing challenge in process systems engineering. Traditional layout planning methods rely heavily on expert knowledge, deterministic rules, and heuristic optimization, which are often insufficient for dealing with high-dimensional, nonlinear, and nonconvex design spaces. To address these limitations and improve the intelligence and automation of hazardous chemical plant layout design, this study proposes a generative optimization framework named SAC-GEN, based on deep reinforcement learning and agent-environment interactive learning. The goal is to enable the autonomous generation of safe, compliant, and cost-effective plant layouts, especially for facilities involving flammable and explosive materials. Methods: The proposed SAC-GEN framework integrates the Soft Actor-Critic (SAC) algorithm as the core decision-making kernel, leveraging its entropy-regularized policy to ensure stable learning and effective exploration in continuous action spaces. A domain-informed simulation environment was constructed to embed fundamental chemical engineering design knowledge into the reinforcement learning process. A multilayer state representation scheme was developed to describe equipment geometric characteristics, relative spatial relationships, explosion hazard levels, pipeline connectivity, and layout feasibility. To ensure realistic, practical layout outcomes, multiple engineering constraint mechanisms were incorporated, including equipment boundary restrictions, nonoverlapping spatial feasibility checks, and layout invalidation penalties. A dynamic reward-shaping strategy, combining safety performance, spatial rationality, and economic indicators, was designed to guide the agent toward balanced, high-quality layouts under trade-off conditions. 
For safety modeling, a TNT-equivalent explosion model was used to calculate the blast impact radius of hazardous units, and a quantitative risk diffusion model was implemented to simulate the attenuation of explosion energy across the plant area. In addition, a domino-effect propagation mechanism was developed to capture secondary explosions triggered by equipment-to-equipment impact. In this mechanism, the ignition sequence evolves dynamically based on spatial adjacency and blast-wave impact magnitude, enabling evaluation of both individual explosion consequences and cascading failure risks frequently observed in chemical industrial accidents. For economic evaluation, a hybrid pipeline routing algorithm based on axis-aligned (HV) path planning was constructed to compute material transfer paths, pipeline lengths, and connection complexity between units. This algorithm provides a practical economic indicator for layout rationality. By integrating these mechanisms, SAC-GEN achieves an intelligent mapping from safety-economic design objectives to spatial layout solutions. Results: The framework was validated through a case study on a 100 m × 80 m sulfuric acid alkylation unit. Ten representative equipment items, including a reactor, tanks, distillation columns, a compressor, and separation systems, were modeled with corresponding explosion hazard levels and logistics flow connections. The reinforcement learning model autonomously generated multiple feasible layout solutions, which were then analyzed from three perspectives: minimum explosion risk, minimum pipeline length, and overall optimization. Results showed that the safety-oriented layout substantially reduced the blast impact zone and eliminated domino risks by spatially isolating high-energy units. The economy-oriented layout reduced the total pipeline length by more than 20% and improved material transfer efficiency, albeit at the expense of reduced safety margins. 
The comprehensive optimal solution achieved a desirable balance between inherent safety and economic performance, demonstrating SAC-GEN's capability to navigate multi-objective conflicts and produce practically adoptable layouts. Conclusions: The SAC-GEN framework provides a systematic and scalable methodology for intelligent layout generation in hazardous chemical plants. It successfully handles complex spatial constraints, nonlinear safety-economy relationships, and large continuous design spaces that challenge conventional methods. The approach significantly reduces dependence on expert knowledge, enhances inherent safety performance, and improves decision-making efficiency. Future work will focus on integrating digital twin technology, real-time risk assessment, and online optimization to support industrial deployment within smart chemical plant design and operation systems.
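The TNT-equivalent blast calculation can be sketched via scaled distance Z = R / W^(1/3) and an empirical overpressure fit; the coefficients below are one common far-field approximation from the blast literature (coefficients vary across references, and this is not necessarily the study's model), and the charge mass and distances are hypothetical:

```python
def blast_overpressure_kpa(mass_tnt_kg, distance_m):
    """TNT-equivalent sketch: scaled distance, then an empirical
    side-on overpressure fit (bar), converted to kPa. Illustrative
    coefficients; validity is limited to the far field."""
    z = distance_m / mass_tnt_kg ** (1.0 / 3.0)      # scaled distance, m/kg^(1/3)
    dp_bar = 0.975 / z + 1.455 / z**2 + 5.85 / z**3 - 0.019
    return dp_bar * 100.0

# Hypothetical hazardous unit: 500 kg TNT equivalent, receptors at 20 m and 50 m.
for r in (20.0, 50.0):
    print(r, round(blast_overpressure_kpa(500.0, r), 1))
```

A layout optimizer of the kind described would compare such overpressures against damage thresholds to score candidate equipment placements.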
Objective: High arch dams impose stringent requirements to ensure safety, requiring robust bearing capacity, deformation control, and resistance to seepage failure. The stability of the dam foundation serves as the cornerstone of the entire arch dam system. During operation, the enormous thrust generated by arch abutments acts on the dam-foundation interface, potentially inducing instability risks such as macroscopic fractures and shear sliding, particularly in weak foundation zones. These risks, if left unchecked, can compromise dam safety and may trigger catastrophic failure. Addressing weak zone reinforcement design in complex dam foundations poses a significant challenge, as no standardized system currently exists for prioritizing reinforcements or quantifying stability evaluation indicators. Methods: To address this gap, this study proposes an energy-based method for stability evaluation and reinforcement design of weak dam foundation zones. A stability evolution analysis model was established using energy dissipation rate and domain integral variation, enabling the identification of critical weak zones and their evolutionary patterns. The study employed a three-dimensional numerical model of the arch dam-foundation system, accounting for complex geological factors such as faults, abutment slopes, and dam geometry. A thermodynamically driven creep constitutive model with internal variables was employed to conduct three-dimensional numerical simulations, revealing the stability evolution process of weak foundation zones. By analyzing energy dissipation rate curves and domain integrals, critical moments (marked by peak dissipation rates) and vulnerable areas (highlighted by energy concentration zones) were pinpointed. 
This method was then applied to parallel fault groups in a high arch dam foundation, with the reinforcement effectiveness analyzed in terms of energy dissipation rates, dam deformation, fault yield zones, and results from comparative testing using the super-water unit weight method. Results: Results indicate that energy dissipation rates and domain integrals for abutment faults initially increased rapidly after reservoir impoundment, gradually decreased, and eventually stabilized. The stability evolution of dam foundation faults under impoundment exhibits distinct time-dependent behavior, progressing through three phases: instability, transition, and stabilization. A significant observation is the delayed occurrence of peak energy dissipation rates in downstream faults, reflecting a spatiotemporal hysteresis in arch thrust transmission. During normal operations, the thrust from the arch extends its influence on deep foundation stability to a distance approximately twice the width of the arch abutment. However, its influence on downstream stability extends to 2-3 times the abutment width. Comparative analysis using the super-water unit weight method demonstrated reduced dam deformation, improved fault yield zone distribution, and significant decreases in energy dissipation rates and domain integrals for critical faults after reinforcement. Conclusions: The proposed method reveals spatiotemporal hysteresis in arch thrust transmission and its disturbance on structural stability. For multifault dam foundations, upstream faults exhibit less susceptibility to hydraulic disturbances when compared to downstream faults. Weak zones in downstream faults are primarily concentrated near their intersections with the dam abutment as well as along the strike direction. The f123 and f120 faults on the left bank were identified as critical to global stability, with key reinforcement areas at elevations of 2 440-2 470 m (f123) and 2 395-2 425 m (f120). 
Targeted reinforcement measures effectively enhanced fault and foundation stability, significantly improving the overall stability of the arch dam-foundation system.
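The abstract identifies critical moments as peaks of the energy-dissipation-rate curve. The sketch below is illustrative only, not the paper's code: the time axis, the synthetic dissipation curve, and the trapezoidal cumulative-energy integral (a crude stand-in for a domain integral) are all assumed values.

```python
# Illustrative sketch (not the paper's code): locate the critical moment of a
# weak zone as the peak of its energy-dissipation-rate curve, and accumulate
# dissipated energy with the trapezoidal rule. The curve below is synthetic:
# a rapid rise after impoundment followed by decay toward a plateau.
import math

times = [float(t) for t in range(0, 121, 5)]       # months after impoundment
rates = [t * math.exp(-t / 20.0) for t in times]   # synthetic dissipation rate

# Critical moment: the time of peak dissipation rate.
peak_idx = max(range(len(rates)), key=rates.__getitem__)
critical_time = times[peak_idx]

# Cumulative dissipated energy (trapezoidal rule).
energy = sum((rates[i] + rates[i + 1]) * (times[i + 1] - times[i]) / 2.0
             for i in range(len(times) - 1))
```

For this synthetic curve the peak, and hence the assumed critical moment, falls at 20 months; on real monitoring data the same two lines would be applied to each fault's dissipation series.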
Objective: Blue spaces have gradually gained recognition for their positive impact on human mental and physical well-being. However, existing works have depended heavily on subjective questionnaires and lack objective and quantitative evidence, especially regarding the effects of specific spatial scales of water environments on brain activity. Methods: To address this gap, this study uses virtual reality (VR) technology to construct immersive waterbody environments with systematically varied visual and auditory properties, which enables controlled experimental exposure to different water space scales. A total of 52 healthy participants aged 18-36 from Tsinghua University experienced 35 water scenarios characterized by five levels of visible water area (0%, 30%, 50%, 80%, and 100%) and seven flow velocity levels (0-3.0 m/s). During the VR exposure, two neurophysiological indicators—electroencephalogram (EEG) alpha power and heart rate variability (HRV)—were concurrently recorded to reflect the cognitive and autonomic brain states of the participants. EEG alpha activity, which is associated with relaxation and creative ideation, and HRV, which is an index of emotional regulation and adaptive capacity, served as core outcome measures. After signal preprocessing and normalization, Gaussian process regression was adopted to model the nonlinear coupling between water environment features and physiological responses. Results: Results revealed significant interindividual variability in responses to water scale. For the majority of participants (Class I), moderate visible water areas (10%-30%) combined with low flow velocities (0.5-1.5 m/s) elicited the most favorable neurophysiological responses, with EEG alpha power and HRV values increasing beyond resting baseline levels. 
However, these values decreased substantially in scenes with excessively large water areas (>50%) or higher flow velocities, which suggests that overstimulation from water features may suppress cognitive readiness and emotional stability. By contrast, a minority group (Class II) displayed the opposite pattern, exhibiting stronger EEG and HRV responses under conditions of larger water areas and either very low or high flow speeds. This divergence emphasizes the important role of individual backgrounds in shaping responses to blue space. Furthermore, a support vector machine classification model was developed based on the demographic and environmental background data of participants (including birthplace precipitation, humidity, and surface water ratio), which predicted individual response categories with an accuracy of 95%. In addition, a single-factor analysis of water sound levels disclosed that moderate auditory stimuli (~30 dB) improved EEG alpha activity even in the absence of visual water elements, which reinforces the cognitive benefits of natural soundscapes and implies potential for non-visual design interventions. Conclusions: Overall, this study constructs a robust experimental and modeling framework to quantify the neurocognitive impact of waterbody scale, which offers new insights into the modulating mechanism of specific aquatic features on brain states in dynamic and individualized ways. The findings show that the restorative and creativity-enhancing effects of blue space are neither universal nor linear but rather depend on environmental parameters and individual characteristics. Thus, these outcomes challenge conventional assumptions and highlight the need for tailored blue space design. 
The proposed method provides valuable scientific evidence for optimizing urban water landscapes not only for aesthetic or ecological purposes but also as cognitive infrastructures that support mental health, emotional resilience, and innovation across diverse populations and geographic contexts.
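The Gaussian process regression step can be sketched in miniature. This is a minimal 1D sketch under stated assumptions, not the study's implementation: the RBF length scale, the jitter term, and the sine-shaped "response curve" standing in for a normalized physiological index versus flow velocity are all hypothetical.

```python
# Minimal 1D Gaussian process regression sketch (illustrative only): an RBF
# kernel models a nonlinear response, e.g. a normalized physiological index
# vs. flow velocity. Training data here are synthetic (a sine curve).
import math

def rbf(a, b, length=0.6):
    """Squared-exponential kernel."""
    return math.exp(-(a - b) ** 2 / (2.0 * length ** 2))

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]   # flow velocity, m/s
ys = [math.sin(x) for x in xs]             # synthetic normalized response
K = [[rbf(xi, xj) + (1e-8 if i == j else 0.0)   # jitter for stability
      for j, xj in enumerate(xs)] for i, xi in enumerate(xs)]
alpha = solve(K, ys)                       # alpha = K^{-1} y

def predict(x):
    """GP posterior mean at a new input."""
    return sum(a * rbf(x, xi) for a, xi in zip(alpha, xs))
```

The posterior mean interpolates the training points and gives smooth nonlinear predictions between them, which is the property the study exploits when mapping water-scene features to EEG/HRV responses.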
Objective: Water supply systems with branch pipes, relying exclusively on valve control, frequently demonstrate deficiencies in effectively mitigating water hammer during valve closure. This inadequacy often results in transient system pressures surpassing safety thresholds, thereby compromising operational safety. Consequently, there is an imperative for research on the combined protection of bidirectional surge tanks and valves in such systems. However, the coordinated control of tanks and valves involves numerous variables and strongly coupled nonlinear relationships, in which parameters interact, rendering it a complex optimization problem. In light of the challenges associated with meeting multiple safety requirements concurrently, it is essential to conduct a comprehensive investigation into the synergistic mechanisms between tanks and valves and to formulate a corresponding optimization and decision-making method that will facilitate the attainment of globally optimal control strategies. Methods: The present study developed an integrated optimization model for the coordinated control of tanks and valves. The model treats the control parameters of both the valves and the bidirectional surge tank as decision variables. The objective functions were defined as minimizing the extreme pressure values in the system and shortening the valve closure duration. A novel optimization and decision-making method was proposed, which utilizes a large-scale multi-objective evolutionary algorithm with directed sampling combined with the technique for order preference by similarity to ideal solution (LMOEA-DS-TOPSIS). To solve the optimization model, LMOEA-DS was employed, and TOPSIS was applied to select the optimal solution from the generated Pareto front. 
Results: An investigation into a particular water supply project in Shandong, China, revealed that the combined control of the main pipeline's end valve and the branch pipe's end valve was inadequate in ensuring safety compliance. This finding necessitated further optimization of the subsequent tank-valve joint control strategy. To address this challenge, the LMOEA-DS algorithm was employed to solve the established multi-objective optimization model for tank-valve coordination. The parameters were set as follows: 50 individuals, 30 guiding solutions, and a total population size of 8 000. The hypervolume (HV) and spacing (SP) metrics were 32 062.6 and 0.527 8, respectively, indicating adequate convergence performance. Moreover, the resulting Pareto front demonstrated considerable spans of 14.62 m, 4.36 m, and 240 s across the three objective dimensions, with solutions exhibiting uniform distribution and adequate accessibility. This outcome suggests that widely distributed Pareto solutions can be obtained within a relatively short computational time. The optimization effect is particularly noteworthy in the context of the positive pressure protection objective. The collective analysis of these results substantiates the effectiveness of the LMOEA-DS algorithm in addressing the proposed model and validates the rationality of the parameter settings. Subsequently, employing a TOPSIS analysis to select the optimal tank-valve joint scheme from the Pareto front resulted in reductions of 21.25% and 47.72%, respectively, in the amplitude of the maximum and minimum pressure envelope lines. Furthermore, a 5.43% shortening of the total valve closure time was demonstrated. This outcome significantly enhanced the operational stability of the system and confirmed the superiority of the LMOEA-DS-TOPSIS optimization and decision-making method. 
Conclusions: The results verify that the synergistic strategy employing a bidirectional surge tank and valves exhibits significantly superior effectiveness in safeguarding against transient processes in comparison to conventional single-method approaches. Moreover, the outcomes corroborate the effectiveness of the proposed LMOEA-DS-TOPSIS optimization and decision-making method for coordinated tank-valve control. The research findings not only successfully achieve effective transient protection for water supply systems with branch pipes but also provide an innovative solution for the design and optimization of water hammer protection systems in the field of engineering.
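The TOPSIS stage of LMOEA-DS-TOPSIS admits a compact sketch. The candidate schemes, the equal weights, and treating all three objectives (maximum pressure, minimum-pressure amplitude, closure time) as cost-type criteria below are assumptions for illustration, not values from the study.

```python
# Minimal TOPSIS sketch (illustrative, not the paper's code): rank Pareto
# solutions under three cost-type objectives, e.g. maximum pressure (m),
# minimum-pressure amplitude (m), and valve closure time (s).
import math

def topsis(matrix, weights, cost=(True, True, True)):
    n = len(matrix[0])
    # Vector-normalize each column, then apply weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    V = [[weights[j] * row[j] / norms[j] for j in range(n)] for row in matrix]
    # Ideal best/worst: for cost criteria, smaller is better.
    best = [min(col) if cost[j] else max(col) for j, col in enumerate(zip(*V))]
    worst = [max(col) if cost[j] else min(col) for j, col in enumerate(zip(*V))]
    scores = []
    for row in V:
        d_best = math.sqrt(sum((row[j] - best[j]) ** 2 for j in range(n)))
        d_worst = math.sqrt(sum((row[j] - worst[j]) ** 2 for j in range(n)))
        scores.append(d_worst / (d_best + d_worst))  # relative closeness
    return scores

pareto = [[62.0, 14.0, 300.0],   # hypothetical candidate schemes
          [58.0, 18.0, 360.0],
          [55.0, 12.0, 280.0]]
scores = topsis(pareto, [1 / 3, 1 / 3, 1 / 3])
best_scheme = max(range(len(scores)), key=scores.__getitem__)
```

Here the third scheme dominates the others in every criterion, so its relative closeness is 1 and TOPSIS selects it; on a real Pareto front no solution dominates, and the closeness score arbitrates the trade-off.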
Objective: The YMM landslide is the largest landslide accumulation body nearest to the dam in the near-dam reservoir section of the Three Gorges Reservoir area. The landslide is located on the north bank of the Yangtze River in Zigui County, Yichang City, Hubei Province, with a surface area of approximately 0.48 km² and a total volume of approximately 2 000 × 10⁴ m³. It is situated 17 km upstream of the Three Gorges Dam. Owing to its massive scale, sensitive location, and severe consequences of potential instability, the YMM landslide has attracted significant attention. Nearly 20 years of observation data indicate that although the landslide's deformation has been slow, it has continued without convergence. Methods: This study comprehensively considers the relationships among geological conditions, external influencing factors, and deformation characteristics of the landslide. A stepwise linear regression method is applied to analyze the observational data. Combined with a mechanical model of the hydrodynamic triggering mechanism of reservoir bank landslide deformation, the study quantitatively decomposes the roles and effects of various external triggering factors in the landslide's deformation process. Based on the phase-transition time nodes of these effects, the deformation evolution process due to reservoir impoundment is divided into three stages. Results: The study shows that the YMM landslide was stable before the impoundment. The reservoir impoundment led to its reactivation, which was followed by a complex deformation adjustment process. In the first stage (June 2003—September 2006), the landslide was reactivated in a retrogressive mode by a significant rise in the reservoir water level. In the second stage (October 2006—September 2018), the deformation mode shifted from front retrogressive to overall creep deformation, mainly due to the deterioration of the landslide rock-soil medium caused by reservoir water infiltration. 
The deformation rate gradually decreased as the deterioration effect weakened, and reservoir water level fluctuations had a more significant influence than seasonal rainfall during this period. In the third stage (October 2018—February 2024), the deterioration process of the physical and mechanical properties of the rock-soil medium induced by water-rock interaction was essentially complete. The landslide adapted to changes in the groundwater environment, resulting in a further significant reduction in the overall deformation rate. During this stage, the influence of seasonal rainfall on landslide deformation exceeded that of reservoir water level fluctuations. Considering the geological conditions, landslide characteristics, and deformation patterns, the time-dependent deformation, dominated by convergent creep, indicates that the landslide is generally stable. However, extreme rainfall remains a key triggering factor for potential local instability of the YMM landslide. Conclusions: This study provides a robust framework for interpreting the long-term deformation evolution of large-scale reservoir landslides by integrating monitoring data, statistical modeling, and mechanical analysis. Identifying stage-specific deformation patterns and dominant triggers enhances the understanding of landslide behavior in response to external forcing. These insights are crucial for improving early warning systems and developing targeted mitigation strategies in similarly high-risk reservoir environments.
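One forward step of the stepwise-regression idea, ranking candidate triggering factors by explanatory power, can be sketched as follows. The series are synthetic and hypothetical, constructed so the reservoir-level term dominates, and the single-predictor R² criterion is a simplification of a full stepwise procedure with entry/exit tests.

```python
# Illustrative forward-selection step (not the study's code): among candidate
# triggering factors, pick the single predictor with the highest R^2 against
# a displacement-rate series. All data below are synthetic.
def r_squared(x, y):
    """Coefficient of determination for a one-variable linear fit."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

reservoir_level = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]   # synthetic level change
rainfall        = [0.3, 2.1, 0.9, 1.7, 0.2, 1.1]   # synthetic rainfall
# Synthetic response built mainly from the reservoir-level term.
displacement    = [2.1 * h + 0.2 * r for h, r in zip(reservoir_level, rainfall)]

candidates = {"reservoir_level": reservoir_level, "rainfall": rainfall}
best_factor = max(candidates, key=lambda k: r_squared(candidates[k], displacement))
```

In a full stepwise procedure this step repeats on the residuals, adding or dropping factors by significance; the sketch shows only the ranking logic that identifies the dominant trigger in each stage.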
Objective: In recent years, China's high-speed train industry has developed rapidly. As key core components, axle box bearings remain difficult to localize because of insufficient fatigue performance. At present, high-speed train bearings rely entirely on imports, and the insufficient fatigue life of domestically manufactured bearing rings has become an urgent problem that must be addressed. The fatigue performance of axle box bearing rings is mainly determined by the metallurgical quality of the bearing steel and the heat treatment process. G20CrNi2MoA steel is a high-quality carburized bearing steel for manufacturing the inner and outer rings of this type of bearing. Although domestic smelting levels have gradually caught up with international levels, there is still a significant gap in carburizing heat treatment technology, and the hardness distribution of the carburized layer is the key factor determining the rolling contact fatigue life of carburized steel bearings. To address this dilemma, this study takes the outer ring of the domestic G20CrNi2MoA bearing as its core research object, aiming to explore the influence of the hardness distribution of the carburized layer on the bearing's rolling contact fatigue performance and to optimize the process. Methods: A systematic research system of experimental verification, simulation, and intelligent optimization was constructed. First, the key factors affecting performance were determined through a rolling contact fatigue test, after which the formation mechanism of the carburized layer was analyzed via simulated heat treatment. Subsequently, a model combining a genetic algorithm (GA) and a backpropagation (BP) neural network was introduced to optimize the parameters. Results: Using this system, innovative results were achieved. 
The results confirm that a carburized layer with a moderate depth is key to extending the fatigue life of the outer ring, and the model provides a more accurate solution for the hardness distribution curve of the optimal carburized layer (surface hardness: 693 HV; depth of carburized layer: 1.71 mm). Based on this reverse optimization, a complete carburized heat treatment scheme is obtained. The key parameters include a long infiltration time of 16.5 h, a diffusion temperature of 930 ℃, a diffusion time of 6.54 h, a carbon diffusion potential of 1.05%, and an isothermal time of 3.6 h. Targeted tests verify that the rolling contact fatigue life of the outer ring of the bearing treated using this process is approximately 4.7% higher than that of the existing domestic bearing outer ring. Conclusions: First, there is a significant nonlinear correlation between the rolling contact fatigue performance of the domestic G20CrNi2MoA bearing outer ring and the hardness distribution of the carburized layer, the depth of the carburized layer, and other parameters. Accurately regulating these parameters is key to improving performance. Second, the model combining a GA and a BP neural network provides an efficient and accurate technical approach for optimizing the carburizing heat treatment of bearings, thereby greatly reducing the cost and cycle time of traditional trial-and-error methods. Third, the carburizing heat treatment process proposed in this study effectively improves the fatigue performance of domestic bearing rings. It provides important support, with both theoretical value and practical guiding significance, for addressing the technical gap in the heat treatment of high-speed bearings in China, breaking the monopoly of foreign technology, and promoting the localization of high-speed train bearings. It also lays a foundation for subsequent research and the development of higher-speed grade bearings.
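The GA-plus-surrogate pattern behind the optimization can be sketched in a few lines. This is an illustrative sketch only: a hypothetical quadratic surrogate stands in for the trained BP neural network, centered on the optimum reported in the abstract (surface hardness 693 HV, case depth 1.71 mm); the bounds and GA settings are also assumptions.

```python
# Illustrative real-coded genetic algorithm minimizing a *hypothetical*
# quadratic surrogate of the carburized-layer objective. In the study a
# trained BP neural network plays the surrogate role; the quadratic below
# merely stands in for it.
import random

random.seed(0)  # reproducible run

def surrogate(h, d):
    """Hypothetical fitness (lower is better) standing in for the BP network."""
    return (h - 693.0) ** 2 / 1000.0 + (d - 1.71) ** 2 * 100.0

BOUNDS = [(600.0, 800.0), (1.0, 2.5)]   # surface hardness (HV), case depth (mm)

def ga(pop_size=40, gens=150, mut_rate=0.1):
    pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda ind: surrogate(*ind))
        nxt = pop[:2]                               # elitism: keep the best two
        while len(nxt) < pop_size:
            p1, p2 = random.sample(pop[:20], 2)     # truncation selection
            child = [(a + b) / 2.0 for a, b in zip(p1, p2)]  # blend crossover
            for i, (lo, hi) in enumerate(BOUNDS):   # Gaussian mutation
                if random.random() < mut_rate:
                    child[i] += random.gauss(0.0, 0.05 * (hi - lo))
                    child[i] = min(max(child[i], lo), hi)
            nxt.append(child)
        pop = nxt
    return min(pop, key=lambda ind: surrogate(*ind))

best_h, best_d = ga()
```

The design choice mirrors the abstract's rationale: once a surrogate replaces physical trials, the GA can search the process-parameter space cheaply instead of by trial and error.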
Objective: The mobility of the wrist joint is critical to the accuracy and stability of hand manipulation. Individuals with movement disorders, such as stroke survivors, require repetitive rehabilitation to restore wrist function. Rigid exoskeleton rehabilitation robots are limited by various issues, including misalignment with anatomical joints and high inertia. Cable-driven robots, with their flexible structures, offer distinct advantages including reduced weight, improved human-robot interaction, and better joint alignment. Consequently, they mitigate the limitations of rigid exoskeleton systems. To facilitate accurate wrist rehabilitation, a 3 degrees of freedom cable-driven wrist rehabilitation robot (CDWRR) is proposed. Methods: An adaptive configuration design is developed for a 3 degrees of freedom cable-driven mechanism to meet the functional demands of wrist rehabilitation, with a focus on user comfort and wearability. The open-structure CDWRR utilizes the human skeleton as a support structure and models the wrist joint as a constrained hinge. A kinematic model is established, and both the inverse position and inverse velocity solutions are derived. Subsequently, a static model is constructed, and cable forces are optimized using quadratic programming to ensure positive, continuous tension within safe limits. The wrench-feasible workspace is determined by integrating cable force constraints with a boundary search method. Motion dexterity is evaluated using the condition number of the Jacobian matrix. Finally, an experimental platform is developed, and a passive compliant training strategy—combining passive motion and admittance control—is proposed for early-stage rehabilitation. The feasibility of the configuration and control algorithms is validated experimentally. 
Results: The proposed 3 degrees of freedom CDWRR achieved full radial/ulnar deviation and covered 87.7% and 38.7% of the activities of daily living motion range for extension/flexion and pronation/supination, respectively. Within the workspace, the condition number ranged from 1.6 to 4.0, indicating good dexterity. Under external torque, the robot's workspace shifted in the direction of the applied force. During passive compliant training, when the interaction torque exceeded a set threshold, the robot demonstrated compliant behavior to ensure user safety. The cable length and force remained continuous and stable throughout motion, with no significant fluctuations, confirming the system's operational stability. Conclusions: The proposed 3 degrees of freedom CDWRR incorporates an adaptive configuration design that offers lightweight construction, improved compliance, and high wearability. The experimental results demonstrate that the robot satisfies the key rehabilitation requirements for wrist range of motion and dexterity. The passive compliant training experiments validated the feasibility and applicability of the wrist rehabilitation mechanism, providing an effective solution for early-stage wrist joint rehabilitation training. The research results provide a reference for the configuration design and motion performance analysis of cable-driven wrist rehabilitation robots.
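The positive-tension requirement that motivates the quadratic-programming step can be illustrated on the smallest possible case: one antagonistic cable pair about a single wrist axis. This scalar sketch is not the paper's formulation; the pulley radii, the 0.4 N·m torque, and the 5 N lower bound are all hypothetical.

```python
# Illustrative tension distribution for one antagonistic cable pair about a
# single wrist axis (a scalar stand-in for the paper's quadratic-programming
# formulation). Torque balance: r1*f1 - r2*f2 = tau, with f1, f2 >= f_min;
# we minimize f1^2 + f2^2. For tau > 0 the unconstrained optimum drives the
# antagonist tension negative, so it is clamped to the lower bound and the
# agonist cable carries the torque.
def tension_pair(tau, r1=0.02, r2=0.02, f_min=5.0):
    # KKT solution of min f1^2 + f2^2 s.t. r1*f1 - r2*f2 = tau:
    #   f1 = r1*tau/(r1^2 + r2^2),  f2 = -r2*tau/(r1^2 + r2^2)
    denom = r1 ** 2 + r2 ** 2
    f1, f2 = r1 * tau / denom, -r2 * tau / denom
    if f2 < f_min:                     # clamp antagonist, re-solve for f1
        f2 = f_min
        f1 = (tau + r2 * f2) / r1
    if f1 < f_min:                     # symmetric case for negative torque
        f1 = f_min
        f2 = (r1 * f1 - tau) / r2
    return f1, f2

f1, f2 = tension_pair(0.4)             # hypothetical 0.4 N.m flexion torque
```

With three axes and more cables the same idea becomes a genuine QP over the cable-tension vector, which is why the paper solves it numerically rather than in closed form.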
Objective: Continuous silicon carbide (SiC) fiber-reinforced titanium matrix composites (TMCs) have become critical structural materials in aerospace because of their exceptional specific stiffness and strength. However, their anisotropic mechanical properties and complex interfacial failure modes pose notable challenges for damage prediction and structural reliability. This study addresses the critical knowledge gap regarding the multiscale fracture mechanisms of practical SiC fiber-reinforced TMCs containing hierarchical C/TiC/Ti interfacial architectures formed by hot isostatic pressing (HIP). Existing research predominantly focuses on idealized Ti/TiC systems; the crucial influence of pyrolytic carbon layers with turbostratic structures is neglected. Our work pioneers a comprehensive investigation into the mixed-mode fracture behaviors of carbon-rich (pyrolytic carbon/amorphous carbon) and TiC-dominated interfaces through atomic-scale modeling, providing essential parameters for the optimization of interfacial design against multiaxial failures. Methods: We developed a multiscale simulation framework combining molecular dynamics and interfacial mechanics analysis. Atomic models of SiC/C/Ti multilayer interfaces were constructed to replicate realistic HIP-generated microstructures. For pyrolytic carbon/amorphous carbon interfaces, the analytical bond-order potential (ABOP) was employed to simulate liquid quenching (8 000 K) and annealing (4 000 K). Turbostratic carbon configurations matching chemical vapor deposition (CVD) characteristics were generated. The Ti/TiC interfaces were modeled using the 2-nearest-neighbor modified embedded atom method (2NN-MEAM) potential to capture lattice mismatch (4%) and interdiffusion between α-Ti (0 0 0 1) and TiC (1 1 1) planes. The following two critical loading scenarios were simulated: (1) tensile separation (Mode Ⅰ) at a 0.5 Å/ps displacement rate and (2) shear deformation (Mode Ⅱ) at a 5 Å/ps sliding velocity. 
The NVT ensemble with a Nosé-Hoover thermostat was used to maintain a 300 K operating temperature. Fracture energy release rates were calculated through the J-integral analysis of traction-separation curves. Atomic bond evolution was quantified through polyhedral template matching and common neighbor analysis in OVITO software. Results: The pyrolytic carbon/amorphous carbon interfaces demonstrated distinct anisotropic fracture mechanisms as follows: (1) tensile loading caused the sequential fracturing of graphene-like layers (max traction: 7.36 GPa; Mode Ⅰ energy release rate: 7.87 J/m²), and (2) shear deformation induced 45° delamination through interlayer sliding (max shear: 4.53 GPa; Mode Ⅱ energy release rate: 15.50 J/m²). By contrast, the Ti/TiC interface exhibited superior tensile strength (12.8 GPa; Mode Ⅰ energy release rate: 8.71 J/m²) but unexpected shear-induced failure: (1) shear stress was concentrated at the Ti lattice defects rather than at the interface, and (2) 45° cleavage fracture in the Ti matrix (Mode Ⅱ energy release rate: 12.14 J/m²) revealed matrix failure. The crack propagation paths fundamentally differed between the interfaces. The pyrolytic carbon interfaces showed self-similar crack growth along weak van der Waals gaps, and the Ti/TiC interfaces displayed crystallography-dependent branching along (0 0 0 1) planes. Conclusions: This study establishes the quantitative correlation between HIP-processed interfacial architectures and fracture resistance anisotropy in SiC fiber-reinforced TMCs, bridging atomic-scale mechanisms to macroscopic composite performance. 
The following three key advances are achieved: (1) identification of pyrolytic carbon interfaces as the tensile weak link (10.6% lower critical energy release rate compared with Ti/TiC interfaces), (2) discovery of shear-induced matrix failure mechanisms through dislocation pileup and shear band formation overriding interfacial strength, and (3) development of process-informed traction-separation laws incorporating HIP temperature for multiscale modeling. The results fundamentally revise the conventional "weak interface" paradigm by demonstrating the load-dependent dominance of different interfacial layers—pyrolytic carbon governs tensile failure, whereas Ti matrix plasticity dictates the shear response. This study provides fundamental data and mechanistic insights for the subsequent development of multiscale simulation models for predicting spontaneous crack initiation in fiber-reinforced Ti matrix composites and addresses interfacial delamination under combined thermomechanical loading in aeroengine components.
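The link between traction-separation curves and the reported energy release rates is simple to show numerically: the critical energy release rate is the area under the curve. The sketch below is illustrative, not the study's J-integral code; it uses a synthetic triangular law with the abstract's 7.36 GPa peak traction, and the 2.14 nm final separation is an assumed value chosen so the area lands near the reported 7.87 J/m².

```python
# Illustrative sketch: critical energy release rate as the area under a
# traction-separation curve, via the trapezoidal rule. The triangular law
# here is synthetic. Unit check: 1 GPa x 1 nm = 1e9 Pa x 1e-9 m = 1 J/m^2,
# so integrating GPa over nm yields J/m^2 directly.
def energy_release_rate(separation_nm, traction_gpa):
    """Trapezoidal integral of traction (GPa) over separation (nm) -> J/m^2."""
    return sum((traction_gpa[i] + traction_gpa[i + 1])
               * (separation_nm[i + 1] - separation_nm[i]) / 2.0
               for i in range(len(separation_nm) - 1))

sep = [0.0, 0.30, 2.14]    # nm: elastic rise, then linear softening (assumed)
trac = [0.0, 7.36, 0.0]    # GPa: peak traction from the abstract
G_c = energy_release_rate(sep, trac)   # triangle area = 0.5 * 7.36 * 2.14
```

On molecular-dynamics output the same integral is evaluated over the full sampled curve rather than three points, which is what the J-integral analysis of the traction-separation data amounts to for a straight crack path.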
Objective: Environmental DNA (eDNA) technology is an emerging tool for the biological monitoring of aquatic ecosystems. It enables an efficient molecular analysis of the structural characteristics of aquatic communities by capturing DNA fragments freely present in water. This study focuses on the following objectives: (1) analyzing the spatial distribution patterns of winter zooplankton diversity and community composition in the Xiangjiaba section of the lower Jinsha River; (2) exploring the assembly processes and driving factors of winter zooplankton communities; (3) assessing the potential impacts of hydropower development on zooplankton diversity in this section during winter; and (4) identifying key environmental factors affecting zooplankton community structure and elucidating the regulatory mechanisms involved. Methods: This study applied eDNA metabarcoding to investigate winter zooplankton communities in the Xiangjiaba section of the lower Jinsha River. Species, phylogenetic, and functional diversity indices were calculated to evaluate α-diversity. β-diversity was partitioned into species turnover and nestedness components. Community composition variations were assessed by applying Bray-Curtis distance, principal coordinates analysis (PCoA), and permutational multivariate analysis of variance (PerMANOVA). Key taxa were identified using linear discriminant analysis effect size (LEfSe). Co-occurrence networks were constructed utilizing sparse correlations for compositional data (SparCC) to evaluate community structure and stability. Random forest models were employed to identify crucial environmental drivers shaping zooplankton diversity patterns. Results: The Xiangjiaba Hydropower Station significantly impacted the α-diversity of winter zooplankton in the lower Jinsha River. Specifically, the downstream area of the mainstream river exhibited markedly higher Chao1 richness and phylogenetic diversity than the upstream regions. 
Additionally, both indices gradually declined toward the dam, suggesting a "homogenization effect" caused by reservoir regulation. Regarding functional traits, functional richness (FRic) and functional divergence (FDiv) were markedly greater in the tributaries than in the mainstream, reflecting a greater ecological niche differentiation in less-regulated habitats. The composition of dominant zooplankton groups varied across different water body types. Protozoans dominated the downstream region of the mainstream, whereas copepods were predominant in the upstream tributaries. The upstream region of the mainstream exhibited moderate protozoan abundance levels but lacked a single dominant group. Co-occurrence network analysis revealed that the downstream area of the mainstream had a more complex and robust network structure, with higher connectivity and lower vulnerability, indicating enhanced community stability. At the river section scale, the β-diversity of the winter zooplankton was primarily driven by species turnover, with species replacement being the primary community assembly process. Tributaries exhibited significantly enhanced β-diversity compared to the mainstream, largely due to spatial isolation and heterogeneous environmental conditions. The upstream area of the mainstream, functioning as a transition zone between the tributaries and downstream, demonstrated greater environmental homogenization and reduced community dissimilarity. Conclusions: Hydrological dynamics (e.g., water depth, flow velocity, and water level fluctuations) and water quality (e.g., temperature and turbidity) are the main environmental factors influencing α-diversity patterns of winter zooplankton. Variations in nutrient levels (e.g., chlorophyll a) and water quality (e.g., conductivity and water temperature) are the key drivers of β-diversity and its components, particularly species turnover. 
These findings suggest that developing hydropower stations and associated environmental changes notably influence zooplankton community structure and assembly processes. Tributary inflow and dam-induced habitat modifications are critical in shaping spatial biodiversity patterns in regulated river systems.
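The Chao1 richness index used in the α-diversity comparison has a short closed form. The sketch below uses the bias-corrected variant of the estimator; the OTU abundance counts are hypothetical, standing in for reads per zooplankton OTU at one sampling site.

```python
# Chao1 richness estimator sketch (bias-corrected form):
#   Chao1 = S_obs + F1*(F1-1) / (2*(F2+1))
# where S_obs is observed richness, F1 the singleton count, and F2 the
# doubleton count. Abundances below are hypothetical OTU read counts.
def chao1(abundances):
    s_obs = sum(1 for a in abundances if a > 0)
    f1 = sum(1 for a in abundances if a == 1)    # singletons
    f2 = sum(1 for a in abundances if a == 2)    # doubletons
    return s_obs + f1 * (f1 - 1) / (2.0 * (f2 + 1))

site = [12, 7, 5, 2, 2, 1, 1, 1]                 # hypothetical OTU counts
richness = chao1(site)
```

The estimator adds to the observed richness a correction driven by rare taxa, so a site with many singletons (common in eDNA reads) is inferred to harbor additional undetected species, which is why Chao1 rather than raw counts is compared across river sections.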
Objective: The explosive growth in the number of beams and overall bandwidth requirements in broadband communication satellites has made onboard hybrid optical and electrical switching a critical technology for supporting large-scale and low-complexity switching systems. However, the current design of scalable optical switching for onboard hybrid optical and electrical switching, which relies on Clos networks and a decoupled design method, disregards the coordination among the various switching parts. This decoupled method results in redundant switching paths, inefficient mapping, and substantial increases in the number of optical switching units and interconnecting optical fibers. These inefficiencies strain the limited onboard transponder resources and hinder the scalability of future satellite systems. Methods: To address this issue, this study proposes a joint design method that combines expansion and merging. Firstly, the optical switching scale is expanded to meet the overall functional requirements, and the optical switching functions of down- and up-conversion parts are merged and executed in the down-conversion part, thereby reducing the number of optical switching mappings. This expansion results in the formation of a unified switching domain that increases flexibility in resource allocation and lays the foundation for merging. Secondly, the Clos network topology is used as a basis to achieve a part of the optical switching functionality through the reuse of electrical switching capabilities. This functional reuse enables the effective merging of optical switching modules, which reduces redundancy and minimizes intermodule communication overhead. Results: The analysis indicates that the number of optical switching units is reduced by 9.5% to 50% without increasing the scheduling complexity. The number of corresponding interconnecting optical fibers is reduced by between 1/8 and 3/8. 
Furthermore, the channel scale varies in response to demand changes, with a typical scenario involving the selection of half of the total broadband channels for fine-grained electrical switching. The effectiveness of the proposed methods is demonstrated in various cases under the corresponding scenarios. When the total number of channels is 84, the optical switching modules attain a reduction ratio of 50%, whereas that of interconnection fibers is 1/4. Moreover, the proportion of fine-grained switching in the total switching scale consistently varies based on the changes in demand. For specific cases, at a fine-grained switching ratio below 0.6, a considerable reduction in optical switching modules occurs, which highlights the effectiveness of the proposed method. Finally, the scheduling complexity introduced by the joint design is analyzed. Although the design introduces additional swap operations at the input and output channel levels during electrical switching, such operations cause no increase in the time complexity of the optical switching scheduling algorithm. Thus, the joint design maintains computational efficiency similar to that of traditional methods while achieving substantial improvements in physical resource savings. Conclusions: The proposed joint optimization method based on expansion and merging offers an effective and scalable solution for hybrid optical and electrical switching in broadband satellite communication systems. Through enhanced coordination across various switching parts and optimization of the utilization of optical and electrical resources, this method effectively addresses the challenges encountered in scaling Clos networks in onboard environments. Thus, it holds crucial promise in enabling future satellite networks that require high capacity, low complexity, and great flexibility in switching architecture design.
Significance: High flux reactors are research reactors characterized by ultrahigh neutron flux levels (typically ranging from 10¹⁴ to 10¹⁵ n/(cm²·s) or higher). These devices primarily generate neutron irradiation for fundamental and applied research, including nuclear fuel and material irradiation testing, radioisotope production, and neutron science experimentation. They serve as critical irradiation testing platforms and fundamental research devices in nuclear science and technology, playing an indispensable role in industry, agriculture, and medical applications. This study systematically examined the technical attributes of high flux reactors and comprehensively reviewed China's achievements in high flux reactor design, technological development, and multipurpose utilization. Progress: Diverging from nuclear power reactor designs that focus on optimizing energy conversion efficiency and operational stability, high flux reactors aim to maximize neutron fluxes in irradiation channels, providing a technical foundation for fundamental and applied research while achieving optimal neutron economy. Key technical domains in high flux reactor development include reactor core configuration design, primary process system engineering, irradiation facility integration, and auxiliary system optimization. China's high flux reactor development commenced in the 1960s, leading to landmark achievements, including the development of the High Flux Engineering Test Reactor, China Advanced Research Reactor, and China Mianyang Research Reactor. These facilities have achieved critical technological breakthroughs and diversified applications, contributing considerably to national nuclear energy advancement and socioeconomic development. The modern design philosophy of high flux reactors emphasizes advanced technology integration, functional versatility, and environmental sustainability.
Design optimization focuses on enhancing reactor performance metrics while maintaining a safety-economy balance and operational flexibility. Nevertheless, China's current high flux reactor designs show discernible limitations compared with international benchmarks, particularly in neutron flux intensity. For instance, the current maximum thermal neutron flux generated is ≤1.5×10¹⁵ n/(cm²·s), which limits the production of rare nuclides, such as ²⁵²Cf, ²⁴⁹Bk, and ²⁵³Es, through long transmutation chains and Pu/Am/Cm target irradiation. Further, insufficient irradiation testing capabilities in representative fast-neutron spectra hinder the development and qualification of nuclear fuel and materials for Gen-IV advanced nuclear energy systems. Limitations are also observed in terms of irradiation capacity, auxiliary systems (e.g., deficiency in post-irradiation processing and radiochemical separation facilities), and application diversity. These limitations constrain the progress of strategic nuclear science and technology frontiers, including advanced nuclear material development, high-specific-activity radioisotope production, and cutting-edge neutron science research. Conclusions and Prospects: Strategic recommendations for China's high flux reactor development are proposed considering three aspects: innovation-driven technological upgrading to improve pivotal technical parameters and reactor performance, coordinated infrastructure planning to achieve the efficient utilization of irradiation resources, and open resource-sharing mechanisms to better drive the development of nuclear technology and related industries. Through technical upgrading and resource integration, along with parallel research efforts in next-generation high flux reactor designs, these initiatives can substantially enhance technological sophistication and competitiveness and are expected to provide robust support for nuclear energy innovation and nuclear technology advancement.
Objective: The rising global energy demand and the pressing need to reduce carbon emissions have prompted the exploration of sustainable energy solutions. Energy tunnels, which incorporate heat exchange pipes within tunnel structures to harness geothermal energy, offer a promising technology that integrates structural and thermal functions. This study focused on the energy shield tunnel associated with the underground relocation of the Beijing East Sixth Ring Road, a key initiative aimed at transforming urban infrastructure while supporting China's "dual carbon" goals (peak carbon emissions by 2030 and carbon neutrality by 2060). The study addressed critical gaps in the design and thermal performance evaluation of energy tunnels, particularly under varying geological and climatic conditions. By conducting in situ experiments and developing analytical models, the study aimed to provide reliable methods for calculating heat exchange power, assessing structural responses, and optimizing system design for large-scale applications. Methods: A comprehensive experimental and analytical approach was employed to evaluate the thermal and structural performance of the energy shield tunnel. Two primary in situ testing methods were utilized: thermal performance tests and thermal response tests. These tests simulated extreme summer and winter conditions (35 ℃ and 5 ℃ inlet temperatures, respectively) to measure the tunnel's heat exchange power. Distributed fiber optic sensors were embedded in the tunnel's secondary lining to monitor temperature and strain variations during thermal cycles, ensuring accurate assessment of structural responses. Critical parameters, including heat exchange power, thermal influence radius, and temperature distribution, were derived using analytical solutions. For example, heat exchange power was calculated based on the temperature difference between inlet and outlet fluids, the mass flow rate, and the specific heat capacity of water.
The thermal influence radius was determined using a derived formula under constant heat flux conditions, validated against measured data. In addition, real-world engineering data were used to design a geothermal energy supply system for adjacent buildings, comparing its economic and environmental advantages with those of traditional air-source heat pump systems. Results: The experiments yielded significant results. Under extreme summer conditions (35 ℃ inlet temperature), the heat exchange power reached 45 W/m², whereas under extreme winter conditions (5 ℃ inlet temperature), it was -39 W/m², indicating effective energy extraction and storage. A linear relationship was established between the temperature difference (between the inlet fluid temperature and ground temperature) and heat exchange power, expressed as Q_d = 2.635ΔT_d. This formula proved adaptable to particular geological and airflow conditions. The thermal influence radius of the tunnel was calculated to be approximately 8 m after 120 days of operation, suggesting a minimum spacing of 16 m between adjacent energy tunnels to avoid thermal interference. Structural monitoring revealed negligible additional thermal stresses in the tunnel lining, with maximum strains of 155 με (tensile in summer) and 80 με (compressive in winter), confirming the safety of the integrated heat exchange system. The designed geothermal system demonstrated substantial economic and environmental benefits. For the South Management Zone (1 226.3 m²), the energy tunnel system reduced initial costs by 49% compared with traditional borehole systems and realized 31.6% annual energy savings over air-source heat pumps. Carbon emissions were reduced by 43 tons annually, with potential savings of 1 792 tons if implemented throughout the entire 10 km tunnel. Conclusions: This study provides a comprehensive framework for designing and evaluating energy shield tunnels, addressing thermal performance and structural integrity.
The derived formulas for heat exchange power and thermal influence radius serve as practical tools for engineers, promoting efficient system design across diverse conditions. The negligible structural effects confirmed the feasibility of retrofitting existing tunnels into energy tunnels without compromising safety. The energy tunnel system demonstrated significant economic and environmental advantages, aligning with global sustainability targets. To refine design methodologies and explore broader applications, future work should expand the dataset with long-term operational monitoring. By leveraging underground infrastructure for renewable energy, this research contributes to the advancement of smart, low-carbon urban development.
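Of the quantities above, only the fitted relation Q_d = 2.635ΔT_d and the mass-flow-based power balance are stated in the abstract. The sketch below pairs them with a classical infinite line-source model as a stand-in for the study's (unstated) derived radius formula; the assumed ground temperature (~18 ℃), soil conductivity, and diffusivity are illustrative assumptions, not reported values.

```python
import math

C_P_WATER = 4186.0  # specific heat capacity of water, J/(kg*K)

def heat_exchange_power(m_dot: float, t_in: float, t_out: float) -> float:
    """Q = m_dot * c_p * (T_in - T_out), in W. m_dot in kg/s; positive Q
    means heat injected into the ground (summer), negative means extracted."""
    return m_dot * C_P_WATER * (t_in - t_out)

def daily_power_per_area(delta_t_d: float) -> float:
    """Linear fit reported in the study: Q_d = 2.635 * ΔT_d (W/m^2), with
    ΔT_d the inlet-fluid-to-ground temperature difference in K."""
    return 2.635 * delta_t_d

def exp_integral_e1(u: float, terms: int = 60) -> float:
    """E1(u) via its convergent series (adequate for moderate u):
    E1(u) = -gamma - ln(u) + sum_{n>=1} (-1)^(n+1) u^n / (n * n!)."""
    gamma = 0.5772156649015329
    s, term = 0.0, 1.0
    for n in range(1, terms + 1):
        term *= -u / n   # term is now (-u)^n / n!
        s -= term / n
    return -gamma - math.log(u) + s

def line_source_delta_t(q_lin: float, lam: float, alpha: float,
                        r: float, t: float) -> float:
    """Classical infinite line-source model at constant heat flux:
    ground temperature rise ΔT(r, t) = q_lin/(4*pi*lam) * E1(r^2/(4*alpha*t)).
    q_lin: W per metre of tunnel; lam: W/(m*K); alpha: m^2/s."""
    return q_lin / (4.0 * math.pi * lam) * exp_integral_e1(r * r / (4.0 * alpha * t))

# With an assumed undisturbed ground temperature of ~18 ℃, the fit
# reproduces the reported summer figure of 45 W/m^2.
print(round(daily_power_per_area(35.0 - 18.0), 3))  # → 44.795

# Assumed soil properties: lam = 1.8 W/(m*K), alpha = 7e-7 m^2/s, and an
# illustrative 20 W/m line load; the disturbance decays rapidly with radius.
t120 = 120 * 24 * 3600.0
for r in (1.0, 4.0, 8.0):
    print(r, round(line_source_delta_t(20.0, 1.8, 7.0e-7, r, t120), 3))
```

Under these assumed parameters the temperature rise at r = 8 m after 120 days is already small, which is qualitatively consistent with the ~8 m influence radius and the suggested 16 m spacing between adjacent energy tunnels.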
Objective: The medium-low maturity shale oil reservoirs of the Nenjiang Formation in the Songliao Basin, China, exhibit considerable development potential. In-situ conversion technology is considered the most promising method for the extraction of oil from such reservoirs. This conversion process involves multiple coupled pyrolysis reactions of organic matter in oil shale, necessitating a valid kinetic model capable of accurately predicting the multiple reactions in the pyrolysis process and the resulting product distribution. The establishment of such a model is critical for optimizing in-situ conversion efficiency and guiding the technological development of this extraction method. Methods: In this study, pyrolysis experiments were conducted at heating rates of 0.5, 1, 2, and 4 ℃/min to analyze the pyrolysis characteristics of oil shale from the Nenjiang Formation in the Songliao Basin. The derivative thermogravimetric (DTG) curves were deconvoluted to decouple the pyrolysis process into three distinct reactions: kerogen conversion, primary pyrolysis of bitumen, and secondary pyrolysis of bitumen. The weight loss characteristics of these reactions were obtained at various heating rates. The results showed that with the increase in the heating rate, the weight loss process of each reaction shifted toward higher temperatures. However, the relative shape and corresponding weight loss rate of each peak remained largely unchanged. To meet the in-situ conversion requirements, we classified the pyrolysis products into six categories and determined their distributions for each reaction, leading to the establishment of equations for product distribution. Kinetic analyses were subsequently performed based on the Starink method, Coats-Redfern method, and the kinetic compensation effect to determine the initial values and distribution ranges of kinetic parameters for each reaction. 
The Bayesian optimization method was then applied to iteratively refine these parameters, minimizing prediction errors and yielding the final kinetic parameters. Results: The resulting kinetic equations exhibited high predictive accuracy, with R² values exceeding 0.96 for each reaction. The calculated relative contributions of the three reactions to the overall pyrolysis process of organic matter reached 0.229, 0.509, and 0.262. Through the application of the corresponding weights to the kinetic equations and incorporation of the product distribution equations, a kinetic model was established for the pyrolysis of organic matter in oil shale from the Nenjiang Formation. The model demonstrated strong predictive accuracy at heating rates ranging from 0.5 to 4 ℃/min, achieving an overall correlation coefficient of R² > 0.9948. To further evaluate the model's applicability under the slow heating conditions relevant to in-situ conversion, we conducted temperature-programmed pyrolysis experiments on the oil shale from the Nenjiang Formation at 0.2 ℃/min through thermogravimetric analysis. The model demonstrated high prediction accuracy (R² > 0.994) for the pyrolysis process of organic matter at a heating rate of 0.2 ℃/min, substantially outperforming the global reaction kinetic model (prediction accuracy of 0.932). Model predictions for product distribution at this heating rate indicated that product yields increased with temperature, rising gradually between 260 ℃ and 330 ℃ and sharply between 350 ℃ and 430 ℃ before peaking at approximately 450 ℃. However, beyond 430 ℃, the increased presence of heteroatomic compounds and CO₂ suggested a potential decline in the economic efficiency of in-situ conversion.
Conclusions: The kinetic modeling method proposed in this work achieves higher prediction accuracy than the traditional method, and the developed kinetic model effectively predicts the pyrolysis process and product distribution of Nenjiang oil shale under varying heating conditions. This model offers crucial theoretical insights into the optimization and implementation of in-situ conversion technology, supporting the efficient exploitation of shale oil reservoirs in the Nenjiang Formation.
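As an illustration of the isoconversional step named in the Methods, the Starink relation ln(β/T^1.92) = C − 1.0008·E_a/(RT) can be fitted across heating rates at a fixed conversion to estimate the activation energy. The sketch below uses synthetic data: the assumed E_a = 200 kJ/mol and intercept C = 24 are not from the study, but are chosen so the implied heating rates fall near the experimental 0.5-4 ℃/min range.

```python
import numpy as np

R_GAS = 8.314  # universal gas constant, J/(mol*K)

def starink_activation_energy(betas, temps):
    """Starink isoconversional method. At a fixed conversion alpha,
        ln(beta / T^1.92) = C - 1.0008 * Ea / (R * T),
    so a linear fit of ln(beta/T^1.92) against 1/T across heating rates
    gives Ea = -slope * R / 1.0008, in J/mol.
    betas: heating rates (K/min); temps: temperatures (K) at which the
    chosen conversion is reached at each heating rate."""
    x = 1.0 / np.asarray(temps, float)
    y = np.log(np.asarray(betas, float) / np.asarray(temps, float) ** 1.92)
    slope, _ = np.polyfit(x, y, 1)
    return -slope * R_GAS / 1.0008

# Synthetic self-check: pick an assumed Ea and intercept C, compute the
# heating rate each temperature implies, then verify the fit recovers Ea.
EA_TRUE, C = 200e3, 24.0
temps = np.array([640.0, 655.0, 670.0, 685.0])  # K, illustrative
betas = temps ** 1.92 * np.exp(C - 1.0008 * EA_TRUE / (R_GAS * temps))
print(betas.round(2))  # implied heating rates, roughly 0.3-4 K/min
print(round(starink_activation_energy(betas, temps) / 1e3, 1))  # → 200.0
```

In practice this fit is repeated at several conversion levels, and the resulting E_a values seed the Coats-Redfern analysis and the Bayesian refinement described above; the synthetic round trip here only checks that the fitting step itself is consistent.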